2304.01430
Divided Attention: Unsupervised Multi-Object Discovery with Contextually Separated Slots
We introduce a method to segment the visual field into independently moving regions, trained with no ground truth or supervision. It consists of an adversarial conditional encoder-decoder architecture based on Slot Attention, modified to use the image as context to decode optical flow without attempting to reconstruct the image itself. In the resulting multi-modal representation, one modality (flow) feeds the encoder to produce separate latent codes (slots), whereas the other modality (image) conditions the decoder to generate the first (flow) from the slots. This design frees the representation from having to encode complex nuisance variability in the image due to, for instance, illumination and reflectance properties of the scene. Since customary autoencoding based on minimizing the reconstruction error does not preclude the entire flow from being encoded into a single slot, we modify the loss to an adversarial criterion based on Contextual Information Separation. The resulting min-max optimization fosters the separation of objects and their assignment to different attention slots, leading to Divided Attention, or DivA. DivA outperforms recent unsupervised multi-object motion segmentation methods while tripling run-time speed up to 104FPS and reducing the performance gap from supervised methods to 12% or less. DivA can handle different numbers of objects and different image sizes at training and test time, is invariant to permutation of object labels, and does not require explicit regularization.
Dong Lao, Zhengyang Hu, Francesco Locatello, Yanchao Yang, Stefano Soatto
2023-04-04T00:26:13Z
http://arxiv.org/abs/2304.01430v2
# Divided Attention: Unsupervised Multi-Object Discovery with Contextually Separated Slots

###### Abstract

We introduce a method to segment the visual field into independently moving regions, trained with no ground truth or supervision. It consists of an adversarial conditional encoder-decoder architecture based on Slot Attention, modified to use the image as context to decode optical flow without attempting to reconstruct the image itself. In the resulting multi-modal representation, one modality (flow) feeds the encoder to produce separate latent codes (slots), whereas the other modality (image) conditions the decoder to generate the first (flow) from the slots. This design frees the representation from having to encode complex nuisance variability in the image due to, for instance, illumination and reflectance properties of the scene. Since customary autoencoding based on minimizing the reconstruction error does not preclude the entire flow from being encoded into a single slot, we modify the loss to an adversarial criterion based on Contextual Information Separation. The resulting min-max optimization fosters the separation of objects and their assignment to different attention slots, leading to Divided Attention, or DivA. DivA outperforms recent unsupervised multi-object motion segmentation methods while tripling run-time speed up to 104FPS and reducing the performance gap from supervised methods to 12% or less. DivA can handle different numbers of objects and different image sizes at training and test time, is invariant to permutation of object labels, and does not require explicit regularization.

## 1 Introduction

The ability to segment the visual field by motion is so crucial to survival that we share it with our reptilian ancestors. A successful organism should not take too long to learn how to spot predators or obstacles, and surely should require no supervision. Yet, in the benchmarks we employ to evaluate multi-object segmentation, the best-performing algorithms are trained using large annotated datasets or simulation environments with ground truth. While supervision can certainly be beneficial, we take a minimalistic position and explore the extent to which an entirely unsupervised method can discover multiple objects in images.¹

¹ We use the term "multi-object discovery" as a synonym of unsupervised multi-object motion segmentation, since motion of the agent or the objects is key to detecting them in the first place.

To this end, we design a neural network architecture based on Slot Attention Networks (SANs) [26], and train it to segment multiple objects using only an image and its corresponding optical flow, with no other form of supervision. Before we delve into the details, we should clarify that "objects," in this paper, are regions of an image whose corresponding flow is unpredictable from their surroundings. Objects thus defined approximate the pre-image under optical projection of Gibson's "detached objects," which live in the three-dimensional ambient space. This definition is embodied in a variational principle that seeks to partition the image domain into regions that are as uninformative of each other as possible, known as Contextual Information Separation (CIS) [52, 51]. We choose Slot Attention Networks because they structurally organize latent activations into multiple components, or "slots." However, current SANs combine the slots to minimize the reconstruction error, which does not impose an explicit bias to separate objects into slots.
A single slot can reconstruct the entire flow, especially in more complex and realistic scenes where objects are not salient. We hypothesize that the adversarial CIS loss can foster _"divided attention"_ in SANs. However, CIS has been applied mostly to binary segmentation, and if an image has a dominant "foreground object" distinct from all others, arguably the discovery problem has already been solved by the individual who framed the image. Naively combining CIS and SANs leads to disappointing results, so modifications of both the architecture and the loss are needed to take these developments beyond the state of the art.

First, to disambiguate multiple partitions in the CIS loss, we modify it to incorporate _cross-modal generation_: we force the model to reconstruct the flow on the entire image domain, _but not the image itself_, since most of its complexity can be ascribed to nuisance variability. Second, we modify SANs to enable the image to _guide_ the flow reconstruction. This is done by incorporating a Conditional Prior Network (CPN) [53], which models the distribution of flows compatible with the given image, and modifying the architecture to use a _conditional cross-modal decoder_. We call the resulting method _Divided Attention_ (DivA): it consists of a multi-modal architecture where one modality (flow) goes into the main encoder that feeds the slots, and the other (image) goes into the decoder that generates the flow (Fig. 1), trained adversarially with a loss (2) that captures both cross-modal generation and contextual information separation.

These innovations allow us to improve performance on the benchmarks DAVIS and SegTrack by 5% and 7% respectively, compared to recent unsupervised methods, and to close the gap to supervised methods to 9.5% and 12% respectively, all while significantly improving inference speed: an embodiment of DivA that matches the current state of the art improves speed by 200% (21FPS to 64FPS), and our fastest embodiment reaches 104FPS. DivA can handle a variable number of objects, which can be changed at inference time, and is invariant to permutations of object labels. It does not need explicit regularization, as we demonstrate by using the mean-squared error (MSE) as the base loss.

### Related Work

**Unsupervised motion segmentation.** Optical flow [15, 36, 38, 40] estimates a dense motion field between frames. To segment it, one can solve a partial differential equation (PDE) [27], group sparse trajectories [30, 18, 19], or decompose it into layers [44, 47, 37, 23]. The number of segments is determined either from user input or through heuristics [39]. However, these methods require processing a video batch and/or solving an energy minimization at run time, which makes them impractical at scale. When using deep neural networks, the number of partitions that a generic segmentation network handles is often determined by the number of output channels [29, 55, 6], which matches object classes defined by the user. In the absence of pre-defined object classes, many motion segmentation methods [41, 17, 35] perform binary partition into "foreground" and "background," sometimes referred to as motion saliency [16, 12, 24, 1]. This assumes that the data contain a dominant moving object. Bootstrapping objects from motion [51, 7] is also related to multi-region segmentation, but again extending binary schemes to multi-region is non-trivial, as there is neither a notion of foreground and background nor an ordering of the regions.
Some methods revisit layered models by utilizing supervised training on synthetic data [48] or replacing variational optimization with neural networks [54]. Like older layered decomposition approaches, both require longer video batches. In DivA, we simply use image-flow pairs as model input, which makes it more widely applicable and more efficient at inference time. Some unsupervised moving object segmentation methods [45, 32] use self-supervised pre-trained image features (e.g. DINO [4]) from ImageNet [9]. Other unsupervised moving object detection methods [2, 3] rely on motion segmentation [8] that leverages supervised image features from MS-COCO [25]. While we are not against any form of pre-training, these methods are tangential to ours. The goal of our work is to explore the limits of fully unsupervised object discovery, given the primal nature of the task. Therefore, we choose to rely on self- and cross-modal consistency as learning criteria without any external pre-training.

Figure 1: **Divided Attention Overview.** Unlike traditional autoencoders that process the input (here the flow \(u\)) through an encoder \(f_{w}\), and then reconstruct it through a decoder \(g\), DivA uses a cross-modal conditional decoder (\(g_{w}\) in green) that takes a second modality as input (here the image \(I\)) in order to reconstruct the first from the encoding "slots" \(x_{i}\) (light blue). The image is used as a condition to guide the flow decoder, using a conditional prior network. To ensure that the individual slots encode separate objects, we incorporate an adversarial decoder (\(g_{\theta}\) in grey) that tries to reconstruct the entire flow with each slot. Training is done by optimizing a min-max criterion whereby the model tries to reconstruct the input flow within each object mask, while fooling the adversarial decoder outside. This design enforces information separation between attention slots, leading to Divided Attention. The conditional decoder is expanded in Fig. 2.

**Contextual Information Separation (CIS)** [52] frames unsupervised motion segmentation as an information separation task. Assuming independence between motions as the defining trait of "objects," the method segments the optical flow field into two regions that are mutually uninformative. Although still a binary partitioning, this formulation removes the notion of foreground and background in motion segmentation. DivA further extends CIS by introducing a reconstruction term, which generalizes CIS to an arbitrary number of regions.

**Slot Attention Networks (SANs)** infer a set of latent variables, each representing an object in the image. So-called "object-centric learning" methods [11, 14, 22, 13, 33, 34] aim to discover generative factors that correspond to parts or objects in the scene, but require solving an iterative optimization at test time; SANs process the data in a feed-forward pass by leveraging the attention mechanism [43] that allocates latent variables to a collection of permutation-invariant slots. SANs are trained to minimize the reconstruction loss, with no explicit mechanism enforcing separation of objects. When objects have distinct features, the low-capacity bottleneck is sufficient to separate objects into slots. However, in realistic images, more explicit biases are needed. To that end, [20] resorts to external cues such as bounding boxes, while [10] employs Lidar, which limits applicability. DivA modifies the SAN architecture by using a cross-modal conditional decoder, along the lines of [53].
By combining information separation, architectural separation, and conditional generation, DivA fosters a better partition of the slots into objects. DivA is most closely related to [49], which applies a SAN for iterative binding to the flow. DivA has two key advantages: first, [49] is limited to binary segmentation and relies on learned slot initializations, making slots no longer invariant to permutation, which results in poor generalization (Fig. 5). Second, [49] only uses optical flow, while DivA employs a conditional cross-modal decoder incorporating image information, which guides the segmentation mask to increased accuracy. Another recent method [28] learns motion segmentation by fitting the flow to multiple pre-defined motion patterns (_e.g._, affine, quadratic) using Expectation-Maximization (EM). However, due to the rigidity of the network architecture, one needs to adjust the number of output channels and re-train when changing the number of objects. Note that, due to the same architectural constraint, [50], which employs a variant of CIS, is subject to the same limitations.

## 2 Method

In this section, we describe the main components of our approach: the SAN architecture in Sect. 2.1, its extension to using a conditional decoder in Sect. 2.2, and the adversarial inference criterion, derived from Contextual Information Separation, in Sect. 2.3. DivA ingests a color (RGB) image \(I\in\mathbb{R}^{\mathrm{H}\times\mathrm{W}\times 3}\) with \(\mathrm{H}\times\mathrm{W}\) pixels and its associated optical flow \(u\in\mathbb{R}^{\mathrm{H}\times\mathrm{W}\times 2}\), defined on the same lattice (image domain) and encoded as an RGB image using a color-wheel. DivA outputs a collection of \(n\) binary masks \(m_{i},\;i=1,\ldots,n\) and the reconstructed flow \(\hat{u}\).

### 2.1 Preliminaries: Slot Attention Autoencoder

The DivA architecture comprises a collection of latent codes \(X_{n}=\{x_{1},\ldots,x_{n}\}\), with each \(x_{i}\in\mathbb{R}^{1\times K}\) representing a "slot." The encoder \(f_{w}(u)=X_{n}\) is the same as a Slot Attention Network (SAN), described in detail in [26] and summarized here. SANs are trained as autoencoders: an image \(I\) is passed through a CNN backbone with an appended positional embedding, but instead of having a single latent vector, a SAN uses a collection of them, \(\{x_{i}\,|\,i=1,\cdots,n\}\), called _slots_, in the bottleneck, where \(n\) may change anytime during training or testing. Slots are initially sampled from a Gaussian with learned mean and standard deviation, without conditioning on additional variables such as the slot ID. This affords changing the number of slots without re-training, and yields invariance to permutation of the ordering of the slots. The slots are updated iteratively using dot-product attention normalized over the slots, which fosters competition among them. The result is passed through a Gated Recurrent Unit (GRU) with a multi-layer perceptron (MLP) to yield the update residual for the slots. All parameters are shared among slots to preserve permutation symmetry. Each slot is then decoded independently with a spatial broadcast decoder [46] \(g\), producing slot reconstructions \(\hat{I}_{i}\) and segmentation masks \(m_{i}\). The final image reconstruction is \(\hat{I}=\sum_{i=1}^{n}\hat{I}_{i}\odot m_{i}\), where \(\odot\) denotes element-wise multiplication. The SAN is trained by minimizing a reconstruction loss (typically MSE) between \(I\) and \(\hat{I}\).
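To make the update above concrete, here is a minimal PyTorch sketch of a slot-attention encoder step; the layer sizes, names, and normalization details are illustrative assumptions rather than the authors' released code:

```python
import torch
import torch.nn as nn

class SlotAttentionSketch(nn.Module):
    """Minimal sketch of the iterative slot update of [26] (dims illustrative)."""
    def __init__(self, dim=48, n_iter=3):
        super().__init__()
        self.n_iter = n_iter
        self.scale = dim ** -0.5
        # Gaussian slot initialization with learned mean/std, shared by all slots:
        # no per-slot parameters, so slot count can change and labels permute freely.
        self.slot_mu = nn.Parameter(torch.zeros(1, 1, dim))
        self.slot_logsigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_q, self.to_k, self.to_v = (nn.Linear(dim, dim) for _ in range(3))
        self.gru = nn.GRUCell(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm_in, self.norm_slots = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, feats, n_slots=4):
        # feats: (B, N, dim) flattened CNN features of the flow (+ positional embedding)
        B, N, D = feats.shape
        feats = self.norm_in(feats)
        k, v = self.to_k(feats), self.to_v(feats)
        slots = self.slot_mu + self.slot_logsigma.exp() * torch.randn(B, n_slots, D, device=feats.device)
        for _ in range(self.n_iter):
            q = self.to_q(self.norm_slots(slots))
            attn = torch.einsum('bnd,bkd->bnk', k, q) * self.scale
            attn = attn.softmax(dim=-1)                      # normalized over slots: competition
            attn = attn / attn.sum(dim=1, keepdim=True).clamp(min=1e-8)
            updates = torch.einsum('bnk,bnd->bkd', attn, v)  # attention-weighted input features
            slots = self.gru(updates.reshape(-1, D), slots.reshape(-1, D)).view(B, n_slots, D)
            slots = slots + self.mlp(slots)                  # shared residual MLP
        return slots
```

Because the initialization is a shared Gaussian and all parameters are shared across slots, `n_slots` can be changed at call time without retraining, which is the property DivA relies on.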
### 2.2 Cross-modal Conditional Slot Decoder

Experimental evidence shows that slots can learn representations of independent simple objects in synthetic images. However, naive use of a SAN to jointly auto-encode real-world images and flow leads to poor reconstruction. Since the combined input is complex and the slots are low-dimensional, slots tend to either approximate the entire input or segment the image in a grid pattern that ignores the objects. Both lead to poor separation of objects via the learned slots. For these reasons, we choose _not_ to use the image as an input to be reconstructed, but as _context_ to condition the reconstruction of a simpler modality, the one least affected by complex nuisance variability: flow, in our case.

The conditional decoder \(g_{w}\) maps each latent code \(x_{i}\) _and_ the image \(I\) onto a reconstructed flow \(\hat{u}_{i}=g_{u}(x_{i},I)\) and a mask \(m_{i}=g_{m}(x_{i},I)\). With an abuse of notation, we write \(g_{w}\) or \((g_{u},g_{m})\) depending on whether we emphasize its dependency on the weights \(w\) or on the components that generate the flow \(u\) and the mask \(m\) respectively. This decoder performs cross-modal transfer, since the image is used as a prior for generating the flow. This is akin to a Conditional Prior Network [53], but instead of reconstructing the entire flow, we reconstruct individual flow components (modes) corresponding to objects in the scene, indicated by the decoded masks. The reconstructed flow is then obtained as \(\hat{u}=\sum_{i=1}^{n}\hat{u}_{i}\odot m_{i}\). In the next section, we will also use an _adversarial conditional decoder_ \(g_{\theta}\) to generate \(\tilde{u}_{i}=g_{\theta}(x_{i},I)\), which attempts to reconstruct the entire flow _from each individual slot_ \(x_{i}\), which in turn encourages the separation between different slots.

### 2.3 Adversarial Learning with Contextual Information Separation

We now describe the separation criterion, which we derive from CIS [51], to divide slots into objects. Each image \(I\) and flow \(u\) in the training set contribute a term in the loss detailed below:

\[\ell(u,I)=\left\|u-\sum_{i=1}^{n}\hat{u}_{i}\odot m_{i}\right\|-\frac{\lambda}{n}\sum_{i=1}^{n}\left\|(1-m_{i})\odot\left(u-\tilde{u}_{i}\right)\right\| \tag{1}\]

The first term penalizes the reconstruction error of the cross-modal autoencoder combining all the slots, i.e., we want the reconstructed flow to be as good as possible. The second term combines the reconstruction errors of the adversarial decoder using a single slot at a time, and maximizes its error outside the object mask. Note that, in the second term, the adversarial decoder \(g_{\theta}\) tries to approximate the entire flow \(u\) with a single slot via \(\tilde{u}_{i}=g_{\theta}(x_{i},I)\), which is equivalent to maximizing the contextual information separation between different slots. Also note that we use the mean-squared reconstruction error (MSE) \(d(u,v)=\|u-v\|_{2}\) for simplicity, but one can replace MSE with any other distance or discrepancy measure, such as empirical cross-entropy, without altering the core method. The overall loss, averaged over training samples in a dataset \(D\), is then minimized with respect to the parameters \(w\) of the encoder \(f_{w}\) and conditional decoder \(g_{w}\), and maximized with respect to the parameters \(\theta\) of the adversarial decoder \(g_{\theta}\):

\[\min_{w}\max_{\theta}L(w,\theta)=\sum_{(u_{j},I_{j})\in D}\ell(u_{j},I_{j}) \tag{2}\]
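Read as code, the per-sample loss (1) can be sketched as follows; a minimal PyTorch sketch assuming the conditional decoder returns per-slot flows and masks and the adversarial decoder returns per-slot full-flow reconstructions (tensor layouts and names are our assumptions, not the paper's code):

```python
import torch

def diva_loss(u, u_hat, masks, u_tilde, lam=0.03):
    """Sketch of Eq. (1); all shape conventions are illustrative.
    u:       (B, 3, H, W)    input flow, color-wheel encoded
    u_hat:   (B, n, 3, H, W) per-slot flows from the conditional decoder g_u
    masks:   (B, n, 1, H, W) masks from g_m, assumed to sum to 1 over slots
    u_tilde: (B, n, 3, H, W) full-flow reconstructions from the adversarial decoder g_theta
    """
    n = masks.shape[1]
    # first term: error of the mask-weighted combination of all slot reconstructions
    recon = (u_hat * masks).sum(dim=1)
    loss_recon = (u - recon).pow(2).mean()
    # second term: adversarial decoder's error outside each mask, to be maximized over w
    outside = ((1 - masks) * (u.unsqueeze(1) - u_tilde).pow(2)).mean(dim=(2, 3, 4))
    return loss_recon - lam * outside.sum(dim=1).mean() / n
```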
Compared to CIS, our method minimizes mutual information _between different slots_, where the data is encoded, rather than directly _between different regions_, which eliminates degenerate slots thanks to the reconstruction objective. The resulting design is a natural extension of CIS to non-binary partitions, and also leads to increased versatility: DivA can be trained with a certain number of slots, on images of a certain size, and used at inference time with a different number of slots, on images of a different size. Note that the DivA loss does not need explicit regularization, although one can add it if so desired.

## 3 Experiments

**Encoder.** The encoder \(f\) consists of 4 convolutional layers with kernel size \(5\times 5\), consistent with the original implementation of [26]. With padding, the spatial dimension of the feature map is the same as the network input. Since optical flow is simpler to encode than RGB images, we choose \(K=48\) instead of the 64 used by the original SAN, resulting in a narrower bottleneck. A learned 48-dimensional positional embedding is then added to the feature map. We tried a Fourier positional embedding and the network showed similar behavior. We keep the iterative update of slots the same as in the original SAN, and fix the number of slot iterations to 3.

**Conditional decoder and adversarial decoder.** The architecture of the conditional decoder \(g_{w}\), shown in Fig. 2, consists of two parts: an image encoder and a flow decoder. We use 5 convolutional layers with filter sizes {5,3,3,3,3} in the image encoder. As with \(f\), padding keeps the size of the feature maps the same as \(I\). We limit the capacity of this encoder by setting the output channel dimension of each layer to 24, to avoid overfitting the flow to the image and ignoring information from the slots. The flow decoder takes one \(1\times 48\) slot vector \(x_{i}\) as input. It first broadcasts \(x_{i}\) spatially to \(h\times w\times 48\), and adds it to a learned positional embedding. Note that this positional embedding is different from the one in \(f\). The broadcasted slot then passes through 6 convolutional layers. The feature map at each layer is concatenated with the image feature at the corresponding level. The last convolutional layer outputs an \(h\times w\times 4\) field, where the first 3 channels reconstruct the optical flow and the 4th channel outputs a segmentation mask. The adversarial decoder shares the same architecture, except for the last layer, which outputs 3 channels aiming to reconstruct the flow on the entire image domain.

Figure 2: **Architecture of the conditional decoder.**

**Training.** We implement implicit differentiation during training, as proposed by [5], for stability. We aim to keep the training pipeline simple, and all models are trained on a single Nvidia 1080Ti GPU with PyTorch. We apply alternating optimization to \(w\) and \(\theta\) following Eq. (2). In each iteration, we first fix \(g_{\theta}\) and update \(w\), then use _torch.detach()_ to stop gradient computation on \(x_{i}\) before updating \(\theta\). This makes sure that only \(\theta\) is updated when training the adversarial decoder. We train with batch size 32 in all experiments and apply the ADAM optimizer with an initial learning rate of \(8e^{-4}\) and a decay schedule for both \(w\) and \(\theta\). We notice that the model is not sensitive to the choice of learning rate.
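The alternating min-max update described above could then look like the following sketch (module and optimizer names are assumptions for illustration; `diva_loss` is the sketch from Sect. 2.3):

```python
import torch

def train_step(flow, image, encoder, cond_decoder, adv_decoder, opt_w, opt_theta, lam=0.03):
    """One alternating step of min_w max_theta L(w, theta); illustrative only."""
    # (1) fix g_theta, update w: minimize reconstruction while fooling the adversary
    slots = encoder(flow)
    u_hat, masks = cond_decoder(slots, image)
    u_tilde = adv_decoder(slots, image)
    loss = diva_loss(flow, u_hat, masks, u_tilde, lam)
    opt_w.zero_grad(); loss.backward(); opt_w.step()

    # (2) detach the slots so only theta is updated, then train the adversarial
    # decoder to reconstruct the entire flow from every single slot
    slots = encoder(flow).detach()
    u_tilde = adv_decoder(slots, image)
    adv_err = (flow.unsqueeze(1) - u_tilde).pow(2).mean()
    opt_theta.zero_grad(); adv_err.backward(); opt_theta.step()
    return float(loss.detach())
```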
Further details on training parameters and data normalization are provided in the Supplementary Material.

### 3.1 Diagnostic data and ablation

We reproduce the ideal conditions of the experiments reported in [52] to understand the effect of our conditional decoder on the adversarial training scheme. Instead of using binary segmentation masks, we generate flow-image pairs that contain \(n=2,3,4\) regions with corresponding statistically independent motion patterns. We adopt object masks from the DAVIS dataset and paste them onto complex background images, so that the network cannot overfit to image appearance for reconstruction and segmentation. During training, we fix the number of slots to 4, and the network is unaware of the number of objects.

We evaluate our models on 300 validation samples and measure performance by two criteria: _bootstrapping IoU (bIoU)_ and _successful partition counts (SPC)_. We match each ground-truth object mask to the most likely segment (the one with the highest IoU), and then compute bIoU by averaging this IoU across all objects in the dataset. This metric measures how successfully each object is bootstrapped by motion. However, bIoU does not penalize falsely bootstrapped blobs. In addition, SPC counts the number of cases where the number of objects in the segmentation output matches the ground-truth number of objects. As the architecture itself is not aware of the number of objects in each flow-image pair during testing, multiple slots may map to the same object. We call this phenomenon slot confusion. A higher SPC is achieved when information is separated among slots, reducing confusion.

Table 1 summarizes the results on these diagnostic data. Our conditional decoder allows exploiting photometric information, in addition to motion, improving bIoU; adversarial learning fosters better separation among slots, reducing slot confusion and improving SPC.

| | SAN | \(\lambda=0\) | \(\lambda=0.01\) | \(\lambda=0.03\) | \(\lambda=0.05\) |
|---|---|---|---|---|---|
| bIoU | 51.18 | 82.93 | **84.49** | 82.33 | 79.56 |
| SPC | 120 | 133 | 161 | **182** | 174 |

Table 1: **Results on diagnostic data.** Conditional decoding significantly improves bootstrapping IoU (bIoU), indicating better awareness of object shape; adversarial training significantly improves successful partition count (SPC), indicating better information separation in slots.

Figure 3: **Multi-region moving object discovery on real data.** Our method discovers moving objects in videos without any human annotation or supervision. The slots are randomly initialized, so as to be permutable. We visualize them in a particular order for ease of visualization.

To better understand the role of adversarial training, in Fig. 4 we display the scatter between reconstruction error and segmentation mask entropy \(=-\sum_{i=1}^{4}m_{i}\log(m_{i})\) during training. A smaller entropy corresponds to a more certain mask output, indicating that the decoder relies more on single slots to reconstruct the optical flow of each particular object. At each level of reconstruction error, the larger \(\lambda\), the smaller the mask entropy. This phenomenon is expected since, with information separation, only one slot contains information about the flow of each object, validating our adversarial training. Note that entropy regularization is applied in existing unsupervised segmentation methods (_e.g.,_ [49]), and our adversarial loss provides a potential alternative to entropy regularization.
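For reference, the bIoU matching used in this section can be sketched as follows (a hypothetical helper, not the authors' evaluation script):

```python
import numpy as np

def bootstrapping_iou(gt_masks, pred_masks):
    """Match each ground-truth object to the predicted segment with highest IoU
    and average the result over objects (sketch of the bIoU metric).
    gt_masks, pred_masks: lists of (H, W) boolean arrays."""
    ious = []
    for gt in gt_masks:
        best = 0.0
        for pred in pred_masks:
            union = np.logical_or(gt, pred).sum()
            inter = np.logical_and(gt, pred).sum()
            best = max(best, inter / union if union > 0 else 0.0)
        ious.append(best)
    # SPC would additionally count samples where len(pred_masks) == len(gt_masks)
    return float(np.mean(ious))
```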
### 3.2 Real-world data and benchmark

We first introduce the baseline methods we compare with. **CIS** [52] uses a conventional binary segmentation network architecture and is trained using the contextual information separation principle. The method employs an auxiliary flow inpainting network and enforces information separation by minimizing mutual inpainting accuracy between the regions. **MoSeg** [49] uses a variant of SAN that is tuned for iterative binding to the flow field. The method performs binary segmentation by fixing the number of slots to 2, and learns slot initializations instead of using random Gaussian samples. The method is trained with a reconstruction loss together with a temporal consistency loss on videos. **EM** [28] uses a conventional segmentation network. During training, the method pre-defines a class of target motion patterns (_e.g.,_ affine), and updates the predicted segmentation mask and estimated motion pattern using EM.

We select these methods as paragons since they all 1) use single frames (instantaneous motion) without exploiting long-range cross-frame data association, and 2) do not rely on the presence of a dominant "foreground object," even though that may improve performance in artificial benchmarks given the bias of human-framed photographs and videos. Our method is designed to be efficient and flexible, leading to 1), and not biased towards purposefully framed videos, leading to 2).

**Datasets.** We test our method on DAVIS2016 [31], SegTrackV2 [42] and FBMS-59 [30]. **DAVIS2016** consists of 50 image sequences ranging from 25 to 100 frames, along with high-quality per-pixel segmentation masks. Each video contains one primary moving object that is annotated. We perform training and validation using the 480P resolution. **SegTrackV2** has 14 video sequences with a total of 947 frames with per-frame annotation. Annotated objects in the videos have apparent motion relative to the background. The **FBMS-59** dataset contains 59 videos ranging from 19 to 800 frames. In the test set, 69 objects are labeled in the 30 videos. Following the baseline methods, we test our method for binary segmentation on DAVIS2016 and SegTrackV2 by setting \(n=2\). On FBMS-59, we make use of the per-instance annotations offered by the dataset and test multi-object bootstrapping.

**Training and testing.** We apply RAFT [40] for optical flow estimation, following [49, 28]. For an image sequence \(\{I_{t}\}\), RAFT computes optical flow from \(I_{t}\) to \(I_{t+\delta t}\), where \(\delta t\) is randomly sampled from -2 to 2. As with the baselines, optical flow is computed on original-resolution images, then downsampled together with the reference image before being fed to the network. During training, we warm up our model on the DAVIS2016 dataset, setting \(n=4\).
| Method | Resolution | Multi | DAVIS | ST | FPS |
|---|---|---|---|---|---|
| _Unsupervised_ | | | | | |
| CIS [52] (4) | \(128\times 224\) | N | 59.2 | 45.6 | 10 |
| CIS (4)+CRF | \(128\times 224\) | N | 71.5 | 62.0 | 0.09 |
| MoSeg [49] | \(128\times 224\) | N | 65.6 | - | 78 |
| MoSeg (4) | \(128\times 224\) | N | 68.3 | 58.6 | 20 |
| EM* [28] | \(128\times 224\) | Y* | 69.3 | 60.4 | 21 |
| DivA | \(128\times 128\) | Y | 68.6 | 60.3 | **104** |
| DivA | \(128\times 224\) | Y | 70.8 | 60.1 | 64 |
| DivA-Recursive | \(128\times 224\) | Y | 71.0 | 60.9 | 66 |
| DivA (4) | \(128\times 224\) | Y | **72.4** | **64.6** | 16 |
| _With Additional Supervision_ | | | | | |
| FSEG [17] | Full | N | 70.7 | 61.4 | - |
| OCLR [48] (30) | \(128\times 224\) | Y | 72.1 | 67.6 | - |
| ARP [21] | Full | N | 76.2 | 57.2 | 0.015 |
| DyStab [51] | Full | N | 80.0 | 73.2 | - |

Table 2: **Moving object segmentation on real-world data.** Although our model is not specifically designed for binary segmentation, we still achieve state-of-the-art accuracy on the DAVIS16 and SegTrackV2 datasets with better inference speed. Multi: generalizes to multiple regions. EM*: trained unsupervised on synthetic data, and needs to alter the network architecture and retrain when changing the number of regions.

Figure 4: **Adversarial loss is an implicit mask entropy regularizer.** At each level of reconstruction error, the higher the \(\lambda\) we apply, the smaller the entropy we get in the segmentation output. With the adversarial loss, the model successfully predicts the number of independent motion patterns on more samples.

During this warm-up stage, we set \(\lambda=0\) so that the model can learn reconstruction in the initial stage without the interference of the adversarial decoder. After the warm-up, we train the model on each particular dataset, with \(n=2\) on DAVIS2016 and SegTrackV2, and \(n=4\) on FBMS-59. Referring to the empirical evidence on the diagnostic dataset, we set \(\lambda=0.03\), and decrease it to \(\lambda=0.01\) towards the end of training. We use a batch size of 32 and set the spatial resolution of the model input to \(128\times 128\) when \(n=4\) and \(128\times 224\) when \(n=2\), dictated by the GPU's memory constraint. Details about hyper-parameters and data normalization are in the supplementary material.

At test time, we keep \(n=2\) on DAVIS2016 and SegTrackV2, and vary \(n\) on FBMS-59. On binary segmentation, since the model is trained without the notion of foreground and background, we follow the baselines and match the ground truth with the most likely segmentation mask, computing intersection-over-union (IoU) to measure segmentation accuracy. We upsample the segmentation output to the dataset's original resolution for evaluation. Unlike baseline methods that upsample low-resolution segmentation masks by interpolation, our generative model reconstructs the flow from each slot, so we can refine the segmentation output by upsampling the \(\hat{u}_{i}\)'s to the full resolution, then refining the segmentation boundaries by \(\operatorname*{arg\,min}_{i}\|u-\hat{u}_{i}\|_{2}\). This practice creates negligible computational overhead (0.00015s) and empirically improves segmentation accuracy. On multi-object bootstrapping, we follow the bootstrapping IoU defined in Sect. 3.1 and evaluate the accuracy per instance. Note that this is different from the baseline methods, which merge all instances into a single foreground for evaluation.
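The refinement rule just described admits a direct reading in code; a sketch under our shape assumptions (per-slot flows upsampled, then each full-resolution pixel assigned to the slot with the smallest flow residual):

```python
import torch
import torch.nn.functional as F

def refine_masks(u_full, u_hat_slots):
    """Sketch of boundary refinement via argmin_i ||u - u_hat_i||_2.
    u_full:      (B, 3, H, W)    flow at the dataset's original resolution
    u_hat_slots: (B, n, 3, h, w) low-resolution per-slot flow reconstructions
    Returns (B, H, W) integer slot labels."""
    B, n, C, h, w = u_hat_slots.shape
    H, W = u_full.shape[-2:]
    up = F.interpolate(u_hat_slots.reshape(B * n, C, h, w), size=(H, W),
                       mode='bilinear', align_corners=False).view(B, n, C, H, W)
    err = (u_full.unsqueeze(1) - up).pow(2).sum(dim=2)  # (B, n, H, W)
    return err.argmin(dim=1)
```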
**Results on DAVIS2016 and SegTrackV2** are summarized in Table 2. Our best result outperforms the best-performing baseline, EM, by 4.5% and 6.9%, respectively. We measure the run time of a single forward pass of the model. Our fastest setting, using \(128\times 128\) spatial resolution, reaches 104 frames per second. Note that in addition to running on consecutive frames, CIS and MoSeg merge segmentation results on optical flow computed from the reference frame to 4 adjacent temporal frames (\(\delta t\) = -2, -1, 1 and 2), marked by "(4)" in the table. The same protocol improves our mIoU on the two datasets by 1.4 and 3.7, respectively. Compared to models using additional supervision, our best performance exceeds FSEG [17] (supervised) and OCLR [48] (supervised, trained on synthetic data), and closes the gap to the best-performing method DyStab [51] (which uses supervised image features) to 12%.

Inspired by [10], we also test DivA by recursively inheriting slots from previous frames instead of randomly initializing them, reducing the number of slot iterations from 3 to 1. Both segmentation accuracy and runtime speed improve marginally. The base model randomly assigns labels to the foreground and background; with recursion, label-flipping is reduced. Unlike MoSeg and DyStab, which employ explicit regularization mechanisms, our method exhibits temporal consistency without any modification to the training pipeline.

Furthermore, we compare the model pipelines of MoSeg and DivA, both of which use slot attention. MoSeg only takes flow as input. Without the conditional decoder, due to the highly compressed bottleneck of each slot, the network relies on the decoder's capacity to reconstruct fine structures of the input flow. This mechanism forces the decoder to memorize shape information. Together with learned slot initializations, the network is subject to overfitting to the data seen during training. Fig. 5 shows an example, where simply flipping the input image leads to performance degradation. Conditioning the decoder on the image frees it from memorizing training data to be reconstructed, thus making the model less vulnerable to overfitting.

Figure 5: **Conditional decoder is less vulnerable to overfitting.** Reconstructing complex flow input from compressed slots forces the decoder to overfit to seen data. Simply flipping the input flow drastically decreases segmentation accuracy. By introducing the conditional decoder, DivA is less vulnerable to overfitting.

**Results on FBMS-59.** We demonstrate DivA's performance on multi-object discovery using FBMS-59.² We train with \(n=4\), aiming to extract _multiple_ independently moving objects from a scene. At test time, we vary \(n\) from 3 to 6 without re-training the model, and report bootstrapping IoU. We test the model on optical flow computed with \(\delta t=1\) and \(\delta t=2\), and keep the spatial resolution at \(128\times 128\) in both training and testing.

² CIS, MoSeg and EM are trained for binary segmentation and thus not amenable to comparison on this task.

We use the original SAN as the baseline, and also include results combining SAN with the conditional decoder, as in Sect. 3.1. Table 3 summarizes the results. Similar to Sect. 3.1, the conditional decoder improves segmentation accuracy substantially, and the adversarial training further improves accuracy by 3.9% in the best-performing case.
We notice that \(n=4\) is a sweet spot for the dataset, due to the dataset's characteristic of having around 3 moving objects in most sequences. Fig. 3 shows qualitative examples. For many of the objects segmented, we observe a change in the level of granularity of the segmentation mask when varying \(n\). Two examples are given in Fig. 6. Although, depending on the annotation, both cases may be considered as over-segmenting the object in quantitative evaluations and may degrade IoU, we argue that such behavior is desirable. As our model can vary the number of slots without re-training, this reveals DivA's potential for developing adaptive, interactive, or hierarchical segmentation schemes in future work.

## 4 Discussion

DivA is a multi-object discovery model trained to perform contextual information separation and cross-modal validation, with no manual supervision. The architecture is based on Slot Attention Networks, but modified to comprise a cross-modal decoder, turning a simple autoencoder into a conditional encoder-decoder, and an adversarial decoder. The loss function uses the adversarial decoder to enforce Contextual Information Separation, not directly in the data, but in the latent space of the slot encodings. The overall system enjoys certain properties not observed in current methods:

_First_, it does not require specifying the number of objects, and allows for their upper bound to change between training and test time. _Second_, the labels are permutation invariant, so DivA can be used in a recursive fashion to reduce label flipping. We can use slots from previous frames as initialization for incoming frames, and only run one slot iteration. Other methods for doing so [10] require manual input and modification of the training pipeline, or require consistency heuristics to ensure temporal continuity [51, 49]. _Third_, the model can be trained on images of a certain resolution, and then used on images of different resolutions at inference time. We have tested training on \(128\times 128\) images to spare GPU memory and testing on \(128\times 224\) for accuracy; vice versa, one can train on larger images and run at reduced resolution for speed. _Fourth_, DivA does not require sophisticated initialization: it is trained on randomly initialized slots and random image-flow pairs, and tested recursively using slots from the previous frame to initialize those for the new frame. It is trained with 4 slots, and tested with 3, 4, 5 and 6 slots, capturing most of the objects in common benchmarks. _Fifth_, DivA is fast. All our models are trained on a single GPU, and perform at multiples of frame rate at test time. _Finally_, since DivA is a generative model, it can be used to generate segmentation masks at high resolution, by upsampling the decoded slots to the original resolution.

Some failure cases are shown in Fig. 7. While we have tested DivA on a handful of objects in benchmark datasets, scaling to hundreds or more slots is yet untested. There are many possible variants of the general architecture of Fig. 1, including using Transformer models (e.g. [33, 34]) in lieu of the conditional prior network. With more powerful multi-modal decoders, the role of motion may become diminished, but the question of how the large model is trained (currently with aligned visual-textual pairs) remains.
Since our goal is to understand the problem of object discovery _ab ovo_, we keep our models minimalistic so they can be trained efficiently, even if they do not incorporate the rich semantics of natural language or human-generated annotations. We can envision several extensions of DivA. For example, even though the amphibian visual system requires objects to move in order to spot them, primate vision can easily detect and describe objects in static images; thus, a bootstrapping strategy can be built from DivA. Moreover, varying the number of slots changes the level of granularity of the slots (Fig. 6), which leads to the natural question of how to extend DivA to hierarchical partitions.

Figure 6: **Varying the number of slots changes the granularity of segmentation.** All the above results are obtained by only varying the number of slots at inference time, without re-training. This gives users additional control over the granularity of segmentation.

Figure 7: **Failure modes.** DivA is trained using the CIS principle, which assumes objects move independently. Consequently, two objects moving in the same way may share high mutual information: even though the car and the bus are different detached objects, if their motions are highly correlated in the current video, they are seen as one by DivA (of course, as time goes by, their motions will diverge, allowing DivA to correctly separate them).
2305.19156
An explicit central element of $\mathcal{U}_q(\mathfrak{so}_5)$ and its corresponding quantum Hamiltonian
A previous paper of the author developed a general method for producing explicit central elements of quantized Lie algebras using Lusztig's inner product. This method had previously been applied for the type $C_2$, $D_3$ and $D_4$ Lie algebras. The current paper repeats the calculation for the type $B_2$ Lie algebra, which is actually isomorphic to the $C_2$ Lie algebra. The explicit expression for the corresponding quantum Hamiltonian is computed.
Jeffrey Kuan
2023-05-27T18:23:28Z
http://arxiv.org/abs/2305.19156v1
# An explicit central element of \(\mathcal{U}_{q}(\mathfrak{so}_{5})\) and its corresponding quantum Hamiltonian

###### Abstract

A previous paper of the author developed a general method for producing explicit central elements of quantized Lie algebras using Lusztig's inner product. This method had previously been applied for the type \(C_{2},D_{3}\) and \(D_{4}\) Lie algebras. The current paper repeats the calculation for the type \(B_{2}\) Lie algebra, which is actually isomorphic to the \(C_{2}\) Lie algebra. The explicit expression for the corresponding quantum Hamiltonian is computed.

Accessibility Statement: An accessible version of this PDF, which meets the guidelines of WCAG Level 2.1AA, can be found on the first author's webpage at this link: [https://www.math.tamu.edu/~jkuan/CentralElementB2_Accessible.pdf](https://www.math.tamu.edu/~jkuan/CentralElementB2_Accessible.pdf)

## 1 Introduction

In mathematical physics, quantum Hamiltonians can be constructed from central elements of quantizations of Lie algebras. In order to compute the entries of the Hamiltonian, these central elements need to be explicitly written in terms of the generators. To this end, the author proved a formula in [17] to produce explicit expressions for central elements, using Lusztig's inner product. The method had found explicit central elements for the Lie algebras with Dynkin diagrams \(C_{2}\) and \(D_{3},D_{4}\) [17, 18] (for the type \(A\) Lie algebras, the central element had already been found in [19] and used in [17], but their method does not appear to apply here). In the present paper, we carry out the calculation for \(B_{2}\). Note that the \(B_{2}\) and \(C_{2}\) Lie algebras are isomorphic, but whereas the \(C_{2}\) calculation uses the four-dimensional representation (\(\mathfrak{sp}_{4}\)), this paper uses the five-dimensional representation (\(\mathfrak{so}_{5}\)). Indeed, by the Harish-Chandra isomorphism, the center of the \(B_{2}\cong C_{2}\) Lie algebra is generated by two elements. These two elements can be taken to be the central element of [17] and the one here. See also [17] for an analog of the explicit central elements generating the center in the case of the \(A_{n}\) Lie algebras.

In the calculation done in this paper, we observe that not every element of the dual basis needs to be calculated, and that the automorphisms of the Lie algebra streamline some of the computations. This will slightly reduce computational work in the future.

From the central element, the co-product of the quantized Lie algebra produces a quantum Hamiltonian, whose matrix entries we also provide. If all the matrix entries of the quantum Hamiltonian are non-negative, then a ground state conjugation will define a Markov process, which can be interpreted as an interacting particle system. This idea goes back to [10] and [10]. More recently it was generalized in [11] (see also [11]). If some of the entries are negative, it is still sometimes possible to remove such states, as in [12] or [13].
## 2 Background

### Description of the type \(B_{2}\cong C_{2}\) Lie algebra

Define the symplectic Lie algebra \(\mathfrak{sp}_{4}\) to be the Lie algebra consisting of \(4\times 4\) matrices of the form (where \(A,B,C,D\) are \(2\times 2\) blocks)

\[\left\{\left(\begin{array}{cc}A&B\\ C&D\end{array}\right):A=-D^{T},B=B^{T},C=C^{T}\right\}\]

Using the notation that \(E_{ij}\) is the matrix with a \(1\) in the \((i,j)\)-entry and \(0\) everywhere else, set

\[e_{1}=E_{12}-E_{43},\quad f_{1}=E_{21}-E_{34},\quad h_{1}=E_{11}-E_{22}-E_{33}+E_{44}\]
\[e_{2}=E_{24},\quad f_{2}=E_{42},\quad h_{2}=E_{22}-E_{44}\]

Then \(\mathfrak{sp}_{4}\) has a basis \(e_{1},e_{2},e_{1}e_{2},e_{2}e_{1},e_{1}e_{2}e_{1},f_{1},f_{2},f_{1}f_{2},f_{2}f_{1},f_{1}f_{2}f_{1},h_{1},h_{2}\). Each triple \(e_{i},f_{i},h_{i}\) generates a copy of \(\mathfrak{sl}_{2}\), and the two copies are related by

\[[h_{1},e_{2}]=-2e_{2},\quad[h_{1},f_{2}]=2f_{2},\quad[2h_{2},e_{1}]=-2e_{1},\quad[2h_{2},f_{1}]=2f_{1},\]

where we have written \(2h_{2}\) to preserve the symmetry. As usual, these relations can be summarized by describing the roots of \(\mathfrak{sp}_{4}\). Let \(\mathfrak{h}\) denote the linear span of \(h_{1},h_{2}\) and identify \(\mathfrak{h}\cong\mathbb{R}^{2}\) via \(h_{1}\mapsto x_{1},2h_{2}\mapsto x_{2}\). Define \(\alpha_{1}(x_{1})=2,\alpha_{1}(x_{2})=-2\) and \(\alpha_{2}(x_{1})=-2,\alpha_{2}(x_{2})=4\), so that the above relations read

\[[x_{i},e_{j}]=\alpha_{j}(x_{i})e_{j}\quad[x_{i},f_{j}]=-\alpha_{j}(x_{i})f_{j}\]

If \(\mathfrak{h}^{*}\) is identified with \(\mathbb{R}^{2}\), then \(\alpha_{1}=(1,-1),\alpha_{2}=(0,2)\), so that \(\alpha_{j}(x_{i})=(\alpha_{i},\alpha_{j})\) where \((\cdot,\cdot)\) is the usual Euclidean inner product. Let \(V\) be the natural four-dimensional representation of \(\mathfrak{sp}_{4}\). Using the identification of \(\mathfrak{h}^{*}\) with \(\mathbb{R}^{2}\), one can check that \(V\) has a basis \(v_{1},v_{2},v_{4},v_{3}\) lying in the weight spaces \(V[(1,0)],V[(0,1)],V[(0,-1)],V[(-1,0)]\) respectively.

### A five-dimensional representation

Let \(V\) be the four-dimensional fundamental representation of \(\mathfrak{sp}_{4}\). The exterior power \(V\wedge V\) is a representation of dimension \(\binom{4}{2}=6\). The action of \(\mathfrak{sp}_{4}\) preserves the subspace \(\mathbb{C}v\) where \(v=v_{1}\wedge v_{3}-v_{2}\wedge v_{4}\), and therefore preserves the five-dimensional quotient space \(W:=(V\wedge V)/\mathbb{C}v\). Define the basis \(\{w_{1},\ldots,w_{5}\}\) by

\[w_{1}=v_{1}\wedge v_{2},\quad w_{2}=v_{1}\wedge v_{4},\quad w_{3}=v_{1}\wedge v_{3}+v_{2}\wedge v_{4},\quad w_{4}=v_{2}\wedge v_{3},\quad w_{5}=v_{4}\wedge v_{3}\]

Then (again using the identification of \(\mathfrak{h}^{*}\) with \(\mathbb{R}^{2}\))

\[W[(1,1)]=\mathbb{C}w_{1},\quad W[(1,-1)]=\mathbb{C}w_{2},\quad W[(0,0)]=\mathbb{C}w_{3},\quad W[(-1,1)]=\mathbb{C}w_{4},\quad W[(-1,-1)]=\mathbb{C}w_{5}\]

The representation of \(\mathfrak{sp}_{4}\) on \(W\) actually defines an isomorphism \(\mathfrak{sp}_{4}\cong\mathfrak{so}_{5}\), which can also be seen from the Dynkin diagram.
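The relations above are straightforward to verify by machine; here is a small SymPy sketch checking them for the matrices just defined (1-indexed matrix units, written for illustration):

```python
import sympy as sp

def E(i, j, n=4):
    """Matrix unit E_ij (1-indexed) as an n x n SymPy matrix."""
    m = sp.zeros(n, n)
    m[i - 1, j - 1] = 1
    return m

e1 = E(1, 2) - E(4, 3); f1 = E(2, 1) - E(3, 4); h1 = E(1, 1) - E(2, 2) - E(3, 3) + E(4, 4)
e2 = E(2, 4);           f2 = E(4, 2);           h2 = E(2, 2) - E(4, 4)

comm = lambda a, b: a * b - b * a

# each triple (e_i, f_i, h_i) generates a copy of sl_2 ...
assert comm(e1, f1) == h1 and comm(e2, f2) == h2
# ... and the two copies are related as stated
assert comm(h1, e2) == -2 * e2 and comm(h1, f2) == 2 * f2
assert comm(2 * h2, e1) == -2 * e1 and comm(2 * h2, f1) == 2 * f1
```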
### Quantum group

The quantization of a finite-dimensional simple Lie algebra \(\mathfrak{g}\) depends on its Dynkin diagram. Rather than defining \(\mathcal{U}_{q}(\mathfrak{g})\) in full generality, here is the definition of \(\mathcal{U}_{q}(\mathfrak{sp}_{4})\). It is generated by \(\{e_{i},f_{i},k_{i}\},i=1,2\) with the Weyl relations

\[[e_{i},f_{j}]=\delta_{ij}\frac{k_{i}-k_{i}^{-1}}{q_{i}-q_{i}^{-1}},\quad[k_{i},k_{j}]=0\]
\[k_{i}e_{j}=q^{(\alpha_{i},\alpha_{j})}e_{j}k_{i},\quad k_{i}f_{j}=q^{-(\alpha_{i},\alpha_{j})}f_{j}k_{i},\quad 1\leq i,j\leq 2\]

(where \(q_{1}=q,q_{2}=q^{2}\)) and the Serre relations

\[e_{2}^{2}e_{1}-(q^{2}+q^{-2})e_{2}e_{1}e_{2}+e_{1}e_{2}^{2}=0\]
\[e_{1}^{3}e_{2}-(q^{2}+1+q^{-2})e_{1}^{2}e_{2}e_{1}+(q^{2}+1+q^{-2})e_{1}e_{2}e_{1}^{2}-e_{2}e_{1}^{3}=0\]
\[f_{2}^{2}f_{1}-(q^{2}+q^{-2})f_{2}f_{1}f_{2}+f_{1}f_{2}^{2}=0\]
\[f_{1}^{3}f_{2}-(q^{2}+1+q^{-2})f_{1}^{2}f_{2}f_{1}+(q^{2}+1+q^{-2})f_{1}f_{2}f_{1}^{2}-f_{2}f_{1}^{3}=0\]

The co-product is

\[\Delta(e_{i})=e_{i}\otimes 1+k_{i}\otimes e_{i},\quad\Delta(f_{i})=1\otimes f_{i}+f_{i}\otimes k_{i}^{-1},\quad\Delta(k_{i})=k_{i}\otimes k_{i}.\]

### Central Element Construction

Here, we explain how to construct a central element of \(\mathcal{U}_{q}(\mathfrak{g})\). Letting \(\mathfrak{b}_{\pm}\subset\mathfrak{g}\) denote the Borel subalgebras (that is, \(\mathfrak{b}_{+}\) is generated by \(e_{i},h_{i}\) and \(\mathfrak{b}_{-}\) is generated by \(f_{i},h_{i}\)), there is a bilinear pairing (see Proposition 6.12 of [J]) on \(\mathcal{U}_{q}(\mathfrak{b}_{-})\times\mathcal{U}_{q}(\mathfrak{b}_{+})\) defined on generators by

\[\langle k_{\alpha},k_{\beta}\rangle=q^{-(\alpha,\beta)_{\mathfrak{g}}},\quad\langle f_{i},e_{j}\rangle=\frac{-\delta_{ij}}{q_{i}-q_{i}^{-1}},\quad\langle k_{i},e_{j}\rangle=\langle f_{i},k_{j}\rangle=\langle 1,e_{i}\rangle=\langle f_{j},1\rangle=0,\quad\langle 1,1\rangle=1\]

and extended to all of \(\mathcal{U}_{q}(\mathfrak{b}_{-})\times\mathcal{U}_{q}(\mathfrak{b}_{+})\) by

\[\langle y,xx^{\prime}\rangle=\langle\Delta(y),x^{\prime}\otimes x\rangle,\quad\langle yy^{\prime},x\rangle=\langle y\otimes y^{\prime},\Delta(x)\rangle,\quad\langle y\otimes y^{\prime},x\otimes x^{\prime}\rangle=\langle y,x\rangle\langle y^{\prime},x^{\prime}\rangle, \tag{1}\]

where \((\cdot,\cdot)_{\mathfrak{g}}\) is the non-degenerate, invariant symmetric bilinear form on \(\mathfrak{h}^{*}\).
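As a small worked example of the extension rules (1), not taken from the paper but a routine check of the conventions: since \(\Delta(e_{1}e_{2})=e_{1}e_{2}\otimes 1+e_{1}k_{2}\otimes e_{2}+k_{1}e_{2}\otimes e_{1}+k_{1}k_{2}\otimes e_{1}e_{2}\), only one term survives the pairing with \(f_{2}\otimes f_{1}\):

\[\langle f_{2}f_{1},e_{1}e_{2}\rangle=\langle f_{2}\otimes f_{1},\Delta(e_{1}e_{2})\rangle=\langle f_{2},k_{1}e_{2}\rangle\langle f_{1},e_{1}\rangle,\]

and \(\langle f_{2},k_{1}e_{2}\rangle=\langle\Delta(f_{2}),e_{2}\otimes k_{1}\rangle=\langle f_{2},e_{2}\rangle\langle k_{2}^{-1},k_{1}\rangle=\frac{-q^{-2}}{q^{2}-q^{-2}}\), so that

\[\langle f_{2}f_{1},e_{1}e_{2}\rangle=\frac{q^{-2}}{(q^{2}-q^{-2})(q-q^{-1})}.\]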
Furthermore, according to Lemma 6.16 of [J],

\[\langle\omega(x),\omega(y)\rangle=\langle y,x\rangle=\langle\tau(y),\tau(x)\rangle \tag{2}\]

where \(\omega\) is the automorphism and \(\tau\) is the anti-automorphism defined by

\[\omega(e_{i})=f_{i},\quad\omega(f_{i})=e_{i},\quad\omega(k_{i})=k_{i}^{-1}\]
\[\tau(e_{i})=e_{i},\quad\tau(f_{i})=f_{i},\quad\tau(k_{i})=k_{i}^{-1}\]

Choose an ordering \(\geq\) on the weight space. Given \(\nu\geq 0\), let \(r(\nu)\) be the dimension of \(U[\nu]\). Let \(\{u_{\nu}^{i}\}_{1\leq i\leq r(\nu)}\) be an arbitrary basis of \(U[\nu]\), and let \(\{v_{\nu}^{i}\}_{1\leq i\leq r(\nu)}\) be the dual basis of \(U[-\nu]\) under \(\langle\cdot,\cdot\rangle\). Let \(V\) be a fundamental representation of \(\mathfrak{g}\) and let \(\rho\) be half the sum of the positive roots of \(\mathfrak{g}\), where a root \(\alpha\) is positive if \(\alpha>0\). Explicitly, \(\rho=(2,1)\) for \(\mathfrak{sp}_{4}\). Let \(\{v_{\lambda}\}\) be a basis of \(V\) with each \(v_{\lambda}\in V[\lambda]\), and let \(\{f_{\lambda}\}\) be the dual basis. The next lemma is Lemma 2.1 of [Kua16], which constructs a central element of \(\mathcal{U}_{q}(\mathfrak{g})\). Recall that the root lattice of \(\mathfrak{g}\) is the lattice in \(\mathfrak{h}^{*}\) spanned by the roots of \(\mathfrak{g}\). When \(\mathfrak{g}=\mathfrak{sp}_{4}\), the root lattice can be explicitly written as \(\{(x_{1},x_{2})\in\mathbb{Z}^{2}:x_{1}+x_{2}\in 2\mathbb{Z}\}\).

**Lemma 1**.: _If \(q\) is not a root of unity and \(2\mu\) is in the root lattice of \(\mathfrak{g}\) for all weights \(\mu\) of \(V\), then the element_

\[\sum_{\mu\geq\lambda}\sum_{i,j=1}^{r(\mu-\lambda)}q^{(\mu-\lambda,\mu)}q^{-(2\rho,\mu)}f_{\lambda}(v_{\mu-\lambda}^{j}u_{\mu-\lambda}^{i}v_{\lambda})v_{\mu-\lambda}^{j}k_{-\lambda-\mu}u_{\mu-\lambda}^{i} \tag{3}\]

_is central in \(\mathcal{U}_{q}(\mathfrak{g})\), where the sum is taken over all \(\mu,\lambda\) such that \(V[\mu]\) and \(V[\lambda]\) are nonzero._

## 3 Main Result and Proof

**Theorem 1**.: _The element \(C\) defined by_

\[q^{-2}\left(q-q^{-1}\right)^{2}\left((1-q^{2})f_{1}f_{2}f_{1}f_{2}+(q^{4}-q^{-2})f_{2}f_{1}^{2}f_{2}+(1-q^{2})f_{2}f_{1}f_{2}f_{1}+(1-q^{2})f_{1}f_{2}^{2}f_{1}\right)k_{(0,0)}\\ \times\left((1-q^{2})e_{1}e_{2}e_{1}e_{2}+(q^{4}-q^{-2})e_{2}e_{1}^{2}e_{2}+(1-q^{2})e_{2}e_{1}e_{2}e_{1}+(1-q^{2})e_{1}e_{2}^{2}e_{1}\right)\\ +\left(q-q^{-1}\right)^{4}f_{1}f_{1}k_{(0,0)}e_{1}e_{1}\\ +\left(q-q^{-1}\right)^{2}\left((1+q^{2})f_{1}f_{2}f_{1}-f_{2}f_{1}f_{1}-q^{2}f_{1}f_{1}f_{2}\right)k_{(0,2)}\left((1+q^{2})e_{1}e_{2}e_{1}-e_{1}e_{1}e_{2}-q^{2}e_{2}e_{1}e_{1}\right)\\ +\left(q-q^{-1}\right)^{2}\left(q+q^{-1}\right)\left(f_{1}f_{2}q^{2}-f_{2}f_{1}\right)k_{(1,1)}\left(e_{2}e_{1}q^{2}-e_{1}e_{2}\right)\\ +\left(q^{2}-q^{-2}\right)^{2}q^{4}f_{2}k_{(2,0)}e_{2}\\ +q^{-4}\left(q-q^{-1}\right)^{2}\left((1+q^{2})f_{1}f_{2}f_{1}-f_{1}f_{1}f_{2}-q^{2}f_{2}f_{1}f_{1}\right)k_{(0,-2)}\left((1+q^{2})e_{1}e_{2}e_{1}-e_{2}e_{1}e_{1}-q^{2}e_{1}e_{1}e_{2}\right)\\ +q^{-4}\left(q-q^{-1}\right)^{2}\left(q+q^{-1}\right)\left(q^{2}f_{2}f_{1}-f_{1}f_{2}\right)k_{(-1,-1)}\left(q^{2}e_{1}e_{2}-e_{2}e_{1}\right)+q^{-4}\left(q^{2}-q^{-2}\right)^{2}f_{2}k_{(-2,0)}e_{2}\\ +\left(q-q^{-1}\right)^{2}\left(q+q^{-1}\right)f_{1}k_{(-1,1)}e_{1}+\left(q-q^{-1}\right)^{2}\left(q+q^{-1}\right)f_{1}k_{(1,-1)}e_{1}\\ +q^{6}k_{(2,2)}+q^{-6}k_{(-2,-2)}+q^{2}k_{(2,-2)}+q^{-2}k_{(-2,2)}+k_{(0,0)}\]

_is central._

**Corollary 1**.: _The action of_

\[\left(\frac{1}{q^{5}}-\frac{1}{q^{3}}-q^{3}+q^{5}\right)^{-1}\left(C-\left(1+\frac{1}{q^{10}}+\frac{1}{q^{6}}+q^{6}+q^{10}\right)\mathrm{Id}\right)\]

_on the tensor product of two \(4\)-dimensional representations is given by an explicit \(16\times 16\) matrix (omitted here)._

### Proof

We wish to apply Lemma 1. The first step is to find the dual elements. Most of them were found in Lemma 2.2 of [11]. There is an additional dual element that needs to be found, due to our use of the five-dimensional representation rather than the four-dimensional representation. The remaining one is stated in the following lemma.

**Lemma 2**.: _1. The set \(\{e_{2}^{2}e_{1},e_{1}e_{2}^{2}\}\) is a basis of \(U[(1,3)]\), and the dual of \(e_{2}^{2}e_{1}\) is_

\[(q^{-2}-q^{2})(q^{2}+q^{-2})^{-1}(e_{2}e_{1})^{*}f_{2}.\]

_2. The (ordered) set \(\mathcal{B}=\{e_{1}e_{2}e_{1}e_{2},e_{2}e_{1}^{2}e_{2},e_{1}e_{2}^{2}e_{1},e_{2}e_{1}e_{2}e_{1}\}\) is a basis of \(U[(2,2)]\)._
_Its dual basis is the (ordered) set_

\[\mathcal{B}^{*}=\{-(e_{1}e_{1}e_{2})^{*}f_{2}+f_{2}(e_{1}e_{1}e_{2})^{*},\;q^{-2}(e_{1}e_{1}e_{2})^{*}f_{2}-q^{2}f_{2}(e_{1}e_{1}e_{2})^{*},\\ (q+q^{-1})^{-1}((e_{2}e_{2}e_{1})^{*}f_{1}-q^{2}f_{1}(e_{2}e_{2}e_{1})^{*}),\;-f_{2}(e_{2}e_{1}e_{1})^{*}+(e_{2}e_{1}e_{1})^{*}f_{2}\},\]

_where_

\[f_{2}(e_{2}e_{1}e_{1})^{*}=\frac{q-q^{-1}}{q+q^{-1}}\left((q^{2}-q^{4})f_{2}f_{1}f_{2}f_{1}-f_{2}f_{1}^{2}f_{2}+q^{2}f_{1}f_{2}^{2}f_{1}\right)\]
\[(e_{2}e_{1}e_{1})^{*}f_{2}=\frac{q-q^{-1}}{q+q^{-1}}\left((1-q^{-2})f_{1}f_{2}f_{1}f_{2}+f_{1}f_{2}^{2}f_{1}-q^{2}f_{2}f_{1}^{2}f_{2}\right)\]

_and, applying \(\tau\),_

\[f_{2}(e_{1}e_{1}e_{2})^{*}=\frac{q-q^{-1}}{q+q^{-1}}\left((1-q^{-2})f_{2}f_{1}f_{2}f_{1}+f_{1}f_{2}^{2}f_{1}-q^{2}f_{2}f_{1}^{2}f_{2}\right)\]
\[(e_{1}e_{1}e_{2})^{*}f_{2}=\frac{q-q^{-1}}{q+q^{-1}}\left((q^{2}-q^{4})f_{1}f_{2}f_{1}f_{2}-f_{2}f_{1}^{2}f_{2}+q^{2}f_{1}f_{2}^{2}f_{1}\right)\]

Proof.: 1. By Serre's relations, \(U[(1,3)]\) is two-dimensional (rather than three-dimensional). It is immediate that

\[\langle(e_{2}e_{1})^{*}f_{2},e_{1}e_{2}^{2}\rangle=\langle f_{2}(e_{1}e_{2})^{*},e_{2}^{2}e_{1}\rangle=0,\]

because \(e_{2}e_{1}\) cannot appear in the left tensor factor of \(\Delta(e_{1}e_{2}e_{2})\). One then needs only compute that

\[\begin{aligned}\langle(e_{2}e_{1})^{*}f_{2},e_{2}^{2}e_{1}\rangle&=\langle(e_{2}e_{1})^{*}\otimes f_{2},\Delta(e_{2}^{2}e_{1})\rangle\\&=\langle(e_{2}e_{1})^{*}\otimes f_{2},e_{2}k_{2}e_{1}\otimes e_{2}+k_{2}e_{2}e_{1}\otimes e_{2}\rangle\\&=(1+q^{4})\langle(e_{2}e_{1})^{*}\otimes f_{2},e_{2}k_{2}e_{1}\otimes e_{2}\rangle\\&=(1+q^{4})q^{-2}\langle(e_{2}e_{1})^{*}\otimes f_{2},e_{2}e_{1}k_{2}\otimes e_{2}\rangle\\&=(q^{2}+q^{-2})(q^{-2}-q^{2})^{-1}.\end{aligned}\]

2. By Serre's relations, \(U[(2,2)]\) is four-dimensional (rather than six-dimensional). Finding the first two elements amounts to showing that

\[\begin{aligned}\langle-(e_{1}e_{1}e_{2})^{*}f_{2}+f_{2}(e_{1}e_{1}e_{2})^{*},e_{1}e_{2}e_{1}e_{2}\rangle&=1\\ \langle-(e_{1}e_{1}e_{2})^{*}f_{2}+f_{2}(e_{1}e_{1}e_{2})^{*},e_{2}e_{1}^{2}e_{2}\rangle&=0\\ \langle-(e_{1}e_{1}e_{2})^{*}f_{2}+f_{2}(e_{1}e_{1}e_{2})^{*},e_{1}e_{2}^{2}e_{1}\rangle&=0\\ \langle-(e_{1}e_{1}e_{2})^{*}f_{2}+f_{2}(e_{1}e_{1}e_{2})^{*},e_{2}e_{1}e_{2}e_{1}\rangle&=0\\ \langle q^{-2}(e_{1}e_{1}e_{2})^{*}f_{2}-q^{2}f_{2}(e_{1}e_{1}e_{2})^{*},e_{1}e_{2}e_{1}e_{2}\rangle&=0\\ \langle q^{-2}(e_{1}e_{1}e_{2})^{*}f_{2}-q^{2}f_{2}(e_{1}e_{1}e_{2})^{*},e_{2}e_{1}^{2}e_{2}\rangle&=1\\ \langle q^{-2}(e_{1}e_{1}e_{2})^{*}f_{2}-q^{2}f_{2}(e_{1}e_{1}e_{2})^{*},e_{1}e_{2}^{2}e_{1}\rangle&=0\\ \langle q^{-2}(e_{1}e_{1}e_{2})^{*}f_{2}-q^{2}f_{2}(e_{1}e_{1}e_{2})^{*},e_{2}e_{1}e_{2}e_{1}\rangle&=0\end{aligned}\]

Observe that the third, fourth, seventh, and eighth lines are immediate, since

\[\langle(e_{1}e_{1}e_{2})^{*},e_{1}e_{2}e_{1}\rangle=\langle(e_{1}e_{1}e_{2})^{*},e_{2}e_{1}e_{1}\rangle=0.\]

For example,

\[\langle(e_{1}e_{1}e_{2})^{*}f_{2},e_{2}e_{1}e_{2}e_{1}\rangle=\langle(e_{1}e_{1}e_{2})^{*}\otimes f_{2},\Delta\left(e_{2}e_{1}e_{2}e_{1}\right)\rangle\]

equals zero because the term \(e_{1}e_{1}e_{2}\) cannot appear in \(\Delta\left(e_{2}e_{1}e_{2}e_{1}\right)\).
This leaves four pairings to compute, which are \[\langle(e_{1}e_{1}e_{2})^{*}f_{2},e_{1}e_{2}e_{1}e_{2}\rangle =\langle(e_{1}e_{1}e_{2})^{*}\otimes f_{2},\Delta(e_{1}e_{2}e_{1}e_{2})\rangle\] \[=\langle(e_{1}e_{1}e_{2})^{*}\otimes f_{2},e_{1}k_{2}e_{1}e_{2}\otimes e_{2}\rangle\] \[=q^{2}\langle(e_{1}e_{1}e_{2})^{*},(e_{1}e_{1}e_{2})k_{2}\rangle(q^{-2}-q^{2})^{-1}\] \[=q^{2}(q^{-2}-q^{2})^{-1}\] and \[\langle f_{2}(e_{1}e_{1}e_{2})^{*},e_{1}e_{2}e_{1}e_{2}\rangle =\langle f_{2}\otimes(e_{1}e_{1}e_{2})^{*},\Delta(e_{1}e_{2}e_{1}e_{2})\rangle\] \[=\langle f_{2}\otimes(e_{1}e_{1}e_{2})^{*},k_{1}e_{2}k_{1}k_{2}\otimes e_{1}e_{1}e_{2}\rangle\] \[=\langle f_{2},k_{1}e_{2}k_{1}k_{2}\rangle\] \[=q^{-2}\langle f_{2},e_{2}k_{1}k_{1}k_{2}\rangle\] \[=q^{-2}(q^{-2}-q^{2})^{-1}\] and \[\langle(e_{1}e_{1}e_{2})^{*}f_{2},e_{2}e_{1}^{2}e_{2}\rangle =\langle(e_{1}e_{1}e_{2})^{*}\otimes f_{2},\Delta(e_{2}e_{1}^{2}e_{2})\rangle\] \[=\langle(e_{1}e_{1}e_{2})^{*}\otimes f_{2},k_{2}e_{1}^{2}e_{2}\otimes e_{2}\rangle\] \[=\langle(e_{1}e_{1}e_{2})^{*},(e_{1}e_{1}e_{2})k_{2}\rangle(q^{-2}-q^{2})^{-1}\] \[=(q^{-2}-q^{2})^{-1}\] \[\langle f_{2}(e_{1}e_{1}e_{2})^{*},e_{2}e_{1}^{2}e_{2}\rangle =\langle f_{2}\otimes(e_{1}e_{1}e_{2})^{*},\Delta(e_{2}e_{1}^{2}e_{2})\rangle\] \[=\langle f_{2}\otimes(e_{1}e_{1}e_{2})^{*},e_{2}k_{1}k_{1}k_{2}\otimes e_{1}e_{1}e_{2}\rangle\] \[=\langle f_{2},e_{2}k_{1}k_{1}k_{2}\rangle\] \[=(q^{-2}-q^{2})^{-1}.\] Finding the fourth element is easy by applying \(\tau\) to the first four pairings above. One gets \[\langle-f_{2}(e_{2}e_{1}e_{1})^{*}+(e_{2}e_{1}e_{1})^{*}f_{2},e_{2}e_{1}e_{2}e_{1}\rangle =1\] \[\langle-f_{2}(e_{2}e_{1}e_{1})^{*}+(e_{2}e_{1}e_{1})^{*}f_{2},e_{2}e_{1}^{2}e_{2}\rangle =0\] \[\langle-f_{2}(e_{2}e_{1}e_{1})^{*}+(e_{2}e_{1}e_{1})^{*}f_{2},e_{1}e_{2}^{2}e_{1}\rangle =0\] \[\langle-f_{2}(e_{2}e_{1}e_{1})^{*}+(e_{2}e_{1}e_{1})^{*}f_{2},e_{1}e_{2}e_{1}e_{2}\rangle =0\] Now, finding the third element amounts to showing that \[\langle(e_{2}e_{2}e_{1})^{*}f_{1}-q^{2}f_{1}(e_{2}e_{2}e_{1})^{*},e_{2}e_{1}e_{2}e_{1}\rangle =0\] \[\langle(e_{2}e_{2}e_{1})^{*}f_{1}-q^{2}f_{1}(e_{2}e_{2}e_{1})^{*},e_{1}e_{2}^{2}e_{1}\rangle =q+q^{-1}\] \[\langle(e_{2}e_{2}e_{1})^{*}f_{1}-q^{2}f_{1}(e_{2}e_{2}e_{1})^{*},e_{2}e_{1}^{2}e_{2}\rangle =0\] \[\langle(e_{2}e_{2}e_{1})^{*}f_{1}-q^{2}f_{1}(e_{2}e_{2}e_{1})^{*},e_{1}e_{2}e_{1}e_{2}\rangle =0\] The third and fourth lines follow from reasoning similar to the above. This leaves two pairings to compute, which are \[\langle(e_{2}e_{2}e_{1})^{*}f_{1},e_{1}e_{2}^{2}e_{1}\rangle =\langle(e_{2}e_{2}e_{1})^{*}\otimes f_{1},\Delta(e_{1}e_{2}^{2}e_{1})\rangle\] \[=\langle(e_{2}e_{2}e_{1})^{*}\otimes f_{1},k_{1}e_{2}^{2}e_{1}\otimes e_{1}\rangle\] \[=q^{-2}\langle(e_{2}e_{2}e_{1})^{*},(e_{2}e_{2}e_{1})k_{1}\rangle(q^{-1}-q)^{-1}\] \[=q^{-2}(q^{-1}-q)^{-1}\] and \[\langle f_{1}(e_{2}e_{2}e_{1})^{*},e_{1}e_{2}^{2}e_{1}\rangle =\langle f_{1}\otimes(e_{2}e_{2}e_{1})^{*},\Delta(e_{1}e_{2}^{2}e_{1})\rangle\] \[=\langle f_{1}\otimes(e_{2}e_{2}e_{1})^{*},e_{1}k_{2}k_{2}k_{1}\otimes e_{2}e_{2}e_{1}\rangle\] \[=\langle f_{1},e_{1}k_{2}k_{2}k_{1}\rangle\] \[=(q^{-1}-q)^{-1}.\] This finishes the proof of the lemma. The next step is to write the representation matrices. 
**Lemma 3**.: \[f_{2}=\left(\begin{array}{ccccc}0&0&0&0&0\\ 1&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&1&0\end{array}\right),\quad f_{1}=\left(\begin{array}{ccccc}0&0&0&0&0\\ 0&0&0&0&0\\ 0&1&0&0&0\\ 0&0&q+q^{-1}&0&0\\ 0&0&0&0&0\end{array}\right),\quad e_{2}=\left(\begin{array}{ccccc}0&1&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&1\\ 0&0&0&0&0\end{array}\right),\] \[e_{1}=\left(\begin{array}{ccccc}0&0&0&0&0\\ 0&0&q+q^{-1}&0&0\\ 0&0&0&1&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{array}\right),\quad k_{2}=\left(\begin{array}{ccccc}q^{2}&0&0&0&0\\ 0&q^{-2}&0&0&0\\ 0&0&1&0&0\\ 0&0&0&q^{2}&0\\ 0&0&0&0&q^{-2}\end{array}\right),\quad k_{1}=\left(\begin{array}{ccccc}1&0&0&0&0\\ 0&q^{2}&0&0&0\\ 0&0&1&0&0\\ 0&0&0&q^{-2}&0\\ 0&0&0&0&1\end{array}\right)\] Proof.: By using the five-dimensional representation and extending to the quantum group, the result follows. One can also check the relations directly. Applying the previous two lemmas together with Lemma 1 computes every term in the central element except for the first one. Because the terms \(v_{3},v_{4}\) were not explicitly found, the term is \[q^{-2}\left(q-q^{-1}\right)^{2}\left((1-q^{2})f_{1}f_{2}f_{1}f_{2}+(q^{4}-q^{-2})f_{2}f_{1}^{2}f_{2}+(1-q^{2})f_{2}f_{1}f_{2}f_{1}+(1-q^{2})f_{1}f_{2}^{2}f_{1}\right)k_{(0,0)}\\ \times\left((1-q^{2})e_{1}e_{2}e_{1}e_{2}+(q^{4}-q^{-2})e_{2}e_{1}^{2}e_{2}+Ae_{2}e_{1}e_{2}e_{1}+Be_{1}e_{2}^{2}e_{1}\right)\] for some \(A,B\). Since the bases were arbitrary, by using (2) with the automorphism \(\omega\), it can be seen that \(A=B=(1-q^{2})\). Another way to find \(A\) and \(B\) is to use the following explicit representation. One can check that the action on \(\mathbb{C}^{4}\otimes\mathbb{C}^{4}\) is given by the \(16\times 16\) matrices \[e_{1} =E_{12}+E_{34}+E_{67}+E_{10,12}-E_{59}-E_{8,11}-E_{13,14}-E_{15,16}\] \[+qE_{13}+q^{-1}E_{24}+qE_{58}-qE_{6,10}-q^{-1}E_{7,12}+q^{-1}E_{9,11}-qE_{13,15}-q^{-1}E_{14,16}\] \[e_{2} =E_{25}+E_{48}+E_{7,13}+E_{12,15}+E_{3,6}+q^{2}E_{47}+q^{-2}E_{8,13}+E_{11,14}\] \[f_{1} =E_{31}+E_{42}+E_{85}-E_{10,6}-E_{12,7}+E_{11,9}-E_{15,13}-E_{16,14}\] \[+q^{-1}E_{21}+qE_{43}+q^{-1}E_{76}+qE_{12,10}-q^{-1}E_{95}-qE_{11,8}-q^{-1}E_{14,13}-qE_{16,15}\] \[f_{2} =E_{63}+E_{74}+E_{13,8}+E_{14,11}+E_{52}+q^{-2}E_{84}+q^{2}E_{13,7}+E_{15,12}\] \[k_{(a,b)} =q^{2a}E_{11}+q^{a+b}(E_{22}+E_{33})+q^{2b}E_{44}+q^{a-b}(E_{55}+E_{66})+E_{77}+E_{88}+E_{99}+E_{10,10}\] \[+q^{b-a}(E_{11,11}+E_{12,12})+q^{-2b}E_{13,13}+q^{-b-a}(E_{14,14}+E_{15,15})+q^{-2a}E_{16,16}\] and that the action of the element commutes with the other generators only when \(A=B=1-q^{2}\). This action merely provides a way to double-check that the answer is correct. Note that one could also check that the action on the four- and five-dimensional fundamental representations is a multiple of the identity, but this would not determine the values of \(A\) and \(B\). Finally, the corollary follows by explicitly calculating the action of \(C\) on \(\mathbb{C}^{4}\otimes\mathbb{C}^{4}\), using the above representation.
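As an independent sanity check (not part of the original argument), the matrices of Lemma 3 can be verified symbolically. The following Python sketch assumes sympy and the standard conventions \(q_{1}=q\), \(q_{2}=q^{2}\) for the short and long simple roots of \(\mathfrak{sp}_{4}\); all helper names are ours.

```python
# Sketch: verify the defining relations for the five-dimensional
# representation of U_q(sp_4) given in Lemma 3, using exact arithmetic in q.
import sympy as sp

q = sp.symbols('q', positive=True)
Z = sp.zeros(5, 5)

def mat(entries):
    """Build a 5x5 matrix from {(row, col): value}, 1-indexed."""
    M = sp.zeros(5, 5)
    for (i, j), v in entries.items():
        M[i - 1, j - 1] = v
    return M

e1 = mat({(2, 3): q + 1/q, (3, 4): 1})
f1 = mat({(3, 2): 1, (4, 3): q + 1/q})
e2 = mat({(1, 2): 1, (4, 5): 1})
f2 = mat({(2, 1): 1, (5, 4): 1})
k1 = sp.diag(1, q**2, 1, q**-2, 1)
k2 = sp.diag(q**2, q**-2, 1, q**2, q**-2)

# Cartan relations k_i e_j k_i^{-1} = q^{(alpha_i, alpha_j)} e_j,
# with (a1,a1)=2, (a2,a2)=4, (a1,a2)=-2.
assert sp.simplify(k1*e1*k1.inv() - q**2 * e1) == Z
assert sp.simplify(k2*e2*k2.inv() - q**4 * e2) == Z
assert sp.simplify(k1*e2*k1.inv() - q**-2 * e2) == Z
assert sp.simplify(k2*e1*k2.inv() - q**-2 * e1) == Z

# [e_i, f_j] = delta_ij (k_i - k_i^{-1}) / (q_i - q_i^{-1}).
assert sp.simplify(e1*f1 - f1*e1 - (k1 - k1.inv())/(q - 1/q)) == Z
assert sp.simplify(e2*f2 - f2*e2 - (k2 - k2.inv())/(q**2 - q**-2)) == Z
assert sp.simplify(e1*f2 - f2*e1) == Z
assert sp.simplify(e2*f1 - f1*e2) == Z
print("all relations hold")
```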
2304.11623
Cache-Aided Communications in MISO Networks with Dynamic User Behavior: A Universal Solution
A practical barrier to the implementation of cache-aided networks is dynamic and unpredictable user behavior. In dynamic setups, users can freely depart and enter the network at any moment. The shared caching concept has the potential to handle this issue by assigning $K$ users to $P$ caching profiles, where all $\eta_{p}$ users assigned to profile $p$ store the same cache content defined by that profile. The existing schemes, however, cannot be applied in general and are not dynamic in the true sense as they put constraints on the transmitter-side spatial multiplexing gain $\alpha$. Specifically, they work only if $\alpha \leq \min_{p} \eta_{p}$ or $\alpha \geq \hat{\eta}$, where in the latter case, $\gamma$ is the normalized cache size of each user, $\hat{\eta}$ is an arbitrary parameter satisfying $1 \leq \hat{\eta} \leq \max_{p} \eta_{p}$, and the extra condition of $\alpha \geq K\gamma$ should also be met. In this work, we propose a universal caching scheme based on the same shared-cache model that can be applied to any dynamic setup, extending the working region of existing schemes to networks with $\min_{p} \eta_{p} \leq \alpha \leq \hat{\eta}$ and removing any other constraints of existing schemes. We also derive the closed-form expressions for the achievable degrees-of-freedom (DoF) of the proposed scheme and show that it achieves the optimal DoF for uniform user distributions. Notably, it is the first scheme to achieve the optimal DoF of $K\gamma+\alpha$ for networks with uniform user distribution, $\alpha > \hat{\eta}$, and non-integer $\frac{\alpha}{\hat{\eta}}$, without imposing any other constraints. Finally, we use numerical simulations to assess how non-uniform user distribution impacts the DoF performance and illustrate that the proposed scheme provides a noticeable improvement over unicasting for uneven distributions.
Milad Abolpour, MohammadJavad Salehi, Antti Tölli
2023-04-23T11:40:53Z
http://arxiv.org/abs/2304.11623v1
# Cache-Aided Communications in MISO Networks with Dynamic User Behavior: A Universal Solution ###### Abstract A practical barrier to the implementation of cache-aided networks is dynamic and unpredictable user behavior. In dynamic setups, users can freely depart and enter the network at any moment. The shared caching concept has the potential to handle this issue by assigning \(K\) users to \(P\) caching profiles, where all \(\eta_{p}\) users assigned to profile \(p\) store the same cache content defined by that profile. The existing schemes, however, cannot be applied in general and are not dynamic in the true sense as they put constraints on the transmitter-side spatial multiplexing gain \(\alpha\). Specifically, they work only if \(\alpha\leq\min_{p}\eta_{p}\) or \(\alpha\geq\hat{\eta}\), where in the latter case, \(\gamma\) is the normalized cache size of each user, \(\hat{\eta}\) is an arbitrary parameter satisfying \(1\leq\hat{\eta}\leq\max_{p}\eta_{p}\), and the extra condition of \(\alpha\geq K\gamma\) should also be met. In this work, we propose a universal caching scheme based on the same shared-cache model that can be applied to any dynamic setup, extending the working region of existing schemes to networks with \(\min_{p}\eta_{p}\leq\alpha\leq\hat{\eta}\) and removing any other constraints of existing schemes. We also derive the closed-form expressions for the achievable degrees-of-freedom (DoF) of the proposed scheme and show that it achieves the optimal DoF for uniform user distributions. Notably, it is the first scheme to achieve the optimal DoF of \(K\gamma+\alpha\) for networks with uniform user distribution, \(\alpha>\hat{\eta}\), and non-integer \(\frac{\alpha}{\hat{\eta}}\), without imposing any other constraints. Finally, we use numerical simulations to assess how non-uniform user distribution impacts the DoF performance and illustrate that the proposed scheme provides a noticeable improvement over unicasting for uneven distributions. coded caching; shared caching; dynamic networks; multi-antenna communications

## I Introduction The increasing volume and diversity of multimedia content require wireless networks to be enhanced to serve users at higher data rates and with lower latency [1], while network providers must further develop infrastructure in anticipation of evolving applications such as wireless immersive viewing [2, 3]. To facilitate the efficient delivery of such multimedia content, coded caching (CC) has been proposed to increase the data rates by leveraging the cache memory across the network as a communication resource [4]. Accordingly, incorporating CC into a single-stream downlink network boosts the achievable rate by a multiplicative factor proportional to the cumulative cache size in the entire network via multicasting carefully designed codewords to different user groups. In light of the significance of multi-antenna connectivity in deploying next-generation networks [1], various works have studied the performance of cache-aided multi-input single-output (MISO) configurations [5, 6, 7, 8, 9, 10]. For instance, [8] and [11] discussed the design of optimized beamformers in finite signal-to-noise-ratio (SNR), and [2] explored the capability of CC to cope with location-dependent file request applications. In practice, however, the achievable CC gains are constrained by the subpacketization process [12, 13, 14]. 
That is, in a network with \(K\) users, each file should be split into many smaller parts, the number of which grows exponentially with \(K\). A promising way to overcome this impediment is to use the shared caching concept, where there exist \(P\leq K\)_caching profiles_, and \(\eta_{p}\) users are assigned to profile \(p\in\{1,\cdots,P\}\). Even though with this concept, multiple users with a cache ratio of \(\gamma\) could be assigned to the same profile and cache exactly the same data, in [15], it is shown that in MISO setups with \(\alpha\geq\frac{K}{P}\), the scaling factor in the degrees-of-freedom (DoF) could be the same as in the case of dedicated per-user caches, i.e., \(K\gamma+\alpha\), where \(\alpha\) is the spatial multiplexing gain. However, for a shared-cache MISO setup with \(\alpha\leq\frac{K}{P}\), the optimal DoF is \(\alpha(1+P\gamma)\)[16]. Interestingly, shared caching can also address another critical issue with coded caching schemes: handling networks with a dynamic population of users departing and entering the network at any time. The problem with conventional CC schemes is that they require the placement phase to be designed based on the number of users known a priori. By contrast, this problem is alleviated with the shared-cache model since the cache placement phase is built upon knowledge of the number of profiles \(P\), and not the number of users \(K\). Accordingly, in some cache-aided scenarios, such as extended reality applications [3], the server is aware of the cache ratio of users rather than the number of existing users. In this sense, the authors in [17] and [18] apply the shared caching idea to address the dynamicity issue. In this method, the server only needs to know the cache ratio of users to determine the number of caching profiles and design the content placement phase. Although shared caching is crucial for managing dynamic conditions, the existing models are not dynamic in the true sense and only support two regions: _1)_\(\alpha\leq\min_{p}\eta_{p}\)[16], and _2)_\(\alpha\geq\hat{\eta}\) with \(\alpha\geq K\gamma\) and arbitrary \(\hat{\eta}\) satisfying \(1\leq\hat{\eta}\leq\max_{p}\eta_{p}\)[17, 18]. Therefore, a universal shared-cache setup that supports any user-to-profile association is not yet available. In this work, we design a universal cache-assisted MISO system capable of handling any instantaneous user distribution among caching profiles. Our system operation comprises two phases: _i) Content placement phase_, and _ii) content delivery phase_. In the placement phase, the server determines the number of caching profiles according to the cache ratio \(\gamma\). Then, upon connecting to the network, each user is assigned to a single profile and stores the cache content of that profile. During the content delivery phase, the server employs a clever combination of multicast and unicast transmissions to maximize the DoF. In this paper, we obtain closed-form expressions for the DoF, revealing the DoF loss caused by non-uniformness in users' distribution. In particular, for uniform user associations, it is shown that our proposed scheme achieves the optimal DoF not only in the regions covered in the literature but also in the region \(\alpha\geq\hat{\eta}\) with non-integer \(\nicefrac{\alpha}{\hat{\eta}}\). 
Notably, our proposed scheme supports any user distribution, including the regions omitted in the existing literature [15, 16, 17] such as networks with: _i)_ uneven user association with \(\alpha>\hat{\eta}\) and non-integer \(\frac{\alpha}{\hat{\eta}}\) unlike [15], _ii)_\(\min_{p}\eta_{p}\leq\alpha\leq\hat{\eta}\) unlike [16], and _iii)_\(\alpha\geq\hat{\eta}\) with \(\alpha<K\gamma\) unlike [17]. In this paper, bold lower-case and calligraphic letters denote vectors and sets, respectively. \([a:b]\) denotes the set \(\{a,\cdots,b\}\), \([a]=\{1,\cdots,a\}\), \(|\mathcal{A}|\) is the cardinality of \(\mathcal{A}\), and for \(\Lambda\subseteq\mathcal{A}\), \(\mathcal{A}_{\setminus\Lambda}\) represents \(\mathcal{A}-\Lambda\). \(\left(\mathcal{A}\|\mathcal{A}\right)_{x}\) denotes \(x\) concatenations of \(\mathcal{A}\) with itself, and \(\mathcal{A}\|\mathcal{B}\) is the concatenation of \(\mathcal{A}\) and \(\mathcal{B}\). ## II System Model In this paper, we focus on a dynamic MISO network, where a base station (BS) equipped with \(L\) transmit antennas and a spatial multiplexing gain of \(\alpha\leq L\) serves several cache-enabled single-antenna users. The BS has access to a library \(\mathcal{F}\) with \(N\) equal-sized files, and each user is equipped with a large enough memory to store a portion \(0<\gamma<1\) of the entire library. We suppose \(\gamma=\frac{\bar{t}}{P}\), where \(\bar{t}\) and \(P\) are natural numbers and \(\gcd(\bar{t},P)=1\). In this dynamic setup, users can move, enter and depart the network at any time. Accordingly, the BS does not have any prior knowledge about the number of available users during the transmission. When a user \(k\) enters the network, it is assigned to a profile represented by \(\mathsf{p}[k]\in[P]\), and the content of its cache is updated based on a _content placement algorithm_. In the placement phase, by following the same way as in [4], each file \(W^{n}\in\mathcal{F}\), \(n\in[N]\), is split into \(\binom{P}{\bar{t}}\) equal-sized mini-files \(W^{n}_{\mathcal{P}}\) such that \(W^{n}\rightarrow\{W^{n}_{\mathcal{P}}:\mathcal{P}\subseteq[P],|\mathcal{P}|=\bar{t}\}\). The cache content associated with profile \(p\in[P]\), represented by \(\mathcal{Z}_{p}\), includes a portion \(\gamma\) of each file \(W^{n}\) as \[\mathcal{Z}_{p}=\{W^{n}_{\mathcal{P}}:\mathcal{P}\ni p,\mathcal{P}\subseteq[P],|\mathcal{P}|=\bar{t},\forall n\in[N]\}.\] Then, defining \(\mathcal{U}_{p}\) as the set of users assigned to profile \(p\), i.e., \(\mathcal{U}_{p}=\{k:\mathsf{p}[k]=p\}\), each user \(k\in\mathcal{U}_{p}\) stores the cache content \(\mathcal{Z}_{p}\) during the placement phase. The dynamic nature of the network causes a fluctuating user population throughout time. During regular intervals, the network's demanding users reveal their required files from the library \(\mathcal{F}\) to the BS. Then, using a _content delivery algorithm_, the BS constructs and transmits a set of codewords, enabling users to retrieve their requested files. In this paper, we focus on the content delivery procedure over a specific time interval, where it is assumed that the number of present users during the BS's transmission is \(K\). In line with the general approach in the literature, we utilize the total DoF as the metric of interest, representing the average number of users served in parallel across all transmit intervals. 
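For concreteness, the placement phase can be sketched in a few lines of Python (illustrative only; the function and variable names are ours, not from the paper):

```python
# Sketch: shared-cache placement. Each file is split into C(P, t) mini-files
# W[n][T], one per t-subset T of [P]; profile p caches every mini-file whose
# index contains p, i.e. a fraction gamma = t/P of each file.
from itertools import combinations
from fractions import Fraction

def place(P, t, n_files):
    subsets = list(combinations(range(1, P + 1), t))
    # cache content Z[p]: the mini-file indices stored under profile p
    Z = {p: [(n, T) for n in range(n_files) for T in subsets if p in T]
         for p in range(1, P + 1)}
    gamma = Fraction(t, P)
    # sanity check: each profile stores a fraction gamma of the library
    assert all(len(Z[p]) == gamma * n_files * len(subsets) for p in Z)
    return subsets, Z

subsets, Z = place(P=6, t=1, n_files=4)   # gamma = 1/6, as in Section V
print(len(subsets), len(Z[1]))            # 6 mini-files per file, 4 cached
```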
The main contribution of this paper is to design delivery algorithms that provide a maximum combination of global caching and spatial multiplexing gains under the proposed dynamic conditions. As part of the delivery process, we discuss the transmission strategies to reduce the DoF loss caused by non-uniformness in user-to-profile association in the following section. ## III Resource Allocation and Data Delivery In this section, we discuss the resource allocation and transmission strategies during the content delivery phase. This phase commences once the set of active users reveals their requested files and comprises two consecutive steps: _1) Coded caching (CC) data delivery_; and _2) Unicast (UC) data delivery_. Let us define the number of users assigned to profile \(p\) as the length of profile \(p\), denoted by \(\eta_{p}\), where without loss of generality, it is assumed that \(\eta_{1}\geq\eta_{2}\geq\cdots\geq\eta_{P}\). By choosing a _delivery parameter_ \(\hat{\eta}\leq\max_{p}\eta_{p}\),\({}^{1}\) the BS builds and transmits a set of codewords to serve at most \(\hat{\eta}\) users assigned to each profile with a novel CC-based approach. In this regard, for every profile \(p\): Footnote 1: Here, the delivery parameter plays a similar role as the unifying length parameter in [17]. Both delivery and unifying length parameters tune the DoF loss caused by the non-uniformness in the user-to-profile association. \(\bullet\) if \(\hat{\eta}<\eta_{p}\), we exclude \(\eta_{p}-\hat{\eta}\) users, and exempt the BS from serving these users during the CC delivery step. Accordingly, the excluded users are served in the UC delivery step. \(\bullet\) if \(\hat{\eta}\geq\eta_{p}\), all \(\eta_{p}\) users are served via the CC delivery step. Now, let us suppose that the set of users assigned to profile \(p\in[P]\) and served during the CC delivery step is denoted by \(\mathcal{V}_{p}\) such that \(|\mathcal{V}_{p}|=\delta_{p}\), \(\mathcal{V}_{p}=\left\{v_{p,1},v_{p,2},\cdots,v_{p,\delta_{p}}\right\}\), and \(v_{p,i}\in\mathcal{U}_{p}\) for \(i\in[\delta_{p}]\). We note that \(\delta_{p}=\min(\hat{\eta},\eta_{p})\), and clearly, \(\delta_{1}=\hat{\eta}\) and \(\delta_{1}\geq\delta_{2}\geq\cdots\geq\delta_{P}\). In order to build the transmission vectors, as depicted in Fig. 1, the BS selects a parameter \(Q\), which represents the number of profiles served in each transmission, and a parameter \(\beta\), which expresses the number of users chosen from each profile to serve in each transmission.

Fig. 1: System model for a dynamic coded caching setup, where \(P=2\), \(\gamma=\frac{1}{2}\), \(\bar{t}=1\), \(\alpha=4\), \(\hat{\eta}=4\), \(Q=2\) and \(\beta=3\). During the placement phase, each user assigned to profiles \(A\) and \(B\) stores the cache content associated with those profiles. For the delivery phase, user \(5\) is served via unicasting and other users are served via \(3\) multicast transmissions.

The necessary conditions for choosing any arbitrary values for \(Q\) and \(\beta\) are defined in Remark 1. **Remark 1**: _In order to serve \(Q\) profiles each with a maximum of \(\beta\) users, the network parameters should satisfy the constraints \(\bar{t}+1\leq Q\leq\bar{t}+\lceil\nicefrac{\alpha}{\beta}\rceil\) and \(\beta\leq\min(\alpha,\hat{\eta})\)._ Proof: The proof is relegated to Appendix A. In the following, we present the system operation maximizing the DoF performance separately for two regimes: _i_) \(\alpha\leq\hat{\eta}\) and _ii_) \(\alpha>\hat{\eta}\). 
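The admissible choices in Remark 1 are easy to enumerate; the following illustrative Python snippet (ours, not from the paper) lists the \((Q,\beta)\) pairs it permits:

```python
# Sketch: enumerate the (Q, beta) pairs permitted by Remark 1:
# t+1 <= Q <= t + ceil(alpha/beta) and beta <= min(alpha, eta_hat).
from math import ceil

def feasible_pairs(t, alpha, eta_hat):
    pairs = []
    for beta in range(1, min(alpha, eta_hat) + 1):
        for Q in range(t + 1, t + ceil(alpha / beta) + 1):
            pairs.append((Q, beta))
    return pairs

# Example matching Fig. 1: t=1, alpha=4, eta_hat=4 -> (Q, beta) = (2, 3) is feasible.
print((2, 3) in feasible_pairs(t=1, alpha=4, eta_hat=4))  # True
```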
For the CC delivery step, each of these cases operates either with _Strategy A_ (cf. Section III-A) or _Strategy B_ (cf. Section III-B). The chosen transmission strategy only depends on the parameters \(\alpha\) and \(\hat{\eta}\), and it is independent of the content placement phase. In other words, the server performs the placement phase only based on parameter \(\gamma\) without considering which transmission strategy the system will use for the CC delivery step. #### III-A1 System operation for \(\alpha\leq\hat{\eta}\) For the case of \(\alpha\leq\hat{\eta}\), we set \(\beta=\alpha\), and the only option for \(Q\) to maximize the DoF is \(Q=\bar{t}+1\). Here, _Strategy A_ is utilized to build the transmission vectors. Replacing the constraint \(\alpha\leq\hat{\eta}\) with \(\alpha\leq\min_{p}\delta_{p}\) reduces our system model to [16], while our proposed scheme also works for the scenarios with \(\min_{p}\delta_{p}<\alpha\leq\hat{\eta}\). #### III-A2 System operation for \(\alpha>\hat{\eta}\) For this case, we set \(\beta=\hat{\eta}\), and define \(\hat{\alpha}=\frac{\alpha}{\hat{\eta}}\). If \(\hat{\alpha}\) is an integer, then we follow _Strategy A_ to build the transmission vectors. For non-integer \(\frac{\alpha}{\hat{\eta}}\), the server can serve users via _Strategy A_ by setting \(\bar{t}+1\leq Q\leq\bar{t}+\lfloor\nicefrac{\alpha}{\hat{\eta}}\rfloor\), and via _Strategy B_ by choosing \(Q=\bar{t}+\lceil\nicefrac{\alpha}{\hat{\eta}}\rceil\). In this case, if we assume that users are uniformly distributed among caching profiles, and \(\frac{\alpha}{\hat{\eta}}\) is an integer, the system performance is simplified to [15], while our proposed scheme also covers the uneven user-to-profile associations with non-integer \(\frac{\alpha}{\hat{\eta}}\). ### _Transmission Strategy A_ In this strategy, each mini-file \(W_{\mathcal{P}}^{n}\) is split into \(\beta\binom{P-\bar{t}-1}{Q-\bar{t}-1}\) subpackets \(W_{\mathcal{P},q}^{n}\), where \(q\in[\beta\binom{P-\bar{t}-1}{Q-\bar{t}-1}]\) increases sequentially after each transmission to ensure that no subpacket is transmitted twice. Next, we follow the so-called _elevation process_ to serve \(Q\) caching profiles each with at most \(\beta\) users. _Elevation process:_ In this process, the aim is to characterize the set of users that are served in each transmission. Accordingly, we let \(\phi_{p}=\max(\beta,\delta_{p})\), and use the so-called _transmission triple_ \((r,c,l)\), where \(r\in[P-Q+1]\), \(c\in[\phi_{r}]\) and \(l\in\left[\binom{P-r}{Q-1}\right]\). Also, it is assumed that \(\eta_{1}\geq\eta_{2}\geq\cdots\geq\eta_{P}\), which results in \(\phi_{1}\geq\phi_{2}\geq\cdots\geq\phi_{P}\). This process creates \(\mathcal{T}_{r,c,l}\), the set of users that are served during the transmission triple \((r,c,l)\). In this regard, first, for every \(p\in[P]\), we elevate the set \(\mathcal{V}_{p}\) to the set \(\mathcal{R}_{p}\) as follows. \[\mathcal{R}_{p}=\mathcal{R}_{p,1}\|\cdots\|\mathcal{R}_{p,\phi_{p}}, \tag{1}\] where, for \(j\in[\phi_{p}]\), \(\mathcal{R}_{p,j}\) is defined as: \[\mathcal{R}_{p,j}=\begin{cases}\mathcal{V}_{p}&\delta_{p}\leq\beta\\ \{v_{p,l}:l=(i+j-1)\%\delta_{p},1\leq i\leq\beta\}&\delta_{p}>\beta\end{cases}.\] Here, the \(\%\) sign denotes the mod operator, for which \(c\%c=c\) and \((d+c)\%c=d\%c\). Furthermore, we use a generalized multiset definition in which the same element may be repeated in a set, e.g., \(\{a,a\}\) cannot be reduced to \(\{a\}\). 
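The elevation step (1) can be sketched as follows (illustrative Python, ours; `mod1` encodes the paper's convention \(c\%c=c\), so values lie in \([1,c]\)):

```python
# Sketch of the elevation step (1) in Strategy A.
def mod1(x, c):
    return ((x - 1) % c) + 1

def elevate(V_p, beta):
    """Build R_p = R_{p,1} || ... || R_{p,phi_p} from V_p = [v_{p,1}, ...]."""
    delta = len(V_p)
    phi = max(beta, delta)
    R = []
    for j in range(1, phi + 1):
        if delta <= beta:
            R.append(list(V_p))        # R_{p,j} = V_p
        else:                          # cyclic window of beta users
            R.append([V_p[mod1(i + j - 1, delta) - 1]
                      for i in range(1, beta + 1)])
    return R

# delta_p = 5 > beta = 3: each block is a cyclic window of 3 users.
print(elevate(['v1', 'v2', 'v3', 'v4', 'v5'], beta=3))
```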
Now, for \(p\in[P]\), we define \(\mathcal{S}_{p}=\mathcal{S}_{p,1}\|\cdots\|\mathcal{S}_{p,\hat{\eta}}\), while \[\mathcal{S}_{p,j}=\begin{cases}\mathcal{R}_{p,j}&1\leq j\leq\phi_{p}\\ \varnothing&\phi_{p}+1\leq j\leq\hat{\eta}\end{cases}. \tag{2}\] Then, for each \(r\in[P-Q+1]\), we define the set \(\mathcal{M}_{r}\) as: \[\mathcal{M}_{r}=\{\mathcal{F}:\mathcal{F}\subseteq[r+1:P],\,|\mathcal{F}|=Q-1\}. \tag{3}\] In what follows, we use \(\mathcal{M}_{r}(l)\) to indicate the \(l\)-th \((Q-1)\)-tuple of \(\mathcal{M}_{r}\). Finally, the set \(\mathcal{T}_{r,c,l}\) for the transmission triple \((r,c,l)\) is given by: \[\mathcal{T}_{r,c,l}=\{\mathcal{S}_{r,c}\|\mathcal{S}_{b_{1},c}\|\cdots\|\mathcal{S}_{b_{Q-1},c}:b_{i}\in\mathcal{M}_{r}(l),\forall i\in[Q-1]\}. \tag{4}\] Generally speaking, for the transmission triple \((r,c,l)\), if \(\delta_{r}=0\), the BS does not transmit any signal; otherwise, the BS serves users assigned to \(\mathcal{T}_{r,c,l}=\mathcal{S}_{r,c}\|\mathcal{S}_{b_{1},c}\|\cdots\|\mathcal{S}_{b_{Q-1},c}\), where \(\{b_{1},\cdots,b_{Q-1}\}=\mathcal{M}_{r}(l)\). During the transmission triple \((r,c,l)\), by defining \(\mathcal{N}=\{r\}\cup\mathcal{M}_{r}(l)\), the BS constructs the transmission vector as follows. \[\mathbf{x}_{r,c,l}=\sum_{\Lambda\subseteq\mathcal{N}:|\Lambda|=\bar{t}}\ \sum_{k\in\mathcal{S}_{p,c}:p\in\mathcal{N}_{\setminus\Lambda}}W_{\Lambda,q}^{k}\mathbf{w}_{\mathcal{G}_{\Lambda}^{k}},\] where \(\mathbf{w}_{\mathcal{G}_{\Lambda}^{k}}\in\mathbb{C}^{L\times 1}\) is the zero-forcing (ZF) precoder that cancels out the interference of user \(k\) at the set \(\mathcal{G}_{\Lambda}^{k}=\left\{j\in\mathcal{S}_{p,c}:\forall p\in\mathcal{N}_{\setminus\Lambda},j\neq k\right\}\). In Appendix D, it is proven that all users served with _Strategy A_ can decode their requested files at the end of the CC delivery step. Now, denoting the channel vector of user \(i\) by \(\mathbf{h}_{i}\in\mathbb{C}^{L\times 1}\), the received signal at user \(i\) at the end of the transmission triple \((r,c,l)\) is given by: \[y_{i}=\mathbf{h}_{i}^{\mathrm{H}}\mathbf{x}_{r,c,l}+n_{i},\] where \(n_{i}\) is the zero-mean additive white Gaussian noise of unit variance. In order to give further insight, an example for data delivery with _Strategy A_ is provided in Appendix B. ### _Transmission Strategy B_ When \(\alpha>\hat{\eta}\) and \(\frac{\alpha}{\hat{\eta}}\) is not an integer, we can serve users by setting \(\beta=\hat{\eta}\), and \(Q=\bar{t}+\lceil\nicefrac{\alpha}{\hat{\eta}}\rceil\).\({}^{2}\) Here, first, we split each mini-file into \((\hat{\eta}\bar{t}+\alpha)\binom{P-\bar{t}-1}{Q-\bar{t}-1}\binom{Q-2}{Q-\bar{t}-2}\) subpackets. Then, we use the elevation process to serve \(Q\) profiles each with at most \(\beta\) users. Footnote 2: For \(\alpha>\hat{\eta}\) and non-integer \(\frac{\alpha}{\hat{\eta}}\), we can still serve users with _Strategy A_. However, the achievable DoF is less than the one with _Strategy B_. _Elevation process:_ This process builds the set of users served in each transmission. First, for \(r\in[P]\), we define \(\mathcal{Y}_{r}\) as: \[\mathcal{Y}_{r}=\begin{cases}\mathcal{V}_{r}&\delta_{r}=\hat{\eta}\\ \mathcal{V}_{r}\|(f^{*}\|f^{*})_{\hat{\eta}-\delta_{r}}&\mathrm{o.w.}\end{cases}, \tag{5}\] where \(f^{*}\) denotes a phantom (non-existent) user. 
Generally speaking, in each transmission of _Strategy B_, we serve the users assigned to \(Q\) profiles such that we select at most \(\hat{\eta}\) users from \(Q-1\) profiles, and pick \(\theta=\alpha-\hat{\eta}\lfloor\nicefrac{\alpha}{\hat{\eta}}\rfloor\) users from another profile. Next, for \(r\in[P]\) and \(m\in[\hat{\eta}]\), we consider the set \(\mathcal{E}_{r}^{m}\) as: \[\mathcal{E}_{r}^{m}=\bigcup\nolimits_{i=0}^{\theta-1}\mathcal{Y}_{r}((i+m)\%\hat{\eta}), \tag{6}\] where \(\mathcal{Y}_{r}(i)\) is the \(i\)-th element of \(\mathcal{Y}_{r}\). Indeed, \(\mathcal{E}_{r}^{m}\) shifts \(\mathcal{Y}_{r}\) to the right \(m\) times, and picks \(\theta\) elements from it. Then, for each \(u\in\mathcal{E}_{r}^{m}\), we define the set \(\mathcal{K}_{r}^{m,u}\) as follows. \[\mathcal{K}_{r}^{m,u}=(u\|u)_{\nu_{1}}\|(f^{*}\|f^{*})_{\nu_{2}-\nu_{1}}, \tag{7}\] where \(\nu_{1}=\binom{Q-2}{Q-\bar{t}-2}\) and \(\nu_{2}=\binom{Q-1}{Q-\bar{t}-1}\). Next, by defining \(\mathcal{K}_{r}^{m,u}(s)\) as the \(s\)-th element of \(\mathcal{K}_{r}^{m,u}\), for \(s\in[\nu_{2}]\), it is assumed that \(\mathcal{K}_{r,s}^{m,u}\) is the result of \(s\) circular shifts of \(\mathcal{K}_{r}^{m,u}\), such that: \[\mathcal{K}_{r,s}^{m,u}=\bigcup\nolimits_{i=0}^{\nu_{2}-1}\mathcal{K}_{r}^{m,u}((i+s)\%\nu_{2}). \tag{8}\] Moreover, we assume that \(\overline{\mathcal{P}}_{r}=[P]_{\setminus r}\) for \(r\in[P]\), and \(\bar{\delta}_{c}=\overline{\mathcal{P}}_{r}(c)\) is the \(c\)-th element of \(\overline{\mathcal{P}}_{r}\). The caching profiles in \(\overline{\mathcal{P}}_{r}\) are sorted in descending order such that if \(i<j\), then \(\delta_{\overline{\mathcal{P}}_{r}(i)}\geq\delta_{\overline{\mathcal{P}}_{r}(j)}\). Next, for a given \(r\) and \(c\in[P-Q+1]\), we define \(\mathcal{I}_{c}^{r}\) as: \[\mathcal{I}_{c}^{r}=\big{\{}\mathcal{F}:\mathcal{F}\subseteq\{\overline{\mathcal{P}}_{r}(c+1),\cdots,\overline{\mathcal{P}}_{r}(P-1)\},|\mathcal{F}|=Q-2\}. \tag{9}\] Furthermore, denote the \(l\)-th \((Q-2)\)-tuple of \(\mathcal{I}_{c}^{r}\) by \(\mathcal{I}_{c}^{r}(l)\), where \(l\in\left[\binom{P-c-1}{Q-2}\right]\). For the delivery process with _Strategy B_, we use the so-called _transmission quintuple_ \((r,c,l,m,s)\), where \(r\in[P]\), \(c\in[P-Q+1]\), \(l\in\left[\binom{P-c-1}{Q-2}\right]\), \(m\in[\hat{\eta}]\) and \(s\in[\nu_{2}]\). In each transmission quintuple \((r,c,l,m,s)\), the users assigned to the caching profiles \(\mathcal{B}=\big{\{}\bar{\delta}_{c}\big{\}}\cup\mathcal{I}_{c}^{r}(l)\), and the users in the set \(\mathcal{E}_{r}^{m}\), are served. Suppose that \(\mathcal{C}=\big{\{}\mathcal{F}:\mathcal{F}\subseteq\mathcal{B},|\mathcal{F}|=\lfloor\nicefrac{\alpha}{\hat{\eta}}\rfloor\big{\}}\), and \(\mathcal{C}(n)\) is the \(n\)-th \(\lfloor\nicefrac{\alpha}{\hat{\eta}}\rfloor\)-tuple of \(\mathcal{C}\) with \(n\in[\nu_{2}]\). We define the function \(I^{+}\big{(}\bar{\delta}_{c},\mathcal{E}_{r}^{m}\big{)}\) such that \(I^{+}\big{(}\bar{\delta}_{c},\mathcal{E}_{r}^{m}\big{)}=0\) if \(\mathcal{E}_{r}^{m}\) consists only of phantom users and \(\delta_{\bar{\delta}_{c}}=0\); otherwise, \(I^{+}\big{(}\bar{\delta}_{c},\mathcal{E}_{r}^{m}\big{)}=1\). If \(I^{+}\big{(}\bar{\delta}_{c},\mathcal{E}_{r}^{m}\big{)}=1\), after eliminating the impacts of the phantom users \(f^{*}\), the BS builds the transmission vector for the transmission quintuple \((r,c,l,m,s)\) as follows. 
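The phantom-padding and cyclic-shift steps (5)-(6) admit a short illustrative sketch (Python, ours; `None` stands for a phantom user \(f^{*}\)):

```python
# Sketch of the phantom-padding and shift steps (5)-(6) in Strategy B.
def mod1(x, c):                            # paper's convention: c % c = c
    return ((x - 1) % c) + 1

def pad_phantoms(V_r, eta_hat):            # eq. (5): Y_r
    return list(V_r) + [None] * (eta_hat - len(V_r))

def shifted_window(Y_r, m, theta):         # eq. (6): E_r^m
    eta_hat = len(Y_r)
    # theta consecutive entries of Y_r, read cyclically starting at m
    return [Y_r[mod1(i + m, eta_hat) - 1] for i in range(theta)]

Y = pad_phantoms(['u1', 'u2', 'u3'], eta_hat=4)   # ['u1', 'u2', 'u3', None]
print(shifted_window(Y, m=3, theta=2))            # ['u3', None]
```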
\[\mathbf{x}_{r,c,l}^{m,s}=\sum\nolimits_{n=1}^{\nu_{2}}\sum\limits_{k\in\mathcal{K}_{r,s}^{m,u}(n)\cup\mathcal{V}_{p}:u\in\mathcal{E}_{r}^{m},p\in\mathcal{C}(n)}W_{\Theta_{n},q}^{k}\mathbf{w}_{\mathcal{H}_{\mathcal{C}(n)}^{k}},\] where \(\Theta_{n}=\mathcal{B}_{\setminus\mathcal{C}(n)}\) with \(|\Theta_{n}|=\bar{t}\), and \(\mathbf{w}_{\mathcal{H}_{\mathcal{C}(n)}^{k}}\in\mathbb{C}^{L\times 1}\) is the precoder that suppresses the interference of user \(k\) at the set \(\mathcal{H}_{\mathcal{C}(n)}^{k}=\{j\in\mathcal{E}_{r}^{m}\cup\mathcal{V}_{p}:p\in\mathcal{C}(n),\,j\neq k,\,j\neq f^{*}\}\). In Appendix D, we prove that all users can decode their requested files with _Strategy B_. Accordingly, user \(i\) receives the signal \[y_{i}=\mathbf{h}_{i}^{\mathrm{H}}\mathbf{x}_{r,c,l}^{m,s}+n_{i}.\] In order to give further insight, an example for data delivery with _Strategy B_ is provided in Appendix C. ### _Unicast (UC) Data Delivery_ In the UC delivery step, the BS transmits data to the users excluded from the CC delivery step. Here, unlike the CC delivery step that benefits from the global coded caching and spatial multiplexing gains, only local coded caching and spatial multiplexing gains are available. Suppose that the BS serves \(K_{U}\) users during the UC delivery step such that \(K_{U}=\sum_{p=1}^{P}\max(0,\eta_{p}-\hat{\eta})\). Each of the files requested by these users is split into the same number of subpackets as in the CC delivery step. Then, in order to transmit these missing subpackets, we follow a greedy algorithm similar to [17], which comprises 3 processes: 1) sort users based on the number of their missing subpackets in descending order; 2) create a transmission vector to deliver one missing subpacket to each of the first \(\min(\alpha,K_{U})\) users; 3) repeat processes 1 and 2 until all missing subpackets are transmitted. ## IV DoF Analysis In this section, we use DoF as the metric of interest to measure performance. Here, the DoF is defined as the average number of users served concurrently during the delivery phase. In the CC and UC delivery steps, we denote the total number of transmissions by \(T_{M}\) and \(T_{U}\), respectively, and the number of served users by \(J_{M}\) and \(J_{U}\). Therefore, DoF is computed as follows. \[\mathrm{DoF}=\frac{J_{M}+J_{U}}{T_{M}+T_{U}}. \tag{10}\] Furthermore, we suppose that \(K_{M}\) and \(K_{U}\) users are served during the CC and UC delivery steps such that \(K_{M}=\sum_{p}\min(\hat{\eta},\eta_{p})\) and \(K_{U}=\sum_{p}\max(0,\eta_{p}-\hat{\eta})\). The next theorem characterizes the DoF for the cache-aided networks operating with strategies \(A\) and \(B\) during the CC delivery step. **Theorem 1**: _Consider a dynamic MISO network with the spatial multiplexing gain of \(\alpha\), cache ratio \(\gamma\) and the delivery parameter \(\hat{\eta}\). If the system operates with Strategy A in the CC delivery step, the DoF is given by:_ \[\mathrm{DoF}=\begin{cases}\dfrac{K\binom{P-1}{Q-1}\beta}{\sum\limits_{r=1}^{P-Q+1}D(\delta_{r})\binom{P-r}{Q-1}}&K_{U}=0\\[2ex] \dfrac{K_{M}\binom{P-1}{Q-1}\beta+K_{U}(1-\gamma)\binom{P}{\bar{t}}\beta^{\prime}}{\sum\limits_{r=1}^{P-Q+1}D(\delta_{r})\binom{P-r}{Q-1}+\left\lceil\frac{K_{U}(1-\gamma)\binom{P}{\bar{t}}\beta^{\prime}}{\min(K_{U},\alpha)}\right\rceil}&K_{U}>0\end{cases}, \tag{11}\] _where \(\beta^{\prime}=\beta\binom{P-\bar{t}-1}{Q-\bar{t}-1}\) and \(D(\delta_{r})=\phi_{r}\) if \(\delta_{r}\neq 0\); otherwise, \(D(\delta_{r})=0\). 
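The greedy UC step can be sketched as follows (illustrative Python, ours); it returns the number \(T_{U}\) of unicast transmissions entering (10):

```python
# Sketch of the greedy unicast step: repeatedly serve one missing subpacket
# to each of the min(alpha, K_U) users with the most missing subpackets.
def unicast_rounds(missing, alpha):
    """missing: list of per-user missing-subpacket counts; returns T_U."""
    missing = list(missing)
    T_U = 0
    while any(missing):
        # process 1: sort users by remaining subpackets, descending
        order = sorted(range(len(missing)), key=lambda u: -missing[u])
        # process 2: one transmission serves the first min(alpha, K_U) users
        for u in [u for u in order if missing[u] > 0][:alpha]:
            missing[u] -= 1
        T_U += 1            # process 3: repeat until nothing is missing
    return T_U

print(unicast_rounds([5, 3, 1], alpha=2))   # 5 rounds for 9 subpackets
```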
If Strategy B is applied during the CC delivery step, the DoF is given by:_ \[\mathrm{DoF}=\begin{cases}\dfrac{K\binom{P-1}{Q-1}(\hat{\eta}\bar{t}+\alpha)\nu_{2}}{\sum\limits_{r=1}^{P}D(\delta_{r})\binom{P-1}{Q-1}\nu_{2}}&K_{U}=0\\[2ex] \dfrac{K_{M}\binom{P-1}{Q-1}(\hat{\eta}\bar{t}+\alpha)\nu_{2}+K_{U}(1-\gamma)\binom{P}{\bar{t}}\beta^{\prime\prime}}{\sum\limits_{r=1}^{P}D(\delta_{r})\binom{P-1}{Q-1}\nu_{2}+\left\lceil\frac{K_{U}(1-\gamma)\binom{P}{\bar{t}}\beta^{\prime\prime}}{\min(K_{U},\alpha)}\right\rceil}&K_{U}>0\end{cases}, \tag{12}\] _where \(\beta^{\prime\prime}=(\hat{\eta}\bar{t}+\alpha)\binom{P-\bar{t}-1}{Q-\bar{t}-1}\binom{Q-2}{Q-\bar{t}-2}\) is the number of subpackets per mini-file in Strategy B._ **Remark 2**: _For uniform user-to-profile association, if \(\alpha\leq\hat{\eta}\), the achievable DoF of our scheme is simplified to the optimal DoF \(\alpha(1+P\gamma)\) obtained in [16]. If \(\alpha>\hat{\eta}\), the achievable DoF of our scheme is simplified to the optimal DoF \(K\gamma+\alpha\) obtained in [15]. However, unlike the scheme proposed in [15], our scheme supports the networks with non-integer \(\frac{\alpha}{\hat{\eta}}\) (cf. Appendix E)._ Indeed, increasing non-uniformness in user distribution prevents the system from achieving optimal DoF performance. For instance, for the case \(\alpha>\hat{\eta}\), suppose that all users are served with _Strategy A_ during the CC delivery step, such that \(\beta=\hat{\eta}\) and \(Q\leq\bar{t}+\lfloor\nicefrac{\alpha}{\hat{\eta}}\rfloor\). In this setup, to boost the DoF performance, we can set \(Q=\bar{t}+\lfloor\nicefrac{\alpha}{\hat{\eta}}\rfloor\). By defining \(\eta_{\text{avg}}=\frac{K}{P}\) and assuming \(\alpha\) is divisible by \(\hat{\eta}\) and \(\eta_{\text{avg}}\), the DoF loss (compared to uniform user distribution) is \(\alpha(1-\nicefrac{\eta_{\text{avg}}}{\hat{\eta}})\). Setting \(Q=\bar{t}+\lfloor\nicefrac{\alpha}{\hat{\eta}}\rfloor\), however, requires implementing successive interference cancellation (SIC) at the receivers. To avoid using SIC, we can set \(Q=\bar{t}+1\), which simplifies the DoF of the uniform association to \(\eta_{\text{avg}}(\bar{t}+1)\). Here, we should compare the achievable DoF of \(\eta_{\text{avg}}(\bar{t}+1)\) with \(\alpha\) such that: _i_) if \(\eta_{\text{avg}}(\bar{t}+1)\geq\alpha\), we use the proposed CC scheme to simultaneously benefit from global CC and spatial multiplexing gains; _ii_) if \(\eta_{\text{avg}}(\bar{t}+1)<\alpha\), we serve users via unicasting so as not to incur any loss in DoF. **Remark 3**: _For the non-uniform user-to-profile association with \(Q=\bar{t}+1\) and \(\eta_{1}\leq\alpha\), the best possible DoF is achievable by setting \(\hat{\eta}=\eta_{1}\) (cf. Appendix F)._ ## V Numerical Results In this section, we examine the impacts of non-uniformness in user distribution on the achievable DoF. In this regard, assume a cache-aided MISO setup with \(\bar{t}=1\), \(P=6\) and \(\gamma=\frac{1}{6}\), in which \(K=30\) users are present during the delivery phase. Here, for each association, we compute the standard deviation \(\sigma\) as \(\sigma^{2}=\frac{1}{P}\sum_{p=1}^{P}(\eta_{p}-\eta_{\text{avg}})^{2}\), where \(\eta_{\text{avg}}=5\). Fig. 2 illustrates the maximum achievable DoF for different \(Q\) and \(\sigma\) values with \(\alpha=7\). As observed from Fig. 2(a), for the uniform user distribution, i.e., \(\sigma=0\), our scheme achieves the optimal DoF \(K\gamma+\alpha=12\) with \(\hat{\eta}=\eta_{\text{avg}}=5\) and \(Q=\bar{t}+\lceil\nicefrac{\alpha}{\hat{\eta}}\rceil=3\). Here, we note that the system performance during the CC delivery step corresponds to _Strategy B_ for the regime \(\alpha>\hat{\eta}\) and non-integer \(\nicefrac{\alpha}{\hat{\eta}}\), which is missing in the literature. For small \(\sigma\) values (e.g., \(\sigma=0.3\)), although the achievable DoF for \(\hat{\eta}=6\) and \(Q=\bar{t}+1=2\) is slightly less than that for \(\hat{\eta}=6\) and \(Q=3\), the receiver structure for \(Q=2\) is more straightforward, as the receivers do not need to implement SIC. 
For large \(\sigma\) values (e.g., \(\sigma=3\)), when \(\max_{p}\eta_{p}>\alpha\), setting \(Q=\bar{t}+1\) and \(\hat{\eta}=\max_{p}\eta_{p}\) maximizes the achievable DoF. In Fig. 3, for any association, we find \(\mathrm{DoF}_{\text{max}}\), which is the maximum achievable DoF obtained by a line search over \(\hat{\eta}\) and \(Q\) values. Accordingly, for the associations with the same \(\sigma\) value, \(\mathrm{DoF}_{M}\) indicates the average of \(\mathrm{DoF}_{\text{max}}\). Here, our proposed scheme is compared with the optimal case, which corresponds to uniform user distribution (described as _only multicast_), and the case where all users are served via unicasting (described as _only unicast_). Although the placement phase was designed solely based on \(\gamma\), our scheme boosts the maximum achievable DoF by \(10\%-70\%\) over unicasting for moderate \(\sigma\) (e.g., \(\sigma=1-4.5\)). So, to maximize the achievable DoF, the server should serve users via the proposed approach for moderate \(\sigma\), and via unicasting for large \(\sigma\). ## VI Conclusion We proposed a novel coded caching scheme for handling network dynamicity, where the users can freely enter or depart the network at any time. The conventional schemes in the literature are not truly dynamic as they are only applicable if: _1)_ the minimum profile length (the number of users assigned to the profile) is at least the spatial multiplexing gain \(\alpha\), or _2)_\(\alpha\geq\hat{\eta}\) and \(\alpha\) is at least the global CC gain, where \(\hat{\eta}\) can be the length of any profile. Our proposed scheme addressed this bottleneck by providing a universal solution applicable to any dynamic network setup, removing all the constraints imposed by existing solutions. We also analyzed the degrees-of-freedom (DoF) performance of the proposed scheme, and for the uniform distribution, we showed that it achieves the optimal DoF not only in the regions covered in the literature but also in the region \(\alpha\geq\hat{\eta}\) with non-integer \(\nicefrac{\alpha}{\hat{\eta}}\).
2306.10344
The depth of Tsirelson's norm
Tsirelson's norm $\|\cdot \|_T$ on $c_{00}$ is defined as the supremum over a certain collection of iteratively defined, monotone increasing norms $\|\cdot \|_k$. For each positive integer $n$, the value $j(n)$ is the least integer $k$ such that for all $x \in \mathbb{R}^n$ (here $\mathbb{R}^n$ is considered as a subspace of $c_{00}$), $\|x\|_T = \|x\|_k$. In 1989 Casazza and Shura asked what is the order of magnitude of $j(n)$. It is known that $j(n) \in \mathcal{O}(\sqrt{n})$. We show that this bound is tight, that is, $j(n) \in \Omega(\sqrt{n})$. Moreover, we compute the tight order of magnitude for some norms being modifications of the original Tsirelson's norm.
Kevin Beanland, Jędrzej Hodor
2023-06-17T13:16:20Z
http://arxiv.org/abs/2306.10344v1
# The depth of Tsirelson's norm ###### Abstract. Tsirelson's norm \(\|\cdot\|_{T}\) on \(c_{00}\) is defined as the supremum over a certain collection of iteratively defined, monotone increasing norms \(\|\cdot\|_{k}\). For each positive integer \(n\), the value \(j(n)\) is the least integer \(k\) such that for all \(x\in\mathbb{R}^{n}\) (here \(\mathbb{R}^{n}\) is considered as a subspace of \(c_{00}\)), \(\|x\|_{T}=\|x\|_{k}\). In 1989 Casazza and Shura [11] asked what is the order of magnitude of \(j(n)\). It is known that \(j(n)\in\mathcal{O}(\sqrt{n})\)[6]. We show that this bound is tight, that is, \(j(n)\in\Omega(\sqrt{n})\). Moreover, we compute the tight order of magnitude for some norms being modifications of the original Tsirelson's norm. Key words : Banach space, Tsirelson's norm, Schreier family, regular families J. Hodor is partially supported by a Polish National Science Center grant (BEETHOVEN; UMO-2018/31/G/ST1/03718). ## 1. Introduction In 1974 Tsirelson [19] constructed a reflexive Banach space containing no isomorphic copy of \(c_{0}\) or \(\ell_{p}\) for each \(1\leqslant p<\infty\). The idea evolved throughout the years, and what is nowadays called Tsirelson's space is the dual of the original space, usually presented according to the description given in [12]. Tsirelson's space not only served as a counterexample in Banach space theory but the inductive process Tsirelson developed for defining the norm eventually led to many breakthroughs in several areas of mathematics. See Tsirelson's webpage for an exhaustive list of publications concerning Tsirelson's space up to 2004 [18]; we refer directly to some of the most notable ones [16, 14, 4]. We also refer to a monograph of Casazza and Shura on Tsirelson's space [11]. Tsirelson's space is the completion of \(c_{00}\) under a certain norm - we call it Tsirelson's norm. Let \(\mathcal{S}_{1}:=\{F\subseteq\mathbb{N}:|F|\leqslant\min F\}\) be the Schreier family [17]. We start by defining \(\|\cdot\|_{0}\) as the supremum norm on \(c_{00}\). Next, for each non-negative integer \(m\) and for each \(x\in c_{00}\) we define \[\|x\|_{m+1}:=\max\left\{\|x\|_{m},\sup\left\{\frac{1}{2}\sum_{i=1}^{d}\|E_{i}x\|_{m}:E_{1}<\cdots<E_{d},\{\min E_{i}:i\in[d]\}\in\mathcal{S}_{1}\right\}\right\}.\] For subsets of integers \(E,E^{\prime}\) and \(x\in c_{00}\), by \(Ex\) we mean the coordinatewise multiplication of \(x\) and the characteristic function of \(E\), and by \(E<E^{\prime}\) we mean \(\max E<\min E^{\prime}\). See the next section for a more careful definition, generalized and stated in a slightly different spirit. Tsirelson's norm \(\|x\|_{T}\) is defined as the supremum over \(\|x\|_{m}\) for all non-negative integers \(m\). The above definition can be described more intuitively as a combinatorial game. A vector \(x\in c_{00}\) is provided on input and the goal is to maximize the result. We start with the supremum norm. Then, in each step, we either take the current result or split the vector in some way dependent on the family \(\mathcal{S}_{1}\). However, if we choose to split, we must pay a penalty of multiplying the current result by \(\frac{1}{2}\). Next, we proceed with the same game on each part of the split, summing the results afterward. It is not hard to see that for every \(x\in c_{00}\) there exists a positive integer \(M\) such that the norm stabilizes starting from the \(M\)th step, that is, \(\|x\|_{M}=\|x\|_{M+1}=\cdots=\|x\|_{T}\). 
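The iterated norms can be explored computationally. The following brute-force Python sketch (ours, not from the paper) computes \(\|x\|_{m}\) for increasing \(m\) and reports the first level at which two consecutive norms agree; it assumes the standard fact that for vectors with nonnegative entries the sets \(E_{1}<\cdots<E_{d}\) may be taken to be consecutive intervals, and it is exponential-time, intended only for small examples.

```python
# Brute-force computation of ||x||_m and the first level where the norm
# stabilizes. Values are dyadic rationals, so float comparison is exact.
from functools import lru_cache

def tsirelson(x, max_level=50):
    x = tuple(abs(v) for v in x)      # the norm is 1-unconditional
    n = len(x)

    @lru_cache(maxsize=None)
    def norm(lo, hi, m):
        """||E x||_m for the interval E = {lo+1, ..., hi} (1-based coords)."""
        if lo >= hi:
            return 0.0
        if m == 0:
            return max(x[lo:hi])
        best = norm(lo, hi, m - 1)
        for s in range(lo, hi):       # min E_1 = s + 1, so d <= s + 1
            best = max(best, 0.5 * split(s, hi, s + 1, m - 1))
        return best

    @lru_cache(maxsize=None)
    def split(lo, hi, d, m):
        """max of sum_i ||E_i x||_m over at most d intervals inside (lo, hi]."""
        if lo >= hi:
            return 0.0
        if d == 1:
            return norm(lo, hi, m)
        return max(norm(lo, mid, m) + split(mid, hi, d - 1, m)
                   for mid in range(lo + 1, hi + 1))

    values = [norm(0, n, m) for m in range(max_level + 1)]
    # For small examples the first agreement is the stabilization point j(x).
    j_x = next(m for m in range(max_level) if values[m] == values[m + 1])
    return values[j_x], j_x

print(tsirelson([1] * 8))   # (2.0, 1): ||e_1 + ... + e_8||_T = 2, j(x) = 1
```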
We denote such minimal \(M\) as \(j(x)\). Now, for a positive integer \(n\), we define \(j(n)\) to be the maximum value of \(j(x)\) over all \(x\in c_{00}\) such that \(x_{n+1}=x_{n+2}=\cdots=0\). The function \(j(n)\) measures the complexity of computing Tsirelson's norm for finite vectors. In the game interpretation, \(j(n)\) is the length of the longest optimal strategy for an input of length \(n\). This concept was introduced and initially studied in 1989 by Casazza and Shura [11]. They proved that \(j(n)\in\mathcal{O}(n)\), and asked for the exact order of magnitude of \(j(n)\). In 2017, Beanland, Duncan, Holt, and Quigley [7, Theorem 3.17] provided the first non-trivial lower bound, namely, they showed that \(j(n)\in\Omega(\log n)\). One year later, Beanland, Duncan, and Holt [6] proved that \(j(n)\in\mathcal{O}(\sqrt{n})\). In this paper, we finally resolve the question of Casazza and Shura, by proving that \(j(n)\in\Omega(\sqrt{n})\) (see Corollary 19). Combining the two results, we obtain the following. **Theorem 1**.: _For every positive integer \(n\) we have_ \[\sqrt{2n}-3\leqslant j(n)\leqslant 2\sqrt{n}+4.\] In the remaining part of the introduction, we discuss some natural modifications of Tsirelson's norm. From a modern point of view, the choice of \(\frac{1}{2}\) and \(\mathcal{S}_{1}\) in the definition of Tsirelson's norm may seem a little bit artificial. Indeed, one can replace \(\frac{1}{2}\) by any real number \(0<\theta<1\) and \(\mathcal{S}_{1}\) by any regular family \(\mathcal{F}\) to obtain a norm \(\|\cdot\|_{T[\theta,\mathcal{F}]}\), and in turn a Banach space \(T[\theta,\mathcal{F}]\) - see e.g. [2, Chapter 1] or [3, Chapter 3]. The generalized Tsirelson's spaces have many interesting properties and they are connected to various branches of mathematics, e.g. to logic [9]. The notion of regular families is a natural abstraction of the crucial properties of the Schreier family - see the next section for the definition. For the generalized version of the norm, one can still define the function \(j_{\theta,\mathcal{F}}\) in an analogous way, and again ask for the order of magnitude. In this paper, we stick to \(\theta=\frac{1}{2}\), and we will consider various examples of regular families \(\mathcal{F}\). Let us define them now. First, for every increasing and superadditive function \(\varphi\) on positive integers, we define \[\mathcal{S}_{\varphi}:=\{F\subseteq\mathbb{N}:|F|\leqslant\varphi(\min F)\}.\] Note that \(\mathcal{S}_{\mathrm{id}}=\mathcal{S}_{1}\). This is a very natural generalization of the Schreier family. Some properties for some particular cases of \(\varphi\) were studied in [5]. It is also worth noting that the collection of Banach spaces \(T[\frac{1}{2},\mathcal{S}_{\varphi}]\) gained the attention of the research community in connection with the meta-problem of so-called explicitly defined Banach spaces [13, 15, 10]. Let \(k\) be a positive integer; we consider the family consisting of unions of at most \(k\) Schreier sets, namely, we define \[k\mathcal{S}_{1}:=\{F\subseteq\mathbb{N}:\exists_{E_{1},\ldots,E_{k}\in\mathcal{S}_{1}}F=\bigcup_{i=1}^{k}E_{i}\}.\] Some properties of such families were studied in [8]. Moreover, \(k\mathcal{S}_{1}\) can be seen as the so-called convolution of the family \(\mathcal{S}_{1}\) with the family consisting of all sets with at most \(k\) elements. By convoluting regular families, one can produce many Banach spaces with interesting properties - see [1] or [3, Chapter 2]. 
We consider two more examples of regular families constructed in a similar spirit. The first one, denoted by \(\mathcal{S}_{2}\), is the convolution of the Schreier family with itself, and the second one is the convolution of the Schreier family with \(\mathcal{S}_{2}\). We have \[\mathcal{S}_{2}:=\{F\subseteq\mathbb{N}:\exists_{E_{1},\ldots,E_{\ell}\in\mathcal{S}_{1}}F=\bigcup_{i=1}^{\ell}E_{i},\{\min E_{i}:i\in[\ell]\}\in\mathcal{S}_{1}\},\] \[\mathcal{S}_{3}:=\{F\subseteq\mathbb{N}:\exists_{E_{1},\ldots,E_{\ell}\in\mathcal{S}_{2}}F=\bigcup_{i=1}^{\ell}E_{i},\{\min E_{i}:i\in[\ell]\}\in\mathcal{S}_{1}\}.\] For each of the above families, we give the exact order of magnitude for the function \(j_{\mathcal{F}}(n)\). Note that in some cases we decided to present less technical proofs instead of obtaining better constants. **Theorem 2**.: _Let \(\varphi\) be an increasing and superadditive function on positive integers. For each positive integer \(n\), let \(\varphi_{\Sigma}^{-1}(n)=\min\{\ell\in\mathbb{Z}:n\leqslant\sum_{i=1}^{\ell}\varphi(i)\}.\) We have_ \[j_{\mathcal{S}_{\varphi}}(n)\in\Theta(\varphi_{\Sigma}^{-1}(n)).\] _For all positive integers \(n,k\), let \(p_{k}(n)=n^{k}\) and \(e_{k}(n)=k^{n}\); then_ \[j_{\mathcal{S}_{p_{k}}}(n)\in\Theta(n^{\frac{1}{k+1}})\ \ \mathrm{and}\ \ j_{\mathcal{S}_{e_{k}}}(n)\in\Theta(\log_{k}n).\] _For a fixed positive integer \(k\) with \(k\geqslant 2\), we have_ \[j_{k\mathcal{S}_{1}}(n)\in\Theta(\log n).\] _If \(k\) is not fixed, then_ \[j_{k\mathcal{S}_{1}}(n)\in\Theta(\frac{1}{k}\log n).\] _Last but not least, we have_ \[j_{\mathcal{S}_{2}}(n)\in\Theta(\sqrt{\log n})\ \ \mathrm{and}\ \ j_{\mathcal{S}_{3}}(n)\in\Theta(\sqrt{\log^{*}n}).\] For the convenience of the reader, we refer to the proof of each of the bounds in the following table. \begin{tabular}{c|c|c} & Lower bounds (Section 5) & Upper bounds (Section 7) \\ \hline \(\mathcal{S}_{1}\) & Corollary 19 & [6] or Theorem 34 \\ \hline \(\mathcal{S}_{\varphi}\) & Corollary 18 & Theorem 34 \\ \hline \(\mathcal{S}_{p_{k}},\mathcal{S}_{e_{k}}\) & direct computation & direct computation \\ \hline \(k\mathcal{S}_{1}\) & Corollary 20 & Theorem 35 \\ \hline \(\mathcal{S}_{2}\) & Corollary 21 & Theorem 36 \\ \hline \(\mathcal{S}_{3}\) & Corollary 22 & Theorem 37 \\ \end{tabular} By \(\log^{*}n\) we mean the iterated logarithm of \(n\), that is, how many times we have to take the logarithm of \(n\) until we reach a number below \(1\). This function emerges often in computer science, and in particular in the field of analyzing the complexity of algorithms. For example, the average time complexity of a query in the classical Union-Find data structure is of order \(\log^{*}n\). The iterated logarithm is an extremely slow-growing function, e.g. \(\log^{*}(2^{65536})=5\). Observe that one can define the families \(\mathcal{S}_{4},\mathcal{S}_{5},\dots\) analogously to \(\mathcal{S}_{2}\) and \(\mathcal{S}_{3}\). It is clear that the functions \(j_{\mathcal{S}_{k}}\) for \(k\geqslant 4\) are even slower growing than \(\log^{*}\). This indicates that these functions do not even have natural names, therefore, to avoid unnecessary technicalities, we decided not to consider these families. However, we believe that our tools are sufficient to compute the order of magnitude of the functions \(j_{\mathcal{S}_{k}}\) for any positive integer \(k\). The paper is organized as follows. In the next section, we set up the required notation. 
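To make the quantity \(\varphi_{\Sigma}^{-1}\) in Theorem 2 concrete, here is a small illustrative Python sketch (ours, not from the paper) that evaluates it directly from the definition; the printed values match the advertised orders \(\sqrt{2n}\) for \(\varphi=\mathrm{id}\) and \(\Theta(n^{1/3})\) for \(\varphi(i)=i^{2}\).

```python
# phi_Sigma^{-1}(n): the least l with n <= phi(1) + ... + phi(l).
def phi_sigma_inv(phi, n):
    total, l = 0, 0
    while total < n:
        l += 1
        total += phi(l)
    return l

# For phi = id (the Schreier family S_1) this is ~ sqrt(2n), cf. Theorem 1.
print(phi_sigma_inv(lambda i: i, 10**6))        # 1414 ~ sqrt(2 * 10^6)
print(phi_sigma_inv(lambda i: i**2, 10**6))     # 144 ~ (3 * 10^6)^(1/3)
```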
In Section 3, we discuss special members in regular families called _full_ sets that are very useful in proving our results. In Section 4, we introduce some abstract tools for the lower bounds, and in Section 5, we use the tools to establish lower bounds on the function \(j_{\mathcal{F}}(n)\) in the case of \(\mathcal{F}\) being one of the families that we are interested in. Next, in Section 6, we introduce abstract tools for the upper bounds, and in Section 7, we use the tools to establish upper bounds on the function \(j_{\mathcal{F}}(n)\) in the case of \(\mathcal{F}\) being one of the families that we are interested in. Finally, in Section 8, we discuss some related open problems. ## 2. Preliminaries Let \(\mathbb{N}\) be the set of all positive integers, and let \(\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\). For two integers \(a,b\) with \(a\leqslant b\) we write \([a,b]\) to denote the set \(\{a,a+1,\dots,b\}\); if \(a>b\), then \([a,b]:=\emptyset\), and \([0]:=\emptyset\). For a positive integer \(a\), we abbreviate \([a]:=[1,a]\). For any \(E,F\subseteq\mathbb{N}\) the expression \(E<F\) is a short form of writing that \(\max E<\min F\), and similarly for \(\leqslant,>,\geqslant\). An inequality between \(E\) and some \(a\in\mathbb{N}\) should be understood as an inequality between \(E\) and \(\{a\}\). For every \(E\subseteq\mathbb{N}\) and for all distinct \(a,b\in E\) we say that \(a,b\) are _consecutive in \(E\)_ if \([a+1,b-1]\cap E=\emptyset\). When we omit the base of a logarithm, we mean base \(2\). Let \(\tau\) be the power tower function, that is, for every real number \(r\), we set \(\tau(0,r)=r\), and for every \(i\in\mathbb{N}\) we set \(\tau(i,r)=2^{\tau(i-1,r)}\). Let \(\log^{*}\) be the iterated logarithm, that is, for every real number \(r\), we have \(\log^{*}r=\min\{i\in\mathbb{N}_{0}:r\leqslant\tau(i,1)\}\). For a vector \(x\in\mathbb{R}^{\mathbb{N}}\) we write \(\operatorname{supp}x:=\{i\in\mathbb{N}:x_{i}\neq 0\}\). We write \(c_{00}\) for all the vectors \(x\in\mathbb{R}^{\mathbb{N}}\) such that \(|\operatorname{supp}x|<\infty\). We write \(c_{00}^{+}\) for all nonzero vectors \(x\in c_{00}\) such that \(x_{i}\geqslant 0\) for all \(i\in\mathbb{N}\). For all \(E\subseteq\mathbb{N}\) and \(x\in c_{00}\) we write \(Ex\) for the projection of \(x\) onto \(E\), that is, \((Ex)_{i}=x_{i}\) whenever \(i\in E\) and \((Ex)_{i}=0\) otherwise. For each \(i\in\mathbb{N}\) we define \(e_{i}\in c_{00}^{+}\) to be the vector with \((e_{i})_{i}=1\) and \((e_{i})_{j}=0\) for each \(j\in\mathbb{N}\backslash\{i\}\). For a linear functional \(f:\mathbb{R}^{\mathbb{N}}\to\mathbb{R}\) we write \(\operatorname{supp}f:=\{i\in\mathbb{N}:f(e_{i})\neq 0\}\). For each \(i\in\mathbb{N}\), the functional \(e_{i}^{*}:\mathbb{R}^{\mathbb{N}}\to\mathbb{R}\) is such that for each \(x\in c_{00}\) we have \(e_{i}^{*}(x)=x_{i}\). Let \(\mathcal{F}\) be a family of finite subsets of \(\mathbb{N}\). We say that \(\mathcal{F}\) is _hereditary_ if for every \(F\in\mathcal{F}\) and \(G\subseteq F\) we have \(G\in\mathcal{F}\). We say that \(\mathcal{F}\) is _spreading_ if for every \(n\in\mathbb{N}\) and for all \(\ell_{1},\dots,\ell_{n},k_{1},\dots,k_{n}\in\mathbb{N}\) such that \(\ell_{i}\leqslant k_{i}\) for all \(i\in[n]\), and \(\{\ell_{1},\dots,\ell_{n}\}\in\mathcal{F}\) we have \(\{k_{1},\dots,k_{n}\}\in\mathcal{F}\). We say that \(\mathcal{F}\) is _compact_ if it is compact as a subset of \(\{0,1\}^{\mathbb{N}}\) with the product topology under the natural identification. 
Finally, we say that \(\mathcal{F}\) is _regular_ if it is hereditary, spreading, and compact. Perhaps the most prominent example of a regular family is the Schreier family \(\mathcal{S}_{1}\) defined in the introduction. It is quite straightforward to check that all the families defined in the introduction are regular (\(\mathcal{S}_{\varphi}\), \(k\mathcal{S}_{1}\), \(\mathcal{S}_{2}\), and \(\mathcal{S}_{3}\)). Next, we proceed with introducing Tsirelson's norm. Fix a regular family \(\mathcal{F}\). We define \[W_{0}(\mathcal{F}):=\{e_{i}^{*}:i\in\mathbb{N}\}\cup\{-e_{i}^{*}:i\in\mathbb{N}\}.\] For each \(m\in\mathbb{N}_{0}\) we define \[W_{m+1}(\mathcal{F}):=W_{m}(\mathcal{F})\cup\] \[\left\{\frac{1}{2}\sum_{i=1}^{d}f_{i}:\ \ f_{i}\in W_{m}(\mathcal{F}),\ \ \{\min\operatorname{supp}f_{i}:i\in[d]\}\in\mathcal{F},\ \ \operatorname{supp}f_{1}<\dots<\operatorname{supp}f_{d}\right\}.\] We define the set of _norming functionals for \(\mathcal{F}\)_, \[W(\mathcal{F}):=\bigcup_{m=0}^{\infty}W_{m}(\mathcal{F}).\] As the name suggests, for all \(x\in c_{00}\) and \(m\in\mathbb{N}_{0}\) we define \[\|x\|_{\mathcal{F},m}:=\sup\{f(x):f\in W_{m}(\mathcal{F})\}.\] And finally, for every \(x\in c_{00}\) we define the \(\mathcal{F}\)-Tsirelson's norm \[\|x\|_{\mathcal{F}}:=\sup\{\|x\|_{\mathcal{F},m}:m\in\mathbb{N}_{0}\}.\] It is not hard to observe that the norm \(\|\cdot\|_{\mathcal{S}_{1}}\) coincides with the norm \(\|\cdot\|_{T}\) defined in the introduction (see e.g. [2, Chapter 1]). The definition introduced in Section 1 is the classical definition, whereas the definition above is much handier to work with. For every \(f\in W(\mathcal{F})\) the _depth of \(f\) with respect to the family \(\mathcal{F}\)_ is \[\operatorname{depth}_{\mathcal{F}}(f):=\min\{m\in\mathbb{N}_{0}:f\in W_{m}(\mathcal{F})\}.\] Let \(f\in W(\mathcal{F})\) and suppose that \(\operatorname{depth}_{\mathcal{F}}(f)=m+1\) for some \(m\in\mathbb{N}_{0}\). By definition, there exist \(f_{1},\dots,f_{d}\in W_{m}(\mathcal{F})\) such that \(f=\frac{1}{2}\sum_{i=1}^{d}f_{i}\), the set \(\{\min\operatorname{supp}f_{i}:i\in[d]\}\) is in \(\mathcal{F}\), and \(\operatorname{supp}f_{1}<\dots<\operatorname{supp}f_{d}\). Note that in general, \(f_{1},\dots,f_{d}\) are not uniquely determined. We say that \(f_{1},\dots,f_{d}\) are \(\mathcal{F}\)_-building for \(f\)_. For every \(x\in c_{00}\) we define \[j_{\mathcal{F}}(x):=\min\{\operatorname{depth}_{\mathcal{F}}(f):\ \ f\in W(\mathcal{F}),\ \ f(x)=\|x\|_{\mathcal{F}}\}.\] For all \(a,b\in\mathbb{N}\) with \(a\leqslant b\) we define \[j_{\mathcal{F}}(a,b):=\max\{j_{\mathcal{F}}(x):\ \ x\in c_{00},\ \operatorname{supp}x\subseteq[a,b]\}.\] It is not difficult to see that in the above definition, \(c_{00}\) can be replaced with \(c_{00}^{+}\) without changing any value of \(j_{\mathcal{F}}(a,b)\). We will use this fact implicitly sometimes. Finally, for each \(n\in\mathbb{N}\) we define \[j_{\mathcal{F}}(n):=j_{\mathcal{F}}(1,n).\] We will need the following simple observation on the behavior of the function \(j_{\mathcal{F}}\). **Observation 3**.: _Let \(\mathcal{F}\) be a regular family, and let \(a,b,c,d\in\mathbb{N}\) such that \([a,b]\subseteq[c,d]\). We have \(j_{\mathcal{F}}(a,b)\leqslant j_{\mathcal{F}}(c,d)\)._ Proof.: Let \(x\in c_{00}\) with \(\operatorname{supp}x\subseteq[a,b]\), and such that \(j_{\mathcal{F}}(a,b)=j_{\mathcal{F}}(x)\). Then, \(\operatorname{supp}x\subseteq[c,d]\), thus \(j_{\mathcal{F}}(a,b)=j_{\mathcal{F}}(x)\leqslant j_{\mathcal{F}}(c,d)\). ## 3. Full sets in regular families
## 3. Full sets in regular families

Given a positive integer, e.g. \(a=10\), and a regular family \(\mathcal{F}\), starting with \(\{a\}\), one can greedily add consecutive integers to the set to determine the threshold after which the set is no longer a member of \(\mathcal{F}\). Say that \(\mathcal{F}=\mathcal{S}_{1}\). It is clear that \(\{10,11,12,\dots,19\}\) is still in \(\mathcal{S}_{1}\); however, \(\{10,11,12,\dots,19,20\}\) is not. On the other hand, if \(\mathcal{F}=2\mathcal{S}_{1}\), then not only is \(\{10,11,12,\dots,19,20\}\) in \(2\mathcal{S}_{1}\), but even \(\{10,11,12,\dots,19,20,21,\dots,38,39\}\) is in \(2\mathcal{S}_{1}\). The threshold varies a lot among regular families. We find it useful to formalize this notion as follows. For each regular family \(\mathcal{F}\), and for each \(a\in\mathbb{N}\), let \[\operatorname{range}_{\mathcal{F}}(a):=\max\{m\in\mathbb{N}:\ [a,m-1]\in\mathcal{F}\}.\] First, note the following trivial observation. **Observation 4**.: _Let \(F\subseteq\mathbb{N}\). For every regular family \(\mathcal{F}\), if \(\max F<\operatorname{range}_{\mathcal{F}}(\min F)\), then \(F\in\mathcal{F}\)._ Next, observe that for some of the families that we consider it is very easy to compute the value of \(\operatorname{range}_{\mathcal{F}}\). **Observation 5**.: _For every integer \(k\) with \(k\geqslant 2\), for each superadditive and increasing function \(\varphi:\mathbb{N}\to\mathbb{N}\), and for each \(a\in\mathbb{N}\), we have_ \[\operatorname{range}_{\mathcal{S}_{\varphi}}(a)=a+\varphi(a),\ \operatorname{range}_{k\mathcal{S}_{1}}(a)=2^{k}a,\ \operatorname{range}_{\mathcal{S}_{2}}(a)=2^{a}a.\] We do not attach the proof of this observation as it is straightforward; however, we encourage the reader to verify the above values for a better understanding of the structure of the families. Sometimes, we will use this observation implicitly. The formula for \(\operatorname{range}_{\mathcal{S}_{3}}\) is not as clean, although, using the simple inequality \(2^{a}\leqslant\operatorname{range}_{\mathcal{S}_{2}}(a)\leqslant 2^{2a}\), we obtain a useful estimate. **Observation 6**.: _For every \(a\in\mathbb{N}\), we have_ \[\tau(a,a)\leqslant\operatorname{range}_{\mathcal{S}_{3}}(a)\leqslant\tau(a,3a).\] In our considerations, we will be particularly interested in the sets that are maximal in a given regular family \(\mathcal{F}\). We call such sets \(\mathcal{F}\)_-full_. The main reason why such sets are interesting is the fact that they have to be sufficiently large. For our arguments in the next sections, we also need some more technical notions concerning full sets. For each regular family \(\mathcal{F}\), for each family \(\mathcal{G}\) of subsets of \(\mathbb{N}\), and for all \(a,b\in\mathbb{N}\) we define: \[\operatorname{full}(\mathcal{F}) :=\{F\in\mathcal{F}:\;\;n\in\mathbb{N}\backslash F\Longrightarrow F\cup\{n\}\notin\mathcal{F}\},\] \[[a,b]\mathcal{G} :=\{F\in\mathcal{G}:\;\;F\subseteq[a,b],\;\;a\in F\},\] \[\operatorname{full}_{a,b}(\mathcal{F}) :=\{F\in[a,b]\mathcal{F}:\;\;n\in[a,b]\backslash F\Longrightarrow F\cup\{n\}\notin\mathcal{F}\}.\] In the special case of \(\mathcal{F}=\mathcal{S}_{1}\), we write that \(F\subseteq\mathbb{N}\) is a _full Schreier set_ if \(F\in\operatorname{full}(\mathcal{S}_{1})\). As mentioned, the main feature of full sets is the fact that they are reasonably large. One can verify the following two observations.
**Observation 7**.: _Let \(a,b,s\in\mathbb{N}\), and let \(F_{1},\ldots,F_{s}\) be full Schreier sets such that \(F_{1}<\cdots<F_{s}\). If \(F_{1}\cup\cdots\cup F_{s}\subseteq[a,b]\), then \(b\geqslant 2^{s}a\)._ **Observation 8**.: _Let \(a,b,s\in\mathbb{N}\), and let \(F_{1},\ldots,F_{s}\in\operatorname{full}(\mathcal{S}_{2})\) be such that \(F_{1}<\cdots<F_{s}\). If \(F_{1}\cup\cdots\cup F_{s}\subseteq[a,b]\), then \(b\geqslant\tau(s,a)a\)._ Observation 8 implies that disjoint \(\mathcal{S}_{2}\)-full sets need a lot of space. Now, we want to argue that, given reasonably large space, we can fit many disjoint \(\mathcal{S}_{2}\)-full sets. Note that in the case of full Schreier sets an analogous computation is straightforward. The lemma below requires some technical computation. **Lemma 9**.: _Let \(a\in\mathbb{N}\). For every \(s\in\mathbb{N}\), there exist \(F_{1},\ldots,F_{s}\in\operatorname{full}(\mathcal{S}_{2})\) with \(F_{1}<\cdots<F_{s}\) such that \(F_{i}\subseteq[a,\tau(s,2a+s-1)-1]\) for each \(i\in[s]\)._ Proof.: First, we claim that for every \(m\in\mathbb{N}\), the interval \([m,\tau(1,2m)-1]\) contains an \(\mathcal{S}_{2}\)-full set. Indeed, \([m,2^{m}m-1]\in\operatorname{full}(\mathcal{S}_{2})\) and \(2^{m}m\leqslant 2^{2m}=\tau(1,2m)\). We proceed by induction on \(s\). If \(s=1\), then we use the above claim directly for \(m=a\). Assume that \(s>1\) and that the assertion holds for \(s-1\), namely, the interval \([a,\tau(s-1,2a+s-2)-1]\) contains some \(F_{1},\ldots,F_{s-1}\in\operatorname{full}(\mathcal{S}_{2})\) with \(F_{1}<\cdots<F_{s-1}\). By the initial claim applied to \(m=\tau(s-1,2a+s-2)\), we obtain that \([\tau(s-1,2a+s-2),\tau(1,2\tau(s-1,2a+s-2))-1]\) contains an \(\mathcal{S}_{2}\)-full set. Note that \[\tau(1,2\tau(s-1,2a+s-2))\leqslant\tau(1,\tau(s-1,2a+s-1))=\tau(s,2a+s-1).\] By taking the \(\mathcal{S}_{2}\)-full set in this interval as \(F_{s}\), we finish the proof. Let us comment a little on the differences between \(\operatorname{full}(\mathcal{F})\) and \(\operatorname{full}_{a,b}(\mathcal{F})\). By definition, \([a,b]\operatorname{full}(\mathcal{F})\subseteq\operatorname{full}_{a,b}(\mathcal{F})\). In general, the inclusion can be strict. The simplest way to see this is to take \(a,b\) with \(|b-a|\) small, and the set \([a,b]\). For instance, \[\{7,8,9\}\in\operatorname{full}_{7,9}(\mathcal{S}_{1})\;\;\;\text{and}\;\;\;\{7,8,9\}\notin\operatorname{full}(\mathcal{S}_{1}).\] For the families \(\mathcal{S}_{\varphi}\), one can prove that all sets in \(\operatorname{full}_{a,b}(\mathcal{S}_{\varphi})\backslash[a,b]\operatorname{full}(\mathcal{S}_{\varphi})\) are of this type (see Lemma 10 below); however, this is not always the case for other families. For instance, \[\{2,3,5,6,7,8\}\in\mathrm{full}_{2,8}(2\mathcal{S}_{1})\ \ \ \text{and}\ \ \ \{2,3,5,6,7,8\}\notin\mathrm{full}(2\mathcal{S}_{1}).\]
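Both "for instance" claims above can be checked mechanically. The following Python sketch is illustrative only; it tests \(k\mathcal{S}_{1}\)-membership via the greedy block decomposition formalised just below (the operators \(E_{i}\)), and it probes fullness over a finite window, which suffices for the families used here since every \(n>\max F+1\) behaves exactly like \(\max F+1\). The helper names are ours.

```python
def blocks(F):
    """Greedy Schreier blocks: repeatedly strip the first min-many elements."""
    F, out = sorted(F), []
    while F:
        out.append(F[:F[0]])
        F = F[F[0]:]
    return out

def in_kS1(F, k):                 # F is a union of at most k Schreier sets
    return len(blocks(F)) <= k

def is_full(F, member):
    """Membership in full(F), probing only n <= max F + 1 (see lead-in)."""
    F = set(F)
    return member(F) and all(not member(F | {n})
                             for n in range(1, max(F) + 2) if n not in F)

def is_full_ab(F, a, b, member):
    """Membership in full_{a,b}(F) as defined above."""
    F = set(F)
    return (min(F) == a and max(F) <= b and member(F) and
            all(not member(F | {n}) for n in range(a, b + 1) if n not in F))

in_S1 = lambda F: in_kS1(F, 1)
in_2S1 = lambda F: in_kS1(F, 2)
assert is_full_ab({7, 8, 9}, 7, 9, in_S1) and not is_full({7, 8, 9}, in_S1)
assert is_full_ab({2, 3, 5, 6, 7, 8}, 2, 8, in_2S1)
assert not is_full({2, 3, 5, 6, 7, 8}, in_2S1)
```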
**Lemma 10**.: _Let \(\varphi:\mathbb{N}\to\mathbb{N}\) be an increasing and superadditive function. Let \(a,b\in\mathbb{N}\) be such that \([a,b]\notin\mathcal{S}_{\varphi}\) and let \(F\subseteq\mathbb{N}\). If \(F\in\mathrm{full}_{a,b}(\mathcal{S}_{\varphi})\), then \(F\in[a,b]\mathrm{full}(\mathcal{S}_{\varphi})\). In particular, \(\mathrm{full}_{a,b}(\mathcal{S}_{\varphi})=[a,b]\mathrm{full}(\mathcal{S}_{\varphi})\)._ Proof.: Let \(F\in\mathrm{full}_{a,b}(\mathcal{S}_{\varphi})\), and suppose that \(F\notin\mathrm{full}(\mathcal{S}_{\varphi})\), that is, \(|F|<\varphi(\min F)=\varphi(a)\). Since \([a,b]\notin\mathcal{S}_{\varphi}\), there exists \(m\in[a,b]\backslash F\). It follows that \(F\cup\{m\}\in[a,b]\,\mathcal{S}_{\varphi}\), which is a contradiction. The sets in \(k\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) can be seen as unions of some number of Schreier sets. In general, the constituents of the union are not uniquely determined; however, one can make them unique by the simple greedy process described below. For every \(F\subseteq\mathbb{N}\), and for every \(i\in\mathbb{N}\), we define \(E_{i}(F)\) with the following inductive procedure. Let \(F\subseteq\mathbb{N}\). First, if \(F=\emptyset\), then we set \(E_{1}(F):=\emptyset\). Otherwise, we set \(E_{1}(F)\) to be \(F\) if \(|F|\leqslant\min F\), and to be the first \(\min F\) elements of \(F\) if \(|F|>\min F\). Now, let \(i\in\mathbb{N}\), and assume that \(E_{1}(F),\ldots,E_{i}(F)\) are already defined. We set \(E_{i+1}(F):=E_{1}\big(F\backslash(E_{1}(F)\cup\dots\cup E_{i}(F))\big)\). For instance, for \(F=[10]\), we have \[E_{1}(F)=\{1\},\;E_{2}(F)=\{2,3\},\;E_{3}(F)=\{4,5,6,7\},\;E_{4}(F)=\{8,9,10\},\] \[\mathrm{and}\;E_{5}(F)=E_{6}(F)=\cdots=\emptyset.\] Let \(F\subseteq\mathbb{N}\). Note that if \(E_{i}(F)=\emptyset\) for some \(i\in\mathbb{N}\), then \(E_{i+1}(F)=E_{i+2}(F)=\cdots=\emptyset\). Using the operators \(E_{i}\) one can characterize sets in the families \(k\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\). Observe that \(F\in\mathcal{S}_{1}\) if and only if \(E_{2}(F)=\emptyset\), next, for every \(k\in\mathbb{N}\), we have \(F\in k\mathcal{S}_{1}\) if and only if \(E_{k+1}(F)=\emptyset\), and finally, \(F\in\mathcal{S}_{2}\) if and only if \(E_{\min F+1}(F)=\emptyset\). Now, analogously to Lemma 10, we study the sets in \(\mathrm{full}_{a,b}(\mathcal{F})\) assuming that \([a,b]\notin\mathcal{F}\), where \(\mathcal{F}\) is either \(k\mathcal{S}_{1}\) or \(\mathcal{S}_{2}\). Intuitively, we prove that such full sets are large. **Lemma 11**.: _Let \(k\) be a positive integer with \(k\geqslant 2\), let \(a,b\in\mathbb{N}\) be such that \([a,b]\notin k\mathcal{S}_{1}\), and let \(F\subseteq\mathbb{N}\). If \(F\in\mathrm{full}_{a,b}(k\mathcal{S}_{1})\), then \(E_{1}(F),\ldots,E_{k-1}(F)\) are full Schreier sets._ Proof.: Let \(i\) be the least positive integer such that \(E_{i}(F)\) is not a full Schreier set. If there exists \(m\in[\min E_{i}(F),b]\backslash F\), then \(F\cup\{m\}\in[a,b](k\mathcal{S}_{1})\), which is a contradiction; hence, \([\min E_{i}(F),b]\subseteq F\). It follows that \(E_{i+1}(F)=\emptyset\). Suppose that \(i<k\). We have \([a,b]\notin k\mathcal{S}_{1}\), thus, there exists \(m\in[a,b]\backslash F\). Observe that \(F\cup\{m\}\in[a,b](k\mathcal{S}_{1})\), which is again a contradiction. Therefore, \(i=k\), which ends the proof. By repeating exactly the same proof, we obtain a similar result for \(\mathcal{S}_{2}\). **Lemma 12**.: _Let \(a,b\in\mathbb{N}\) be such that \([a,b]\notin\mathcal{S}_{2}\) and let \(F\subseteq\mathbb{N}\). If \(F\in\mathrm{full}_{a,b}(\mathcal{S}_{2})\), then \(E_{1}(F),\ldots,E_{a-1}(F)\) are full Schreier sets._ Proof.: Let \(i\) be the least positive integer such that \(E_{i}(F)\) is not a full Schreier set. If there exists \(m\in[\min E_{i}(F),b]\backslash F\), then \(F\cup\{m\}\in[a,b]\mathcal{S}_{2}\), which is a contradiction; hence, \([\min E_{i}(F),b]\subseteq F\). It follows that \(E_{i+1}(F)=\emptyset\). Suppose that \(i<a\). We have \([a,b]\notin\mathcal{S}_{2}\), thus, there exists \(m\in[a,b]\backslash F\). Observe that \(F\cup\{m\}\in[a,b]\mathcal{S}_{2}\), which is again a contradiction. Therefore, \(i=a\), which ends the proof.
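The greedy operators \(E_{i}\) and the membership characterisations above are easy to probe computationally. Here is a small illustrative Python sketch (the helper names `E_blocks`, `in_kS1`, `in_S2`, and `range_of` are ours); it reproduces the worked example \(F=[10]\) and verifies the three formulas of Observation 5 for small \(a\).

```python
def E_blocks(F):
    """E_1(F), E_2(F), ...: repeatedly strip the first min-many elements."""
    F, out = sorted(F), []
    while F:
        out.append(F[:F[0]])
        F = F[F[0]:]
    return out

# the worked example F = [10] from the text:
assert E_blocks(range(1, 11)) == [[1], [2, 3], [4, 5, 6, 7], [8, 9, 10]]

# the membership characterisations stated above:
def in_kS1(F, k):                       # F in kS_1 iff E_{k+1}(F) is empty
    return len(E_blocks(F)) <= k

def in_S2(F):                           # F in S_2 iff E_{min F + 1}(F) is empty
    F = sorted(F)
    return not F or len(E_blocks(F)) <= F[0]

def range_of(a, member):
    """range_F(a) = max{m : [a, m-1] in F}, found by greedy extension."""
    m = a                               # [a, a-1] is empty, hence in F
    while member(range(a, m + 1)):      # does [a, m] still lie in F?
        m += 1
    return m

# Observation 5: range_{S_1}(a) = 2a, range_{kS_1}(a) = 2^k a, range_{S_2}(a) = 2^a a
for a in range(1, 6):
    assert range_of(a, lambda I: in_kS1(I, 1)) == 2 * a
    assert range_of(a, lambda I: in_kS1(I, 3)) == 2 ** 3 * a
    assert range_of(a, in_S2) == 2 ** a * a
```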
Intuitively, the last set (that is, \(E_{k}(F)\) or \(E_{a}(F)\)) is usually also quite large. As we do not care much about the constants in this paper, we do not investigate this in detail in general. However, such an investigation is necessary for the later applications in the case of \(2\mathcal{S}_{1}\). **Lemma 13**.: _Let \(a,b\in\mathbb{N}\) be such that \([a,b]\notin 2\mathcal{S}_{1}\), and let \(F\subseteq\mathbb{N}\). If \(F\in\operatorname{full}_{a,b}(2\mathcal{S}_{1})\), then \(E_{1}(F)\) is a full Schreier set and_ \[\min E_{2}(F)\leqslant\frac{b}{2}+2.\] Proof.: The first part follows from Lemma 11. Suppose that the second part does not hold, that is, \[\min E_{2}(F)\geqslant\frac{b}{2}+3.\] Since \(F\in\operatorname{full}_{a,b}(2\mathcal{S}_{1})\), we have \(E_{2}(F)=[\min E_{2}(F),b]\). Note that by rearranging the above, we have \[\min E_{2}(F)-2\geqslant b-\min E_{2}(F)+3.\] This yields \([\min E_{2}(F)-2,b]\in\mathcal{S}_{1}\). We claim that there exists \(F^{\prime}\in 2\mathcal{S}_{1}\) such that \(F\subsetneq F^{\prime}\subseteq[a,b]\). If \(\min E_{2}(F)-1\notin E_{1}(F)\), then \(F^{\prime}:=E_{1}(F)\cup\{\min E_{2}(F)-1\}\cup E_{2}(F)\) is a proper choice. Hence, we assume that \(\min E_{2}(F)-1\in E_{1}(F)\). Suppose that \(E_{1}(F)\) is an interval. Then, we set \(F^{\prime}:=F^{\prime}_{1}\cup F^{\prime}_{2}\), where \(F^{\prime}_{1}:=[\min E_{1}(F)-1,\min E_{2}(F)-3]\) and \(F^{\prime}_{2}:=[\min E_{2}(F)-2,b]\) (note that \(\min E_{1}(F)-1\in[a,b]\) because \([a,b]\notin 2\mathcal{S}_{1}\)). Finally, we assume that \(E_{1}(F)\) is not an interval. Let \(m\in[\min E_{1}(F),\max E_{1}(F)]\backslash E_{1}(F)\). We set \(F^{\prime}:=F^{\prime}_{1}\cup F^{\prime}_{2}\), where \(F^{\prime}_{1}:=(E_{1}(F)\cup\{m\})\backslash\{\min E_{2}(F)-1\}\) and \(F^{\prime}_{2}:=[\min E_{2}(F)-1,b]\). This proves the claim, namely, there exists \(F^{\prime}\in 2\mathcal{S}_{1}\) such that \(F\subsetneq F^{\prime}\subseteq[a,b]\), which contradicts \(F\in\operatorname{full}_{a,b}(2\mathcal{S}_{1})\). The operators \(E_{i}\) are useful to describe sets in the families \(k\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\). In order to describe sets in the family \(\mathcal{S}_{3}\), we need analogous operators extracting subsequent \(\mathcal{S}_{2}\)-full subsets. For every \(F\subseteq\mathbb{N}\), and for every \(i\in\mathbb{N}\), we define \(E_{i}^{*}(F)\) with the following inductive procedure. Let \(F\subseteq\mathbb{N}\). First, if \(F=\emptyset\), then we set \(E_{1}^{*}(F):=\emptyset\). Otherwise, we set \(E_{1}^{*}(F)\) to be \(F\) if \(F\in\mathcal{S}_{2}\), and to be \(E_{1}(F)\cup\dots\cup E_{\min F}(F)\) if \(F\notin\mathcal{S}_{2}\). Now, let \(i\in\mathbb{N}\), and assume that \(E_{1}^{*}(F),\dots,E_{i}^{*}(F)\) are already defined. We set \(E_{i+1}^{*}(F):=E_{1}^{*}\big(F\backslash(E_{1}^{*}(F)\cup\dots\cup E_{i}^{*}(F))\big)\). **Lemma 14**.: _Let \(a,b\in\mathbb{N}\) be such that \([a,b]\notin\mathcal{S}_{3}\), and let \(F\subseteq\mathbb{N}\). If \(F\in\operatorname{full}_{a,b}(\mathcal{S}_{3})\), then \(E_{1}^{*}(F),\dots,E_{a-1}^{*}(F)\) are \(\mathcal{S}_{2}\)-full sets._ Proof.: Let \(i\) be the least positive integer such that \(E_{i}^{*}(F)\) is not an \(\mathcal{S}_{2}\)-full set. Let \(a^{\prime}:=\min E_{i}^{*}(F)\). It is clear that \(E_{i}^{*}(F)\in\operatorname{full}_{a^{\prime},b}(\mathcal{S}_{2})\). Suppose that \(i<a\). If \([a^{\prime},b]\in\mathcal{S}_{2}\), then there exists \(m\in[a,b]\backslash F\), and \(F\cup\{m\}\in\mathcal{S}_{3}\), which is a contradiction.
We can assume that \([a^{\prime},b]\notin\mathcal{S}_{2}\). By Lemma 12, \(E_{1}(E_{i}^{*}(F)),\dots,E_{a^{\prime}-1}(E_{i}^{*}(F))\) are full Schreier sets. If there exists \(m\in[\min E_{a^{\prime}}(E_{i}^{*}(F)),b]\backslash F\), then \(F\cup\{m\}\in[a,b]\mathcal{S}_{3}\), which is a contradiction. It follows that \[[\min E_{a^{\prime}}(E_{i}^{*}(F)),b]\subseteq F,\] which yields \(E_{i+1}^{*}(F)=\emptyset\). Since \([a,b]\notin\mathcal{S}_{3}\), there exists \(m\in[a,b]\backslash F\). Observe that \(F\cup\{m\}\in\mathcal{S}_{3}\), which is again a contradiction. Therefore, \(i=a\), which ends the proof.

## 4. Tools for lower bounds

The idea for proving the lower bounds is the same for all the regular families that we consider. For this reason, we are going to prove an abstract lemma and then apply it to the various families. The general plan of constructing an element of \(c_{00}\) with high \(j_{\mathcal{F}}(x)\) is to put a very high value on the first coordinate and on a bunch of the last coordinates. This way, we force every functional attaining the norm to be a sum of many functionals from \(W_{0}(\mathcal{F})\). Intuitively, this leaves the largest possible space to proceed with the inductive construction. See an example in Figure 1. **Lemma 15**.: _Let \(\mathcal{F}\) be a regular family and let \(d\in\mathbb{N}\). If there exist \(F_{1},\ldots,F_{d}\in\operatorname{full}(\mathcal{F})\) and \(a_{1},b_{1},\ldots,a_{d-1},b_{d-1}\in\mathbb{N}\) such that_

* (l1) _for all_ \(i\in[d]\) _we have_ \(|F_{i}|\geqslant 3\)_,_
* (l2) _for all_ \(i\in[d-1]\) _the elements_ \(a_{i},b_{i}\) _are consecutive in_ \(F_{i}\)_,_
* (l3) _for all_ \(i\in[d-1]\) _we have_ \(a_{i}\in F_{i+1}\) _and_ \(F_{i+1}\subseteq[a_{i},b_{i}-1]\)_,_
* (l4) _for all_ \(i\in[d-1]\) _and distinct_ \(a,a^{\prime}\in[a_{i},b_{i}-1]\) _we have_ \((F_{i}\backslash\{a_{i}\})\cup\{a,a^{\prime}\}\notin\mathcal{F}\)_,_

_then there exists \(x\in c_{00}^{+}\) with \(\operatorname{supp}x\subseteq\bigcup_{i=1}^{d}F_{i}\) and \(j_{\mathcal{F}}(x)\geqslant d\), in particular \(j_{\mathcal{F}}(\max F_{1})\geqslant d\)._

Figure 1. Consider the case where \(\mathcal{F}=\mathcal{S}_{1}\). Let \(x\) be a sequence constructed as in the figure, that is, we put some value on coordinates \(6,7,8,9,10,11\), next we put a much greater value on coordinates \(5,12,13,14\), and so on. The values on coordinates \(3\) and \(17\) are so big that we have to take them with the smallest possible weight, that is \(\frac{1}{2}\); however, as the minimum coordinate is \(3\) and we work with \(\mathcal{S}_{1}\), there is only room for one more functional in the sum. Hence, if \(f(x)=\|x\|_{\mathcal{S}_{1}}\), then \(f=\frac{1}{2}(e_{3}^{*}+g+e_{17}^{*})\) with \(\operatorname{supp}g\subseteq[4,16]\). Now, we repeat the reasoning for \(g\), that is, the values on coordinates \(4\), \(15\), and \(16\) are so big that we have to take them with the smallest possible weight, obtaining that \(g=\frac{1}{2}(e_{4}^{*}+h+e_{15}^{*}+e_{16}^{*})\) with \(\operatorname{supp}h\subseteq[5,14]\). We continue, finally obtaining \(j_{\mathcal{S}_{1}}(x)\geqslant 4\).

Proof.: We proceed by induction on \(d\). Let us start with the case of \(d=1\). Suppose that there exists \(F_{1}\in\operatorname{full}(\mathcal{F})\) satisfying items (l1)-(l4); in particular, \(|F_{1}|\geqslant 3\). For each \(i\in\mathbb{N}\) define \[x_{i}:=\begin{cases}1&\text{if }i\in F_{1},\\ 0&\text{otherwise}.\end{cases}\] Let \(x:=(x_{i})_{i\in\mathbb{N}}\). For every \(e\in W_{0}(\mathcal{F})\), we have \(e(x)\leqslant 1\).
Let \(c_{1},c_{2},c_{3}\in F_{1}\) be three distinct elements. Define \(f:=\frac{1}{2}(e_{c_{1}}^{*}+e_{c_{2}}^{*}+e_{c_{3}}^{*})\). Since \(\mathcal{F}\) is hereditary, we have \(f\in W(\mathcal{F})\). Clearly, \(f(x)=\frac{3}{2}\) and \(\operatorname{depth}_{\mathcal{F}}(f)=1\), hence, \(j_{\mathcal{F}}(x)\geqslant 1\). Now, let \(d>1\) and suppose that there exist \(F_{1},\ldots,F_{d}\in\operatorname{full}(\mathcal{F})\) and \(a_{1},b_{1},\ldots,a_{d-1},b_{d-1}\in\mathbb{N}\) satisfying items (l1)-(l4). By the inductive assumption applied to \(F_{2},\ldots,F_{d}\) and \(a_{2},b_{2},\ldots,a_{d-1},b_{d-1}\), we obtain \(x^{\prime}\in c_{00}^{+}\) with \(\operatorname{supp}x^{\prime}\subseteq\bigcup_{i=2}^{d}F_{i}\) and \(j_{\mathcal{F}}(x^{\prime})\geqslant d-1\). Let \(s\) be the sum of all the coefficients of \(x^{\prime}\), and let \(F^{\prime}:=\bigcup_{i=2}^{d}F_{i}\). By (l2) and (l3), \(F_{1}\backslash F^{\prime}=F_{1}\backslash\{a_{1}\}\). For each \(i\in\mathbb{N}\) we define \[x_{i}:=\begin{cases}x_{i}^{\prime}&\text{if }i\in F^{\prime},\\ 2s&\text{if }i\in F_{1}\backslash F^{\prime},\\ 0&\text{otherwise}.\end{cases}\] Let \(x:=(x_{i})_{i\in\mathbb{N}}\). Let \(f\in W(\mathcal{F})\) be such that \(f(x)=\|x\|_{\mathcal{F}}\) and let \(f^{\prime}\in W(\mathcal{F})\) be such that \(f^{\prime}(x^{\prime})=\|x^{\prime}\|_{\mathcal{F}}\). Since \(j_{\mathcal{F}}(x^{\prime})\geqslant d-1\), we have \(\operatorname{depth}_{\mathcal{F}}(f^{\prime})\geqslant d-1\). We define \(g:=\frac{1}{2}\left(f^{\prime}+\sum_{i\in F_{1}\backslash F^{\prime}}e_{i}^{*}\right)\). Note that \((F_{1}\backslash F^{\prime})\cup\{\min\operatorname{supp}f^{\prime}\}\in\mathcal{F}\) (by the spreading property, since \(\min\operatorname{supp}f^{\prime}\geqslant a_{1}\)); thus, \(g\in W(\mathcal{F})\). The goal is to prove that \(f\) is of a similar form as \(g\). First, observe that \[g(x)=\frac{1}{2}\left(f^{\prime}(x)+|F_{1}\backslash F^{\prime}|\cdot 2s\right)=\frac{1}{2}\|x^{\prime}\|_{\mathcal{F}}+(|F_{1}|-1)\cdot s>2s.\] By definition, for each \(i\in\mathbb{N}\), we have \(x_{i}\leqslant 2s\); thus, if \(\operatorname{depth}_{\mathcal{F}}(f)=0\), then \(\|x\|_{\mathcal{F}}=f(x)\leqslant 2s<g(x)\leqslant\|x\|_{\mathcal{F}}\), which is a contradiction. Therefore, \(\operatorname{depth}_{\mathcal{F}}(f)>0\). It follows that for each \(i\in\mathbb{N}\), \(f(e_{i})\leqslant\frac{1}{2}\). We claim that for each \(i\in F_{1}\backslash F^{\prime}\), we have \(f(e_{i})=\frac{1}{2}\). For a contradiction, suppose that \(f(e_{i_{0}})<\frac{1}{2}\) for some \(i_{0}\in F_{1}\backslash F^{\prime}\). Since \(f\in W(\mathcal{F})\), the value \(f(e_{i_{0}})\) has to be an inverse of a power of \(2\); thus, \(f(e_{i_{0}})\leqslant\frac{1}{4}\). We have \[\|x\|_{\mathcal{F}}=f(x) =\sum_{i\in\mathbb{N}}f(e_{i})\cdot x_{i}=\sum_{i\in F_{1}\cup F^{\prime}}f(e_{i})\cdot x_{i}=\sum_{i\in(F_{1}\cup F^{\prime})\backslash\{i_{0}\}}f(e_{i})\cdot x_{i}+f(e_{i_{0}})x_{i_{0}}\] \[\leqslant\sum_{i\in(F_{1}\cup F^{\prime})\backslash\{i_{0}\}}\frac{1}{2}\cdot x_{i}+\frac{1}{4}x_{i_{0}}=\sum_{i\in F_{1}\backslash(F^{\prime}\cup\{i_{0}\})}\frac{1}{2}\cdot x_{i}+\sum_{i\in F^{\prime}}\frac{1}{2}\cdot x_{i}+\frac{1}{4}x_{i_{0}}\] \[=(|F_{1}|-2)\cdot s+\frac{1}{2}s+\frac{1}{4}\cdot 2s=s\cdot(|F_{1}|-1)<g(x)\leqslant\|x\|_{\mathcal{F}}.\] This is a contradiction, and so, for each \(i\in F_{1}\backslash F^{\prime}\), we have \(f(e_{i})=\frac{1}{2}\).
In particular, \[f=\frac{1}{2}\left(f_{1}+\cdots+f_{m}+\sum_{i\in F_{1}\backslash F^{\prime}}e_{i}^{*}\right)\] for some \(m\in\mathbb{N}_{0}\) and \(f_{1},\ldots,f_{m}\in W(\mathcal{F})\) such that \[(F_{1}\backslash\{a_{1}\})\cup\{\min\operatorname{supp}f_{1},\ldots,\min\operatorname{supp}f_{m}\}\in\mathcal{F}.\] However, by (l4), the above gives \[|\{\min\operatorname{supp}f_{1},\ldots,\min\operatorname{supp}f_{m}\}\cap[a_{1},b_{1}-1]|\leqslant 1.\] By comparing \(f(x)\) with \(g(x)\), we have \(f_{1}(x^{\prime})+\cdots+f_{m}(x^{\prime})\geqslant\|x^{\prime}\|_{\mathcal{F}}>0\). Clearly, if for some \(\ell\in[m]\) we have \(\min\operatorname{supp}f_{\ell}\notin[a_{1},b_{1}-1]\), then \(f_{\ell}(x^{\prime})=0\), and so, there exists \(\ell\in[m]\) such that \(\min\operatorname{supp}f_{\ell}\in[a_{1},b_{1}-1]\). Moreover, \(f_{\ell}(x^{\prime})=\|x^{\prime}\|_{\mathcal{F}}\), as otherwise \(f(x)<g(x)\). Hence, we have \(\operatorname{depth}_{\mathcal{F}}(f_{\ell})\geqslant d-1\), and so, \(\operatorname{depth}_{\mathcal{F}}(f)\geqslant d\), which ends the proof. As explained in the caption of Figure 1, the strategy is to take \(a_{1}=3,a_{2}=4,a_{3}=5\), and so on. We are almost ready to proceed with the construction of the sequences of sets for some regular families. The last remaining detail to take care of is to make sure that the families that we consider satisfy (l4). To this end, we abstract the following property of a regular family. We say that a regular family \(\mathcal{F}\) is _strong_ if for every integer \(a\) with \(a\geqslant 3\), and for all integers \(b,c\) with \(a+1<b\leqslant c\) such that \([a,a+1]\cup[b,c]\in\operatorname{full}(\mathcal{F})\), for all distinct \(a^{\prime},a^{\prime\prime}\in[a+2,b-1]\) we have \(\{a,a^{\prime},a^{\prime\prime}\}\cup[b,c]\notin\mathcal{F}\). The following is immediate to check. **Observation 16**.: _For every increasing and superadditive function \(\varphi:\mathbb{N}\to\mathbb{N}\) and for every integer \(k\) with \(k\geqslant 2\), the families \(\mathcal{S}_{\varphi},k\mathcal{S}_{1},\mathcal{S}_{2},\mathcal{S}_{3}\) are strong._ As the construction of the sequence \(F_{1},\ldots,F_{d}\) is virtually the same for all the families that we consider, we introduce the following auxiliary functions, which let us treat all these families uniformly. For all \(t,s\in\mathbb{N}\) with \(s+1<t\), we define \[r_{\mathcal{F}}(s,t):=\min\{m\in\mathbb{N}:\{s,s+1\}\cup[t,m-1]\in\operatorname{full}(\mathcal{F})\}.\] Next, for all \(s,t\in\mathbb{N}\) with \(s+1<t\) and for each \(u\in\mathbb{N}_{0}\) we define \[q_{\mathcal{F}}(u,s,t)=\begin{cases}t&\text{if }u>s,\\ r_{\mathcal{F}}(s,t)&\text{if }u=s,\\ r_{\mathcal{F}}(u,q_{\mathcal{F}}(u+1,s,t))&\text{if }u<s.\end{cases}\] For example, \(q_{\mathcal{F}}(3,5,10)=r_{\mathcal{F}}(3,r_{\mathcal{F}}(4,r_{\mathcal{F}}(5,10)))\). The definition of \(q_{\mathcal{F}}\) is a little convoluted; its purpose should become clear in Lemma 17 below.
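Before stating the lemma, here is a small illustrative Python sketch of \(r_{\mathcal{F}}\) and \(q_{\mathcal{F}}\) (the helper names `r`, `q`, and `in_S1` are ours). It computes \(r_{\mathcal{F}}(s,t)\) by greedy right-extension, which produces an \(\mathcal{F}\)-full set for the concrete families considered here, and it confirms the example above for \(\mathcal{F}=\mathcal{S}_{1}\), where \(r_{\mathcal{S}_{1}}(s,t)=t+s-2\).

```python
def in_S1(F):
    F = sorted(F)
    return not F or len(F) <= F[0]

def r(s, t, member):
    """r_F(s, t): extend [t, m-1] until {s, s+1} u [t, m-1] leaves F.
    For the families considered here, the right-maximal set is F-full."""
    m = t
    while member([s, s + 1] + list(range(t, m))):
        m += 1
    return m - 1

def q(u, s, t, member):
    """q_F(u, s, t), following the recursive definition above."""
    return t if u > s else r(u, q(u + 1, s, t, member), member)

# the example above, for F = S_1 (so each r(s, t) equals t + s - 2):
assert q(3, 5, 10, in_S1) == ((10 + 5 - 2) + 4 - 2) + 3 - 2 == 16

# the quantity q_F(3, d+2, d+4) grows quadratically in d for S_1,
# which is the source of the square-root-type lower bound later on:
assert [q(3, d + 2, d + 4, in_S1) for d in (1, 2, 3, 4)] == [6, 9, 13, 18]
```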
**Lemma 17**.: _Let \(\mathcal{F}\) be a strong regular family such that \(\operatorname{range}_{\mathcal{F}}(3)\geqslant 3\). For every \(n,d\in\mathbb{N}\), if \(q_{\mathcal{F}}(3,d+2,d+4)\leqslant n\), then_ \[d\leqslant j_{\mathcal{F}}(n).\] Proof.: Let \(n,d\in\mathbb{N}\) be such that \(q_{\mathcal{F}}(3,d+2,d+4)\leqslant n\). For each \(i\in[d]\), we define \[F_{i}:=\{i+2,i+3\}\cup[q_{\mathcal{F}}(i+3,d+2,d+4),q_{\mathcal{F}}(i+2,d+2,d+4)-1].\] It follows that \(F_{i}\in\operatorname{full}(\mathcal{F})\) and \(|F_{i}|\geqslant 3\) (so (l1) is satisfied). If \(i<d\), then we define \(a_{i}:=i+2\) and \(b_{i}:=q_{\mathcal{F}}(i+3,d+2,d+4)\). Observe that \(a_{i}\in F_{i+1}\) and \(F_{i+1}\subseteq[a_{i},b_{i}-1]\) (so (l3) is satisfied). Item (l2) is clearly satisfied. Since \(\mathcal{F}\) is strong, (l4) is also satisfied. Therefore, by Lemma 15, for every \(n\in\mathbb{N}\) such that \(q_{\mathcal{F}}(3,d+2,d+4)\leqslant n\), we have \[d\leqslant j_{\mathcal{F}}(\max F_{1})\leqslant j_{\mathcal{F}}(q_{\mathcal{F}}(3,d+2,d+4))\leqslant j_{\mathcal{F}}(n).\qed\]

## 5. Lower bounds

In this section, we apply Lemma 17 to the families \(\mathcal{S}_{\varphi}\) (including \(\mathcal{S}_{1}\)), \(k\mathcal{S}_{1}\), \(\mathcal{S}_{2}\), and \(\mathcal{S}_{3}\). In each case, we establish a bound on \(q_{\mathcal{F}}(3,d+2,d+4)\), and then compare this bound to \(n\) in order to obtain the final lower bound on \(j_{\mathcal{F}}(n)\) by applying Lemma 17. ### Lower bound for \(\mathcal{S}_{\varphi}\) Let \(\varphi:\mathbb{N}\to\mathbb{N}\) be a non-decreasing function. Fix some \(u,s,t\in\mathbb{N}\) with \(s+1<t\). It is clear that \[r_{\mathcal{S}_{\varphi}}(s,t)=t+\varphi(s)-2.\] It follows that \(q_{\mathcal{S}_{\varphi}}(u,s,t)\leqslant t+\sum_{i=u}^{s}(\varphi(i)-2)\). For every \(d\in\mathbb{N}\), we have \[q_{\mathcal{S}_{\varphi}}(3,d+2,d+4)=d+4+\sum_{i=3}^{d+2}(\varphi(i)-2)=\sum_{i=3}^{d+2}\varphi(i)-d+4\leqslant\sum_{i=3}^{d+3}\varphi(i).\] By Lemma 17, for every \(n\in\mathbb{N}\), if \(\sum_{i=3}^{d+3}\varphi(i)\leqslant n\), then \(j_{\mathcal{S}_{\varphi}}(n)\geqslant d\). **Corollary 18**.: _For each \(n\in\mathbb{N}\), we have_ \[j_{\mathcal{S}_{\varphi}}(n)\geqslant\max\{m\in\mathbb{N}:\sum_{j=3}^{m+3}\varphi(j)<n\}.\] Substituting \(\varphi=\mathrm{id}\) in the above gives a lower bound for \(j_{\mathcal{S}_{1}}(n)\). **Corollary 19**.: _For each \(n\in\mathbb{N}\), we have_ \[j_{\mathcal{S}_{1}}(n)\geqslant\sqrt{2n}-3.\] ### Lower bound for \(k\mathcal{S}_{1}\) Let \(k\) be an integer with \(k\geqslant 2\). Fix some \(u,s,t\in\mathbb{N}\) with \(s+1<t\). It is clear that \[r_{k\mathcal{S}_{1}}(s,t)=2^{k-1}(t+s-2)\leqslant 2^{k}t.\] It follows that \(q_{k\mathcal{S}_{1}}(u,s,t)\leqslant 2^{k(s-u+1)}t\leqslant 2^{k(s-u+1)+t}\). For every \(d\in\mathbb{N}\), we have \[q_{k\mathcal{S}_{1}}(3,d+2,d+4)\leqslant 2^{(k+1)d+4}.\] By Lemma 17, for every \(n\in\mathbb{N}\), if \(2^{(k+1)d+4}\leqslant n\), then \(j_{k\mathcal{S}_{1}}(n)\geqslant d\). **Corollary 20**.: _For each \(n\in\mathbb{N}\), and for each integer \(k\) with \(k\geqslant 2\), we have_ \[j_{k\mathcal{S}_{1}}(n)\geqslant\frac{1}{k+1}\log n-\frac{4}{k+1}-1.\] ### Lower bound for \(\mathcal{S}_{2}\) Fix some \(u,s,t\in\mathbb{N}\) with \(s+1<t\). It is clear that \[r_{\mathcal{S}_{2}}(s,t)=2^{s-1}(t+s-2)\leqslant 2^{s}t.\] It follows that \(q_{\mathcal{S}_{2}}(u,s,t)\leqslant\left(\prod_{i=u}^{s}2^{i}\right)t\leqslant 2^{(s(s+1))/2+t}\). For every \(d\in\mathbb{N}\), we have \[q_{\mathcal{S}_{2}}(3,d+2,d+4)\leqslant 2^{(d^{2}+7d+14)/2}.\] By Lemma 17, for every \(n\in\mathbb{N}\), if \(2^{(d^{2}+7d+14)/2}\leqslant n\), then \(j_{\mathcal{S}_{2}}(n)\geqslant d\). **Corollary 21**.: _For each \(n\in\mathbb{N}\), we have_ \[j_{\mathcal{S}_{2}}(n)\geqslant\sqrt{2\log n}-5.\] ### Lower bound for \(\mathcal{S}_{3}\) Fix some \(u,s,t\in\mathbb{N}\) with \(s+1<t\). To estimate \(r_{\mathcal{S}_{3}}(s,t)\), note that the full set \(F:=\{s,s+1\}\cup[t,r_{\mathcal{S}_{3}}(s,t)-1]\) consists of two parts. The first part is a prefix of \(F\) that is an \(\mathcal{S}_{2}\)-full set.
The second part is the rest of \(F\); it starts after the element \(t2^{t}\), and it is the union of \(s-1\) pairwise disjoint \(\mathcal{S}_{2}\)-full sets. By Lemma 9, we have \[r_{\mathcal{S}_{3}}(s,t)\leqslant\tau(s-1,2t2^{t}+s-1)\leqslant\tau(s-1,2^{2t+2})=\tau(s,2t+2)\leqslant\tau(s+1,t).\] It follows that \(q_{\mathcal{S}_{3}}(u,s,t)\leqslant\tau(\sum_{i=u}^{s}(i+1),t)\leqslant\tau((s+2)^{2}/2,t)\). For every \(d\in\mathbb{N}\), we have \[q_{\mathcal{S}_{3}}(3,d+2,d+4)\leqslant\tau((d+4)^{2}/2,d+4).\] By Lemma 17, for every \(n\in\mathbb{N}\), if \(\tau((d+4)^{2}/2,d+4)\leqslant n\), then \(j_{\mathcal{S}_{3}}(n)\geqslant d\). **Corollary 22**.: _For each \(n\in\mathbb{N}\), we have_ \[j_{\mathcal{S}_{3}}(n)\geqslant\sqrt{2\log^{*}n}-5.\]

## 6. Tools for upper bounds

### Some auxiliary definitions and simple observations

Let \(\mathcal{F}\) be a regular family, let \(x\in c_{00}\), and let \(f,g\in W(\mathcal{F})\). We write \[\operatorname{span}x:=[\min\operatorname{supp}x,\max\operatorname{supp}x]\ \ \ \text{and}\ \ \ \operatorname{span}f:=[\min\operatorname{supp}f,\max\operatorname{supp}f].\] We say that \(f\) is \(\mathcal{F}\)_-realizing for \(x\)_ if * \(f(x)=\|x\|_{\mathcal{F}}\), * \(\operatorname{depth}_{\mathcal{F}}(f)=j_{\mathcal{F}}(x)\), and * \(\operatorname{span}f\subseteq\operatorname{span}x\). We say that \(g\) _is not \(\mathcal{F}\)-worse than \(f\) for \(x\)_ if * \(g(x)\geqslant f(x)\), * \(\operatorname{depth}_{\mathcal{F}}(g)\leqslant\operatorname{depth}_{\mathcal{F}}(f)\), and * \(\operatorname{span}g\subseteq\operatorname{span}f\). Observe that if \(f\) is \(\mathcal{F}\)-realizing for \(x\) and \(g\) is not \(\mathcal{F}\)-worse than \(f\) for \(x\), then \(g\) is \(\mathcal{F}\)-realizing for \(x\). Moreover, the relation of being not \(\mathcal{F}\)-worse is transitive. We will use these facts implicitly and repeatedly. We define \[\operatorname{full}_{f}(\mathcal{F}):=\operatorname{full}_{\min\operatorname{supp}f,\max\operatorname{supp}f}(\mathcal{F}).\] Let \(m\in\mathbb{N}\) and let \(f_{1},\dots,f_{m}\in W(\mathcal{F})\). We say that \((f_{1},\dots,f_{m})\) is \(\mathcal{F}\)_-full-building for \(f\)_ if \((f_{1},\dots,f_{m})\) is \(\mathcal{F}\)-building for \(f\) and \[\{\min\operatorname{supp}f_{i}:i\in[m]\}\in\operatorname{full}_{f}(\mathcal{F}).\] We say that \(f\) is \(\mathcal{F}\)_-full_ if there exist a positive integer \(m\) and \(f_{1},\dots,f_{m}\in W(\mathcal{F})\) such that \((f_{1},\dots,f_{m})\) is \(\mathcal{F}\)-full-building for \(f\). For each \(i\in\mathbb{N}\), we define \[(x|f)_{i}:=\begin{cases}x_{i}&\text{if }\min\operatorname{supp}f\leqslant i\leqslant\max\operatorname{supp}f,\\ 0&\text{otherwise,}\end{cases}\] and we let \((x|f):=((x|f)_{i})_{i\in\mathbb{N}}\). **Observation 23**.: _Let \(x\in c_{00}^{+}\), let \(\mathcal{F}\) be a regular family, and let \(f\in W(\mathcal{F})\). If \(\operatorname{supp}f\in\mathcal{F}\), then there exists \(g\in W(\mathcal{F})\) with \(\operatorname{depth}_{\mathcal{F}}(g)\leqslant 1\) that is not \(\mathcal{F}\)-worse than \(f\) for \(x\)._ _In particular, if \([a,b]\in\mathcal{F}\) for some \(a,b\in\mathbb{N}\) with \(a\leqslant b\), then \(j_{\mathcal{F}}(a,b)\leqslant 1\)._ Proof.: Assume that \(\operatorname{supp}f\in\mathcal{F}\). If \(\operatorname{depth}_{\mathcal{F}}(f)\leqslant 1\), then \(g:=f\) satisfies the assertion. Otherwise, we put \(g:=\frac{1}{2}\sum_{i\in\operatorname{supp}f}e_{i}^{*}\). Clearly, \(g(x)\geqslant f(x)\), \(\operatorname{depth}_{\mathcal{F}}(g)=1\), and \(\operatorname{span}g\subseteq\operatorname{span}f\).
Next, we prove that for each \(x\in c_{00}^{+}\), the norm \(\|x\|_{\mathcal{F}}\) is always realized either by a very shallow functional or by an \(\mathcal{F}\)-full functional. **Lemma 24**.: _Let \(x\in c_{00}^{+}\) and let \(\mathcal{F}\) be a regular family. For every \(f\in W(\mathcal{F})\) such that \(f(x)=\|x\|_{\mathcal{F}}\) there exists \(g\in W(\mathcal{F})\) that is not \(\mathcal{F}\)-worse than \(f\) for \(x\), and either_

* (f1) \(\operatorname{depth}_{\mathcal{F}}(g)\leqslant 1\)_, or_
* (f2) \(g\) _is_ \(\mathcal{F}\)_-full._

Proof.: Suppose that the assertion does not hold. Let us choose a counterexample \(f\in W(\mathcal{F})\) satisfying the premise of the lemma according to the following rule: for each counterexample \(f\), consider the maximum \(d\in\mathbb{N}\) such that there exist \(f_{1},\ldots,f_{d}\) with \((f_{1},\ldots,f_{d})\) being \(\mathcal{F}\)-building for \(f\); among all counterexamples, we choose \(f\) for which this value \(d\) is maximum. Fix \(f_{1},\ldots,f_{d}\in W(\mathcal{F})\) as above, and let \(F_{f}:=\{\min\operatorname{supp}f_{i}:i\in[d]\}\). Observe that, as \(f\) itself cannot be taken as \(g\), we have \(\operatorname{depth}_{\mathcal{F}}(f)\geqslant 2\) and \(F_{f}\notin\operatorname{full}_{f}(\mathcal{F})\). It follows that there exists \(n\in\operatorname{span}f\backslash F_{f}\) with \(F_{f}\cup\{n\}\in\mathcal{F}\). Let \(n^{*}\) be the maximum such number. Let \(t\) be the maximum number in \([d]\) such that \(\max\operatorname{supp}f_{t}\leqslant n^{*}\). First, suppose that \(\max\operatorname{supp}f_{t}<n^{*}\); then we define \[f^{\prime}:=\frac{1}{2}\left(\sum_{i=1}^{t}f_{i}+e_{n^{*}}^{*}+\sum_{i=t+1}^{d}f_{i}\right).\] Next, suppose that \(\max\operatorname{supp}f_{t}=n^{*}\). Note that \(f_{t}\neq e_{n^{*}}^{*}\), and so \(f_{t}^{\prime}:=f_{t}|_{[1,n^{*}-1]}\) is a well-defined member of \(W(\mathcal{F})\). We define \[f^{\prime}:=\frac{1}{2}\left(\sum_{i=1}^{t-1}f_{i}+f_{t}^{\prime}+e_{n^{*}}^{*}+\sum_{i=t+1}^{d}f_{i}\right).\] Since \(F_{f}\cup\{n^{*}\}\in\mathcal{F}\), in both cases \(f^{\prime}\in W(\mathcal{F})\). Moreover, \(f(x)\leqslant f^{\prime}(x)\) in both cases, and in particular, \(f^{\prime}(x)=\|x\|_{\mathcal{F}}\). Furthermore, \(\operatorname{depth}_{\mathcal{F}}(f^{\prime})=\operatorname{depth}_{\mathcal{F}}(f)\) and \(\operatorname{span}f^{\prime}=\operatorname{span}f\). It follows that \(f^{\prime}\) is not \(\mathcal{F}\)-worse than \(f\) for \(x\). By the choice of \(f\), the functional \(f^{\prime}\) is not a counterexample, and so, there exists \(g\in W(\mathcal{F})\) not \(\mathcal{F}\)-worse than \(f^{\prime}\) for \(x\) that satisfies (f1) or (f2). However, we obtain that \(g\) is not \(\mathcal{F}\)-worse than \(f\) for \(x\), which contradicts the fact that \(f\) is a counterexample. ### The insertion property In this section, the main goal is to generalize the core step of the proof of an upper bound for \(j_{\mathcal{S}_{1}}(n)\) by Beanland, Duncan, and Holt [6, Lemma 1.8]. We reprove the result with \(\mathcal{S}_{1}\) replaced by a regular family satisfying a certain abstract property. More precisely, we aim to develop a property of a regular family such that, assuming it, we can strengthen condition (f2) in Lemma 24.
We say that a regular family \(\mathcal{F}\) _has the insertion property_ if for all \(a,s,t\in\mathbb{N}\) with \(s,t\geqslant 2\), and for all \(n_{2},\ldots,n_{s},m_{2},\ldots,m_{t}\in\mathbb{N}\) with \(m_{2}<\cdots<m_{t}<n_{2}<\cdots<n_{s}\), if \[\{a,m_{2},\ldots,m_{t}\},\{a,n_{2},\ldots,n_{s}\}\in\mathcal{F}\ \ \text{and}\ \ \operatorname{range}_{\mathcal{F}}(a)<m_{2},\] then \[\{m_{2},m_{3},\dots,m_{t},n_{2},\dots,n_{s}\}\in\mathcal{F}.\] First, we show that the families \(\mathcal{S}_{\varphi},\mathcal{S}_{2}\), and \(\mathcal{S}_{3}\) have the insertion property. Here, we use the superadditivity of \(\varphi\). **Lemma 25**.: _Let \(\varphi:\mathbb{N}\to\mathbb{N}\) be increasing and superadditive. The family \(\mathcal{S}_{\varphi}\) has the insertion property._ Proof.: Let \(a,s,t\in\mathbb{N}\) be such that \(s,t\geqslant 2\), and let \(n_{2},\dots,n_{s},m_{2},\dots,m_{t}\in\mathbb{N}\) with \(m_{2}<\dots<m_{t}<n_{2}<\dots<n_{s}\). Let \(F:=\{a,n_{2},\dots,n_{s}\}\) and \(G:=\{a,m_{2},\dots,m_{t}\}\). Assume that \(F,G\in\mathcal{S}_{\varphi}\) and \(\operatorname{range}_{\mathcal{S}_{\varphi}}(a)<m_{2}\). Let \(H:=\{m_{2},m_{3},\dots,m_{t},n_{2},\dots,n_{s}\}\). Since \(\operatorname{range}_{\mathcal{S}_{\varphi}}(a)=\varphi(a)+a<m_{2}\), we have \(\varphi(\varphi(a)+a)<\varphi(m_{2})\); moreover, \(\varphi(a)+\varphi(a)\leqslant\varphi(\varphi(a)+a)\). Since \(F,G\in\mathcal{S}_{\varphi}\), we have \(s=|F|\leqslant\varphi(\min F)=\varphi(a)\) and \(t=|G|\leqslant\varphi(\min G)=\varphi(a)\). Therefore, \[|H|=s+t-2<s+t\leqslant\varphi(a)+\varphi(a)\leqslant\varphi(\varphi(a)+a)<\varphi(m_{2})=\varphi(\min H).\] This yields \(H\in\mathcal{S}_{\varphi}\). **Lemma 26**.: _For each \(\ell\in\{2,3\}\), the family \(\mathcal{S}_{\ell}\) has the insertion property._ Proof.: Let \(a,s,t\in\mathbb{N}\) be such that \(s,t\geqslant 2\), and let \(n_{2},\dots,n_{s},m_{2},\dots,m_{t}\in\mathbb{N}\) with \(m_{2}<\dots<m_{t}<n_{2}<\dots<n_{s}\). Let \(F:=\{a,n_{2},\dots,n_{s}\}\) and \(G:=\{a,m_{2},\dots,m_{t}\}\). Assume that \(F,G\in\mathcal{S}_{\ell}\) and \(\operatorname{range}_{\mathcal{S}_{\ell}}(a)\leqslant m_{2}\). Let \(H:=\{m_{2},m_{3},\dots,m_{t},n_{2},\dots,n_{s}\}\). There exist \(F_{1},\dots,F_{a},G_{1},\dots,G_{a}\in\mathcal{S}_{\ell-1}\) such that \(F_{1}<\dots<F_{a}\), \(G_{1}<\dots<G_{a}\), and \(F=F_{1}\cup\dots\cup F_{a}\), \(G=G_{1}\cup\dots\cup G_{a}\). Observe that \[H=(G_{1}\backslash\{a\})\cup G_{2}\cup\dots\cup G_{a}\cup(F_{1}\backslash\{a\})\cup F_{2}\cup\dots\cup F_{a}.\] Since \(\operatorname{range}_{\mathcal{S}_{\ell}}(a)\leqslant m_{2}=\min H\), in order to prove that \(H\in\mathcal{S}_{\ell}\), it suffices to check that \(2a<\operatorname{range}_{\mathcal{S}_{\ell}}(a)\), which is clear in both cases by Observation 5. (Note that \(F\in\mathcal{S}_{\ell}\) requires \(a>1\).) Observe that the family \(2\mathcal{S}_{1}\) does not have the insertion property. Indeed, consider the following example: \[F:=\{2,99\}\cup[100,199]\ \ \ \text{and}\ \ \ G:=\{2,19\}\cup[20,39].\] Clearly, \(F,G\in 2\mathcal{S}_{1}\). The greatest element of \(G\), that is, \(39\), is less than the second least element of \(F\), that is, \(99\). It is easy to compute (see Observation 5) that \(\operatorname{range}_{2\mathcal{S}_{1}}(2)=8\), thus the inequality \(\operatorname{range}_{2\mathcal{S}_{1}}(a)<m_{2}\) takes the form \(8<19\), which is clearly true. The insertion property would give \[\{19\}\cup[20,39]\cup\{99\}\cup[100,199]\in 2\mathcal{S}_{1}.\] This can be easily verified to be false.
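Indeed, the verification is mechanical. The following Python sketch (our illustration; `blocks` is the greedy decomposition from Section 3, i.e. the operators \(E_{i}\)) confirms that \(F\) and \(G\) are in \(2\mathcal{S}_{1}\), that the candidate conclusion set is not, and that \(\operatorname{range}_{2\mathcal{S}_{1}}(2)=8\).

```python
def blocks(F):                 # greedy Schreier blocks E_1(F), E_2(F), ...
    F, out = sorted(F), []
    while F:
        out.append(F[:F[0]])
        F = F[F[0]:]
    return out

def in_2S1(F):
    return len(blocks(F)) <= 2

F = [2, 99] + list(range(100, 200))
G = [2, 19] + list(range(20, 40))
H = [19] + list(range(20, 40)) + [99] + list(range(100, 200))

assert in_2S1(F) and in_2S1(G)      # both hypothesis sets lie in 2S_1
assert not in_2S1(H)                # but the would-be conclusion fails
assert in_2S1(range(2, 8)) and not in_2S1(range(2, 9))   # range_{2S_1}(2) = 8
```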
Following a similar idea, one can construct counterexamples showing that \(k\mathcal{S}_{1}\) does not have the insertion property for every \(k\geqslant 2\). This fact indicates that the family \(k\mathcal{S}_{1}\) has to be treated differently. As already announced, we now strengthen Lemma 24 for regular families that have the insertion property. **Lemma 27**.: _Let \(x\in c_{00}^{+}\) and let \(\mathcal{F}\) be a regular family that has the insertion property. For every \(f\in W(\mathcal{F})\) such that \(f(x)=\|x\|_{\mathcal{F}}\) there exists \(g\in W(\mathcal{F})\) that is not \(\mathcal{F}\)-worse than \(f\) for \(x\) and either_

* (sf1) \(\operatorname{depth}_{\mathcal{F}}(g)\leqslant 1\)_, or_
* (sf2) \(\operatorname{depth}_{\mathcal{F}}(g)\geqslant 2\) _and there exist a positive integer_ \(d\) _and_ \(g_{1},\ldots,g_{d}\in W(\mathcal{F})\) _such that_ \((g_{1},\ldots,g_{d})\) _is_ \(\mathcal{F}\)_-full-building for_ \(g\)_, and either_ \(\operatorname{depth}_{\mathcal{F}}(g_{1})=0\) _or there exist a positive integer_ \(e\) _and_ \(t_{1},\ldots,t_{e}\in W(\mathcal{F})\) _such that_ \((t_{1},\ldots,t_{e})\) _is_ \(\mathcal{F}\)_-building for_ \(g_{1}\)_, and_ \(\operatorname{depth}_{\mathcal{F}}(t_{1})\leqslant 1\)_._

Proof.: By Lemma 24, there exists \(g^{\prime}\in W(\mathcal{F})\) that is not \(\mathcal{F}\)-worse than \(f\) for \(x\), and either \(\operatorname{depth}_{\mathcal{F}}(g^{\prime})\leqslant 1\), or \(g^{\prime}\) is \(\mathcal{F}\)-full. Fix such \(g^{\prime}\) with \(\min\operatorname{supp}g^{\prime}\) maximal. If \(\operatorname{depth}_{\mathcal{F}}(g^{\prime})\leqslant 1\), then (sf1) is satisfied for \(g:=g^{\prime}\). Therefore, we can assume that \(\operatorname{depth}_{\mathcal{F}}(g^{\prime})\geqslant 2\) and that \(g^{\prime}\) is \(\mathcal{F}\)-full. Let \(d\in\mathbb{N}\) and let \(g_{1},\ldots,g_{d}\in W(\mathcal{F})\) be such that \((g_{1},\ldots,g_{d})\) is \(\mathcal{F}\)-full-building for \(g^{\prime}\). If \(\operatorname{depth}_{\mathcal{F}}(g_{1})=0\), then (sf2) is satisfied. Thus, we can assume that \(\operatorname{depth}_{\mathcal{F}}(g_{1})\geqslant 1\). Let \(e\in\mathbb{N}\) and let \(t_{1},\ldots,t_{e}\in W(\mathcal{F})\) be such that \((t_{1},\ldots,t_{e})\) is \(\mathcal{F}\)-building for \(g_{1}\). If \(\operatorname{depth}_{\mathcal{F}}(t_{1})\leqslant 1\), then (sf2) is satisfied; hence, we assume that \(\operatorname{depth}_{\mathcal{F}}(t_{1})\geqslant 2\). Let \(a:=\min\operatorname{supp}t_{1}\). If \(\max\operatorname{supp}t_{1}<\operatorname{range}_{\mathcal{F}}(a)\), then \(\operatorname{supp}t_{1}\in\mathcal{F}\) (Observation 4), and so, by Observation 23, there exists \(t_{1}^{\prime}\) with \(\operatorname{depth}_{\mathcal{F}}(t_{1}^{\prime})\leqslant 1\) that is not \(\mathcal{F}\)-worse than \(t_{1}\) for \(x\). Let \(g\) be obtained from \(g^{\prime}\) by replacing \(t_{1}\) with \(t_{1}^{\prime}\). Item (sf2) is satisfied; thus, we assume that \(\operatorname{range}_{\mathcal{F}}(a)\leqslant\max\operatorname{supp}t_{1}\), and so, \(\operatorname{range}_{\mathcal{F}}(a)<\min\operatorname{supp}t_{2}\).
Since \(\mathcal{F}\) has the insertion property, we have \[H:=\{\min\operatorname{supp}t_{2},\ldots,\min\operatorname{supp}t_{e},\min\operatorname{supp}g_{2},\ldots,\min\operatorname{supp}g_{d}\}\in\mathcal{F}.\] This yields \[h_{1} :=\frac{1}{2}(t_{2}+\cdots+t_{e}+g_{2}+\cdots+g_{d})\in W(\mathcal{F})\;\;\text{and}\] \[h_{2} :=\frac{1}{2}(t_{1}+g_{2}+\cdots+g_{d})\in W(\mathcal{F}).\] We have \(\frac{1}{2}(h_{1}+h_{2})=g^{\prime}\) and \(\|x\|_{\mathcal{F}}=f(x)\leqslant g^{\prime}(x)\leqslant\|x\|_{\mathcal{F}}\). Therefore, \(\|x\|_{\mathcal{F}}=g^{\prime}(x)=h_{1}(x)=h_{2}(x)\). It is easy to verify that \(h_{1}\) is not \(\mathcal{F}\)-worse than \(f\) for \(x\). However, by Lemma 24, this yields the existence of \(h_{1}^{\prime}\in W(\mathcal{F})\) such that \(h_{1}^{\prime}\) is not \(\mathcal{F}\)-worse than \(h_{1}\) for \(x\) and either \(\operatorname{depth}_{\mathcal{F}}(h_{1}^{\prime})\leqslant 1\), or \(h_{1}^{\prime}\) is \(\mathcal{F}\)-full. Note that \(\min\operatorname{supp}g^{\prime}<\min\operatorname{supp}h_{1}\leqslant\min\operatorname{supp}h_{1}^{\prime}\), which contradicts the choice of \(g^{\prime}\). ### Optimal sequences of realizing functionals In the final part of this section, we inductively apply Lemma 24 and Lemma 27 in order to derive "optimal sequences" of realizing functionals for each \(x\in c_{00}^{+}\). First, we need the following technical observation. **Observation 28**.: _Let \(x\in c_{00}^{+}\) and let \(\mathcal{F}\) be a regular family. Let \(g\in W(\mathcal{F})\) be \(\mathcal{F}\)-realizing for \(x\) with \(\operatorname{depth}_{\mathcal{F}}(g)\geqslant 4\). Let \(d\in\mathbb{N}\) and let \(g_{1},\ldots,g_{d}\in W(\mathcal{F})\) be such that \((g_{1},\ldots,g_{d})\) is \(\mathcal{F}\)-building for \(g\). For each \(i\in[d]\) with \(\operatorname{depth}_{\mathcal{F}}(g_{i})=0\), let \(d_{i}:=0\); for each \(i\in[d]\) with \(\operatorname{depth}_{\mathcal{F}}(g_{i})\geqslant 1\), let \(d_{i}\in\mathbb{N}\) be such that there exist \(t_{1}^{(i)},\ldots,t_{d_{i}}^{(i)}\in W(\mathcal{F})\) with \((t_{1}^{(i)},\ldots,t_{d_{i}}^{(i)})\) being \(\mathcal{F}\)-building for \(g_{i}\). Then, there exist \(i\in[d]\) and \(j\in[d_{i}]\) such that_

* (o1) _if_ \(\operatorname{depth}_{\mathcal{F}}(g_{1})=0\) _or_ \(\operatorname{depth}_{\mathcal{F}}(t_{1}^{(1)})\leqslant 1\)_, then_ \((i,j)\neq(1,1)\)_;_
* (o2) \(\operatorname{depth}_{\mathcal{F}}(g)=\operatorname{depth}_{\mathcal{F}}(t_{j}^{(i)})+2\)_;_
* (o3) \(t_{j}^{(i)}\) _is_ \(\mathcal{F}\)_-realizing for_ \(x|t_{j}^{(i)}\)_;_
* (o4) \(|\mathrm{span}\,t_{j}^{(i)}|<|\mathrm{span}\,x|\)_._

Proof.: Since \(\mathrm{depth}_{\mathcal{F}}(g)\geqslant 4\), if for some \(i\in[d]\) and \(j\in[d_{i}]\) item (o2) holds, then item (o1) holds. Let \(I\) be the set of all pairs of integers \(i\in[d],j\in[d_{i}]\) such that item (o2) is satisfied. By definition, \(I\) is nonempty. Fix some \((i,j)\in I\), and let \(t:=t_{j}^{(i)}\), \(x^{\prime}:=x|t_{j}^{(i)}\). We claim that \(t(x^{\prime})=\|x^{\prime}\|_{\mathcal{F}}\). Indeed, if there exists \(t^{\prime}\in W(\mathcal{F})\) with \(t(x^{\prime})<t^{\prime}(x^{\prime})\), then \(g^{\prime}\) obtained from \(g\) by replacing \(t\) with \(t^{\prime}\) satisfies \(\|x\|_{\mathcal{F}}=g(x)<g^{\prime}(x)\), which is a contradiction. Suppose that for every \((i,j)\in I\), the functional \(t_{j}^{(i)}\) is not \(\mathcal{F}\)-realizing for \(x|t_{j}^{(i)}\). That is, \(\mathrm{depth}_{\mathcal{F}}(t_{j}^{(i)})>j_{\mathcal{F}}(x|t_{j}^{(i)})\).
For each \((i,j)\in I\), let \(s_{j}^{(i)}\) be \(\mathcal{F}\)-realizing for \(x|t_{j}^{(i)}\). Note that \(\mathrm{depth}_{\mathcal{F}}(t_{j}^{(i)})>\mathrm{depth}_{\mathcal{F}}(s_{j}^{(i)})\). Let \(g^{\prime}\) be obtained from \(g\) by replacing \(t_{j}^{(i)}\) with \(s_{j}^{(i)}\) for each \((i,j)\in I\). It follows that \(g^{\prime}(x)=g(x)\), \(\mathrm{depth}_{\mathcal{F}}(g)>\mathrm{depth}_{\mathcal{F}}(g^{\prime})\), and \(\mathrm{span}\,g^{\prime}\subseteq\mathrm{span}\,g\), which contradicts the fact that \(g\) is \(\mathcal{F}\)-realizing for \(x\). Therefore, there exists \((i,j)\in I\) such that \(t_{j}^{(i)}\) is \(\mathcal{F}\)-realizing for \(x|t_{j}^{(i)}\). Finally, we prove that for \((i,j)\in I\) as above item (o4) holds. Observe that \(t_{j}^{(i)}(x)<g(x)=\|x\|_{\mathcal{F}}\), as otherwise \(g\) is not \(\mathcal{F}\)-realizing. It follows that \(\mathrm{span}\,t_{j}^{(i)}\) is a strict subset of \(\mathrm{span}\,g\), and so of \(\mathrm{span}\,x\). **Lemma 29**.: _Let \(\mathcal{F}\) be a regular family. For every \(x\in c_{00}^{+}\), there exist \(c\in\mathbb{N}_{0}\) and \(f_{0},\ldots,f_{c}\in W(\mathcal{F})\) such that_

* (r1) \(\mathrm{depth}_{\mathcal{F}}(f_{0})\leqslant 3\)_;_
* (r2) \(f_{c}\) _is_ \(\mathcal{F}\)_-realizing for_ \(x\)_;_
* (r3) _for every_ \(m\in[c]\)_,_ \(f_{m-1}\) _is_ \(\mathcal{F}\)_-realizing for_ \(x|f_{m-1}\)_;_
* (r4) _for every_ \(m\in[c]\)_,_ \(\mathrm{span}\,f_{m-1}\subseteq\mathrm{span}\,f_{m}\)_;_
* (r5) _for every_ \(m\in[c]\)_,_ \(\mathrm{depth}_{\mathcal{F}}(f_{m})=\mathrm{depth}_{\mathcal{F}}(f_{m-1})+2\)_;_
* (r6) _for every_ \(m\in[c]\)_, if_ \(\mathcal{F}\) _has the insertion property, then_ \(\min\mathrm{supp}\,f_{m}<\min\mathrm{supp}\,f_{m-1}\)_;_
* (r7) _for every_ \(m\in[c]\)_, there exist_ \(F_{1},F_{2}\subseteq\mathbb{N}\) _with_ \(\min\mathrm{supp}\,f_{m}\in F_{1}\) _and_ \[\min\mathrm{supp}\,f_{m}\leqslant F_{1}\leqslant\mathrm{span}\,f_{m-1}<F_{2}\leqslant\max\mathrm{supp}\,f_{m},\] _such that_ \(F_{1}\cup F_{2}\in\mathrm{full}_{f_{m}}(\mathcal{F})\)_._

Proof.: Suppose that the lemma is false. Let \(x\in c_{00}^{+}\) be a counterexample with \(|\mathrm{supp}\,x|\) minimal. Let \(f\in W(\mathcal{F})\) be an \(\mathcal{F}\)-realizing functional for \(x\). By Lemma 24, there exists \(g\in W(\mathcal{F})\) that is not \(\mathcal{F}\)-worse than \(f\) for \(x\) and such that either (f1) or (f2) is satisfied. In the case where \(\mathcal{F}\) has the insertion property, by Lemma 27, the condition (f2) can be replaced with (sf2). It follows that \(g\) is \(\mathcal{F}\)-realizing for \(x\). If \(\mathrm{depth}_{\mathcal{F}}(g)\leqslant 3\), then we put \(c:=0\) and \(f_{0}:=g\). It is easy to check that items (r1)-(r7) are satisfied. Therefore, we can assume that (f2) (or (sf2)) is satisfied, and \(\mathrm{depth}_{\mathcal{F}}(g)\geqslant 4\). Let \(d\in\mathbb{N}\) and let \(g_{1},\ldots,g_{d}\in W(\mathcal{F})\) be such that \((g_{1},\ldots,g_{d})\) is \(\mathcal{F}\)-full-building for \(g\). For each \(i\in[d]\) with \(\mathrm{depth}_{\mathcal{F}}(g_{i})=0\), let \(d_{i}:=0\); for each \(i\in[d]\) with \(\mathrm{depth}_{\mathcal{F}}(g_{i})\geqslant 1\), let \(d_{i}\in\mathbb{N}\) be such that there exist \(t_{1}^{(i)},\ldots,t_{d_{i}}^{(i)}\in W(\mathcal{F})\) with \((t_{1}^{(i)},\ldots,t_{d_{i}}^{(i)})\) being \(\mathcal{F}\)-building for \(g_{i}\). In the case where \(\mathcal{F}\) has the insertion property, by (sf2) we can assume that either \(\mathrm{depth}_{\mathcal{F}}(g_{1})=0\) or \(\mathrm{depth}_{\mathcal{F}}(t_{1}^{(1)})\leqslant 1\).
By Observation 28, there exist \(i\in[d]\) with \(\mathrm{depth}_{\mathcal{F}}(g_{i})\geqslant 1\) and \(j\in[d_{i}]\) such that (o1)-(o4) are satisfied. Let \(t:=t_{j}^{(i)}\). By the minimality of \(x\) and (o4), for \(x|t\) there exist \(c^{\prime}\in\mathbb{N}_{0}\) and \(f_{0},\dots,f_{c^{\prime}}\in W(\mathcal{F})\) such that (r1)-(r7) hold. From now on, we refer to the statements in the items for \(x|t\) and \(f_{0},\dots,f_{c^{\prime}}\) as [(r1)]-[(r7)]. Let \(c:=c^{\prime}+1\), and let \(f_{c}:=g\). We claim that the sequence \(f_{0},\dots,f_{c}\) satisfies (r1)-(r7) for \(x\). Since \(x\) is a counterexample, this claim leads to a contradiction, and thus proving the claim suffices to end the proof of the lemma. Items (r1) and (r2) are obvious. Note that for each \(m\in[c-1]=[c^{\prime}]\), the statements in (r3)-(r7) follow from the corresponding statements in [(r3)]-[(r7)]; hence it suffices to prove them for \(m=c\). Item (r3) follows from [(r2)]. By [(r2)], we have \[\operatorname{span}f_{c-1}\subseteq\operatorname{span}x|t=\operatorname{span}t\subseteq\operatorname{span}g=\operatorname{span}f_{c}.\] This yields (r4). By (o2) and (o3), we have \[\operatorname{depth}_{\mathcal{F}}(f_{c})=\operatorname{depth}_{\mathcal{F}}(g)=\operatorname{depth}_{\mathcal{F}}(t)+2=\operatorname{depth}_{\mathcal{F}}(f_{c-1})+2.\] This yields (r5). Recall that \(t=t_{j}^{(i)}\). To prove the next item, assume that \(\mathcal{F}\) has the insertion property. By (o1), we have \((i,j)\neq(1,1)\); therefore, \[\min\operatorname{supp}f_{c}=\min\operatorname{supp}g\leqslant\min\operatorname{supp}g_{1}<\min\operatorname{supp}t\leqslant\min\operatorname{supp}f_{c-1}.\] This yields (r6). For the last item we define \[F_{1} =\{\min\operatorname{supp}g_{\ell}:\ell\in[i]\},\] \[F_{2} =\{\min\operatorname{supp}g_{\ell}:\ell\in[i+1,d]\}.\] Since \((g_{1},\dots,g_{d})\) is \(\mathcal{F}\)-full-building for \(g\), we have \(F_{1}\cup F_{2}\in\operatorname{full}_{g}(\mathcal{F})=\operatorname{full}_{f_{c}}(\mathcal{F})\). The sequence of inequalities in (r7) is clear; thus, (r7) follows.

## 7. Upper bounds

### Upper bound for \(\mathcal{S}_{\varphi}\)

Fix an increasing and superadditive function \(\varphi:\mathbb{N}\to\mathbb{N}\). **Lemma 30**.: _Let \(a,b\in\mathbb{N}\) with \(a\leqslant b\). If \(|[a,b]|\leqslant\varphi(a)\), then \(j_{\mathcal{S}_{\varphi}}(a,b)\leqslant 1\). Otherwise,_ \[j_{\mathcal{S}_{\varphi}}(a,b)\leqslant 2+\max\left(\left\{j_{\mathcal{S}_{\varphi}}(a+i,b-\varphi(a)+i+1):i\in[\varphi(a)-1]\right\}\cup\{1\}\right).\] Proof.: The first part follows immediately from Observation 23. Suppose that \(|[a,b]|>\varphi(a)\). Let \(x\in c_{00}^{+}\) be such that \(j_{\mathcal{S}_{\varphi}}(x)=j_{\mathcal{S}_{\varphi}}(a,b)\). By Lemma 29, there exist \(c\in\mathbb{N}_{0}\) and \(f_{0},\dots,f_{c}\in W(\mathcal{S}_{\varphi})\) such that (r1)-(r7) are satisfied. By Lemma 25, the family \(\mathcal{S}_{\varphi}\) has the insertion property; thus, by (r6), we have \(\min\operatorname{supp}f_{c}<\min\operatorname{supp}f_{c-1}\). By (r2), \(f_{c}\) is \(\mathcal{S}_{\varphi}\)-realizing for \(x\). If \(c=0\), then by (r1), we have \[j_{\mathcal{S}_{\varphi}}(a,b)=j_{\mathcal{S}_{\varphi}}(x)=\operatorname{depth}_{\mathcal{S}_{\varphi}}(f_{0})\leqslant 3=2+1.\] Suppose that \(c\geqslant 1\). First, we claim that there exists \(i\in[\varphi(a)-1]\) such that \(\operatorname{span}f_{c-1}\subseteq[a+i,b-\varphi(a)+i+1]\).
By (r7), there exist \(F_{1},F_{2}\subseteq\mathbb{N}\) with \[\min\operatorname{supp}f_{c}\leqslant F_{1}\leqslant\operatorname{span}f_{c-1}<F_{2}\leqslant\max\operatorname{supp}f_{c},\] such that \(F_{1}\cup F_{2}\in\operatorname{full}_{f_{c}}(\mathcal{S}_{\varphi})\). Since \(\operatorname{depth}_{\mathcal{S}_{\varphi}}(f_{c})\geqslant 2\), it follows that \(\operatorname{span}f_{c}\notin\mathcal{S}_{\varphi}\), and so, Lemma 10 gives \(|F_{1}\cup F_{2}|=\varphi(\min F_{1})\geqslant\varphi(a)\). Let \(e:=|F_{1}|\); we have \[\operatorname{span}f_{c-1}\subseteq[\max F_{1},\min F_{2}-1]\subseteq[a+(e-1),b-(\varphi(a)-e)].\] If \(2\leqslant e\leqslant\varphi(a)\), then we put \(i:=e-1\), and in turn, \(\operatorname{span}f_{c-1}\subseteq[a+i,b-\varphi(a)+i+1]\). If \(\varphi(a)<e\), then we put \(i:=\varphi(a)-1\), and we have \(\operatorname{span}f_{c-1}\subseteq[a+(\varphi(a)-1),b]=[a+i,b-\varphi(a)+i+1]\). Suppose that \(e=1\); it follows that \(\operatorname{span}f_{c-1}\subseteq[a,b-\varphi(a)+1]\). However, \(\min\operatorname{supp}f_{c}<\min\operatorname{supp}f_{c-1}\); thus, \(\operatorname{span}f_{c-1}\subseteq[a+1,b-\varphi(a)+1]\subseteq[a+1,b-\varphi(a)+1+1]\), and we put \(i:=1\). This concludes the claim that there exists \(i\in[\varphi(a)-1]\) such that \(\operatorname{span}f_{c-1}\subseteq[a+i,b-\varphi(a)+i+1]\). By (r5), (r3), and Observation 3, we have \[j_{\mathcal{S}_{\varphi}}(a,b)=j_{\mathcal{S}_{\varphi}}(x) =\operatorname{depth}_{\mathcal{S}_{\varphi}}(f_{c})\] \[=\operatorname{depth}_{\mathcal{S}_{\varphi}}(f_{c-1})+2\] \[=j_{\mathcal{S}_{\varphi}}(x|f_{c-1})+2\leqslant j_{\mathcal{S}_{\varphi}}(\min\operatorname{supp}f_{c-1},\max\operatorname{supp}f_{c-1})+2\] \[\leqslant j_{\mathcal{S}_{\varphi}}(a+i,b-\varphi(a)+i+1)+2.\qed\] The above lemma justifies the following definition. For all \(a,b\in\mathbb{N}\) with \(a\leqslant b\) let \[\widehat{j}(a,b):=\begin{cases}1&\text{if }|[a,b]|\leqslant\varphi(a),\\ 2+\max\left(\left\{\widehat{j}(a+i,b-\varphi(a)+i+1):i\in[\varphi(a)-1]\right\}\cup\{1\}\right)&\text{otherwise.}\end{cases}\] Clearly, for all \(a,b\in\mathbb{N}\), we have \(j_{\mathcal{S}_{\varphi}}(a,b)\leqslant\widehat{j}(a,b)\). Moreover, applying an elementary induction, we obtain a monotonicity result for \(\widehat{j}\) analogous to Observation 3 (see Observation 31 below). **Observation 31**.: _For all positive integers \(a,b,c,d\) such that \([a,b]\subseteq[c,d]\) we have \(\widehat{j}(a,b)\leqslant\widehat{j}(c,d)\)._
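The recursion for \(\widehat{j}\) is readily executable. The sketch below (our illustration; the concrete choice \(\varphi(a)=2a\) is ours and is just one increasing, superadditive example) evaluates \(\widehat{j}\) with memoization and probes Observation 31 on small intervals.

```python
from functools import lru_cache

def phi(a):                 # one increasing and superadditive choice of phi
    return 2 * a

@lru_cache(maxsize=None)
def j_hat(a, b):
    """The recursion defining the upper-bound surrogate above."""
    if b - a + 1 <= phi(a):
        return 1
    return 2 + max([j_hat(a + i, b - phi(a) + i + 1)
                    for i in range(1, phi(a))] + [1])

# Observation 31 (monotonicity of j_hat), probed on small intervals:
assert all(j_hat(a, b) <= j_hat(c, d)
           for c in range(1, 5) for d in range(c, 35)
           for a in range(c, d + 1) for b in range(a, d + 1))
```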
Furthermore, we can prove a stronger property of \(\widehat{j}\) that is intuitive for \(j_{\mathcal{S}_{\varphi}}\) but apparently difficult and technical to derive directly. **Lemma 32**.: _For all \(a,b,t\in\mathbb{N}\), we have \(\widehat{j}(a+t,b+t)\leqslant\widehat{j}(a,b)\)._ Proof.: We proceed by induction on \(s=b-a\). If \(|[a,b]|\leqslant\varphi(a)\), then for every \(t\in\mathbb{N}\), we have \(1=\widehat{j}(a+t,b+t)=\widehat{j}(a,b)\). Suppose that \(|[a,b]|>\varphi(a)\). By induction, \[\widehat{j}(a,b)=2+\max\left(\left\{\widehat{j}(a+i,b-\varphi(a)+i+1):i\in[\varphi(a)-1]\right\}\cup\{1\}\right)=2+\widehat{j}(a+1,b-\varphi(a)+2).\] Similarly, for every \(t\in\mathbb{N}\), \[\widehat{j}(a+t,b+t)=2+\widehat{j}(a+t+1,b+t-\varphi(a+t)+2).\] Therefore, it suffices to prove that \[\widehat{j}(a+t+1,b+t-\varphi(a+t)+2)\leqslant\widehat{j}(a+1,b-\varphi(a)+2).\] By Observation 31 and since \(\varphi(a)<\varphi(a+t)\), \[\widehat{j}(a+t+1,b+t-\varphi(a+t)+2)\leqslant\widehat{j}(a+t+1,b+t-\varphi(a)+2).\] Finally, by induction, \[\widehat{j}(a+t+1,b+t-\varphi(a)+2)\leqslant\widehat{j}(a+1,b-\varphi(a)+2),\] which ends the proof. In particular, the above lemma gives that for all \(a,b\in\mathbb{N}\), \[\widehat{j}(a,b)=\begin{cases}1&\text{if }|[a,b]|\leqslant\varphi(a),\\ 2+\widehat{j}(a+1,b-\varphi(a)+2)&\text{otherwise.}\end{cases}\] **Lemma 33**.: _For all \(a,b\in\mathbb{N}\) and every \(c\in\mathbb{N}_{0}\), if_ \[|[a,b]|\leqslant\sum_{i=0}^{c}\varphi(a+i)-c,\] _then_ \[\widehat{j}(a,b)\leqslant 2c+1.\] Proof.: We proceed by induction on \(c\). If \(c=0\), then \(|[a,b]|\leqslant\varphi(a)\) clearly yields \(\widehat{j}(a,b)=1\). Suppose that \(c\geqslant 1\). If \(|[a,b]|\leqslant\varphi(a)\), then the assertion follows; hence, let us assume otherwise. We have \(\widehat{j}(a,b)\leqslant 2+\widehat{j}(a+1,b-\varphi(a)+2)\). Observe that \[|[a+1,b-\varphi(a)+2]|=|[a,b]|-\varphi(a)+1\leqslant\sum_{i=0}^{c}\varphi(a+i)-c-\varphi(a)+1=\sum_{i=0}^{c-1}\varphi((a+1)+i)-(c-1).\] Therefore, by induction, \(\widehat{j}(a+1,b-\varphi(a)+2)\leqslant 2(c-1)+1\), and so, \(\widehat{j}(a,b)\leqslant 2c+1\). **Theorem 34**.: _For every \(n\in\mathbb{N}\) and every \(c\in\mathbb{N}_{0}\), if_ \[n\leqslant\sum_{i=1}^{c+1}\varphi(i)-c,\] _then_ \[j_{\mathcal{S}_{\varphi}}(n)\leqslant 2c+1.\] Proof.: By Lemma 33, applied with \(a=1\) and \(b=n\), we obtain \(\widehat{j}(1,n)\leqslant 2c+1\), and so, \(j_{\mathcal{S}_{\varphi}}(n)=j_{\mathcal{S}_{\varphi}}(1,n)\leqslant\widehat{j}(1,n)\). ### Upper bound for \(k\mathcal{S}_{1}\) Fix an integer \(k\) with \(k\geqslant 2\). As already mentioned, the family \(k\mathcal{S}_{1}\) does not have the insertion property; hence, we need a different approach than in the case of \(\mathcal{S}_{\varphi}\). **Theorem 35**.: _For every \(n\in\mathbb{N}\), we have_ \[j_{2\mathcal{S}_{1}}(n)\leqslant 4\log n+25,\] _and for every integer \(k\) with \(k\geqslant 3\) we have_ \[j_{k\mathcal{S}_{1}}(n)\leqslant\frac{8}{k-2}\log n+3.\] Proof.: Most of the proof is the same for the case of \(k=2\) and \(k\geqslant 3\); however, there is one detail that differs. Let us start by fixing some \(n\in\mathbb{N}\) and an integer \(k\) with \(k\geqslant 2\). Next, let us fix \(x\in c_{00}^{+}\) such that \(j_{k\mathcal{S}_{1}}(n)=j_{k\mathcal{S}_{1}}(x)\). We apply Lemma 29 to the family \(k\mathcal{S}_{1}\) and \(x\) to obtain \(c\in\mathbb{N}_{0}\) and \(f_{0},\ldots,f_{c}\in W(k\mathcal{S}_{1})\) such that (r1)-(r7) hold. Note that by (r2), (r1), and (r5), \[j_{k\mathcal{S}_{1}}(n)=j_{k\mathcal{S}_{1}}(x)=\operatorname{depth}_{k\mathcal{S}_{1}}(f_{c})\leqslant 2c+3.\] In the case where \(c=0\), this concludes the proof, so assume that \(c\geqslant 1\). Let \(k^{\prime}:=\lfloor k/2\rfloor\) and define \[Z:=\{m\in[c]:\min\operatorname{supp}f_{m-1}\geqslant 2^{k^{\prime}}\min\operatorname{supp}f_{m}\}.\] Let \(z:=|Z|\).
Clearly, \[n\geqslant\min\operatorname{supp}f_{0}\geqslant 2^{k^{\prime}z}\min\operatorname{supp}f_{c}\geqslant 2^{k^{\prime}z}.\] Let \(Z^{\prime}:=[c]\setminus Z\) and \(z^{\prime}:=|Z^{\prime}|\). Fix some \(m\in Z^{\prime}\). Let \(F_{1}\) and \(F_{2}\) be as in (r7). We have, \[\min F_{1}=\min\operatorname{supp}f_{m}\leqslant\max F_{1}\leqslant\min\operatorname{supp}f_{m-1}<2^{k^{\prime}}\min\operatorname{supp}f_{m}.\] Therefore, by Observation 7, \(E_{i}(F_{1}\cup F_{2})\subseteq F_{2}\) for every integer \(i\) with \(i\geqslant k^{\prime}+1\). Since \(F_{1}\cup F_{2}\in\operatorname{full}_{f_{m}}(k\mathcal{S}_{1})\) and \(\operatorname{depth}_{k\mathcal{S}_{1}}(f_{m})\geqslant 2\), we have \(\operatorname{span}f_{m}\notin k\mathcal{S}_{1}\). It follows that the assumptions of Lemma 11 are satisfied, and so, \(E_{1}(F_{1}\cup F_{2}),\ldots,E_{k-1}(F_{1}\cup F_{2})\) are full Schreier sets. First, consider the case where \(k\geqslant 3\). For each \(i\in[k^{\prime}+1,k-1]\), the set \(E_{i}(F_{1}\cup F_{2})\) is a full Schreier set and is a subset of \(F_{2}\). What is more, \(F_{2}\subseteq[\max\operatorname{supp}f_{m-1}+1,\max\operatorname{supp}f_{m}]\). By Observation 7, \[\max\operatorname{supp}f_{m}\geqslant 2^{k-1-k^{\prime}}(\max\operatorname{supp}f_{m-1}+1)\geqslant 2^{k-1-k^{\prime}}\max\operatorname{supp}f_{m-1}.\] Next, we focus on the case where \(k=2\). By Lemma 13, \[\max\operatorname{supp}f_{m}/2+2\geqslant\min E_{2}(F_{1}\cup F_{2})\geqslant\min F_{2}>\max\operatorname{supp}f_{m-1}.\] In particular, \(\max\operatorname{supp}f_{m}\geqslant 2\max\operatorname{supp}f_{m-1}-4\). Summing up, in the case of \(k\geqslant 3\), \[n\geqslant\max\operatorname{supp}f_{c}\geqslant 2^{(k-1-k^{\prime})z^{\prime}}\max\operatorname{supp}f_{0}\geqslant 2^{(k-1-k^{\prime})z^{\prime}},\] and in the case of \(k=2\), \[n\geqslant\max\operatorname{supp}f_{c}\geqslant 2^{z^{\prime}}\max\operatorname{supp}f_{0}-4z^{\prime}\geqslant 2^{z^{\prime}}-4z^{\prime}.\] Combining this with the relation of \(n\) and \(z\), if \(k\geqslant 3\), we have \[n\geqslant 2^{(k/2-1)\max\{z,z^{\prime}\}}\geqslant 2^{(k/2-1)c/2}.\] Finally, \[\frac{8}{k-2}\log n+3\geqslant 2c+3\geqslant j_{k\mathcal{S}_{1}}(n).\] Similarly, in the case of \(k=2\), for \(c\geqslant 12\), we have \[n\geqslant 2^{\max\{z,z^{\prime}\}}-4\max\{z,z^{\prime}\}\geqslant 2^{c/2}-2c\geqslant 2^{c/2-1},\] and so, \(4\log n+7\geqslant 2c+3\geqslant j_{k\mathcal{S}_{1}}(n)\). If \(c<12\), then \(j_{k\mathcal{S}_{1}}(n)\leqslant 25\), thus, the bound in the assertion holds. ### Upper bound for \(\mathcal{S}_{2}\) The family \(\mathcal{S}_{2}\) has the insertion property; however, we will also use some ideas from the previous section. Note that the proof of the upper bound for \(\mathcal{S}_{3}\) is very similar. **Theorem 36**.: _For every \(n\in\mathbb{N}\), we have_ \[j_{\mathcal{S}_{2}}(n)\leqslant 8\sqrt{\log n}+9.\] Proof.: Fix some \(n\in\mathbb{N}\) and \(x\in c_{00}^{+}\) such that \(j_{\mathcal{S}_{2}}(n)=j_{\mathcal{S}_{2}}(x)\). We apply Lemma 29 to the family \(\mathcal{S}_{2}\) and \(x\) to obtain \(c\in\mathbb{N}_{0}\) and \(f_{0},\ldots,f_{c}\in W(\mathcal{S}_{2})\) such that (r1)-(r7) hold. Note that by (r2), (r1), and (r5) we have \[j_{\mathcal{S}_{2}}(n)=j_{\mathcal{S}_{2}}(x)=\operatorname{depth}_{\mathcal{S}_{2}}(f_{c})\leqslant 2c+3.\] In the case where \(c=0\), this concludes the proof, so assume that \(c\geqslant 1\). 
Define \[Z:=\{m\in[c]:\min\operatorname{supp}f_{m-1}\geqslant\min\operatorname{supp}f_{m}\cdot 2^{\min\operatorname{supp}f_{m}/2}\}.\] Let \(z:=|Z|\). By Lemma 26 and (r6), we have \(\min\operatorname{supp}f_{m}\geqslant c-m+1\). By (r4), we obtain \[n\geqslant\min\operatorname{supp}f_{0}\geqslant\prod_{i=c-z+1}^{c}2^{\min\operatorname{supp}f_{i}/2}\min\operatorname{supp}f_{c}\geqslant\prod_{i=1}^{z}2^{i/2}\geqslant 2^{(z^{2}+z)/4}.\] Let \(Z^{\prime}:=[c]\setminus Z\) and \(z^{\prime}:=|Z^{\prime}|\). Fix some \(m\in Z^{\prime}\). Let \(F_{1}\) and \(F_{2}\) be as in (r7), and let \(a:=\min F_{1}\). We have, \[a=\min F_{1}=\min\operatorname{supp}f_{m}\leqslant\max F_{1}\leqslant\min\operatorname{supp}f_{m-1}<\min\operatorname{supp}f_{m}\cdot 2^{\min\operatorname{supp}f_{m}/2}=2^{a/2}a.\] In particular, \(F_{1}\subseteq[a,2^{a/2}a]\). Let \(s\) be the greatest positive integer such that \(E_{s}(F_{1}\cup F_{2})\subseteq F_{1}\), or let \(s:=0\) if \(E_{1}(F_{1}\cup F_{2})\not\subseteq F_{1}\). By Observation 7, \(2^{a/2}a\geqslant 2^{s}a\), and so \(a/2\geqslant s\). It follows that \(E_{i}(F_{1}\cup F_{2})\subseteq F_{2}\) for every integer \(i\) with \(i>a/2\). Since \(F_{1}\cup F_{2}\in\operatorname{full}_{f_{m}}(\mathcal{S}_{2})\) and \(\operatorname{depth}_{\mathcal{S}_{2}}(f_{m})\geqslant 2\), we have \(\operatorname{span}f_{m}\notin\mathcal{S}_{2}\). This ensures that the assumptions of Lemma 12 are satisfied, and so \(E_{1}(F_{1}\cup F_{2}),\ldots,E_{a-1}(F_{1}\cup F_{2})\) are full Schreier sets. For each \(a/2<i\leqslant a\), the set \(E_{i}(F_{1}\cup F_{2})\) is a full Schreier set and is a subset of \(F_{2}\). What is more, \(F_{2}\subseteq[\max\operatorname{supp}f_{m-1}+1,\max\operatorname{supp}f_{m}]\). By Observation 7, \[\max\operatorname{supp}f_{m}\geqslant 2^{a-(a/2+1)}(\max\operatorname{supp}f_{m-1}+1)\geqslant 2^{a/2-1}\max\operatorname{supp}f_{m-1}.\] Therefore, \[n\geqslant\max\operatorname{supp}f_{c}\geqslant\prod_{i=c-z^{\prime}+1}^{c}2^{\min\operatorname{supp}f_{i}/2-1}\max\operatorname{supp}f_{0}\geqslant\prod_{i=1}^{z^{\prime}}2^{i/2-1}=2^{(z^{\prime 2}-3z^{\prime})/4}.\] Combining this with the relation of \(n\) and \(z\) and using \(z+z^{\prime}=c\), we obtain \[n\geqslant\max\{2^{(z^{2}+z)/4},2^{(z^{\prime 2}-3z^{\prime})/4}\}\geqslant 2^{(c^{2}-6c)/16}\geqslant 2^{(c-3)^{2}/16}.\] Finally, \[8\sqrt{\log n}+9\geqslant 2c+3\geqslant j_{\mathcal{S}_{2}}(n).\qed\] ### Upper bound for \(\mathcal{S}_{3}\) **Theorem 37**.: _For every \(n\in\mathbb{N}\) we have_ \[j_{\mathcal{S}_{3}}(n)\leqslant 8\sqrt{\log^{*}n}+9.\] Proof.: Fix some \(n\in\mathbb{N}\) and \(x\in c_{00}^{+}\) such that \(j_{\mathcal{S}_{3}}(n)=j_{\mathcal{S}_{3}}(x)\). We apply Lemma 29 to the family \(\mathcal{S}_{3}\) and \(x\) to obtain \(c\in\mathbb{N}_{0}\) and \(f_{0},\ldots,f_{c}\in W(\mathcal{S}_{3})\) such that (r1)-(r7) hold. Note that by (r2), (r1), and (r5) we have \[j_{\mathcal{S}_{3}}(n)=j_{\mathcal{S}_{3}}(x)=\operatorname{depth}_{\mathcal{S}_{3}}(f_{c})\leqslant 2c+3.\] In the case where \(c=0\), this concludes the proof, so assume that \(c>0\). Define \[Z:=\{m\in[c]:\min\operatorname{supp}f_{m-1}\geqslant\tau(\lfloor\min\operatorname{supp}f_{m}/2\rfloor,\min\operatorname{supp}f_{m})\}.\] Let \(z:=|Z|\). By Lemma 26 and (r6) we have \(\min\operatorname{supp}f_{m}\geqslant c-m+1\). 
By (r4), we obtain \[n\geqslant\min\operatorname{supp}f_{0}\geqslant\tau\Big{(}\sum_{i=c-z+1}^{c}\lfloor\min\operatorname{supp}f_{i}/2\rfloor,\min\operatorname{supp}f_{c}\Big{)}\geqslant\tau\Big{(}\sum_{i=1}^{z}\lfloor i/2\rfloor,1\Big{)}\geqslant\tau((z^{2}-3z)/4,1).\] Let \(Z^{\prime}:=[c]\setminus Z\) and \(z^{\prime}:=|Z^{\prime}|\). Fix some \(m\in Z^{\prime}\). Let \(F_{1}\) and \(F_{2}\) be as in (r7), and let \(a:=\min F_{1}\). We have, \[a=\min F_{1}=\min\operatorname{supp}f_{m} \leqslant\max F_{1}\leqslant\min\operatorname{supp}f_{m-1}\] \[<\tau(\lfloor\min\operatorname{supp}f_{m}/2\rfloor,\min\operatorname{supp}f_{m})=\tau(\lfloor a/2\rfloor,a).\] In particular, \(F_{1}\subseteq[a,\tau(\lfloor a/2\rfloor,a)]\). Let \(s\) be the greatest positive integer such that \(E_{s}(F_{1}\cup F_{2})\subseteq F_{1}\), or let \(s:=0\) if \(E_{1}(F_{1}\cup F_{2})\not\subseteq F_{1}\). By Observation 8, \(\tau(\lfloor a/2\rfloor,a)\geqslant\tau(s,a)\), and so, \(\lfloor a/2\rfloor\geqslant s\). It follows that \(E_{i}(F_{1}\cup F_{2})\subseteq F_{2}\) for every integer \(i\) with \(i>\lfloor a/2\rfloor\). Since \(F_{1}\cup F_{2}\in\operatorname{full}_{f_{m}}(\mathcal{S}_{3})\) and \(\operatorname{depth}_{\mathcal{S}_{3}}(f_{m})\geqslant 2\), we have \(\operatorname{span}f_{m}\notin\mathcal{S}_{3}\). This ensures that the assumptions of Lemma 14 are satisfied, and so, \(E_{1}(F_{1}\cup F_{2}),\ldots,E_{a-1}(F_{1}\cup F_{2})\) are \(\mathcal{S}_{2}\)-full sets. For each \(\lfloor a/2\rfloor<i\leqslant a\), the set \(E_{i}(F_{1}\cup F_{2})\) is an \(\mathcal{S}_{2}\)-full set and is a subset of \(F_{2}\). What is more, \(F_{2}\subseteq[\max\operatorname{supp}f_{m-1}+1,\max\operatorname{supp}f_{m}]\). By Observation 8, \[\max\operatorname{supp}f_{m}\geqslant\tau(a-(\lfloor a/2\rfloor+1),\max\operatorname{supp}f_{m-1}+1)\geqslant\tau(a/2-1,\max\operatorname{supp}f_{m-1}).\] Therefore, \[n\geqslant\max\operatorname{supp}f_{c}\geqslant\tau\Big{(}\sum_{i=c-z^{\prime}+1}^{c}(\min\operatorname{supp}f_{i}/2-1),\max\operatorname{supp}f_{0}\Big{)}\geqslant\tau\Big{(}\sum_{i=1}^{z^{\prime}}(i/2-1),1\Big{)}=\tau((z^{\prime 2}-3z^{\prime})/4,1).\] Combining this with the relation of \(n\) and \(z\) and using \(z+z^{\prime}=c\), we obtain \[n\geqslant\max\{\tau((z^{2}-3z)/4,1),\tau((z^{\prime 2}-3z^{\prime})/4,1)\}\geqslant\tau((c^{2}-6c)/16,1)\geqslant\tau((c-3)^{2}/16,1).\] Finally, \[8\sqrt{\log^{*}n}+9\geqslant 2c+3\geqslant j_{\mathcal{S}_{3}}(n).\qed\] ## 8. Open problems To conclude, we want to mention a few interesting research directions related to computing the function \(j(n)\). The first natural problem is to give an even more precise estimation of the original function \(j_{\mathcal{S}_{1}}(n)\), that is, up to an additive constant. **Problem 1**.: _Find a real number \(C\) such that there exist real numbers \(A,B\) such that for every \(n\in\mathbb{N}\),_ \[C\sqrt{n}+A\leqslant j(n)\leqslant C\sqrt{n}+B.\] By Theorem 1, we know that if the constant \(C\) exists, then \(C\in[\sqrt{2},2]\). Recall that a generalized Tsirelson's norm \(T[\theta,\mathcal{F}]\) depends on a real number \(0<\theta<1\) and a regular family \(\mathcal{F}\). In this paper, we studied the function \(j_{T[\frac{1}{2},\mathcal{F}]}\) for some regular families \(\mathcal{F}\). Another approach is to determine the order of magnitude of the function \(j_{T[\theta,\mathcal{F}]}\) when we fix \(\mathcal{F}=\mathcal{S}_{1}\) and change \(\theta\). There are two versions of this problem. 
**Problem 2**.: _For a fixed real number \(\theta\) with \(0<\theta<1\), compute the order of magnitude of the function \(j_{T[\theta,\mathcal{S}_{1}]}(n)\)._ **Problem 3**.: _Compute the order of magnitude of \(j_{T[\theta,\mathcal{S}_{1}]}(n)\) viewed as a function of the two variables \(\theta\) and \(n\)._ The last problem is inspired by the discussion in [13]. We believe that the problem of computing the Tsirelson norm \(\|x\|_{T}\) for a vector \(x\in c_{00}\) with \(\operatorname{supp}x\subseteq[n]\) can be solved using a dynamic programming scheme in polynomial time. More precisely, in time \(\operatorname{poly}(n)\cdot j(n)\), which is clearly polynomial in \(n\). The situation seems to be similar in the case of \(\|x\|_{\mathcal{S}_{\varphi}}\) for any function \(\varphi\). In particular, this gives a polynomial-time algorithm for any fast-growing function \(\varphi\). However, for slow-growing functions, it does not give a satisfactory running time; recall that we established a lower bound on \(j_{\mathcal{S}_{\varphi}}\), which in the case of slow-growing functions is a fast-growing function. **Problem 4**.: _Is there a non-decreasing function \(\varphi:\mathbb{N}\to\mathbb{N}\) such that the problem of computing the norm \(\|\cdot\|_{T[\frac{1}{2},\mathcal{S}_{\varphi}]}\) is hard in the sense of computational complexity?_
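For readers who wish to experiment with these quantities, the following is a minimal Python sketch (ours, not part of the results above) of the simplified recursion for \(\widehat{j}\) obtained after Lemma 32. The particular choice of \(\varphi\) is illustrative only, and we assume \(\varphi\) is unbounded so that the recursion terminates; the memoization mirrors the dynamic-programming bookkeeping mentioned before Problem 4.

```python
from functools import lru_cache

def j_hat(a: int, b: int, phi) -> int:
    """Simplified recursion for the upper-bound function j_hat:
    j_hat(a, b) = 1                                 if |[a, b]| <= phi(a),
    j_hat(a, b) = 2 + j_hat(a + 1, b - phi(a) + 2)  otherwise."""
    @lru_cache(maxsize=None)
    def rec(a, b):
        if b - a + 1 <= phi(a):      # base case: |[a, b]| <= phi(a)
            return 1
        return 2 + rec(a + 1, b - phi(a) + 2)
    return rec(a, b)

# Illustrative choice phi(a) = a: then j_{S_phi}(n) <= j_hat(1, n); the
# printed values 1, 9, 29 equal the bound 2c + 1 of Theorem 34 for the
# least admissible c.
if __name__ == "__main__":
    for n in (1, 10, 100):
        print(n, j_hat(1, n, lambda a: a))
```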
2305.09345
Cauchy dual and Wold-type decomposition for bi-regular covariant representations
The notion of Cauchy dual for left-invertible covariant representations was studied by Trivedi and Veerabathiran. Using the Moore-Penrose inverse, we extend this notion to covariant representations having closed range and explore several useful properties. We obtain a Wold-type decomposition for regular completely bounded covariant representations whose Moore-Penrose inverse is regular. Also, we discuss an example related to the non-commutative bilateral weighted shift. We prove that the Cauchy dual of the concave covariant representation $(\sigma, V)$ modulo $N(\widetilde{V})$ is hyponormal modulo $N(\widetilde{V})$.
Dimple Saini
2023-05-16T10:56:37Z
http://arxiv.org/abs/2305.09345v4
# Cauchy dual and Wold-type decomposition for bi-regular covariant representations ###### Abstract. The notion of Cauchy dual for left-invertible covariant representations was studied in [26]. Using the Moore-Penrose inverse, we extend this notion to covariant representations having closed range and explore several useful properties. Based on [10], we obtain a Wold-type decomposition for the class of regular completely bounded covariant representations whose Moore-Penrose inverse is regular. Also, we discuss examples related to the bilateral weighted shift considered in [23]. We prove that the Cauchy dual of a concave covariant representation modulo \(N(\widetilde{V})\) is hyponormal modulo \(N(\widetilde{V})\). Key words and phrases:Covariant representations, wandering subspaces, Moore-Penrose inverse, regular operator, tensor product 2020 Mathematics Subject Classification: Primary 46L08, 47A15, 47B37; Secondary 47B38, 47L30, 47L55 ## 1. Introduction The fundamental theorem of Wold [27] says that every isometry on a Hilbert space is either a unitary, or a shift, or decomposes uniquely as a direct sum of both. In [5], Beurling proved that each closed \(M_{z}\)-invariant subspace of the Hardy space \(H^{2}(\mathbb{D})\) is the range of an inner operator, which is an application of the Wold decomposition. A generalization of Beurling's theorem was presented by Halmos [15] as the wandering subspace theorem. **Definition 1.1**.: _For \(n\in\mathbb{N},\) a bounded linear operator \(V\) on a Hilbert space \(\mathcal{H}\) is called \(n\)-expansive if_ \[\sum_{i=0}^{n}(-1)^{i}\binom{n}{i}V^{*i}V^{i}\leq 0.\] _If \(V\) is \(n\)-expansive for all \(n\geq 1,\) then we say that \(V\) is completely hyperexpansive._ Aleman [2] and Athavale [3] studied completely hyperexpansive operators. Richter [22] proved that every 2-expansive (or concave) operator is expansive, and hence left invertible. In [22], Richter showed that the wandering subspace theorem holds for every pure concave operator. Shimorin [24] gave an elementary proof of Richter's wandering subspace theorem by establishing a Wold-type decomposition for concave operators. Olofsson [19] generalized Richter's wandering subspace theorem to pure expansive operators. **Definition 1.2**.: _Let \(V\) be a bounded linear operator on a Hilbert space \(\mathcal{H}.\) The reduced minimum modulus of \(V\) is defined by_ \[\gamma(V):=\begin{cases}\inf\{\|Vh\|:\|h\|=1,h\in N(V)^{\perp}\}&\text{if }V\neq 0,\\ \infty&\text{if }V=0.\end{cases}\] Ezzahraoui, Mbekhta, and Zerouali [8] extended Olofsson's wandering subspace theorem to regular operators using the reduced minimum modulus \(\gamma(V)\geq 1\) in the following way: **Theorem 1.3**.: \((\)_Ezzahraoui-Mbekhta-Zerouali\()\) Let \(V\) be a regular bounded linear operator on a Hilbert space \(\mathcal{H}\) such that_ \[\|V^{n}h\|^{2}\leq d_{n}(\|Vh\|^{2}-\|V^{\dagger}Vh\|^{2})+\|V^{\dagger}Vh\|^{2},\ \ h\in\mathcal{H},\] _with \(\gamma(V)\geq 1\) and \(\sum_{n\geq 2}\frac{1}{d_{n}}=\infty\). Then \(\mathcal{H}=[\mathcal{H}\ominus V\mathcal{H}]_{V}+\bigcap_{n=0}^{\infty}V^{n}\mathcal{H}.\)_ In [10], Ezzahraoui, Mbekhta, and Zerouali showed that a Wold-type decomposition holds for the class of regular operators with regular Moore-Penrose inverse. The notion of Cauchy dual for left-invertible operators was introduced by Shimorin [24]. Ezzahraoui, Mbekhta, and Zerouali [9] extended the notion of Cauchy dual to operators with closed range. 
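Before turning to covariant representations, it may help to see the two ingredients above in the simplest possible setting. The following is a small finite-dimensional sketch in Python/NumPy (ours, for illustration only; the test matrix and all names are our own choices, and matrices stand in for the operators considered in this paper): the reduced minimum modulus \(\gamma(V)\) of Definition 1.2 is the smallest non-zero singular value, and the Cauchy dual of an operator with closed range is \(V^{\prime}=V(V^{*}V)^{\dagger}\), which coincides with \((V^{\dagger})^{*}\).

```python
import numpy as np

# Minimal finite-dimensional sketch (not from the paper): matrices stand in
# for the operators, and the rank-deficient test matrix is an arbitrary choice.
rng = np.random.default_rng(0)
V = rng.standard_normal((5, 3)) @ np.diag([1.0, 2.0, 0.0])

def gamma(V, tol=1e-10):
    """Reduced minimum modulus (Definition 1.2): inf ||Vh|| over unit vectors
    h in N(V)^perp, i.e. the smallest non-zero singular value; by convention
    gamma(0) = infinity."""
    s = np.linalg.svd(V, compute_uv=False)
    s = s[s > tol]
    return s.min() if s.size else np.inf

V_dag = np.linalg.pinv(V)                            # Moore-Penrose inverse
V_prime = V @ np.linalg.pinv(V.conj().T @ V)         # Cauchy dual V' = V (V*V)^+

print(gamma(V))                                      # smallest non-zero singular value
print(np.allclose(V_prime, V_dag.conj().T))          # V' = (V^+)^*        -> True
print(np.allclose(V @ V_dag @ V, V))                 # a Penrose equation  -> True
print(np.allclose(V_prime @ V.conj().T, V @ V_dag))  # V'V* = P_{R(V)}     -> True
```

In finite dimensions every operator automatically has closed range, which is why no extra hypothesis appears in the sketch; the closed-range assumption in the paper plays exactly this role.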
Using the Cauchy dual technique, Chavan [6] proved the following theorem: **Theorem 1.4**.: _Suppose that \(V\) is a concave operator on a Hilbert space \(\mathcal{H}\); then the Cauchy dual \(V^{\prime}\) of \(V\) is a hyponormal contraction._ Cuntz [7] studied a \(C^{*}\)-algebra, known as the Cuntz algebra, generated by isometries \(V_{1},\ldots,V_{n}\) (\(n\geq 2\)) on a Hilbert space with \(\sum_{i=1}^{n}V_{i}V_{i}^{*}=I\); in particular, the \(V_{i}\) have pairwise orthogonal ranges. The Wold decomposition for two isometries with orthogonal ranges was first introduced by Frazho [12]. Popescu [21] extended this decomposition to the case of an infinite sequence of isometries with orthogonal final spaces. Using the isometric representations of \(C^{*}\)-correspondences, Pimsner [20] generalized the concept of Cuntz algebras. Muhly and Solel [18] presented the Wold decomposition for the isometric representations of tensor algebras of \(C^{*}\)-correspondences based on [21]. Trivedi and Veerabathiran [26] proved a Halmos-Richter-type wandering subspace theorem for concave covariant representations of \(C^{*}\)-correspondences and introduced the notion of Cauchy dual for left-invertible representations. Rohilla, Veerabathiran, and Trivedi [23] extended this decomposition to regular covariant representations having reduced minimum modulus \(\gamma(\widetilde{V})\geq 1\) that satisfy the growth condition. The main purpose of this paper is to investigate the conditions under which the wandering subspace theorem fails, and to study the Cauchy dual of concave and bi-regular covariant representations of a \(C^{*}\)-correspondence. The section-wise plan is as follows: In Section 2, we recall the definition of the generalized inverse for completely bounded covariant representations and derive its properties. In Section 3, we introduce the notion of the Cauchy dual of covariant representations and obtain a Wold-type decomposition for the class of regular completely bounded covariant representations with regular Moore-Penrose inverse. In Section 4, we study the powers of the Moore-Penrose inverse of completely bounded covariant representations. In Section 5, we discuss the Cauchy dual of concave completely bounded covariant representations based on [9]. ### Preliminaries and Notations Here, we review some basic concepts from [16, 17, 18]. Suppose that \(\mathcal{B}\) is a \(C^{*}\)-algebra and \(E\) is a \(C^{*}\)-_correspondence_ over \(\mathcal{B}\) with the left action \(\phi:\mathcal{B}\rightarrow\mathcal{L}(E)\) given by \(b\xi=\phi(b)\xi\), \(b\in\mathcal{B}\), \(\xi\in E,\) where \(\mathcal{L}(E)\) is the collection of all adjointable operators on \(E.\) In this paper, each \(*\)-homomorphism is nondegenerate. Throughout this paper, \(\mathcal{H}\) denotes a Hilbert space, \(E\) a \(C^{*}\)-correspondence, and \(B(\mathcal{H})\) the algebra of bounded linear operators on \(\mathcal{H}.\) First, we recall the following definition. **Definition 1.5**.: _The pair \((\sigma,V)\) is called a covariant representation (cf. [17]) of \(E\) on \(\mathcal{H}\) if_ \[V(b\xi c)=\sigma(b)V(\xi)\sigma(c)\qquad(\xi\in E,b,c\in\mathcal{B}),\] _where \(V:E\to B(\mathcal{H})\) is a linear map and \(\sigma:\mathcal{B}\to B(\mathcal{H})\) is a representation. We say that \((\sigma,V)\) is a completely bounded covariant representation (simply, a c.b.c. representation) if \(V\) is completely bounded._ The following result, due to Muhly and Solel [17], is useful for classifying the c.b.c. representations of a \(C^{*}\)-correspondence. 
**Lemma 1.6**.: _The function \((\sigma,V)\mapsto\widetilde{V}\) provides a bijection between the set of all c.b.c. representations \((\sigma,V)\) and the set of all bounded linear maps \(\widetilde{V}:E\otimes_{\sigma}\mathcal{H}\rightarrow\mathcal{H}\) defined by_ \[\widetilde{V}(\eta\otimes h)=V(\eta)h\qquad(h\in\mathcal{H},\eta\in E),\] _such that \(\sigma(b)\widetilde{V}=\widetilde{V}(\phi(b)\otimes I_{\mathcal{H}})\) for all \(b\in\mathcal{B}.\)_ **Definition 1.7**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}.\) A closed subspace \(\mathcal{K}\) of \(\mathcal{H}\) is called \((\sigma,V)\)-invariant (resp. \((\sigma,V)\)-reducing) (cf. [26]) if it is \(\sigma\)-invariant and \(\mathcal{K}\) (resp. both \(\mathcal{K}\) and \(\mathcal{K}^{\perp}\)) is invariant under each operator \(V(\xi)\), \(\xi\in E.\)_ For each \(n\in\mathbb{N}_{0}(=\mathbb{N}\cup\{0\}),\)\(E^{\otimes n}=E\otimes_{\phi}\cdots\otimes_{\phi}E\) (\(n\)-times) (here \(E^{\otimes 0}=\mathcal{B}\)) is a \(C^{*}\)-correspondence over \(\mathcal{B},\) with the left module action of \(\mathcal{B}\) on \(E^{\otimes n}\) defined as \[\phi_{n}(b)(\xi_{1}\otimes\cdots\otimes\xi_{n})=b\xi_{1}\otimes\cdots\otimes\xi_{n},\ \ \ b\in\mathcal{B},\xi_{i}\in E.\] For \(n\in\mathbb{N},\) define \(\widetilde{V}_{n}:E^{\otimes n}\otimes\mathcal{H}\rightarrow\mathcal{H}\) by \[\widetilde{V}_{n}(\xi_{1}\otimes\ldots\otimes\xi_{n}\otimes h)=V(\xi_{1})\ldots V(\xi_{n})h\] for all \(\xi_{i}\in E,h\in\mathcal{H}.\) The _Fock space_ of \(E\) (cf. [11]), \(\mathcal{F}(E)=\bigoplus_{n\geq 0}E^{\otimes n},\) is a \(C^{*}\)-correspondence over \(\mathcal{B},\) where the left module action of \(\mathcal{B}\) on \(\mathcal{F}(E)\) is defined by \[\phi_{\infty}(b)\left(\oplus_{n\geq 0}\xi_{n}\right)=\oplus_{n\geq 0}\phi_{n}(b)\xi_{n},\ \xi_{n}\in E^{\otimes n}.\] For \(\xi\in E,\) the _creation operator_ \(V_{\xi}\) on \(\mathcal{F}(E)\) is defined by \[V_{\xi}(\eta)=\xi\otimes\eta,\ \eta\in E^{\otimes n},n\geq 0.\] For more details see [17, 18, 26]. Now, we recall the definition of a regular c.b.c. representation from [23]. We denote by \(R(\widetilde{V})\) and \(N(\widetilde{V})\) the range of \(\widetilde{V}\) and the kernel of \(\widetilde{V},\) respectively. **Definition 1.8**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}.\) We say that \((\sigma,V)\) is regular if \(N(\widetilde{V})\subseteq E\otimes R^{\infty}(\widetilde{V})\) and its range \(R(\widetilde{V})\) is closed, where \(R^{\infty}(\widetilde{V})=\bigcap_{n\geq 0}R(\widetilde{V}_{n})\) is the generalized range of \((\sigma,V).\)_ If \((\sigma,V)\) is a left-invertible c.b.c. representation (that is, \((\widetilde{V}^{*}\widetilde{V})^{-1}\widetilde{V}^{*}\) is a left inverse of \(\widetilde{V}\)), then \(R(\widetilde{V})\) is closed and \(N(\widetilde{V})=\{0\},\) and hence \((\sigma,V)\) is regular. Therefore, every left-invertible c.b.c. representation is regular. The following result is from [23, Theorem 2.2], which characterizes regular c.b.c. representations. **Theorem 1.9**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}.\) Then for each \(m,n\in\mathbb{N}\), the following statements are equivalent:_ 1. \(N(\widetilde{V})\subseteq(I_{E}\otimes\widetilde{V}_{m})(E^{\otimes(m+1)}\otimes\mathcal{H})\) _;_ 2. \(N(\widetilde{V}_{n})\subseteq(I_{E^{\otimes n}}\otimes\widetilde{V})(E^{\otimes(n+1)}\otimes\mathcal{H})\) _;_ 3. 
\(N(\widetilde{V}_{n})\subseteq(I_{E^{\otimes n}}\otimes\widetilde{V}_{m})(E^{\otimes(n+m)}\otimes\mathcal{H})\) _;_ 4. \(N(\widetilde{V}_{n})=(I_{E^{\otimes n}}\otimes\widetilde{V}_{m})N(\widetilde{V}_{m+n}).\)__ ## 2. Properties of generalized inverse of regular covariant representations In this section, we discuss the definition of the _generalized inverse_ and derive some of its useful properties. Also, we study necessary and sufficient conditions for a covariant representation to be regular. **Definition 2.1**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}.\) A bounded operator \(S:\mathcal{H}\to E\otimes\mathcal{H}\) is called a generalized inverse of \(\widetilde{V}\) if \(S\widetilde{V}S=S\) and \(\widetilde{V}S\widetilde{V}=\widetilde{V}\). For every \(n\in\mathbb{N},\) define_ \[S^{(n)}=(I_{E^{\otimes n-1}}\otimes S)(I_{E^{\otimes n-2}}\otimes S)\dots(I_{E}\otimes S)S.\] The next proposition gives an interesting property of generalized inverses. **Proposition 2.2**.: _Let \((\sigma,V)\) be a regular c.b.c. representation of \(E\) on \(\mathcal{H}\) and let \(S\) be a generalized inverse of \(\widetilde{V}\). Then \((I_{E}\otimes S)N(\widetilde{V})\subseteq E^{\otimes 2}\otimes R^{\infty}(\widetilde{V}).\)_ Proof.: Let \(\eta\in N(\widetilde{V})\subseteq E\otimes R(\widetilde{V})\); then there exists \(u\in E^{\otimes 2}\otimes\mathcal{H}\) such that \(\eta=(I_{E}\otimes\widetilde{V})u.\) We obtain \[\widetilde{V}_{2}(I_{E}\otimes S)\eta=\widetilde{V}_{2}(I_{E}\otimes S\widetilde{V})u=\widetilde{V}(I_{E}\otimes\widetilde{V}S\widetilde{V})u=\widetilde{V}(I_{E}\otimes\widetilde{V})u=\widetilde{V}\eta=0.\] From Theorem 1.9, we have \((I_{E}\otimes S)N(\widetilde{V})\subseteq N(\widetilde{V}_{2})\subseteq E^{\otimes 2}\otimes R^{\infty}(\widetilde{V}).\) The following result from [23] summarizes various properties of regular c.b.c. representations. **Proposition 2.3**.: _Let \((\sigma,V)\) be a regular c.b.c. representation of \(E\) on \(\mathcal{H}\) with generalized inverse \(S\). Then_ 1. \(R^{\infty}(\widetilde{V})\) _is closed;_ 2. \(\widetilde{V}(E\otimes R^{\infty}(\widetilde{V}))=R^{\infty}(\widetilde{V});\)__ 3. \(S(R^{\infty}(\widetilde{V}))\subseteq E\otimes R^{\infty}(\widetilde{V});\)__ 4. \(\widetilde{V}_{n}S^{(n)}\widetilde{V}_{n}=\widetilde{V}_{n}\) _for all_ \(n\in\mathbb{N}.\)__ **Proposition 2.4**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range and let \(S\) be a generalized inverse of \(\widetilde{V}\). The following statements are equivalent:_ (i) \((\sigma,V)\) _is regular;_ (ii) \((I_{E}\otimes S^{(k)})N(\widetilde{V})\subseteq E^{\otimes(k+1)}\otimes R^{\infty}(\widetilde{V})\) _for all_ \(k\geq 0;\)__ (iii) \((I_{E}\otimes S^{(k)})N(\widetilde{V})\subseteq E^{\otimes(k+1)}\otimes R(\widetilde{V})\) _for all_ \(k\geq 0.\)__ Proof.: \((i)\Rightarrow(ii)\) This follows from Propositions 2.2 and 2.3 by induction on \(k\): the case \(k=0\) is the regularity of \((\sigma,V)\), the case \(k=1\) is Proposition 2.2, and for \(k\geq 2\), 
\[(I_{E}\otimes S^{(k)})N(\widetilde{V}) =(I_{E^{\otimes 2}}\otimes S^{(k-1)})(I_{E}\otimes S)N(\widetilde{V})\subseteq E^{\otimes 2}\otimes S^{(k-1)}R^{\infty}(\widetilde{V})\] \[\subseteq E^{\otimes(k+1)}\otimes R^{\infty}(\widetilde{V}).\] \((ii)\Rightarrow(iii)\) Since \(R^{\infty}(\widetilde{V})\subseteq R(\widetilde{V}),\) we have \((I_{E}\otimes S^{(k)})N(\widetilde{V})\subseteq E^{\otimes(k+1)}\otimes R(\widetilde{V}).\) \((iii)\Rightarrow(i)\) For \(n\geq 1\) and \(\zeta\in N(\widetilde{V}_{n}),\) from [23, Lemma 5.4], we get \[\zeta=\zeta-S^{(n)}\widetilde{V}_{n}\zeta=\sum_{i=0}^{n-1}(I_{E^{\otimes n-i}}\otimes S^{(i)})(I_{E^{\otimes n-(i+1)}}\otimes P_{N(\widetilde{V})})(I_{E^{\otimes n-i}}\otimes\widetilde{V}_{i})\zeta.\] Since \((I_{E^{\otimes n-(i+1)}}\otimes P_{N(\widetilde{V})})(I_{E^{\otimes n-i}}\otimes\widetilde{V}_{i})\zeta\in E^{\otimes n-(i+1)}\otimes N(\widetilde{V})\) and \((I_{E}\otimes S^{(k)})N(\widetilde{V})\subseteq E^{\otimes(k+1)}\otimes R(\widetilde{V})\) for \(k\geq 0,\) we have \(\zeta\in E^{\otimes n}\otimes R(\widetilde{V}),\) and hence \(N(\widetilde{V}_{n})\subseteq E^{\otimes n}\otimes R(\widetilde{V}).\) From Theorem 1.9, \((\sigma,V)\) is regular. Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range. The _Moore-Penrose inverse_ \(\widetilde{V}^{\dagger}\) of \((\sigma,V)\) [23, 4, 13] is defined by \[\widetilde{V}^{\dagger}:=\widetilde{V}_{0}^{-1}P_{R(\widetilde{V})},\] where \(\widetilde{V}_{0}=\widetilde{V}|_{N(\widetilde{V})^{\perp}}:N(\widetilde{V})^{\perp}\to R(\widetilde{V})\) and \(P_{R(\widetilde{V})}\) is the orthogonal projection of \(\mathcal{H}\) onto \(R(\widetilde{V}).\) Equivalently, the Moore-Penrose inverse of \((\sigma,V)\) is the unique solution of the following four equations \[\widetilde{V}\widetilde{V}^{\dagger}\widetilde{V}=\widetilde{V},\quad\widetilde{V}^{\dagger}\widetilde{V}\widetilde{V}^{\dagger}=\widetilde{V}^{\dagger},\quad(\widetilde{V}\widetilde{V}^{\dagger})^{*}=\widetilde{V}\widetilde{V}^{\dagger},\quad(\widetilde{V}^{\dagger}\widetilde{V})^{*}=\widetilde{V}^{\dagger}\widetilde{V}.\] **Proposition 2.5**.: _[13, 23] Suppose that \((\sigma,V)\) is a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range. Then_ 1. \(R(\widetilde{V}^{\dagger})=R(\widetilde{V}^{*})=N(\widetilde{V})^{\perp};\)__ 2. \(\widetilde{V}\widetilde{V}^{\dagger}=P_{R(\widetilde{V})}\)_,_ \(\widetilde{V}^{\dagger}\widetilde{V}=P_{R(\widetilde{V}^{*})};\)__ 3. \(N(\widetilde{V}^{\dagger})=N(\widetilde{V}\widetilde{V}^{\dagger})=N(\widetilde{V}^{*});\)__ 4. \(R(\widetilde{V})=R(\widetilde{V}\widetilde{V}^{\dagger})=R(\widetilde{V}^{\dagger*});\)__ 5. \(N(\widetilde{V})=N(\widetilde{V}^{\dagger}\widetilde{V})=N(\widetilde{V}^{\dagger*});\)__ 6. \(\widetilde{V}^{*}\widetilde{V}\widetilde{V}^{\dagger}=\widetilde{V}^{\dagger}\widetilde{V}\widetilde{V}^{*}=\widetilde{V}^{*};\)__ 7. \((\widetilde{V}^{\dagger})^{\dagger}=\widetilde{V};\)__ 8. \((\widetilde{V}^{*})^{\dagger}=(\widetilde{V}^{\dagger})^{*}.\)__ **Remark 2.6**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range. 
Let \(\widetilde{V}^{\dagger}\) be the Moore-Penrose inverse of \((\sigma,V)\); then \((\widetilde{V}^{*}\widetilde{V})^{\dagger}=\widetilde{V}^{\dagger}\widetilde{V}^{*\dagger}.\) Indeed, let \(A=\widetilde{V}^{*}\widetilde{V}\) and \(B=\widetilde{V}^{\dagger}\widetilde{V}^{*\dagger}\); then \(ABA=A,BAB=B,(AB)^{*}=AB,(BA)^{*}=BA.\) From the uniqueness of the Moore-Penrose inverse, we have \((\widetilde{V}^{*}\widetilde{V})^{\dagger}=\widetilde{V}^{\dagger}\widetilde{V}^{*\dagger}.\)_ **Proposition 2.7**.: _Let \((\sigma,V)\) be a regular c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range. Then_ 1. \(N(\widetilde{V}^{\dagger(n)})\cap R(\widetilde{V}_{n})=\{0\};\)__ 2. \(R(\widetilde{V}_{n})=\{h:\ \widetilde{V}_{n}\widetilde{V}^{\dagger(n)}h=h\}.\)__ Proof.: Suppose that \((\sigma,V)\) is regular; then by [23, Remark 3.7], we get \(\{h:\ \widetilde{V}_{n}\widetilde{V}^{\dagger(n)}h=h\}\subseteq R(\widetilde{V}_{n}).\) Let \(h\in R(\widetilde{V}_{n})\); from Proposition 2.3, we have \[h=P_{R(\widetilde{V}_{n})}h=\widetilde{V}_{n}\widetilde{V}_{n}^{\dagger}h=(\widetilde{V}_{n}\widetilde{V}^{\dagger(n)}\widetilde{V}_{n})\widetilde{V}_{n}^{\dagger}h=\widetilde{V}_{n}\widetilde{V}^{\dagger(n)}(\widetilde{V}_{n}\widetilde{V}_{n}^{\dagger}h)=\widetilde{V}_{n}\widetilde{V}^{\dagger(n)}h.\] This implies that for \(h\in N(\widetilde{V}^{\dagger(n)})\cap R(\widetilde{V}_{n}),\) we get \(h=\widetilde{V}_{n}\widetilde{V}^{\dagger(n)}h=0.\) ## 3. Wold-type decomposition for bi-regular covariant representations In this section, we discuss the notion of the Cauchy dual of covariant representations with closed range and a Wold-type decomposition for bi-regular c.b.c. representations. We also give various properties of this class of representations. First, we introduce the notion of the _Cauchy dual_ for a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range. Define \(\widetilde{V}^{\prime}:E\otimes\mathcal{H}\rightarrow\mathcal{H}\) by \[\widetilde{V}^{\prime}:=\widetilde{V}(\widetilde{V}^{*}\widetilde{V})^{\dagger}.\] Note that \(\widetilde{V}^{*}\widetilde{V}(\phi(b)\otimes I_{\mathcal{H}})=\widetilde{V}^{*}\sigma(b)\widetilde{V}=(\phi(b)\otimes I_{\mathcal{H}})\widetilde{V}^{*}\widetilde{V}\) for all \(b\in\mathcal{B}.\) Pre- and post-multiplying the last identity by \((\widetilde{V}^{*}\widetilde{V})^{\dagger}\) and using \((\widetilde{V}^{*}\widetilde{V})^{\dagger}=\widetilde{V}^{\dagger}\widetilde{V}^{*\dagger}\) (Remark 2.6), we get \[\widetilde{V}^{\dagger}\widetilde{V}(\phi(b)\otimes I_{\mathcal{H}})(\widetilde{V}^{*}\widetilde{V})^{\dagger}=(\widetilde{V}^{*}\widetilde{V})^{\dagger}(\phi(b)\otimes I_{\mathcal{H}})\widetilde{V}^{\dagger}\widetilde{V}.\] Pre-multiplying again by \(\widetilde{V},\) we have \[\sigma(b)\widetilde{V}^{\prime}=\widetilde{V}^{\prime}(\phi(b)\otimes I_{\mathcal{H}})\widetilde{V}^{\dagger}\widetilde{V}. \tag{3.1}\] Since \(\widetilde{V}(\phi(b)\otimes I_{\mathcal{H}})=\sigma(b)\widetilde{V},\) simple computations prove that \[\widetilde{V}^{\prime}(\phi(b)\otimes I_{\mathcal{H}})=\widetilde{V}^{*\dagger}\widetilde{V}^{\dagger}\sigma(b)\widetilde{V}. 
\tag{3.2}\] Let \(\eta\in E\otimes\mathcal{H}=R(\widetilde{V}^{*})\oplus R(\widetilde{V}^{*})^{\perp}=R(\widetilde{V}^{*})\oplus N(\widetilde{V})\); then there exist \(\eta_{1}\in R(\widetilde{V}^{*})\) and \(\eta_{2}\in N(\widetilde{V})\) such that \(\eta=\eta_{1}+\eta_{2}.\) Since \(N(\widetilde{V}^{\prime})=N(\widetilde{V}),\) from Equations (3.1) and (3.2), we have \[\sigma(b)\widetilde{V}^{\prime}\eta =\widetilde{V}^{\prime}(\phi(b)\otimes I_{\mathcal{H}})\widetilde{V}^{\dagger}\widetilde{V}\eta=\widetilde{V}^{\prime}(\phi(b)\otimes I_{\mathcal{H}})\eta_{1}\] \[=\widetilde{V}^{\prime}(\phi(b)\otimes I_{\mathcal{H}})(\eta_{1}+\eta_{2})=\widetilde{V}^{\prime}(\phi(b)\otimes I_{\mathcal{H}})\eta,\] and hence \(\widetilde{V}^{\prime}(\phi(b)\otimes I_{\mathcal{H}})=\sigma(b)\widetilde{V}^{\prime}.\) From Lemma 1.6, \((\sigma,V^{\prime})\) is a c.b.c. representation. **Definition 3.1**.: _We say that the c.b.c. representation \((\sigma,V^{\prime})\) defined as above is the Cauchy dual of \((\sigma,V).\)_ **Remark 3.2**.: _Suppose that \((\sigma,V)\) is a left-invertible c.b.c. representation of \(E\) on \(\mathcal{H}\); then \(\widetilde{V}^{\prime}=\widetilde{V}(\widetilde{V}^{*}\widetilde{V})^{-1}.\)_ The next result is a characterization of the Cauchy dual, which is an analogue of [9, Proposition 2.1]. **Proposition 3.3**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range. Then_ 1. \(\widetilde{V}^{\prime}=\widetilde{V}^{\dagger*}=\widetilde{V}^{*\dagger};\)__ 2. \(\widetilde{V}^{\prime\prime}=\widetilde{V};\)__ 3. \(\widetilde{V}^{*\prime}=\widetilde{V}^{\prime*};\)__ 4. \(\widetilde{V}^{\prime*}\widetilde{V}^{\prime}=(\widetilde{V}^{*}\widetilde{V})^{\prime};\)__ 5. \(\widetilde{V}^{\prime}=\widetilde{V}\) _if and only if_ \(\widetilde{V}\) _is a partial isometry;_ 6. \(\widetilde{V}^{*}\widetilde{V}^{\prime}=\widetilde{V}^{*\prime}\widetilde{V}=P_{N(\widetilde{V})^{\perp}},\)__\(\widetilde{V}^{\prime}\widetilde{V}^{*}=\widetilde{V}\widetilde{V}^{*\prime}=P_{R(\widetilde{V})}.\)__ Next, we show that the Cauchy dual is transferred by unitary equivalence. **Proposition 3.4**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range. If \(U\) is a unitary operator on \(\mathcal{H},\) then the Cauchy dual satisfies_ \[(U^{*}\widetilde{V}(I_{E}\otimes U))^{\prime}=U^{*}\widetilde{V}^{\prime}(I_{E}\otimes U).\] Proof.: Let \(A=(I_{E}\otimes U^{*})\widetilde{V}^{*}U\) and \(B=U^{*}\widetilde{V}^{\prime}(I_{E}\otimes U)\); then \(ABA=A,BAB=B,(AB)^{*}=AB,(BA)^{*}=BA.\) From the uniqueness of the Moore-Penrose inverse, we have \(A^{\dagger}=B.\) By Proposition 3.3, we get \[(U^{*}\widetilde{V}(I_{E}\otimes U))^{\prime}=(U^{*}\widetilde{V}(I_{E}\otimes U))^{*\dagger}=((I_{E}\otimes U^{*})\widetilde{V}^{*}U)^{\dagger}=U^{*}\widetilde{V}^{\prime}(I_{E}\otimes U).\] **Example 3.5**.: _For \(n\in\mathbb{N},\) consider a Hilbert space \(\mathcal{H}\) with an orthonormal basis \(\{e_{m}:m\in\mathbb{Z}\}\) and a bounded set of real numbers \(\{w_{i,m}:i\in I_{n}:=\{1,2,\ldots,n\},\;\;m\in\mathbb{Z}\}\) such that \(w_{i,0}=0\) and \(w_{i,m}\neq 0,m\in\mathbb{Z}\setminus\{0\}\) for all \(i\in I_{n}.\) Suppose \(E\) is an \(n\)-dimensional Hilbert space with the orthonormal basis \(\{\delta_{i}\}_{i\in I_{n}}.\) The bilateral weighted shift c.b.c. 
representation \((\rho,S^{w})\) of \(E\) on \(\mathcal{H}\) (see [23, Section 7]) is defined by_ \[S^{w}(\delta_{i})=V_{i}\text{ and }\;\rho(b)=bI_{\mathcal{H}},\;\;b\in\mathbb{C},\] _where \(V_{i}(e_{m})=w_{i,m}e_{nm+i}\) for all \(m\in\mathbb{Z}\) and \(i\in I_{n}.\) It is easy to verify that \(N(V_{i})=span\{e_{0}\},\)\(R(V_{i})=\overline{span}\{e_{j}:\;j\in B_{i}:=\{nm+i:\;m\in\mathbb{Z}\setminus\{0\}\}\},R(V_{i})\perp R(V_{j})\) for distinct \(i,j\in I_{n}\) and_ \[V_{i}^{*}(e_{j})=\begin{cases}w_{i,\frac{j-i}{n}}e_{\frac{j-i}{n}}&\text{if }j\in B_{i};\\ 0&\text{if }j\notin B_{i}.\end{cases}\] _Clearly \(R(V_{i}^{*})=\overline{span}\{e_{m}:\;m\in\mathbb{Z}\setminus\{0\}\}.\) Let \(x\in N(V_{i})^{\perp}\); then_ \[(\inf_{m\in\mathbb{Z}\setminus\{0\}}|w_{i,m}|)\|x\|\leq\|V_{i}x\|\quad\text{for}\quad i\in I_{n}.\] _It follows that \(V_{i}\) has closed range if and only if \(\inf\{|w_{i,m}|:w_{i,m}\neq 0\}>0.\) Now, simple computations show that the Cauchy dual of \(V_{i}\) is given by_ \[V_{i}^{\prime}(e_{m})=V_{i}^{*\dagger}(e_{m})=\begin{cases}\frac{1}{w_{i,m}}e_{nm+i}&\text{if }m\neq 0;\\ 0&\text{if }m=0.\end{cases}\] _Therefore the Cauchy dual satisfies \(\widetilde{S}^{w^{\prime}}(\delta_{i}\otimes e_{0})=V_{i}^{\prime}(e_{0})=0\) and \(\widetilde{S}^{w^{\prime}}(\delta_{i}\otimes e_{m})=V_{i}^{\prime}(e_{m})=\frac{1}{w_{i,m}}e_{nm+i}\) for \(m\neq 0\) and \(i\in I_{n}.\)_ If \((\sigma,V)\) is a left-invertible c.b.c. representation of \(E\) on \(\mathcal{H},\) then the Moore-Penrose inverse \(\widetilde{V}^{\dagger}=(\widetilde{V}^{*}\widetilde{V})^{-1}\widetilde{V}^{*}\) is a left inverse of \(\widetilde{V},\) and hence \(\widetilde{V}\) and \(\widetilde{V}^{\dagger}\) are regular. Therefore, every left-invertible c.b.c. representation is bi-regular. In general, the Moore-Penrose inverse of a regular c.b.c. representation need not be regular. We have the following definition. **Definition 3.6**.: _Let \((\sigma,V)\) be a regular c.b.c. representation of \(E\) on \(\mathcal{H}.\) We say that \((\sigma,V)\) is bi-regular if \(N(I_{E^{\otimes n}}\otimes\widetilde{V}^{\dagger})\subseteq R(\widetilde{V}^{\dagger(n)})\) for all \(n\in\mathbb{N},\) that is, \(\widetilde{V}^{\dagger}\) is regular, where \(\widetilde{V}^{\dagger(n)}:=(I_{E^{\otimes n-1}}\otimes\widetilde{V}^{\dagger})(I_{E^{\otimes n-2}}\otimes\widetilde{V}^{\dagger})\ldots(I_{E}\otimes\widetilde{V}^{\dagger})\widetilde{V}^{\dagger}.\)_ **Remark 3.7**.: _Let \((\sigma,V)\) be a regular c.b.c. representation of \(E\) on \(\mathcal{H}\); then \(\widetilde{V}^{*}\) is also regular. Indeed, for \(n\in\mathbb{N},\) let \(\eta\in N(I_{E^{\otimes n}}\otimes\widetilde{V}^{*})\) and \(\gamma\in R(\widetilde{V}_{n}^{*})^{\perp}=N(\widetilde{V}_{n})\subseteq E^{\otimes n}\otimes R(\widetilde{V})\); then there exists \(\zeta\in E^{\otimes n+1}\otimes\mathcal{H}\) such that \((I_{E^{\otimes n}}\otimes\widetilde{V})\zeta=\gamma.\) It follows that_ \[\langle\eta,\gamma\rangle=\langle\eta,(I_{E^{\otimes n}}\otimes\widetilde{V})\zeta\rangle=\langle(I_{E^{\otimes n}}\otimes\widetilde{V}^{*})\eta,\zeta\rangle=0.\] _Thus \(N(I_{E^{\otimes n}}\otimes\widetilde{V}^{*})\subseteq R(\widetilde{V}_{n}^{*})\) for all \(n\in\mathbb{N}.\)_ **Theorem 3.8**.: _[23] Let \((\sigma,V)\) be a regular c.b.c. representation of \(E\) on \(\mathcal{H}\) such that \(R^{\infty}(\widetilde{V})\) reduces \((\sigma,V)\); then \((\sigma,V)\) is bi-regular._ **Example 3.9**.: _Let \((\sigma,V)\) be a partial isometric c.b.c. 
representation of \(E\) on \(\mathcal{H}\) (that is, \(\widetilde{V}\widetilde{V}^{*}\widetilde{V}=\widetilde{V}\)); then \(\widetilde{V}^{*}=\widetilde{V}^{\dagger}.\) This implies that every regular partial isometric c.b.c. representation is bi-regular._ **Example 3.10**.: _For \(n\in\mathbb{N},\) let \(\mathcal{H}\) be a Hilbert space with the orthonormal basis \(\{e_{m}:\,m\in\mathbb{Z}\}\) and \(E\) be an \(n\)-dimensional Hilbert space with the orthonormal basis \(\{\delta_{i}\}_{i\in I_{n}}.\) It has been shown in [23] that if the bilateral weighted shift c.b.c. representation \((\rho,S^{w})\) of \(E\) on \(\mathcal{H}\) is regular, then there exists at most one \(m_{0}\in\mathbb{Z}\) such that \(w_{i,m_{0}}=0.\) Let \((\rho,S^{w})\) be a regular c.b.c. representation such that \(w_{i,0}=0\) and \(w_{i,m}\neq 0,m\in\mathbb{Z}\setminus\{0\}\) for all \(i\in I_{n}.\) Recall that \(V_{i}(e_{m})=w_{i,m}e_{nm+i}\) for all \(m\in\mathbb{Z}\) and \(i\in I_{n}.\) It is easy to see that \(N(V_{i})=span\{e_{0}\}\), \(R(V_{i})=\overline{span}\{e_{j}:\,j\in B_{i}=\{nm+i:\,m\in\mathbb{Z}\setminus\{0\}\}\},R(V_{i})\perp R(V_{j})\) for distinct \(i,j\in I_{n}\) and_ \[V_{i}^{*}(e_{j})=\begin{cases}w_{i,\frac{j-i}{n}}e_{\frac{j-i}{n}}&\text{if }j\in B_{i};\\ 0&\text{if }j\notin B_{i}.\end{cases}\] _By the definition of the Moore-Penrose inverse, we have_ \[V_{i}^{\dagger}(e_{j})=\begin{cases}\frac{1}{w_{i,\frac{j-i}{n}}}e_{\frac{j-i}{n}}&\text{if }j\in B_{i};\\ 0&\text{if }j\notin B_{i}.\end{cases}\] _This implies that \(V_{i}^{\dagger}(e_{j})=\frac{1}{w_{i,\frac{j-i}{n}}}e_{\frac{j-i}{n}}=\frac{1}{(w_{i,\frac{j-i}{n}})^{2}}V_{i}^{*}(e_{j})\) for all \(j\in B_{i}.\) It follows that \(V_{i}^{\dagger}\) is regular for all \(i\in I_{n}.\) Since \(R(V_{i})\perp R(V_{j})\) for distinct \(i,j\in I_{n},\) we conclude that \((\rho,S^{w})\) is bi-regular._ ### Cauchy dual of bi-regular representations A closed \(\sigma(\mathcal{B})\)-invariant subspace \(\mathcal{E}\) of \(\mathcal{H}\) is called _wandering_ for \((\sigma,V)\) if \(\mathcal{E}\) is orthogonal to \(\widetilde{V}_{n}(E^{\otimes n}\otimes\mathcal{E})\) for all \(n\in\mathbb{N}.\) We say that \((\sigma,V)\) has the _generating wandering subspace property_ if \[\mathcal{H}=[\mathcal{E}]_{\widetilde{V}}:=\bigvee_{n\geq 0}\widetilde{V}_{n}(E^{\otimes n}\otimes\mathcal{E}),\] where the wandering subspace \(\mathcal{E}\) is called a _generating wandering subspace_. **Definition 3.11**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}.\) We say that \((\sigma,V)\) admits an extended Wold-type decomposition if there exists a wandering subspace \(\mathcal{E}\) for \((\sigma,V)\) which decomposes \(\mathcal{H}\) into the direct sum of \((\sigma,V)\)-reducing closed subspaces_ \[\mathcal{H}=[\mathcal{E}]_{\widetilde{V}}\oplus R^{\infty}(\widetilde{V})\] _such that \(\widetilde{V}\big{|}_{E\otimes R^{\infty}(\widetilde{V})\cap N(\widetilde{V})^{\perp}}:E\otimes R^{\infty}(\widetilde{V})\cap N(\widetilde{V})^{\perp}\to R^{\infty}(\widetilde{V})\) is a unitary._ Note that if \((\sigma,V)\) admits the extended Wold-type decomposition and \(\widetilde{V}\big{|}_{E\otimes R^{\infty}(\widetilde{V})}\) is unitary or \(\widetilde{V}\) is one-to-one, then \((\sigma,V)\) satisfies the Wold-type decomposition (see [26, Definition 3.1]). For example, let \((\sigma,V)\) be a regular c.b.c. 
representation of \(E\) on \(\mathcal{H}\) satisfying the growth condition (see [23, Theorem 5.10]) such that \(\widetilde{V}^{\dagger}\) is a contraction; then \((\sigma,V)\) admits the extended Wold-type decomposition. Throughout this paper, we write \(\mathcal{E}=\mathcal{H}\ominus\widetilde{V}(E\otimes\mathcal{H}).\) **Proposition 3.12**.: _Let \((\sigma,V)\) be a bi-regular c.b.c. representation of \(E\) on \(\mathcal{H}\); then_ \[\mathcal{H}=[\mathcal{E}]_{\widetilde{V}}\oplus R^{\infty}(\widetilde{V}^{\prime})=[\mathcal{E}]_{\widetilde{V}^{\prime}}\oplus R^{\infty}(\widetilde{V}).\] Proof.: Since \(S=\widetilde{V}^{\dagger}\) is regular, \(\widetilde{V}^{\prime}=\widetilde{V}^{\dagger*}\) is also regular. We know that \(\mathcal{E}=R(\widetilde{V}^{\prime})^{\perp}\) and \(\widetilde{V}^{\prime\prime}=\widetilde{V}\); then from [23, Corollary 5.6], we get \[R^{\infty}(\widetilde{V})^{\perp}=[\mathcal{E}]_{\widetilde{V}^{\prime}}\quad\text{and}\quad R^{\infty}(\widetilde{V}^{\prime})^{\perp}=[\mathcal{E}]_{\widetilde{V}}.\qed\] **Corollary 3.13**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}.\) Let \(\widetilde{V}\) be left-invertible and \(L\) be its left inverse defined by \(L=(\widetilde{V}^{*}\widetilde{V})^{-1}\widetilde{V}^{*}.\) Then_ \[\mathcal{H}=[\mathcal{E}]_{\widetilde{V}}\oplus R^{\infty}(L^{*})=[\mathcal{E}]_{L^{*}}\oplus R^{\infty}(\widetilde{V}).\] **Corollary 3.14**.: _Let \((\sigma,V)\) be a regular c.b.c. representation of \(E\) on \(\mathcal{H}\). Suppose that \(R^{\infty}(\widetilde{V})\) reduces \((\sigma,V)\); then_ \[\mathcal{H}=[\mathcal{E}]_{\widetilde{V}}\oplus R^{\infty}(\widetilde{V}^{\prime})=[\mathcal{E}]_{\widetilde{V}^{\prime}}\oplus R^{\infty}(\widetilde{V}).\] **Corollary 3.15**.: _Let \((\sigma,V)\) be a regular c.b.c. representation of \(E\) on \(\mathcal{H}\). Suppose that \(R^{\infty}(\widetilde{V})\) reduces \((\sigma,V)\) and \(R^{\infty}(\widetilde{V}^{\prime})\) reduces \((\sigma,V^{\prime})\); then \([\mathcal{E}]_{\widetilde{V}}=[\mathcal{E}]_{\widetilde{V}^{\prime}}\) and \(R^{\infty}(\widetilde{V}^{\prime})=R^{\infty}(\widetilde{V}).\) Moreover, \(\mathcal{H}=[\mathcal{E}]_{\widetilde{V}}\oplus R^{\infty}(\widetilde{V}).\)_ Proof.: It follows from Theorem 3.8 and Corollary 3.14. **Remark 3.16**.: 1. _Let \((\sigma,V)\) be a regular c.b.c. representation of \(E\) on \(\mathcal{H}.\) If \(\widetilde{V}^{*}=\widetilde{V}^{\dagger}\) on \(R^{\infty}(\widetilde{V})\), then \(R^{\infty}(\widetilde{V})\) reduces \((\sigma,V).\) Thus \((\sigma,V)\) is bi-regular. It is easy to verify that the equality \(\widetilde{V}^{*}=\widetilde{V}^{\dagger}\) on \(R^{\infty}(\widetilde{V})\) is equivalent to each one of the following:_ (i) \(\widetilde{V}\widetilde{V}^{*}P_{R^{\infty}(\widetilde{V})}=P_{R^{\infty}(\widetilde{V})};\)__ (ii) \(P_{R^{\infty}(\widetilde{V})}\widetilde{V}\widetilde{V}^{*}=P_{R^{\infty}(\widetilde{V})}\widetilde{V}\widetilde{V}^{\dagger};\)__ (iii) \(\widetilde{V}^{*}P_{R^{\infty}(\widetilde{V})}=\widetilde{V}^{\dagger}P_{R^{\infty}(\widetilde{V})};\)__ (iv) \(\widetilde{V}^{*}\widetilde{V}P_{E\otimes R^{\infty}(\widetilde{V})}=\widetilde{V}^{\dagger}\widetilde{V}P_{E\otimes R^{\infty}(\widetilde{V})};\)__ (v) \(\widetilde{V}|_{(E\otimes R^{\infty}(\widetilde{V}))\cap N(\widetilde{V})^{\perp}}:(E\otimes R^{\infty}(\widetilde{V}))\cap N(\widetilde{V})^{\perp}\to R^{\infty}(\widetilde{V})\) _is a unitary map._ 2. 
_Under any of the assumptions (i)-(v), we have_ \[(\widetilde{V}|_{E\otimes R^{\infty}(\widetilde{V})})^{\dagger}=\widetilde{V}^{\dagger}|_{R^{\infty}(\widetilde{V})}=\widetilde{V}^{*}|_{R^{\infty}(\widetilde{V})}=(\widetilde{V}|_{E\otimes R^{\infty}(\widetilde{V})})^{*}.\] Now, we present the relation between \(\widetilde{V}^{\prime}\) and \(\widetilde{V}\) in terms of the extended Wold-type decomposition. **Proposition 3.17**.: _Let \((\sigma,V)\) be a bi-regular c.b.c. representation of \(E\) on \(\mathcal{H}.\) Then \((\sigma,V)\) admits the extended Wold-type decomposition if and only if \((\sigma,V^{\prime})\) admits it. In this case, we have_ \[R^{\infty}(\widetilde{V})=R^{\infty}(\widetilde{V}^{\prime})\quad\text{and}\quad[\mathcal{E}]_{\widetilde{V}}=[\mathcal{E}]_{\widetilde{V}^{\prime}}.\] Proof.: Suppose that \((\sigma,V)\) admits the extended Wold-type decomposition; then \(\mathcal{H}=[\mathcal{E}]_{\widetilde{V}}\oplus R^{\infty}(\widetilde{V}).\) By Proposition 3.12, we get \(R^{\infty}(\widetilde{V})=R^{\infty}(\widetilde{V}^{\prime})\) and \([\mathcal{E}]_{\widetilde{V}}=[\mathcal{E}]_{\widetilde{V}^{\prime}}.\) Since \(\widetilde{V}^{\dagger}\) is regular, \(\widetilde{V}^{\prime}=\widetilde{V}^{\dagger*}\) is also regular. From Proposition 2.3, we get \(\widetilde{V}^{\prime}(E\otimes R^{\infty}(\widetilde{V}^{\prime}))=R^{\infty}(\widetilde{V}^{\prime}).\) Let \(h\in R^{\infty}(\widetilde{V}^{\prime})\); we have \[\widetilde{V}^{\prime*}h=\widetilde{V}^{\dagger}h\in\widetilde{V}^{\dagger}R^{\infty}(\widetilde{V})\subseteq E\otimes R^{\infty}(\widetilde{V})=E\otimes R^{\infty}(\widetilde{V}^{\prime}).\] Therefore \(R^{\infty}(\widetilde{V}^{\prime})\) reduces \((\sigma,V^{\prime})\). Next, we want to prove that \[\widetilde{V}^{\prime}|_{(E\otimes R^{\infty}(\widetilde{V}^{\prime}))\cap N(\widetilde{V}^{\prime})^{\perp}}:(E\otimes R^{\infty}(\widetilde{V}^{\prime}))\cap N(\widetilde{V}^{\prime})^{\perp}\to R^{\infty}(\widetilde{V}^{\prime})\] is unitary. Let \(\eta\in(E\otimes R^{\infty}(\widetilde{V}^{\prime}))\cap N(\widetilde{V}^{\prime})^{\perp}=(E\otimes R^{\infty}(\widetilde{V}))\cap N(\widetilde{V})^{\perp}.\) Since \((\sigma,V)\) admits the extended Wold-type decomposition, \(\widetilde{V}|_{(E\otimes R^{\infty}(\widetilde{V}))\cap N(\widetilde{V})^{\perp}}:(E\otimes R^{\infty}(\widetilde{V}))\cap N(\widetilde{V})^{\perp}\to R^{\infty}(\widetilde{V})\) is unitary, that is, \(\widetilde{V}^{*}\widetilde{V}\eta=\eta.\) Since \(\widetilde{V}^{\prime\dagger}=\widetilde{V}^{*}\) and \(\widetilde{V}^{\prime}\widetilde{V}^{\prime\dagger}\widetilde{V}=\widetilde{V}\), we get \(\widetilde{V}^{\prime}\eta=\widetilde{V}^{\prime}\widetilde{V}^{\prime\dagger}\widetilde{V}\eta=\widetilde{V}\eta,\) and hence \(\|\widetilde{V}^{\prime}\eta\|=\|\widetilde{V}\eta\|=\|\eta\|\) for every \(\eta\in(E\otimes R^{\infty}(\widetilde{V}^{\prime}))\cap N(\widetilde{V}^{\prime})^{\perp}.\) Let \(h\in R^{\infty}(\widetilde{V}^{\prime})=R^{\infty}(\widetilde{V})\); by Remark 3.16, we get \(\widetilde{V}^{*}h=\widetilde{V}^{\dagger}h.\) Therefore \[\|\widetilde{V}^{\prime*}h\|=\|\widetilde{V}^{\dagger}h\|=\|\widetilde{V}^{*}h\|=\|h\|.\] Thus \(\widetilde{V}^{\prime}|_{(E\otimes R^{\infty}(\widetilde{V}^{\prime}))\cap N(\widetilde{V}^{\prime})^{\perp}}:(E\otimes R^{\infty}(\widetilde{V}^{\prime}))\cap N(\widetilde{V}^{\prime})^{\perp}\to R^{\infty}(\widetilde{V}^{\prime})\) is unitary. 
The converse follows from \(\widetilde{V}^{\prime\prime}=\widetilde{V}.\) **Remark 3.18**.: _If \((\sigma,V)\) is a c.b.c. representation of \(E\) on \(\mathcal{H}\) such that \(\widetilde{V}\) is left-invertible, then the above proposition recovers [26, Corollary 3.9]._ The following theorem is a generalization of [25, Theorem 2.4.1] and [10, Theorem 2]. **Theorem 3.19**.: _Let \((\sigma,V)\) be a bi-regular c.b.c. representation of \(E\) on \(\mathcal{H}.\) The following statements are equivalent:_ 1. \(\mathcal{H}\neq[\mathcal{E}]_{\widetilde{V}};\)__ 2. \(R^{\infty}(\widetilde{V}^{\prime})\neq\{0\};\)__ 3. _there exists a non-zero closed subspace_ \(\mathcal{M}\) _such that_ \(\mathcal{M}\subseteq\widetilde{V}^{\prime}(E\otimes\mathcal{M});\)__ 4. _there exists a non-zero closed subspace_ \(\mathcal{M}\subseteq R(\widetilde{V})\) _such that_ \(\widetilde{V}^{*}\mathcal{M}\subseteq E\otimes\mathcal{M};\)__ 5. _there exists a closed subspace_ \(\mathcal{M}\supseteq\mathcal{E},\mathcal{M}\neq\mathcal{H}\) _such that_ \(\widetilde{V}(E\otimes\mathcal{M})\subseteq\mathcal{M};\)__ 6. _there exists a closed subspace_ \(\mathcal{M}\supseteq\mathcal{E},\mathcal{M}\neq\mathcal{H}\) _such that_ \(E\otimes\mathcal{M}\subseteq\widetilde{V}^{\dagger}\mathcal{M};\)__ 7. _there exist non-zero closed subspaces_ \(\mathcal{M}_{1},\mathcal{M}_{2}\) _such that_ \(\widetilde{V}(E\otimes\mathcal{M}_{1})\subseteq\mathcal{M}_{1},\widetilde{V}^{\dagger}(\mathcal{M}_{1})\subseteq E\otimes(\mathcal{M}_{1}\oplus\mathcal{E}),P_{\mathcal{M}_{2}}\widetilde{V}(E\otimes\mathcal{M}_{2})=\mathcal{M}_{2}\) _and_ \(R(\widetilde{V})=\mathcal{M}_{1}\oplus\mathcal{M}_{2}\)_, where_ \(P_{\mathcal{M}_{2}}\) _is the orthogonal projection of_ \(\mathcal{H}\) _onto_ \(\mathcal{M}_{2}.\)__ Proof.: From Proposition 3.12, \((i)\Leftrightarrow(ii).\) \((ii)\Rightarrow(iii)\) Let \(\mathcal{M}=R^{\infty}(\widetilde{V}^{\prime})\neq\{0\}.\) Since \(\widetilde{V}^{\prime}\) is regular, \(R^{\infty}(\widetilde{V}^{\prime})\) is closed, and by Proposition 2.3, we have \(\widetilde{V}^{\prime}(E\otimes R^{\infty}(\widetilde{V}^{\prime}))=R^{\infty}(\widetilde{V}^{\prime}).\) Therefore \(\mathcal{M}\subseteq\widetilde{V}^{\prime}(E\otimes\mathcal{M}).\) \((iii)\Rightarrow(ii)\) If \(\mathcal{M}\subseteq\widetilde{V}^{\prime}(E\otimes\mathcal{M}),\) then \(\mathcal{M}\subseteq\widetilde{V}^{\prime}_{n}(E^{\otimes n}\otimes\mathcal{M})\) for all \(n\geq 0,\) and hence \(\mathcal{M}\subseteq\bigcap_{n=0}^{\infty}\widetilde{V}^{\prime}_{n}(E^{\otimes n}\otimes\mathcal{M}).\) It follows that \(\{0\}\neq\mathcal{M}\subseteq R^{\infty}(\widetilde{V}^{\prime}).\) \((ii)\Rightarrow(iv)\) Let \(\mathcal{M}=R^{\infty}(\widetilde{V}^{\prime})\neq\{0\}\); then \(\mathcal{M}\subseteq R(\widetilde{V}).\) Since \(\widetilde{V}^{\prime}\) is regular and \(\widetilde{V}^{*}\) is the generalized inverse of \(\widetilde{V}^{\prime},\) by Proposition 2.3 we get \(\widetilde{V}^{*}(R^{\infty}(\widetilde{V}^{\prime}))\subseteq E\otimes R^{\infty}(\widetilde{V}^{\prime}).\) \((iv)\Rightarrow(iii)\) Since \(\mathcal{M}\subseteq R(\widetilde{V})\) and \(\widetilde{V}^{\prime}\widetilde{V}^{*}=(\widetilde{V}\widetilde{V}^{\dagger})^{*}=\widetilde{V}\widetilde{V}^{\dagger}\) is the orthogonal projection of \(\mathcal{H}\) onto \(R(\widetilde{V}),\) we get \(\mathcal{M}=\widetilde{V}^{\prime}\widetilde{V}^{*}\mathcal{M}\subseteq\widetilde{V}^{\prime}(E\otimes\mathcal{M}).\) \((i)\Rightarrow(v)\) Suppose that \(\mathcal{M}=[\mathcal{E}]_{\widetilde{V}},\) 
then \(\mathcal{E}\subseteq\mathcal{M}\neq\mathcal{H}\) and \(\widetilde{V}(E\otimes\mathcal{M})\subseteq\mathcal{M}.\) \((v)\Rightarrow(iv)\) Since \(\mathcal{E}\subseteq\mathcal{M}\neq\mathcal{H},\) we have \(\{0\}\neq\mathcal{M}^{\perp}\subseteq R(\widetilde{V}).\) From \(\widetilde{V}(E\otimes\mathcal{M})\subseteq\mathcal{M},\) we get \(\widetilde{V}^{*}\mathcal{M}^{\perp}\subseteq E\otimes\mathcal{M}^{\perp}.\) \((i)\Rightarrow(vi)\) Suppose that \(\mathcal{M}=[\mathcal{E}]_{\widetilde{V}}\); then \(\mathcal{E}\subseteq\mathcal{M}\neq\mathcal{H}.\) Since \(\widetilde{V}^{\prime}\) is regular, we have \((E\otimes R^{\infty}(\widetilde{V}^{\prime}))^{\perp}\subseteq N(\widetilde{V}^{\prime})^{\perp}=R(\widetilde{V}^{\dagger}).\) From Proposition 3.12, \(R^{\infty}(\widetilde{V}^{\prime})^{\perp}=[\mathcal{E}]_{\widetilde{V}},\) and hence \(E\otimes\mathcal{M}\subseteq R(\widetilde{V}^{\dagger}).\) Since \(\widetilde{V}(E\otimes\mathcal{M})\subseteq\mathcal{M}\) and \(\widetilde{V}^{\dagger}\widetilde{V}=P_{R(\widetilde{V}^{\dagger})},\) we get \(E\otimes\mathcal{M}=\widetilde{V}^{\dagger}\widetilde{V}(E\otimes\mathcal{M})\subseteq\widetilde{V}^{\dagger}\mathcal{M}.\) \((vi)\Rightarrow(iii)\) Since \(\mathcal{E}\subseteq\mathcal{M}\neq\mathcal{H},\) we have \(\{0\}\neq\mathcal{M}^{\perp}\subseteq R(\widetilde{V}).\) We want to prove that \(\mathcal{M}^{\perp}\subseteq\widetilde{V}^{\prime}(E\otimes\mathcal{M}^{\perp}).\) Let \(h\in\mathcal{M}^{\perp}\subseteq R(\widetilde{V})=R(\widetilde{V}^{\prime})\); then there exists \(\eta\in E\otimes\mathcal{H}\) such that \(h=\widetilde{V}^{\prime}\eta\) and \(\eta=\eta_{1}+\eta_{2}\) for some \(\eta_{1}\in E\otimes\mathcal{M}\) and \(\eta_{2}\in E\otimes\mathcal{M}^{\perp}.\) For every \(m\in\mathcal{M},\) \(0=\langle h,m\rangle=\langle\widetilde{V}^{\prime}(\eta_{1}+\eta_{2}),m\rangle=\langle(\eta_{1}+\eta_{2}),\widetilde{V}^{\dagger}m\rangle,\) and hence \(\eta_{1}+\eta_{2}\in(\widetilde{V}^{\dagger}\mathcal{M})^{\perp}\subseteq E\otimes\mathcal{M}^{\perp}.\) Since \(\eta_{1}+\eta_{2}\in E\otimes\mathcal{M}^{\perp}\) and \(\eta_{2}\in E\otimes\mathcal{M}^{\perp},\) we have \(\eta_{1}\in(E\otimes\mathcal{M})\cap(E\otimes\mathcal{M}^{\perp}),\) and thus \(\eta_{1}=0.\) Hence \(h=\widetilde{V}^{\prime}\eta_{2}\in\widetilde{V}^{\prime}(E\otimes\mathcal{M}^{\perp}).\) It follows that \(\mathcal{M}^{\perp}\subseteq\widetilde{V}^{\prime}(E\otimes\mathcal{M}^{\perp}).\) \((i)\Rightarrow(vii)\) Let \(\mathcal{M}_{1}=\bigvee_{n\geq 1}\widetilde{V}_{n}(E^{\otimes n}\otimes\mathcal{E})\) and \(\mathcal{M}_{2}=R^{\infty}(\widetilde{V}^{\prime}).\) Clearly \(\mathcal{M}_{1},\mathcal{M}_{2}\neq\{0\}\) and by Proposition 3.12, \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) are orthogonal. 
Since \(\mathcal{E}\perp\widetilde{V}_{n}(E^{\otimes n}\otimes\mathcal{E})\) for every \(n\geq 1,\) we have \[R(\widetilde{V})=\mathcal{E}^{\perp}=([\mathcal{E}]_{\widetilde{V}}\oplus R^{\infty}(\widetilde{V}^{\prime}))\ominus\mathcal{E}=\mathcal{M}_{1}\oplus\mathcal{M}_{2}.\] It is easy to verify that \(\widetilde{V}(E\otimes\mathcal{M}_{1})\subseteq\mathcal{M}_{1}.\) Since \(\mathcal{M}_{1}\oplus\mathcal{E}=[\mathcal{E}]_{\widetilde{V}},\) from Proposition 3.12, \((\mathcal{M}_{1}\oplus\mathcal{E})^{\perp}=R^{\infty}(\widetilde{V}^{\prime}).\) Since \(\widetilde{V}^{\prime}\) is regular, Proposition 2.3 gives \(\widetilde{V}^{\prime}(E\otimes R^{\infty}(\widetilde{V}^{\prime}))\subseteq R^{\infty}(\widetilde{V}^{\prime})\); equivalently, \(\widetilde{V}^{\dagger}(\mathcal{M}_{1}\oplus\mathcal{E})\subseteq E\otimes(\mathcal{M}_{1}\oplus\mathcal{E}),\) which gives \(\widetilde{V}^{\dagger}(\mathcal{M}_{1})\subseteq E\otimes(\mathcal{M}_{1}\oplus\mathcal{E}).\) Next, we want to prove that \(P_{\mathcal{M}_{2}}\widetilde{V}(E\otimes\mathcal{M}_{2})=\mathcal{M}_{2}.\) Clearly \(P_{\mathcal{M}_{2}}\widetilde{V}(E\otimes\mathcal{M}_{2})\subseteq\mathcal{M}_{2}.\) Let \(h\in\mathcal{M}_{2}=R^{\infty}(\widetilde{V}^{\prime})\subseteq R(\widetilde{V})\); then there exists \(\eta\in E\otimes\mathcal{H}\) such that \[h=\widetilde{V}\eta\quad\text{and}\quad\eta=\eta_{1}+\eta_{2}\] for some \(\eta_{1}\in E\otimes[\mathcal{E}]_{\widetilde{V}}\) and \(\eta_{2}\in E\otimes R^{\infty}(\widetilde{V}^{\prime})=E\otimes\mathcal{M}_{2}.\) Since \(\widetilde{V}\eta_{1}\in[\mathcal{E}]_{\widetilde{V}}\perp\mathcal{M}_{2},\) we have \(h=P_{\mathcal{M}_{2}}h=P_{\mathcal{M}_{2}}\widetilde{V}\eta_{2}\in P_{\mathcal{M}_{2}}\widetilde{V}(E\otimes\mathcal{M}_{2}).\) \((vii)\Rightarrow(v)\) Suppose \((vii)\) holds. 
First, we want to prove that \(\widetilde{V}(E\otimes\mathcal{E})\subseteq\mathcal{M}_{1}.\) Let \(\eta\in E\otimes\mathcal{E}.\) Since \(\widetilde{V}\eta\in R(\widetilde{V})=\mathcal{M}_{1}\oplus\mathcal{M}_{2},\) we have \(\widetilde{V}\eta=h_{1}+h_{2},\) where \(h_{1}\in\mathcal{M}_{1}\) and \(h_{2}\in\mathcal{M}_{2}=P_{\mathcal{M}_{2}}\widetilde{V}(E\otimes\mathcal{M}_{2}),\) and hence \(h_{2}=P_{\mathcal{M}_{2}}\widetilde{V}\eta_{1}\) for some \(\eta_{1}\in E\otimes\mathcal{M}_{2}.\) Using the orthogonal decomposition of \(R(\widetilde{V}),\) we have \(\widetilde{V}\eta_{1}=h_{3}+h_{2}\) for some \(h_{3}\in\mathcal{M}_{1},\) and hence \(\widetilde{V}\eta=h+\widetilde{V}\eta_{1},\) where \(h=h_{1}-h_{3}\in\mathcal{M}_{1}.\) It follows that \(\widetilde{V}^{\dagger}\widetilde{V}\eta=\widetilde{V}^{\dagger}h+\widetilde{V}^{\dagger}\widetilde{V}\eta_{1}.\) Since \(\widetilde{V}^{\dagger}\) is regular, \(\eta\in E\otimes\mathcal{E}=E\otimes N(\widetilde{V}^{\dagger})\subseteq R(\widetilde{V}^{\dagger}),\) and thus \(\widetilde{V}^{\dagger}\widetilde{V}\eta=\eta=\widetilde{V}^{\dagger}h+\widetilde{V}^{\dagger}\widetilde{V}\eta_{1}.\) On the other side, \(\widetilde{V}^{\dagger}h\in\widetilde{V}^{\dagger}(\mathcal{M}_{1})\subseteq E\otimes(\mathcal{M}_{1}\oplus\mathcal{E})=E\otimes\mathcal{M}_{2}^{\perp}\) and \(\eta\in E\otimes\mathcal{E}\subseteq E\otimes\mathcal{M}_{2}^{\perp}.\) This implies that \(\widetilde{V}^{\dagger}\widetilde{V}\eta_{1}\in E\otimes\mathcal{M}_{2}^{\perp}=E\otimes(\mathcal{M}_{1}\oplus\mathcal{E}),\) so there exist \(z_{1}\in E\otimes\mathcal{M}_{1}\) and \(z_{2}\in E\otimes\mathcal{E}\) such that \(\widetilde{V}^{\dagger}\widetilde{V}\eta_{1}=z_{1}+z_{2}.\) Applying \(\widetilde{V},\) we get \(\widetilde{V}\eta_{1}=\widetilde{V}z_{1}+\widetilde{V}z_{2},\) which gives \(z_{1}-\eta_{1}+z_{2}\in N(\widetilde{V})\subseteq E\otimes R(\widetilde{V}).\) Since \(z_{1}-\eta_{1}\in E\otimes(\mathcal{M}_{1}\oplus\mathcal{M}_{2})=E\otimes R(\widetilde{V}),\) we have \(z_{2}\in E\otimes R(\widetilde{V}).\) As also \(z_{2}\in E\otimes\mathcal{E}=E\otimes R(\widetilde{V})^{\perp},\) it follows that \(z_{2}=0,\) and hence \(\widetilde{V}\eta_{1}=\widetilde{V}z_{1}.\) Finally, we get

\[\widetilde{V}\eta=h+\widetilde{V}z_{1}\in\mathcal{M}_{1}.\]

Since \(\widetilde{V}(E\otimes\mathcal{M}_{1})\subseteq\mathcal{M}_{1},\) we have \(\widetilde{V}(E\otimes(\mathcal{M}_{1}\oplus\mathcal{E}))\subseteq\mathcal{M}_{1}\subseteq\mathcal{M}_{1}\oplus\mathcal{E}.\) Thus \(\mathcal{M}:=\mathcal{M}_{1}\oplus\mathcal{E}\) is a closed subspace with \(\mathcal{M}\neq\mathcal{H}\) (since \(\{0\}\neq\mathcal{M}_{2}\perp\mathcal{M}\)) such that \(\widetilde{V}(E\otimes\mathcal{M})\subseteq\mathcal{M}.\)

## 4. Powers of Moore-Penrose inverse

In this section, we discuss the powers of the Moore-Penrose inverse of c.b.c. representations. Denote by \(\widetilde{V}_{n}^{\dagger}\) the Moore-Penrose inverse of \(\widetilde{V}_{n}.\) In general, the identity \(\widetilde{V}_{n}^{\dagger}=\widetilde{V}^{\dagger(n)},\) \(n\geq 2,\) fails (cf. [10, Example 4]). This motivates the following definition.

**Definition 4.1**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range. For \(n\geq 2,\) \((\sigma,V)\) is called \(n\)-dagger if \(\widetilde{V}^{\dagger(n)}=\widetilde{V}_{n}^{\dagger}.\) Moreover, if \((\sigma,V)\) is \(n\)-dagger for every \(n\geq 2,\) then we say that \((\sigma,V)\) is hyper-dagger._
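The failure of the identity \(\widetilde{V}_{n}^{\dagger}=\widetilde{V}^{\dagger(n)}\) is already visible for a single operator, i.e., the case \(E=\mathbb{C}\), where \(\widetilde{V}_{n}=V^{n}\) and \(\widetilde{V}^{\dagger(n)}=(V^{\dagger})^{n}\). The following sketch is a finite-dimensional toy computation (not part of the Fock-module setting of this paper): it exhibits a \(2\times 2\) matrix for which the Moore-Penrose inverse of the square differs from the square of the Moore-Penrose inverse, in the spirit of [10, Example 4].

```python
import numpy as np

# Single-operator toy model (E = C): here V_n = V^n and V^{dagger(n)} = (V^dagger)^n.
V = np.array([[0.0, 1.0],
              [0.0, 1.0]])

lhs = np.linalg.pinv(V @ V)                   # (V^2)^dagger
rhs = np.linalg.pinv(V) @ np.linalg.pinv(V)   # (V^dagger)^2

print(np.allclose(lhs, rhs))   # False: this V is not 2-dagger
print(lhs)                     # [[0.  0. ] [0.5 0.5]]
print(rhs)                     # [[0.   0.  ] [0.25 0.25]]
```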
The next result relates the regularity of \(\widetilde{V}\) to that of \(\widetilde{V}^{\dagger}.\)

**Proposition 4.2**.: _Let \((\sigma,V)\) be a hyper-dagger c.b.c. representation of \(E\) on \(\mathcal{H}.\) Then \((\sigma,V)\) is regular if and only if \(\widetilde{V}^{\dagger}\) is regular._

Proof.: Suppose that \((\sigma,V)\) is regular; then \(\widetilde{V}^{*}\) is also regular. Since \(R(\widetilde{V})\) is closed, \(R(\widetilde{V}^{*})=R(\widetilde{V}^{\dagger})\) is also closed, and we get \(E^{\otimes n}\otimes N(\widetilde{V}^{\dagger})=E^{\otimes n}\otimes N(\widetilde{V}^{*})\subseteq R(\widetilde{V}_{n}^{*})=R(\widetilde{V}_{n}^{\dagger})=R(\widetilde{V}^{\dagger(n)})\) for every \(n\geq 1.\) Thus \(\widetilde{V}^{\dagger}\) is regular. Conversely, since \((\widetilde{V}^{\dagger})^{\dagger}=\widetilde{V},\) the same argument applied to \(\widetilde{V}^{\dagger}\) shows that \((\sigma,V)\) is regular.

We recall that if \(P\) and \(Q\) are orthogonal projections of \(\mathcal{H}\) onto \(R(P)\) and \(R(Q),\) respectively, then \(PQ=QP=P\) if and only if \(R(P)\subseteq R(Q).\) The following theorem is an analogue of [10, Theorem 3].

**Theorem 4.3**.: _Let \((\sigma,V)\) be a regular hyper-dagger c.b.c. representation of \(E\) on \(\mathcal{H}.\) Then:_

1. _\(P_{\mathcal{E}}=I-\widetilde{V}\widetilde{V}^{\dagger}\) is the orthogonal projection of \(\mathcal{H}\) onto \(\mathcal{E}=R(\widetilde{V})^{\perp};\)_
2. _\(\widetilde{V}_{n}\widetilde{V}^{\dagger(n)}\) converges strongly to the orthogonal projection \(P\) onto \(R^{\infty}(\widetilde{V});\)_
3. _\(\sum_{n=0}^{\infty}\widetilde{V}_{n}(I_{E^{\otimes n}}\otimes P_{\mathcal{E}})\widetilde{V}^{\dagger(n)}\) converges strongly to \(Q:=I-P;\)_
4. _\(R(Q)=\{h\in\mathcal{H}:\lim_{n\to\infty}\|\widetilde{V}_{n}\widetilde{V}^{\dagger(n)}h\|=0\};\)_
5. _\(R(P)\) and \(R(Q)\) reduce \((\sigma,V).\)_

Proof.: (1) We know that \(\widetilde{V}\widetilde{V}^{\dagger}\) is the orthogonal projection of \(\mathcal{H}\) onto \(R(\widetilde{V}),\) so \(I-\widetilde{V}\widetilde{V}^{\dagger}\) is the orthogonal projection onto \(R(\widetilde{V})^{\perp}=\mathcal{E}.\)

(2) Since \((\sigma,V)\) is hyper-dagger, \(P_{n}:=\widetilde{V}_{n}\widetilde{V}^{\dagger(n)}\) is the orthogonal projection of \(\mathcal{H}\) onto \(R(\widetilde{V}_{n}).\) Since \(R(\widetilde{V}_{n+1})\subseteq R(\widetilde{V}_{n}),\) the sequence \((P_{n})_{n\geq 1}\) converges strongly to an orthogonal projection \(P.\) We want to prove that \(R(P)=R^{\infty}(\widetilde{V}).\) Let \(h\in R^{\infty}(\widetilde{V});\) then \(P_{n}h=h\) for all \(n\geq 1,\) and hence \(Ph=h.\) On the other hand, let \(i,j\geq 0\) and \(h\in\mathcal{H}.\) Since \(R(\widetilde{V}_{i+j})\subseteq R(\widetilde{V}_{i}),\) we have \(P_{i}P_{i+j}h=P_{i+j}h.\) Letting \(j\to\infty,\) we get \(P_{i}Ph=Ph\) for all \(i\geq 0.\) It follows that \(Ph\in R^{\infty}(\widetilde{V}).\)

(3) Since \(R(\widetilde{V}_{n+1})\subseteq R(\widetilde{V}_{n}),\) we define \(Q_{n}:=P_{n}-P_{n+1}=\widetilde{V}_{n}(I_{E^{\otimes n}}\otimes P_{\mathcal{E}})\widetilde{V}^{\dagger(n)};\) it is easy to verify that \(Q_{n}^{*}=Q_{n}\) and \(R(Q_{n})=R(\widetilde{V}_{n})\cap R(\widetilde{V}_{n+1})^{\perp}.\) Since \((\sigma,V)\) is regular, \(\widetilde{V}^{*}\) is also regular.
It gives \(E^{\otimes n}\otimes\mathcal{E}=E^{\otimes n}\otimes R(\widetilde{V})^{\perp}=E^{\otimes n}\otimes N(\widetilde{V}^{*})\subseteq R(\widetilde{V}_{n}^{*}),\) and thus

\[Q_{n}^{2}=\widetilde{V}_{n}(I_{E^{\otimes n}}\otimes P_{\mathcal{E}})\widetilde{V}^{\dagger(n)}\widetilde{V}_{n}(I_{E^{\otimes n}}\otimes P_{\mathcal{E}})\widetilde{V}^{\dagger(n)}=\widetilde{V}_{n}(I_{E^{\otimes n}}\otimes P_{\mathcal{E}})P_{R(\widetilde{V}_{n}^{*})}(I_{E^{\otimes n}}\otimes P_{\mathcal{E}})\widetilde{V}^{\dagger(n)}=\widetilde{V}_{n}(I_{E^{\otimes n}}\otimes P_{\mathcal{E}})\widetilde{V}^{\dagger(n)}=Q_{n}.\]

Next, we prove that the projections \(Q_{n}\) are mutually orthogonal, that is, \(R(Q_{n})\perp R(Q_{m})\) for \(n\neq m.\) Suppose \(1\leq n<m;\) then

\[R(Q_{n})\subseteq R(\widetilde{V}_{n+1})^{\perp}\subseteq R(\widetilde{V}_{m})^{\perp}\subseteq R(\widetilde{V}_{m})^{\perp}\vee R(\widetilde{V}_{m+1})=R(Q_{m})^{\perp}.\]

Then the sequence \((\sum_{i=0}^{n}Q_{i})_{n\geq 0}\) converges strongly to an orthogonal projection \(Q,\) where \(R(Q)=\bigoplus_{i=0}^{\infty}(R(\widetilde{V}_{i})\cap R(\widetilde{V}_{i+1})^{\perp}).\) Since \(\sum_{i=0}^{n-1}Q_{i}=I-P_{n},\) we conclude that \(\sum_{n=0}^{\infty}\widetilde{V}_{n}(I_{E^{\otimes n}}\otimes P_{\mathcal{E}})\widetilde{V}^{\dagger(n)}\) converges strongly to \(Q=I-P.\)

(4) Since \(\sum_{i=0}^{n-1}Q_{i}=I-P_{n}=I-\widetilde{V}_{n}\widetilde{V}^{\dagger(n)},\) we have \(R(Q)=\{h\in\mathcal{H}:\lim_{n\to\infty}\|\widetilde{V}_{n}\widetilde{V}^{\dagger(n)}h\|=0\}.\)

(5) Since \((\sigma,V)\) is regular, \(\widetilde{V}(E\otimes R^{\infty}(\widetilde{V}))=R^{\infty}(\widetilde{V}).\) On the other hand, let \(\eta\in E\otimes\mathcal{H}=N(\widetilde{V})\oplus N(\widetilde{V})^{\perp}=N(\widetilde{V})\oplus R(\widetilde{V}^{*})=N(\widetilde{V})\oplus R(\widetilde{V}^{\dagger});\) then there exist \(h\in\mathcal{H}\) and \(\xi\in N(\widetilde{V})\) such that \(\eta=\widetilde{V}^{\dagger}h+\xi.\) For \(m\geq 1,\) we have

\[P_{m+1}\widetilde{V}\eta=\widetilde{V}_{m+1}\widetilde{V}^{\dagger(m+1)}\widetilde{V}\widetilde{V}^{\dagger}h=\widetilde{V}_{m+1}\widetilde{V}^{\dagger(m+1)}h=P_{m+1}h\quad\text{and}\]

\[\widetilde{V}(I_{E}\otimes P_{m})\eta=\widetilde{V}_{m+1}\widetilde{V}^{\dagger(m+1)}h+\widetilde{V}_{m+1}(I_{E}\otimes\widetilde{V}^{\dagger(m)})\xi=P_{m+1}h+\widetilde{V}_{m+1}(I_{E}\otimes\widetilde{V}^{\dagger(m)})\xi.\]

Since \(\widetilde{V}^{\dagger}\) is a generalized inverse of \(\widetilde{V}\) and \((\sigma,V)\) is regular, mathematical induction together with Proposition 2.2 gives \((I_{E}\otimes\widetilde{V}^{\dagger(m)})N(\widetilde{V})\subseteq N(\widetilde{V}_{m+1}).\) Therefore \(\widetilde{V}(I_{E}\otimes P_{m})\eta=P_{m+1}h,\) and thus \(P_{m+1}\widetilde{V}\eta=\widetilde{V}(I_{E}\otimes P_{m})\eta\) for all \(\eta\in E\otimes\mathcal{H}.\) Letting \(m\to\infty,\) we obtain \(P\widetilde{V}=\widetilde{V}(I_{E}\otimes P),\) and hence \(R(P)=R^{\infty}(\widetilde{V})\) reduces \((\sigma,V).\) Consequently, \(R(Q)=R(P)^{\perp}\) also reduces \((\sigma,V).\)
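As a sanity check on Theorem 4.3, one can truncate the simplest example, the unilateral shift (again the single-operator case \(E=\mathbb{C}\)), to a finite matrix. The sketch below is purely illustrative and uses an assumed truncation size \(m\); for the shift, \(R^{\infty}(\widetilde{V})=\{0\}\), so the wandering-subspace sum in item (3) reconstructs the identity, as in the classical Wold decomposition.

```python
import numpy as np

m = 8
V = np.zeros((m, m)); V[1:, :-1] = np.eye(m - 1)   # truncated unilateral shift
Vd = np.linalg.pinv(V)                              # equals V.T (partial isometry)

# (1): P_E = I - V V^dagger projects onto E = R(V)^perp = span{e_0}.
P_E = np.eye(m) - V @ Vd
print(np.allclose(P_E, np.diag([1.0] + [0.0] * (m - 1))))     # True

# (2): V^n (V^dagger)^n is the projection onto R(V^n) = span{e_n, ..., e_{m-1}}.
P3 = np.linalg.matrix_power(V, 3) @ np.linalg.matrix_power(Vd, 3)
print(np.allclose(P3, np.diag([0.0] * 3 + [1.0] * (m - 3))))  # True

# (3): the wandering-subspace sum of V^n P_E (V^dagger)^n recovers Q = I - P
#      (here P = 0, so Q is the identity).
Q = sum(np.linalg.matrix_power(V, n) @ P_E @ np.linalg.matrix_power(Vd, n)
        for n in range(m))
print(np.allclose(Q, np.eye(m)))                              # True
```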
The next result is an analogue of [14, Lemma 3.1].

**Lemma 4.4**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range such that \((I_{E}\otimes\widetilde{V}_{i}\widetilde{V}_{i}^{*})N(\widetilde{V})\subseteq N(\widetilde{V})\) for all \(1\leq i\leq n;\) then \(R(\widetilde{V}_{i+1})\) is closed for \(1\leq i\leq n.\)_

Proof.: We prove this by mathematical induction. Since \(R(\widetilde{V})\) is closed, \(R(\widetilde{V}^{*})\) is also closed. Suppose that \(R(\widetilde{V}_{i}^{*})\) is closed; we want to prove that \(R(\widetilde{V}_{i+1}^{*})\) is closed. Let \(\zeta\in\overline{R(\widetilde{V}_{i+1}^{*})}\subseteq\overline{R(I_{E}\otimes\widetilde{V}_{i}^{*})}=R(I_{E}\otimes\widetilde{V}_{i}^{*});\) then there exists \(\eta\in E\otimes\mathcal{H}\) such that \(\zeta=(I_{E}\otimes\widetilde{V}_{i}^{*})\eta.\) Since \(\eta\in E\otimes\mathcal{H}=N(\widetilde{V})\oplus N(\widetilde{V})^{\perp}=N(\widetilde{V})\oplus R(\widetilde{V}^{*}),\) we can write \(\eta=\xi+\widetilde{V}^{*}h\) for some \(\xi\in N(\widetilde{V})\) and \(h\in\mathcal{H}.\) Since \((I_{E}\otimes\widetilde{V}_{i}\widetilde{V}_{i}^{*})N(\widetilde{V})\subseteq N(\widetilde{V})\) for \(1\leq i\leq n,\) we get

\[\widetilde{V}_{i+1}\zeta=\widetilde{V}_{i+1}(I_{E}\otimes\widetilde{V}_{i}^{*})\xi+\widetilde{V}_{i+1}(I_{E}\otimes\widetilde{V}_{i}^{*})\widetilde{V}^{*}h=\widetilde{V}(I_{E}\otimes\widetilde{V}_{i}\widetilde{V}_{i}^{*})\xi+\widetilde{V}_{i+1}\widetilde{V}_{i+1}^{*}h=\widetilde{V}_{i+1}\widetilde{V}_{i+1}^{*}h,\]

and hence \(\zeta-\widetilde{V}_{i+1}^{*}h\in N(\widetilde{V}_{i+1}).\) Since \(\zeta\in\overline{R(\widetilde{V}_{i+1}^{*})}\) and \(\widetilde{V}_{i+1}^{*}h\in\overline{R(\widetilde{V}_{i+1}^{*})},\) we get \(\zeta-\widetilde{V}_{i+1}^{*}h\in\overline{R(\widetilde{V}_{i+1}^{*})}=N(\widetilde{V}_{i+1})^{\perp}.\) Therefore \(\zeta=\widetilde{V}_{i+1}^{*}h\in R(\widetilde{V}_{i+1}^{*}).\) It follows that \(R(\widetilde{V}_{i+1}^{*})\) is closed, and consequently \(R(\widetilde{V}_{i+1})\) is also closed.

**Theorem 4.5**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range such that \((I_{E}\otimes\widetilde{V}_{i}\widetilde{V}_{i}^{*})N(\widetilde{V})\subseteq N(\widetilde{V})\) for all \(1\leq i\leq n;\) then \(\widetilde{V}_{i+1}^{\dagger}=\widetilde{V}^{\dagger(i+1)}\) on \(R(\widetilde{V}_{i+1}).\)_

Proof.: From Lemma 4.4, \(R(\widetilde{V}_{i+1})\) is closed for \(1\leq i\leq n.\) We argue by mathematical induction. For \(i=1\): since \(E^{\otimes 2}\otimes\mathcal{H}=N(\widetilde{V}_{2})\oplus N(\widetilde{V}_{2})^{\perp},\) for \(h\in R(\widetilde{V}_{2})\subseteq R(\widetilde{V})\) there exist \(\zeta\in N(\widetilde{V}_{2})^{\perp}\) and \(\eta\in N(\widetilde{V})^{\perp}\) such that \(h=\widetilde{V}_{2}\zeta\) and \(h=\widetilde{V}\eta.\) It follows that \(\eta-(I_{E}\otimes\widetilde{V})\zeta\in N(\widetilde{V}).\) Since \(\zeta\in N(\widetilde{V}_{2})^{\perp}=R(\widetilde{V}_{2}^{*}),\) we have \(\zeta=(I_{E}\otimes\widetilde{V}^{*})v\) for some \(v\in N(\widetilde{V})^{\perp},\) and hence \((I_{E}\otimes\widetilde{V})\zeta=(I_{E}\otimes\widetilde{V}\widetilde{V}^{*})v\in(I_{E}\otimes\widetilde{V}\widetilde{V}^{*})N(\widetilde{V})^{\perp}\subseteq N(\widetilde{V})^{\perp}\) (as \(I_{E}\otimes\widetilde{V}\widetilde{V}^{*}\) is self-adjoint and leaves \(N(\widetilde{V})\) invariant). Therefore \(\eta-(I_{E}\otimes\widetilde{V})\zeta\in N(\widetilde{V})^{\perp}\cap N(\widetilde{V})=\{0\},\) and thus \(\eta=(I_{E}\otimes\widetilde{V})\zeta.\) Since \(N(\widetilde{V}_{2})^{\perp}\subseteq N(I_{E}\otimes\widetilde{V})^{\perp}\) and \(\widetilde{V}^{\dagger}\widetilde{V}=P_{N(\widetilde{V})^{\perp}},\) we have

\[\widetilde{V}^{\dagger(2)}h=(I_{E}\otimes\widetilde{V}^{\dagger})\widetilde{V}^{\dagger}h=(I_{E}\otimes\widetilde{V}^{\dagger})\widetilde{V}^{\dagger}\widetilde{V}\eta=(I_{E}\otimes\widetilde{V}^{\dagger})\eta=(I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V})\zeta=P_{N(I_{E}\otimes\widetilde{V})^{\perp}}\zeta=\zeta=P_{N(\widetilde{V}_{2})^{\perp}}\zeta=\widetilde{V}_{2}^{\dagger}\widetilde{V}_{2}\zeta=\widetilde{V}_{2}^{\dagger}h.\]
Assume that \(\widetilde{V}_{i+1}^{\dagger}=\widetilde{V}^{\dagger(i+1)}\) on \(R(\widetilde{V}_{i+1}).\) Let \(h\in R(\widetilde{V}_{i+2})\subseteq R(\widetilde{V});\) then there exist \(\zeta\in N(\widetilde{V}_{i+2})^{\perp}\) and \(\eta\in N(\widetilde{V})^{\perp}\) such that \(h=\widetilde{V}_{i+2}\zeta\) and \(h=\widetilde{V}\eta.\) It follows that \(\eta-(I_{E}\otimes\widetilde{V}_{i+1})\zeta\in N(\widetilde{V}),\) that \(\widetilde{V}^{\dagger}h=\eta,\) and that

\[\widetilde{V}_{i+2}^{\dagger}h=\widetilde{V}_{i+2}^{\dagger}\widetilde{V}_{i+2}\zeta=P_{N(\widetilde{V}_{i+2})^{\perp}}\zeta=\zeta.\]

Since \(\zeta\in N(\widetilde{V}_{i+2})^{\perp}=R(\widetilde{V}_{i+2}^{*}),\) we have \(\zeta=(I_{E}\otimes\widetilde{V}_{i+1}^{*})v\) for some \(v\in N(\widetilde{V})^{\perp},\) and thus \((I_{E}\otimes\widetilde{V}_{i+1})\zeta=(I_{E}\otimes\widetilde{V}_{i+1}\widetilde{V}_{i+1}^{*})v\in(I_{E}\otimes\widetilde{V}_{i+1}\widetilde{V}_{i+1}^{*})N(\widetilde{V})^{\perp}\subseteq N(\widetilde{V})^{\perp}.\) Therefore \(\eta-(I_{E}\otimes\widetilde{V}_{i+1})\zeta\in N(\widetilde{V})^{\perp}\cap N(\widetilde{V})=\{0\},\) and hence \(\eta=(I_{E}\otimes\widetilde{V}_{i+1})\zeta\in R(I_{E}\otimes\widetilde{V}_{i+1}).\) Since \(\zeta\in N(\widetilde{V}_{i+2})^{\perp}\subseteq N(I_{E}\otimes\widetilde{V}_{i+1})^{\perp},\) \(\eta\in R(I_{E}\otimes\widetilde{V}_{i+1}),\) and \(\widetilde{V}_{i+1}^{\dagger}=\widetilde{V}^{\dagger(i+1)}\) on \(R(\widetilde{V}_{i+1}),\) we have

\[\widetilde{V}^{\dagger(i+2)}h=(I_{E}\otimes\widetilde{V}^{\dagger(i+1)})\widetilde{V}^{\dagger}h=(I_{E}\otimes\widetilde{V}^{\dagger(i+1)})\eta=(I_{E}\otimes\widetilde{V}_{i+1}^{\dagger})\eta=(I_{E}\otimes\widetilde{V}_{i+1}^{\dagger}\widetilde{V}_{i+1})\zeta=P_{N(I_{E}\otimes\widetilde{V}_{i+1})^{\perp}}\zeta=\zeta.\]

This completes the proof.
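For the truncated shift used above, the kernel condition of Theorem 4.5 holds, and the conclusion can be observed numerically. The sketch below is again a single-operator toy check (\(E=\mathbb{C}\)) with an assumed truncation size, not a proof; for the shift, the global equality \((V^{k})^{\dagger}=(V^{\dagger})^{k}\) happens to hold, which is stronger than the restriction to \(R(\widetilde{V}_{i+1})\) asserted by the theorem.

```python
import numpy as np

m = 8
V = np.zeros((m, m)); V[1:, :-1] = np.eye(m - 1)   # truncated shift, N(V) = span{e_{m-1}}
Vd = np.linalg.pinv(V)

# Kernel condition (case i = 1): V V^* maps N(V) into N(V); here it fixes e_{m-1}.
e_last = np.zeros(m); e_last[-1] = 1.0
print(np.allclose(V @ V.T @ e_last, e_last))        # True

# Conclusion (single-operator analogue): pinv(V^k) = pinv(V)^k.
for k in range(2, 5):
    lhs = np.linalg.pinv(np.linalg.matrix_power(V, k))
    rhs = np.linalg.matrix_power(Vd, k)
    print(k, np.allclose(lhs, rhs))                 # all True
```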
## 5. Cauchy dual of concave covariant representations

In this section, we discuss the Cauchy dual of concave completely bounded covariant representations. The following definitions draw inspiration from a paper by Ezzahraoui, Mbekhta and Zerouali [9, Section 4].

**Definition 5.1**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}.\)_

* _The representation \((\sigma,V)\) is said to be_ hyponormal modulo _\(N(\widetilde{V})\) if_ \[\|(I_{E}\otimes\widetilde{V}^{*})\eta\|\leq\|\widetilde{V}\eta\|\quad\text{for every }\eta\in N(\widetilde{V})^{\perp};\]
* _The representation \((\sigma,V)\) is called_ \(n\)-expansive modulo _\(N(\widetilde{V})\) if_ \[\sum_{j=0}^{n}(-1)^{j}\binom{n}{j}\|(I_{E^{\otimes(n-j)}}\otimes\widetilde{V}_{j})\xi\|^{2}\leq 0\quad\text{for all }\xi\in N(I_{E^{\otimes(n-1)}}\otimes\widetilde{V})^{\perp}.\]

**Definition 5.2**.: _If a c.b.c. representation \((\sigma,V)\) of \(E\) on \(\mathcal{H}\) satisfies_

\[\|\widetilde{V}_{2}\zeta\|^{2}+\|\zeta\|^{2}\leq 2\|(I_{E}\otimes\widetilde{V})\zeta\|^{2}\quad\text{for all }\zeta\in N(I_{E}\otimes\widetilde{V})^{\perp},\]

_then we say that the representation \((\sigma,V)\) is concave modulo \(N(\widetilde{V}).\)_

**Example 5.3**.: _Suppose that \(E\) is an \(n\)-dimensional Hilbert space with orthonormal basis \(\{\delta_{i}\}_{i\in I_{n}}\) and \(\mathcal{H}\) is a Hilbert space with orthonormal basis \(\{e_{m}:m\geq 0\}.\) Let \((\rho,S^{w})\) be the unilateral weighted shift c.b.c. representation of \(E\) on \(\mathcal{H}\) defined by_

\[S^{w}(\delta_{i})=V_{i}\quad\text{and}\quad\rho(b)=bI_{\mathcal{H}},\ b\in\mathbb{C},\]

_where \(V_{i}(e_{m})=w_{i,m}e_{nm+i}\) for all \(m\geq 0,\) \(i\in I_{n},\) and \(\{w_{i,m}:i\in I_{n},\ m\geq 0\}\) is a bounded set of nonnegative real numbers. For \(i\in I_{n},\) it is easy to verify that \(V_{i}\) is concave if and only if_

\[w_{i,m}^{2}w_{i,nm+i}^{2}-2w_{i,m}^{2}+1\leq 0\quad\text{for all }m\geq 0. \tag{5.1}\]

_Fix \(m_{0}\in\mathbb{N}\cup\{0\}\) and let \(A=\{m_{0}\}.\) For \(i\in I_{n},\) we construct a sequence \(w_{A}\) given by \(w_{A}(m)=w_{i,m}\) for \(m\notin A\) and \(w_{A}(m_{0})=0.\) Since \(V_{i,A}(e_{m})=w_{A}(m)e_{nm+i},\) we have \(N(V_{i,A})=\mathrm{span}\{e_{m_{0}}\}.\) For \(i\in I_{n},\) we obtain_

\[\|V_{i,A}^{2}e_{m_{0}}\|^{2}+\|e_{m_{0}}\|^{2}-2\|V_{i,A}e_{m_{0}}\|^{2}=w_{i,m_{0}}^{2}w_{i,nm_{0}+i}^{2}+1-2w_{i,m_{0}}^{2}=1>0,\]

_and hence \(V_{i,A}\) is not concave. From Equation (5.1), \(V_{i,A}\) is concave modulo \(N(V_{i,A})\) for all \(i\in I_{n}\). Since \(R(V_{i})\perp R(V_{j})\) for distinct \(i,j\in I_{n},\) we get \(N(\widetilde{S}^{w_{A}})=\bigoplus_{i=1}^{n}\mathrm{span}\{\delta_{i}\otimes e_{m_{0}}\}.\) It is easy to see that \((\rho,S^{w_{A}})\) is concave modulo \(N(\widetilde{S}^{w_{A}}).\)_
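The weights in Example 5.3 can be tested numerically. The following sketch (with \(n=1\), a single shift, and an assumed finite truncation, both illustrative choices) uses Dirichlet-type weights \(w_{m}=\sqrt{(m+2)/(m+1)}\), which satisfy Equation (5.1) with equality; killing one weight as in the example destroys concavity at the corresponding basis vector while preserving concavity modulo the kernel.

```python
import numpy as np

m, m0 = 12, 4
w = np.sqrt((np.arange(m) + 2.0) / (np.arange(m) + 1.0))  # w_m^2 = (m+2)/(m+1)
# Dirichlet-type weights satisfy (5.1) with equality (up to rounding):
print(np.max(w[:-1]**2 * w[1:]**2 - 2 * w[:-1]**2 + 1))   # ~0

wA = w.copy(); wA[m0] = 0.0              # kill one weight as in Example 5.3
WA = np.zeros((m, m)); WA[1:, :-1] = np.diag(wA[:-1])     # truncated weighted shift

e = np.zeros(m); e[m0] = 1.0             # e_{m0} spans (part of) N(W_A)
defect = np.linalg.norm(WA @ WA @ e)**2 + 1 - 2 * np.linalg.norm(WA @ e)**2
print(defect)                            # 1.0 > 0: not concave, matching the example

# On indices orthogonal to the kernel (away from the truncation edge), the
# concavity defect of (5.1) stays <= 0: concave modulo N(W_A).
j = np.arange(m - 2); j = j[j != m0]
print(np.max(wA[j]**2 * wA[j + 1]**2 - 2 * wA[j]**2 + 1) <= 1e-12)   # True
```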
The next result is an analogue of [9, Proposition 4.1].

**Proposition 5.4**.: _Let \((\sigma,V)\) be a c.b.c. representation that is concave modulo \(N(\widetilde{V}).\) Then \((\sigma,V)\) has closed range._

Proof.: Let \((\widetilde{V}\eta_{n})\) be a sequence in \(R(\widetilde{V})\) converging to some \(h\in\mathcal{H};\) then there exists \((\eta_{n}^{\prime})\subseteq N(\widetilde{V})^{\perp}\) such that \(\widetilde{V}\eta_{n}=\widetilde{V}\eta_{n}^{\prime}.\) This implies that \((\widetilde{V}\eta_{n}^{\prime})\) is a Cauchy sequence. Since \(N(I_{E}\otimes\widetilde{V})^{\perp}=E\otimes N(\widetilde{V})^{\perp},\) for every \(\xi\in E\) we have

\[\|\xi\otimes\eta_{n}^{\prime}\|^{2}\leq 2\|(I_{E}\otimes\widetilde{V})(\xi\otimes\eta_{n}^{\prime})\|^{2}-\|\widetilde{V}_{2}(\xi\otimes\eta_{n}^{\prime})\|^{2}\leq 2\|(I_{E}\otimes\widetilde{V})(\xi\otimes\eta_{n}^{\prime})\|^{2}.\]

Applying this inequality to \(\xi\otimes(\eta_{n}^{\prime}-\eta_{m}^{\prime})\) shows that \((\xi\otimes\eta_{n}^{\prime})_{n\geq 0}\) is a Cauchy sequence, and hence \((\eta_{n}^{\prime})_{n\geq 0}\) is also a Cauchy sequence. Let \(\eta^{\prime}=\lim_{n\to\infty}\eta_{n}^{\prime};\) then \(h=\widetilde{V}\eta^{\prime}.\) Thus \(R(\widetilde{V})\) is closed.

We now present a useful connection between concavity modulo \(N(\widetilde{V})\) and the Moore-Penrose inverse.

**Proposition 5.5**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range. Then \((\sigma,V)\) is concave modulo \(N(\widetilde{V})\) if and only if_

\[\|\widetilde{V}_{2}\zeta\|^{2}+\|(I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V})\zeta\|^{2}-2\|(I_{E}\otimes\widetilde{V})\zeta\|^{2}\leq 0\quad\text{for all }\zeta\in E^{\otimes 2}\otimes\mathcal{H}. \tag{5.2}\]

Proof.: Suppose \((\sigma,V)\) is concave modulo \(N(\widetilde{V})\) and let \(\zeta\in E^{\otimes 2}\otimes\mathcal{H}.\) Since \((I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V})=P_{N(I_{E}\otimes\widetilde{V})^{\perp}}\) and \(\widetilde{V}\widetilde{V}^{\dagger}\widetilde{V}=\widetilde{V},\) we have

\[\|\widetilde{V}_{2}\zeta\|^{2}+\|(I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V})\zeta\|^{2}-2\|(I_{E}\otimes\widetilde{V})\zeta\|^{2}=\|\widetilde{V}_{2}(I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V})\zeta\|^{2}+\|(I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V})\zeta\|^{2}-2\|(I_{E}\otimes\widetilde{V})(I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V})\zeta\|^{2}\leq 0.\]

Conversely, let \(\zeta\in N(I_{E}\otimes\widetilde{V})^{\perp};\) then \((I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V})\zeta=\zeta.\) It follows that \(\|\widetilde{V}_{2}\zeta\|^{2}+\|\zeta\|^{2}-2\|(I_{E}\otimes\widetilde{V})\zeta\|^{2}\leq\|\zeta\|^{2}-\|(I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V})\zeta\|^{2}=0.\)

**Proposition 5.6**.: _Let \((\sigma,V)\) be a c.b.c. representation that is concave modulo \(N(\widetilde{V})\) and satisfies \((I_{E^{\otimes n}}\otimes\widetilde{V})N(I_{E^{\otimes n}}\otimes\widetilde{V})^{\perp}\subseteq N(I_{E^{\otimes(n-1)}}\otimes\widetilde{V})^{\perp}\) for every \(n\geq 1.\) Then \((\sigma,V)\) is expansive modulo \(N(\widetilde{V}),\) and consequently \((\sigma,V^{\prime})\) is contractive._

Proof.: We follow the idea of [26, Lemma 2.2]. Suppose \((\sigma,V)\) is concave modulo \(N(\widetilde{V});\) then

\[\|\widetilde{V}_{2}\zeta\|^{2}-\|\zeta\|^{2}\leq 2(\|(I_{E}\otimes\widetilde{V})\zeta\|^{2}-\|\zeta\|^{2})\quad\text{for }\zeta\in N(I_{E}\otimes\widetilde{V})^{\perp}.\]

For \(n\geq 1\) and \(y\in N(I_{E^{\otimes(n-1)}}\otimes\widetilde{V})^{\perp},\) mathematical induction yields

\[\|\widetilde{V}_{n}y\|^{2}-\|y\|^{2}\leq n(\|(I_{E^{\otimes(n-1)}}\otimes\widetilde{V})y\|^{2}-\|y\|^{2}).\]

This implies that \(\|y\|^{2}+n(\|(I_{E^{\otimes(n-1)}}\otimes\widetilde{V})y\|^{2}-\|y\|^{2})\geq 0,\) and thus \(\|(I_{E^{\otimes(n-1)}}\otimes\widetilde{V})y\|^{2}\geq\frac{n-1}{n}\|y\|^{2}.\) From the properties of the creation operators, we obtain

\[\|\widetilde{V}\xi\|^{2}\geq\frac{n-1}{n}\|\xi\|^{2}\quad\text{for all }\xi\in N(\widetilde{V})^{\perp}.\]

Letting \(n\to\infty,\) we get \(\|\xi\|\leq\|\widetilde{V}\xi\|,\) and hence \((\sigma,V)\) is expansive modulo \(N(\widetilde{V})\) with \(\gamma(\widetilde{V})\geq 1.\) From [23, Proposition 4.5], we conclude that \((\sigma,V^{\prime})\) is contractive.

The following theorem is the main result of this section; it is a generalization of [9, Theorem 4.1].
**Theorem 5.7**.: _Let \((\sigma,V)\) be a c.b.c. representation that is concave modulo \(N(\widetilde{V}).\) Then the Cauchy dual \((\sigma,V^{\prime})\) is hyponormal modulo \(N(\widetilde{V}).\)_

Proof.: Let \(\zeta\in E^{\otimes 2}\otimes\mathcal{H};\) we have

\[\|\widetilde{V}_{2}\zeta\|^{2}-\|(I_{E}\otimes\widetilde{V}^{*}\widetilde{V})\zeta\|^{2}\leq\|\widetilde{V}_{2}\zeta\|^{2}-\|(I_{E}\otimes\widetilde{V}^{*}\widetilde{V})\zeta\|^{2}+\|(I_{E^{\otimes 2}\otimes\mathcal{H}}-(I_{E}\otimes\widetilde{V}^{*}\widetilde{V}))\zeta\|^{2}=\|\widetilde{V}_{2}\zeta\|^{2}+\|\zeta\|^{2}-2\|(I_{E}\otimes\widetilde{V})\zeta\|^{2}.\]

For \(\zeta\in N(I_{E}\otimes\widetilde{V})^{\perp},\) since \((\sigma,V)\) is concave modulo \(N(\widetilde{V}),\) we have \(\|\widetilde{V}_{2}\zeta\|\leq\|(I_{E}\otimes\widetilde{V}^{*}\widetilde{V})\zeta\|.\) For every \(\eta\in E^{\otimes 2}\otimes\mathcal{H}=N(I_{E}\otimes\widetilde{V})\oplus N(I_{E}\otimes\widetilde{V})^{\perp},\) there exist \(\eta_{1}\in N(I_{E}\otimes\widetilde{V})\) and \(\zeta\in N(I_{E}\otimes\widetilde{V})^{\perp}\) such that \(\eta=\eta_{1}+\zeta.\) It follows that

\[\|\widetilde{V}_{2}\eta\|=\|\widetilde{V}_{2}\zeta\|\leq\|(I_{E}\otimes\widetilde{V}^{*}\widetilde{V})\zeta\|=\|(I_{E}\otimes\widetilde{V}^{*}\widetilde{V})\eta\|\quad\text{for }\eta\in E^{\otimes 2}\otimes\mathcal{H}.\]

Since \(R(\widetilde{V})\) is closed, \(\widetilde{V}^{\dagger}\) exists. From Proposition 3.3, we have

\[\widetilde{V}(I_{E}\otimes\widetilde{V}^{\prime}\widetilde{V}^{*}\widetilde{V})=\widetilde{V}(I_{E}\otimes P_{R(\widetilde{V})}\widetilde{V})=\widetilde{V}_{2},\]

and hence \(\|\widetilde{V}(I_{E}\otimes\widetilde{V}^{\prime}\widetilde{V}^{*}\widetilde{V})\eta\|=\|\widetilde{V}_{2}\eta\|\leq\|(I_{E}\otimes\widetilde{V}^{*}\widetilde{V})\eta\|\) for \(\eta\in E^{\otimes 2}\otimes\mathcal{H}.\) For arbitrary \(\eta\in E^{\otimes 2}\otimes\mathcal{H}=N(I_{E}\otimes\widetilde{V})\oplus R(I_{E}\otimes\widetilde{V}^{*}\widetilde{V}),\) there exist \(\zeta_{1}\in N(I_{E}\otimes\widetilde{V})\) and \(\zeta_{2}\in E^{\otimes 2}\otimes\mathcal{H}\) such that \(\eta=\zeta_{1}+(I_{E}\otimes\widetilde{V}^{*}\widetilde{V})\zeta_{2}.\) Since \(N(I_{E}\otimes\widetilde{V}^{\prime})=N(I_{E}\otimes\widetilde{V}),\) we get

\[\|\widetilde{V}(I_{E}\otimes\widetilde{V}^{\prime})\eta\|=\|\widetilde{V}(I_{E}\otimes\widetilde{V}^{\prime}\widetilde{V}^{*}\widetilde{V})\zeta_{2}\|\leq\|(I_{E}\otimes\widetilde{V}^{*}\widetilde{V})\zeta_{2}\|\leq\|\eta\|.\]

Thus \(\widetilde{T}:=\widetilde{V}(I_{E}\otimes\widetilde{V}^{\prime})\) is a contraction.
For \(\eta\in E^{\otimes 2}\otimes\mathcal{H}\) and \(b\in\mathcal{B},\) we have

\[\widetilde{T}(\phi_{2}(b)\otimes I_{\mathcal{H}})\eta=\widetilde{V}(\phi(b)\otimes I_{\mathcal{H}})(I_{E}\otimes\widetilde{V}^{\prime})\eta=\sigma(b)\widetilde{T}\eta.\]

Therefore \((\sigma,T)\) is a completely contractive covariant representation of \(E^{\otimes 2}\) on \(\mathcal{H}.\) Using Proposition 3.3, we get

\[\widetilde{T}^{*}\widetilde{V}^{\prime}\widetilde{V}^{*}=(I_{E}\otimes\widetilde{V}^{\prime})^{*}\widetilde{V}^{*}\widetilde{V}^{\prime}\widetilde{V}^{*}=(I_{E}\otimes\widetilde{V}^{\prime})^{*}P_{R(\widetilde{V}^{*})}\widetilde{V}^{*}=(I_{E}\otimes\widetilde{V}^{\prime})^{*}\widetilde{V}^{*}.\]

This gives \(\|(I_{E}\otimes\widetilde{V}^{\prime})^{*}\widetilde{V}^{*}h\|=\|\widetilde{T}^{*}\widetilde{V}^{\prime}\widetilde{V}^{*}h\|\leq\|\widetilde{V}^{\prime}\widetilde{V}^{*}h\|\) for every \(h\in\mathcal{H};\) thus \((\sigma,V^{\prime})\) is hyponormal modulo \(N(\widetilde{V}).\)

The next result is an application of Proposition 5.6 and Theorem 5.7, and is a generalization of [6, Theorem 2.9].

**Corollary 5.8**.: _Let \((\sigma,V)\) be a concave c.b.c. representation of \(E\) on \(\mathcal{H};\) then the Cauchy dual \((\sigma,V^{\prime})\) is hyponormal and contractive._

**Corollary 5.9**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}\) such that_

\[\|(I_{E}\otimes\widetilde{V})\zeta+\eta\|^{2}\leq 2(\|\zeta\|^{2}+\|\widetilde{V}\eta\|^{2})\quad\text{for all }\zeta\in E^{\otimes 2}\otimes\mathcal{H}\text{ and }\eta\in E\otimes\mathcal{H}.\]

_Then \((\sigma,V)\) is hyponormal and contractive._

Proof.: Suppose that \((\sigma,V)\) satisfies \(\|(I_{E}\otimes\widetilde{V})\zeta+\eta\|^{2}\leq 2(\|\zeta\|^{2}+\|\widetilde{V}\eta\|^{2})\) for \(\zeta\in E^{\otimes 2}\otimes\mathcal{H}\) and \(\eta\in E\otimes\mathcal{H};\) then by [26, Theorem 3.13], the Cauchy dual \((\sigma,V^{\prime})\) is concave. Since \(\widetilde{V}^{\prime\prime}=\widetilde{V},\) applying Corollary 5.8 to \((\sigma,V^{\prime})\) shows that \((\sigma,V)\) is hyponormal and contractive.
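In the single-operator case, Corollary 5.8 can be probed numerically via the identity \(\widetilde{V}^{\prime}=(\widetilde{V}^{\dagger})^{*}\) (which is consistent with the relations \(\widetilde{V}^{*}\widetilde{V}^{\prime}=\widetilde{V}^{\dagger}\widetilde{V}\) used in this section). The sketch below is a hedged illustration with an assumed finite truncation: it builds a concave weighted shift from the Dirichlet-type weights of the earlier sketch and checks that its Cauchy dual is a contraction satisfying the hyponormality inequality; the last diagonal entry is excluded as an artifact of cutting the shift off at dimension \(m\).

```python
import numpy as np

m = 12
w = np.sqrt((np.arange(m) + 2.0) / (np.arange(m) + 1.0))
W = np.zeros((m, m)); W[1:, :-1] = np.diag(w[:-1])   # concave weighted shift

Wp = np.linalg.pinv(W).T                 # Cauchy dual W' = (W^dagger)^*
print(np.linalg.norm(Wp, 2) <= 1 + 1e-12)            # contractive: True

# Hyponormality: W'^* W' - W' W'^* (diagonal for a weighted shift) is
# nonnegative away from the truncation edge.
D = Wp.T @ Wp - Wp @ Wp.T
print(np.all(np.diag(D)[:-1] >= -1e-12))             # True
```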
**Corollary 5.10**.: _Let \((\sigma,V)\) be a c.b.c. representation of \(E\) on \(\mathcal{H}\) with closed range such that_

\[\|(I_{E}\otimes\widetilde{V})\zeta+(I_{E}\otimes\widetilde{V}\widetilde{V}^{\dagger})\widetilde{V}^{\dagger}\widetilde{V}\eta\|^{2}\leq 2(\|\zeta\|^{2}+\|\widetilde{V}\eta\|^{2}) \tag{5.3}\]

_for all \(\zeta\in E^{\otimes 2}\otimes\mathcal{H}\) and \(\eta\in E\otimes\mathcal{H}.\) Then \((\sigma,V)\) is hyponormal modulo \(N(\widetilde{V}).\)_

Proof.: Suppose \((\sigma,V)\) satisfies \(\|(I_{E}\otimes\widetilde{V})\zeta+(I_{E}\otimes\widetilde{V}\widetilde{V}^{\dagger})\widetilde{V}^{\dagger}\widetilde{V}\eta\|^{2}\leq 2(\|\zeta\|^{2}+\|\widetilde{V}\eta\|^{2}).\) Putting \(h=\widetilde{V}\eta,\) we have

\[\|(I_{E}\otimes\widetilde{V})\zeta+(I_{E}\otimes\widetilde{V}\widetilde{V}^{\dagger})\widetilde{V}^{\dagger}h\|^{2}\leq 2(\|\zeta\|^{2}+\|h\|^{2}) \tag{5.4}\]

for \(\zeta\in E^{\otimes 2}\otimes\mathcal{H}\) and \(h\in R(\widetilde{V}).\) Define an operator \(X:(E^{\otimes 2}\otimes\mathcal{H})\oplus\mathcal{H}\to E\otimes\mathcal{H}\) by \(X(\zeta,h)=(I_{E}\otimes\widetilde{V})\zeta+(I_{E}\otimes\widetilde{V}\widetilde{V}^{\dagger})\widetilde{V}^{\dagger}h\) for all \(\zeta\in E^{\otimes 2}\otimes\mathcal{H}\) and \(h\in\mathcal{H}.\) From Equation (5.4), it is easy to verify that \(\|X\|\leq\sqrt{2},\) and hence \(XX^{*}\leq 2I_{E\otimes\mathcal{H}},\) which yields

\[(I_{E}\otimes\widetilde{V}\widetilde{V}^{*})+(I_{E}\otimes\widetilde{V}\widetilde{V}^{\dagger})\widetilde{V}^{\dagger}\widetilde{V}^{\prime}(I_{E}\otimes\widetilde{V}\widetilde{V}^{\dagger})\leq 2I_{E\otimes\mathcal{H}}. \tag{5.5}\]

Multiplying Equation (5.5) by \((I_{E}\otimes\widetilde{V}^{\dagger})\) on the left and by \((I_{E}\otimes\widetilde{V}\widetilde{V}^{\prime})\) on the right, and using \(\widetilde{V}^{*}\widetilde{V}^{\prime}=\widetilde{V}^{\dagger}\widetilde{V}\) and \(\widetilde{V}\widetilde{V}^{\dagger}\widetilde{V}^{\prime}=\widetilde{V}^{\prime},\) we get

\[(I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V})+(I_{E}\otimes\widetilde{V}^{\dagger})\widetilde{V}^{\dagger}\widetilde{V}^{\prime}_{2}\leq 2(I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V}^{\prime}).\]

It gives \(\|\widetilde{V}^{\prime}_{2}\zeta\|^{2}-2\|(I_{E}\otimes\widetilde{V}^{\prime})\zeta\|^{2}+\|(I_{E}\otimes\widetilde{V}^{\dagger}\widetilde{V})\zeta\|^{2}\leq 0\) for all \(\zeta\in E^{\otimes 2}\otimes\mathcal{H}.\) Since \(\widetilde{V}^{*}\widetilde{V}^{\prime}=\widetilde{V}^{\dagger}\widetilde{V},\) we have

\[\|\widetilde{V}^{\prime}_{2}\zeta\|^{2}-2\|(I_{E}\otimes\widetilde{V}^{\prime})\zeta\|^{2}+\|(I_{E}\otimes\widetilde{V}^{*}\widetilde{V}^{\prime})\zeta\|^{2}\leq 0.\]

Using Proposition 5.5, we get that \((\sigma,V^{\prime})\) is concave modulo \(N(\widetilde{V}^{\prime})=N(\widetilde{V}).\) From Theorem 5.7, \((\sigma,V)\) is hyponormal modulo \(N(\widetilde{V}).\)

### Acknowledgment

The author thanks Harsh Trivedi and Shankar Veerabathiran for fruitful discussions. The author is supported by a UGC fellowship (File No: 16-6(DEC. 2018)/2019(NET/CSIR)) and acknowledges the Centre for Mathematical & Financial Computing and the DST-FIST grant for financial support of the computing lab facility under the scheme FIST (File No: SR/FST/MS-I/2018/24) at the LNMIIT, Jaipur.
2304.08372
On the dimension theory of random walks and group actions by circle diffeomorphisms
We establish new results on the dimensional properties of measures and invariant sets associated to random walks and group actions by circle diffeomorphisms. This leads to several dynamical applications. Among the applications, we show, strengthening a recent result of Deroin-Kleptsyn-Navas [24], that the minimal set of a finitely generated group of real-analytic circle diffeomorphisms, if exceptional, must have Hausdorff dimension less than one. Moreover, if the minimal set contains a fixed point of multiplicity k + 1 of a diffeomorphism in the group, then its Hausdorff dimension must be greater than k/(k + 1). These results generalize classical results about Fuchsian group actions on the circle to non-linear settings. This work is built on three novel components, each of which holds independent interest: a structure theorem for smooth random walks on the circle, several dimensional properties of smooth random walks on the circle, and a dynamical generalization of the critical exponent of Fuchsian groups.
Weikun He, Yuxiang Jiao, Disheng Xu
2023-04-17T15:35:06Z
http://arxiv.org/abs/2304.08372v2
# On dimension theory of random walks and group actions by circle diffeomorphisms

###### Abstract

In this paper we establish several results on the dimensional properties of invariant measures and sets associated to random walks and group actions by circle diffeomorphisms. Our main results include the exact dimensionality and a dimension formula for stationary measures, variational principles for dimensions in various settings, and estimates of the Hausdorff dimensions of exceptional minimal sets. We also prove an approximation theorem for random walks on the circle which is analogous to the results of Katok, Avila-Crovisier-Wilkinson and Morris-Shmerkin. The proofs of our results are based on a combination of techniques, including a new structure theorem for smooth random walks on the circle, a dynamical generalization of the critical exponent of Fuchsian groups, and some novel arguments inspired by the study of fractal geometry, hyperbolic geometry and holomorphic dynamics.

###### Contents

* 1 Introduction
  * 1.1 Group actions by real analytic circle diffeomorphisms
  * 1.2 Existence of subsystems approximating Lyapunov exponents, entropy, dimension simultaneously
  * 1.3 Key ingredients and some byproducts of the proof
  * 1.4 Organization of the paper
  * Acknowledgement
* 2 Statements of the main results
  * 2.1 Exact dimensionality of stationary measures
  * 2.2 Dimension formulas for smooth actions on the interval and circle
  * 2.3 Variational principle for dimensions
  * 2.4 Dynamical critical exponents
  * 2.5 Conformal measures
  * 2.6 Approximation by uniformly hyperbolic subsystems
  * 2.7 List of proofs
* 3 Preliminaries
  * 3.1 Group actions on the circle
  * 3.2 Dimension and entropy
  * 3.3 Distortion estimates
  * 3.4 Iterated Function Systems
* 4 Random walks on \(\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\)
  * 4.1 Preliminaries on random walks
  * 4.2 Generalities on random transformations
  * 4.3 The structure of random walks on \(\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\)
  * 4.4 Basic properties of stationary measures
  * 4.5 Uniform good words
  * 4.6 Effective convergence to the Furstenberg boundary
* 5 Exact dimensionality of stationary measures
  * 5.1 Preparation
  * 5.2 Exact dimensionality
* 6 Dimension Formulas
  * 6.1 Dimension formula on the circle
  * 6.2 Dimension formula on the interval
* 7 Approximation with a uniformly hyperbolic subsystem
  * 7.1 Hyperbolic elements
  * 7.2 Perfect pingpong pairs
  * 7.3 Proof of Theorem J
* 8 Variational principle for dimensions
  * 8.1 Elements with controlled contracting rates
  * 8.2 Proof of the variational principle
  * 8.3 Minimal sets of sub-semigroups
* 9 The dynamical critical exponents
  * 9.1 The \(C^{1}\) dynamical critical exponent
  * 9.2 The \(C^{2}\) dynamical critical exponent
* 10 Groups with parabolic elements
  * 10.1 Growth of derivatives around a parabolic fixed point
  * 10.2 Boosting the dynamical critical exponent
* 11 The dynamical critical exponent and conformal measures
  * 11.1 Conformal measures
  * 11.2 Basic properties of the dynamical critical exponent on sets
  * 11.3 The dynamical critical exponent for real analytic groups
  * 11.4 Existence of atomless conformal measures
  * 11.5 The dynamical critical exponent at a point
* 12 Additional proofs and further discussions
  * 12.1 Additional proofs of main results
  * 12.2 Comparison with the critical exponent of Fuchsian groups
  * 12.3 Some counterexamples

## 1 Introduction

The dimension of fractal sets and associated measures defined through dynamical systems has been extensively researched, including its relationships
with other dynamically-defined objects and properties such as entropies, Lyapunov exponents, critical exponents, and uniformity of hyperbolicity. This topic has been explored in various areas of mathematics, such as the dimension of Julia sets in holomorphic dynamics [7, 8, 73, 17, 55, 25, 1]; the dimension of attractors and invariant measures of smooth dynamical systems and their relationship with entropies and Lyapunov exponents [45, 46, 48, 49, 79, 10]; the study of self-affine sets/measures of iterated function systems (IFS) in fractal geometry [38, 31, 77, 42, 27, 28, 9, 57]; the study of stationary measures for random walks [39, 51, 64, 50]; and the dimension of limit sets of hyperbolic group actions [76, 74, 15, 13, 62, 11].

In this paper, we focus on the dimension theory of smooth (orientation preserving) group actions and random walks on the circle. We establish several results concerning the dimensions of invariant sets and the associated stationary measures. In this introduction, to illustrate the main results, we present some example theorems in a special case. More precise and stronger statements of each part are provided in Section 2. As in many known results, for simplicity, we only consider orientation preserving homeomorphisms in this paper. Notice that the general case reduces to this one, as the subgroup of orientation-preserving elements has index two in the original group and hence carries all of its relevant dynamical properties. Throughout the paper, we use base-2 logarithms.

### 1.1 Group actions by real analytic circle diffeomorphisms

A classical theorem about group actions on the circle states that for any subgroup \(G\subset\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) without finite orbits, there exists a unique \(G\)-invariant minimal set that is either the circle itself or homeomorphic to the Cantor set [58, Theorem 2.1.1]. Since we are primarily interested in the dimension theory of fractal sets, we focus on the latter case, which is usually referred to as the _exceptional minimal set_ of \(G\). In Section 1.1, unless otherwise stated, **we always assume that \(G\) is a finitely generated subgroup of \(\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) and admits a unique \(G\)-invariant exceptional minimal set, denoted by \(\Lambda=\Lambda_{G}\).** For some of the results below, the regularity assumption \(C^{\omega}\) can be relaxed to \(C^{2}\) (see Footnote 1) or even \(C^{1+}\).

Footnote 1: Sometimes some extra assumptions may be needed, cf. Section 2 for more details.

**Exact dimensionality and dimension formula of stationary measures.** A Borel measure \(\nu\) is said to be exact dimensional if the pointwise dimension (see Section 3.2) exists and is equal to a constant \(\alpha\) for \(\nu\)-a.e. point. In this case we say \(\dim\nu=\alpha\). A fundamental result by Young [79] shows that if \(\nu\) is exact dimensional then \(\dim_{\mathrm{H}}(\nu)=\dim\nu\). In [39], Hochman and Solomyak studied the \(\mu\)-stationary measure \(\nu\) (also known as the Furstenberg measure) on the projective line for a general probability measure \(\mu\) on \(\mathrm{SL}(2,\mathbb{R})\); they showed that \(\nu\) is exact dimensional and satisfies a Ledrappier-Young type formula, namely
\[\dim\nu=\frac{h_{\mathrm{F}}(\mu)}{2\chi(\mu)}=\min\left\{1,\frac{h_{\mathrm{RW}}(\mu)}{2\chi(\mu)}\right\},\]

provided \(\mathrm{supp}\,\mu\) generates an unbounded and totally irreducible subgroup of \(\mathrm{SL}(2,\mathbb{R})\) and \(\mu\) satisfies a Diophantine condition (see Footnote 2). (Ledrappier [50] showed a slightly weaker version of the first equality, namely \(\log\nu(B_{r}(x))/\log r\to h_{\mathrm{F}}(\mu)/2\chi(\mu)\) in \(\nu\)-probability as \(r\to 0\).) Here \(h_{\mathrm{F}}(\mu)\) and \(h_{\mathrm{RW}}(\mu)\) are the Furstenberg entropy and the random walk entropy of \(\mu\), respectively (Definitions 2.3 and 2.5), and \(\chi(\mu)\) is the Lyapunov exponent of the random walk with law \(\mu\); cf. [10, 38, 31, 50, 51, 64], etc., for more results on exact dimensionality and dimension formulas in different settings.

Our first result is an analogue of the results of [39] in the setting of circle diffeomorphisms. For a \(\mu\)-ergodic stationary measure \(\nu\) on the circle, we denote by \(\lambda(\mu,\nu)\) the Lyapunov exponent of \(\nu\) (Definition 2.2).

**Theorem A** (see Footnote 3)**.**: _Let \(\mu\) be a finitely supported probability measure on \(G\) whose support generates \(G\). Then any ergodic \(\mu\)-stationary measure \(\nu\) is exact dimensional. Moreover, we have_

\[\dim\nu=\frac{h_{\mathrm{F}}(\mu,\nu)}{-\lambda(\mu,\nu)}=\frac{h_{\mathrm{RW}}(\mu)}{-\lambda(\mu,\nu)}<1.\]

Footnote 3: cf. Sections 2.1 and 2.2 for more discussions and a \(C^{2}\) version of Theorem A.

**Variational principles for dimensions.** In [31], Feng and Hu studied the dimensional properties of conformal IFSs, established the exact dimensionality of the invariant measures of such IFSs, and further showed a variational principle between the Hausdorff dimension of the attractor and that of the invariant measures. We show the following variational principle in the same spirit for group actions; see also [57] for 2-dimensional affine analogues in this direction.

**Theorem B**.: _For any \(\varepsilon>0\), there exists a finitely supported probability measure \(\mu\) on \(G\) which admits a \(\mu\)-stationary measure \(\nu\) on \(\Lambda\) such that_

\[\dim_{\mathrm{H}}\nu>\dim_{\mathrm{H}}\Lambda-\varepsilon.\]

Using the proof of Theorem B, we can obtain further results demonstrating the approximation of the Hausdorff dimension of \(\Lambda\) by an attractor (see Section 3.4) \(\Lambda^{\prime}\subset\Lambda\) of a contracting IFS with the separation property (which is the simplest model of IFS).

**Theorem C**.: _For any \(\varepsilon>0\), there exist a closed interval \(I\subset\mathbb{S}^{1}\) and \(g_{1},\ldots,g_{\ell}\in G\) such that \(\{g_{i}|_{I}\}_{i=1}^{\ell}\) defines a contracting IFS satisfying \(g_{i}(I)\cap g_{j}(I)=\varnothing\) for \(i\neq j\). Moreover, the associated attractor \(\Lambda^{\prime}\) satisfies_

\[\Lambda^{\prime}\subset\Lambda\quad\text{and}\quad\dim_{\mathrm{H}}\Lambda^{\prime}>\dim_{\mathrm{H}}\Lambda-\varepsilon.\]
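To illustrate the simplest model appearing in Theorem C: for a contracting IFS of \(\ell\) similarity maps with a common contraction ratio \(\rho\) and pairwise disjoint images, the Hausdorff dimension of the attractor is given by Moran's classical formula. The sketch below is a hedged illustration only, not the method of this paper; for general (non-similarity) conformal IFSs one must solve a pressure equation instead.

```python
import numpy as np

def moran_dimension(num_maps: int, ratio: float) -> float:
    # Moran's equation: num_maps * ratio**s = 1, so s = log(num_maps) / log(1/ratio).
    return np.log(num_maps) / np.log(1.0 / ratio)

# Middle-thirds Cantor set: two maps of ratio 1/3.
print(moran_dimension(2, 1 / 3))   # 0.6309... = log 2 / log 3
# Four maps of ratio 1/5: dimension log 4 / log 5.
print(moran_dimension(4, 1 / 5))   # 0.8613...
```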
**Dynamical critical exponents and estimates of Hausdorff dimensions of exceptional minimal sets.** The critical exponent is a fundamental numerical invariant associated with a discrete subgroup \(\Gamma\) of the isometry group of the \(n\)-dimensional hyperbolic space \(\mathbb{H}^{n}\). It measures the asymptotic growth rates of \(\Gamma\)-orbits in hyperbolic space. Sullivan's influential paper [74] established a relationship between the critical exponent and the Hausdorff dimension of the radial limit set \(\Lambda(\Gamma)\), which extended earlier pioneering work by Patterson [62]. In this paper, we introduce a dynamical analogue of the critical exponent for subgroups of diffeomorphisms of the circle having a unique minimal set. Our investigation leads to several new applications and estimates of the Hausdorff dimensions of exceptional minimal sets.

**Definition 1.1**.: For a subgroup \(H\) of \(\mathrm{Diff}^{1}_{+}(\mathbb{S}^{1})\) with a unique minimal set \(\Lambda_{H}\), we define the _dynamical critical exponent_ of \(H\) by

\[\delta(H)=\lim_{\varepsilon\to 0^{+}}\limsup_{n\to+\infty}\frac{1}{n}\log\#\{\,g\in H:\exists x\in\Lambda_{H},\,g^{\prime}|_{B(x,\varepsilon)}\geqslant 2^{-n}\,\}. \tag{1.1}\]

We call \(\delta(\,\boldsymbol{\cdot}\,)\) the _dynamical_ critical exponent because, unlike the classical critical exponent, which is determined by geometric information on hyperbolic spaces, it is defined solely in terms of the local contraction rates of the group elements acting on the circle; this makes it easier to generalize to more general group actions.

The first application of the dynamical critical exponent is an analogue, in the setting of subgroups of diffeomorphisms of the circle, of Sullivan's theorem relating the classical critical exponent of a Fuchsian group to the Hausdorff dimension of its limit set.

**Theorem D**.: \(\dim_{\mathrm{H}}\Lambda=\delta(G)\).

_Remark 1.2_.: We also have a \(C^{2}\)-version of Theorem D (see Theorem 2.17). However, unlike the case of Fuchsian groups, the restriction that \(x\in\Lambda_{H}\) in (1.1) is necessary for Theorem D. Further discussions on this topic can be found in Section 11 and Example 12.8.

Although the definition of the dynamical critical exponent involves the minimal set \(\Lambda\), which makes it a priori difficult to compute, it is still very useful for estimating the Hausdorff dimension of exceptional minimal sets. For instance, a classical result in the study of Fuchsian groups (cf. [11, 61]) states that if a Fuchsian group contains a parabolic element, then its limit set has Hausdorff dimension greater than \(1/2\). Similarly, a result in holomorphic dynamics (cf. [1, Theorem 8.5]) states that for the Julia set \(J(T)\) of a parabolic rational map \(T:\bar{\mathbb{C}}\to\bar{\mathbb{C}}\), we have

\[\dim_{\mathrm{H}}(J(T))>\max_{\omega}\frac{p(\omega)}{p(\omega)+1}\geqslant\frac{1}{2},\]

where \(p(\,\cdot\,)\) is a positive integer corresponding to the multiplicity of certain periodic points of \(T\) (see [1] for the definitions of parabolic rational maps and of \(p(\,\cdot\,)\)). As a consequence of the study of the dynamical critical exponent, we generalize the above-mentioned estimates to general real analytic circle diffeomorphisms. Recall that a fixed point \(x_{0}\) of a diffeomorphism \(f\in\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) is said to be of _multiplicity_ \(k+1\) if \(x_{0}\) is a \((k+1)\)-multiple zero of the function \(\varphi(x)=f(x)-x\), that is,

\[\varphi(x_{0})=\varphi^{\prime}(x_{0})=\cdots=\varphi^{(k)}(x_{0})=0\text{ and }\varphi^{(k+1)}(x_{0})\neq 0.\]
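The multiplicity convention can be checked symbolically on a model map. Below is a hedged sketch: the map \(f(x)=x/(1+x)\) and the local coordinate near the fixed point \(x_{0}=0\) are illustrative assumptions (a parabolic Mobius map viewed in a chart), not a construction from this paper. Here \(\varphi\) has a double zero, so \(x_{0}\) has multiplicity \(2\), i.e., \(k=1\), consistent with the Fuchsian threshold \(1/2\) appearing in Theorem E below.

```python
import sympy as sp

x = sp.symbols('x')
f = x / (1 + x)                 # local form of a parabolic Mobius map fixing 0
phi = sp.simplify(f - x)        # phi(x) = -x**2/(x + 1)

# phi(0) = phi'(0) = 0 and phi''(0) != 0, so the fixed point has
# multiplicity 2, i.e., k = 1 in the notation above.
print([sp.diff(phi, x, n).subs(x, 0) for n in range(3)])   # [0, 0, -2]
```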
**Theorem E**.: _Let \(k\) be a positive integer. If \(G\) contains a nontrivial element with a fixed point of multiplicity at least \(k+1\) on \(\Lambda\), then_

\[\dim_{\mathrm{H}}\Lambda>\frac{k}{k+1}.\]

_Remark 1.3_.: Contrary to the case of Fuchsian groups, the assumption that the parabolic fixed point belongs to \(\Lambda\) is necessary in the case \(G\subset\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\); see Example 12.8 for a counterexample and further discussions.

The following conjecture in the study of group actions on the circle, mainly motivated by the theory of codimension-one foliations, is due to Hector, Ghys and Sullivan; see also [59, Question 14] and [36, 41] for corresponding results in foliation theory.

**Conjecture:** Let \(H\) be a finitely generated subgroup of \(\mathrm{Diff}^{2}(\mathbb{S}^{1})\) which admits an exceptional minimal set \(\Lambda_{H}\). Is \(\mathrm{Leb}(\Lambda_{H})=0\)?

Important recent progress towards this question was made by Deroin, Kleptsyn and Navas in [24], where the conjecture was confirmed for \(G\subset\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\). In this paper, we strengthen their result as follows:

**Theorem F**.: _Let \(G\) be a finitely generated subgroup of \(\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) which admits an exceptional minimal set \(\Lambda\); then \(0<\dim_{\mathrm{H}}\Lambda<1\)._

A classical result by Beardon [11] (see also the fruitful developments in this direction by Patterson and Sullivan [62, 74], etc.) says that the limit set of a Fuchsian group of the second type has Hausdorff dimension less than \(1\). Theorem F extends these results to the non-linear case.

_Remark 1.4_.: In [75], Sullivan proved Theorem F in the \(C^{2}\) case under an extra expansion assumption.

As a consequence, the Hausdorff dimensions (of measures, attractors, minimal sets) that occur in the previous theorems are all less than \(1\). Combined with the study of the dynamical critical exponent, this yields the following corollary; see also [36] for a related result by Hector.

**Corollary G**.: _There exists an upper bound \(N=N(G)<\infty\) such that no nontrivial element of \(G\) has a fixed point with multiplicity higher than \(N\)._

**Orbit closure classifications.** The classification of orbit closures of group actions is a natural problem in the study of homogeneous dynamics, e.g., Ratner's proof of the Raghunathan conjecture [65]. Recent developments in this area include [12, 14, 26, 16], among others. A direct consequence of our study of the dynamical critical exponent is the following orbit closure classification result in the setting of real analytic group actions on the circle.

**Theorem H**.: _Let \(H\) be a finitely generated group of real analytic diffeomorphisms of the circle. Then we have the following orbit closure classification, i.e., any \(H\)-orbit closure is_

1. _an infinite countable set; in this case_ \(H\) _is isomorphic to either_ \(\mathbb{Z}\times\mathbb{Z}/k\mathbb{Z}\) _or a semi-direct product of_ \(\mathbb{Z}\) _and_ \(\mathbb{Z}/(2k\mathbb{Z})\) _with the presentation_ \(\left\langle a,b\ |\ bab^{-1}=a^{-1},\ b^{2k}=1\right\rangle,\) _for some positive integer_ \(k\)_;_
2. _or a submanifold, i.e. a finite set, a finite union of closed intervals or the whole circle;_
3. _or the union of a countable set with the unique_ \(H\)_-invariant exceptional minimal set_ \(\Lambda_{H}\)_, which has Hausdorff dimension_ \(\delta(H)\)_._

_Remark 1.5_.: The group \(H\cong\left\langle a,b\ |\ bab^{-1}=a^{-1},\ b^{2k}=1\right\rangle\) is solvable but not nilpotent and satisfies that \(H^{2}=\{g^{2}:g\in H\}\) is abelian, so it falls into the first case of the classification of solvable group actions on the circle in [18, Theorem 1.9]; see Remark 12.1 for a realization of \(H\) in \(\mathrm{Diff}^{\omega}_{+}(\mathbb{S}^{1})\). For \(k=1\), it is just \(D_{\infty}\), the infinite dihedral group. In general, every solvable subgroup of \(\mathrm{Diff}^{\omega}_{+}(\mathbb{S}^{1})\) is metabelian [33], while every nilpotent subgroup of \(\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) is abelian according to [63]; see also [30].

A corollary of Theorem H is the following orbit closure classification result.

**Corollary I**.: _Let \(H=\left\langle f_{1},\ldots,f_{n}\right\rangle\subset\mathrm{Diff}^{\omega}_{+}(\mathbb{S}^{1}),\ n\geqslant 2,\) be a rank \(n\) free group freely generated by \(f_{1},\ldots,f_{n}\). If_

\[\max_{1\leqslant i\leqslant n}\left\{\|(f_{i}{}^{-1})^{\prime}\|_{C^{0}},\|f_{i}^{\prime}\|_{C^{0}}\right\}\leqslant(2n-1),\]

_then the closure of any \(H\)-orbit is a submanifold of \(\mathbb{S}^{1}\). Moreover, the bound \((2n-1)\) is sharp._

### 1.2 Existence of subsystems approximating Lyapunov exponents, entropy, dimension simultaneously

A famous theorem by A. Katok [43] asserts that any ergodic hyperbolic measure of a \(C^{1+}\) diffeomorphism can be approximated by a horseshoe with approximately the same entropy. In [4, Theorem 3.3], Avila, Crovisier and Wilkinson further showed that it is possible to let the horseshoe have a dominated splitting, with approximately the same Lyapunov exponents. In the case of 2D matrix-valued cocycles over a full shift of finite type, Morris and Shmerkin [57] showed that any cocycle with distinct Lyapunov exponents can be approximated by a subsystem with approximately the same entropy and Lyapunov exponents, additionally having a dominated splitting. Notice that a dominated splitting can be translated into cone field conditions for dynamics on projective spaces (cf. [6]). We show an analogue of these results in the setting of random walks by circle diffeomorphisms.

Recall that for a discrete measure \(\mu\) we denote by \(H(\mu)\) the entropy of \(\mu\) (cf. Section 3.2). For a group \(H\) and a subset \(\Gamma\subset H^{N}\), we write \(\Gamma^{*}:=\{f_{N}\cdots f_{1}:(f_{1},\ldots,f_{N})\in\Gamma\}\).

**Theorem J**.: _Let \(\mathcal{S}\subset\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) be a finite set without common invariant probability measures, \(\mu\) a nondegenerate probability measure supported on \(\mathcal{S}\), and \(\varepsilon>0\). Then there exist \(d,r,N\in\mathbb{Z}^{+}\) and \(\Gamma\subset\mathcal{S}^{N}\), where \(d,r\) depend only on \(\mathcal{S}\), such that_

1. _there are exactly_ \(d\) _ergodic_ \(\mu\)_-stationary measures_ \(\nu_{i}\)_, and the Lyapunov exponent_ \(\lambda_{i}\) _of each_ \(\nu_{i}\) _is negative;_
2. _\(\#\Gamma\geqslant 2^{N(H(\mu)-\varepsilon)};\)_
3. _there exist_ \(dr\) _disjoint open intervals_ \(\left\{U_{i,j}\right\}_{1\leqslant i\leqslant d,\ 1\leqslant j\leqslant r}\) _such that_ \(f(\overline{U_{i,j}})\subset U_{i,j}\) _and_ \(f^{\prime}|_{U_{i,j}}\in[2^{N(\lambda_{i}-\varepsilon)},2^{N(\lambda_{i}+\varepsilon)}]\) _for any_ \(f\in\Gamma^{*},\ 1\leqslant i\leqslant d,\ 1\leqslant j\leqslant r;\)_
4. _the semigroup generated by_ \(\Gamma^{*}\) _has a unique minimal set_ \(K_{i,j}\) _in each_ \(U_{i,j}\)_, with_ \(\dim_{\mathrm{H}}(K_{i,j})\geqslant\dim\nu_{i}-\varepsilon\)_._

Here (2) can be viewed as an approximation of the entropy by a subsystem, and (3) as an approximation of the Lyapunov exponents (with a dominated splitting). See Section 7.3 for different variants of Theorem J and more discussion.

### 1.3 Key ingredients and some byproducts of the proof

One of the key arguments hidden behind our proof is a _structure theorem for random walks by circle diffeomorphisms_, Theorem 4.12. Roughly speaking, considering the skew product system associated to a given random walk, we show that along every regular orbit the fibered dynamics behaves like finitely many copies of a north-south dynamics with strong rotational symmetry (see Theorems 4.12 and 4.15 for details and more discussion). In particular, we have the following interesting corollary. Recall that by [52], for any subsemigroup \(T\) of \(\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) without finite orbits, the number of \(T\)-invariant minimal sets is finite. For a semigroup \(T\), we denote by \(T^{-1}\) the semigroup generated by the inverses of the elements of \(T\).

**Corollary K**.: _Let \(T\subset\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) be a semigroup without finite orbits. Then \(T\) and \(T^{-1}\) have the same number of minimal sets._

Another important consequence of the structure theorem is the existence of _perfect pingpong pairs_ (Definition 7.3), which is one of the main technical tools in the proof of Theorem D. Roughly speaking, a perfect pingpong pair is a pair of circle diffeomorphisms whose joint dynamics consists of finitely many copies of the classical ping-pong dynamics, with uniform hyperbolicity. Recall that in [53], Margulis (cf. also [22, 40, 34]) proved a conjecture of Ghys by showing that any group acting by circle homeomorphisms without an invariant Borel probability measure contains a free non-abelian subgroup. Under an extra \(C^{2}\) assumption, we obtain the following extension of Margulis' theorem.

**Corollary L** (see Footnote 4)**.**: _Let \(T\subset\mathrm{Diff}^{2}(\mathbb{S}^{1})\) be a semigroup with no invariant probability measure on \(\mathbb{S}^{1}.\) Then there exists a perfect pingpong pair \((h_{1},h_{2})\subset T.\)_

Footnote 4: Note that the \(C^{2}\) assumption of Corollary L can be relaxed to \(C^{1}\). The \(C^{1}\) case is more complicated and will be proved in a forthcoming paper of the second and third authors.

In our proof, we develop a series of covering arguments that turn out to be an important tool. These arguments use the classical Vitali covering argument, as well as insights drawn from the study of hyperbolic geometry. Specifically, we are motivated by the role of non-radial limit points in the limit set in hyperbolic geometry; in our setting, we find that the orbits of non-expandable (NE) points play similar roles. To this end, we utilize recent studies and basic properties of non-expandable points, e.g., properties \((\star),(\Lambda\star)\) (Definitions 2.14 and 2.15) by Deroin-Kleptsyn-Navas [23, 24], among others.
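The north-south picture behind the structure theorem, and the negativity of the Lyapunov exponents in Theorem J, can be observed in a toy model. The following sketch is a hedged illustration only: the two hyperbolic elements of \(\mathrm{SL}(2,\mathbb{R})\) acting projectively on the circle are assumed for illustration and are not the setting of this paper. It estimates the Lyapunov exponent of the projective action along a random orbit, using base-2 logarithms as in the paper; the output is strictly negative, reflecting random contraction toward the attracting directions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = 1.0
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
A1 = np.diag([2.0, 0.5])      # hyperbolic element of SL(2, R)
A2 = R @ A1 @ R.T             # conjugate element with a different axis

v = np.array([1.0, 0.3]); v /= np.linalg.norm(v)
lyap, N = 0.0, 100_000
for _ in range(N):
    A = A1 if rng.random() < 0.5 else A2
    w = A @ v
    nw = np.linalg.norm(w)
    lyap += -2.0 * np.log2(nw)   # log-derivative of the projective action at [v]
    v = w / nw
print(lyap / N)                  # strictly negative: fibers contract
```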
Additionally, we incorporate new arguments partially inspired by works in various areas. For example, in the proof of the exact dimensionality of stationary measures, in addition to employing the structure theorem mentioned above and a strategy similar to that of [39], we adapt the _hyperbolic time_ technique of Pesin theory to the setting of random dynamics; in the proof of the dimension formula, we apply entropy arguments for counting, partially inspired by the study of dimension theory for IFSs ([38], etc.). We also extend some techniques of Patterson-Sullivan theory ([62, 74], etc.) and of holomorphic dynamics ([75, 1], etc.); this includes constructing non-atomic conformal measures to aid in estimating the Hausdorff dimension, among other methods.

### 1.4 Organization of the paper

As mentioned earlier, in Section 2 we restate the main results in more precise and technical forms, including the strongest versions that our arguments yield. Sometimes this requires distinguishing between a \(C^{2}\) version and a real analytic version. Additionally, we provide a list of where the proofs of the results presented in the first two sections can be found.

After recalling some preliminaries in Section 3, this paper is divided into two parts. The first part, consisting of Sections 4 to 7, is dedicated to establishing the dimension theory of smooth random walks on the circle. Section 4 focuses on studying the structure of smooth random walks on the circle and developing useful technical tools to control the global dynamics, which is crucial preparation for the later studies. In Sections 5 and 6, we prove the exact dimensionality and the dimension formulas for ergodic stationary measures, respectively. The final section of this part, Section 7, concerns approximating a smooth random walk on the circle by a uniformly hyperbolic subsystem, which corresponds to Theorem J and its variants.

The second part, comprising Sections 8-11, focuses on the Hausdorff dimension of the minimal set of a group action on the circle. In Section 8, we discuss the variational principle for the dimension of minimal sets. In the next three sections, we establish the theory of dynamical critical exponents. In Section 9, we prove the identity between the Hausdorff dimension of the minimal set and the dynamical critical exponent of the group, building upon the constructions in Section 8 and the dimension formula established in the first part. In Section 10, we apply this identity to provide a lower bound on the Hausdorff dimension when the group contains parabolic elements. In Section 11, we generalize the definition of the dynamical critical exponent to various subsets of the circle, which allows us to construct conformal measures on the minimal set and derive some other estimates of its dimension.

The final section, Section 12, completes the proofs of some of the main results and provides further discussion. We discuss the relationship between the dynamical critical exponent and the critical exponent of Fuchsian groups, and show how to deduce a classical result. Additionally, we present some counterexamples that demonstrate the necessity of certain conditions in our definitions, highlighting the differences between smooth group actions and the canonical actions of Fuchsian groups.
D. X. is supported by NSFC grant 12090015. Part of this work was done during D. X.'s visits to AMSS and the University of Chicago; D. X. thanks both institutions for their hospitality. We thank A. Avila for the suggestion to study Theorem J. We thank A. Wilkinson for useful suggestions and for pointing out that an earlier version of Theorem H could be improved. We thank Jialun Li, Wenyu Pan, Shaobo Gan, Yi Shi, Wenyuan Yang and Minghui Ouyang for useful discussions.

**Notation** We summarize here our main notation and conventions.

\begin{tabular}{l l} \hline \(\log\) & Logarithm with base 2. \\ \(d(\,\boldsymbol{\cdot}\,,\,\boldsymbol{\cdot})\) & The metric on \(\mathbb{S}^{1}\). \\ \(B(x,\rho)\) & The open ball with center \(x\) and radius \(\rho\). \\ \(A^{(\rho)}\) & The \(\rho\)-neighborhood of \(A\), i.e., \(\bigcup_{x\in A}B(x,\rho)\). \\ \(|I|\) & Lebesgue measure of the interval \(I\). \\ \(tI\) & The interval with the same center as \(I\) and \(t\)-times its length, \(t>0\). \\ \(\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) & The group of orientation preserving circle homeomorphisms. \\ \(\mathrm{Diff}^{1(\mathrm{resp.}\,2,\omega)}_{+}(\mathbb{S}^{1})\) & The group of \(C^{1}\) (resp. \(C^{2}\), \(C^{\omega}\)) orientation preserving circle diffeomorphisms. \\ \(C^{1(\mathrm{resp.}\,2)}_{+}(I,I)\) & The semigroup of \(C^{1}\) (resp. \(C^{2}\)) orientation preserving maps on \(I\) without critical points. \\ \(\Lambda\) & The minimal set of a subgroup of \(\mathrm{Homeo}_{+}(\mathbb{S}^{1})\) that has no finite orbits. \\ \(\mu\) & A finitely supported probability measure on \(\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) or \(C^{2}_{+}(I,I)\). \\ \(\nu\) & A probability measure on \(\mathbb{S}^{1}\) or \(I\); usually \(\nu\) is taken to be a stationary measure. \\ \(\mathcal{S}\) & A finite subset of \(\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) or \(C^{2}_{+}(I,I)\), usually the support of \(\mu\). \\ \(\mu^{*n}\) & \(n\)-fold convolution of a measure \(\mu\) with itself in a (semi)group. \\ \(\mathcal{A}^{*n},\mathcal{A}^{*\leqslant n}\) & Set of products of \(n\) (resp. at most \(n\)) elements of a subset \(\mathcal{A}\) in a (semi)group. \\ \(T_{\mu}\) & The semigroup generated by \(\mathrm{supp}\,\mu\). \\ \(\Sigma,\Sigma^{+},\Sigma^{-}\) & The underlying spaces \(\Sigma=\mathcal{S}^{\mathbb{Z}}\), \(\Sigma^{+}=\mathcal{S}^{\mathbb{Z}_{\geqslant 0}}\) and \(\Sigma^{-}=\mathcal{S}^{\mathbb{Z}_{<0}}\), see Section 4.1. \\ \(\omega,\omega^{+},\omega^{-}\) & Elements in \(\Sigma,\Sigma^{+},\Sigma^{-}\), respectively. \\ \(\sigma\) & The left shift map on \(\Sigma\) or \(\Sigma^{+}\). \\ \(\mathbf{P},\mathbf{P}^{+},\mathbf{P}^{-}\) & The probability measures on the underlying spaces, with \(\mathbf{P}=\mu^{\mathbb{Z}}\), see Section 4.1. \\ \(F,F^{+},f^{n}_{\omega},f^{n}_{\omega^{+}}\) & The cocycle over \((\Sigma,\sigma)\) or \((\Sigma^{+},\sigma)\), see Section 4.1. \\ \(\pi^{\pm}\) & Projections of \(\Sigma\) down to \(\Sigma^{\pm}\). \\ \(P,Q\) & Projections of \(\Sigma\times\mathbb{S}^{1}\) down to \(\Sigma\) and \(\mathbb{S}^{1}\), respectively. \\ \(\varkappa,\widetilde{\varkappa}\) & The distortion coefficient and distortion norm on an interval, see Section 3.3. \\ \(\mu^{+},\mu^{-}\) & The probability measures \(\mu\) and \(\mu^{*(-1)}\), see Section 4.2. \\ \(d,r\) & The constants of a random walk induced by \(\mu\), see Theorems 4.12 and 4.15. \\ \([d]\) & The set \(\{0,\ldots,d-1\}\), equipped with addition modulo \(d\). \\ \(\nu^{\pm}_{i},i\in[d]\).
& The ergodic \(\mu^{\pm}\)-stationary measures, see Theorem 4.12. \\ \(m^{\pm}_{i},i\in[d]\). & The ergodic \(u/s\)-states on \(\Sigma\times\mathbb{S}^{1}\), see Theorem 4.12. \\ \(\lambda^{\pm}_{i},i\in[d]\). & The Lyapunov exponents corresponding to \(\nu^{\pm}_{i}\), see Theorem 4.15. \\ \(\mathbb{S}_{k}\) & Family of \(k\)-element subsets of \(\mathbb{S}^{1}\) equipped with a metric, see Section 4.3. \\ \(\Pi(\omega,i),\Xi(\omega,i)\) & The maps from \(\Sigma\) to \(\mathbb{S}_{r}\), see Theorem 4.12. \\ \(\Pi(\omega),\Xi(\omega)\) & The maps from \(\Sigma\) to \(\mathbb{S}_{dr}\), see Theorem 4.12. \\ \(W^{s}(\omega,i),W^{u}(\omega,i)\) & The \(s\)-manifolds and \(u\)-manifolds, see (4.4) and Theorem 4.15. \\ \(\Sigma_{\varepsilon}\) & A set of uniform good words with parameter \(\varepsilon>0\), see Proposition 4.22. \\ \(\underline{x}\) & An element of \(\mathbb{S}_{k}\), i.e., a \(k\)-element subset of \(\mathbb{S}^{1}\). \\ \(u_{\underline{x}}\) & Uniform measure on \(\underline{x}\subset\mathbb{S}^{1}\), see Section 4.3. \\ \(\underline{d}(\,\boldsymbol{\cdot}\,,\,\boldsymbol{\cdot})\) & The metric on \(\mathbb{S}_{k}\) induced by \(d(\,\boldsymbol{\cdot}\,,\,\boldsymbol{\cdot})\), see Section 4.3. \\ \(\underline{\nu}^{\pm}_{i},i\in[d]\) & The probability measures on \(\mathbb{S}_{r}\) corresponding to \(\nu^{\pm}_{i}\), see Section 4.6. \\ \(H(\mu)\) & The discrete entropy of a finitely supported probability measure \(\mu\). \\ \(h_{\mathrm{RW}}(\mu),h_{\mathrm{F}}(\mu,\nu)\) & The random walk entropy and Furstenberg entropy, see Definitions 2.5 and 2.3. \\ \(H(\nu,\mathcal{A}),H(\nu,\mathcal{A}|\mathcal{B})\) & The Shannon entropy and conditional Shannon entropy. \\ \(\mathcal{D}_{n}\) & The level-\(n\) dyadic partition of \(\mathbb{S}^{1}\). \\ \(\dim\nu\) & The exact dimension of a probability measure (if it exists), see Definition 2.1. \\ \(H^{\alpha}(\,\cdot\,)\) & \(\alpha\)-Hausdorff outer measure. \\ \(\dim_{\mathrm{H}}E,\dim_{\mathrm{H}}\nu\) & Hausdorff dimension of a set \(E\) or a measure \(\nu\). \\ \(\delta(G),\delta_{2}(G)\) & The \(C^{1},C^{2}\) dynamical critical exponents of \(G\), see Definition 1.1 and Section 2.4. \\ \(\delta(G,\Delta)\) & The dynamical critical exponent of \(G\) on \(\Delta\), see Section 2.4. \\ \(\phi\ll\psi,\phi\gg\psi\) & \(\phi\leqslant C\psi\) (resp. \(\psi\leqslant C\phi\)) where \(C>0\) is an absolute constant. \\ \(\phi\ll_{\square}\psi,\phi\gg_{\square}\psi\) & \(\phi\leqslant C\psi\) (resp. \(\psi\leqslant C\phi\)) where the constant \(C>0\) only depends on \(\square\). \\ \(\phi\asymp_{\square}\psi\) & \(\phi\ll_{\square}\psi\) and \(\psi\ll_{\square}\phi\). \\ \hline \end{tabular}

## 2 Statements of the main results

### Exact dimensionality of stationary measures

Let us first recall two notions of dimension for Borel measures.

**Definition 2.1**.: Let \(\nu\) be a Borel probability measure on \(\mathbb{S}^{1}\).

1. The _Hausdorff dimension_ of \(\nu\) is defined by \[\dim_{\mathrm{H}}\nu\coloneqq\inf\left\{\dim_{\mathrm{H}}E:\nu(E)>0\right\},\] where \(\dim_{\mathrm{H}}E\) denotes the Hausdorff dimension of a set \(E\), see Section 3.2.
2. We say \(\nu\) is _exact dimensional_ if there exists a constant \(\alpha\) such that \[\lim_{\rho\to 0^{+}}\frac{\log\nu(B(x,\rho))}{\log\rho}=\alpha,\ \ \ \ \nu-\mathrm{a.e.}\ x.\] In this case, we call \(\alpha\) the _exact dimension_ of \(\nu\) and denote it by \(\dim\nu\).

While a Borel probability measure \(\nu\) may not be exact dimensional, its Hausdorff dimension is always defined.
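For instance, Lebesgue measure on \(\mathbb{S}^{1}\) is exact dimensional with \(\dim\nu=1\), and the uniform measure on a middle-thirds Cantor set placed in a chart is exact dimensional with \(\dim\nu=\log 2/\log 3\). On the other hand, a measure such as \(\frac{1}{2}\delta_{x_{0}}+\frac{1}{2}\mathrm{Leb}\) is not exact dimensional: its local dimension is \(0\) at \(x_{0}\) and \(1\) at Lebesgue-almost every other point.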
However, if \(\nu\) is exact dimensional, then various notions of dimension coincide. In particular, the Hausdorff dimension and the entropy dimension are both equal to \(\dim\nu\), see Section 3.2. Now, let \(\mu\) be a finitely supported probability measure on \(\mathrm{Diff}^{1}_{+}(\mathbb{S}^{1})\). A Borel probability measure \(\nu\) on \(\mathbb{S}^{1}\) is said to be _\(\mu\)-stationary_ if \[\nu=\mu*\nu=\int f_{*}\nu\,\mathrm{d}\mu(f),\] where \(f_{*}\nu\) is the pushforward of \(\nu\) by the diffeomorphism \(f\). We call a \(\mu\)-stationary measure _ergodic_ if it cannot be written as a nontrivial convex combination of two \(\mu\)-stationary measures.

**Definition 2.2**.: The _Lyapunov exponent_ of an ergodic \(\mu\)-stationary measure \(\nu\) is \[\lambda(\mu,\nu)\coloneqq\iint\log f^{\prime}(x)\,\mathrm{d}\mu(f)\,\mathrm{d}\nu(x).\]

**Definition 2.3**.: Let \(\nu\) be a \(\mu\)-stationary measure; the _Furstenberg entropy_ of \((\mu,\nu)\) is defined by \[h_{\mathrm{F}}(\mu,\nu)\coloneqq\iint\log\frac{\mathrm{d}f_{*}\nu}{\mathrm{d}\nu}(x)\,\mathrm{d}f_{*}\nu(x)\mathrm{d}\mu(f).\]

In this paper, we show the exact dimensionality of stationary measures for general \(C^{2}\) random walks on the circle, generalizing the results of [39, 50] to the smooth setting.

**Theorem 2.4**.: _Let \(\mu\) be a finitely supported probability measure on \(\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) such that \(\mathrm{supp}\,\mu\) does not preserve any probability measure on \(\mathbb{S}^{1}.\) Let \(\nu\) be an ergodic \(\mu\)-stationary measure on \(\mathbb{S}^{1},\) then \(\nu\) is exact dimensional and_ \[\dim\nu=\frac{h_{\mathrm{F}}(\mu,\nu)}{|\lambda(\mu,\nu)|}.\]

Here we assume that the support of \(\mu\) does not preserve any probability measure. While this assumption may appear restrictive, it is actually quite mild: if it were not satisfied, then either the support of \(\mu\) would have a finite orbit, or it would lie within an abelian group. In the latter case, the associated stationary measure can fail to be exact dimensional if its rotation number is Liouville, as shown in [71].

### Dimension formulas for smooth actions on the interval and circle

In general, computing the Furstenberg entropy appearing in Theorem 2.4 can be very difficult. In practice, it may be necessary to use alternative quantities to determine the exact dimension of the stationary measure. Under certain discreteness or separation assumptions, the random walk entropy is a viable substitute for \(h_{\mathrm{F}}\) in Theorem 2.4.

**Definition 2.5**.: For a finitely supported measure \(\mu\) on a group \(G,\) the _Shannon entropy_ \(H(\mu)\) of \(\mu\) is given by \(-\sum_{f\in\operatorname{supp}\mu}\mu(f)\log\mu(f)\). Let \(\mu^{*n}\) denote the \(n\)-fold convolution of \(\mu\) in \(G.\) The _random walk entropy_ of \(\mu\) is then defined as \[h_{\mathrm{RW}}(\mu)\coloneqq\lim_{n\to\infty}\frac{1}{n}H(\mu^{*n}).\]

By sub-additivity, the limit is guaranteed to exist, as shown in [44], and it characterizes the degree of non-freeness of the semigroup generated by \(\operatorname{supp}\mu\). Specifically, we always have \(h_{\mathrm{RW}}(\mu)\leqslant H(\mu),\) and the two are equal if and only if the semigroup generated by \(\operatorname{supp}\mu\) is freely generated by it. In general, \(h_{\mathrm{RW}}\) provides an upper bound for \(h_{\mathrm{F}},\) as can be seen, for example, in [39].
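For example, if the semigroup generated by \(\operatorname{supp}\mu=\{f_{1},\ldots,f_{k}\}\) is free on these \(k\) generators and \(\mu\) is uniform, then \(\#\operatorname{supp}\mu^{*n}=k^{n}\) and \(H(\mu^{*n})=n\log k\), so \(h_{\mathrm{RW}}(\mu)=H(\mu)=\log k\); any relation among products of the generators forces \(H(\mu^{*n})<n\log k\) for all large \(n\).

The other quantity entering the dimension formulas, the Lyapunov exponent of Definition 2.2, is easy to probe numerically. The following minimal sketch is purely illustrative and is used nowhere in the proofs; the choice of Möbius generators, parameters and sample sizes is ours. It estimates \(\lambda(\mu,\nu)\) by a Birkhoff average of \(\log f^{\prime}\) along one random orbit, for \(\mu\) uniform on two hyperbolic Möbius diffeomorphisms of the circle.

```python
import numpy as np

rng = np.random.default_rng(1)

# A hyperbolic Mobius map of the unit disk restricted to the circle |z| = 1:
# g(z) = (a z + b) / (conj(b) z + conj(a)) with |a|^2 - |b|^2 = 1, so that
# g'(z) = 1 / (conj(b) z + conj(a))^2 and log|g'(z)| = -2 log|conj(b) z + conj(a)|.
def mobius(a, b):
    a, b = complex(a), complex(b)
    g = lambda z: (a * z + b) / (np.conj(b) * z + np.conj(a))
    log_dg = lambda z: -2.0 * np.log(abs(np.conj(b) * z + np.conj(a)))
    return g, log_dg

t = 1.0  # translation length parameter, an arbitrary choice
maps = [mobius(np.cosh(t), np.sinh(t)),       # hyperbolic, fixed points +1 and -1
        mobius(np.cosh(t), 1j * np.sinh(t))]  # hyperbolic, fixed points +i and -i

# Birkhoff average of log f' along one random forward orbit.  The two
# generators have no common invariant probability measure (their fixed-point
# pairs are disjoint), so the average settles on a negative value.
z, total, n = np.exp(0.3j), 0.0, 200_000
for _ in range(n):
    g, log_dg = maps[rng.integers(2)]
    total += log_dg(z)
    z = g(z)
    z /= abs(z)  # guard against floating-point drift off the unit circle

# Divide by log 2 to report the exponent in this paper's base-2 convention.
print("Lyapunov exponent estimate:", total / n / np.log(2))
```

The negative estimate reflects the contraction of forward random orbits discussed in Section 4.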
In this paper we are often interested in groups which are (\(C^{1}\)-)locally discrete in \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\). Recall the definition of local discreteness given below.

**Definition 2.6**.: A group \(G\subset\operatorname{Diff}_{+}^{1}(\mathbb{S}^{1})\) is called _\(C^{1}\)-locally discrete_ (abbreviated to _locally discrete_) if for any interval \(I\subset\mathbb{S}^{1},\) there is no sequence of distinct elements \(g_{n}\in G\) such that \(g_{n}|_{I}\to\operatorname{id}\) in the \(C^{1}\)-topology.

The local discreteness condition is implicitly or explicitly used in the results presented in [32, 33, 66, 67, 23, 24, 2, 3], particularly in the case of actions by subgroups of \(\operatorname{Diff}^{\omega}(\mathbb{S}^{1})\). In the analytic setting, local non-discreteness usually implies the existence of a local flow in the \(C^{1}\) local closure of the group action [66, 67], and it often implies that the group acts minimally on \(\mathbb{S}^{1}\) [54, Proposition 3.2].

_Remark 2.7_.: The authors of [2, 3] considered a slightly different definition of local discreteness: they only consider intervals \(I\) that have non-empty intersection with the group-invariant minimal set. Essentially all of the results in our paper that involve the discreteness assumption can be strengthened to the setting of [2, 3] without much difficulty, with some necessary modifications of the statements.

The following theorem can be viewed as a smooth analogue of the dimension formula for self-similar measures. For a closed interval \(I,\) we denote by \(C^{2}_{+}(I,I)\) the set of \(C^{2}\) orientation preserving maps on \(I\) that have no critical point. For a probability measure \(\mu\) supported on a group \(G,\) we denote by \(T_{\mu}\) the semigroup generated by \(\operatorname{supp}\mu.\)

**Theorem 2.8**.: _Let \(\mu\) be a finitely supported probability measure on \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) such that \(\operatorname{supp}\mu\) does not preserve any probability measure on \(\mathbb{S}^{1}\). Let \(\nu\) be an ergodic \(\mu\)-stationary measure on \(\mathbb{S}^{1}\). Then \(\nu\) is exact dimensional and_

1. _either_ \(|\lambda(\mu,\nu)|\geqslant h_{\mathrm{RW}}(\mu)\) _and_ \(\dim\nu=\frac{h_{\mathrm{RW}}(\mu)}{|\lambda(\mu,\nu)|},\)
2. _or there exist a closed interval_ \(J\subset\mathbb{S}^{1}\) _and two sequences of elements_ \(\{g_{n}\},\{f_{n}\}\subset T_{\mu}\) _with_ \(g_{n}\neq f_{n},\) _such that_ \(g_{n}^{-1}f_{n}\) _tends to_ \(\operatorname{id}\) _on_ \(J\) _in the_ \(C^{1}\)_-topology._

_Moreover, the conclusion holds when replacing \(\mathbb{S}^{1}\) with a closed interval \(I\) and \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) with \(C^{2}_{+}(I,I),\) where the elements \(g_{n}\) and \(f_{n}\) found in the second case additionally satisfy \(f_{n}(J)\subset g_{n}(I),\) which ensures that \(g_{n}^{-1}f_{n}\) is well-defined on \(J.\)_

_Remark 2.9_.: We can compare Theorem 2.8 with [38], where Hochman showed that for self-similar measures, either the dimension formula holds, or there exists a sequence of maps \(g_{n}^{-1}\circ f_{n}\) tending to \(\operatorname{id}\) super-exponentially fast.

As a corollary, we have the following result for locally discrete group actions on the circle.

**Theorem 2.10**.: _Let \(\mu\) be a finitely supported probability measure on \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1}).\) Assume that the group generated by \(\operatorname{supp}\mu\) is locally discrete and has no finite orbits. Then for every ergodic \(\mu\)-stationary measure \(\nu\),_ 1.
\(\nu\) _is exact dimensional._ 2. \(|\lambda(\mu,\nu)|\geqslant h_{\operatorname{RW}}(\mu)\)_._ 3. \(\dim\nu=\frac{h_{\operatorname{RW}}(\mu)}{|\lambda(\mu,\nu)|}.\)

In the real analytic setting, the statement becomes simpler if the group generated by \(\operatorname{supp}\mu\) admits an exceptional minimal set.

**Corollary 2.11**.: _Let \(\mu\) be a finitely supported probability measure on \(\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1}).\) Assume that the group generated by \(\operatorname{supp}\mu\) admits an exceptional minimal set. Then for every ergodic \(\mu\)-stationary measure \(\nu\),_

1. \(\nu\) _is exact dimensional,_
2. \(|\lambda(\mu,\nu)|>h_{\operatorname{RW}}(\mu)\)_,_
3. \(\dim\nu=\frac{h_{\operatorname{RW}}(\mu)}{|\lambda(\mu,\nu)|}<1.\)

### Variational principle for dimensions

By the definition of the Hausdorff dimension of a measure, for any stationary measure \(\nu\) we have \(\dim_{\mathrm{H}}\nu\leqslant\dim_{\mathrm{H}}(\operatorname{supp}\nu)\). It is natural to ask whether one can construct stationary measures approximating a given invariant set, with almost the same Hausdorff dimension. As we mentioned in the introduction, this type of variational principle has been proved for contracting IFS in [31]. The following theorem generalizes it to the setting of smooth group actions on the circle.

**Theorem 2.12**.: _Let \(G\subset\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup which does not preserve any probability measure and satisfies property \((\star)\) or \((\Lambda\star)\). Let \(\Lambda\) be the unique minimal set of \(G\), then_ \[\dim_{\mathrm{H}}\Lambda=\sup\left\{\dim_{\mathrm{H}}\nu:\begin{array}{l}\nu\text{ is an ergodic $\mu$-stationary measure on $\Lambda$},\\ \mu\text{ is a finitely supported probability measure on $G$}\end{array}\right\}. \tag{2.1}\]

The properties \((\star)\) and \((\Lambda\star)\) mentioned in the theorem were introduced by Deroin-Kleptsyn-Navas [23], and their precise definitions are given below. It is expected that these properties hold for most finitely generated subgroups \(G\subset\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) without finite orbits. Some further discussion is presented in Section 3.1.

**Definition 2.13**.: A point \(x\in\mathbb{S}^{1}\) is _non-expandable_ for the action of \(G\subset\operatorname{Diff}_{+}^{1}(\mathbb{S}^{1})\) if \(g^{\prime}(x)\leqslant 1\) for every \(g\in G.\) We denote by \(\operatorname{NE}=\operatorname{NE}(G)\) the set of non-expandable points and by \(G(\operatorname{NE})\) the \(G\)-orbit of \(\operatorname{NE}\).

**Definition 2.14**.: We say \(G\subset\operatorname{Diff}^{1}(\mathbb{S}^{1})\) satisfies _property_ \((\star)\) if it acts minimally (i.e., \(\Lambda=\mathbb{S}^{1}\)) and for every \(x\in\operatorname{NE}\), there exist \(g_{+},g_{-}\in G\) such that \(g_{+}(x)=g_{-}(x)=x\) and \(x\) is an isolated-from-the-right (resp. isolated-from-the-left) point of the set of fixed points \(\operatorname{Fix}(g_{+})\) (resp. \(\operatorname{Fix}(g_{-})\)).

**Definition 2.15**.: We say \(G\subset\operatorname{Diff}^{1}(\mathbb{S}^{1})\) satisfies _property_ \((\Lambda\star)\) if \(\Lambda\) is exceptional and for every \(x\in\operatorname{NE}\cap\Lambda\), there exist \(g_{+},g_{-}\in G\) such that \(g_{+}(x)=g_{-}(x)=x\) and \(x\) is an isolated-from-the-right (resp. isolated-from-the-left) point of the set of fixed points \(\operatorname{Fix}(g_{+})\) (resp. \(\operatorname{Fix}(g_{-})\)).
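To illustrate these definitions in a familiar case (this example is ours): for the minimal boundary action of the modular group \(\mathrm{PSL}(2,\mathbb{Z})\) on \(\mathbb{S}^{1}\cong\partial\mathbb{H}^{2}\), the fixed point \(x\) of a parabolic element \(g\) is non-expandable, since a direct computation with integer matrices gives \(h^{\prime}(x)\leqslant 1\) for every \(h\) in the group. Moreover, a parabolic Möbius transformation fixes exactly one point of the circle, so \(\operatorname{Fix}(g)=\{x\}\) and \(g_{+}=g_{-}=g\) witnesses property \((\star)\) at \(x\). This is the picture behind the analogy between orbits of non-expandable points and non-radial limit points mentioned in Section 1.3.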
The condition that \(\nu\) is supported on \(\Lambda\) in Theorem 2.12 is necessary even in the \(C^{\infty}\) setting; for a discussion, see Section 8.3. However, this condition can be removed in the real analytic setting, as the following result shows.

**Corollary 2.16**.: _Let \(G\subset\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) be a finitely generated subgroup which does not preserve any probability measure and satisfies property \((\star)\) or \((\Lambda\star)\). Let \(\Lambda\) be the unique minimal set of \(G\), then_ \[\dim_{\mathrm{H}}\Lambda=\sup\left\{\dim_{\mathrm{H}}\nu:\begin{array}{l}\nu\text{ is an ergodic $\mu$-stationary measure},\\ \mu\text{ is a finitely supported probability measure on }G\end{array}\right\}. \tag{2.2}\]

### Dynamical critical exponents

Let us first recall the definition of the dynamical critical exponent from Definition 1.1. That is, for every \(G\subset\mathrm{Diff}_{+}^{1}(\mathbb{S}^{1})\) without finite orbits, we denote by \(\Lambda\) the unique minimal set of \(G\). Then the _(\(C^{1}\)-)dynamical critical exponent_ of \(G\) is \[\delta(G)\coloneqq\lim_{\varepsilon\to 0^{+}}\limsup_{n\to+\infty}\frac{1}{n}\log\#\{g\in G:\exists x\in\Lambda,\,g^{\prime}|_{B(x,\varepsilon)}\geqslant 2^{-n}\}. \tag{2.3}\]

**Theorem 2.17**.: _Let \(G\subset\mathrm{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup without finite orbits and let \(\Lambda\) be its unique minimal set. We have the following:_

1. _If_ \(G\) _satisfies property_ \((\star)\) _or_ \((\Lambda\star)\)_, then_ \(\delta(G)\geqslant\dim_{\mathrm{H}}\Lambda\)_._
2. _If_ \(G\) _is locally discrete and virtually free, then_ \(\delta(G)\leqslant\dim_{\mathrm{H}}\Lambda\)_._

_In particular, if both assumptions hold, we have \(\delta(G)=\dim_{\mathrm{H}}\Lambda\)._

Once again, for the real analytic case, we have a simpler statement.

**Corollary 2.18**.: _Let \(G\subset\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) be a finitely generated subgroup without finite orbits. Let \(\Lambda\) be its unique minimal set and assume that \(G\) satisfies property \((\star)\) or \((\Lambda\star).\) Then_ \[\dim_{\mathrm{H}}\Lambda=\min\left\{1,\delta(G)\right\}.\]

One may ask whether the condition \(x\in\Lambda\) in (2.3) can be removed, or whether the condition \(g^{\prime}|_{B(x,\varepsilon)}\geqslant 2^{-n}\) can be replaced by \(g^{\prime}\geqslant 2^{-n}\) in (2.3). In general, these modifications do not lead to an identity between the critical exponent and the dimension of the minimal set; see the counterexamples and related discussion in Section 12.3. However, the inclusion of the minimal set \(\Lambda\) in the definition of a dynamical critical exponent may not always be desirable, as it restricts the applicability of the concept. This raises the question of whether there exists an a priori criterion, via a variant of the dynamical critical exponent, to determine whether a group action is minimal. To address this, we generalize the concept of dynamical critical exponents to arbitrary subsets of \(\mathbb{S}^{1}\), obtaining a more flexible tool for analyzing the dynamics of group actions.

**Definition 2.19**.: Let \(G\subset\mathrm{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a subgroup.
For a subset \(\Delta\subset\mathbb{S}^{1}\), we define the _dynamical critical exponent of \(G\) on \(\Delta\)_ as \[\delta(G,\Delta)\coloneqq\lim_{\varepsilon\to 0}\limsup_{n\to\infty}\frac{1}{n}\log\#\left\{g\in G:\exists x\in\Delta,g^{\prime}|_{B(x,\varepsilon)}\geqslant 2^{-n}\right\}.\]

_Remark 2.20_.: The discussion in Section 11 reveals that, in the real analytic case, the dynamical critical exponent at a point outside of \(\Lambda\) can be viewed as the exponent of convergence of a certain Poincaré series. Likewise, the dynamical critical exponent \(\delta(G)\) can be interpreted as the exponent of convergence of certain series. This interpretation is utilized in the proof of Theorem E, as described in Section 10.2.

If \(G\) has no finite orbit and \(\Lambda\) is its unique minimal set, then \(\delta(G,\Lambda)=\delta(G)\). Moreover, taking \(\Delta=\mathbb{S}^{1}\), the dynamical critical exponent \(\delta(G,\mathbb{S}^{1})\) is a quantity which can be computed a priori. The value of \(\delta(G,\mathbb{S}^{1})\) provides a criterion for minimality of the \(G\)-action.

**Theorem 2.21**.: _Let \(G\subset\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) be a finitely generated subgroup with an exceptional minimal set \(\Lambda.\) Then_ \[\dim_{\mathrm{H}}\Lambda=\delta(G)\leqslant\delta(G,\mathbb{S}^{1})<1.\] _More precisely, we have_ \[\delta(G,\mathbb{S}^{1})=\max\left\{\dim_{\mathrm{H}}\Lambda,\ \sup_{x\in\mathbb{S}^{1}}\frac{k(x)}{k(x)+1}\right\}=\max\left\{\dim_{\mathrm{H}}\Lambda,\ \max_{x\in\mathbb{S}^{1}\setminus\Lambda}\frac{k(x)}{k(x)+1}\right\},\] _where \(k(x)\) is defined in Definition 11.9 and relates to the multiplicity at \(x.\)_

**Corollary 2.22**.: _Let \(G\subset\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) be a finitely generated subgroup without finite orbits. Assume that \(\delta(G,\mathbb{S}^{1})\geqslant 1.\) Then \(G\) acts minimally on the circle._

_Remark 2.23_.: If \(G\) acts minimally on \(\mathbb{S}^{1}\) and satisfies property (\(\star\)), then by Corollary 2.18, we have \(\delta(G,\mathbb{S}^{1})=\delta(G)\geqslant 1\). Since property (\(\star\)) is expected to hold, the condition \(\delta(G,\mathbb{S}^{1})\geqslant 1\) is expected to be an equivalent criterion for the minimality of the \(G\)-action.

To extend our study to a general subgroup of \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1}),\) we introduce a definition of the \(C^{2}\)-dynamical critical exponent that considers only those elements with uniform distortion control on an interval.

**Definition 2.24**.: Let \(G\subset\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup without finite orbits. Let \(\Lambda\) be its unique minimal set. We define the \(C^{2}\)-_dynamical critical exponent_ of \(G\) as \[\delta_{2}(G)\coloneqq\lim_{\varepsilon\to 0^{+}}\lim_{C\to+\infty}\limsup_{n\to+\infty}\frac{1}{n}\log\#\left\{\,g\in G:\exists x\in\Lambda,g^{\prime}|_{B(x,\varepsilon)}\geqslant 2^{-n},\widetilde{\varkappa}(g,B(x,\varepsilon))\leqslant C\,\right\},\] where \(\widetilde{\varkappa}(g,I)\coloneqq\sup_{x\in I}|(\log g^{\prime})^{\prime}(x)|\) controls the distortion, as explained in Section 3.3.

Without any assumption on the structure of \(G,\) we obtain the following dimension identity.

**Theorem 2.25**.: _Let \(G\subset\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated, locally discrete subgroup without finite orbits._
Let \(\Lambda\) be the unique minimal set and assume that \(G\) satisfies property (\(\star\)) or (\(\Lambda\star\)). Then_ \[\dim_{\mathrm{H}}\Lambda=\delta_{2}(G).\]

### Conformal measures

To study the limit sets of Fuchsian groups, Patterson introduced in [62] a measure on the limit set satisfying a specific homogeneity condition, now known as the Patterson-Sullivan measure. Later, in [75], Sullivan introduced a general definition of conformal measures to study the fractal geometry of limit sets of conformal dynamics; moreover, he constructed a conformal measure on the Julia set of a rational map. Let us first review the general definition of conformal measures.

**Definition 2.26**.: Let \(G\) be a group of conformal transformations. A measure \(\nu\) on the underlying space is said to be _conformal_ with exponent \(\delta\) (or simply \(\delta\)-conformal), if for every Borel set \(A\) and for every map \(g\in G\) one has \[\nu(g(A))=\int_{A}|g^{\prime}(x)|^{\delta}\,\mathrm{d}\nu(x).\]

In the context of group actions on the circle, the concept of a conformal measure is also a powerful tool for studying exceptional minimal sets. It is unclear whether there exists a conformal measure supported on the exceptional minimal set for general group actions, especially when it comes to atomless conformal measures. Although the existence of an atomless \(\delta\)-conformal measure is unknown in general, the authors of [23] proved the uniqueness of \(\delta\) and the estimate \(0<\delta<1\) under the assumption that an atomless \(\delta\)-conformal measure exists. In the case that \(\mathrm{NE}(G)=\varnothing\), it is still possible to show the existence of an atomless \(\delta\)-conformal measure on the exceptional minimal set \(\Lambda\) using techniques from [75], even if \(G\) is only assumed to be a subgroup of \(\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\); furthermore, \(\delta\) equals the Hausdorff dimension of \(\Lambda\) and \(0<\delta<1\). In particular, this implies that \(0<\dim_{\mathrm{H}}\Lambda<1\).

In this paper, we establish an identity between the conformal exponent \(\delta\) and the Hausdorff dimension \(\dim_{\mathrm{H}}\Lambda\) whenever an atomless \(\delta\)-conformal measure is supported on \(\Lambda\). This result extends Sullivan's result [75] to the case where \(\mathrm{NE}\neq\varnothing\), and also directly implies the uniqueness of \(\delta\) shown in [23]. Combining this with [23, Theorem F], we conclude that the existence of an atomless conformal measure implies \(\dim_{\mathrm{H}}\Lambda<1\).

**Theorem 2.27**.: _Let \(G\subset\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) be a finitely generated subgroup with an exceptional minimal set \(\Lambda.\) Assume that \(G\) satisfies property \((\Lambda\star).\) If there exists an atomless \(\delta\)-conformal measure supported on \(\Lambda\), then \(\delta=\dim_{\mathrm{H}}\Lambda.\)_

Using dynamical critical exponents, we can construct atomless conformal measures on exceptional minimal sets in certain cases. One application is to estimate the Hausdorff dimension of an exceptional minimal set (see Theorem F). Another case where an atomless \(\delta\)-conformal measure exists is given by the following result.

**Theorem 2.28**.: _Let \(G\subset\mathrm{Diff}^{\omega}_{+}(\mathbb{S}^{1})\) be a finitely generated subgroup with an exceptional minimal set \(\Lambda.\) Assume that the stabilizer of any point \(x\in\mathbb{S}^{1}\setminus\Lambda\) in \(G\) is trivial.
Then there exists an atomless \(\delta\)-conformal measure supported on \(\Lambda\) with \(\delta=\dim_{\mathrm{H}}\Lambda.\)_

_Remark 2.29_.: Notice that the triviality condition on the stabilizers in Theorem 2.28 is quite mild; for example, it holds for any non-elementary Fuchsian group.

### Approximation by uniformly hyperbolic subsystems

We are interested in the existence of _large_ uniformly hyperbolic subsystems of a smooth random walk on the circle, which encompasses two main problems. The first is to construct a subsystem with the strongest hyperbolicity, i.e. a perfect pingpong pair, which we mentioned in Section 1.3. A perfect pingpong pair has very clear dynamics, which allows us to predict the behavior of every point. However, the subsystem generated by such a pair of elements may not be _large_ enough for our purposes. The second problem is how large a uniformly hyperbolic subsystem can be. There are three fundamental quantities characterizing a random walk: the entropy, the Lyapunov exponent, and the dimension of the stationary measure. In this paper, we aim to approximate all three quantities simultaneously by a uniformly hyperbolic subsystem. One version was presented in the previous section as Theorem J. Here, we present another version where, under the condition of local discreteness, we can approximate the original random walk using maps satisfying a separation property, simultaneously approximating the random walk entropy, the Lyapunov exponent and the dimension of the stationary measure.

**Theorem 2.30**.: _Let \(\mu\) be a finitely supported probability measure on \(\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) such that \(\operatorname{supp}\mu\) preserves no probability measure on \(\mathbb{S}^{1}.\) Let \(\mathcal{S}=\mathrm{supp}\,\mu\) and assume that the group generated by \(\mathcal{S}\) is locally discrete. Let \(\nu\) be an ergodic \(\mu\)-stationary measure and \(\lambda=\lambda(\mu,\nu)<0.\) Then for every sufficiently small \(\varepsilon>0\), there exist a positive integer \(N\) and a subset \(\Gamma^{*}\subset\mathcal{S}^{*N}\) such that:_

1. _The cardinality of_ \(\Gamma^{*}\) _is at least_ \(2^{N(h_{\mathrm{RW}}(\mu)-\varepsilon)}.\)
2. _There exists an open interval_ \(U\subset\mathbb{S}^{1}\) _which is strictly preserved by every_ \(f\in\Gamma^{*}.\)
3. _The closures of_ \(f(U)\)_,_ \(f\in\Gamma^{*},\) _are pairwise disjoint._
4. _For every_ \(f\in\Gamma^{*}\) _and_ \(x\in U,\) \(f^{\prime}(x)\in[2^{N(\lambda-\varepsilon)},2^{N(\lambda+\varepsilon)}].\)
5. _The semigroup generated by_ \(\Gamma^{*}\) _has a unique minimal set_ \(K\subset U\) _with Hausdorff dimension at least_ \(\dim\nu-\varepsilon.\)

_In particular, if \(\mathcal{S}\) generates a free semigroup, then the cardinality of \(\Gamma^{*}\) is at least \(2^{N(H(\mu)-\varepsilon)}.\)_

_Remark 2.31_.: The discreteness condition is necessary because conditions (1), (3), and (4) imply that \(-\lambda\leqslant h_{\mathrm{RW}}(\mu)\). Furthermore, \(h_{\mathrm{RW}}(\mu)\) cannot be replaced by \(H(\mu)\), because it is possible for \(\#\mathcal{S}^{*N}\) to be much less than \(2^{NH(\mu)}.\)

### List of proofs

We provide a list of the locations of the proofs of all the results in the first two sections.

\begin{tabular}{l l} \hline \hline Section 5 & Theorem 2.4. \\ Section 6 & Theorems 2.8 and 2.10. \\ Section 7 & Theorem 2.30 and Theorem J. \\ Section 8 & Theorem 2.12, Corollary 2.16, Theorems B and C. \\ Section 9 & Theorems 2.17 and 2.25, Corollary 2.18 and Theorem D. \\ Section 10 & Theorem E.
\\ Section 11 & Theorems 2.21, 2.27 and 2.28, Corollary 2.22, Theorems F and G. \\ Section 12 & Corollary 2.11, Theorems A and H, Corollaries I, K and L. \\ \hline \hline \end{tabular}

## 3 Preliminaries

### Group actions on the circle

We list some useful results about groups acting on the circle by diffeomorphisms. The following lemma is well-known; for a proof, see for example [2, Lemma 7.12].

**Lemma 3.1**.: _Let \(G\subset\mathrm{Homeo}(\mathbb{S}^{1})\) be a subgroup without finite orbits on \(\mathbb{S}^{1}\). If \(G^{\prime}\) is a subgroup of finite index in \(G\), then the unique minimal set of \(G\) is also the unique minimal set of \(G^{\prime}\)._

The following is also a well-known lemma. We include a proof for the reader's convenience.

**Lemma 3.2**.: _Let \(G\subset\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) be a finitely generated subgroup. If \(G\) does not act minimally on \(\mathbb{S}^{1}\) and preserves a probability measure, then \(G\) has a finite orbit on \(\mathbb{S}^{1}\)._

Proof.: Assume that \(G\) does not act minimally and preserves a probability measure \(\nu\) on \(\mathbb{S}^{1}\). By Denjoy's theorem, every element \(f\) of \(G\) has rational rotation number \(\rho(f)\). Then for all \(x\in\mathbb{S}^{1}\), we have \(\nu([x,f(x)[)=\rho(f)\). Choose a point \(x_{0}\in\mathrm{supp}\,\nu\). If \(N\rho(f)\in\mathbb{Z}\) for some integer \(N>0\), then \(f\) preserves the set \(\left\{\,y\in\mathrm{supp}\,\nu:\nu([x_{0},y[)\in\frac{1}{N}\mathbb{Z}\,\right\}\), which is finite. Taking \(N\) to be a common multiple of the denominators of the rotation numbers of a finite set of generators of \(G\), this set is preserved by \(G\), yielding a finite orbit.

We recall the concepts of non-expandable points and properties (\(\star\)) and (\(\Lambda\star\)) mentioned in Section 2.3, and present some consequences of these properties here.

**Theorem 3.3** ([23, Theorems A, D]).: _Let \(G\subset\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) be a finitely generated subgroup with minimal set \(\Lambda\). Assume that \(G\) satisfies property (\(\star\)) or (\(\Lambda\star\)). Then_

1. _The set_ \(\Lambda\cap\mathrm{NE}\) _is finite._
2. _For each_ \(x\in\Lambda\setminus G(\mathrm{NE}),\) _the set of derivatives_ \(\{g^{\prime}(x):g\in G\}\) _is unbounded._

In [24], Deroin-Kleptsyn-Navas established property (\(\star\)) or (\(\Lambda\star\)) for free group actions by real analytic circle diffeomorphisms.

**Theorem 3.4** ([24, Main Theorem]).: _Let \(G\) be a finitely generated subgroup of \(\mathrm{Diff}^{\omega}_{+}(\mathbb{S}^{1})\)._

1. _If_ \(G\) _is free of rank_ \(\geqslant 2\) _and acts minimally on_ \(\mathbb{S}^{1},\) _then it satisfies property_ \((\star).\)
2. _If_ \(G\) _acts on the circle with an exceptional minimal set_ \(\Lambda,\) _then it satisfies property_ \((\Lambda\star).\)

We will also make use of the following theorem due to Hector [36] (see also [58]).

**Theorem 3.5** ([36]).: _If \(G\) is a subgroup of \(\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) having an exceptional minimal set, then the stabilizer of any point in \(\mathbb{S}^{1}\) is either trivial or infinite cyclic._

The following is a consequence of Hector's theorem. For a proof, see, for example, [54, Proposition 3.2].
**Corollary 3.6**.: _Let \(G\subset\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) be a finitely generated, locally non-discrete group without finite orbits. Then \(G\) acts minimally on \(\mathbb{S}^{1}.\)_

### Dimension and entropy

Let \(E\) be a subset of a metric space \(X\). For \(\alpha>0,\) the _\(\alpha\)-Hausdorff outer measure_ is defined as \[H^{\alpha}(E)\coloneqq\lim_{\rho\to 0^{+}}H^{\alpha}_{\rho}(E),\] where \[H^{\alpha}_{\rho}(E)\coloneqq\inf\left\{\sum_{n=1}^{\infty}(\operatorname{diam}U_{n})^{\alpha}:(U_{n})\text{ is a countable cover of }E,\ \operatorname{diam}U_{n}<\rho\right\}.\] Then there exists a unique constant \(\alpha_{0}\geqslant 0\) such that \(H^{\alpha}(E)=\infty\) for every \(\alpha<\alpha_{0}\) and \(H^{\alpha}(E)=0\) for every \(\alpha>\alpha_{0}.\) The constant \(\alpha_{0}\) is called the _Hausdorff dimension_ of \(E,\) denoted by \(\dim_{\mathrm{H}}E.\) Recalling the Hausdorff dimension of a measure \(\nu\) (Definition 2.1), we have \(\dim_{\mathrm{H}}\nu\leqslant\dim_{\mathrm{H}}\operatorname{supp}\nu.\)

The following lemma is a crucial technical tool used throughout this paper. It will be applied to several different dynamically defined covers to obtain dynamically good elements; these elements, in turn, generate clear dynamics that help us make estimates. The lemma also encodes a Vitali-type covering argument, which we mentioned in Section 1.3.

**Lemma 3.7**.: _Let \(C>0\) and \(\lambda>0\) be parameters. For \(n\in\mathbb{N}\), let \(\mathcal{E}_{n}\) be a collection of intervals in \(\mathbb{S}^{1}\) of length at most \(C2^{-\lambda n}\). Let \(E=\limsup_{n\to+\infty}\bigcup_{J\in\mathcal{E}_{n}}J\). Let \(\widetilde{\mathcal{E}}_{n}\subset\mathcal{E}_{n}\) be a maximal subcollection consisting of pairwise disjoint intervals. Then_ \[\limsup_{n\to+\infty}\frac{1}{n}\log\#\widetilde{\mathcal{E}}_{n}\geqslant\lambda\dim_{\mathrm{H}}E.\]

Proof.: For every \(J\in\mathcal{E}_{n}\setminus\widetilde{\mathcal{E}}_{n},\) by the maximality of \(\widetilde{\mathcal{E}}_{n},\) there is \(I\in\widetilde{\mathcal{E}}_{n}\) such that \(J\cap I\neq\varnothing.\) Then \(J\subset\widetilde{I},\) where \(\widetilde{I}\) denotes the interval with the same center as \(I\) and of length \(C2^{-\lambda n+2}\). It follows that \[\bigcup_{J\in\mathcal{E}_{n}}J\subset\bigcup_{I\in\widetilde{\mathcal{E}}_{n}}\widetilde{I}.\] Given \(\rho>0,\) for any \(N\geqslant\lambda^{-1}(|\log\rho|+\log C+2),\) we have \[E\subset\bigcup_{n\geqslant N}\bigcup_{I\in\widetilde{\mathcal{E}}_{n}}\widetilde{I},\] which is a cover of \(E\) by intervals of length at most \(\rho\). Thus for any \(s>0\) and \(\rho>0,\) \[H^{s}_{\rho}(E)\leqslant\sum_{n\geqslant N}\sum_{I\in\widetilde{\mathcal{E}}_{n}}|\widetilde{I}|^{s}\ll_{C,s}\sum_{n\geqslant N}2^{-s\lambda n}\#\widetilde{\mathcal{E}}_{n}.\] The right-hand side is the tail of a convergent series whenever \(-s\lambda+\beta<0,\) where \(\beta=\limsup_{n\to+\infty}\frac{1}{n}\log\#\widetilde{\mathcal{E}}_{n}\). In this case, \(H^{s}(E)=0\).
Hence \(\dim_{\mathrm{H}}E\leqslant\frac{\beta}{\lambda},\) which proves the lemma.

Let \(\nu\) be a Borel probability measure on \(X\) and \(\mathcal{A}\) a finite measurable partition of \(X.\) The _Shannon entropy_ of \(\nu\) with respect to \(\mathcal{A}\) is \[H(\nu,\mathcal{A})\coloneqq-\sum_{A\in\mathcal{A}}\nu(A)\log\nu(A).\] Note that \(H(\nu,\mathcal{A})\leqslant\log\#\mathcal{A}.\) Let \(\mathcal{B}\) be another finite measurable partition of \(X;\) the _conditional entropy_ is \[H(\nu,\mathcal{A}|\mathcal{B})\coloneqq-\sum_{B\in\mathcal{B}}\sum_{A\in\mathcal{A}}\nu(A\cap B)\log\frac{\nu(A\cap B)}{\nu(B)}.\] Similarly, an upper bound for the conditional entropy is given by \[H(\nu,\mathcal{A}|\mathcal{B})\leqslant\max_{B\in\mathcal{B}}\log\#\left\{A\in\mathcal{A}:\nu(A\cap B)>0\right\}. \tag{3.1}\] If \(\mu\) is a probability measure with finite support, we use \(H(\mu)\) and \(H(\mu|\mathcal{B})\) to denote \(H(\mu,\mathcal{A})\) and \(H(\mu,\mathcal{A}|\mathcal{B}),\) respectively, where \(\mathcal{A}\) is the discrete partition. This notation is used only for finitely supported probability measures on \(\mathrm{Diff}^{1}_{+}(\mathbb{S}^{1})\) in this paper. Fix a finite measurable partition \(\mathcal{A};\) the function \(\nu\mapsto H(\nu,\mathcal{A})\) is concave and almost convex: for any probability vector \(\alpha=(\alpha_{1},\cdots,\alpha_{k})\) and any Borel probability measures \(\nu_{1},\ldots,\nu_{k},\) we have \[\sum_{i=1}^{k}\alpha_{i}H(\nu_{i},\mathcal{A})\leqslant H\left(\sum_{i=1}^{k}\alpha_{i}\nu_{i},\mathcal{A}\right)\leqslant\sum_{i=1}^{k}\alpha_{i}H(\nu_{i},\mathcal{A})+H(\alpha). \tag{3.2}\]

We often consider the entropy with respect to the dyadic partitions of \(\mathbb{S}^{1}\). We identify \(\mathbb{S}^{1}\) with \([0,1[\). For a positive integer \(n,\) let \[\mathcal{D}_{n}\coloneqq\left\{\left[\frac{k}{2^{n}},\frac{k+1}{2^{n}}\right[:0\leqslant k\leqslant 2^{n}-1\right\}.\] Moreover, for a positive number \(t,\) let \(\mathcal{D}_{t}=\mathcal{D}_{\lfloor t\rfloor}.\) If the sequence \((\frac{1}{n}H(\nu,\mathcal{D}_{n}))_{n}\) converges and \[\lim_{n\to+\infty}\frac{1}{n}H(\nu,\mathcal{D}_{n})=\alpha,\] we say that \(\nu\) has _entropy dimension_ \(\alpha\). It is shown in [79, Theorem 4.4] and [29, Theorem 1.3] that if \(\nu\) is exact dimensional of dimension \(\alpha\), then it has entropy dimension \(\alpha\).

### Distortion estimates

This subsection recalls useful distortion-control tools that make it possible to control iterations of \(C^{2}\) maps on a small piece using only information at one point. These techniques date back to Denjoy [21], Schwartz [72] and Sacksteder [70], among others; later a more concise viewpoint was proposed by Sullivan [75]. Further discussions on this topic can be found in [22, 23, 24]. Let \(f\in\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) and let \(I\subset\mathbb{S}^{1}\) be an interval; denote \[\varkappa(f,I)\coloneqq\sup_{x,y\in I}|\log f^{\prime}(x)-\log f^{\prime}(y)|.\] We also consider the Lipschitz norm of \(\log f^{\prime}\) on \(I,\) denoted by \[\widetilde{\varkappa}(f,I)=\left\|\log f^{\prime}\right\|_{\mathrm{Lip}(I)}\coloneqq\sup_{x\neq y\in I}\frac{|\log f^{\prime}(x)-\log f^{\prime}(y)|}{d(x,y)}.\] Following [23], \(\varkappa\) is called the _distortion coefficient_, and \(\widetilde{\varkappa}\) corresponds to the _distortion norm_ \(\eta\) of that paper, defined as \(\eta(f,I)=\widetilde{\varkappa}(f^{-1},fI)\).
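For example, if \(f\) is affine in a chart containing \(I\), then \(\log f^{\prime}\) is constant on \(I\) and \(\varkappa(f,I)=\widetilde{\varkappa}(f,I)=0\); both quantities measure the failure of \(f\) to be affine on \(I\). For a \(C^{2}\) map, the mean value theorem gives \(\widetilde{\varkappa}(f,I)=\sup_{x\in I}|f^{\prime\prime}(x)/f^{\prime}(x)|\).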
Obviously, \(\varkappa(f,I)\leqslant\widetilde{\varkappa}(f,I)|I|.\) Moreover, there are basic estimates for the distortion of compositions, namely \[\varkappa(fg,I)\leqslant\varkappa(g,I)+\varkappa(f,g(I)),\] \[\widetilde{\varkappa}(fg,I)\leqslant\widetilde{\varkappa}(g,I)+\|g^{\prime}\|_{\infty}\cdot\widetilde{\varkappa}(f,g(I)). \tag{3.3}\] Now we fix a finite subset \(\mathcal{S}\subset\mathrm{Diff}_{+}^{2}(\mathbb{S}^{1})\) and let \[M=M(\mathcal{S})\coloneqq\max\left\{\max_{g\in\mathcal{S}}\|g^{\prime}\|_{\infty},\max_{g\in\mathcal{S}}\|\log g^{\prime}\|_{\mathrm{Lip}}\right\}.\] Take \(f\in\mathcal{S}^{*n}\) and write \(f=g_{n}\cdots g_{2}g_{1}\) where \(g_{i}\in\mathcal{S}.\) Let \[f_{0}=\mathrm{id},\quad f_{k}=g_{k}\cdots g_{2}g_{1},\ \forall 1\leqslant k\leqslant n.\] Then for every interval \(I\subset\mathbb{S}^{1}\), we have \[\varkappa(f,I)\leqslant\sum_{k=0}^{n-1}\varkappa(g_{k+1},f_{k}(I))\leqslant M\sum_{k=0}^{n-1}|f_{k}(I)|, \tag{3.4}\] and \[\widetilde{\varkappa}(f,I)\leqslant\sum_{k=0}^{n-1}\|f^{\prime}_{k}\|_{\infty}\cdot\widetilde{\varkappa}(g_{k+1},f_{k}(I))\leqslant M^{n-1}\sum_{k=0}^{n-1}\widetilde{\varkappa}(g_{k+1},f_{k}(I))\leqslant M^{n}\sum_{k=0}^{n-1}|f_{k}(I)|. \tag{3.5}\] Let \(x_{0}\in I\), and let \(I_{0}=I\) and \(I_{k}=f_{k}(I_{0})\) for \(k=1,\ldots,n\). We have the following inequality [24, Corollary 2.3]: \[\forall k=1,\ldots,n,\quad\left|\log\frac{f^{\prime}_{k}(x_{0})|I_{0}|}{|I_{k}|}\right|\leqslant\varkappa(f_{k},I_{0})\leqslant M\sum_{i=0}^{k-1}|I_{i}|, \tag{3.6}\] and, summing the exponentials of (3.6), \[\log\left(\sum_{k=0}^{n-1}|I_{k}|\right)\leqslant\log|I|+\log\left(\sum_{k=0}^{n-1}f^{\prime}_{k}(x_{0})\right)+M\sum_{k=0}^{n-2}|I_{k}|.\]

**Proposition 3.8**.: _Fix \(x_{0}\in\mathbb{S}^{1}\) and \(f\in\mathcal{S}^{*n}.\) Define \(M=M(\mathcal{S})\) and \((f_{k})_{1\leqslant k\leqslant n}\) as above and let \(S=\sum_{k=0}^{n-1}f^{\prime}_{k}(x_{0}).\) Then for every \(\delta\leqslant(2MS)^{-1},\) we have_ \[\varkappa(f,B(x_{0},\delta))\leqslant 2MS\delta, \tag{3.7}\] \[\widetilde{\varkappa}(f^{-1},fB(x_{0},\delta))\leqslant 4MS/f^{\prime}(x_{0}). \tag{3.8}\]

Proof.: The first inequality (3.7) is [24, Proposition 2.4]. It remains to prove (3.8). We first upgrade (3.7) to allow us to replace \(B(x_{0},\delta)\) in (3.7) by any subinterval \(I\). Let \(I\subset B(x_{0},\delta)\) be an interval. Since \(\varkappa(f_{k},B(x_{0},\delta))\leqslant 1\) by (3.7), we have \(|f_{k}I|\leqslant 2|I|f^{\prime}_{k}(x_{0}).\) In view of (3.4), \[\varkappa(f,I)\leqslant M\sum_{k=0}^{n-1}|f_{k}I|\leqslant 2MS|I|\leqslant 1.\] Combining this with \(\varkappa(f,B(x_{0},\delta))\leqslant 2MS\delta\leqslant 1,\) we have \[\widetilde{\varkappa}(f^{-1},fB(x_{0},\delta))=\sup_{I\subset B(x_{0},\delta)}\frac{\varkappa(f^{-1},fI)}{|fI|}\leqslant\sup_{I\subset B(x_{0},\delta)}\frac{2\varkappa(f,I)}{f^{\prime}(x_{0})|I|}\leqslant\frac{4MS}{f^{\prime}(x_{0})}.\]

### Iterated Function Systems

Let \(\{g_{i}:X\to X\}_{i=1}^{l}\) be a family of contracting maps on a nonempty closed set \(X\subset\mathbb{R}^{d}\). We say that \(\Phi=\{g_{i}\}_{i=1}^{l}\) is a (contracting) _iterated function system_ (IFS) on \(X\). Hutchinson [42] showed that there is a unique non-empty compact set \(\Lambda\subset X\) such that \(\Lambda=\bigcup_{i}g_{i}\Lambda\). The set \(\Lambda\) is called the _attractor_ of \(\Phi\).
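For example, the two affine contractions \(g_{1}(x)=x/3\) and \(g_{2}(x)=x/3+2/3\) on \(X=[0,1]\) form an IFS whose attractor is the middle-thirds Cantor set; note that \(g_{1}((0,1))=(0,1/3)\) and \(g_{2}((0,1))=(2/3,1)\) are disjoint subsets of \((0,1)\).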
If there exists a non-empty open set \(V\) such that the sets \(g_{i}(V)\) are contained in \(V\) and pairwise disjoint, then we say \(\Phi\) satisfies the _open set condition_.

## 4 Random walks on \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\)

### Preliminaries on random walks

Cocycles. Let \(\mathcal{S}\subset\operatorname{Diff}_{+}^{1}(\mathbb{S}^{1})\) be a finite set. We equip \(\mathcal{S}\) with the discrete topology and let \(\Sigma=\mathcal{S}^{\mathbb{Z}}\) be the product space, which is compact. We will also consider the spaces \(\Sigma^{+}=\mathcal{S}^{\mathbb{Z}_{\geqslant 0}}\) and \(\Sigma^{-}=\mathcal{S}^{\mathbb{Z}_{<0}}.\) Elements in \(\Sigma,\Sigma^{+},\Sigma^{-}\) are usually denoted by \(\omega,\omega^{+},\omega^{-}\), respectively. The maps \(\pi^{+}:\Sigma\to\Sigma^{+}\) and \(\pi^{-}:\Sigma\to\Sigma^{-}\) denote the natural projections. We use the notation \(\sigma\) for the left shift map on \(\Sigma\) or \(\Sigma^{+}\). For an element \(\omega\in\Sigma\), we write \(\omega=(\cdots,f_{-2},f_{-1},f_{0},f_{1},\cdots)\) where \(f_{n}\in\mathcal{S},n\in\mathbb{Z}.\) Denote \[f_{\omega}^{n}\coloneqq\begin{cases}f_{n-1}\cdots f_{1}f_{0}&\text{if }n\geqslant 0;\\ f_{n}^{-1}\cdots f_{-2}^{-1}f_{-1}^{-1}&\text{if }n<0.\end{cases}\] We abbreviate \(f_{\omega}^{1}\) to \(f_{\omega}.\) This induces an invertible cocycle over \(\sigma:\Sigma\to\Sigma\) defined as \[F:\Sigma\times\mathbb{S}^{1}\to\Sigma\times\mathbb{S}^{1},\quad(\omega,x)\mapsto(\sigma\omega,f_{\omega}x).\] The map \(F\) satisfies \(F^{n}(\omega,x)=(\sigma^{n}\omega,f_{\omega}^{n}x)\) for every \(n\in\mathbb{Z}.\) We denote by \(P\) and \(Q\) the natural projections of \(\Sigma\times\mathbb{S}^{1}\) down to \(\Sigma\) and \(\mathbb{S}^{1}\), respectively. We also use the notation \(f_{\omega^{+}}\) and \(f_{\omega^{+}}^{n}\), defined similarly for \(\omega^{+}\in\Sigma^{+}\) and \(n\geqslant 0\), which induces a forward cocycle \[F^{+}:\Sigma^{+}\times\mathbb{S}^{1}\to\Sigma^{+}\times\mathbb{S}^{1},\quad(\omega^{+},x)\mapsto(\sigma\omega^{+},f_{\omega^{+}}x).\] Then \(F\) is semi-conjugate to \(F^{+}\) via the natural projection, \((\pi^{+},\operatorname{id})\circ F=F^{+}\circ(\pi^{+},\operatorname{id}).\)

Random walks. Let \(\mu\) be a finitely supported probability measure on \(\operatorname{Diff}_{+}^{1}(\mathbb{S}^{1})\). We take \(\mathcal{S}=\operatorname{supp}\mu\), which induces a cocycle \(F:\Sigma\times\mathbb{S}^{1}\to\Sigma\times\mathbb{S}^{1}\) as above. Equip \(\Sigma\) with its Borel \(\sigma\)-algebra and let \(\mathbf{P}=\mu^{\mathbb{Z}}\) be the product probability measure on \(\Sigma.\) Similarly, the spaces \(\Sigma^{+}\) and \(\Sigma^{-}\) are equipped with the probability measures \(\mathbf{P}^{+}=\mu^{\mathbb{Z}_{\geqslant 0}}\) and \(\mathbf{P}^{-}=\mu^{\mathbb{Z}_{<0}}\), respectively. In this sense, the cocycle \(F:\Sigma\times\mathbb{S}^{1}\to\Sigma\times\mathbb{S}^{1}\), equipped with the probability measure \(\mathbf{P}\) on \(\Sigma\), is called the _random walk_ induced by \(\mu\). Now we recall the definition of stationary measures from Section 2.1. Stationary measures correspond to the invariant measures of the forward cocycle \(F^{+}\) in the following way.

**Proposition 4.1** ([78, Propositions 5.5 and 5.13]).: _Let \(\nu\) be a probability measure on \(\mathbb{S}^{1},\) then_

1. \(\nu\) _is_ \(\mu\)_-stationary if and only if_ \(\mathbf{P}^{+}\times\nu\) _is_ \(F^{+}\)_-invariant._
2.
\(\nu\) _is an ergodic \(\mu\)-stationary measure if and only if_ \(\mathbf{P}^{+}\times\nu\) _is an ergodic \(F^{+}\)-invariant measure._

Lyapunov Exponents. For an element \((\omega^{+},x)\in\Sigma^{+}\times\mathbb{S}^{1},\) recall that the _Lyapunov exponent_ at \((\omega^{+},x)\) is defined (when the limit exists) as \[\lambda(\omega^{+},x)\coloneqq\lim_{n\to+\infty}\frac{1}{n}\log(f_{\omega^{+}}^{n})^{\prime}(x).\] If \(\nu\) is an ergodic \(\mu\)-stationary measure, then \(\mathbf{P}^{+}\times\nu\) is an ergodic \(F^{+}\)-invariant measure. By Birkhoff's ergodic theorem, \[\lambda(\omega^{+},x)=\iint\log f^{\prime}_{\omega^{+}}(y)\,\mathrm{d}\mathbf{P}^{+}(\omega^{+})\,\mathrm{d}\nu(y)=\iint\log f^{\prime}(y)\,\mathrm{d}\mu(f)\,\mathrm{d}\nu(y)\] for \(\mathbf{P}^{+}\times\nu\)-almost every \((\omega^{+},x).\) This coincides with the Lyapunov exponent \(\lambda(\mu,\nu)\) of Definition 2.2. In general, the Lyapunov exponent can be defined similarly for every ergodic \(F^{+}\)-invariant probability measure on \(\Sigma^{+}\times\mathbb{S}^{1},\) or every ergodic \(F\)-invariant probability measure on \(\Sigma\times\mathbb{S}^{1}.\) Specifically, let \(m\) be an \(F\)-invariant probability measure on \(\Sigma\times\mathbb{S}^{1};\) the Lyapunov exponent of \((\omega,x)\) is given by \[\lambda(\omega,x)\coloneqq\lim_{n\to+\infty}\frac{1}{n}\log(f_{\omega}^{n})^{\prime}(x).\] Then for \(m\)-almost every \((\omega,x),\) the limit exists and coincides with the backward limit \[\lim_{n\to-\infty}\frac{1}{n}\log(f_{\omega}^{n})^{\prime}(x).\] Moreover, if \(m\) is ergodic, then \(\lambda(\omega,x)\) is constant almost everywhere; we denote this constant by \(\lambda(m)\). We also remark that if \(m\) is an ergodic \(F\)-invariant probability measure and \(m^{+}=(\pi^{+},\mathrm{id})_{*}m\), which is an ergodic \(F^{+}\)-invariant probability measure on \(\Sigma^{+}\times\mathbb{S}^{1},\) then \(\lambda(m)=\lambda(m^{+}).\)

### Generalities on random transformations

In this subsection, we state some basic results about random transformations that are necessary for our discussion in the sequel. For a Borel probability measure \(m\) on \(\Sigma\times\mathbb{S}^{1}\) with \(P_{*}m=\mathbf{P}\), write \[\mathrm{d}m(\omega,x)=\mathrm{d}\mathbf{P}(\omega)\,\mathrm{d}m_{\omega}(x)\] for its disintegration along \(P:\Sigma\times\mathbb{S}^{1}\to\Sigma\) in the sense of Rokhlin. The \(F\)-invariance of \(m\) translates into the following equivariance property: for \(\mathbf{P}\)-almost every \(\omega\in\Sigma,\) \[m_{\sigma(\omega)}=(f_{\omega})_{*}m_{\omega}. \tag{4.1}\] Let \(\mathcal{P}=\mathcal{P}_{F,\mathbf{P}}\) denote the set of \(F\)-invariant probability measures \(m\) on \(\Sigma\times\mathbb{S}^{1}\) such that \(P_{*}m=\mathbf{P}\). Let \(\mathcal{P}^{u}\subset\mathcal{P}\) (resp. \(\mathcal{P}^{s}\)) denote the subset of all \(u\)-states (resp. \(s\)-states). Recall that \(m\in\mathcal{P}\) is a _\(u\)-state_ (resp. _\(s\)-state_) if its disintegration \(\omega\mapsto m_{\omega}\) factors through \(\pi^{-}:\Sigma\to\Sigma^{-}\) (resp. through \(\pi^{+}:\Sigma\to\Sigma^{+}\)). For later convenience, we denote by \(\mu^{+}\) the measure \(\mu\) and by \(\mu^{-}\) the pushforward of \(\mu\) under the map \(f\mapsto f^{-1}.\)

**Proposition 4.2** ([78, Proposition 5.17]).: _The map \(m\mapsto Q_{*}m\) is a bijection between the convex set \(\mathcal{P}^{u}\) (resp. \(\mathcal{P}^{s}\)) and the convex set of \(\mu^{+}\)-stationary (resp.
\(\mu^{-}\)-stationary) measures._

Note that this bijection is clearly linear, and hence preserves ergodicity. The following fact is a special case of the Avila-Viana invariance principle [5], which generalizes an earlier result of Ledrappier [47] in the linear case.

**Theorem 4.3** ([5, Theorem B]).: _Let \(m\) be an \(F\)-invariant probability measure on \(\Sigma\times\mathbb{S}^{1}\) with \(P_{*}m=\mathbf{P}\). If \(\lambda(\omega,x)\leqslant 0\) holds for \(m\)-almost every \((\omega,x)\) then \(m\in\mathcal{P}^{u}\). Dually, if \(\lambda(\omega,x)\geqslant 0\) holds for \(m\)-almost every \((\omega,x)\) then \(m\in\mathcal{P}^{s}\)._

Consequently, every ergodic \(m\in\mathcal{P}\) is either a \(u\)-state or an \(s\)-state. It follows that \(\mathcal{P}\) is the convex hull of \(\mathcal{P}^{u}\cup\mathcal{P}^{s}\). We also have the following.

**Corollary 4.4**.: _Assume that \(\operatorname{supp}\mu\) does not preserve any probability measure on \(\mathbb{S}^{1}.\) Then for every ergodic \(u\)-state \(m\), \(\lambda(m)<0.\) For every ergodic \(s\)-state \(m,\) \(\lambda(m)>0.\)_

Proof.: Let \(m\) be an ergodic \(u\)-state. If \(\lambda(m)\geqslant 0,\) then by Theorem 4.3, \(m\) is also an \(s\)-state. The conditional measure \(m_{\omega}\) then factors through both \(\pi^{-}\) and \(\pi^{+}\), hence is constant in \(\omega\); this constant is a Borel probability measure on \(\mathbb{S}^{1}\) invariant under every \(f\in\operatorname{supp}\mu,\) a contradiction. The case of \(s\)-states is dual.

We will also need a result of Ruelle-Wilkinson [69] about invertible cocycles with negative Lyapunov exponents.

**Theorem 4.5** ([69, Theorem II]).: _Let \(m\) be an ergodic \(F\)-invariant probability measure on \(\Sigma\times\mathbb{S}^{1}\). Assume that \(\lambda(m)<0.\) Then there exists a subset \(X\subset\Sigma\times\mathbb{S}^{1}\) and a positive integer \(k\) such that_

1. \(m(X)=1\) _and_
2. _for every_ \((\omega,x)\in X,\) \(\#(X\cap\{\omega\}\times\mathbb{S}^{1})=k.\)

By considering the inverse cocycle, we see that the same holds if we assume \(\lambda(m)>0.\) Using ergodicity and the equivariance (4.1), we obtain the following fact about the disintegration of \(m\).

**Corollary 4.6**.: _If \(m\in\mathcal{P}\) is ergodic and \(\lambda(m)\neq 0\), then there exists a positive integer \(k\) such that for \(\mathbf{P}\)-almost every \(\omega\in\Sigma\), \(m_{\omega}\) is a uniform probability measure on a set of \(k\) elements._

The next result is due to Malicet [52]. For a probability measure \(\mu\) on \(\operatorname{Homeo}(\mathbb{S}^{1}),\) we denote by \(T_{\mu}\) the semigroup generated by \(\operatorname{supp}(\mu).\)

**Theorem 4.7** ([52, Theorem B]).: _Let \(\mu\) be a probability measure on \(\operatorname{Homeo}(\mathbb{S}^{1})\) whose support \(\operatorname{supp}(\mu)\) is finite and does not preserve any Borel probability measure on \(\mathbb{S}^{1}.\) Then there are only finitely many ergodic \(\mu\)-stationary measures on \(\mathbb{S}^{1}.\) Their topological supports are pairwise disjoint and are exactly the \(T_{\mu}\)-minimal sets._

### The structure of random walks on \(\operatorname{Diff}^{2}_{+}(\mathbb{S}^{1})\)

Let \(\mu\) be a finitely supported probability measure on \(\operatorname{Diff}^{2}_{+}(\mathbb{S}^{1})\) without common invariant probability measures on \(\mathbb{S}^{1}.\) In order to study the structure of the random walk induced by \(\mu\), we first establish some relations among all the ergodic \(\mu^{\pm}\)-stationary measures. One basic question is whether the number \(k\) in Corollary 4.6 is the same among different \(\mu^{\pm}\)-stationary measures.
Questions of this type can be answered by constructing a dynamically defined transitive permutation of all the ergodic \(\mu^{\pm}\)-stationary measures, see Lemma 4.11. The construction is inspired by Hertz-Hertz-Tahzibi-Ures [68], where each invariant measure with positive center exponent is associated to one with negative center exponent by considering the extremal points of the Pesin center manifolds. Using this permutation, we can show that these stationary measures share similar properties; together they form a highly structured dynamics on \(\mathbb{S}^{1}.\)

Let \(\Theta^{s}\subset\Sigma\times\mathbb{S}^{1}\) denote \[\Theta^{s}=\left\{(\omega,x)\in\Sigma\times\mathbb{S}^{1}:\limsup_{n\to+\infty}\frac{1}{n}\log(f^{n}_{\omega})^{\prime}(x)<0\right\}.\] It is clear that \(\Theta^{s}\) is \(F\)-invariant. For any \((\omega,x)\in\Theta^{s},\) \((f^{n}_{\omega})^{\prime}(x)\to 0\) exponentially fast, hence \(\sum_{n\geqslant 0}(f^{n}_{\omega})^{\prime}(x)<+\infty\). In view of the distortion estimates (Proposition 3.8), there are constants \(\delta,c,C>0\) such that \[\forall y\in B(x,\delta),\,\forall n\geqslant 0,\quad(f^{n}_{\omega})^{\prime}(y)\leqslant C2^{-cn}. \tag{4.2}\] Therefore, for any \(\omega\in\Sigma,\) the slice \(W^{s}(\omega)=\left\{x\in\mathbb{S}^{1}:(\omega,x)\in\Theta^{s}\right\}\) is open in \(\mathbb{S}^{1}\). Moreover \(W^{s}(\omega)\neq\mathbb{S}^{1}\): otherwise, by compactness, we could cover \(\mathbb{S}^{1}\) by finitely many open balls with the property (4.2), leading to \((f^{n}_{\omega})^{\prime}(y)\leqslant C2^{-cn}\) uniformly in \(y\in\mathbb{S}^{1},\) which is absurd since \(\int_{\mathbb{S}^{1}}(f^{n}_{\omega})^{\prime}(y)\,\mathrm{d}y=1\) for every \(n\).

For \((\omega,x)\in\Theta^{s}\), let \(W^{s}(\omega,x)\) denote the connected component of \(W^{s}(\omega)\) containing \(x\). It is an open interval of \(\mathbb{S}^{1}\). Using (4.2), we see that \[W^{s}(\omega,x)=\left\{y\in\mathbb{S}^{1}:\limsup_{n\to+\infty}\frac{1}{n}\log d(f^{n}_{\omega}(y),f^{n}_{\omega}(x))<0\right\}.\] Let \(R^{s}(\omega,x)=(\omega,y)\) where \(y\) is the right end-point of \(W^{s}(\omega,x)\). Clearly \(R^{s}(\omega,x)\not\in\Theta^{s}\). Thus, we get a map \(R^{s}\colon\Theta^{s}\to\Sigma\times\mathbb{S}^{1}\setminus\Theta^{s}\). From the \(F\)-invariance of \(\Theta^{s}\), we find that \(R^{s}\) commutes with \(F\).

Let \(m\in\mathcal{P}^{u}\). By Corollary 4.4, we have \(m(\Theta^{s})=1\). Thus, \(R^{s}_{*}m\) is a well defined Borel probability measure on \(\Sigma\times\mathbb{S}^{1}\). Moreover, \(R^{s}_{*}m\in\mathcal{P}\) since \(R^{s}\circ F=F\circ R^{s}\) and \(P\circ R^{s}=P\). Note that \(R^{s}_{*}m(\Sigma\times\mathbb{S}^{1}\setminus\Theta^{s})=1\), implying that for \(R^{s}_{*}m\)-almost every \((\omega,x)\), \(\lambda(\omega,x)\geqslant 0\). By Theorem 4.3, \(R^{s}_{*}m\in\mathcal{P}^{s}\). To summarize, \(m\mapsto R^{s}_{*}m\) is a map \(R^{s}_{*}\colon\mathcal{P}^{u}\to\mathcal{P}^{s}\). Similarly, we define the left-end point map \(L^{s}\colon\Theta^{s}\to\Sigma\times\mathbb{S}^{1}\setminus\Theta^{s}\). In a dual manner (considering the inverse cocycle \(F^{-1}\)), define \(\Theta^{u}\), \(W^{u}(\omega,x)\) for \((\omega,x)\in\Theta^{u}\), \(R^{u}\colon\Theta^{u}\to\Sigma\times\mathbb{S}^{1}\setminus\Theta^{u}\) and \(R^{u}_{*}\colon\mathcal{P}^{s}\to\mathcal{P}^{u}\).

**Lemma 4.8**.: _Let \(m\in\mathcal{P}^{u}\)._
For \(\mathbf{P}\)-almost every \(\omega\in\Sigma\), the map \(x\mapsto Q\circ R^{s}(\omega,x)\) is injective on \(\operatorname{supp}m_{\omega}\)._

Proof.: By Corollary 4.6, for \(\mathbf{P}\)-almost every \(\omega\in\Sigma\), \(\operatorname{supp}m_{\omega}\) is finite. It follows that there exists \(c>0\) such that the set

\[\Sigma^{\prime}=\left\{\omega:d(x,x^{\prime})\geqslant c,\;\forall x\neq x^{\prime}\in\operatorname{supp}m_{\omega}\right\}\]

has positive \(\mathbf{P}\)-measure. By the ergodicity of \(\sigma\), for \(\mathbf{P}\)-almost every \(\omega\in\Sigma\), there are infinitely many \(n\in\mathbb{N}\) such that \(\sigma^{n}\omega\in\Sigma^{\prime}\). For such \(\omega\), for every \(x\neq x^{\prime}\in\operatorname{supp}m_{\omega}\), we have \(d\big{(}f^{n}_{\omega}(x),f^{n}_{\omega}(x^{\prime})\big{)}\not\to 0\), since \(f^{n}_{\omega}(\operatorname{supp}m_{\omega})=\operatorname{supp}m_{\sigma^{n}\omega}\) by equivariance. In particular, \(W^{s}(\omega,x)\neq W^{s}(\omega,x^{\prime})\), so these two components have different right end-points. 

For \(\omega\in\Sigma\), let

\[\Pi(\omega)=\bigcup_{m\in\mathcal{P}^{u}}\operatorname{supp}m_{\omega}\quad\text{and}\quad\Xi(\omega)=\bigcup_{m\in\mathcal{P}^{s}}\operatorname{supp}m_{\omega}.\]

Clearly, the union can be taken over ergodic \(u\)-states (resp. \(s\)-states) and it still defines the same set. Since there are only finitely many ergodic states (Theorem 4.7) and each \(\operatorname{supp}m_{\omega}\) is finite (Corollary 4.6), for \(\mathbf{P}\)-almost every \(\omega\in\Sigma\), \(\Pi(\omega)\) and \(\Xi(\omega)\) are finite sets.

**Lemma 4.9**.: _For \(\mathbf{P}\)-almost every \(\omega\in\Sigma\), we have \(\#\Pi(\omega)=\#\Xi(\omega)\) and_

\[\Pi(\omega)=\mathbb{S}^{1}\setminus W^{u}(\omega)\quad\text{and}\quad\Xi(\omega)=\mathbb{S}^{1}\setminus W^{s}(\omega).\]

Proof.: Applying Lemma 4.8 to a suitable convex combination of ergodic \(u\)-states and remembering \(R^{s}_{*}m\in\mathcal{P}^{s}\), we see that for \(\mathbf{P}\)-almost every \(\omega\in\Sigma\), the map \(x\mapsto Q\circ R^{s}(\omega,x)\) is injective from \(\Pi(\omega)\) to \(\Xi(\omega)\). Similarly, \(x\mapsto Q\circ L^{s}(\omega,x)\) is injective from \(\Pi(\omega)\) to \(\Xi(\omega)\), and \(x\mapsto Q\circ R^{u}(\omega,x)\) and \(x\mapsto Q\circ L^{u}(\omega,x)\) are injective maps from \(\Xi(\omega)\) to \(\Pi(\omega)\), proving the claim about cardinality. Moreover, this shows that the \(W^{s}(\omega,x)\), \(x\in\Pi(\omega)\), are disjoint open intervals with end-points in \(\Xi(\omega)\). The total number of end-points being equal to the number of intervals forces these intervals and these points to partition the circle. Hence \(\Xi(\omega)=\mathbb{S}^{1}\setminus W^{s}(\omega)\); the claim for \(\Pi(\omega)\) is proved in the same way. 

**Lemma 4.10**.: _The numbers of ergodic \(u\)-states and \(s\)-states are equal. If \(m\in\mathcal{P}^{u}\) is ergodic then so is \(R^{s}_{*}m\in\mathcal{P}^{s}\)._

Proof.: Let \(\{m_{1},\cdots,m_{d}\}\subset\mathcal{P}^{u}\) be the set of ergodic \(u\)-states. Since they are mutually singular, for \(\mathbf{P}\)-almost every \(\omega\in\Sigma\), \((\operatorname{supp}(m_{i})_{\omega})_{1\leqslant i\leqslant d}\) form a partition of \(\Pi(\omega)\). Set \(m^{\prime}_{i}=R^{s}_{*}m_{i}\). Then, by Lemma 4.8, the \((\operatorname{supp}(m^{\prime}_{i})_{\omega})_{1\leqslant i\leqslant d}\) are pairwise disjoint. It follows that the number of ergodic \(s\)-states, denoted by \(d^{\prime}\), is at least \(d\). The equality \(d^{\prime}=d\) holds if and only if every \(m^{\prime}_{i}\) is ergodic. But, using \(R^{u}\), we see that indeed \(d\geqslant d^{\prime}.\) Hence, \(d=d^{\prime}\) and the lemma is proved. 

**Lemma 4.11**.: _The maps \(R^{s}_{*}\) and \(R^{u}_{*}\) together induce a transitive permutation of the ergodic measures in \(\mathcal{P}\).
In particular, there is an integer \(r\geqslant 1\) such that for every ergodic \(m\in\mathcal{P}\), \(\#\operatorname{supp}m_{\omega}=r\) for \(\mathbf{P}\)-almost every \(\omega\in\Sigma\)._

Proof.: The proof of Lemma 4.9 also shows the following for \(\mathbf{P}\)-almost every \(\omega\in\Sigma\). Write \(\Pi(\omega)=\{x_{1},\ldots,x_{k}\}\) with \(x_{1},\ldots,x_{k}\) arranged in cyclic order and set \(y_{j}=Q\circ R^{s}(\omega,x_{j})\) so that \(y_{j}\in\Xi(\omega)\) for each \(j\). Then \(x_{1},y_{1},\ldots,x_{k},y_{k}\) are arranged in cyclic order. Moreover \(R^{u}(\omega,y_{j})=(\omega,x_{j+1\text{ mod }k})\). In particular, \(R^{s}\) and \(R^{u}\) together induce a transitive permutation of the points in \(\Pi(\omega)\cup\Xi(\omega)\). The claim of the lemma then follows from Lemma 4.10 together with the observation that two ergodic measures \(m,m^{\prime}\) in \(\mathcal{P}\) are equal if and only if \(m_{\omega}\) and \(m^{\prime}_{\omega}\) have common atoms for a set of \(\omega\) with positive \(\mathbf{P}\)-measure. The "in particular" part follows from the fact that \(R^{s}_{*}\) and \(R^{u}_{*}\) preserve \(\#\operatorname{supp}m_{\omega}\), a direct consequence of Lemma 4.8. 

Let \(d\) denote the number of ergodic \(u\)-states. We write \([d]=\{0,\ldots,d-1\}\), and addition in \([d]\) is understood modulo \(d\). Fix an arbitrary ergodic \(u\)-state \(m_{0}^{+}\in\mathcal{P}^{u}\). By Lemma 4.11, applying alternately \(R^{s}_{*}\) and \(R^{u}_{*}\), we find in the sequence

\[m_{0}^{+}\mapsto m_{0}^{-}\mapsto m_{1}^{+}\mapsto\cdots\mapsto m_{d-1}^{+}\mapsto m_{d-1}^{-}\mapsto m_{0}^{+}\]

all ergodic \(u\)-states \(m_{i}^{+}\) and all ergodic \(s\)-states \(m_{i}^{-}\), \(i\in[d]\). For \(i\in[d]\) and \(\omega\in\Sigma\), define

\[\Pi(\omega,i)=\operatorname{supp}(m_{i}^{+})_{\omega}\quad\text{and}\quad\Xi(\omega,i)=\operatorname{supp}(m_{i}^{-})_{\omega}.\]

Then for each \(i\in[d]\), the map \(x\mapsto Q\circ(R^{u}\circ R^{s})^{d}(\omega,x)\) preserves \(\Pi(\omega,i)\) and maps every element of \(\Pi(\omega,i)\) to the next element of \(\Pi(\omega,i)\) on its right.

**The space \(\mathbb{S}_{k}\).** For later convenience, we define the space \(\mathbb{S}_{k}\) to be the family of \(k\)-element subsets of \(\mathbb{S}^{1}.\) Equip \(\mathbb{S}_{k}\) with the metric

\[\underline{d}(\underline{x},\underline{y})=\inf_{\tau\in\mathfrak{S}_{k}}\sum_{i=1}^{k}d(x_{i},y_{\tau(i)})\]

where \(\underline{x}=\left\{x_{1},\cdots,x_{k}\right\},\underline{y}=\left\{y_{1},\cdots,y_{k}\right\}\) and \(\mathfrak{S}_{k}\) is the symmetric group of \([k].\) Every element in \(\operatorname{Homeo}(\mathbb{S}^{1})\) can be naturally regarded as an element in \(\operatorname{Homeo}(\mathbb{S}_{k})\). For every \(\underline{x}\in\mathbb{S}_{k}\), we define the probability measure \(u_{\underline{x}}\) to be the uniform measure on \(\underline{x}\). Let \(\mathcal{M}(\mathbb{S}^{1})\) be the space of Radon measures on \(\mathbb{S}^{1}\) equipped with the weak* topology. Then \(u_{\bullet}:\underline{x}\mapsto u_{\underline{x}}\) is a topological embedding. In fact, it is a homothety onto its image if we equip \(\mathcal{M}(\mathbb{S}^{1})\) with the Wasserstein metric

\[\operatorname{dist}(\eta,\zeta)=\sup_{\phi}\left|\int\phi\,\mathrm{d}\eta-\int\phi\,\mathrm{d}\zeta\right|,\qquad\eta,\zeta\in\mathcal{M}(\mathbb{S}^{1}),\]

where the supremum is taken over all \(1\)-Lipschitz functions \(\phi\colon\mathbb{S}^{1}\to\mathbb{R}\).

**Structure of random walks on \(\operatorname{Diff}^{2}_{+}(\mathbb{S}^{1}).\)** The following is immediate from our discussion.
**Theorem 4.12** (Structure of random walks I: construction).: _Assume that \(\operatorname{supp}\mu\) does not preserve any probability measure on \(\mathbb{S}^{1}.\) Then there exist two positive integers \(d,r\) and two measurable maps \(\Pi:\Sigma\times[d]\to\mathbb{S}_{r}\) and \(\Xi:\Sigma\times[d]\to\mathbb{S}_{r}\) such that_

1. _Let_ \(m_{i}^{\pm}\) _be probability measures on_ \(\Sigma\times\mathbb{S}^{1}\) _defined by_ \[\mathrm{d}m_{i}^{+}=\mathrm{d}\mathbf{P}(\omega)\mathrm{d}u_{\Pi(\omega,i)},\quad\mathrm{d}m_{i}^{-}=\mathrm{d}\mathbf{P}(\omega)\mathrm{d}u_{\Xi(\omega,i)};\] _then the_ \(m_{i}^{+}\)_'s (resp._ \(m_{i}^{-}\)_'s) are exactly the ergodic_ \(u\)_-states (resp._ \(s\)_-states) that project to_ \(\mathbf{P}\)_._
2. _Let_ \(\nu_{i}^{\pm}\) _be probability measures on_ \(\mathbb{S}^{1}\) _defined by_ \(\nu_{i}^{+}=Q_{*}m_{i}^{+}\) _and_ \(\nu_{i}^{-}=Q_{*}m_{i}^{-}\)_. They can also be expressed as_ \[\nu_{i}^{+}=\int u_{\Pi(\omega,i)}\,\mathrm{d}\mathbf{P}(\omega),\quad\nu_{i}^{-}=\int u_{\Xi(\omega,i)}\,\mathrm{d}\mathbf{P}(\omega).\] _Then the_ \(\nu_{i}^{+}\)_'s (resp._ \(\nu_{i}^{-}\)_'s) are exactly the ergodic_ \(\mu^{+}\)_-stationary (resp._ \(\mu^{-}\)_-stationary) measures._

_Moreover, for \(\mathbf{P}\)-almost every \(\omega,\) we have_

3. \(\Pi(\omega,i)\) _only depends on_ \((\pi^{-}\omega,i)\) _and_ \(\Xi(\omega,i)\) _only depends on_ \((\pi^{+}\omega,i).\)
4. _Cocycle invariance:_ \(\forall n\in\mathbb{Z},\) _we have_ \(f_{\omega}^{n}\Pi(\omega,i)=\Pi(\sigma^{n}\omega,i)\) _and_ \(f_{\omega}^{n}\Xi(\omega,i)=\Xi(\sigma^{n}\omega,i).\)
5. _Define the subsets of_ \(\mathbb{S}^{1}\) \[\Pi(\omega)=\bigcup_{i\in[d]}\Pi(\omega,i),\quad\Xi(\omega)=\bigcup_{i\in[d]}\Xi(\omega,i).\] _Then_ \(\Pi(\omega)\cup\Xi(\omega)\) _is made up of_ \(2dr\) _different points. Let_ \(x_{0},x_{1},\cdots,x_{2dr-1}\) _denote these points, arranged in cyclic order on_ \(\mathbb{S}^{1}\) _so that_ \(x_{0}\in\Pi(\omega,0).\) _Then_ \[x_{2j}\in\Pi(\omega,j\ \mathrm{mod}\ d),\ \ x_{2j+1}\in\Xi(\omega,j\ \mathrm{mod}\ d),\quad\forall 0\leqslant j\leqslant dr-1.\]

_Furthermore, the constants \(d,r\) are topological invariants of the semigroup \(T_{\mu}\). That is, if \(\mu,\mu^{\prime}\) are two probability measures on \(\mathrm{Diff}_{+}^{2}(\mathbb{S}^{1})\) satisfying the assumption of this theorem and such that there exists \(h\in\mathrm{Homeo}(\mathbb{S}^{1})\) with \(T_{\mu}=hT_{\mu^{\prime}}h^{-1},\) then the constants \(d,r\) are the same for \(\mu\) and \(\mu^{\prime}.\)_

Proof.: Statements (1)--(5) summarize properties we have already discussed. The proof that \(d,r\) are topological invariants is deferred to Section 4.4, where we show that \(dr\) is the least number of pairs of topologically hyperbolic fixed points of an element of the semigroup \(T_{\mu};\) for this we need the construction of hyperbolic elements in Section 7.1. 

_Remark 4.13_.: Since \(\Pi,\Xi\) only depend on \(\pi^{-}\omega,\pi^{+}\omega,\) we will sometimes use the notation \(\Pi(\omega^{-})\) and \(\Xi(\omega^{+})\) for \(\omega^{-}\in\Sigma^{-}\) and \(\omega^{+}\in\Sigma^{+},\) respectively.

_Remark 4.14_.: The map \(\omega\mapsto\Pi(\omega,i)\) plays the role of the Furstenberg boundary map. It can also be defined as follows (see [78, Lemma 5.22]). For \(\mathbf{P}\)-almost every \(\omega\in\Sigma,\) we have

\[(f_{\sigma^{-n}\omega}^{n})_{*}\nu_{i}^{+}\stackrel{{\mathrm{weak}^{*}}}{{\longrightarrow}}u_{\Pi(\omega,i)}, \tag{4.3}\]

as \(n\to+\infty\).
An effective version of (4.3) will be shown in Section 4.6.

For \(\omega\in\Sigma\) and \(i\in[d],\) define

\[W^{s}(\omega,i)=\bigcup_{x\in\Pi(\omega,i)}W^{s}(\omega,x),\quad W^{u}(\omega,i)=\bigcup_{x\in\Xi(\omega,i)}W^{u}(\omega,x). \tag{4.4}\]

The following summarizes useful properties of \(W^{s}(\omega,i)\) and \(W^{u}(\omega,i).\)

**Theorem 4.15** (Structure of random walks II: dynamics).: _Assume that \(\mathrm{supp}\,\mu\) does not preserve any probability measure on \(\mathbb{S}^{1}.\) Let \(\nu_{i}^{+}\) and \(\nu_{i}^{-}\) be the ergodic stationary measures defined in the previous theorem corresponding to \(\Pi(\,\boldsymbol{\cdot}\,,i)\) and \(\Xi(\,\boldsymbol{\cdot}\,,i).\) Then for \(\mathbf{P}\)-almost every \(\omega\) and every \(i\in[d],\) there exist subsets \(W^{s}(\omega,i)\subset\mathbb{S}^{1},W^{u}(\omega,i)\subset\mathbb{S}^{1}\) satisfying_

1. _Each_ \(W^{s}(\omega,i),W^{u}(\omega,i)\) _is a disjoint union of_ \(r\) _open intervals._
2. _For every_ \(i\in[d],\) \(\Pi(\omega,i)\subset W^{s}(\omega,i)\subset\mathbb{S}^{1}\setminus\Xi(\omega)\) _and_ \(\Xi(\omega,i)\subset W^{u}(\omega,i)\subset\mathbb{S}^{1}\setminus\Pi(\omega).\)
3. _Let_ \(W^{s}(\omega)=\bigcup_{i\in[d]}W^{s}(\omega,i),\) _which is a disjoint union, and_ \(W^{s}(\omega)=\mathbb{S}^{1}\setminus\Xi(\omega).\) _The same holds for_ \(W^{u}(\omega)=\bigcup_{i\in[d]}W^{u}(\omega,i):\) _it is a disjoint union and_ \(W^{u}(\omega)=\mathbb{S}^{1}\setminus\Pi(\omega).\)
4. _Cocycle invariance:_ \(\forall n\in\mathbb{Z},\) _we have_ \(f_{\omega}^{n}W^{s}(\omega,i)=W^{s}(\sigma^{n}\omega,i)\) _and_ \(f_{\omega}^{n}W^{u}(\omega,i)=W^{u}(\sigma^{n}\omega,i).\)
5. _For every connected component_ \(I\) _of_ \(W^{s}(\omega,i)\) _and_ \(J\) _of_ \(W^{u}(\omega,i)\)_, we have_ \(\nu_{i}^{+}(I)=1/r\) _and_ \(\nu_{i}^{-}(J)=1/r.\) _In particular,_ \[\nu_{i}^{+}(W^{s}(\omega,i))=1,\quad\nu_{i}^{-}(W^{u}(\omega,i))=1,\quad\forall i\in[d].\]
6. _For every_ \(x\in\Pi(\omega,i)\) _and_ \(y\in\Xi(\omega,i),\) \[\lim_{n\to\pm\infty}\frac{1}{n}\log(f_{\omega}^{n})^{\prime}(x)=\lambda_{i}^{+},\quad\lim_{n\to\pm\infty}\frac{1}{n}\log(f_{\omega}^{n})^{\prime}(y)=\lambda_{i}^{-},\] _where_ \(\lambda_{i}^{+}<0\) _and_ \(\lambda_{i}^{-}>0\) _are the Lyapunov exponents corresponding to_ \(\nu_{i}^{+}\) _and_ \(\nu_{i}^{-}\)_._
7. _For all closed subintervals_ \(I\subset W^{s}(\omega)\) _and_ \(J\subset W^{u}(\omega),\) _as_ \(n\to+\infty,\) _we have_ \[|f_{\omega}^{n}I|\to 0,\quad|f_{\omega}^{-n}J|\to 0,\quad\text{exponentially fast.}\]

Proof.: All items except (5) are immediate. We give the argument for \(\nu_{i}^{+};\) the case of \(\nu_{i}^{-}\) is symmetric. The boundary points of \(W^{s}(\omega,i)\) only depend on \(\pi^{+}(\omega),\) and hence \(W^{s}(\omega,i)\) only depends on \(\pi^{+}(\omega).\) Fix \(\omega^{+}\in\Sigma^{+}\) and take a connected component \(I\) of \(W^{s}(\omega^{+},i).\) Then for every \(\omega\) with \(\pi^{+}\omega=\omega^{+},\) we have \(\#(I\cap\Pi(\omega,i))=1.\) Since \(\Pi(\omega,i)\) only depends on \(\pi^{-}\omega,\) it follows that

\[\nu_{i}^{+}(I)=\frac{1}{r}\int_{\Sigma^{-}}\#\left\{I\cap\Pi(\omega^{-},i)\right\}\mathrm{d}\mathbf{P}^{-}(\omega^{-})=\frac{1}{r}.\]

_Example 4.16_.: The figure below illustrates the dynamics of the random walk with \(d=r=2\), where \(n\) is a positive integer and \(\omega^{\prime}=\sigma^{n}\omega\). Red points denote the points in \(\Pi(\,\cdot\,)\), and blue points denote the points in \(\Xi(\,\cdot\,)\). The black arcs map to black arcs with some contraction.
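To make Example 4.16 and the convergence (4.3) more tangible, here is a minimal numerical sketch; it is not part of the argument, and the choice of generators is purely an illustrative assumption. We take two matrices in \(\operatorname{SL}(2,\mathbb{R})\) acting on the circle of unit directions (the double cover of the projective line), a setting where the attracting sets are expected to come in antipodal pairs, i.e. \(d=1\) and \(r=2\) in the notation of Theorem 4.12. Applying a long backward composition \(f^{n}_{\sigma^{-n}\omega}\) to a grid of initial angles should collapse the grid onto the two points of \(\Pi(\omega)\), as predicted by (4.3).

```python
import numpy as np

rng = np.random.default_rng(0)

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# Two hyperbolic matrices in SL(2,R) with transverse axes (an illustrative assumption)
A = np.array([[2.0, 0.0], [0.0, 0.5]])
GENS = [A, rot(1.0) @ A @ rot(-1.0)]

def act(M, theta):
    """Action of M on the circle of unit directions: theta -> arg(M (cos t, sin t))."""
    w = M @ np.stack([np.cos(theta), np.sin(theta)])
    return np.arctan2(w[1], w[0])

n = 50
word = [GENS[i] for i in rng.integers(0, 2, size=n)]  # letters omega_{-1}, ..., omega_{-n}

thetas = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
images = thetas.copy()
# backward composition f_{omega_{-1}} o ... o f_{omega_{-n}}: apply the last letter first
for M in reversed(word):
    images = act(M, images)

# crude clustering: arcs of image angles separated by gaps larger than 0.5
imgs = np.sort(np.mod(images, 2.0 * np.pi))
gaps = np.diff(np.append(imgs, imgs[0] + 2.0 * np.pi))
print("clusters of image angles:", int(np.sum(gaps > 0.5)))  # expected: 2, an antipodal pair
```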
For convenience in later discussions, we single out the "typical" words in this system.

**Definition 4.17**.: We say \(\omega\in\Sigma\) is _regular for the random walk_ if

1. it satisfies all the conditions in the previous two theorems, and
2. for every \(i\in[d],\) the pair \((\omega,x)\) is Birkhoff regular for \(m_{i}^{+}\) for every \(x\in\Pi(\omega,i)\), and \((\omega,y)\) is Birkhoff regular for \(m_{i}^{-}\) for every \(y\in\Xi(\omega,i)\).

The previous two theorems show that the words \(\omega\) regular for the random walk form a full \(\mathbf{P}\)-measure subset of \(\Sigma\).

### Basic properties of stationary measures

In the rest of this section, we focus on the random walk induced by a finitely supported probability measure \(\mu\) on \(\operatorname{Diff}^{2}_{+}(\mathbb{S}^{1})\) without common invariant probability measures on \(\mathbb{S}^{1}.\) We follow the notation of the last subsection. Recall that \(T_{\mu}\) denotes the semigroup generated by \(\operatorname{supp}\mu\). We begin with some basic properties of \(\mu\)-stationary measures.

**Lemma 4.18**.: _Stationary measures are atomless, hence continuous._

Proof.: Otherwise, let \(c>0\) be the maximal weight of an atom of \(\nu\). By stationarity, the set of atoms of weight exactly \(c\) is finite and invariant under every element of \(\operatorname{supp}\mu\). The uniform measure on this finite orbit is then an invariant probability measure for \(\operatorname{supp}\mu\), a contradiction. 

For \(c>0\), a set on \(\mathbb{S}^{1}\) is said to be \(c\)_-separated_ if its elements are pairwise at distance \(>c\) from each other.

**Lemma 4.19**.: _There exists a constant \(c>0\) depending only on \(\mu\), such that for every \(\omega\) regular for the random walk, \(\Pi(\omega)\) and \(\Xi(\omega)\) are \(c\)-separated._

Proof.: By Lemma 4.18 and the compactness of \(\mathbb{S}^{1}\), we can take \(c>0\) such that for every interval \(I\) on \(\mathbb{S}^{1}\) with length \(|I|\leqslant c\), it holds that

\[\forall i\in[d],\quad\nu_{i}^{+}(I)<\frac{1}{r}.\]

Two consecutive points in \(\Xi(\omega)\) bound an open interval \(I\), which is a connected component of \(W^{s}(\omega,i)\) for some \(i\in[d]\). By Theorem 4.15(5), \(\nu_{i}^{+}(I)=1/r\). Hence \(|I|>c\). We deduce that \(\Xi(\omega)\) is \(c\)-separated. The proof for \(\Pi(\omega)\) is similar. 

As a corollary of this lemma, the number of fixed points of a hyperbolic element of the semigroup \(T_{\mu}\) is bounded from below. This will be helpful in showing that \(d,r\) are topological invariants.

**Definition 4.20**.: Let \(f\in\operatorname{Homeo}(\mathbb{S}^{1})\). A fixed point \(x\) of \(f\) is called _topologically hyperbolic_ if there exists \(\varepsilon>0\) such that

1. either for every \(y\in B(x,\varepsilon)\), \(f^{n}(y)\to x\) as \(n\to+\infty\),
2. or for every \(y\in B(x,\varepsilon)\), \(f^{-n}(y)\to x\) as \(n\to+\infty\).

We call \(x\) an _attractor_ of \(f\) in the first case and a _repellor_ in the second. Observe that if every fixed point of \(f\in\operatorname{Homeo}(\mathbb{S}^{1})\) is topologically hyperbolic, then necessarily \(f\) has finitely many fixed points and its attractors and repellors alternate along the circle.

**Corollary 4.21**.: _Let \(f\in T_{\mu}\) be a diffeomorphism such that every fixed point is topologically hyperbolic; then it has at least \(dr\) attractors and \(dr\) repellors._

Proof.: Assume for a contradiction that some \(f\in T_{\mu}\) has \(q\) attractors and \(q\) repellors with \(q<dr\). Denote these fixed points by \(a_{1},r_{1},a_{2},r_{2},\cdots,a_{q},r_{q}\), arranged in cyclic order, where the \(a_{i}\) are the attractors and the \(r_{i}\) are the repellors.
Take \(\varepsilon^{\prime}>0\) such that for every interval \(I\subset\mathbb{S}^{1}\) of length \(|I|\leqslant 2\varepsilon^{\prime}\), \(\nu_{i}^{+}(I)<\frac{1}{qdr}\) for every ergodic \(\mu^{+}\)-stationary measure \(\nu_{i}^{+}\). Consider the set \(\Sigma_{1}=\left\{\omega^{-}\in\Sigma^{-}:\Pi(\omega^{-})\cap(\bigcup_{j}B(r_{j},\varepsilon^{\prime}))\neq\varnothing\right\}.\) Then from Theorem 4.12(2),

\[\mathbf{P}^{-}(\Sigma_{1})\leqslant r\sum_{i\in[d]}\sum_{j=1}^{q}\nu_{i}^{+}(B(r_{j},\varepsilon^{\prime}))<1.\]

Let \(\Sigma_{2}=\Sigma^{-}\setminus\Sigma_{1}\), which has positive \(\mathbf{P}^{-}\)-measure. Assume that \(f\in\mathcal{S}^{n}\) for some \(n\), where \(\mathcal{S}=\operatorname{supp}\mu.\) Let \(m\) be a positive integer large enough so that \(f^{m}(\mathbb{S}^{1}\setminus\bigcup_{j}B(r_{j},\varepsilon^{\prime}))\subset\bigcup_{j=1}^{q}B(a_{j},c/2)\), where \(c\) is the constant in Lemma 4.19. Consider the set

\[\Sigma^{\prime}=\left\{\omega\in\Sigma:\pi^{-}\omega\in\Sigma_{2},f_{\omega}^{nm}=f^{m}\right\};\]

then \(\mathbf{P}(\Sigma^{\prime})=\mathbf{P}^{-}(\Sigma_{2})\,\mu^{*nm}(\{f^{m}\})>0.\) But by the definition of \(\Sigma_{2}\) and the cocycle invariance of \(\Pi(\,\boldsymbol{\cdot}\,)\), for \(\mathbf{P}\)-almost every \(\omega\in\Sigma^{\prime}\), we have

\[\Pi(\sigma^{nm}\omega)=f^{m}\Pi(\omega)\subset\bigcup_{j=1}^{q}B(a_{j},c/2).\]

Since \(\#\Pi(\sigma^{nm}\omega)=dr>q\), by the pigeonhole principle \(\Pi(\sigma^{nm}\omega)\) has two points within distance \(<c\). This contradicts Lemma 4.19. 

Now we show the last part of Theorem 4.12, assuming Lemma 7.1.

Proof of Theorem 4.12.: By Theorem 4.7, the constant \(d\) is precisely the number of \(T_{\mu}\)-minimal sets, which is obviously a topological invariant. By Lemma 7.1 and Corollary 4.21, \(2dr\) is the least number of fixed points of elements in \(T_{\mu}\) having only topologically hyperbolic fixed points. Thus, \(dr\) is also invariant under \(C^{0}\)-conjugacy. Hence, so is \(r.\) 

### Uniform good words

In this subsection, we will introduce a powerful tool -- the set of uniform good words. Roughly speaking, it is a set of words with large probability and uniform control, which enjoys global contraction properties on each connected component of the \(s\)-manifolds \(W^{s}(\omega)\). It enables us to handle smooth actions in a manner similar to linear actions, except that in the smooth case there are \(dr\) "cones" instead of the single cone of the linear case.

**Proposition 4.22**.: _For any \(\varepsilon>0,\) there exists a subset \(\Sigma_{\varepsilon}\subset\Sigma\) satisfying the following._

1. \(\mathbf{P}(\Sigma_{\varepsilon})>1-\varepsilon\)_._
2. _For every_ \((\omega,i)\in\Sigma_{\varepsilon}\times[d]\) _and every_ \(x\in\Pi(\omega,i)\)_, we have_ \(\lim_{n\to+\infty}\frac{1}{n}\log(f_{\omega}^{n})^{\prime}(x)=\lambda_{i}^{+}\)_. The convergence is uniform in_ \(\omega\)_,_ \(i\) _and_ \(x\)_._
3. _There exist_ \(C=C(\varepsilon)>0\) _and, for every_ \(\omega\in\Sigma_{\varepsilon},\) _an open set_ \(U(\omega)=U(\omega,\varepsilon)\subset\mathbb{S}^{1}\) _such that_
   * \(\Pi(\omega)\subset U(\omega)\subset W^{s}(\omega),\)
   * \(\mathbb{S}^{1}\setminus U(\omega)\subset\Xi(\omega)^{(\varepsilon)},\) _where_ \(\Xi(\omega)^{(\varepsilon)}=\bigcup_{x\in\Xi(\omega)}B(x,\varepsilon),\)
   * _for all_ \(n\geqslant 0\) _and every connected component_ \(I\) _of_ \(U(\omega,\varepsilon),\) _we have_ \(\widetilde{\varkappa}(f_{\omega}^{n},I)\leqslant C(\varepsilon).\)

We call the set \(\Sigma_{\varepsilon}\subset\Sigma\) constructed above a set of _uniform good words_.
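As a hedged numerical companion to Proposition 4.22 (again with the illustrative \(\operatorname{SL}(2,\mathbb{R})\) generators used above, and not part of the proof), one can watch a short interval contract exponentially under a random word while the distortion along the word stays bounded. Whether the chosen interval actually lies in \(U(\omega)\) depends on the sampled word; if it straddles a point of \(\Xi(\omega)\), contraction fails and the run is uninformative.

```python
import numpy as np

rng = np.random.default_rng(1)

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

A = np.array([[2.0, 0.0], [0.0, 0.5]])
GENS = [A, rot(1.0) @ A @ rot(-1.0)]  # illustrative generators, as before

def act(M, theta):
    w = M @ np.stack([np.cos(theta), np.sin(theta)])
    return np.arctan2(w[1], w[0])

def log_deriv(M, theta):
    # derivative of the direction action of M in SL(2,R): 1/|M v|^2 (natural log here)
    w = M @ np.stack([np.cos(theta), np.sin(theta)])
    return -np.log(np.sum(w * w, axis=0))

n = 60
word = [GENS[i] for i in rng.integers(0, 2, size=n)]

I = np.linspace(0.30, 0.31, 400)  # a short interval, assumed to lie in U(omega)
cur = I.copy()
logD = np.zeros_like(I)           # log (f_omega^k)'(y), y in I, accumulated by the chain rule
for k, M in enumerate(word, start=1):
    logD += log_deriv(M, cur)
    cur = act(M, cur)
    if k % 20 == 0:
        length = np.ptp(np.unwrap(cur))   # |f_omega^k I|, unwrapping the branch cut
        kappa = logD.max() - logD.min()   # distortion of f_omega^k on I
        print(f"k={k:2d}  |f^k I| = {length:.3e}  distortion = {kappa:.3f}")
```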
**Lemma 4.23**.: _Let \(\Sigma^{\prime}\subset\Sigma\) be a subset and \(x\in\mathbb{S}^{1}.\) Assume there exist \(C,c>0\) such that \((f_{\omega}^{n})^{\prime}(x)\leqslant C2^{-cn}\) for every \(\omega\in\Sigma^{\prime}\) and \(n\in\mathbb{N}.\) Then there exist \(C^{\prime},\rho>0\), depending only on \(C\) and \(c\), such that_

\[\widetilde{\varkappa}(f_{\omega}^{n},B(x,\rho))\leqslant C^{\prime},\quad\forall\omega\in\Sigma^{\prime},\ n\in\mathbb{N}.\]

Proof.: Let

\[M=\max\left\{\max_{f\in\mathcal{S}}\|f^{\prime}\|_{\infty},\max_{f\in\mathcal{S}}\|\log f^{\prime}\|_{\mathrm{Lip}}\right\}.\]

Since \((f_{\omega}^{n})^{\prime}(x)\leqslant C2^{-cn},\) we have the uniform bound

\[\sum_{n=0}^{+\infty}(f_{\omega}^{n})^{\prime}(x)\leqslant\frac{C}{1-2^{-c}}.\]

Let

\[\rho=\frac{1-2^{-c}}{2MC},\quad C^{\prime}=2M\frac{C}{1-2^{-c}}.\]

By Proposition 3.8, \(\widetilde{\varkappa}(f_{\omega}^{n},B(x,\rho))\leqslant C^{\prime}\) for every \(\omega\in\Sigma^{\prime},n\in\mathbb{N}.\) 

Proof of Proposition 4.22.: First, we can take \(\Sigma^{\prime}\subset\Sigma\) with \(\mathbf{P}(\Sigma^{\prime})>1-\varepsilon/2\) such that

\[\forall\omega\in\Sigma^{\prime},\,\forall i\in[d],\,\forall x\in\Pi(\omega,i),\quad\lim_{n\to+\infty}\frac{1}{n}\log(f_{\omega}^{n})^{\prime}(x)=\lambda_{i}^{+},\]

where the convergence is uniform. Let

\[c=\frac{1}{2}\inf_{i\in[d]}(-\lambda_{i}^{+})>0;\]

then there exists \(C>0\) such that for all \(\omega\in\Sigma^{\prime},\)

\[(f_{\omega}^{n})^{\prime}(x)\leqslant C2^{-cn},\quad\forall x\in\Pi(\omega),n\in\mathbb{N}.\]

By Lemma 4.23, there exist \(C^{\prime},\rho>0\) such that for every \(\omega\in\Sigma^{\prime},\)

\[\widetilde{\varkappa}(f_{\omega}^{n},B(x,\rho))\leqslant C^{\prime},\quad\forall x\in\Pi(\omega),n\in\mathbb{N}.\]

In particular, for every \(y\in B(x,\rho)\) we have \(\frac{1}{n}\log(f_{\omega}^{n})^{\prime}(y)\to\lambda_{i}^{+}<0;\) it follows that \(y\notin\Xi(\omega).\) For every \(\omega\in\Sigma^{\prime},\) let

\[K(\omega)=\mathbb{S}^{1}\setminus\left(\bigcup_{x\in\Pi(\omega)}B(x,\rho)\right).\]

Then \(\Xi(\omega)\subset K(\omega)\subset W^{u}(\omega)\) and \(K(\omega)\) is a finite union of closed intervals. By (7) of Theorem 4.15, for every connected component \(J\) of \(K(\omega),\) \(|f_{\omega}^{-n}J|\to 0.\) Now we can take a subset \(\Sigma^{\prime\prime}\subset\Sigma^{\prime}\) with \(\mathbf{P}(\Sigma^{\prime\prime})>1-\varepsilon\) and \(N\) sufficiently large such that

\[|f_{\omega}^{-N}J|<\varepsilon\]

for every \(\omega\in\Sigma^{\prime\prime}\) and every connected component \(J\) of \(K(\omega).\) Let \(\Sigma_{\varepsilon}=\sigma^{-N}\Sigma^{\prime\prime};\) then \(\mathbf{P}(\Sigma_{\varepsilon})=\mathbf{P}(\Sigma^{\prime\prime})>1-\varepsilon\) and condition (2) holds.
For every \(\omega\in\Sigma_{\varepsilon},\) set

\[U(\omega)=f_{\sigma^{N}\omega}^{-N}\left(\bigcup_{x\in\Pi(\sigma^{N}\omega)}B(x,\rho)\right);\]

then \(\Pi(\omega)\subset U(\omega)\subset W^{s}(\omega)=\mathbb{S}^{1}\setminus\Xi(\omega).\) The complement of \(U(\omega)\) is exactly

\[f_{\sigma^{N}\omega}^{-N}(K(\sigma^{N}\omega)),\]

each connected component of which has length less than \(\varepsilon\) by the choice of \(N.\) It suffices to estimate the distortion on the intervals \(f_{\sigma^{N}\omega}^{-N}B(x,\rho).\) Let

\[M=\max\left\{\sup_{f\in\mathcal{S}}\|f^{\prime}\|,\sup_{f\in\mathcal{S}}\|\log f^{\prime}\|_{\mathrm{Lip}},2\right\},\]

and take \(C=C(\varepsilon)=M^{N}C^{\prime}.\) Since \(\sigma^{N}\omega\in\Sigma^{\prime\prime},\) combining with (3.5), we obtain

\[\widetilde{\varkappa}(f_{\omega}^{n},f_{\sigma^{N}\omega}^{-N}B(x,\rho))\leqslant C(\varepsilon),\quad\forall x\in\Pi(\sigma^{N}\omega),\forall n\geqslant 0.\]

This proves the statement. 

**Corollary 4.24**.: _For every \(\varepsilon>0,\) let \(\Sigma_{\varepsilon}\subset\Sigma\) be the set defined in Proposition 4.22 with the corresponding \(U(\omega)=U(\omega,\varepsilon).\) Then there exist positive numbers \(\varepsilon_{n}\to 0\) such that_

\[|f_{\omega}^{n}I|<2^{(\lambda_{i}^{+}+\varepsilon_{n})n}\]

_for every \(\omega\in\Sigma_{\varepsilon},\) every \(n\in\mathbb{N}\) and every connected component \(I\) of \(U(\omega)\cap W^{s}(\omega,i).\)_

Combined with Theorem 4.15(5) and Lemma 4.18, the following is immediate.

**Corollary 4.25**.: _There exists \(\Sigma^{\prime}\subset\Sigma\) such that \(\mathbf{P}(\Sigma^{\prime})>0\) and the following holds for every \(\omega\in\Sigma^{\prime}\). For every \(i\in[d]\) and every \(x\in\Pi(\omega,i)\) there is an open interval \(I\) containing \(x\) satisfying \(\nu_{i}^{+}(I)\geqslant\frac{1}{2r}\) and such that_

\[\forall n\in\mathbb{N},\quad\lambda_{i}^{+}-\varepsilon_{n}\leqslant\frac{1}{n}\log|f_{\omega}^{n}I|\leqslant\lambda_{i}^{+}+\varepsilon_{n},\]

_where \(\varepsilon_{n}\to 0^{+}\) is a sequence independent of \((\omega,x)\)._

### Effective convergence to the Furstenberg boundary

We may consider the distribution of \(\Pi(\omega,i)\) as a probability measure on \(\mathbb{S}_{r}.\) For every \(i\in[d],\) consider the measurable maps

\[\Pi(\,\boldsymbol{\cdot}\,,i):\Sigma\to\mathbb{S}_{r},\quad\Xi(\,\boldsymbol{\cdot}\,,i):\Sigma\to\mathbb{S}_{r}.\]

Let \(\underline{\nu}_{i}^{+},\underline{\nu}_{i}^{-}\) be the probability measures on \(\mathbb{S}_{r}\) which are the push-forwards of \(\mathbf{P}\) by \(\Pi(\,\boldsymbol{\cdot}\,,i),\Xi(\,\boldsymbol{\cdot}\,,i),\) respectively. For every \(\underline{x}\in\mathbb{S}_{r},\) recall that \(u_{\underline{x}}\) denotes the uniform probability measure on \(\underline{x}\subset\mathbb{S}^{1}.\) Combining Theorem 4.12(2) and the definition of \(\underline{\nu}_{i}^{\pm},\) we have

\[\nu_{i}^{+}=\int_{\mathbb{S}_{r}}u_{\underline{x}}\,\mathrm{d}\underline{\nu}_{i}^{+}(\underline{x}),\quad\nu_{i}^{-}=\int_{\mathbb{S}_{r}}u_{\underline{x}}\,\mathrm{d}\underline{\nu}_{i}^{-}(\underline{x}).\]

**Proposition 4.26**.: _Given \(\varepsilon>0\), there exists \(N>0\) such that for every \(i\in[d]\), for all \(\underline{x}\in\mathrm{supp}\,\underline{\nu}_{i}^{+}\) and all \(n\geqslant N\),_

\[\mathbf{P}\left\{\omega:\underline{d}(f_{\sigma^{-n}\omega}^{n}\underline{x},\Pi(\omega,i))<2^{(\lambda_{i}^{+}+\varepsilon)n}\right\}\geqslant 1-\varepsilon.\]

This proposition can be interpreted as an effective version of the convergence (4.3).
For \(\omega\in\Sigma\) and \(i\in[d],\) define

\[\underline{W}^{s}(\omega,i)=\left\{\underline{x}\in\mathbb{S}_{r}:\bigcup_{x\in\underline{x}}W^{s}(\omega,x)=W^{s}(\omega,i)\right\}.\]

In other words, \(\underline{W}^{s}(\omega,i)\) is the set of \(\underline{x}\in\mathbb{S}_{r}\) whose elements fall in different connected components of \(W^{s}(\omega,i)\). Note that \(\underline{W}^{s}(\omega,i)\) depends only on \((\omega^{+},i)\) and that \(\Pi(\omega,i)\in\underline{W}^{s}(\omega,i)\) for \(\mathbf{P}\)-almost every \(\omega\in\Sigma\); hence \(\mathrm{supp}\,\underline{\nu}_{i}^{+}\subset\underline{W}^{s}(\omega,i)\) for \(\mathbf{P}\)-almost every \(\omega\in\Sigma.\) We will show Proposition 4.26 for all \(\underline{x}\in\mathbb{S}_{r}\) such that \(\underline{x}\in\underline{W}^{s}(\omega,i)\) for \(\mathbf{P}\)-almost every \(\omega\in\Sigma\). Such \(\underline{x}\) form a larger set than \(\mathrm{supp}\,\underline{\nu}_{i}^{+}\) in some situations. For instance, if \(d=r=1,\) then the proposition holds for all \(x\in\mathbb{S}^{1}.\)

Proof.: In view of Lemma 4.18, let \(\varepsilon^{\prime}\in(0,\varepsilon/2)\) be such that

\[\forall j\in[d],\,\forall x\in\mathbb{S}^{1},\quad\nu_{j}^{-}(B(x,\varepsilon^{\prime}))\leqslant\frac{\varepsilon}{2dr^{2}}.\]

Let \(\Sigma_{\varepsilon^{\prime}}\subset\Sigma\) be the set of uniform good words with the corresponding constant \(C=C(\varepsilon^{\prime})\) and open sets \(U(\omega)=U(\omega,\varepsilon^{\prime})\) given by Proposition 4.22. Consider

\[\Sigma^{\prime}=\left\{\omega\in\Sigma_{\varepsilon^{\prime}}:\underline{x}\subset U(\omega)\right\}.\]

We claim that \(\mathbf{P}(\Sigma^{\prime})\geqslant 1-\varepsilon.\) Indeed, if \(\omega\in\Sigma_{\varepsilon^{\prime}}\) and \(\omega\not\in\Sigma^{\prime},\) then there exists \(x\in\underline{x}\) such that \(x\in\mathbb{S}^{1}\setminus U(\omega)\subset\bigcup_{y\in\Xi(\omega)}B(y,\varepsilon^{\prime})\). It follows that \(B(x,\varepsilon^{\prime})\cap\Xi(\omega)\neq\varnothing.\) Recalling (1) and (2) in Theorem 4.12, we have

\[\mathbf{P}(\Sigma_{\varepsilon^{\prime}}\setminus\Sigma^{\prime})\leqslant\int r\sum_{j\in[d]}\sum_{x\in\underline{x}}u_{\Xi(\omega,j)}(B(x,\varepsilon^{\prime}))\,\mathrm{d}\mathbf{P}(\omega)=r\sum_{j\in[d]}\sum_{x\in\underline{x}}\nu_{j}^{-}(B(x,\varepsilon^{\prime}))\leqslant r^{2}d\cdot\frac{\varepsilon}{2dr^{2}}=\frac{\varepsilon}{2},\]

showing \(\mathbf{P}(\Sigma^{\prime})\geqslant 1-\varepsilon\) since \(\mathbf{P}(\Sigma_{\varepsilon^{\prime}})>1-\varepsilon^{\prime}\geqslant 1-\varepsilon/2\). Moreover, for \(\mathbf{P}\)-almost every \(\omega\in\Sigma^{\prime}\), since \(\underline{x}\in\underline{W}^{s}(\omega,i)\), the \(r\) elements of \(\underline{x}\) fall in \(r\) distinct connected components of \(U(\omega)\). By Corollary 4.24, there exists \(N\) depending only on \(\mu\) and \(\varepsilon\) such that for all \(n\geqslant N\), all \(\omega\in\Sigma^{\prime}\) and every connected component \(I\) of \(U(\omega)\cap W^{s}(\omega,i)\), \(|f_{\omega}^{n}I|<\frac{1}{r}2^{(\lambda_{i}^{+}+\varepsilon)n}\). Now for every \(n\geqslant N\), if \(\sigma^{-n}\omega\in\Sigma^{\prime}\), then the elements of \(\underline{x}\) can be paired with those of \(\Pi(\sigma^{-n}\omega,i)\) such that each pair belongs to the same connected component of \(U(\sigma^{-n}\omega)\). Hence

\[\underline{d}(f_{\sigma^{-n}\omega}^{n}\underline{x},\Pi(\omega,i))<r\cdot\frac{1}{r}2^{(\lambda_{i}^{+}+\varepsilon)n}=2^{(\lambda_{i}^{+}+\varepsilon)n}.\]

Since \(\mathbf{P}(\sigma^{-n}\Sigma^{\prime})=\mathbf{P}(\Sigma^{\prime})\geqslant 1-\varepsilon\), the conclusion follows. 
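Before moving to exact dimensionality, where the dimension is expressed through the Furstenberg entropy and the Lyapunov exponent, we note that the Lyapunov exponent \(\lambda(\mu,\nu)\) is easy to approximate numerically as a Birkhoff average of \(\log f^{\prime}\) along a forward orbit. The sketch below is again only an illustration with the same assumed \(\operatorname{SL}(2,\mathbb{R})\) generators, whose direction action has derivative \(1/|Mv|^{2}\); it uses the natural logarithm, so dividing by \(\log 2\) matches the base-\(2\) convention of the estimates above.

```python
import numpy as np

rng = np.random.default_rng(2)

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

A = np.array([[2.0, 0.0], [0.0, 0.5]])
GENS = [A, rot(1.0) @ A @ rot(-1.0)]  # illustrative generators, as above

def act(M, theta):
    w = M @ np.array([np.cos(theta), np.sin(theta)])
    return np.arctan2(w[1], w[0])

def log_deriv(M, theta):
    w = M @ np.array([np.cos(theta), np.sin(theta)])
    return -np.log(w @ w)  # log of the derivative 1/|M v|^2

# Birkhoff average of log f' along a forward orbit; for an (assumed unique)
# ergodic stationary measure nu it converges a.s. to lambda(mu, nu) < 0
N = 200_000
theta, total = 0.3, 0.0
for i in rng.integers(0, 2, size=N):
    total += log_deriv(GENS[i], theta)
    theta = act(GENS[i], theta)
print("lambda (natural log) ~", total / N)
print("lambda (base 2)      ~", total / N / np.log(2.0))
```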
## 5 Exact dimensionality of stationary measures

In this section, we will prove Theorem 2.4, which gives the exact dimensionality of the ergodic stationary measures. Our strategy is similar to that used in [39, Theorem 3.4] for \(\mathrm{PSL}(2,\mathbb{R})\) actions, but general circle diffeomorphisms are not as rigid as Möbius transformations. Therefore, we will make use of Pesin theory, distortion controls, and a hyperbolic time argument (the set (5.5) corresponds to hyperbolic times) to obtain our result.

### Preparation

In this subsection, we collect some useful results from [39]. Let \(\mu\) be a finitely supported probability measure on \(\mathrm{Diff}_{+}^{2}(\mathbb{S}^{1})\) without common invariant probability measures. Let \(\nu\) be an ergodic \(\mu\)-stationary measure on \(\mathbb{S}^{1}\) and let \(m\) be the corresponding ergodic \(u\)-state (recall Proposition 4.2). For every \(f\in\mathcal{S}\) and interval \(I\subset\mathbb{S}^{1}\), define

\[\varphi(f,I)\coloneqq\frac{f_{*}\nu(I)}{\nu(I)}. \tag{5.1}\]

Besicovitch's derivation theorem states that

\[\frac{\mathrm{d}f_{*}\nu}{\mathrm{d}\nu}(x)=\lim_{I\ni x,\ |I|\to 0}\varphi(f,I),\ \ \ \ \nu\text{-a.e. }x.\]

**Lemma 5.1** ([39, Lemma 3.5]).: _For every \(f\in\mathcal{S}\) and \(t>0\),_

\[\nu\left\{x:\sup_{I\ni x}\varphi(f,I)>t\right\}\leqslant 2t^{-1},\ \ \ f_{*}\nu\left\{x:\inf_{I\ni x}\varphi(f,I)<t^{-1}\right\}\leqslant 2t^{-1}.\]

Using the relation \(Q_{*}m=\nu\) and the fact that \(m\) is a \(u\)-state, we obtain the following consequence, which is essentially [39, Corollary 3.6].

**Lemma 5.2**.: _For every \(t>0,\)_

\[m\left\{(\omega,x)\in\Sigma\times\mathbb{S}^{1}:\sup_{I\ni x}\varphi(f_{\sigma^{-1}\omega},I)\geqslant t\right\}\ll_{r}t^{-1},\]

_and_

\[m\left\{(\omega,x)\in\Sigma\times\mathbb{S}^{1}:\inf_{I\ni x}\varphi(f_{\sigma^{-1}\omega},I)\leqslant t^{-1}\right\}\ll_{r}t^{-1}.\]

Define \(\mathcal{O}:\Sigma\times\mathbb{S}^{1}\times\mathbb{R}_{+}\to\mathbb{R}_{+}\) as

\[\mathcal{O}(\omega,x,\rho)\coloneqq\sup_{I\ni x,|I|\leqslant\rho}\log\varphi(f_{\sigma^{-1}\omega},I)-\inf_{I\ni x,|I|\leqslant\rho}\log\varphi(f_{\sigma^{-1}\omega},I).\]

Besicovitch's derivation theorem can be restated as \(\lim_{\rho\to 0^{+}}\mathcal{O}(\omega,x,\rho)=0\) for \(m\)-almost every \((\omega,x)\in\Sigma\times\mathbb{S}^{1}\). By Lemma 5.2 and the layer cake representation, the following is immediate.

**Lemma 5.3** (see [39, Corollary 3.7]).: _The function \((\omega,x)\mapsto\sup_{\rho>0}\mathcal{O}(\omega,x,\rho)\) is in \(L^{1}(\Sigma\times\mathbb{S}^{1},m).\)_

The following is a variant of Maker's Theorem, as stated in [39].

**Theorem 5.4** ([39, Theorem 3.8]).: _Let \((X,\mathcal{F},\theta,T)\) be an ergodic probability measure preserving system. Let \((G_{t})_{t>0}\) be a \(1\)-parameter family of measurable functions \(G_{t}\colon X\to\mathbb{R}\) such that \(\sup_{t}|G_{t}|\in L^{1}.\) Suppose that_

\[G=\lim_{t\to 0}G_{t}\]

_exists almost everywhere. Let \(t_{N,n}:X\to\mathbb{R}\) be functions with the property that for \(\theta\)-a.e. \(x\) and every \(\varepsilon>0,\) for large enough \(N,\)_

\[|t_{N,n}(x)|<\varepsilon,\quad\text{for }1\leqslant n\leqslant(1-\varepsilon)N.\]

_Then_

\[\lim_{N\to+\infty}\frac{1}{N}\sum_{n=1}^{N}G_{t_{N,n}(x)}(T^{n}x)=\int G\,\mathrm{d}\theta,\quad\text{ $\theta$-a.e. }x.\]

### Exact dimensionality

Let \(\nu\) be an ergodic \(\mu\)-stationary measure and let \(m\in\mathcal{P}^{u}\) be the corresponding ergodic \(u\)-state.
Let \(\lambda=\lambda(\mu,\nu)=\lambda(m)\) denote the Lyapunov exponent, which is negative under our assumption. Let \(h=h_{\mathrm{F}}(\mu,\nu)\) be the Furstenberg entropy. Note that, using the relation between \(\nu\) and \(m\) and a change of variable, the Furstenberg entropy can be expressed as

\[h=\int\log\frac{\mathrm{d}(f_{\sigma^{-1}\omega})_{*}\nu}{\mathrm{d}\nu}(x)\,\mathrm{d}m(\omega,x).\]

Proof of Theorem 2.4.: Our goal is to show that for \(\nu\)-almost every \(x\in\mathbb{S}^{1},\) or, in other words, for \(m\)-almost every \((\omega,x)\in\Sigma\times\mathbb{S}^{1},\)

\[\lim_{\rho\to 0^{+}}\frac{\log\nu(B(x,\rho))}{\log\rho}=-\frac{h}{\lambda}. \tag{5.2}\]

Fix an arbitrary element \(\omega\in\Sigma\) regular for the random walk (Definition 4.17) and a point \(x\in\operatorname{supp}m_{\omega}.\) Let \(I_{0}=B(x,\rho)\) and for integers \(n\geqslant 0,\) set

\[x_{n}=f_{\omega}^{-n}x\quad\text{and}\quad I_{n}=f_{\omega}^{-n}I_{0}.\]

For any integer \(N\geqslant 1,\) we can write

\[\nu(I_{0})=\nu(I_{N})\prod_{n=0}^{N-1}\frac{\nu(I_{n})}{\nu(I_{n+1})}=\nu(I_{N})\prod_{n=0}^{N-1}\frac{\nu(I_{n})}{(f_{\sigma^{-(n+1)}\omega})_{*}\nu(I_{n})}.\]

Recalling the definition of \(\varphi(f,I)\) in (5.1),

\[-\log\nu(I_{0})=-\log\nu(I_{N})+\sum_{n=0}^{N-1}\log\varphi(f_{\sigma^{-(n+1)}\omega},I_{n}).\]

Since

\[\left|\log\frac{\mathrm{d}(f_{\sigma^{-(n+1)}\omega})_{*}\nu}{\mathrm{d}\nu}(x_{n})-\log\varphi(f_{\sigma^{-(n+1)}\omega},I_{n})\right|\leqslant\mathcal{O}(\sigma^{-n}\omega,x_{n},|I_{n}|),\]

we have

\[\left|-\log\nu(I_{0})-\sum_{n=0}^{N-1}\log\frac{\mathrm{d}(f_{\sigma^{-(n+1)}\omega})_{*}\nu}{\mathrm{d}\nu}(x_{n})\right|\leqslant-\log\nu(I_{N})+\sum_{n=0}^{N-1}\mathcal{O}(\sigma^{-n}\omega,x_{n},|I_{n}|). \tag{5.3}\]

Note that the sum on the left-hand side is independent of \(\rho\) and is a Birkhoff sum of the function \((\omega,x)\mapsto\log\frac{\mathrm{d}(f_{\sigma^{-1}\omega})_{*}\nu}{\mathrm{d}\nu}(x)\) for the transformation \(F^{-1}\). Hence, by Birkhoff's ergodic theorem, outside a null set, as \(N\to+\infty\),

\[\frac{1}{N}\sum_{n=0}^{N-1}\log\frac{\mathrm{d}(f_{\sigma^{-(n+1)}\omega})_{*}\nu}{\mathrm{d}\nu}(x_{n})\to h. \tag{5.4}\]

Recall that our objective is to estimate \(\lim_{\rho\to 0^{+}}\frac{\log\nu(I_{0})}{\log\rho}\). To make use of (5.3), we still have the freedom to choose \(N=N(\omega,\rho)\) according to \(\rho>0\). The choice will guarantee \(N(\omega,\rho)\sim\frac{\log\rho}{\lambda}\) and that, at the same time, the right-hand side of (5.3) is relatively small compared with \(N(\omega,\rho)\) as \(\rho\to 0^{+}\).

By Corollary 4.25, we first fix a subset \(\Sigma^{\prime}\subset\Sigma\) and a sequence \(\varepsilon_{n}\to 0^{+}\) such that \(\mathbf{P}(\Sigma^{\prime})>0\) and that for \(m\)-almost every \((\omega^{\prime},y)\in\Sigma^{\prime}\times\mathbb{S}^{1}\), there is an interval \(I=I(\omega^{\prime},y)\) containing \(y\), satisfying \(\nu(I)\geqslant\frac{1}{2r}\) and, for any \(n\in\mathbb{N}\), \(\log|f_{\omega^{\prime}}^{n}I|\leqslant(\lambda+\varepsilon_{n})n\). For every \(\omega\in\Sigma\), we define

\[A_{\omega}\coloneqq\left\{N:\sigma^{-N}\omega\in\Sigma^{\prime}\right\}, \tag{5.5}\]

the family of _hyperbolic times_: for \(N\in A_{\omega}\), the map \(f_{\sigma^{-N}\omega}^{N}\) possesses significant hyperbolicity.
For \(\mathbf{P}\)-almost every \(\omega\in\Sigma\), the set \(A_{\omega}\) is infinite and thus

\[N(\omega,\rho)\coloneqq\min\left\{N\in A_{\omega}:2^{(\lambda+\varepsilon_{N})N}\leqslant\rho\right\}\]

is well defined for all \(\rho>0\). We claim that

\[\frac{\log\rho}{N(\omega,\rho)}\to\lambda\quad\text{as }\rho\to 0^{+}. \tag{5.6}\]

Indeed, if we list the elements of \(A_{\omega}=\{N_{1},N_{2},\dots\}\) in increasing order, then by Birkhoff's ergodic theorem, for \(\mathbf{P}\)-almost every \(\omega\), \(N_{k}/k\to\mathbf{P}(\Sigma^{\prime})^{-1}\) and hence \(N_{k-1}/N_{k}\to 1\) as \(k\to+\infty\). By definition, \(N(\omega,\rho)\) is some \(N_{k}\) such that \((\lambda+\varepsilon_{N_{k}})N_{k}\leqslant\log\rho<(\lambda+\varepsilon_{N_{k-1}})N_{k-1}\). Dividing by \(N_{k}\) and taking limits shows (5.6).

We write \(N=N(\omega,\rho)\) as shorthand. From \(N\in A_{\omega}\) and the point \(x\in\operatorname{supp}m_{\omega}\), we know that there is an interval \(I=I(F^{-N}(\omega,x))\) containing \(f_{\omega}^{-N}x\), satisfying \(\nu(I)\geqslant\frac{1}{2r}\) and

\[|f_{\sigma^{-N}\omega}^{N}I|\leqslant 2^{(\lambda+\varepsilon_{N})N}\leqslant\rho.\]

Hence \(f_{\sigma^{-N}\omega}^{N}I\subset B(x,\rho)=I_{0}\) and \(I_{N}=f_{\omega}^{-N}I_{0}\supset I\) has measure

\[\nu(I_{N})\geqslant\frac{1}{2r}. \tag{5.7}\]

We claim that for \(\mathbf{P}\)-almost every \(\omega\in\Sigma\), there is \(\widetilde{\varepsilon}_{\rho}\to 0\) as \(\rho\to 0^{+}\) (depending on \(\omega\)) such that for every \(1\leqslant n\leqslant N\) we have

\[|I_{n}|\leqslant 2^{\lambda(N-n)+\widetilde{\varepsilon}_{\rho}N}.\]

Thus, applying Theorem 5.4 (with \((G_{t},t_{N,n},T,X)=(\mathcal{O}(\,\boldsymbol{\cdot}\,,\,\boldsymbol{\cdot}\,,t),|I_{n}|,F^{-1},\Sigma\times\mathbb{S}^{1})\)), we obtain that

\[\frac{1}{N}\sum_{n=0}^{N-1}\mathcal{O}(\sigma^{-n}\omega,x_{n},|I_{n}|)\to 0\]

for \(m\)-almost every \((\omega,x)\in\Sigma\times\mathbb{S}^{1}\). Combining (5.3), (5.4), (5.7) and (5.6), we obtain the desired convergence (5.2).

To show the claim, recall that \(\omega\) is regular for the random walk. Then there exist positive numbers \(\varepsilon_{n}^{\prime}\to 0\) such that

\[\forall n\in\mathbb{N},\quad(f_{\omega}^{-n})^{\prime}(x)\leqslant 2^{(-\lambda+\varepsilon_{n}^{\prime})n}\quad\text{and}\quad\sum_{k=0}^{n-1}(f_{\omega}^{-k})^{\prime}(x)\leqslant 2^{(-\lambda+\varepsilon_{n}^{\prime})n}.\]

Writing \(M=\max\big{\{}\sup_{f\in\mathcal{S}^{-1}}\|f^{\prime}\|,\sup_{f\in\mathcal{S}^{-1}}\|\log f^{\prime}\|_{\mathrm{Lip}},2\big{\}},\) by Proposition 3.8, we have \(\varkappa(f_{\omega}^{-n},I_{0})\leqslant 1\) whenever

\[\rho\leqslant\frac{1}{2M}2^{(\lambda-\varepsilon_{n}^{\prime})n}.\]

For such \(n\), we have

\[|I_{n}|\leqslant 2(f_{\omega}^{-n})^{\prime}(x)|I_{0}|\leqslant 2^{(-\lambda+\varepsilon_{n}^{\prime})n+2}\rho.\]

Recalling (5.6), we can write

\[\rho=2^{(\lambda+\varepsilon_{\rho})N}\]

with some \(\varepsilon_{\rho}\to 0\) as \(\rho\to 0^{+}\). The claim can now be verified easily, with the choice

\[\widetilde{\varepsilon}_{\rho}=\varepsilon_{\rho}+\sup_{1\leqslant n\leqslant N}\varepsilon_{n}^{\prime}\frac{n}{N}+\frac{2+\log M}{N}.\]

## 6 Dimension Formulas

In the previous section, we established the exact dimensionality of ergodic stationary measures and expressed the dimension as the ratio between the Furstenberg entropy and the Lyapunov exponent. However, computing the Furstenberg entropy can be challenging in practice.
In this section, we will demonstrate that the random walk entropy can be used as a substitute for the Furstenberg entropy under a suitable separation condition, namely the local discreteness of the group generated by the support of the measure.

Let us explain how local discreteness helps in this setting. First, we use the language of entropy to interpret the dimension. Let \(\alpha\) be the exact dimension of an ergodic \(\mu\)-stationary measure \(\nu.\) Then \(\alpha\) also equals the entropy dimension of \(\nu\), i.e.,

\[\alpha=\lim_{n\to\infty}\frac{1}{n}H(\nu,\mathcal{D}_{n}).\]

Recall that \(\Pi(\omega)\) plays the role of the Furstenberg boundary. Proposition 4.26 gives rise to some \(\underline{x}\in\mathbb{S}_{r}\) with which we can interpret \(\nu\) as the weak* limit

\[\nu=\lim_{n\to\infty}\mu^{*n}*u_{\underline{x}}.\]

Combining the convergences above, we can expect that

\[\frac{1}{n}H(\mu^{*n}*u_{\underline{x}},\mathcal{D}_{-\lambda n})\to-\lambda\alpha,\]

where \(\lambda<0\) is the corresponding Lyapunov exponent. This relates the \(n\)-step iterations of \(\mu\) to the dimension \(\alpha\). If there were a gap between \(h_{\mathrm{RW}}(\mu)\) and \(-\lambda\alpha\), then there would be exponentially many elements in \(\operatorname{supp}\mu^{*n}\) mapping a point of \(\underline{x}\) into the same atom of \(\mathcal{D}_{-\lambda n}.\) Combining the derivative and distortion controls for good words in \(\operatorname{supp}\mu^{*n},\) the restrictions of these elements to a fixed interval would be very close to each other. This allows us to construct elements of the form \(g^{-1}f\) that are arbitrarily (\(C^{1}\)-)close to the identity on an interval, contradicting the local discreteness assumption.

### Dimension formula on the circle

We will now prove Theorem 2.10. Let \(\mu\) be a probability measure on \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) whose support is finite and does not preserve any Borel probability measure on \(\mathbb{S}^{1}\). Let \(\nu\) be an ergodic \(\mu\)-stationary measure. By the previous discussions, we know that \(\nu\) is exact dimensional, and we denote its dimension by \(\alpha\). We use the notation of Theorem 4.12 and, without loss of generality, assume that \(\nu=\nu_{0}^{+}\). Recall that \(\underline{\nu}_{0}^{+}\) is the probability measure on \(\mathbb{S}_{r}\) which is the image measure of \(\mathbf{P}\) by \(\Pi(\,\cdot\,,0)\). We abbreviate \(\underline{\nu}_{0}^{+}\) to \(\underline{\nu}\) in this section.

**Lemma 6.1**.: _For every \(\underline{x}\in\operatorname{supp}\underline{\nu},\) we have_

\[\lim_{n\to+\infty}\frac{1}{n}H(\mu^{*n}*u_{\underline{x}},\mathcal{D}_{-\lambda n})=-\alpha\lambda.\]

Recall that \(u_{\underline{x}}\) denotes the uniform probability measure on \(\underline{x}\in\mathbb{S}_{r}\).

Proof.: By Proposition 4.26, for every \(\varepsilon>0\) and every \(n\) large enough, we have

\[\mathbf{P}(\Sigma^{\prime})\geqslant 1-\varepsilon\]

where

\[\Sigma^{\prime}=\left\{\omega\in\Sigma:\underline{d}(f^{n}_{\sigma^{-n}\omega}\underline{x},\Pi(\omega,0))<2^{(\lambda+\varepsilon)n}\right\}.\]

From this we can construct a Borel probability measure \(\theta_{n}\) on \(\mathbb{S}^{1}\times\mathbb{S}^{1}\) whose marginal distributions are \(\mu^{*n}*u_{\underline{x}}\) and \(\nu\) and moreover

\[\theta_{n}\left\{(x,y):d(x,y)\leqslant 2^{(\lambda+\varepsilon)n}\right\}\geqslant 1-\varepsilon.
\tag{6.1}\]

Indeed, write \(\underline{x}=\{x_{1},\ldots,x_{r}\}\) and, for each \(\omega\in\Sigma\), label the elements of \(\Pi(\omega,0)\) as \(y_{1}(\omega),\ldots,y_{r}(\omega)\) (in a measurable way). Define

\[\theta_{n}\coloneqq\int_{\Sigma}\frac{1}{r}\sum_{i=1}^{r}\delta_{(f^{n}_{\sigma^{-n}\omega}x_{i},y_{i}(\omega))}\,\mathrm{d}\mathbf{P}(\omega),\]

so that \(\theta_{n}\) is a probability measure on \(\mathbb{S}^{1}\times\mathbb{S}^{1}\). Its projection to the first coordinate is \(\mu^{*n}*u_{\underline{x}}\) and to the second coordinate is \(\nu\). At this stage, we still have the freedom to choose the labelling \(\Pi(\omega,0)=\{y_{1}(\omega),\ldots,y_{r}(\omega)\}\). For \(\omega\in\Sigma^{\prime}\), there is a unique labelling such that for every \(i=1,\ldots,r\), \(d(f^{n}_{\sigma^{-n}\omega}x_{i},y_{i}(\omega))\leqslant 2^{(\lambda+\varepsilon)n}\), because \(\Pi(\omega,0)\) is \(c\)-separated for some \(c=c(\mu)>0\) by Lemma 4.19 and \(\underline{d}(f^{n}_{\sigma^{-n}\omega}\underline{x},\Pi(\omega,0))<2^{(\lambda+\varepsilon)n}<c/2\). This shows (6.1).

Applying Lemma 6.2 below, we obtain

\[|H(\mu^{*n}*u_{\underline{x}},\mathcal{D}_{-\lambda n})-H(\nu,\mathcal{D}_{-\lambda n})|\ll\varepsilon(-\lambda)n+1.\]

Since \(\varepsilon>0\) is arbitrarily small and \(\lim_{n\to+\infty}\frac{1}{n}H(\nu,\mathcal{D}_{n})=\alpha\), the desired convergence follows. 

In the proof above, we used the following elementary fact.

**Lemma 6.2**.: _Let \(\theta\) be a Borel probability measure on \(\mathbb{S}^{1}\times\mathbb{S}^{1}\) such that_

\[\theta\left\{(x,y):d(x,y)\leqslant 2^{-(1-\varepsilon)n}\right\}\geqslant 1-\varepsilon\]

_for some \(n\in\mathbb{N}\) and \(\varepsilon>0.\) Let \(\eta\) and \(\zeta\) be the marginal distributions of \(\theta\) on the two coordinates. Then_

\[|H(\eta,\mathcal{D}_{n})-H(\zeta,\mathcal{D}_{n})|\ll\varepsilon n+1.\]

Proof.: Using (3.2) and \(\log\#\mathcal{D}_{n}=n\), we can reduce to the case where

\[\theta\left\{(x,y):d(x,y)\leqslant 2^{-(1-\varepsilon)n}\right\}=1.\]

Let \(\mathcal{T}\) be the trivial partition of \(\mathbb{S}^{1}.\) Note that \(H(\eta,\mathcal{D}_{n})=H(\theta,\mathcal{D}_{n}\times\mathcal{T})\) and we have

\[0\leqslant H(\theta,\mathcal{D}_{n}\times\mathcal{D}_{n})-H(\eta,\mathcal{D}_{n})=H(\theta,\mathcal{D}_{n}\times\mathcal{D}_{n}|\mathcal{D}_{n}\times\mathcal{T})\ll(\varepsilon n+1).\]

The last inequality is due to (3.1) and the fact that once the position of \(x\) with respect to \(\mathcal{D}_{n}\) is known, the position of \(y\in B(x,2^{-(1-\varepsilon)n})\) with respect to \(\mathcal{D}_{n}\) has only \(1+2\lceil 2^{\varepsilon n}\rceil\) possibilities. Similarly,

\[0\leqslant H(\theta,\mathcal{D}_{n}\times\mathcal{D}_{n})-H(\zeta,\mathcal{D}_{n})\ll(\varepsilon n+1).\]

The conclusion follows. 

**Lemma 6.3**.: _Assume that \(-\lambda\alpha<h_{\mathrm{RW}}(\mu).\) Then there are constants \(c,C>0,\) a closed interval \(J\subset\mathbb{S}^{1}\) centered at a point of \(\operatorname{supp}\nu,\) a sequence of positive numbers \(\varepsilon_{n}\to 0\), and, for infinitely many integers \(n\in\mathbb{N},\) subsets \(T_{n}\subset T_{\mu}\) satisfying_

1. \(\#T_{n}\geqslant 2^{cn},\)
2. _for every_ \(f\in T_{n},\) \(f^{\prime}|_{J}\in[2^{(\lambda-\varepsilon_{n})n},2^{(\lambda+\varepsilon_{n})n}]\) _and_ \(\widetilde{\varkappa}(f,J)\leqslant C,\)
3.
_the intervals_ \(f(J),\) \(f\in T_{n},\) _all fall in a common interval of length at most_ \(2^{(\lambda+\varepsilon_{n})n}.\)

Proof.: By the assumption, we can take \(c>0\) such that \(-\lambda\alpha+10lc<h_{\mathrm{RW}}(\mu),\) where \(l=\max\left\{\#\,\mathrm{supp}\,\mu,-\lambda\right\}.\) By concavity (3.2), for all \(n\in\mathbb{N},\)

\[\frac{1}{r}\sum_{x\in\underline{x}}H(\mu^{*n}*\delta_{x},\mathcal{D}_{-\lambda n})\leqslant H(\mu^{*n}*u_{\underline{x}},\mathcal{D}_{-\lambda n}).\]

Thus, by Lemma 6.1, there is \(x\in\underline{x}\) such that

\[\liminf_{n\to+\infty}\frac{1}{n}H(\mu^{*n}*\delta_{x},\mathcal{D}_{-\lambda n})\leqslant-\lambda\alpha. \tag{6.2}\]

Recall Lemma 4.18. Let \(\varepsilon^{\prime}>0\) be small so that

\[\forall i\in[d],\quad\nu_{i}^{-}(B(x,3\varepsilon^{\prime}))\leqslant\frac{c}{dr}.\]

Let

\[J=[x-\varepsilon^{\prime},x+\varepsilon^{\prime}].\]

Let \(\Sigma_{\varepsilon}\subset\Sigma\), \(C=C(\varepsilon)\) and \(U(\omega)=U(\omega,\varepsilon)\) be given by Proposition 4.22 applied to \(\varepsilon=\min\left\{c,\varepsilon^{\prime}\right\}\). By the uniform convergence and the uniform distortion bound on \(\Sigma_{\varepsilon},\) there exist positive constants \(\varepsilon_{n}\to 0\) such that for every \(\omega\in\Sigma_{\varepsilon}\) and every \(y\in U(\omega)\cap W^{s}(\omega,0),\)

\[\left|\frac{1}{n}\log(f_{\omega}^{n})^{\prime}(y)-\lambda\right|\leqslant\varepsilon_{n}.\]

Since \(\underline{x}\in\mathrm{supp}\,\underline{\nu},\) for \(\mathbf{P}\)-almost every \(\omega\in\Sigma,\) we have \(\underline{x}\subset W^{s}(\omega,0)\). In particular, \(J\cap W^{s}(\omega,0)\neq\varnothing\). Thus, for \(\omega\in\Sigma_{\varepsilon},\) if \(J\not\subset U(\omega),\) then the \(2\varepsilon^{\prime}\)-neighborhood of \(J\) must contain a point of \(\Xi(\omega).\) Thus,

\[\mathbf{P}\left\{\omega\in\Sigma_{\varepsilon}:J\not\subset U(\omega)\right\}\leqslant\mathbf{P}\left\{\omega:\Xi(\omega)\cap B(x,3\varepsilon^{\prime})\neq\varnothing\right\}\leqslant r\sum_{j\in[d]}\nu_{j}^{-}(B(x,3\varepsilon^{\prime}))\leqslant c.\]

It follows that the set

\[\Sigma^{\prime}=\left\{\omega\in\Sigma_{\varepsilon}:J\subset U(\omega)\cap W^{s}(\omega,0)\right\}\]

satisfies \(\mathbf{P}(\Sigma^{\prime})\geqslant 1-2c\). For every \(n\in\mathbb{N}\), define \(\Sigma^{\prime}_{n}\coloneqq\left\{f_{\omega}^{n}:\omega\in\Sigma^{\prime}\right\};\) then \(\mu^{*n}(\Sigma^{\prime}_{n})\geqslant 1-2c\). Denote \(\tau_{n}=1-\mu^{*n}(\Sigma^{\prime}_{n})\leqslant 2c\) and let

\[\mu_{n}=\frac{1}{1-\tau_{n}}\mu^{*n}|_{\Sigma^{\prime}_{n}}.\]

By the concavity and almost convexity of the entropy (3.2), we have

\[|H(\mu_{n})-H(\mu^{*n})|\leqslant 1+\tau_{n}(\#\operatorname{supp}\mu)n\leqslant 1+2lcn\]

and

\[|H(\mu_{n}\ast\delta_{x},\mathcal{D}_{-\lambda n})-H(\mu^{*n}\ast\delta_{x},\mathcal{D}_{-\lambda n})|\leqslant 1+\tau_{n}(-\lambda)n\leqslant 1+2lcn.\]

Remembering \(\frac{1}{n}H(\mu^{*n})\to h_{\mathrm{RW}}(\mu)\), the choice of \(c\) and (6.2), we have

\[H(\mu_{n})\geqslant H(\mu_{n}\ast\delta_{x},\mathcal{D}_{-\lambda n})+cn\]

for infinitely many \(n\). Define \(e_{x}:\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\to\mathbb{S}^{1}\) to be the map \(f\mapsto f(x)\); then \(e_{x}^{-1}\mathcal{D}_{-\lambda n}\) is a finite partition of \(\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\).
We have

\[H(\mu_{n}|e_{x}^{-1}\mathcal{D}_{-\lambda n})=H(\mu_{n})-H(\mu_{n},e_{x}^{-1}\mathcal{D}_{-\lambda n})=H(\mu_{n})-H(\mu_{n}\ast\delta_{x},\mathcal{D}_{-\lambda n})\geqslant cn.\]

This implies the existence of \(I\in\mathcal{D}_{-\lambda n}\) such that the set \(T_{n}=\left\{f\in\Sigma^{\prime}_{n}:f(x)\in I\right\}\) has cardinality \(\#T_{n}\geqslant 2^{cn}.\) By the definition of \(\Sigma^{\prime}_{n}\), for any \(f\in T_{n}\),

1. \(\widetilde{\varkappa}(f,J)\leqslant C\), and
2. \(\forall y\in J\), \(\log f^{\prime}(y)\in[(\lambda-\varepsilon_{n})n,(\lambda+\varepsilon_{n})n]\).

Moreover, since \(f(x)\in I\), we conclude that \(f(J)\) is contained in the \(2^{(\lambda+\varepsilon_{n})n}\)-neighborhood of \(I\). After replacing \(\varepsilon_{n}\) by \(\varepsilon_{n}+2/n\), we obtain that the \(f(J)\), \(f\in T_{n}\), fall in a common interval of length at most \(2^{(\lambda+\varepsilon_{n})n}\). 

**Lemma 6.4**.: _Let \(c,C,J,\varepsilon_{n},T_{n}\) be as in Lemma 6.3. Let \(J^{\prime}=\frac{1}{2}J\) be the closed interval with the same center as \(J\) but half the length. Then there exist \(g_{n}\neq f_{n}\in T_{n}\) such that \(f_{n}(J^{\prime})\subset g_{n}(J)\) and the sequence \(g_{n}^{-1}f_{n}\) converges to the identity on \(J^{\prime}\) in the \(C^{1}\)-topology._

Proof.: Define \(k=k(n)\) for large \(n\) to be the greatest integer such that

\[4k+1<\frac{cn}{\lceil 2\varepsilon_{n}n+\sqrt{n}\rceil+\log(\lceil 2\varepsilon_{n}n^{2}\rceil+1)}.\]

Since \(\varepsilon_{n}\to 0\), \(k(n)\to+\infty\) as \(n\to+\infty\). Let \(y_{0},y_{1},\cdots,y_{4k}\in J\) be evenly spaced points such that \(J=[y_{0},y_{4k}]\). Then \(d(y_{j},y_{j+1})\leqslant 1/(4k).\) For every \(f\in T_{n}\), each \(f(y_{i})\) takes value in a fixed interval of length \(2^{(\lambda+\varepsilon_{n})n}\) and each \(\log f^{\prime}(y_{i})\) takes value in \([(\lambda-\varepsilon_{n})n,(\lambda+\varepsilon_{n})n]\). Arrange the vectors

\[(f(y_{0}),\cdots,f(y_{4k}),\log f^{\prime}(y_{0}),\cdots,\log f^{\prime}(y_{4k})),\quad f\in T_{n},\]

into boxes of size \(2^{(\lambda-\varepsilon_{n})n-\sqrt{n}}\times\cdots\times 2^{(\lambda-\varepsilon_{n})n-\sqrt{n}}\times\frac{1}{n}\times\cdots\times\frac{1}{n}\). The choice of \(k\) guarantees that

\[\left(2^{\lceil 2\varepsilon_{n}n+\sqrt{n}\rceil}\right)^{4k+1}\left(\lceil 2\varepsilon_{n}n^{2}\rceil+1\right)^{4k+1}<2^{cn}.\]

Thus, by the pigeonhole principle there exist \(f\neq g\in T_{n}\) such that

\[\forall i=0,\ldots,4k,\quad d(f(y_{i}),g(y_{i}))\leqslant 2^{(\lambda-\varepsilon_{n})n-\sqrt{n}}\quad\text{and}\quad|\log f^{\prime}(y_{i})-\log g^{\prime}(y_{i})|\leqslant\frac{1}{n}.\]

The endpoints of \(J^{\prime}\) are exactly \(y_{k}\) and \(y_{3k}.\) Note that, for \(n\) large enough,

\[f(y_{k})\geqslant g(y_{k})-2^{(\lambda-\varepsilon_{n})n-\sqrt{n}}\geqslant g(y_{0})+\frac{|J|}{4}\min_{y\in J}g^{\prime}(y)-2^{(\lambda-\varepsilon_{n})n-\sqrt{n}}\geqslant g(y_{0}).\]

Similarly \(f(y_{3k})\leqslant g(y_{4k})\). It follows that

\[f(J^{\prime})\subset g(J).\]

In what follows, we estimate the \(C^{1}\) distance between \(g^{-1}f\) and \(\operatorname{id}\) on \(J^{\prime}\).
First, since \(g^{-1}\) maps \(f(J^{\prime})\) into \(J\) and \(g^{\prime}\geqslant 2^{(\lambda-\varepsilon_{n})n}\) on \(J\), the map \(g^{-1}\) is \(2^{(-\lambda+\varepsilon_{n})n}\)-Lipschitz on \(f(J^{\prime}).\) Hence for every \(i=k,\ldots,3k,\)

\[d(g^{-1}f(y_{i}),y_{i})\leqslant 2^{(-\lambda+\varepsilon_{n})n}d(f(y_{i}),g(y_{i}))\leqslant 2^{-\sqrt{n}}.\]

Then, using \(\widetilde{\varkappa}(g,J)\leqslant C\),

\[|\log(g^{-1}f)^{\prime}(y_{i})|\leqslant\left|\log\frac{f^{\prime}(y_{i})}{g^{\prime}(y_{i})}\right|+\left|\log\frac{g^{\prime}(y_{i})}{g^{\prime}(g^{-1}fy_{i})}\right|\leqslant\frac{1}{n}+Cd(g^{-1}f(y_{i}),y_{i})\leqslant\frac{1}{n}+C2^{-\sqrt{n}}.\]

More generally, let \(y\in J^{\prime}\) be arbitrary. There is \(i\in\{k,\ldots,3k\}\) such that \(y\in[y_{i},y_{i+1}]\). Then \(g^{-1}f(y)\in[g^{-1}f(y_{i}),g^{-1}f(y_{i+1})]\), both points being contained in the \(2^{-\sqrt{n}}\)-neighborhood of \([y_{i},y_{i+1}]\). Hence

\[d(y,g^{-1}f(y))\leqslant\frac{1}{4k}+2^{-\sqrt{n}+1}.\]

The logarithm of the derivative at \(y\) can be bounded by comparison with \(\log(g^{-1}f)^{\prime}(y_{i})\):

\[\left|\log\frac{(g^{-1}f)^{\prime}(y)}{(g^{-1}f)^{\prime}(y_{i})}\right|\leqslant\left|\log\frac{f^{\prime}(y)}{f^{\prime}(y_{i})}\right|+\left|\log\frac{g^{\prime}(g^{-1}fy_{i})}{g^{\prime}(g^{-1}fy)}\right|\leqslant C\frac{1}{4k}+C\left(\frac{1}{2k}+2^{-\sqrt{n}+1}\right).\]

Now, for each \(n\), construct \((f_{n},g_{n})\) in this way. Since \(k\to+\infty\) as \(n\to+\infty\), we conclude that \(g_{n}^{-1}f_{n}\) tends to the identity on \(J^{\prime}\) in the \(C^{1}\)-topology. 

Proof of Theorem 2.10.: By the properties of the Furstenberg entropy and Theorem 2.4, we have

\[\alpha=\dim\nu=-\frac{h_{\mathrm{F}}(\mu,\nu)}{\lambda}=-\frac{h_{\mathrm{F}}(\mu^{*n},\nu)}{n\lambda}\leqslant-\frac{H(\mu^{*n})}{n\lambda}\]

for every positive integer \(n.\) Letting \(n\to+\infty,\) we obtain that \(\alpha\leqslant-h_{\mathrm{RW}}(\mu)/\lambda.\) Assume for a contradiction that \(-\lambda\alpha<h_{\mathrm{RW}}(\mu)\). Then Lemma 6.3 and Lemma 6.4 lead to a contradiction with the assumption of local discreteness. 

### Dimension formula on the interval

The purpose of this subsection is to prove Theorem 2.8. The case of random walks on the circle was already established in the previous subsection. In order to prove the result for intervals, we view the interval as a part of a circle and use the previously established proof technique to obtain an intermediate version of the theorem.

Let \(\mu\) be a finitely supported probability measure on \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1}).\) Let \(I\subset\mathbb{S}^{1}\) be a closed interval or the whole circle which is preserved by every element of \(\operatorname{supp}\mu,\) that is, \(f(I)\subset I\) for every \(f\in\operatorname{supp}\mu.\) We define the random walk entropy restricted to \(I\) as

\[h_{\mathrm{RW},I}(\mu)=\lim_{n\to\infty}\frac{1}{n}H(\mu^{*n}|_{I}),\]

where \(\mu^{*n}|_{I}\) denotes the push-forward of \(\mu^{*n}\) under the restriction map \(f\mapsto f|_{I}\in C^{2}_{+}(I,I).\)

**Proposition 6.5**.: _Let \(\mu\) be a finitely supported probability measure on \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) such that \(\operatorname{supp}\mu\) does not preserve any common probability measure on \(\mathbb{S}^{1}\). Let \(I\subset\mathbb{S}^{1}\) be a closed interval or the whole circle which is preserved by every element of \(\operatorname{supp}\mu,\) that is, \(f(I)\subset I\) for every \(f\in\operatorname{supp}\mu.\) Let \(\nu\) be an ergodic \(\mu\)-stationary measure with \(\operatorname{supp}\nu\subset I.\) Then \(\nu\) is exact dimensional and_

1.
_either_ \(|\lambda(\mu,\nu)|\geqslant h_{\mathrm{RW},I}(\mu)\) _and_ \(\dim\nu=\frac{h_{\mathrm{RW},I}(\mu)}{|\lambda(\mu,\nu)|},\)__ 2. _or there exist a closed interval_ \(J\subset I\) _and two sequences of diffeomorphisms_ \(\{g_{n}\},\{f_{n}\}\subset T_{\mu}\) _with_ \(g_{n}|_{I}\neq f_{n}|_{I}\) _and_ \(f_{n}(J)\subset g_{n}(I),\) _such that_ \(g_{n}^{-1}f_{n}\) _tends to_ \(\operatorname{id}\) _on_ \(J\) _in the_ \(C^{1}\)_-topology._ _Remark 6.6_.: The case where \(h_{\operatorname{RW}}(\mu)>h_{\operatorname{RW},I}(\mu)\) can only occur when there exist \(g\neq f\in T_{\mu}\) such that \(g|_{I}=f|_{I}\). Then, for every positive integer \(n\), \((g^{-1}f)^{n}|_{I}\) is the identity, which implies that the group generated by \(\operatorname{supp}\mu\) is locally non-discrete in \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\). Therefore, this proposition indeed covers Theorem 2.10. _Remark 6.7_.: In general, \(I\) can be replaced by a finite union of closed intervals which are preserved by \(\operatorname{supp}\mu\). The proof remains the same. This version seems more useful since there may not exist a subinterval preserved by \(\operatorname{supp}\mu\) in the case when \(r>1\). Proof.: We only need to make a slight adaptation to Lemma 6.3. Specifically, we replace the assumption \(-\lambda\alpha<h_{\operatorname{RW}}(\mu)\) with \(-\lambda\alpha<h_{\operatorname{RW},I}(\mu)\). During the proof, we replace \(\mu_{n}\) and \(\mu^{*n}\) with \(\mu_{n}|_{I}\) and \(\mu^{*n}|_{I}\), respectively. The choice of \(\underline{x}=\{x_{1},\cdots,x_{r}\}\in\operatorname{supp}\underline{\nu}\) implies that \(x_{1},\cdots,x_{r}\in\operatorname{supp}\nu\subset I\). Thus, each \(\mu^{*n}\ast u_{\underline{x}}\) is still supported on \(I\), and we can deduce the adapted version of Lemma 6.3. Specifically, there exist an interval \(J\subset I\) centered at a point in \(\operatorname{supp}\nu\), positive numbers \(\varepsilon_{n}\to 0\) and subsets \(T_{n}\subset T_{\mu}\) for infinitely many \(n\) satisfying conditions (1), (2), (3) of Lemma 6.3. Next, we apply Lemma 6.4. Replacing \(J\) by \(\frac{1}{2}J,\) we obtain the desired conclusion. **Lemma 6.8**.: _Let \(I\) be a closed subinterval of \(\mathbb{S}^{1}\) and \(\mu\) be a finitely supported probability measure on \(C^{2}_{+}(I,I)\) such that \(T_{\mu}\) does not preserve any probability measure on \(I.\) Then there exists a finitely supported probability measure \(\widetilde{\mu}\) supported on \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) such that_ 1. \(T_{\widetilde{\mu}}\) _does not preserve any probability measure on_ \(\mathbb{S}^{1},\)__ 2. \(I\) _is preserved by every element in_ \(\operatorname{supp}\widetilde{\mu},\) _and_ 3. \(\widetilde{\mu}|_{I}=\mu.\)__ Proof.: Take a closed interval \(J\) on \(\mathbb{S}^{1}\) disjoint from \(I.\) Take \(f_{1},f_{2}\in\operatorname{supp}\mu\) such that \(f_{1}\) does not preserve the left endpoint \(x_{-}\) of \(I\) and \(f_{2}\) does not preserve the right endpoint \(x_{+}\) of \(I\) (it is possible that \(f_{1}=f_{2}\)).
Write \(\mathbb{S}^{1}=I\sqcup U_{1}\sqcup J\sqcup U_{2}\), where \(x_{-}\) is an endpoint of \(U_{1}\) and \(x_{+}\) is an endpoint of \(U_{2}.\) Take \(\widetilde{f}_{1}\in\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) such that \(\widetilde{f}_{1}(U_{1}\cup J)\subset I\) and \(\widetilde{f}_{1}|_{I}=f_{1}\), then take \(\widetilde{f}_{2}\in\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) such that \(\widetilde{f}_{2}(J\cup U_{2})\subset I\) and \(\widetilde{f}_{2}|_{I}=f_{2}.\) Then any common invariant probability measure of \(\widetilde{f}_{1}\) and \(\widetilde{f}_{2}\) must be supported on \(I.\) For every other element \(f\in\operatorname{supp}\mu\), extend \(f\) to an arbitrary \(\widetilde{f}\in\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\). Writing \(\mu=\sum_{i=1}^{k}p_{i}\delta_{f_{i}}\), it suffices to take \(\widetilde{\mu}=\sum_{i=1}^{k}p_{i}\delta_{\widetilde{f}_{i}}\). Proof of Theorem 2.8.: For the case of \(\mathbb{S}^{1},\) it follows by taking \(I=\mathbb{S}^{1}\) in Proposition 6.5. For the case of an interval, we regard \(I\) as a subinterval of \(\mathbb{S}^{1}.\) The statement follows by combining Lemma 6.8 and Proposition 6.5. ## 7 Approximation with a uniformly hyperbolic subsystem ### Hyperbolic elements The construction of hyperbolic fixed points and hyperbolic elements in sub-(semi)groups of \(\operatorname{Diff}(\mathbb{S}^{1})\) dates back to Sacksteder's Theorem [70] for \(C^{2}\) pseudo-groups. Deroin-Kleptsyn-Navas [22, Théorème F] showed the existence of hyperbolic elements in subgroups of \(\operatorname{Diff}_{+}^{1}(\mathbb{S}^{1})\) that do not preserve any probability measure on \(\mathbb{S}^{1}.\) Here, we prove similar results but in the context of sub-semigroups and of \(C^{2}\) regularity. Similar to the proof in [22], we use random walks to find hyperbolic elements. With the description of the random walks given by Theorem 4.12, the proof is rather straightforward. Recall that a fixed point \(x\in\mathbb{S}^{1}\) of a diffeomorphism \(f\in\operatorname{Diff}_{+}(\mathbb{S}^{1})\) is hyperbolic if \(f^{\prime}(x)\neq 1\). **Lemma 7.1**.: _Let \(\mu\), \(d\) and \(r\) be as in Theorem 4.12. Then there is an element in \(T_{\mu}\) having exactly \(2dr\) fixed points in \(\mathbb{S}^{1}\), all of which are hyperbolic._ In fact, in the proof of Lemma 7.1 we can "lock" the positions of the attracting and repelling fixed points; this is the content of the next lemma. We follow the notation of Theorem 4.12. Let \(\underline{\nu}^{+}=\Pi_{*}\mathbf{P}\) and \(\underline{\nu}^{-}=\Xi_{*}\mathbf{P}\). These are probability measures on \(\mathbb{S}_{dr}\), the space of all subsets of \(\mathbb{S}^{1}\) of \(dr\) elements. For a subset \(A\subset\mathbb{S}^{1}\) and \(\rho>0\) write \[A^{(\rho)}=\bigcup_{a\in A}B(a,\rho)\] for the \(\rho\)-neighborhood of \(A\) in \(\mathbb{S}^{1}\). **Lemma 7.2**.: _Let \(A\in\operatorname{supp}\underline{\nu}^{+}\) and \(R\in\operatorname{supp}\underline{\nu}^{-}\). Assume that \(A\cap R=\varnothing\) and that \(A\cup R\) is \(3\rho\)-separated for some \(\rho>0\). Then there is \(f\in T_{\mu}\) such that_ \[f(\mathbb{S}^{1}\setminus R^{(\rho)})\subset A^{(\rho)}, \tag{7.1}\] _it preserves each connected component of \(A^{(\rho)}\), that is,_ \[\forall a\in A,\quad f(B(a,\rho))\subset B(a,\rho), \tag{7.2}\] _and_ \[f^{\prime}|_{\mathbb{S}^{1}\setminus R^{(\rho)}}<1\quad\text{and}\quad(f^{-1 })^{\prime}|_{\mathbb{S}^{1}\setminus A^{(\rho)}}<1.
\tag{7.3}\] Proof.: Consider \[\Sigma_{0}=\{\,\omega\in\Sigma:\underline{d}(\Pi(\omega),A)<\rho/2\text{ and } \underline{d}(\Xi(\omega),R)<\rho/2\,\}\,. \tag{7.4}\] The two conditions defining \(\Sigma_{0}\) are independent because \(\Pi(\omega)\) depends only on \(\omega^{-}\) and \(\Xi(\omega)\) only on \(\omega^{+}\). Thus, since \(A\in\operatorname{supp}\underline{\nu}^{+}\) and \(R\in\operatorname{supp}\underline{\nu}^{-}\), we have \(\mathbf{P}(\Sigma_{0})>0\). By Theorem 4.12 (5), the elements of \(A\) and the elements of \(R\) alternate on \(\mathbb{S}^{1}\). By Theorem 4.7, the closed sets \(\operatorname{supp}\nu_{i}\), \(i\in[d]\), are disjoint. Thus we can partition \(A=\bigsqcup_{i\in[d]}A_{i}\) with \(A_{i}=A\cap\operatorname{supp}\nu_{i}\). Moreover, if \(\rho>0\) is small enough, then the condition \(\underline{d}(\Pi(\omega),A)<\rho/2\) implies that for all \(i\in[d]\), \(\underline{d}(\Pi(\omega,i),A_{i})<\rho/2\). Thus, \[\forall\omega\in\Sigma_{0},\,\forall i\in[d],\quad\underline{d}(\Pi(\omega,i),A_{i})<\rho/2. \tag{7.5}\] By Proposition 4.22 there is a subset \(\Sigma^{\prime}\subset\Sigma\) and constants \(c,C>0\) such that \(\mathbf{P}(\Sigma^{\prime})\geqslant 1-\mathbf{P}(\Sigma_{0})/2\) and that for all \(\omega\in\Sigma^{\prime}\), \[\forall n\in\mathbb{N},\,\forall x\in\mathbb{S}^{1}\setminus\Xi(\omega)^{( \rho/2)},\quad(f^{n}_{\omega})^{\prime}(x)\leqslant C2^{-cn}. \tag{7.6}\] By Proposition 4.22 applied to \(F^{-1}\), similarly, there is a subset \(\Sigma^{\prime\prime}\subset\Sigma\) such that \(\mathbf{P}(\Sigma^{\prime\prime})\geqslant 1-\mathbf{P}(\Sigma_{0})/2\) and that for all \(\omega\in\Sigma^{\prime\prime}\), \[\forall n\in\mathbb{N},\,\forall x\in\mathbb{S}^{1}\setminus\Pi(\omega)^{( \rho/2)},\quad(f^{-n}_{\omega})^{\prime}(x)\leqslant C2^{-cn}. \tag{7.7}\] In particular, writing \(\Sigma_{1}=\Sigma_{0}\cap\Sigma^{\prime}\) and \(\Sigma_{2}=\Sigma_{0}\cap\Sigma^{\prime\prime}\), we have \(\mathbf{P}(\Sigma_{1})>0\) and \(\mathbf{P}(\Sigma_{2})>0\). Fix an \(a_{0}\in A_{0}\) for the rest of the proof of Lemma 7.2. Let \(U_{0,0}\) denote \(B(a_{0},\rho)\). By (7.5), for every \(\omega\in\Sigma_{0}\) there is a unique element in \(\Pi(\omega,0)\cap U_{0,0}\), which we will denote by \(\Pi(\omega,0,0)\). Then (recall Theorem 4.12) \[m_{0}^{+}(\Sigma_{1}\times U_{0,0})=\int_{\Sigma_{1}}u_{\Pi(\omega,0)}(U_{0, 0})\,\mathrm{d}\mathbf{P}(\omega)=\frac{1}{r}\mathbf{P}(\Sigma_{1})>0.\] Similarly, \(m_{0}^{+}(\Sigma_{2}\times U_{0,0})>0\). By Birkhoff's ergodic theorem, for \(m_{0}^{+}\)-almost every \((\omega,x)\in\Sigma_{1}\times U_{0,0}\), there are infinitely many \(n\in\mathbb{N}\) such that \(F^{n}(\omega,x)\in\Sigma_{2}\times U_{0,0}\). In other words, for \(\mathbf{P}\)-almost every \(\omega\in\Sigma_{1}\), there are infinitely many \(n\in\mathbb{N}\) such that \[\sigma^{n}\omega\in\Sigma_{2}\quad\text{and}\quad f^{n}_{\omega}\Pi(\omega,0, 0)\in U_{0,0}. \tag{7.8}\] Let \(n\) be an integer large enough so that (7.8) holds and \(C2^{-cn}<\rho/2\). We claim that \(f^{n}_{\omega}\) satisfies the desired properties. Indeed, since \(\omega\in\Sigma_{0}\), we have \(\Xi(\omega)\subset R^{(\rho/2)}\) and hence \(\mathbb{S}^{1}\setminus R^{(\rho)}\subset\mathbb{S}^{1}\setminus\Xi(\omega)^ {(\rho/2)}\). Also, \(\underline{d}(\Pi(\omega),A)<\rho/2\). By the separation of \(A\cup R\), \(\Pi(\omega)\subset A^{(\rho)}\subset\mathbb{S}^{1}\setminus R^{(\rho)}\).
For a subset \(X\subset\mathbb{S}^{1}\), we denote by \(\pi_{0}(X)\) the set of its connected components. A subset \(\underline{x}\subset\mathbb{S}^{1}\) is called a _representative of \(\pi_{0}(X)\)_ if \(\underline{x}\) has exactly one element in each of the connected components of \(X\). Then \(\Pi(\omega)\) is a representative of \(\pi_{0}(\mathbb{S}^{1}\setminus R^{(\rho)})\). Thus, by \(\omega\in\Sigma^{\prime}\) and (7.6), for any \(\underline{x}\in\mathbb{S}_{dr}\) which is a representative of \(\pi_{0}(\mathbb{S}^{1}\setminus R^{(\rho)})\), \[\underline{d}(f^{n}_{\omega}\underline{x},\Pi(\sigma^{n}\omega))=\underline{d }(f^{n}_{\omega}\underline{x},f^{n}_{\omega}\Pi(\omega))\leqslant C2^{-cn}< \rho/2.\] But \(\sigma^{n}\omega\in\Sigma_{0}\), hence \(\underline{d}(\Pi(\sigma^{n}\omega),A)<\rho/2\). By the triangle inequality, \[\underline{d}(f^{n}_{\omega}\underline{x},A)<\rho,\] showing \(f^{n}_{\omega}(\mathbb{S}^{1}\setminus R^{(\rho)})\subset A^{(\rho)}\) and that \((f^{n}_{\omega})_{*}\colon\pi_{0}(\mathbb{S}^{1}\setminus R^{(\rho)})\to\pi_ {0}(A^{(\rho)})\) is bijective. It follows that \((f^{n}_{\omega})_{*}\colon\pi_{0}(A^{(\rho)})\to\pi_{0}(A^{(\rho)})\) is bijective. Moreover, it preserves the cyclic order and maps \(U_{0,0}\in\pi_{0}(A^{(\rho)})\) to itself. Hence \((f^{n}_{\omega})_{*}\colon\pi_{0}(A^{(\rho)})\to\pi_{0}(A^{(\rho)})\) is the identity map. Also, from \(\omega\in\Sigma^{\prime}\) and (7.6), we obtain \[\forall x\in\mathbb{S}^{1}\setminus R^{(\rho)},\quad(f^{n}_{\omega})^{\prime }(x)\leqslant C2^{-cn}<1.\] Similarly, from \(\sigma^{n}\omega\in\Sigma_{2}\) and (7.7), we obtain \(\mathbb{S}^{1}\setminus A^{(\rho)}\subset\mathbb{S}^{1}\setminus\Pi(\sigma^{n }\omega)^{(\rho/2)}\) and then \[\forall x\in\mathbb{S}^{1}\setminus A^{(\rho)},\quad(f^{-n}_{\sigma^{n} \omega})^{\prime}(x)\leqslant C2^{-cn}<1.\] This completes the proof because \((f^{n}_{\omega})^{-1}=f^{-n}_{\sigma^{n}\omega}\). Proof of Lemma 7.1.: Let \(h\in T_{\mu}\) be the map given by applying Lemma 7.2 to arbitrary \(A\in\operatorname{supp}\underline{\nu}^{+}\) and \(R\in\operatorname{supp}\underline{\nu}^{-}\) with \(A\cap R=\varnothing\) and to a sufficiently small \(\rho>0\). The existence of such a pair \(A\) and \(R\) follows from the continuity of stationary measures, Lemma 4.18. For every \(a\in A\), by (7.2) and (7.3), \(h|_{B(a,\rho)}\) is a contraction on \(B(a,\rho)\), hence has a unique attracting fixed point in \(\overline{B(a,\rho)}\). Note that the condition (7.2) implies that \(h^{-1}\) preserves each of the connected components of \(R^{(\rho)}\). Thus, similarly, \(h\) has a unique repelling fixed point on each of the \(dr\) connected components of \(R^{(\rho)}\). Finally, \(h\) has no fixed points elsewhere because \(h(\mathbb{S}^{1}\setminus R^{(\rho)})\subset A^{(\rho)}\). ### Perfect pingpong pairs In this section, we define and construct perfect pingpong pairs in any finitely generated semigroup of \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) preserving no probability measure on \(\mathbb{S}^{1}\). Roughly speaking, a perfect pingpong pair is a pair of diffeomorphisms generating several independent pingpong dynamics on intervals of the circle. Recall the notion of attractors and repellors in Definition 4.20. A hyperbolic fixed point \(x\) of a diffeomorphism \(f\) is an attractor (resp. repellor) if and only if \(f^{\prime}(x)<1\) (resp. \(f^{\prime}(x)>1\)). **Definition 7.3**.: Let \(q\) be a positive integer.
A pair \((h_{1},h_{2})\subset\operatorname{Diff}_{+}^{1}(\mathbb{S}^{1})\) is called a _(\(q\)-)perfect pingpong pair_ if there are subsets \(U_{1}^{+},U_{2}^{+},U_{1}^{-},U_{2}^{-}\subset\mathbb{S}^{1}\) satisfying the following conditions. 1. Each \(h_{i}\) has exactly \(2q\) fixed points, all of which are hyperbolic. 2. For every attractor \(a_{1}\in\mathbb{S}^{1}\) of \(h_{1}\) there is an attractor \(a_{2}\in\mathbb{S}^{1}\) of \(h_{2}\) such that the segment \([a_{1},a_{2}]\) or \([a_{2},a_{1}]\) contains no other fixed points. The same holds for repellors. 3. \(U_{1}^{+}\) (resp. \(U_{2}^{+},U_{1}^{-},U_{2}^{-}\)) is an open neighborhood of the attractors of \(h_{1}\) (resp. \(h_{2},h_{1}^{-1},h_{2}^{-1}\)) made up of \(q\) disjoint open intervals. 4. The closures of \(U_{1}^{+},U_{2}^{+},U_{1}^{-},U_{2}^{-}\) are pairwise disjoint. 5. \(h_{1}(\mathbb{S}^{1}\setminus U_{1}^{-})\subset U_{1}^{+}\), \(h_{2}(\mathbb{S}^{1}\setminus U_{2}^{-})\subset U_{2}^{+}\), \(h_{1}^{-1}(\mathbb{S}^{1}\setminus U_{1}^{+})\subset U_{1}^{-}\), \(h_{2}^{-1}(\mathbb{S}^{1}\setminus U_{2}^{+})\subset U_{2}^{-}\). 6. \(h_{1}^{\prime}|_{\mathbb{S}^{1}\setminus U_{1}^{-}}<1\), \(h_{2}^{\prime}|_{\mathbb{S}^{1}\setminus U_{2}^{-}}<1\), \((h_{1}^{-1})^{\prime}|_{\mathbb{S}^{1}\setminus U_{1}^{+}}<1\), \((h_{2}^{-1})^{\prime}|_{\mathbb{S}^{1}\setminus U_{2}^{+}}<1\). The sets \(U_{i}^{\pm}\) are called pingpong cones of \((h_{1},h_{2})\). _Remark 7.4_.: The condition (2) is crucial. It can be equivalently formulated as follows: all \(2q\) fixed points of \(h_{1}\) are distinct from those of \(h_{2}\), and the total \(4q\) fixed points of \(h_{1}\) and \(h_{2}\) appear in cyclic order as follows: \[\text{attractor, attractor, repellor, repellor, attractor, attractor, repellor, repellor, }\cdots\text{, attractor, attractor, repellor, repellor}.\] Combined with the other conditions, we see that every two adjacent attractors are contained in an interval preserved by \(h_{1}\) and \(h_{2}\) and on which \((h_{1},h_{2})\) have pingpong dynamics. Similarly, every two adjacent repellors are contained in an interval preserved by \(h_{1}^{-1}\) and \(h_{2}^{-1}\) and on which \((h_{1}^{-1},h_{2}^{-1})\) have pingpong dynamics. _Example 7.5_.: Here is an example of a \(2\)-perfect pingpong pair. The red points represent attractors, and the blue points represent repellors. The black arcs denote the pingpong cones \(U_{i}^{\pm}\). The arrows inside the circle show the action of \(h_{1}\), and the arrows outside the circle represent the action of \(h_{2}\). Note that the fixed points of \(h_{1}\) and \(h_{2}\) are not necessarily arranged alternately. Therefore, there is some flexibility in the order of each pair of attractors or repellors. _Example 7.6_.: Here is an example of a pair of uniformly hyperbolic elements \((h_{1},h_{2})\) with a "wrong" order on the circle. We can obtain this example by replacing \(h_{1}\) by \(h_{1}^{-1}\) in the previous example. Let \(h_{3}=h_{2}h_{1}\) (the arrows outside the circle); then \(h_{3}\) has only one attractor and one repellor. This example shows that for a wrong arrangement, it is possible to find a further subsystem with fewer fixed points. _Example 7.7_.: Here is another example of a pair of uniformly hyperbolic elements \((h_{1},h_{2})\) with a "wrong" order. Let \(h_{3}=h_{2}h_{1}\) (the arrows outside the circle). Then \(h_{3}\) does not have any fixed points, which means that, in a sense, the semigroup generated by \((h_{1},h_{2})\) is not uniformly hyperbolic.
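To experiment with the arrangements in Examples 7.5–7.7, the following small numerical sketch may help. It is purely illustrative and not part of the argument: the displacement-field construction `hyp_map` and all parameters below are our own assumptions. It builds circle diffeomorphisms with two prescribed hyperbolic fixed points and counts the fixed points of a composition \(h_{2}h_{1}\) by locating sign changes of its displacement.

```python
import numpy as np

TWO_PI = 2 * np.pi

def hyp_map(a, r, eps=0.8):
    """Circle diffeomorphism (for |eps| < 2) whose only fixed points are a and r.
    The displacement eps*sin((t-a)/2)*sin((t-r)/2) is 2*pi-periodic, vanishes
    exactly at a and r, and the sign of its slope there decides attractor/repellor."""
    return lambda t: np.mod(t + eps * np.sin((t - a) / 2) * np.sin((t - r) / 2), TWO_PI)

def classify(f, t, h=1e-6):
    """'A' if t is an attracting fixed point of f (derivative < 1), else 'R'."""
    deriv = (np.mod(f(t + h) - f(t) + np.pi, TWO_PI) - np.pi) / h
    return "A" if deriv < 1 else "R"

def count_fixed_points(f, n=400000):
    """Count fixed points of f via sign changes of the displacement f(t) - t."""
    t = (np.arange(n) + 0.5) * TWO_PI / n
    d = np.mod(f(t) - t + np.pi, TWO_PI) - np.pi
    s = np.sign(d)
    ok = (np.abs(d) < 3.0) & (np.abs(np.roll(d, -1)) < 3.0)  # skip jumps through +-pi
    return int(np.sum((s * np.roll(s, -1) < 0) & ok))

h1 = hyp_map(0.0, np.pi)   # attractor at 0, repellor at pi
h2 = hyp_map(4.0, 5.0)     # attractor at 4, repellor at 5
h3 = lambda t: h2(h1(t))   # the composition h2 h1
print([classify(h1, p) for p in (0.0, np.pi)],
      [classify(h2, p) for p in (4.0, 5.0)])
print("fixed points of h2 h1:", count_fixed_points(h3))
```

Varying the positions of the fixed points of \(h_{2}\) relative to those of \(h_{1}\), one can observe how the fixed-point count of the composition depends on the cyclic order of attractors and repellors.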
These examples show that for a wrong order of the fixed points, the dynamics of the semigroup generated by \((h_{1},h_{2})\) may not be as clear. The main goal of this subsection is to prove the following result on the existence of perfect pingpong pairs. **Proposition 7.8**.: _Let \(T\subset\mathrm{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated semigroup without invariant probability measures on \(\mathbb{S}^{1}.\) Let \(\Delta\) be a minimal set of \(T.\) Given \(x\in\Delta\) and an open interval \(I\) containing \(x\) with sufficiently small length, there exists a perfect pingpong pair \((h_{1},h_{2})\subset T\) with pingpong cones \((U_{i}^{\pm})\) such that both \(U_{1}^{+}\) and \(U_{2}^{+}\) have a unique connected component contained in \(I\), and these connected components have non-empty intersections with \(\Delta.\)_ Proof.: Let \(\mu\) be the uniform measure on a finite generating set of \(T\) and consider the random walk it induces on \(\mathbb{S}^{1}\). Then \(T=T_{\mu}\) and \(\Delta\) supports an ergodic \(\mu\)-stationary measure \(\nu.\) Without loss of generality, we may assume that \(|I|\) is small enough so that for every \(i\in[d],\)\(\nu_{i}^{-}(I)<\frac{1}{2d}.\) Since \(I\) contains \(x\in\Delta=\mathrm{supp}\,\nu,\) there exist \(A_{1},A_{2}\in\mathrm{supp}\,\underline{\nu}^{+}\) with \(A_{1}\cap A_{2}=\varnothing\) such that both \(A_{1}\cap I\) and \(A_{2}\cap I\) are nonempty. Then we can choose \(R_{1},R_{2}\in\mathrm{supp}\,\underline{\nu}^{-}\) such that \(A_{1},A_{2},R_{1},R_{2}\) are pairwise disjoint and \(R_{1}\cap I=R_{2}\cap I=\varnothing.\) Take \(\varepsilon>0\) sufficiently small so that 1. \(A_{1}\cup A_{2}\cup R_{1}\cup R_{2}\) is \(3\varepsilon\)-separated and 2. both \(A_{1}^{(\varepsilon)}\) and \(A_{2}^{(\varepsilon)}\) have a connected component contained in \(I\). Let \(U_{i}^{+}=A_{i}^{(\varepsilon)}\) and \(U_{i}^{-}=R_{i}^{(\varepsilon)}\) for \(i=1,2.\) Lemma 7.2 applied twice gives rise to \(h_{1},h_{2}\in T_{\mu}\) such that for every \(i=1,2,\) 1. \(h_{i}(\mathbb{S}^{1}\setminus U_{i}^{-})\subset U_{i}^{+},\) 2. \(h_{i}^{\prime}|_{\mathbb{S}^{1}\setminus U_{i}^{-}}<1\) and \((h_{i}^{-1})^{\prime}|_{\mathbb{S}^{1}\setminus U_{i}^{+}}<1,\) 3. \(h_{i}\) preserves each connected component of \(\mathbb{S}^{1}\setminus U_{i}^{-}.\) It remains to verify Condition (2) of Definition 7.3. Note that the attractors and repellors of \(h_{1}\) and \(h_{2}\) have the same arrangement pattern as \(\pi_{0}(U_{1}^{+}),\)\(\pi_{0}(U_{2}^{+}),\)\(\pi_{0}(U_{1}^{-})\) and \(\pi_{0}(U_{2}^{-}),\) which have the same arrangement pattern as \(A_{1},\)\(A_{2},\)\(R_{1}\) and \(R_{2}\). Thus it suffices to show that for every \(a_{1}\in A_{1}\) there is \(a_{2}\in A_{2}\) such that between \(a_{1}\) and \(a_{2}\) there is no other element of \(A_{1}\cup A_{2}\cup R_{1}\cup R_{2}.\) Indeed, by Theorem 4.12(5) and the construction (7.4), for every \(i,j\in\{1,2\}\), the elements of \(A_{i}\) and those of \(R_{j}\) alternate in cyclic order on \(\mathbb{S}^{1}.\) By construction, the interval \(I\) contains a point \(a_{1}\in A_{1}\) and a point \(a_{2}\in A_{2}\) and no other elements of \(A_{1}\cup A_{2}\cup R_{1}\cup R_{2}\). Let \(a\) be the next element of \(A_{1}\cup A_{2}\) to the right of \(I\). Let \(i\in\{1,2\}\) be such that \(a\in A_{i}\); then for each \(j=1,2\), \(R_{j}\cap[a_{i},a]\) contains exactly one point.
This shows that necessarily, the next two elements of \(A_{1}\cup A_{2}\cup R_{1}\cup R_{2}\) to the right of \(a_{1}\) and \(a_{2}\) must be one from \(R_{1}\) and one from \(R_{2}\). Thus, by a simple induction, the claim is verified. Recalling that \(R_{1}\cap I=R_{2}\cap I=\varnothing\), the arrangement of the points in \(A_{1}\cup A_{2}\cup R_{1}\cup R_{2}\) implies that each \(U_{i}^{+}\) has a unique connected component that intersects \(I\) and is contained in \(I\). Therefore the last claim of Proposition 7.8 can be shown as follows. Since \(x\in I\), the iterates \(h_{1}^{n}x\) of \(x\) converge to the unique attracting fixed point of \(h_{1}\) in \(U_{1}^{+}\cap I\) as \(n\to+\infty\). Since \(\Delta\) is a \(T\)-invariant closed set and \(x\in\Delta\), this fixed point is contained in \(\Delta\). Hence this connected component intersects \(\Delta\). ### Proof of Theorem J In this subsection, we will prove Theorem J. Our proof relies on the fact that the approximation of the entropy (condition (2)) and the approximation of the dimension (condition (4)) can be obtained independently, and that the two approximations can be "merged" together. The first step of the proof is to find sufficiently many uniformly hyperbolic elements with fixed cones \(U_{i,j}\), cf. Proposition 7.10. This part is very similar to the technique of finding perfect pingpong pairs. The second step is to approximate the dimension, cf. Proposition 7.11. We apply the covering argument (an application of Lemma 3.7) to find sufficiently many elements with separated images and the desired contraction rates. Then these elements preserve minimal sets with dimensions approximating the original one. The third step is to "combine" the random walks corresponding to the last two steps, cf. Proposition 7.12. Finally, we complete the proof by estimating the Hausdorff dimension of the minimal sets given by an IFS with the open set condition. Throughout this subsection, as in Theorem J, let \(\mu\) be a finitely supported probability measure on \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) without a common invariant probability measure. We follow the notation of Theorem 4.12 and Theorem 4.15. #### Step 1. Finding uniformly hyperbolic elements Recall that \(\underline{\nu}^{+}=\Pi_{*}\mathbf{P}\) and \(\underline{\nu}^{-}=\Xi_{*}\mathbf{P}\). Fix \(A\in\operatorname{supp}\underline{\nu}^{+}\) and \(R\in\operatorname{supp}\underline{\nu}^{-}\) with \(A\cap R=\varnothing\). Take \(\rho>0\) small enough such that \(A\cup R\) is \(3\rho\)-separated. As in the proof of Lemma 7.2 we partition \(A\) into \[A=A_{1}\sqcup A_{2}\sqcup\cdots\sqcup A_{d}\] where \(A_{i}=A\cap\operatorname{supp}\nu_{i}^{+}\). Denote \(U^{+}:=A^{(\rho)}\), \(U^{-}:=R^{(\rho)}\) and, for each \(i\in[d]\), \(U_{i}:=A_{i}^{(\rho)}\). Each \(U_{i}\) has exactly \(r\) connected components. We name them \(U_{i,j}\), \(j\in[r]\), in cyclic order. Without loss of generality we can let \(U_{0,0}\) be the same as in the proof of Lemma 7.2. For every \(\omega\in\Sigma\), if \(\Pi(\omega,0)\cap U_{0,0}\) is not empty, then it contains at most one point, which we denote by \(\Pi(\omega,0,0)\). This notation is consistent with the one used in the proof of Lemma 7.2.
For \(n\in\mathbb{N}\) and \(\varepsilon>0\), let \(\mathcal{A}_{n,\varepsilon}\) be the set of \(f\in\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) such that \[\forall i\in[d],\,\forall j\in[r],\quad\overline{f(U_{i,j})}\subset U_{i,j} \tag{7.9}\] and \[\forall i\in[d],\,\forall x\in U_{i},\quad\frac{1}{n}\log f^{\prime}(x)\in[ \lambda_{i}^{+}-\varepsilon,\lambda_{i}^{+}+\varepsilon]. \tag{7.10}\] **Lemma 7.9**.: _There exist \(\Sigma_{1},\Sigma_{2}\subset\Sigma\) with \(\mathbf{P}(\Sigma_{1})>0\) and \(\mathbf{P}(\Sigma_{2})>0\), such that \(\Pi(\omega)\subset U^{+}\) for every \(\omega\in\Sigma_{1}\cup\Sigma_{2}.\) Moreover, there exists a positive integer \(n_{1}\) such that for all \(n\geqslant n_{1}\) and all \(\omega\in\Sigma_{1}\), if \(F^{n}(\omega,\Pi(\omega,0,0))\in\Sigma_{2}\times U_{0,0}\) then \(f^{n}_{\omega}\in\mathcal{A}_{n,\varepsilon}\)._ Proof.: Lemma 7.9 has been proven implicitly in Lemma 7.2. We only need to pay more attention to the exponents in (7.6). More precisely, let \(\Sigma_{0}\) be the set given by (7.4). We choose \(\varepsilon=\min\left\{\rho/2,\mathbf{P}(\Sigma_{0})/2\right\}\) and apply Proposition 4.22 to obtain the set \(\Sigma_{\varepsilon}\) of uniformly good words. Then we define \(\Sigma_{1}=\Sigma_{0}\cap\Sigma_{\varepsilon}\) and \(\Sigma_{2}=\Sigma_{0}\). It can be shown that these sets satisfy the desired properties. **Proposition 7.10**.: _For every \(\varepsilon>0,\) there is a constant \(c>0\) such that \(\mu^{*n}(\mathcal{A}_{n,\varepsilon})\geqslant c\) for infinitely many \(n>0\)._ Proof.: Let \(\Sigma_{1},\Sigma_{2}\) be the sets given by the previous lemma. Let \[\widetilde{\Sigma}_{i}=\{(\omega,\Pi(\omega,0,0)):\omega\in\Sigma_{i}\} \subset\Sigma\times\mathbb{S}^{1},\quad\forall i=1,2.\] Then \(m^{+}_{0}(\widetilde{\Sigma}_{i})=\frac{1}{r}\mathbf{P}(\Sigma_{i})>0\) for \(i=1,2.\) Recall that \(P:\Sigma\times\mathbb{S}^{1}\to\Sigma\) is the natural projection. By the previous lemma, for every sufficiently large \(n\) and every element \[\omega\in P(\widetilde{\Sigma}_{1}\cap F^{-n}\widetilde{\Sigma}_{2})\subset P (\widetilde{\Sigma}_{1}\cap F^{-n}(\Sigma_{2}\times U_{0,0})),\] we have \(f^{n}_{\omega}\in\mathcal{A}_{n,\varepsilon}.\) Since \(P_{*}m^{+}_{0}=\mathbf{P}\), we have \[\mu^{*n}(\mathcal{A}_{n,\varepsilon})\geqslant\mathbf{P}(P(\widetilde{\Sigma} _{1}\cap F^{-n}\widetilde{\Sigma}_{2}))\geqslant m^{+}_{0}(\widetilde{\Sigma} _{1}\cap F^{-n}\widetilde{\Sigma}_{2}).\] Then the conclusion follows from the fact that \(m^{+}_{0}(\widetilde{\Sigma}_{1}\cap F^{-n}\widetilde{\Sigma}_{2})>0\) for infinitely many \(n>0\), since \(m^{+}_{0}\) is an ergodic \(F\)-invariant measure. #### Step 2. Approximating the dimension **Proposition 7.11**.: _For every \(\varepsilon>0\) and every \((i,j)\in[d]\times[r]\), there is a sequence of subsets \(\mathcal{B}_{n}\subset\mathcal{A}_{n,\varepsilon}\), \(n\in\mathbb{N}\), such that_ \[\limsup_{n\to+\infty}\frac{1}{n}\log\#\mathcal{B}_{n}\geqslant(|\lambda^{+}_{i }|-\varepsilon)\dim_{\mathrm{H}}\nu^{+}_{i}\] _and for each \(n\in\mathbb{N}\), the intervals \(f(U_{i,j})\), \(f\in\mathcal{B}_{n}\), are pairwise disjoint._ Proof.: For \(n\in\mathbb{N}\), let \(\mathcal{E}_{n}=\{\,f(U_{i,j}):f\in\mathcal{A}_{n,\varepsilon}\,\}\). By the definition of \(\mathcal{A}_{n,\varepsilon}\), for all \(I\in\mathcal{E}_{n}\), \(|I|\leqslant 2^{(\lambda^{+}_{i}+\varepsilon)n}\).
Let \(\mathcal{B}_{n}\) be a maximal subset of \(\mathcal{A}_{n,\varepsilon}\) such that the intervals \(f(U_{i,j})\), \(f\in\mathcal{B}_{n}\), are pairwise disjoint. Applying Lemma 3.7 to the subcollections \(\widetilde{\mathcal{E}}_{n}=\{\,f(U_{i,j}):f\in\mathcal{B}_{n}\,\}\), we find \[\limsup_{n\to+\infty}\frac{1}{n}\log\#\mathcal{B}_{n}\geqslant(|\lambda^{+}_{i }|-\varepsilon)\dim_{\mathrm{H}}E,\] where \(E=\limsup_{n\to+\infty}\bigcup_{I\in\mathcal{E}_{n}}I\), that is, the set of points which belong to \(\bigcup_{I\in\mathcal{E}_{n}}I\) for infinitely many \(n\). Note that \(E\) is a Borel set. To conclude it remains to show that \(\nu^{+}_{i}(E)>0\). Let \(\Sigma_{0}\) be as in (7.4). For \(\omega\in\Sigma_{0}\), for each \((i,j)\in[d]\times[r]\), there is a unique element in \(\Pi(\omega)\cap U_{i,j}=\Pi(\omega,i)\cap U_{i,j}\). We denote it by \(\Pi(\omega,i,j)\). Let \(\Sigma_{1},\Sigma_{2}\subset\Sigma_{0}\) be as in Lemma 7.9. Let \[\Sigma^{\prime}_{2}=\left\{\,\omega\in\Sigma_{2}:\text{ there exist infinitely many }n\in\mathbb{N},\,F^{-n}(\omega,\Pi(\omega,0,0))\in\Sigma_{1}\times U_{0,0}\,\right\}.\] By the ergodicity of \(m^{+}_{0}\) (Theorem 4.12) with respect to \(F^{-1}\), we have \(\mathbf{P}(\Sigma^{\prime}_{2})=\mathbf{P}(\Sigma_{2})>0\). We claim that for all \(\omega\in\Sigma^{\prime}_{2}\), \(\Pi(\omega,i,j)\in E\). Indeed, for \(\omega\in\Sigma^{\prime}_{2}\) there are infinitely many \(n\in\mathbb{N}\) such that \(F^{-n}(\omega,\Pi(\omega,0,0))\in\Sigma_{1}\times U_{0,0}\). Then \(\sigma^{-n}\omega\in\Sigma_{1}\) and, by equivariance, \(f_{\omega}^{-n}\Pi(\omega,0,0)\in\Pi(\sigma^{-n}\omega,0)\cap U_{0,0}\), hence \(f_{\omega}^{-n}\Pi(\omega,0,0)=\Pi(\sigma^{-n}\omega,0,0)\) and \(F^{n}\big{(}\sigma^{-n}\omega,\Pi(\sigma^{-n}\omega,0,0)\big{)}\in\Sigma_{2} \times U_{0,0}\). By Lemma 7.9, \(f_{\sigma^{-n}\omega}^{n}\in\mathcal{A}_{n,\varepsilon}\). Then, \(\Pi(\omega,i,j)=f_{\sigma^{-n}\omega}^{n}\Pi(\sigma^{-n}\omega,i,j)\in f_{ \sigma^{-n}\omega}^{n}(U_{i,j})\in\mathcal{E}_{n}\). This proves the claim. Then by Theorem 4.12(2), \[\nu_{i}^{+}(E)\geqslant\frac{1}{r}\mathbf{P}\left\{\,\omega:\Pi(\omega,i)\cap E \neq\varnothing\,\right\}\geqslant\frac{1}{r}\mathbf{P}(\Sigma_{2}^{\prime})>0.\] Hence \(\dim_{\mathrm{H}}E\geqslant\dim_{\mathrm{H}}\nu_{i}^{+}\), which completes the proof of the proposition. #### Step 3. Completion of the proof Proposition 7.10 allows us to approximate the original system by a uniformly hyperbolic system with a "large set" in the base space \(\Sigma\). Proposition 7.11 says that we can approximate the original system by a uniformly hyperbolic system with a large dimension along the fiber \(\mathbb{S}^{1}\). It remains to put together these approximations.
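Before stating the proposition, here is a toy computation (with hypothetical numbers, only to illustrate the bookkeeping of the merging step): if Proposition 7.11 produces about \(2^{dn}\) elements with disjoint images at a scale \(n\), then the \((N/n)\)-fold product set consists of about \(2^{dN}\) elements at any common multiple \(N\) of the scales involved.

```python
from math import lcm

# Toy bookkeeping for the merging step. The numbers are hypothetical:
# n[i] stands for a scale n_{i,j} from Proposition 7.11 and rate[i] for
# dim(nu_i) * (|lambda_i| - 2*eps); count[i] ~ 2**(rate[i] * n[i]).
n = [6, 10, 15]
rate = [0.30, 0.25, 0.20]
count = [2 ** (r * m) for r, m in zip(rate, n)]

N = lcm(*n)                    # a common multiple of all the scales
for r, m, c in zip(rate, n, count):
    merged = c ** (N // m)     # cardinality of the product set B^{*(N/m)}
    assert abs(merged - 2 ** (r * N)) < 1e-6 * merged
    print(f"scale {m}: {c:.2f} elements -> {merged:.2f} = 2^({r}*{N}) at scale {N}")
```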
**Proposition 7.12**.: _For every \(\varepsilon>0,\) there exists a constant \(c^{\prime}>0\) such that there exist infinitely many positive integers \(N\) with the following properties._ 1. \(\mu^{*N}(\mathcal{A}_{N,\varepsilon})\geqslant c^{\prime}\)_._ 2. _For every_ \((i,j)\in[d]\times[r]\)_, there exist at least_ \(2^{\dim\nu_{i}^{+}(|\lambda_{i}^{+}|-2\varepsilon)N}\) _elements_ \(f\in\mathcal{A}_{N,\varepsilon}\) _such that the intervals_ \(f(U_{i,j})\) _are pairwise disjoint._ Proof.: Observe that for any \(n,N\in\mathbb{N}\), we have \(\mathcal{A}_{n,\varepsilon}^{*N}\subset\mathcal{A}_{nN,\varepsilon}\). Recall that each \(\nu_{i}^{+}\) is exact dimensional by Theorem 2.4 and hence \(\dim\nu_{i}^{+}=\dim_{\mathrm{H}}\nu_{i}^{+}.\) For each \((i,j)\in[d]\times[r]\), by Proposition 7.11, there is a positive integer \(n_{i,j}\) and a set \(\mathcal{B}_{i,j}\subset\mathcal{A}_{n_{i,j},\varepsilon}\) of cardinality \(\#\mathcal{B}_{i,j}\geqslant 2^{\dim\nu_{i}^{+}(|\lambda_{i}^{+}|-2\varepsilon)n_{i,j}}\) and such that \(f(U_{i,j})\), \(f\in\mathcal{B}_{i,j}\), are pairwise disjoint subintervals of \(U_{i,j}\). Then item (2) holds whenever \(N\) is a common multiple of the \(n_{i,j}\), \((i,j)\in[d]\times[r]\). Indeed, the product set \(\mathcal{B}_{i,j}^{*N/n_{i,j}}\) is a subset of \(\mathcal{A}_{N,\varepsilon}\). It has cardinality \[\#\mathcal{B}_{i,j}^{*N/n_{i,j}}=(\#\mathcal{B}_{i,j})^{N/n_{i,j}}\geqslant 2^{ \dim\nu_{i}^{+}(|\lambda_{i}^{+}|-2\varepsilon)N}\] and \(f(U_{i,j})\), \(f\in\mathcal{B}_{i,j}^{*N/n_{i,j}}\), are pairwise disjoint. Let \(N_{0}\) be the least common multiple of the \(n_{i,j}\), \((i,j)\in[d]\times[r]\). By Proposition 7.10, there is a constant \(c>0\) such that there are infinitely many \(n\in\mathbb{N}\) with \(\mu^{*n}(\mathcal{A}_{n,\varepsilon})\geqslant c.\) Then for such \(n\), we have \[\mu^{*nN_{0}}(\mathcal{A}_{nN_{0},\varepsilon})\geqslant\mu^{*nN_{0}}(\mathcal{ A}_{n,\varepsilon}^{*N_{0}})\geqslant c^{N_{0}}.\] So the required properties hold for \(N=nN_{0}\) with \(c^{\prime}=c^{N_{0}}\), finishing the proof. Proof of Theorem J.: Property (1) comes from Theorems 4.12 and 4.15, where \(\lambda_{i}=\lambda_{i}^{+}\) and \(\nu_{i}=\nu_{i}^{+}.\) Now, for every \(\varepsilon>0\) and every positive integer \(N,\) let \[\Gamma_{N}=\left\{(f_{1},\cdots,f_{N})\in\mathcal{S}^{N}:f_{N}\cdots f_{1}\in \mathcal{A}_{N,\varepsilon}\right\}.\] We will show that there exists \(N\) such that \(\Gamma=\Gamma_{N}\) satisfies the required conditions. By the first item of the previous proposition, we have \(\mu^{N}(\Gamma_{N})=\mu^{*N}(\mathcal{A}_{N,\varepsilon})\geqslant c^{\prime}\) for infinitely many positive integers \(N.\) By the Shannon-McMillan-Breiman theorem, there exists a subset of \(\mathcal{S}^{N}\) of \(\mu^{N}\)-measure at least \(1-o(1)\) such that each element of this subset has \(\mu^{N}\)-measure \(2^{-N(H(\mu)+o(1))}.\) Therefore, for \(N\) large enough, \(\Gamma_{N}\) contains at least \(2^{N(H(\mu)-\varepsilon)}\) elements. Hence, (2) holds for a sufficiently large \(N\). Property (3) is precisely the definition of \(\mathcal{A}_{N,\varepsilon}\), namely conditions (7.9) and (7.10). It remains to show (4). Fix a pair \((i,j)\). Let \(T\) be the semigroup generated by \(\mathcal{A}_{N,\varepsilon}\) and let \(T_{i,j}\) be the semigroup generated by \(2^{\dim\nu_{i}(|\lambda_{i}|-2\varepsilon)N}\) elements \(f\in\mathcal{A}_{N,\varepsilon}\) such that the intervals \(f(U_{i,j})\) are pairwise disjoint.
By the uniform contraction of \(T_{i,j}|_{U_{i,j}}\) and the fact that \(T_{i,j}\) strictly preserves \(U_{i,j}\), \(T_{i,j}\) admits a unique minimal set \(K_{i,j}\subset\overline{U_{i,j}}\). We consider a random walk on the interval \(\overline{U_{i,j}}.\) Let \(\mu^{\prime}\) be the uniform measure supported on these \(2^{\dim\nu_{i}(|\lambda_{i}|-2\varepsilon)N}\) elements. Then \(\mu^{\prime}\) has a unique stationary measure \(\nu\) on \(\overline{U_{i,j}}.\) Moreover, \[\operatorname{supp}\nu=\bigcap_{n=1}^{\infty}\bigcup_{f\in(\operatorname{supp }\mu^{\prime})^{*n}}f(\overline{U_{i,j}})=K_{i,j}.\] Note that for every \(f\in\operatorname{supp}\mu^{\prime}\), we have \(f^{\prime}(x)\geqslant 2^{(\lambda_{i}-\varepsilon)N}\) for every \(x\in\overline{U_{i,j}}.\) Hence any interval of diameter \(2^{(\lambda_{i}-\varepsilon)nN}\) intersects at most \(2\) intervals of the form \(f(\overline{U_{i,j}})\), \(f\in(\operatorname{supp}\mu^{\prime})^{*n}.\) We obtain \[\forall\rho>0,\quad\sup_{x\in\mathbb{S}^{1}}\log\nu(B(x,\rho))\leqslant\frac {\log\#\operatorname{supp}\mu^{\prime}}{(|\lambda_{i}|+\varepsilon)N}\log \rho+\log\#\operatorname{supp}\mu^{\prime}+10.\] It implies that \[\dim_{\mathrm{H}}K_{i,j}\geqslant\dim_{\mathrm{H}}\nu\geqslant\frac{\log\# \operatorname{supp}\mu^{\prime}}{(|\lambda_{i}|+\varepsilon)N}\geqslant\dim \nu_{i}\ \frac{|\lambda_{i}|-2\varepsilon}{|\lambda_{i}|+\varepsilon}.\] Then the conclusion follows by shrinking \(\varepsilon>0\) if necessary. Proof of Theorem 2.30.: By Theorem 2.10, we have \((\dim\nu)|\lambda|=h_{\mathrm{RW}}(\mu).\) Then the statement follows from Proposition 7.11, where the cardinality estimate holds. For (5), the proof is analogous to the proof above. ## 8 Variational principle for dimensions In this section, we will prove the variational principle for dimensions stated in Theorem 2.12 and Corollary 2.16. The idea is to choose a large set of good elements in the group with appropriate contraction rates. More precisely, assume that \(\dim_{\mathrm{H}}\Lambda=\alpha\). We expect to find about \(2^{\alpha n}\) elements and an interval \(I\) on which these elements act like affine contractions with derivative close to \(2^{-n}\). The method to find these elements is very similar to the technique used in the previous section. Instead of the dimension of a measure, we will apply this technique to the Hausdorff dimension of the minimal set. ### Elements with controlled contracting rates Let \(G\subset\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup without finite orbits in \(\mathbb{S}^{1}\) and \(\Lambda\subset\mathbb{S}^{1}\) be its unique minimal set. We write \(\Lambda^{\prime}=\Lambda\setminus G(\mathrm{NE})\), where \(\mathrm{NE}\) is the set of non-expandable points of the \(G\)-action. Assume that \(G\) satisfies property (\(\star\)) or (\(\Lambda\star\)). The following result from [23] will be useful. **Proposition 8.1** ([23, Proposition 6.4]).: _In the above setting, there exists \(\varepsilon_{0}>0\) such that for every point \(x\in\Lambda^{\prime}\), there is an interval \(I\subset\mathbb{S}^{1}\) of arbitrarily small length, together with an element \(g\in G\) such that \(gI=B(gx,\varepsilon_{0})\) and \(\varkappa(g,I)\leqslant 1\)._ The radius \(\varepsilon_{0}\) will be fixed throughout this section. **Definition 8.2**.: We say an interval \(I\) is _\(x\)-expandable_ for \(x\in\Lambda^{\prime}\) if there exists \(g\in G\) such that \(gI=B(gx,\varepsilon_{0})\) and \(\varkappa(g,I)\leqslant 1\).
We say an interval \(I\) is _expandable_ if it is \(x\)-expandable for some \(x\in\Lambda^{\prime}\). For \(n\in\mathbb{N}\), consider the set \[\mathcal{E}_{n}=\left\{\,\overline{I}\subset\mathbb{S}^{1}:I\text{ is expandable and }|I|\in[2^{-n-1},2^{-n}[\,\right\}.\] Let \(\widetilde{\mathcal{E}}_{n}\) be a finite subset of \(\mathcal{E}_{n}\) of maximal cardinality such that the intervals in \(\widetilde{\mathcal{E}}_{n}\) are pairwise disjoint. By Lemma 3.7, we have \[\limsup_{n\to+\infty}\frac{1}{n}\log\#\widetilde{\mathcal{E}}_{n}\geqslant \dim_{\mathrm{H}}E,\] where \(E=\limsup_{n\to+\infty}\bigcup_{I\in\mathcal{E}_{n}}I\). By Proposition 3.3, every point \(x\in\Lambda^{\prime}\) is contained in \(\bigcup_{I\in\mathcal{E}_{n}}I\) for infinitely many \(n\in\mathbb{N}\), that is, \(x\in E\). Hence \(\dim_{\mathrm{H}}E\geqslant\dim_{\mathrm{H}}\Lambda^{\prime}\) and \(\dim_{\mathrm{H}}\Lambda^{\prime}=\dim_{\mathrm{H}}\Lambda\) by Theorem 3.3. We conclude that \[\limsup_{n\to+\infty}\frac{1}{n}\log\#\widetilde{\mathcal{E}}_{n}\geqslant \dim_{\mathrm{H}}\Lambda. \tag{8.1}\] By taking the inverses of the group elements involved in the expandable intervals, we find elements with prescribed contracting rates on an interval of constant length. **Proposition 8.3**.: _Let \(G\subset\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) be a finitely generated subgroup without finite orbits in \(\mathbb{S}^{1}\) and \(\Lambda\subset\mathbb{S}^{1}\) be its unique minimal set. Assume that \(G\) satisfies property \((\star)\) or \((\Lambda\star).\) Let \(\varepsilon_{0}>0\) be the constant in Proposition 8.1. Then there exist \(y_{0}\in\Lambda\) and a sequence of subsets \(\mathcal{G}_{n}\subset G\) such that for \(I_{0}:=\overline{B(y_{0},\varepsilon_{0}/2)}\), we have_ 1. _for any_ \(n\in\mathbb{N}\)_, the intervals_ \((gI_{0})_{g\in\mathcal{G}_{n}}\) _are pairwise disjoint,_ 2. _for any_ \(n\in\mathbb{N}\) _and any_ \(g\in\mathcal{G}_{n}\)_,_ \(|gI_{0}|\in[2^{-n-3},2^{-n}[\) _and_ \(\varkappa(g,I_{0})\leqslant 1\)_,_ 3. \(\limsup_{n\to+\infty}\frac{1}{n}\log\#\mathcal{G}_{n}\geqslant\dim_{\mathrm{H }}\Lambda.\)__ Proof.: Fix a finite \(\varepsilon_{0}/2\)-dense subset \(\mathcal{K}\) of \(\Lambda\). Then, for any \(x\in\Lambda^{\prime}\) and any \(x\)-expandable interval \(I\) with \(g\in G\) being the corresponding diffeomorphism, the point \(gx\) is \(\varepsilon_{0}/2\)-close to some point \(y\) in \(\mathcal{K}\) and hence \(\overline{B(y,\varepsilon_{0}/2)}\subset\overline{B(gx,\varepsilon_{0})}=g \overline{I}\). For \(n\in\mathbb{N}\), let \(\widetilde{\mathcal{E}}_{n,y}\) denote the subset of \(\widetilde{\mathcal{E}}_{n}\) consisting of such intervals \(\overline{I}\), so that \(\sum_{y\in\mathcal{K}}\#\widetilde{\mathcal{E}}_{n,y}\geqslant\#\widetilde{ \mathcal{E}}_{n}\). Thus, by (8.1) and the pigeonhole principle, there is \(y_{0}\in\mathcal{K}\) such that \(\limsup_{n\to+\infty}\frac{1}{n}\log\#\widetilde{\mathcal{E}}_{n,y_{0}}\geqslant\dim_{\mathrm{H}}\Lambda\). One checks easily that the inverses of the diffeomorphisms associated to the expandable intervals in \(\widetilde{\mathcal{E}}_{n,y_{0}}\) satisfy the desired properties. ### Proof of the variational principle The diffeomorphisms constructed in Proposition 8.3 might map the interval \(I_{0}\) outside \(I_{0}\). To bring their images back to \(I_{0}\), we apply the following lemma, which is a strengthening of the fact that every \(G\)-orbit in \(\Lambda\) is dense.
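Before the lemma, here is a toy numerical illustration (our own example; the middle-thirds IFS stands in for the families \(\mathcal{F}_{n}\) constructed below) of the dimension count this section is after: for an IFS of \(N\) maps with contraction ratio \(\rho\) satisfying the open set condition, box counting at scales \(\rho^{m}\) recovers the exponent \(\log N/\log\rho^{-1}\), here \(\log 2/\log 3\).

```python
import numpy as np

# Toy check of the bound dim >= log(#maps) / log(1/contraction) for an IFS
# with the open set condition: the middle-thirds IFS {x/3, x/3 + 2/3} on [0,1].
# Purely illustrative; this is not the family constructed in the proof.
maps = [lambda x: x / 3, lambda x: x / 3 + 2 / 3]

def orbit_sample(n_steps=20, n_points=200000, seed=0):
    """Sample the stationary measure by iterating randomly chosen maps."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_points)
    for _ in range(n_steps):
        choice = rng.integers(0, 2, size=n_points)
        x = np.where(choice == 0, maps[0](x), maps[1](x))
    return x

pts = orbit_sample()
for m in (4, 6, 8):
    rho = 3.0 ** (-m)
    occupied = np.unique(np.floor(pts / rho))       # occupied boxes of size rho
    est = np.log(len(occupied)) / np.log(1 / rho)   # box-counting exponent
    print(f"scale 3^-{m}: estimate {est:.3f}, target {np.log(2)/np.log(3):.3f}")
```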
Here we assume additionally that \(G\) does not preserve any Borel probability measure. **Lemma 8.4**.: _Let \(G\subset\mathrm{Diff}^{2}_{+}(\mathbb{S}^{1})\) be a finitely generated subgroup without invariant probability measures on \(\mathbb{S}^{1}\). Let \(\Lambda\subset\mathbb{S}^{1}\) be the unique minimal set of \(G.\) Then there exists \(\varepsilon_{1}>0\) such that for any \(x,y\in\Lambda\) and any \(\varepsilon>0\) there exists \(f\in G\) such that \(f(B(x,\varepsilon_{1}))\subset B(y,\varepsilon).\)_ Proof.: Let \(\mathcal{G}\) be a finite symmetric generating set of \(G\), and let \(\mu\) be the uniform probability measure on \(\mathcal{G}\). It induces a random walk on \(\mathbb{S}^{1}.\) By Theorem 4.7, as \(\Lambda\) is the unique minimal set of the semigroup generated by \(\operatorname{supp}\mu=\mathcal{G}\), there exists a unique \(\mu\)-stationary measure \(\nu\) and \(\operatorname{supp}\nu=\Lambda.\) Since \(\mu\) is symmetric, \(\nu\) is also the unique \(\mu^{-1}\)-stationary measure. Recall the constant \(r\) and the map \(\Xi(\omega^{+})\) from Theorem 4.12. In view of Lemma 4.18, we take \(\varepsilon_{1}>0\) such that \(\sup_{x\in\mathbb{S}^{1}}\nu(B(x,\varepsilon_{1}))<\frac{1}{2r}.\) Then, by Theorem 4.12(2), \[\mathbf{P}^{+}\left\{\omega^{+}\in\Sigma^{+}:B(x,\varepsilon_{1})\cap\Xi( \omega^{+})\neq\varnothing\right\}\leqslant r\nu(B(x,\varepsilon_{1}))<\frac {1}{2}.\] In view of Theorem 4.15, the set of \(\omega^{+}\in\Sigma^{+}\) such that \(B(x,\varepsilon_{1})\subset W^{s}(\omega^{+})\) has positive \(\mathbf{P}^{+}\)-measure. For such \(\omega^{+}\), we have \(|f^{n}_{\omega^{+}}B(x,\varepsilon_{1})|\to 0\) as \(n\to+\infty\). Since \(x\in\Lambda=\operatorname{supp}\nu\), we have \(\nu(B(x,\varepsilon_{1}))>0\), so that there is \(\omega^{+}\) as above and \(x^{\prime}\in B(x,\varepsilon_{1})\) such that \((\omega^{+},x^{\prime})\) is generic for the measure preserving system \((\Sigma^{+}\times\mathbb{S}^{1},\mathbf{P}^{+}\times\nu,F^{+})\). Moreover, \(\nu(B(y,\varepsilon/2))>0\) since \(y\in\Lambda=\operatorname{supp}\nu\). Thus, by Birkhoff's ergodic theorem, there exists \(n\) arbitrarily large satisfying \(f^{n}_{\omega^{+}}(x^{\prime})\in B(y,\varepsilon/2)\). For \(n\) large enough, we also have \(|f^{n}_{\omega^{+}}(B(x,\varepsilon_{1}))|<\varepsilon/2.\) Let \(f=f^{n}_{\omega^{+}}\in G\); then \(f(B(x,\varepsilon_{1}))\subset B(y,\varepsilon)\). Once the images are brought back to \(I_{0}\), we obtain an IFS with controlled contraction rates satisfying the open set condition. This leads to a proof of Theorem 2.12. Proof of Theorem 2.12.: Let \(\alpha=\dim_{\mathrm{H}}\Lambda\); then \(\dim_{\mathrm{H}}\nu\leqslant\dim_{\mathrm{H}}\operatorname{supp}\nu\leqslant\alpha\) for every \(\nu\) supported on \(\Lambda.\) It suffices to show that \(\dim_{\mathrm{H}}\nu\) can be arbitrarily close to \(\alpha\) for stationary measures \(\nu\) as in the theorem. Let \(\varepsilon_{1}>0\) be as in Lemma 8.4. We can cover \(\Lambda\) by finitely many open intervals \(B(x_{1},\varepsilon_{1}),\cdots,B(x_{k},\varepsilon_{1})\) with \(x_{1},\cdots,x_{k}\in\Lambda\). Then every interval which is short enough and intersects \(\Lambda\) is contained in \(B(x_{i},\varepsilon_{1})\) for some \(1\leqslant i\leqslant k\). Let \(I_{0}=\overline{B(y_{0},\varepsilon_{0}/2)}\) and \((\mathcal{G}_{n})_{n\in\mathbb{N}}\) be as in Proposition 8.3.
For \(n\in\mathbb{N}\) and \(1\leqslant i\leqslant k\), consider \(\mathcal{G}_{n,i}=\{g\in\mathcal{G}_{n}:gI_{0}\subset B(x_{i},\varepsilon_{1})\}\), so that \(\mathcal{G}_{n}\subset\bigcup_{i=1}^{k}\mathcal{G}_{n,i}\) for \(n\) sufficiently large. By the pigeonhole principle, there is \(i\) such that \(\limsup_{n\to+\infty}\frac{1}{n}\log\#\mathcal{G}_{n,i}\geqslant\alpha\). Fix this \(i\). Apply Lemma 8.4 to \(x_{i}\) and \(y_{0}\) to obtain \(f_{0}\in G\) such that \(f_{0}(B(x_{i},\varepsilon_{1}))\subset I_{0}\). Set \(\mathcal{F}_{n}=\{f_{0}\circ g:g\in\mathcal{G}_{n,i}\}\), so that \[\forall f\in\mathcal{F}_{n},\quad fI_{0}\subset I_{0}. \tag{8.2}\] From the properties of \(\mathcal{G}_{n}\), we obtain \[(fI_{0})_{f\in\mathcal{F}_{n}}\text{ is a family of pairwise disjoint closed intervals} \tag{8.3}\] and \[\forall f\in\mathcal{F}_{n},\,\forall x\in I_{0},\quad\bigl|\log f^{\prime} (x)+n\bigr|\leqslant C\text{ for some }C\text{ independent of }n. \tag{8.4}\] Here \(C\) can be chosen to depend only on \(f_{0}\). For \(n\) large enough, \(\mathcal{F}_{n}\) is contracting on \(I_{0}\), so that the action of \(\mathcal{F}_{n}\) on \(I_{0}\) forms an IFS with the open set condition. Thus, letting \(\mu\) be the uniform measure on \(\mathcal{F}_{n}\subset G\), there is a unique \(\mu\)-stationary measure \(\nu\) on \(I_{0}\), which is obviously ergodic. It is supported on the attractor of the IFS: \[\operatorname{supp}\nu=\bigcap_{m=0}^{\infty}\bigcup_{f\in\mathcal{F}_{n}^{*m} }f(I_{0}).\] Recall that \(y_{0}\in I_{0}\cap\Lambda\). So every point in the attractor is a limit of points in \(Gy_{0}\subset\Lambda\) and hence \(\operatorname{supp}\nu\subset\Lambda\). To estimate the dimension of \(\nu\), note that by (8.3) and (8.4), for any \(m\in\mathbb{N}\), any ball of radius \(2^{-(n+C)m}|I_{0}|\) intersects at most \(2\) intervals of the form \(fI_{0}\), \(f\in\mathcal{F}_{n}^{*m}\). Thus, picking \(m=\lfloor\frac{-\log\rho+\log|I_{0}|}{n+C}\rfloor\), we obtain \[\forall\rho>0,\quad\sup_{x\in\mathbb{S}^{1}}\log\nu(B(x,\rho))\leqslant\frac {\log\#\mathcal{F}_{n}}{n+C}\log\frac{\rho}{|I_{0}|}+\log\#\mathcal{F}_{n}+10,\] which implies \[\dim_{\mathrm{H}}\nu\geqslant\frac{\log\#\mathcal{F}_{n}}{n+C}.\] This concludes the proof since \(\limsup_{n\to+\infty}\frac{1}{n}\log\#\mathcal{F}_{n}=\limsup_{n\to+\infty} \frac{1}{n}\log\#\mathcal{G}_{n,i}\geqslant\alpha\). Proof of Theorem 2.: Apply Theorems 2.12 and 3.4. As an intermediate step of the proof, we have the following statement. **Corollary 8.5**.: _Let \(G\subset\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup without invariant probability measures, and \(\Lambda\) be the unique minimal set of \(G\). Suppose \(G\) satisfies property \((\star)\) or \((\Lambda\star)\); then there exist a closed interval \(I_{0}\subset\mathbb{S}^{1}\) intersecting \(\Lambda\) and a sequence of finite subsets \(\mathcal{F}_{n}\) of \(G\) satisfying (8.2), (8.3), (8.4) and_ \[\limsup_{n\to+\infty}\frac{1}{n}\log\#\mathcal{F}_{n}\geqslant\dim_{\mathrm{ H}}\Lambda.\] Proof of Theorem C.: By Lemma 3.2 and Theorem 3.4, \(G\) has no invariant probability measures and satisfies property \((\Lambda\star).\) Then by Corollary 8.5, we can find a closed interval \(I\) and a subset \(\mathcal{F}=\mathcal{F}_{n}\subset G\) such that \(g|_{I}\) is contracting for every \(g\in\mathcal{F}\) and \((gI)_{g\in\mathcal{F}}\) is a family of pairwise disjoint closed intervals.
The uniform measure on \(\mathcal{F}\) induces a random walk which has a unique stationary measure \(\nu\) on \(I.\) The measure \(\nu\) is supported on the attractor \[\Lambda^{\prime}=\bigcap_{n=1}^{\infty}\bigcup_{f\in\mathcal{F}^{*n}}f(I) \subset\Lambda.\] Note that \(\dim_{\mathrm{H}}\nu\) can be made arbitrarily close to \(\dim_{\mathrm{H}}\Lambda\) by an estimate similar to the one above. Hence \(\dim_{\mathrm{H}}\Lambda^{\prime}\) can also be made arbitrarily close to \(\dim_{\mathrm{H}}\Lambda.\) The interval \(I\) and the elements of \(\mathcal{F}\) form an IFS satisfying the required conditions. ### Minimal sets of sub-semigroups In the statement of Theorem 2.12, it is necessary to restrict our attention to stationary measures that are supported on \(\Lambda\). The following example illustrates this necessity even in the \(C^{\infty}\) case. _Example 8.6_.: Let \(G_{0}\subset\operatorname{Diff}_{+}^{\infty}(\mathbb{S}^{1})\) be a subgroup having an exceptional minimal set \(\Lambda\subset\mathbb{S}^{1}.\) Let \(J\) be a connected component of \(\mathbb{S}^{1}\setminus\Lambda\) and let \(J^{\prime}\) be a closed interval strictly contained in \(J\). We can choose two elements \(f_{1},f_{2}\in\operatorname{Diff}_{+}^{\infty}(\mathbb{S}^{1})\) which are the identity on \(\mathbb{S}^{1}\setminus J\) and whose actions on \(J^{\prime}\) are affinely conjugate to the maps \(x\mapsto\frac{1}{2}x\) and \(x\mapsto\frac{1}{2}(x+1)\) on \([0,1]\), respectively. Let \(G\) be the group generated by \(G_{0}\) and \(f_{1},f_{2}\). Then \(\Lambda\) is also the unique minimal set of \(G\). However, the semigroup generated by \(f_{1},f_{2}\) has a minimal set \(J^{\prime}\) which does not intersect \(\Lambda\). The normalized Lebesgue measure on \(J^{\prime}\) is stationary and ergodic for the probability measure \(\mu=\frac{1}{2}\delta_{f_{1}}+\frac{1}{2}\delta_{f_{2}}\). Its Hausdorff dimension, equal to \(1\), can exceed that of \(\Lambda\). In the \(C^{\omega}\) case, this kind of example cannot occur. This is the content of the next proposition. We apply it to prove Corollary 2.16. **Proposition 8.7**.: _Let \(G\subset\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) be a finitely generated subgroup without finite orbits and \(\Lambda\) be the unique \(G\)-minimal set. Let \(H\) be a finitely generated sub-semigroup of \(G\) without finite orbits; then every \(H\)-minimal set is contained in \(\Lambda.\)_ **Lemma 8.8**.: _Let \(H\subset\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated sub-semigroup without common invariant probability measures on \(\mathbb{S}^{1},\) and let \(\Delta\) be an \(H\)-minimal set. Then for every \(x\in\Delta\) and \(\varepsilon>0,\) there exists a nontrivial element \(f\in H\) with a fixed point in \(B(x,\varepsilon)\setminus\{x\}\,.\)_ Proof.: This is a direct consequence of Proposition 7.8 applied to the interval \(I=B(x,\varepsilon).\) This gives us two elements \(h_{1},h_{2}\in H,\) each having a hyperbolic fixed point in \(B(x,\varepsilon)\). Moreover, the two fixed points are distinct, so one of them is not \(x.\) Proof of Proposition 8.7.: Without loss of generality we assume that \(\Lambda\) is an exceptional minimal set. Suppose there exists an \(H\)-minimal set \(\Delta\) which is not contained in \(\Lambda.\) Let \(x\in\Delta\setminus\Lambda\) be an arbitrary point and let \(J\) be the connected component of \(\mathbb{S}^{1}\setminus\Lambda\) that contains \(x.\) By Hector's Theorem 3.5, the stabilizer \(G_{J}\) of \(J\) is trivial or infinite cyclic.
Take \(\varepsilon>0\) sufficiently small so that no \(y\in B(x,\varepsilon)\setminus\left\{x\right\}\) is a fixed point of a nontrivial element of \(G_{J}\). By Lemma 3.2, we can apply Lemma 8.8 to \(H\) and obtain a nontrivial element \(f\in H\subset G\) with a fixed point in \(B(x,\varepsilon)\setminus\left\{x\right\}.\) But then \(f\) preserves \(\Lambda\); since \(f\) fixes a point of \(J\) and the endpoints of \(J\) belong to \(\Lambda,\) we get \(f\in G_{J}\). This contradicts the choice of \(\varepsilon\). Proof of Corollary 2.16.: By Theorem 2.12, it suffices to show that such \(\dim_{\mathrm{H}}\nu\) cannot be strictly larger than \(\dim_{\mathrm{H}}\Lambda.\) Thus we may assume that \(\Lambda\) is an exceptional minimal set. If \(T_{\mu}\) has no finite orbits, then it does not preserve any probability measure by Lemma 3.2. Hence each ergodic \(\mu\)-stationary measure is supported on a \(T_{\mu}\)-minimal set, which is contained in \(\Lambda\) by Proposition 8.7, and the conclusion follows. On the other hand, if \(T_{\mu}\) has a finite orbit, then by Theorem 3.5, the group generated by \(\operatorname{supp}\mu\) is either finite or a finite extension of a cyclic group. By the Choquet-Deny theorem [19], every stationary measure of a random walk on a virtually abelian group is in fact an invariant measure. It follows that every ergodic \(\mu\)-stationary measure has finite support and hence zero dimension. ## 9 The dynamical critical exponents In this section, we will establish the equality between the dimension of the minimal set and the dynamical critical exponents, both the \(C^{1}\) one, \(\delta(G),\) and the \(C^{2}\) one, \(\delta_{2}(G)\). Namely, we prove Theorem 2.17 and Theorem 2.25. The strategies of both proofs share some similarities. To show that the critical exponent can attain \(\dim_{\mathrm{H}}\Lambda,\) we only need to find many elements with bounded derivatives (and bounded distortion for the \(\delta_{2}(G)\) case). This is already done in Section 8.1. Thus, the main task in this section is to show that the dynamical critical exponent does not exceed \(\dim_{\mathrm{H}}\Lambda.\) The definition of the dynamical critical exponent provides us with, for large \(n,\) about \(2^{\delta(G)n}\) elements in \(G\) whose derivatives can be bounded from below by \(2^{-n}\) on some interval of constant length. After using a pigeonhole argument and appending words of bounded length, we can make these elements preserve a common interval. Denote by \(\mathcal{A}\) the set of these elements and consider the random walk on this interval induced by the uniform probability measure on \(\mathcal{A}\). The absolute value of the Lyapunov exponent will be at most \(n\). If \(\mathcal{A}\) freely generates a free sub-semigroup, then the random walk entropy would be \(\log\#\mathcal{A},\) that is, about \(\delta(G)n\), and then, by Theorem 2.8, the dimension of the stationary measure of this random walk would be about \(\delta(G),\) allowing us to conclude. Finding a free sub-semigroup is therefore the crucial point. This is where the proofs of Theorems 2.17 and 2.25 differ. For the case of the \(C^{2}\) dynamical critical exponent, we have a distortion control on the elements of \(\mathcal{A}\). Then we can find, with the help of Lemma 6.4, a large subset of \(\mathcal{A}\) which has pingpong dynamics on some interval, and then conclude as in the proof of Theorem 2.12. Details will be given in Section 9.2.
For the case of the \(C^{1}\) dynamical critical exponent, we do not have distortion control anymore. Instead of looking for pingpong dynamics on the circle, we will make use of the combinatorics of the free group. Details will be given in Section 9.1. ### The \(C^{1}\) dynamical critical exponent The goal of this subsection is to prove the following statement and then deduce Theorem 2.17 from it. **Proposition 9.1**.: _Let \(G\subset\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated, locally discrete, free subgroup. Assume that \(G\) does not preserve any probability measure on \(\mathbb{S}^{1}\) and let \(\Lambda\) be the unique minimal set of \(G.\) Then for every \(\varepsilon>0,\) there exists a Borel probability measure \(\nu\) on \(\mathbb{S}^{1}\) with \(\operatorname{supp}\nu\subset\Lambda\) and \(\dim_{\mathrm{H}}\nu\geqslant\delta(G)-\varepsilon.\)_ First, let us give an equivalent definition of \(\delta(G)\). **Lemma 9.2**.: _Let \(G\subset\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup preserving no probability measure on \(\mathbb{S}^{1}\). Let \(\Lambda\subset\mathbb{S}^{1}\) be its unique minimal set. Let \(\varepsilon_{1}>0\) be the constant of Lemma 8.4. For any \(x\in\Lambda\), we have_ \[\delta(G)=\limsup_{n\to+\infty}\frac{1}{n}\log\#\left\{\,g\in G:g^{\prime}|_{B(x, \varepsilon_{1})}\geqslant 2^{-n}\,\right\}.\] This lemma is a particular case of the general discussion of dynamical critical exponents on sets; see Lemma 11.4 and Corollary 11.5. Proof.: By the definition of \(\delta(G)\), for any \(\varepsilon>0\), there is \(\varepsilon^{\prime}>0\) such that \[\limsup_{n\to+\infty}\frac{1}{n}\log\#\left\{\,g\in G:\exists y\in\Lambda,\,g^{ \prime}|_{B(y,\varepsilon^{\prime})}\geqslant 2^{-n}\,\right\}\geqslant \delta(G)-\varepsilon.\] Let \(\{x_{1},\ldots,x_{k}\}\) be an \(\frac{\varepsilon^{\prime}}{2}\)-dense subset of \(\Lambda\), so that for any \(y\in\Lambda\), there is \(i\in\{1,\ldots,k\}\) such that \(B(x_{i},\varepsilon^{\prime}/2)\subset B(y,\varepsilon^{\prime})\). By the pigeonhole principle, there is \(i\in\{1,\ldots,k\}\) such that \[\limsup_{n\to+\infty}\frac{1}{n}\log\#\left\{\,g\in G:g^{\prime}|_{B(x_{i}, \varepsilon^{\prime}/2)}\geqslant 2^{-n}\,\right\}\geqslant\delta(G)-\varepsilon.\] By Lemma 8.4, there is \(f\in G\) such that \(fB(x,\varepsilon_{1})\subset B(x_{i},\varepsilon^{\prime}/2)\). Considering \(g\mapsto gf\), we obtain \[\limsup_{n\to+\infty}\frac{1}{n}\log\#\left\{\,g\in G:g^{\prime}|_{B(x,\varepsilon _{1})}\geqslant 2^{-n}\,\right\}\geqslant\delta(G)-\varepsilon.\] Since \(\varepsilon>0\) is arbitrary, this proves the lemma. Proof of Proposition 9.1.: Let \(\mathcal{S}\) be a free generating set of \(G\) with \(k\) elements. Let \(\mathcal{G}=\mathcal{S}\cup\mathcal{S}^{-1}\). Each element \(g\in G\) can be written uniquely in its _reduced form_\(g=\gamma_{m}\cdots\gamma_{2}\gamma_{1}\) where \(\gamma_{i}\in\mathcal{G}\) and \(\gamma_{i+1}\neq\gamma_{i}^{-1}\) for all \(i\). Let \(|g|\coloneqq m\) denote the _word norm_ of \(g.\) In the case where \(m\geqslant 1\), that is, when \(g\neq\operatorname{id}\), we will call \(\gamma_{1}\) the _beginning_ of \(g\) and \(\gamma_{m}\) the _ending_ of \(g\). We denote them by \(b(g)\) and \(e(g)\) respectively. We say \(h\in G\) is a _prefix_ (resp. _suffix_) of \(g\) if \(h=\gamma_{l}\cdots\gamma_{2}\gamma_{1}\) (resp. \(h=\gamma_{m}\cdots\gamma_{l}\)) for some \(1\leqslant l\leqslant m\).
We write \(h\preceq g\) if \(h\) is a prefix of \(g\) and \(h\prec g\) if moreover \(h\neq g\). For \(g,h\in G\), define \(\operatorname{cl}(g,h)=\frac{|g|+|h|-|gh|}{2}\), which is the number of cancellations needed to bring the product of \(g\) and \(h\) to its reduced form. Note that \(\operatorname{cl}(g,h)=0\) if and only if no cancellation happens in \(gh\). We say \(g\) is _cyclically reduced_ if \(\operatorname{cl}(g,g)=0\), that is, if \(b(g)e(g)\neq 1\).

_The key to the proof of the Proposition is to construct a subset \(\mathcal{A}_{8}\) that satisfies Lemma 9.9, based on the set \(\mathcal{A}\) as in (9.1)._

Let \(T\) be the semigroup generated by \(\mathcal{S}\); then \(T\) preserves no probability measure on \(\mathbb{S}^{1}.\) We choose such a sub-semigroup because multiplication of its elements never involves any cancellation. Since \(\Lambda\) is also \(T\)-invariant, there exists a \(T\)-minimal set \(\Delta\subset\Lambda.\) We fix an arbitrary point \(x\in\Delta\) for the rest of the proof.

By Lemma 9.2, for any \(\varepsilon>0\), for arbitrarily large \(n\), the subset

\[\mathcal{A}=\left\{\,g\in G:g^{\prime}|_{B(x,\varepsilon_{1})}\geqslant 2^{-n}\,\right\} \tag{9.1}\]

has cardinality \(\#\,\mathcal{A}\geqslant 2^{n(\delta(G)-\varepsilon)}\). Now we would like to modify \(\mathcal{A}\) so that its elements preserve a common interval and so that it freely generates a free sub-semigroup. The latter point will be guaranteed by using the following observation.

**Fact 9.3**.: _Let \(\mathcal{A}\subset G\) be a finite subset. Assume that there is no cancellation when concatenating words of \(\mathcal{A}\) and that no element of \(\mathcal{A}\) is a prefix of another element. Then \(\mathcal{A}\) freely generates a free sub-semigroup._

The idea to achieve this is to append to each \(g\in\mathcal{A}\) some element \(h\in G\) of bounded length. The set \(\mathcal{H}_{g}\) below is the set of candidate words \(h\) that will be appended to \(g\) to form \(hg\), producing the modified set.

**Lemma 9.4**.: _There is a constant \(c>0\), an integer \(l_{0}\in\mathbb{N}\) and a finite collection \(\mathcal{I}\) of closed intervals which intersect \(\Lambda\), where \(\#\mathcal{I}=C\) is a constant depending only on \(G\), such that the following holds for any \(l\in\mathbb{N}\) sufficiently large._

_For any \(g\in G\), there is an interval \(I=I(g)\in\mathcal{I}\) and a subset \(\mathcal{H}_{g}\subset G\) such that_

1. \(I\subset B(x,\varepsilon_{1})\)_, where_ \(\varepsilon_{1}\) _is the constant of Lemma_ 9.2_,_
2. \(\#\mathcal{H}_{g}\geqslant 2^{cl}\)_,_
3. \(\forall h\in\mathcal{H}_{g}\)_,_ \(|h|\leqslant l\) _and_ \(hgI\subset I\)_,_
4. \(\forall h\in\mathcal{H}_{g}\)_,_ \(\operatorname{cl}(g,h)\leqslant l_{0}\)_,_
5. _the intervals_ \(hgI\) _with_ \(h\in\mathcal{H}_{g}\)_, are pairwise disjoint._

Here, (4) serves as a technical preparation for later use. It enables us to control the number of letters that need to be removed to cyclically reduce the word \(hg\).

Proof.: By Proposition 7.8, there is a perfect pingpong pair \((h_{1},h_{2})\subset T\) with pingpong dynamics on a closed subinterval \(I_{0}\subset B(x,\varepsilon_{1})\), that is, \(h_{1}\) and \(h_{2}\) preserve \(I_{0}\) and have disjoint images. Moreover, the interval \(I_{0}\) intersects \(\Delta\) and hence \(I_{0}\cap\Lambda\neq\varnothing.\) Replacing \((h_{1},h_{2})\) by \(\left(h_{1}^{|h_{2}|},h_{2}^{|h_{1}|}\right)\), we may assume that \(|h_{1}|=|h_{2}|=:l_{0}\).
By Lemma 8.4, there is \(\varepsilon_{2}>0\) and a finite subset \(\mathcal{F}\subset G\) such that for any interval \(J\) of length \(\leqslant\varepsilon_{2}\) intersecting \(\Lambda\), there exists \(f\in\mathcal{F}\) such that \(fJ\subset I_{0}\). Let \(s\in\mathbb{N}\) be large enough so that \(2^{s}>\varepsilon_{2}^{-1}\) and \(s>\max_{f\in\mathcal{F}}|f|\). Define \(\mathcal{I}\) to be the collection of intervals of the form \(hI_{0}\) with \(h\in\{h_{1},h_{2}\}^{*(s+1)}\).

Now let \(g\in G\). Since \(h_{1}\neq h_{2}\) and \(|h_{1}|=|h_{2}|\), there is \(i\in\{1,2\}\) such that \(h_{i}\) is not a suffix of \(g^{-1}\). The \(2^{s}\) intervals \(gh_{i}hI_{0}\), \(h\in\{h_{1},h_{2}\}^{*s}\), are pairwise disjoint. Hence there is some \(h\in\{h_{1},h_{2}\}^{*s}\) with \(I=h_{i}hI_{0}\in\mathcal{I}\) such that \(|gI|<\varepsilon_{2}\). By the choice of \(\varepsilon_{2}\), there is \(f\in\mathcal{F}\) such that \(fgI\subset I_{0}\). Finally let \(c=\frac{1}{2l_{0}}\). We claim that

\[\mathcal{H}_{g}\coloneqq h_{i}h\{h_{1},h_{2}\}^{*cl}f\]

satisfies the desired property, whenever \(l>2(s+2)l_{0}+2\max_{f\in\mathcal{F}}|f|\). Indeed, since there is no cancellation between elements of \(T\), for any \(\hat{h}\in\{h_{1},h_{2}\}^{*cl}\), we have \(|\hat{h}|\geqslant cl\,l_{0}>|f|\). Hence \(h_{i}\) remains a suffix of \(h_{i}h\hat{h}f\), so \(\operatorname{cl}(g,h_{i}h\hat{h}f)\leqslant|h_{i}|=l_{0}\). The other properties can be checked in a straightforward manner.

Let \(c\), \(l_{0}\), \(\mathcal{I}\) be given by the lemma. Fix some \(l\in\mathbb{N}\) large enough such that the inequalities (9.2) and (9.3) hold. Let \(I(g)\) and \(\mathcal{H}_{g}\) be given by applying the lemma to this \(l\). By a pigeonhole argument, we find \(I\in\mathcal{I}\) such that

\[\mathcal{A}_{0}\coloneqq\left\{\,g\in\mathcal{A}:I(g)=I,\ |g|\geqslant l+l_{0}+1\,\right\}\]

has cardinality \(\#\mathcal{A}_{0}\geqslant C^{-1}(\#\mathcal{A}-(2k)^{l+l_{0}+1})\) where \(C=\#\mathcal{I}.\) Fix this \(I\in\mathcal{I}\). Next, we replace each \(g\in\mathcal{A}_{0}\) by a prefix to eliminate cancellations in the products \(hg\), for \(h\in\mathcal{H}_{g}\).

**Lemma 9.5**.: _For any \(g\in G\), there is a prefix \(\hat{g}\in G\) of \(g\) of length \(|\hat{g}|\geqslant|g|-l\) such that_

\[\widehat{\mathcal{H}}_{\hat{g}}\coloneqq\left\{\,\hat{h}\in G:\hat{h}\text{ is a suffix of }h=\hat{h}\hat{g}g^{-1}\in\mathcal{H}_{g}\text{ and }\operatorname{cl}(\hat{h},\hat{g})=0\,\right\}\]

_satisfies \(\#\widehat{\mathcal{H}}_{\hat{g}}\geqslant(l+1)^{-1}2^{cl}\)._

For each pair \((h,g)\), there exists a unique pair \((\hat{h},\hat{g})\) where \(\hat{h}\) is a suffix of \(h\) and \(\hat{g}\) is a prefix of \(g\) such that \(\hat{h}\hat{g}=hg\) and \(\operatorname{cl}(\hat{h},\hat{g})=0.\) The element \(\hat{h}\) in the lemma is obtained in this way.

Proof.: Let \(g\in G\). By the pigeonhole principle, there is \(i\in\{0,\ldots,l\}\) such that

\[\#\left\{\,h\in\mathcal{H}_{g}:\operatorname{cl}(h,g)=i\,\right\}\geqslant\frac{\#\mathcal{H}_{g}}{l+1}.\]

Then let \(\hat{g}\) be the prefix of \(g\) of length \(|g|-i\). It satisfies the desired property.

Let \(\mathcal{A}_{1}\) be the image of \(g\in\mathcal{A}_{0}\) under the map \(g\mapsto\hat{g}\). Then for every \(g\in\mathcal{A}_{1}\), \(|g|\geqslant l_{0}+1\). Besides, \(\#\mathcal{A}_{1}\geqslant(2k)^{-l}\#\mathcal{A}_{0}\).
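To illustrate the lemma's surgery on a toy instance (ours): take \(h=ba\) and \(g=a^{-1}b\), so that \(hg=ba\cdot a^{-1}b=b^{2}\) and \(\operatorname{cl}(h,g)=1\). Then \(\hat{g}=b\) is the prefix of \(g\) of length \(|g|-1\), the corresponding suffix of \(h\) is \(\hat{h}=b\), and indeed \(\hat{h}\hat{g}=b^{2}=hg\) with \(\operatorname{cl}(\hat{h},\hat{g})=0\).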
In the next step, we will shrink \(\mathcal{A}_{1}\) to \(\mathcal{A}_{4}\) and shrink \(\widehat{\mathcal{H}}_{g}\) to make sure that all \(h\in\widehat{\mathcal{H}}_{g}\) have the same length, that there is a word \(p\in G\) independent of \(g\) and \(h\) such that \(phgp^{-1}\) is cyclically reduced, and that there is no cancellation when multiplying words of the form \(phgp^{-1}\). By a pigeonhole argument on \(|h|\), we find an integer \(m\leqslant l\) such that

\[\mathcal{A}_{2}\coloneqq\left\{\,g\in\mathcal{A}_{1}:\#\widehat{\mathcal{H}}_{g,m}\geqslant l^{-1}\#\widehat{\mathcal{H}}_{g}\,\right\}\]

where

\[\widehat{\mathcal{H}}_{g,m}=\left\{\,h\in\widehat{\mathcal{H}}_{g}:|h|=m\,\right\}\]

has cardinality \(\#\mathcal{A}_{2}\geqslant l^{-1}\#\mathcal{A}_{1}\). From now on, fix this \(m\) and replace \(\widehat{\mathcal{H}}_{g}\) by \(\widehat{\mathcal{H}}_{g,m}\) for every \(g\in\mathcal{A}_{2}\). Since \(\#\widehat{\mathcal{H}}_{g,m}\geqslant(l+1)^{-2}2^{cl}\) and \(\#\widehat{\mathcal{H}}_{g,m}\leqslant(2k)^{m}\), we must have

\[m\geqslant\frac{cl-2\log(l+1)}{\log(2k)}>l_{0}. \tag{9.2}\]

Also, for any \(\hat{g}\in\mathcal{A}_{2}\) and any \(\hat{h}\in\widehat{\mathcal{H}}_{\hat{g}}\), \(\hat{h}\) is the suffix of length \(m\) of some element \(h=\hat{h}\hat{g}g^{-1}\) in \(\mathcal{H}_{g}\) for some \(g\in\mathcal{A}_{0}\). We deduce that \(\operatorname{cl}(\hat{g},\hat{h})\leqslant\operatorname{cl}(g,h)\leqslant l_{0}\).

For \(g,h\in G\), let \(\operatorname{lcp}(g,h)\) denote the longest common prefix of \(g\) and \(h\). Note that \(|\operatorname{lcp}(g,h^{-1})|=\operatorname{cl}(g,h)\). Recall that \(\operatorname{cl}(g,h)\leqslant l_{0}\) for every \(g\in\mathcal{A}_{2}\) and \(h\in\widehat{\mathcal{H}}_{g}.\) Note also that, if \(\operatorname{cl}(h,g)=0\) and \(p=\operatorname{lcp}(g,h^{-1})\) is shorter than both \(h\) and \(g\), then \(phgp^{-1}\) is cyclically reduced. By a pigeonhole argument with \((g,h)\mapsto\operatorname{lcp}(g,h^{-1})\in\mathcal{G}^{*\leqslant l_{0}}\), we find \(p\in\mathcal{G}^{*\leqslant l_{0}}\) such that

\[\mathcal{A}_{3}\coloneqq\left\{\,g\in\mathcal{A}_{2}:\#\widehat{\mathcal{H}}_{g,p}\geqslant(2k)^{-l_{0}}\#\widehat{\mathcal{H}}_{g}\,\right\}\]

with \(\widehat{\mathcal{H}}_{g,p}=\left\{\,h\in\widehat{\mathcal{H}}_{g}:\operatorname{lcp}(g,h^{-1})=p\,\right\}\) has cardinality \(\#\mathcal{A}_{3}\geqslant(2k)^{-l_{0}}\#\mathcal{A}_{2}\). From now on, fix this \(p\) and replace \(\widehat{\mathcal{H}}_{g}\) by \(\widehat{\mathcal{H}}_{g,p}\) for every \(g\in\mathcal{A}_{3}\). By yet another pigeonhole argument, we find \((b,e)\in\mathcal{G}\times\mathcal{G}\) with \(b\neq e^{-1}\) such that

\[\mathcal{A}_{4}\coloneqq\left\{\,g\in\mathcal{A}_{3}:\#\widehat{\mathcal{H}}_{g,b,e}\geqslant(2k)^{-2}\#\widehat{\mathcal{H}}_{g}\,\right\}\]

with \(\widehat{\mathcal{H}}_{g,b,e}=\left\{\,h\in\widehat{\mathcal{H}}_{g}:b(phgp^{-1})=b,\,e(phgp^{-1})=e\,\right\}\) has cardinality \(\#\mathcal{A}_{4}\geqslant(2k)^{-2}\#\mathcal{A}_{3}\). We also remark that \(b(phgp^{-1})=b(gp^{-1})\) and \(e(phgp^{-1})=e(ph)\) since \(|h|,|g|\geqslant l_{0}+1>|p|\) and \(\operatorname{cl}(h,g)=0\), for every \(g\in\mathcal{A}_{3}\) and \(h\in\widehat{\mathcal{H}}_{g}.\) From now on, we will fix this \((b,e)\) and replace \(\widehat{\mathcal{H}}_{g}\) by \(\widehat{\mathcal{H}}_{g,b,e}\) for every \(g\in\mathcal{A}_{4}\).
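For bookkeeping, the pigeonhole losses accumulated so far chain together as

\[\#\mathcal{A}_{4}\geqslant(2k)^{-2}\#\mathcal{A}_{3}\geqslant(2k)^{-2-l_{0}}\#\mathcal{A}_{2}\geqslant(2k)^{-2-l_{0}}l^{-1}\#\mathcal{A}_{1}\geqslant(2k)^{-2-l_{0}-l}l^{-1}\#\mathcal{A}_{0},\]

which, combined with \(\#\mathcal{A}_{0}\geqslant C^{-1}(\#\mathcal{A}-(2k)^{l+l_{0}+1})\), yields the cardinality bound recorded in Lemma 9.6(1) below.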
For later convenience, let

\[\mathcal{A}_{5}\coloneqq\left\{\,gp^{-1}:g\in\mathcal{A}_{4}\,\right\}.\]

We replace the interval \(I\) by \(pI\) and \(m\) by \(m-|p|.\) Then \(0<m\leqslant l.\) For every \(\hat{g}=gp^{-1}\in\mathcal{A}_{5}\), we replace \(\widehat{\mathcal{H}}_{\hat{g}}\) by the set \(\left\{\,ph:h\in\widehat{\mathcal{H}}_{g}\text{ as given before}\,\right\}.\) To summarize, we obtain the following lemma.

**Lemma 9.6**.: _There exist constants \(l_{0},C\in\mathbb{N}\) depending only on \(G\) which satisfy the following properties. Given \(l\) large enough and \(\mathcal{A}\subset G\) as in (9.1), there exists a closed interval \(I\) that intersects \(\Lambda\), \(b,e\in\mathcal{G}\) with \(b\neq e^{-1}\), a positive integer \(m\leqslant l\), a subset \(\mathcal{A}_{5}\subset G\) and a family of subsets \(\{\widehat{\mathcal{H}}_{g}\}_{g\in\mathcal{A}_{5}},\widehat{\mathcal{H}}_{g}\subset G\) such that_

1. \(\#\mathcal{A}_{5}\geqslant C^{-1}l^{-1}(2k)^{-l-l_{0}-2}(\#\mathcal{A}-(2k)^{l+l_{0}+1}).\)
2. \(\forall g\in\mathcal{A}_{5},\,\#\widehat{\mathcal{H}}_{g}\geqslant(l+1)^{-2}(2k)^{-l_{0}-2}2^{cl}.\)
3. \(\forall g\in\mathcal{A}_{5},\) _the elements in_ \(\widehat{\mathcal{H}}_{g}\) _share the same word norm_ \(m.\)
4. \(\forall g\in\mathcal{A}_{5},\,b(g)=b\) _and_ \(\forall h\in\widehat{\mathcal{H}}_{g},\,\mathrm{cl}(h,g)=0\) _and_ \(e(h)=e.\) _In particular,_ \(\mathrm{cl}(h_{1}g_{1},h_{2}g_{2})=0\) _for every_ \(g_{i}\in\mathcal{A}_{5}\) _and_ \(h_{i}\in\widehat{\mathcal{H}}_{g_{i}},\,i=1,2.\)
5. _For a fixed_ \(g\in\mathcal{A}_{5},\) \(hgI\subset I\) _and the intervals_ \(hgI\)_,_ \(h\in\widehat{\mathcal{H}}_{g}\)_, are pairwise disjoint._
6. _For every_ \(g\in\mathcal{A}_{5}\) _and_ \(h\in\widehat{\mathcal{H}}_{g},\,(hg)^{\prime}|_{I}\geqslant M^{-2l-2l_{0}}2^{-n}\) _where_ \(M=\max_{\gamma\in\mathcal{G}}\left\|\gamma\right\|_{C^{1}}\)_._

In the next step, we will construct a large set \(\mathcal{A}_{8}\) without prefix relations, by picking elements of the form \(hg\) with \(g\in\mathcal{A}_{5}\) and \(h\in\widehat{\mathcal{H}}_{g}\). Let \(\mathscr{P}\) denote the set of all prefixes of elements of \(\mathcal{A}_{5}\) and consider

\[\mathcal{B}=\left\{\,(g,h)\in\mathcal{A}_{5}\times G:h\in\widehat{\mathcal{H}}_{g}\text{ and }hg\in\mathscr{P}\,\right\}.\]

For \((g,h)\in\mathcal{B}\), let \(\phi(g,h)\) be a shortest element in \(\mathcal{A}_{5}\) of which \(hg\) is a prefix. Such a shortest element need not be unique, but the choice among the shortest ones does not matter. Recall that \(m\leqslant l\) is the common value of \(|h|\) for \(h\in\widehat{\mathcal{H}}_{g},\,g\in\mathcal{A}_{5}\).

**Lemma 9.7**.: _The map \(\phi\colon\mathcal{B}\to\mathcal{A}_{5}\) is at most \(m\)-to-1._

Proof.: Assume for a contradiction that \(\phi(g_{0},h_{0})=\cdots=\phi(g_{m},h_{m})\) for \(m+1\) distinct elements of \(\mathcal{B}\). Since \(\mathrm{cl}(h_{0},g_{0})=\cdots=\mathrm{cl}(h_{m},g_{m})=0\) and \(|h_{0}|=\cdots=|h_{m}|\), we know \((h_{i}g_{i})_{0\leqslant i\leqslant m}\) are all distinct and prefixes of the same word. After reordering, we may assume

\[h_{0}g_{0}\prec\cdots\prec h_{m}g_{m}\preceq\phi(g_{0},h_{0}).\]

Then \(|h_{0}g_{0}|\leqslant|h_{m}g_{m}|-m=|g_{m}|\) and hence \(h_{0}g_{0}\preceq g_{m}\prec\phi(g_{0},h_{0})\). This contradicts the minimality of \(|\phi(g_{0},h_{0})|\).
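Before continuing, let us note why the prefix condition in Fact 9.3 is essential, on a toy example (ours): among positive words in \(a,b\), the set \(\{aa,ba\}\) involves no cancellation and has no prefix relation, so it freely generates a free sub-semigroup; by contrast, \(\{a,aa\}\) violates the prefix condition and satisfies the nontrivial relation \(a\cdot aa=aa\cdot a\).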
Define

\[\mathcal{A}_{6}=\left\{\,g\in\mathcal{A}_{5}:\#\widetilde{\mathcal{H}}_{g}\geqslant 2\,\right\}\quad\text{where}\quad\widetilde{\mathcal{H}}_{g}\coloneqq\left\{\,h\in\widehat{\mathcal{H}}_{g}:hg\notin\mathscr{P}\,\right\}.\]

Then by Lemma 9.7 and the estimate \(\#\widehat{\mathcal{H}}_{g}\geqslant(l+1)^{-2}(2k)^{-l_{0}-2}2^{cl},\)

\[\#(\mathcal{A}_{5}\setminus\mathcal{A}_{6})\big{(}(l+1)^{-2}(2k)^{-l_{0}-2}2^{cl}-2\big{)}\leqslant\#\mathcal{B}\leqslant m\#\mathcal{A}_{5}\leqslant l\#\mathcal{A}_{5}.\]

As \(l\) was chosen large enough, we have

\[(l+1)^{-2}(2k)^{-l_{0}-2}2^{cl}-2\geqslant 2l, \tag{9.3}\]

which leads to \(\#\mathcal{A}_{6}\geqslant\#\mathcal{A}_{5}/2\). Let \(\mathcal{A}_{7}\) be a maximal subset of \(\mathcal{A}_{6}\) such that the set

\[\mathcal{A}_{8}\coloneqq\bigcup_{g\in\mathcal{A}_{7}}\left\{\,hg:h\in\widetilde{\mathcal{H}}_{g}\,\right\}\]

has no prefix relation (that is, for any \(f_{1},f_{2}\in\mathcal{A}_{8},\,f_{1}\nprec f_{2}\)). Then \(\#\mathcal{A}_{8}\geqslant 2\#\mathcal{A}_{7}\).

**Lemma 9.8**.: _The subset \(\mathcal{A}_{7}\) is \(m\)-dense in \(\mathcal{A}_{6}\) with respect to the (right-invariant) word metric, that is,_

\[\forall g\in\mathcal{A}_{6},\,\exists\tilde{g}\in\mathcal{A}_{7},\quad|\tilde{g}g^{-1}|<m.\]

_In particular \(\#\mathcal{A}_{7}\geqslant(2k)^{-l}\#\mathcal{A}_{6}\)._

Proof.: Clearly, for any \(g\in\mathcal{A}_{6}\), there is no prefix relation inside \(\{\,hg:h\in\widetilde{\mathcal{H}}_{g}\,\}\), since all \(h\in\widetilde{\mathcal{H}}_{g}\) have the same length and \(\operatorname{cl}(h,g)=0\). We claim that for any \(g_{1},g_{2}\in\mathcal{A}_{6}\), if some prefix relation is present between \(\{\,h_{1}g_{1}:h_{1}\in\widetilde{\mathcal{H}}_{g_{1}}\,\}\) and \(\{\,h_{2}g_{2}:h_{2}\in\widetilde{\mathcal{H}}_{g_{2}}\,\}\), say \(h_{1}g_{1}\preceq h_{2}g_{2}\), then \(|g_{2}g_{1}^{-1}|<m\). Indeed, as \(|h_{1}|=|h_{2}|\), we deduce from \(h_{1}g_{1}\preceq h_{2}g_{2}\) that \(g_{1}\preceq g_{2}\). Moreover, \(\operatorname{cl}(h_{1},g_{1})=0\) and \(h_{1}g_{1}\not\preceq g_{2}\) by construction of \(\widetilde{\mathcal{H}}_{g}\). Hence \(m+|g_{1}|=|h_{1}g_{1}|>|g_{2}|\). It follows that \(|g_{2}g_{1}^{-1}|<m\).

For any \(g\in\mathcal{A}_{6}\setminus\mathcal{A}_{7}\), as we cannot put \(g\) into \(\mathcal{A}_{7}\) without creating prefix relations, we know \(g\) is within distance \(m\) of some element in \(\mathcal{A}_{7}\).

To summarize, we proved the following statement.

**Lemma 9.9**.: _There exist constants \(l_{0},l,C\in\mathbb{N}\) depending only on \(G\) satisfying the following properties. Given \(\mathcal{A}\subset G\) as in (9.1), there exists a closed interval \(I\) that intersects \(\Lambda\) and a subset \(\mathcal{A}_{8}\subset G\) with_

\[\#\mathcal{A}_{8}\geqslant C^{-1}l^{-1}(2k)^{-2l-l_{0}-2}(\#\mathcal{A}-(2k)^{l+l_{0}+1})\]

_such that for every \(g\in\mathcal{A}_{8}\),_

1. \(gI\subset I\)_,_
2. \(g^{\prime}|_{I}\geqslant M^{-2l-2l_{0}}2^{-n}\) _where_ \(M=\max_{\gamma\in\mathcal{G}}\left\|\gamma\right\|_{C^{1}}\)_,_
3. _there is_ \(f\in\mathcal{A}_{8}\) _such that_ \(gI\cap fI=\varnothing\)_,_
4. _for any_ \(f\in\mathcal{A}_{8}\)_,_ \(\operatorname{cl}(f,g)=0\)_,_
5. _for any_ \(f\in\mathcal{A}_{8}\setminus\{g\}\)_,_ \(f\not\preceq g\)_._

Now we are ready to conclude the proof of Proposition 9.1 from Lemma 9.9 by a random walk argument. Let \(\mu\) be the uniform probability measure on \(\mathcal{A}_{8}\) and consider the random walk induced by \(\mu\) on \(I\).
Assuming \(n\) is large enough, every \(g\in\mathcal{A}_{8}\) is strictly contracting on \(I\). Let \(\nu\) be the unique stationary measure of \(\mu\) on \(I\); then \(\nu\) is supported on \(\Lambda\) since \(I\cap\Lambda\) is nonempty and invariant under \(\mathcal{A}_{8}\). On the one hand, from item (2), we obtain an estimate of the Lyapunov exponent on \(I\) as

\[|\lambda(\mu,\nu)|\leqslant n+O_{G}(1).\]

On the other hand, by the last two items, \(\mathcal{A}_{8}\) freely generates a free semigroup in \(G\). Since the group \(G\) is locally discrete, \((g|_{I})_{g\in\mathcal{A}_{8}}\) is a collection of distinct elements of \(C_{+}^{2}(I,I)\). The semigroup \(T_{\mu}\) is also a free semigroup freely generated by \(\operatorname{supp}\mu\). Thus the random walk entropy of \(\mu\) is

\[h_{\mathrm{RW}}(\mu)=\log\#\mathcal{A}_{8}\geqslant n(\delta(G)-\varepsilon)-O_{G}(1).\]

Condition (3) of Lemma 9.9 implies that there is no \(T_{\mu}\)-invariant measure on \(I\). Then we can apply Theorem 2.8 to the random walk on \(I\) induced by \(\mu\). By the assumption that \(G\) is locally discrete, the second alternative in Theorem 2.8 does not hold. Hence \(\nu\) is exact dimensional and

\[\dim_{\mathrm{H}}\nu=\dim\nu=\frac{h_{\mathrm{RW}}(\mu)}{|\lambda(\mu,\nu)|}\geqslant\frac{n(\delta(G)-\varepsilon)-O_{G}(1)}{n+O_{G}(1)}.\]

Fixing \(l\) sufficiently large and letting \(n\to+\infty\), we obtain

\[\dim_{\mathrm{H}}\nu\geqslant\delta(G)-2\varepsilon.\]

As \(\varepsilon>0\) is arbitrary, Proposition 9.1 follows.

Proof of Theorem 2.17.: The first statement follows directly from Proposition 8.3. In order to prove the second statement, we begin by noting that \(G\) has a free subgroup \(G_{1}\) of finite index by assumption. Using Lemma 3.1, we see that \(\Lambda\) is also the unique minimal set of \(G_{1}\). It is not hard to show that \(\delta(G_{1})=\delta(G)\), so we may assume without loss of generality that \(G\) is a free group. By a well-known result of Herman (see [38, Chapter VII]), the cyclic group generated by a \(C^{2}\) diffeomorphism with an irrational rotation number is not discrete in the \(C^{1}\) topology. Hence, our group \(G\) contains only elements with rational rotation number. Then, as in the proof of Lemma 3.2, we can show that \(G\) does not preserve any probability measure. Thus, by Proposition 9.1, \(\dim_{\mathrm{H}}\Lambda\geqslant\delta(G)\).

Proof of Theorem D.: Since every finitely generated subgroup of \(\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) preserving an exceptional minimal set is locally discrete by Corollary 3.6, virtually free by [33], and satisfies property (\(\Lambda\star\)) by Theorem 3.4, Theorem D is a particular case of Theorem 2.17.

Proof of Corollary 2.18.: In the exceptional case, we have \(\dim_{\mathrm{H}}\Lambda=\delta(G)\leqslant 1\) by Theorem D. If \(G\) acts minimally and satisfies property (\(\star\)), then \(\delta(G)\geqslant 1\) by Theorem 2.17(1).

### The \(C^{2}\) dynamical critical exponent

This subsection is devoted to proving Theorem 2.25. In order to show \(\delta_{2}(G)\geqslant\dim_{\mathrm{H}}\Lambda\), we require a variant of Proposition 8.3. To this end, we need to slightly improve Proposition 8.1, i.e., [24, Proposition 6.4]. Recall that \(\widetilde{\varkappa}\) is the distortion norm defined in Section 3.3. Combining (3.8) from Proposition 3.8 with [24, Lemma 6.3], we obtain the following.
**Proposition 9.10**.: _Let \(G\subset\mathrm{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup without finite orbits in \(\mathbb{S}^{1}\). Let \(\Lambda\) be its unique minimal set. Assume that \(G\) satisfies property (\(\star\)) or (\(\Lambda\star\)). Then there exist constants \(\varepsilon_{0}>0,C>0\) such that for every point \(x\in\Lambda\setminus G(\mathrm{NE}),\) there are intervals \(I\subset\mathbb{S}^{1}\) of arbitrarily small length, together with elements \(g\in G\), such that \(gI=B(gx,\varepsilon_{0})\) and \(\widetilde{\varkappa}(g^{-1},gI)\leqslant C\)._

Using this, we see easily that Proposition 8.3 holds with the condition \(\varkappa(g,I_{0})\leqslant 1\) replaced by \(\widetilde{\varkappa}(g,I_{0})\leqslant C.\) We deduce immediately the following.

**Proposition 9.11**.: _Let \(G\subset\mathrm{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup without finite orbits. Assume that \(G\) satisfies property (\(\star\)) or (\(\Lambda\star\)). Then_

\[\delta_{2}(G)\geqslant\dim_{\mathrm{H}}\Lambda.\]

It suffices to show that, under the condition of local discreteness, \(\delta_{2}(G)\) is always bounded above by \(\dim_{\mathrm{H}}\Lambda.\) Let \(\delta=\delta_{2}(G)\); the key lemma in this case is stated below.

**Lemma 9.12**.: _Let \(G\subset\mathrm{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup preserving no Borel probability measures on \(\mathbb{S}^{1}\). Let \(\Lambda\subset\mathbb{S}^{1}\) be its unique minimal set. Let \(\varepsilon_{1}>0\) be the constant of Lemma 8.4. For any \(x\in\Lambda\), we have_

\[\delta_{2}(G)=\lim_{C\to+\infty}\limsup_{n\to+\infty}\frac{1}{n}\log\#\left\{\,g\in G:g^{\prime}|_{B(x,\varepsilon_{1})}\geqslant 2^{-n},\,\widetilde{\varkappa}(g,B(x,\varepsilon_{1}))\leqslant C\,\right\}.\]

The proof is identical to that of Lemma 9.2 and left to the reader.

Proof of Theorem 2.25.: Fix a point \(x\in\Lambda\). By the previous lemma, for any \(\varepsilon>0\), there is a constant \(C>0\) such that \(\limsup_{n\to+\infty}\frac{1}{n}\log\#\mathcal{A}_{n}\geqslant\delta_{2}(G)-\varepsilon\), where for \(n\in\mathbb{N}\),

\[\mathcal{A}_{n}=\left\{\,g\in G:g^{\prime}|_{B(x,\varepsilon_{1})}\geqslant 2^{-n},\,\widetilde{\varkappa}(g,B(x,\varepsilon_{1}))\leqslant C\,\right\}.\]

We claim that there are subsets \(\widetilde{\mathcal{A}}_{n}\subset\mathcal{A}_{n}\) with the additional property that the intervals \(\overline{gB(x,\varepsilon_{1})}\), \(g\in\widetilde{\mathcal{A}}_{n}\), are pairwise disjoint, while \(\limsup_{n\to+\infty}\frac{1}{n}\log\#\widetilde{\mathcal{A}}_{n}\geqslant\delta_{2}(G)-\varepsilon\) still holds. Assuming this claim, the same argument as in Section 8.2 leads to the existence of a probability measure \(\nu\) on \(\Lambda\) with dimension \(\dim_{\mathrm{H}}\nu\geqslant\delta_{2}(G)-2\varepsilon.\) Combined with Proposition 9.11, this concludes the proof of Theorem 2.25.

Now we turn to the proof of the claim. Thanks to the control of distortion, we can write

\[\mathcal{A}_{n}=\bigcup_{k=0}^{n}\mathcal{B}_{k}\]

where

\[\mathcal{B}_{k}=\left\{\,g\in G:2^{-k}\leqslant g^{\prime}|_{B(x,\varepsilon_{1})}\leqslant 2^{-k+C+1},\,\widetilde{\varkappa}(g,B(x,\varepsilon_{1}))\leqslant C\,\right\}.\]

We distinguish two cases.

Case 1. There is \(n\in\mathbb{N}\) such that \(\mathcal{B}_{n}\) is infinite. Then we will apply a pigeonhole argument, similar to that in the proof of Lemma 6.4, to construct elements tending to \(\mathrm{id}\) in the \(C^{1}\)-topology.
For every positive integer \(k\), let \(y_{0},\cdots,y_{4k}\) be evenly spaced points on the circle such that \(J=\overline{B(x,\varepsilon_{1})}=[y_{0},y_{4k}].\) Then there exist \(f,g\in\mathcal{B}_{n}\) such that \(d(f(y_{i}),g(y_{i}))\leqslant 1/k\) and \(|\log f^{\prime}(y_{i})-\log g^{\prime}(y_{i})|\leqslant 1/k\) for every \(i=0,1,\cdots,4k.\) Let \(J^{\prime}=\overline{B(x,\varepsilon_{1}/2)}=[y_{k},y_{3k}]\). One can show that \(f(J^{\prime})\subset g(J)\) if \(k\) is sufficiently large. By an estimate similar to Lemma 6.4, the elements \(g^{-1}f\) tend to \(\mathrm{id}\) on \(J^{\prime}\) in the \(C^{1}\)-topology as \(k\to+\infty\). This contradicts the local discreteness assumption.

Case 2. Otherwise, every \(\mathcal{B}_{n}\) is a finite set. Then

\[\limsup_{n\to+\infty}\frac{1}{n}\log\#\mathcal{B}_{n}=\limsup_{n\to+\infty}\frac{1}{n}\log\#\mathcal{A}_{n}\geqslant\delta_{2}(G)-\varepsilon.\]

Let \(\widetilde{\mathcal{A}}_{n}\subset\mathcal{B}_{n}\) be a maximal subset such that \(\left\{\overline{gB(x,\varepsilon_{1})}:g\in\widetilde{\mathcal{A}}_{n}\right\}\) forms a family of disjoint closed intervals. Assume for a contradiction that

\[\limsup_{n\to+\infty}\frac{1}{n}\log\#\widetilde{\mathcal{A}}_{n}\leqslant\delta_{2}(G)-\varepsilon-\varepsilon^{\prime}\]

for some \(\varepsilon^{\prime}>0.\) Note that for every \(g\in\mathcal{B}_{n}\), \(\overline{gB(x,\varepsilon_{1})}\) intersects \(\overline{\widetilde{g}B(x,\varepsilon_{1})}\) for some \(\widetilde{g}\in\widetilde{\mathcal{A}}_{n}\). By a pigeonhole argument, there exist infinitely many positive integers \(n\) for which there are at least \(2^{\varepsilon^{\prime}n/2}\) elements \(g_{1},\cdots,g_{m}\in G\) satisfying

1. there exists an interval \(I\) with \(|I|\leqslant 2^{-n+C+2}\) such that \(g_{i}(B(x,\varepsilon_{1}))\subset I\), and
2. for all \(1\leqslant i\leqslant m\), \(g^{\prime}_{i}|_{B(x,\varepsilon_{1})}\in[2^{-n},2^{-n+C+1}]\) and \(\widetilde{\varkappa}(g_{i},B(x,\varepsilon_{1}))\leqslant C.\)

Now we apply Lemma 6.4; this again contradicts local discreteness.

## 10 Groups with parabolic elements

In this section, we show some lower bounds for the dynamical critical exponents of groups having parabolic fixed points. Combined with Theorem D, this gives lower bounds for the dimension of the minimal sets of such groups, i.e., Theorem E.

### Growth of derivatives around a parabolic fixed point

The first step is an estimate of the dynamics of an analytic diffeomorphism near a parabolic fixed point. Recall the notion of the multiplicity of a fixed point.

**Proposition 10.1**.: _Let \(g\in\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) be an analytic diffeomorphism. Let \(x\) be a parabolic fixed point with multiplicity \(k+1\), \(k\in\mathbb{N}\). Assume that \(g\) is contracting on the interval \(]x,x^{\prime}[\) for some \(x^{\prime}\in\mathbb{S}^{1}\). Then there exists a constant \(c>0\) such that_

\[\forall n\in\mathbb{N},\,\forall z\in[x,x^{\prime}[,\quad(g^{n})^{\prime}(z)\geqslant cn^{-\frac{k+1}{k}}.\]

As a consequence, for any point \(z\) in the basin of attraction of a fixed point \(x\) of multiplicity \(k+1\) of a diffeomorphism \(g\), we have

\[\delta(\text{cyclic group generated by }g,z)\geqslant\frac{k}{k+1},\]

in the sense of a dynamical critical exponent on a set, Definition 2.19.

Proof of Proposition 10.1.: By our assumption, the Taylor series at \(x\) is of the form

\[g(z)=z-a(z-x)^{k+1}+(\text{higher order terms}), \tag{10.1}\]

with \(a>0\).
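For orientation, consider the model case where (10.1) has no higher-order terms, i.e., \(g(z)=z-a(z-x)^{k+1}\) (a simplified computation of ours, not needed for the proof). Writing \(u_{n}=g^{n}(z_{0})-x\) for a point \(z_{0}>x\) close to \(x\), the recursion reads

\[u_{n+1}=u_{n}\bigl(1-au_{n}^{k}\bigr),\qquad\text{hence}\qquad u_{n+1}^{-k}=u_{n}^{-k}\bigl(1-au_{n}^{k}\bigr)^{-k}=u_{n}^{-k}+ka+O(u_{n}^{k}),\]

so \(u_{n}^{-k}\sim kan\) and \(u_{n}\sim(kan)^{-1/k}\), in agreement with Lemma 10.2 below.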
Take \(\varepsilon>0\) sufficiently small such that (10.1) converges on \([x,x+\varepsilon].\) Then \(g(z)<z\) on \(]x,x+\varepsilon]\), and \(x_{n}=g^{n}(x+\varepsilon)\) converges to \(x\). For each positive integer \(n\), set \(I_{n}:=[x_{n},x_{n-1}]\). Then we have the following two lemmas.

**Lemma 10.2** ([56, Lemma 10.1]).: \(\lim_{n\to\infty}\sqrt[k]{n}(x_{n}-x)=1/\sqrt[k]{ka}>0.\)

**Lemma 10.3**.: _There exists \(C>0\) such that \(\varkappa(g^{n},I_{m})\leqslant C\) for all positive integers \(m,n\)._

Proof.: Let \(M=\widetilde{\varkappa}(g,[x,x_{0}])\). For every pair of positive integers \(m,n\), by the discussion in Section 3.3 we have

\[\varkappa(g^{n},I_{m})\leqslant\sum_{i=0}^{n-1}\varkappa(g,I_{m+i})\leqslant M\sum_{i=0}^{n-1}|I_{m+i}|\leqslant M\varepsilon,\]

since the intervals \(I_{m+i}\) fit into \([x,x_{0}]\) without overlapping.

By (10.1), there is a constant \(\varepsilon^{\prime}>0\) such that

\[\forall z\in[x,x+\varepsilon^{\prime}],\quad z-g(z)\in\left[\frac{a}{2}(z-x)^{k+1},2a(z-x)^{k+1}\right].\]

Then for every \(n\) large enough, we have

\[\frac{a}{2}\leqslant\frac{|I_{n}|}{|x_{n}-x|^{k+1}}\leqslant 2a.\]

Combined with Lemma 10.2, we have

\[n^{\frac{k+1}{k}}|I_{n}|\asymp_{g,I_{1}}1. \tag{10.2}\]

Let \(n\) be a positive integer and \(z\in(x,x+\varepsilon]\). Then \(z\in I_{m}\) for some \(m\). As \(g^{n}I_{m}=I_{n+m}\) and \(\varkappa(g^{n},I_{m})\leqslant C\) by Lemma 10.3, we have

\[(g^{n})^{\prime}(z)\gg_{g,I_{1}}\frac{|I_{n+m}|}{|I_{m}|}\gg_{g,I_{1}}\Big{(}\frac{n+m}{m}\Big{)}^{-\frac{k+1}{k}}\gg n^{-\frac{k+1}{k}}.\]

To show the same estimate for points \(z\) in \((x+\varepsilon,x^{\prime})\), observe that there exists \(l\geqslant 1\) such that \(g^{l}\big{(}(x+\varepsilon,x^{\prime})\big{)}\subset(x,x+\varepsilon]\), hence the conclusion follows.

### Boosting the dynamical critical exponent

As we remarked, Proposition 10.1 yields the weak form of Theorem E, i.e., \(\delta(G)\geqslant\frac{k}{k+1}\). In the rest of the section, we will prove Theorem E, i.e., that the strict inequality holds as well. Our proof is partially inspired by the classical results of Beardon [11] and Patterson [61] in the case of Fuchsian groups acting on \(\mathbb{S}^{1}\).

Proof of Theorem E.: Let \(x\in\Lambda\) be a parabolic fixed point of an element \(g\in G\setminus\{\operatorname{id}\}\). Let \(k+1\) be its multiplicity. Without loss of generality, we may assume that there exists a sequence of points of \(\Lambda\) accumulating to \(x\) from the right. Replacing \(g\) by \(g^{-1}\) if necessary, we may also assume that \(g\) is contracting on a right neighborhood \(]x,x^{\prime}[\) of \(x\).

By Lemma 3.2, \(G\) does not preserve any probability measure on \(\mathbb{S}^{1}.\) Therefore, we can apply Lemma 7.1 to obtain an element \(h\in G\) having only hyperbolic fixed points on \(\mathbb{S}^{1}\) and at least one fixed point in \(]x,x^{\prime}[\). Let \(y\) be the leftmost element in \(]x,x^{\prime}[\cap\operatorname{Fix}(h)\). Replacing \(h\) by \(h^{-1}\) if necessary, we may assume that \(y\) is an attracting fixed point. By Theorem 3.5, all the non-trivial elements in \(\operatorname{Stab}_{G}(x)\) share the same set of fixed points. Thus \(h\) does not fix \(x\). By our choice of \(y\), replacing \(h\) by a large power if necessary, we may assume that \(h\) contracts the interval \(I=[x,y]\) strongly enough that \(hI\cap gI=\varnothing\). Hence the semigroup \(T\) generated by \(g\) and \(h\) is a (freely generated) free semigroup, since \(g\) and \(h\) have pingpong dynamics on \(I\).
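Let us briefly recall why pingpong forces freeness here (a standard sketch, included for completeness): since \(gI\) and \(hI\) are disjoint subintervals of \(I\), a nonempty positive word in \(g,h\) maps \(I\) into \(gI\) or \(hI\) according to its leftmost letter. After cancelling common leftmost letters of two distinct positive words (both maps being injective), either the words differ in their leftmost letter, in which case their images of \(I\) lie in the disjoint intervals \(gI\) and \(hI\); or one word becomes empty, in which case the other maps \(I\) strictly inside \(gI\) or \(hI\subsetneq I\). In both cases the two maps differ, so \(T\) is freely generated by \(g\) and \(h\).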
For any \(f\in C^{1}_{+}(I,I)\), we define the co-norm of \(f\) on \(I\) by

\[\gamma(f)=\inf_{z\in I}f^{\prime}(z).\]

Note that the co-norm is super-multiplicative: \(\gamma(f_{1}f_{2})\geqslant\gamma(f_{1})\gamma(f_{2})\) for all \(f_{1},f_{2}\in C^{1}_{+}(I,I)\). For a semigroup \(H\) and an exponent \(s\geqslant 0\), define

\[\varphi(H,s)=\sum_{f\in H\setminus\{\operatorname{id}\}}\gamma(f)^{s}.\]

We denote by \(\langle g\rangle\) and \(\langle h\rangle\) the sub-semigroups generated by \(g\) and by \(h\), respectively. By freeness of \(T\) we have

\[\varphi(T,s) \geqslant\sum_{j\geqslant 1}\sum_{n_{1},\dots,n_{j}\geqslant 1}\sum_{m_{1},\dots,m_{j}\geqslant 1}\gamma(g^{n_{1}}h^{m_{1}}\cdots g^{n_{j}}h^{m_{j}})^{s}\]
\[\geqslant\sum_{j\geqslant 1}\sum_{n_{1},\dots,n_{j}\geqslant 1}\sum_{m_{1},\dots,m_{j}\geqslant 1}\left(\gamma(g^{n_{1}})\gamma(h^{m_{1}})\cdots\gamma(g^{n_{j}})\gamma(h^{m_{j}})\right)^{s}\]
\[=\sum_{j\geqslant 1}\varphi(\langle g\rangle\,,s)^{j}\varphi(\langle h\rangle\,,s)^{j}.\]

In particular, \(\varphi(T,s)\) diverges if \(\varphi(\langle g\rangle\,,s)\varphi(\langle h\rangle\,,s)\geqslant 1\). On the one hand, by Proposition 10.1, for \(s>\frac{k}{k+1}\), we have

\[\varphi(\langle g\rangle\,,s)\gg_{g,I}\sum_{n\geqslant 1}n^{-\frac{k+1}{k}s}\gg\frac{k}{(k+1)s-k}.\]

Hence \(\varphi(\langle g\rangle\,,s)\to+\infty\) as \(s\downarrow\frac{k}{k+1}.\) On the other hand, since \(h\) is contracting on \(I\), we have

\[\forall s\in]0,1],\quad\varphi(\langle h\rangle\,,s)\geqslant\varphi(\langle h\rangle\,,1)>0.\]

From these we deduce that there exists \(s_{0}>\frac{k}{k+1}\) such that \(\varphi(T,s_{0})\) diverges. It follows that

\[\limsup_{n\to\infty}\frac{1}{n}\log\#\left\{f\in T:f^{\prime}|_{I}\geqslant 2^{-n}\right\}\geqslant s_{0}>\frac{k}{k+1}.\]

Indeed, if the left-hand side were some \(s<s_{0}\), then grouping the elements of \(T\) according to \(\gamma(f)\in[2^{-n},2^{-n+1})\) would make the series \(\varphi(T,s_{0})\) converge. Note that \(\Lambda\cap]x,y[\neq\varnothing\). Let \(B(z,\varepsilon_{0})\) be a ball centered at \(z\in\Lambda\) and contained in \(I=[x,y]\). Then the definition of \(\delta(G)\) implies that

\[\delta(G)\geqslant\limsup_{n\to\infty}\frac{1}{n}\log\#\left\{f\in G:f^{\prime}|_{B(z,\varepsilon_{0})}\geqslant 2^{-n}\right\}\geqslant s_{0}>\frac{k}{k+1}.\qed\]

## 11 The dynamical critical exponent and conformal measures

### Conformal measures

Recall the notion of conformal measures in Definition 2.26. In our case, we assume that \(G\) is a finitely generated subgroup of \(\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) without finite orbits and let \(\Lambda\) be the unique minimal set. We especially care about the case where \(\Lambda\) is exceptional. A \(\delta\)-conformal measure always refers to a probability measure that is \(\delta\)-conformal with respect to the action of \(G\) on \(\mathbb{S}^{1}.\)

In the remainder of this subsection, we will prove Theorem 2.27 by establishing two lemmas, namely Lemma 11.1 and Lemma 11.2 below.

**Lemma 11.1**.: _Let \(G\subset\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup with an exceptional minimal set \(\Lambda\subset\mathbb{S}^{1}.\) Assume that \(G\) satisfies property \((\Lambda\star).\) If there exists a \(\delta\)-conformal measure supported on \(\Lambda\), then \(\delta\geqslant\dim_{\mathrm{H}}\Lambda.\)_

The proof below also works for the situation where \(G\) acts minimally on \(\mathbb{S}^{1}\) and satisfies property \((\star)\). The statement \(\delta\geqslant 1\) in this case is due to Deroin-Kleptsyn-Navas [23, Theorem F(1)], who showed moreover that the Lebesgue measure is the unique \(1\)-conformal measure.
Proof.: Let \(\nu\) be a \(\delta\)-conformal measure supported on \(\Lambda\). Its support \(\operatorname{supp}\nu\) is \(G\)-invariant and hence \(\operatorname{supp}\nu=\Lambda.\) Write \(\Lambda^{\prime}=\Lambda\setminus G(\operatorname{NE})\) so that \(\dim_{\mathrm{H}}\Lambda^{\prime}=\dim_{\mathrm{H}}\Lambda\) by Theorem 3.3.

Recall the constant \(\varepsilon_{0}>0\) from Proposition 8.1 and the notion of expandable interval from Definition 8.2. Let \(I\) be an expandable interval and let \(g\in G\) be the associated diffeomorphism, that is, \(\varkappa(g,I)\leqslant 1\) and \(gI\) has length \(2\varepsilon_{0}\) and is centered in \(\Lambda\). By the distortion control \(\varkappa(g,I)\leqslant 1\) and the definition of conformal measure,

\[\left(\frac{\varepsilon_{0}}{2|I|}\right)^{\delta}\nu(I)\leqslant\nu(gI)=\int_{I}|g^{\prime}(x)|^{\delta}\mathrm{d}\nu(x)\leqslant\left(\frac{2\varepsilon_{0}}{|I|}\right)^{\delta}\nu(I). \tag{11.1}\]

By compactness,

\[\nu(gI)\geqslant\inf_{x\in\Lambda}\nu(B(x,\varepsilon_{0}))>0.\]

It follows that

\[\nu(I)\gg_{G}|I|^{\delta}.\]

By Proposition 8.1, the set of all expandable intervals is a Vitali cover of \(\Lambda^{\prime}\), that is, every point in \(\Lambda^{\prime}\) is contained in an expandable interval of arbitrarily small length. For any \(\rho>0\), by the Vitali covering lemma, there exists a countable set \(\mathcal{E}\) of pairwise disjoint expandable intervals of length at most \(\rho\) such that \(\Lambda^{\prime}\subset\bigcup_{I\in\mathcal{E}}5I,\) where \(5I\) denotes the interval with the same center as \(I\) and \(5\) times its length. Then

\[H_{5\rho}^{\delta}(\Lambda^{\prime})\leqslant\sum_{I\in\mathcal{E}}|5I|^{\delta}\ll_{G}\sum_{I\in\mathcal{E}}\nu(I)\leqslant 1.\]

Letting \(\rho\to 0^{+}\), we find \(H^{\delta}(\Lambda^{\prime})<\infty.\) Hence \(\dim_{\mathrm{H}}\Lambda=\dim_{\mathrm{H}}\Lambda^{\prime}\leqslant\delta.\)

**Lemma 11.2**.: _Let \(G\subset\operatorname{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup with an exceptional minimal set \(\Lambda\subset\mathbb{S}^{1}.\) Assume that \(G\) satisfies property \((\Lambda\star).\) If there exists an atomless \(\delta\)-conformal measure supported on \(\Lambda\), then \(\delta\leqslant\dim_{\mathrm{H}}\Lambda.\)_

Proof.: Let \(\nu\) be an atomless \(\delta\)-conformal measure supported on \(\Lambda\). Now for any \(x\in\Lambda^{\prime}\), for each \(x\)-expandable interval \(I\) with corresponding diffeomorphism \(g\in G\), we claim that there is a subinterval \(I^{\prime}\) such that

\[x\in I^{\prime}\subset 5I^{\prime}\subset I\quad\text{and}\quad gI^{\prime}\supset B(gx,\varepsilon_{0}/100). \tag{11.2}\]

Indeed, write \(I=]x-\rho,x+\rho^{\prime}[\). Recall that \(\varkappa(g,I)\leqslant 1\) and \(gI=]gx-\varepsilon_{0},gx+\varepsilon_{0}[\). It follows that \(1/2\leqslant\rho^{\prime}/\rho\leqslant 2\). Define \(I^{\prime}=]x-\rho/50,x+\rho^{\prime}/50[\) so that \(5I^{\prime}\subset I.\) Again by distortion control, \(B(gx,\varepsilon_{0}/100)\subset gI^{\prime},\) proving the claim.

For \(n\in\mathbb{N}\), let \(\mathcal{E}_{n}\) be the set of intervals \(I^{\prime}\) obtained this way and satisfying moreover \(|I^{\prime}|\in[2^{-n-1},2^{-n}[\). Let \(\widetilde{\mathcal{E}}_{n}\) be a maximal subset of \(\mathcal{E}_{n}\) consisting of pairwise disjoint intervals.
By maximality, we have \(\bigcup_{I^{\prime}\in\widetilde{\mathcal{E}}_{n}}5I^{\prime}\supset\bigcup_{I^{\prime}\in\mathcal{E}_{n}}I^{\prime}.\) By Proposition 8.1, every point in \(\Lambda^{\prime}\) falls in \(\bigcup_{I^{\prime}\in\mathcal{E}_{n}}I^{\prime}\) for infinitely many \(n\in\mathbb{N}\). Note that \(\nu\) is atomless and hence \(\nu(\Lambda^{\prime})=1\). By the Borel-Cantelli lemma,

\[\sum_{n=1}^{\infty}\sum_{I^{\prime}\in\widetilde{\mathcal{E}}_{n}}\nu(5I^{\prime})\geqslant\sum_{n=1}^{\infty}\nu\left(\bigcup_{I^{\prime}\in\mathcal{E}_{n}}I^{\prime}\right)=\infty.\]

For every \(I^{\prime}\in\mathcal{E}_{n}\), by (11.1),

\[\nu(5I^{\prime})\leqslant\nu(I)\ll_{G}|I|^{\delta}\ll 2^{-\delta n},\]

where the implied constant depends only on \(G\). Combining these, we have

\[\limsup_{n\to\infty}\frac{1}{n}\log\#\widetilde{\mathcal{E}}_{n}\geqslant\delta.\]

By Lemma 3.2, \(G\) does not preserve any invariant probability measure. Starting from the set \(\widetilde{\mathcal{E}}_{n}\), which consists of intervals satisfying (11.2), the arguments in Sections 8.1 and 8.2 allow us to construct stationary measures on \(\Lambda\) of dimension arbitrarily close to \(\delta\). We would obtain a contradiction if \(\delta>\dim_{\mathrm{H}}\Lambda.\)

Recall the following result of Deroin-Kleptsyn-Navas, which is useful in the sequel.

**Theorem 11.3** ([23, Theorem F]).: _Let \(G\subset\mathrm{Diff}_{+}^{2}(\mathbb{S}^{1})\) be a finitely generated subgroup with an exceptional minimal set \(\Lambda.\) Assume that \(G\) satisfies property \((\Lambda\star)\). If there exists an atomless \(\delta\)-conformal measure supported on \(\Lambda,\) then \(\delta<1.\)_

### Basic properties of the dynamical critical exponent on sets

The goal of the rest of the section is to prove Theorem 2.28. Recalling the definition of \(\delta(G,\Delta)\) in Definition 2.19, we first discuss some basic properties of \(\delta(G,\Delta)\) in this subsection.

**Lemma 11.4**.: _Let \(G\) be a subgroup of \(\mathrm{Diff}_{+}^{2}(\mathbb{S}^{1})\) and \(\varnothing\neq\Delta\subset\mathbb{S}^{1}\)._

1. _If_ \(\Delta\subset\Delta^{\prime}\subset\mathbb{S}^{1}\)_, then_ \(\delta(G,\Delta)\leqslant\delta(G,\Delta^{\prime}).\)
2. _If_ \(\Delta_{i}\subset\mathbb{S}^{1}\)_,_ \(i\in\mathcal{I}\)_, and_ \(\Delta=\bigcup_{i\in\mathcal{I}}\Delta_{i}\)_, then_ \[\delta(G,\Delta)=\sup_{i\in\mathcal{I}}\delta(G,\Delta_{i}).\]
3. _Let_ \(\overline{\Delta}\) _be the closure of_ \(\Delta\)_; then_ \(\delta(G,\Delta)=\delta(G,\overline{\Delta}).\)
4. _For any_ \(g\in G\)_,_ \(\delta(G,\Delta)=\delta(G,g\Delta)\)_._
5. _Let_ \(G\Delta:=\bigcup_{g\in G}g\Delta\)_; then_ \[\delta(G,\Delta)=\delta(G,G\Delta)=\delta(G,\overline{G\Delta}).\]
6. _If_ \(H\subset G\) _is a subgroup of finite index, then_ \(\delta(H,\Delta)=\delta(G,\Delta).\)

Proof.: We only prove (2); the proofs of the remaining assertions are straightforward.
It is obvious that \(\delta(G,\Delta)\geqslant\sup_{i\in\mathcal{I}}\delta(G,\Delta_{i}).\) Conversely, for every \(\varepsilon>0\) we can take a finite \(\varepsilon/2\)-dense subset of \(\Delta\), denoted by \(\mathcal{K}=\mathcal{K}(\varepsilon).\) For every \(x\in\Delta,\) there exists \(y\in\mathcal{K}\) such that \(B(x,\varepsilon)\supset B(y,\varepsilon/2).\) Hence

\[\limsup_{n\to\infty}\frac{1}{n}\log\#\left\{g\in G:\exists x\in\Delta,g^{\prime}|_{B(x,\varepsilon)}\geqslant 2^{-n}\right\}\leqslant\max_{j\in\mathcal{J}}\delta(G,\Delta_{j})\leqslant\sup_{i\in\mathcal{I}}\delta(G,\Delta_{i}),\]

where \(\mathcal{J}=\mathcal{J}(\varepsilon)\) is a finite subset of \(\mathcal{I}\) such that \(\mathcal{K}\subset\bigcup_{j\in\mathcal{J}}\Delta_{j}.\) As \(\varepsilon>0\) is arbitrary, the conclusion follows.

**Corollary 11.5**.: _Assume moreover that \(G\) does not have any finite orbit. For every nonempty subset \(\Delta\) of the circle, we have_

\[\delta(G,\Delta)\geqslant\delta(G).\]

_In particular, \(\delta(G,\Delta)=\delta(G)\) if \(G\) is minimal._

Proof.: Recall that under the assumption, \(G\) has a unique minimal set \(\Lambda\subset\mathbb{S}^{1}\) and \(\delta(G)=\delta(G,\Lambda)\). For every nonempty \(\Delta\subset\mathbb{S}^{1}\), we have \(\overline{G\Delta}\supset\Lambda.\) The desired inequality then follows from the lemma.

### The dynamical critical exponent for real analytic groups

In view of Corollary 11.5, in the minimal case, the consideration of critical exponents \(\delta(G,\Delta)\) for different \(\Delta\subset\mathbb{S}^{1}\) does not provide more information than \(\delta(G)\) already did. That is the reason we focus on the exceptional case. In the remainder of this section, we always assume that \(G\) is a finitely generated subgroup of \(\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) with an exceptional minimal set \(\Lambda\). It enjoys property \((\Lambda\star)\), by Theorem 3.4.

**Definition 11.6**.: A point \(x\) is said to be _wandering_ if \(x\notin\Lambda\) and its stabilizer in \(G\) is trivial.

This notion of wandering point coincides with the usual one. Indeed, for \(x\in\mathbb{S}^{1}\), \(x\) is wandering if and only if there exists \(\varepsilon>0\) such that the intervals \(gB(x,\varepsilon)\), \(g\in G\), are pairwise disjoint. This is because, by Hector's result, Theorem 3.5, the stabilizer of any connected component \(J\) of \(\mathbb{S}^{1}\setminus\Lambda\) is either trivial or infinite cyclic. This also shows that for all but finitely many \(x\in J\), \(x\) is wandering.

Consider the series

\[P(x,s)=\sum_{g\in G}g^{\prime}(x)^{s},\]

which is convergent at \(s=1\) for every wandering point \(x\), by [24, Lemma 4.2].

**Proposition 11.7**.: _For every wandering point \(x,\) the exponent of convergence of the series \(P(x,s)\) is exactly \(\delta(G,x)\). In particular, \(\delta(G,x)\leqslant 1.\)_

Therefore, \(\delta(G,x)\) is indeed a "critical exponent".

Proof.: Take \(\varepsilon>0\) such that the intervals \(gB(x,\varepsilon)\), \(g\in G\), are pairwise disjoint. By the distortion control (3.4), there exists \(C>0\) such that \(\varkappa(g,B(x,\varepsilon))\leqslant C\) for every \(g\in G.\) It follows that

\[\limsup_{n\to+\infty}\frac{1}{n}\log\#\left\{g\in G:g^{\prime}|_{B(x,\varepsilon^{\prime})}\geqslant 2^{-n}\right\}=\limsup_{n\to+\infty}\frac{1}{n}\log\#\left\{g\in G:g^{\prime}(x)\geqslant 2^{-n}\right\}\]

for every \(\varepsilon^{\prime}\leqslant\varepsilon.\) The right-hand side being the exponent of convergence of \(P(x,s)\), the desired equality follows.
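The relation between the counting limsup and the exponent of convergence used here is the usual dyadic decomposition (a sketch of ours, with logarithms in base \(2\)): setting \(N_{n}=\#\left\{\,g\in G:g^{\prime}(x)\in[2^{-n},2^{-n+1})\,\right\}\), we have, up to finitely many terms with \(g^{\prime}(x)\geqslant 1\),

\[P(x,s)=\sum_{g\in G}g^{\prime}(x)^{s}\asymp\sum_{n\geqslant 1}N_{n}2^{-ns},\]

so \(P(x,s)\) converges for every \(s\) strictly above \(\limsup_{n}\frac{1}{n}\log N_{n}\) and diverges for every \(s\) strictly below it.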
Combined with Corollary 2.18 and Corollary 11.5, we obtain the following.

**Corollary 11.8**.: _For every wandering point \(x,\) \(\dim_{\mathrm{H}}\Lambda=\delta(G)\leqslant\delta(G,x)\leqslant 1.\)_

### Existence of atomless conformal measures

Recall the multiplicity of fixed points of diffeomorphisms.

**Definition 11.9**.: Let \(G\subset\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) be a finitely generated subgroup with an exceptional minimal set. Let \(y\in\mathbb{S}^{1}\). Define

\[k(y)=\begin{cases}0&\text{ if }\mathrm{Stab}_{G}(y)\text{ is trivial},\\ 0&\text{ if }y\text{ is a hyperbolic fixed point of some }g\in G,\\ k&\text{ if }y\text{ is a parabolic fixed point of multiplicity }k+1\text{ of some }g\in G.\end{cases}\]

This is well-defined by Theorem 3.5. Note that \(k(y)\) is constant along \(G\)-orbits.

**Proposition 11.10**.: _Let \(G\subset\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) be a finitely generated subgroup with an exceptional minimal set \(\Lambda\). Let \(x\) be a wandering point. If_

\[\delta(G,x)>\frac{k(y)}{k(y)+1}\]

_for all points \(y\in\omega(x)\), then there exists an atomless \(\delta(G,x)\)-conformal measure on \(\Lambda.\)_

Recall that the \(\omega\)_-limit set_ of \(x\) is defined as

\[\omega(x)=\left\{y\in\mathbb{S}^{1}:\exists\left\{g_{n}\right\}\subset G\text{ a sequence of distinct elements such that }g_{n}x\to y\right\}.\]

From now on, we fix a wandering point \(x\) and let \(\delta=\delta(G,x).\) We adapt the construction of Patterson-Sullivan measures to our setting. This method is known to the experts. One subtle point is that the constructed measure may not be supported on \(\Lambda,\) since \(\omega(x)\) can be strictly larger than \(\Lambda\). In order to address this issue, we study whether a given point is an atom of the constructed conformal measure. This is the content of Lemma 11.14 below, which draws inspiration from [1].

**Lemma 11.11** ([62, Lemma 3.1]).: _If \(P(x,s)\) converges at \(s=\delta,\) then there exists a decreasing function \(h:(0,\infty)\to(0,\infty)\) such that the series_

\[\widetilde{P}(x,s)=\sum_{g\in G}h\big{(}g^{\prime}(x)\big{)}g^{\prime}(x)^{s}\]

_diverges for \(s\leqslant\delta\) and converges for \(s>\delta.\) Moreover, for every \(\varepsilon>0\), there exists \(t_{0}=t_{0}(\varepsilon)>0\) such that_

\[\forall\lambda\in(0,1),\,\forall t\in(0,t_{0}),\quad h(\lambda t)\leqslant\lambda^{-\varepsilon}h(t).\]

We take \(h\equiv 1\) if \(P(x,s)\) diverges at \(s=\delta.\) For every \(s>\delta,\) consider the probability measures

\[\nu_{s}=\frac{1}{\widetilde{P}(x,s)}\sum_{g\in G}h\big{(}g^{\prime}(x)\big{)}g^{\prime}(x)^{s}\delta_{g(x)}.\]

By weak-* compactness, there is some sequence \(s_{n}\downarrow\delta\) such that \(\nu_{s_{n}}\) weakly converges to some probability measure \(\nu\).

**Lemma 11.12**.: _The probability measure \(\nu\) is a \(\delta\)-conformal measure._

Proof.: Consider first the case where \(P(x,\delta)\) is divergent. Let \(f\in G\). Using a change of variable,

\[f_{*}^{-1}\nu_{s} =\frac{1}{P(x,s)}\sum_{g\in G}g^{\prime}(x)^{s}\delta_{(f^{-1}g)(x)}\]
\[=\frac{1}{P(x,s)}\sum_{g\in G}(fg)^{\prime}(x)^{s}\delta_{g(x)}\]
\[=\frac{1}{P(x,s)}\sum_{g\in G}f^{\prime}(g(x))^{s}g^{\prime}(x)^{s}\delta_{g(x)}\]
\[=(f^{\prime})^{s}\nu_{s}.\]

Letting \(s\downarrow\delta\) along \((s_{n})\), we find \(f_{*}^{-1}\nu=(f^{\prime})^{\delta}\nu\). Now consider the case where \(P(x,\delta)\) is convergent.
For every \(\varepsilon>0\), the set \(E_{\varepsilon}\) of \(g\in G\) such that \(g^{\prime}(x)\geqslant t_{0}(\varepsilon)\) is finite. By the property of \(h\),

\[\forall g\in G\setminus E_{\varepsilon},\quad h\big{(}(fg)^{\prime}(x)\big{)}\leqslant\max\{1,f^{\prime}(g(x))^{-\varepsilon}\}h(g^{\prime}(x)).\]

Hence, by the same change of variable,

\[f_{*}^{-1}\nu_{s}\leqslant\max\{1,(f^{\prime})^{-\varepsilon}\}(f^{\prime})^{s}\nu_{s}+\eta_{\varepsilon,s} \tag{11.3}\]

where \(\eta_{\varepsilon,s}\) is a measure on \(E_{\varepsilon}x\) and of total mass

\[\frac{1}{\widetilde{P}(x,s)}\sum_{g\in E_{\varepsilon}}h((fg)^{\prime}(x))(fg)^{\prime}(x)^{s}.\]

The sum is bounded uniformly in \(s\) and \(\widetilde{P}(x,s)\to+\infty\) as \(s\to\delta\). Thus, letting \(s\to\delta\) along \((s_{n})\) and then \(\varepsilon\to 0^{+}\), we obtain

\[f_{*}^{-1}\nu\leqslant(f^{\prime})^{\delta}\nu.\]

The inequality in the opposite direction can be obtained by inverting \(f\).

**Lemma 11.13**.: _The measure \(\nu\) is supported on \(\omega(x)\)._

Proof.: For \(y\notin\omega(x)\), there is \(\varepsilon>0\) such that the set of \(g\in G\) with \(gx\in B(y,\varepsilon)\) is finite. Their contribution to the sum \(\widetilde{P}(x,s)\) is bounded uniformly in \(s>\delta\). As \(s\downarrow\delta\), we have \(\widetilde{P}(x,s)\to+\infty\). Hence \(\nu(B(y,\varepsilon))=0\).

In the setting of a Fuchsian group, the \(\omega\)-limit set of any point in the hyperbolic plane is precisely the limit set of the group. However, in our setting, \(\omega(x)\) may be strictly larger than \(\Lambda\). Indeed, let \(J\subset\mathbb{S}^{1}\setminus\Lambda\) be the connected component of \(\mathbb{S}^{1}\setminus\Lambda\) containing \(x\); then by Hector's Theorem 3.5, \(\mathrm{Stab}_{G}(J)\) is either trivial or infinite cyclic. If \(\mathrm{Stab}_{G}(J)\) is trivial then \(\omega(x)=\Lambda\). If \(\mathrm{Stab}_{G}(J)\) is infinite cyclic, generated by \(f\in G\), let \(x_{1}\) and \(x_{2}\) be the endpoints of the connected component of \(J\setminus\mathrm{Fix}(f)\) containing \(x\), where \(\mathrm{Fix}(f)\) denotes the set of fixed points of \(f\). Then \(\omega(x)=\Lambda\cup Gx_{1}\cup Gx_{2}\). In particular, \(\omega(x)\) can be different from \(\Lambda\) if \(G\) has fixed points outside \(\Lambda\). See Example 12.8.

**Lemma 11.14**.: _For any \(y\in\mathbb{S}^{1},\) if \(\delta>\frac{k(y)}{k(y)+1}\), then \(y\) is not an atom of \(\nu\)._

Proof.: If \(y\in\mathbb{S}^{1}\) is an atom of \(\nu\) then

\[\forall g\in G,\quad\nu(\{gy\})=g^{\prime}(y)^{\delta}\nu(\{y\}).\]

Hence \(g^{\prime}(y)\) is bounded uniformly in \(g\in G\). Moreover, \(g^{\prime}(y)=1\) for all \(g\in\mathrm{Stab}_{G}(y)\).
To prove the claim, first note that \(\omega(x)\cap Gx=\varnothing\) because \(x\) is a wandering point, hence \(\nu_{s}(\{y\})=0\). Next, we establish \[\sup_{s>\delta}\nu_{s}\big{(}|y,y+\rho|\big{)}\to 0,\text{ as }\rho\to 0^{+}. \tag{11.4}\] Indeed, replacing \(f\) by \(f^{-1}\) if necessary, we may assume that \(f\) is contracting on \(]y,y+\varepsilon^{\prime}[\) for some \(\varepsilon^{\prime}>0\), so that we can use the arguments in subsection 10.1. Let \(y_{0}\in]y,y+\varepsilon^{\prime}[\) be a point close to \(y\) and let \(I_{1}=[f(y_{0}),y_{0}]\). Moreover, set \(I_{n}=f^{n-1}I_{1}\) for \(n\in\mathbb{N}\). By (10.2) and Lemma 10.3, \[\forall n\geqslant 1,\,\forall z\in I_{1},\quad(f^{n})^{\prime}(z)\ll_{f,y_{0}} n^{-\frac{k+1}{k}}.\] Without loss of generality that \(P(x,s)\) converges at \(s=\delta.\) Take \(\varepsilon=\frac{1}{2}(\delta-\frac{k}{k+1})>0\). The set \(E_{\varepsilon}=\{\,g\in G:g^{\prime}(x)\geqslant t_{0}(\varepsilon)\,\}\) is finite, as in the proof of Lemma 11.12. Shrink \(\varepsilon^{\prime}>0\) if necessary, we can assume that \(E_{\varepsilon}x\cap I_{1}=\varnothing\). For every \(n\in\mathbb{N}\), by (11.3), for all \(s>\delta,\) \[\nu_{s}(I_{n+1})=\nu_{s}(f^{n}I_{1})\leqslant\int_{I_{1}}(f^{n})^{\prime}(z)^ {s-\varepsilon}\,\mathrm{d}\nu_{s}(z)\ll_{f,y_{0}}n^{-\frac{k+1}{k}(s- \varepsilon)}\nu_{s}(I_{1})\ll n^{-1-\frac{k+1}{k}\varepsilon}.\] For \(N\in\mathbb{N}\), we have \(\bigcup_{n\geqslant N}I_{n+1}=]y,f^{N}(y_{0})]\). Therefore, \[\sup_{s>\delta}\nu_{s}\big{(}|y,f^{N}(y_{0})|\big{)}\ll_{f,y_{0}}\sum_{n \geqslant N}n^{-1-\frac{k+1}{k}\varepsilon}.\] The righthand-side being the tail of a convergent series, we obtain (11.4). The same estimate for \(|y-\rho,y|\) can be established in the same way, finishing the proof of the claim. For the case that \(y\) is a contracting hyperbolic fixed point, we obtain a similar estimate \[\nu_{s}(]y,f^{N}(y_{0})[)\ll_{f,y_{0}}\sum_{n\geqslant N}\lambda^{s}\leqslant \sum_{n\geqslant N}\lambda^{\delta}\] for a fixed \(\lambda<1\) and every \(s>\delta.\) The conclusion also holds. Proof of Proposition 11.10.: It follows immediately from Lemmas 11.12, 11.13 and 11.14. ### The dynamical critical exponent at a point Now we state the main proposition of an estimate for critical exponents. **Proposition 11.15**.: _For every wandering point \(x,\) we have_ \[\delta(G,x)=\max\left\{\dim_{\mathrm{H}}\Lambda,\sup_{y\in\omega(x)}\frac{k(y) }{k(y)+1}\right\}=\max\left\{\dim_{\mathrm{H}}\Lambda,\max_{y\in\omega(x) \setminus\Lambda}\frac{k(y)}{k(y)+1}\right\}<1.\] Proof.: Corollary 11.8 tells us \(\delta=\delta(G,x)\geqslant\dim_{\mathrm{H}}\Lambda\). Let \(y\in\omega(x)\), either \(k(y)=0\) or there is \(f\in G\setminus\{\mathrm{id}\}\) such that \(y\) is a parabolic fixed point of \(f\). In the latter case, \(Gx\) intersect the basin of attraction of \(y\) for \(f\). Then, by Proposition 10.1, \(\delta(G,Gx)\geqslant\frac{k(y)}{k(y)+1}\). By Lemma 11.4, \(\delta(G,x)=\delta(G,Gx)\). To summarize, we have proved \[\delta\geqslant\max\Bigl{\{}\dim_{\mathrm{H}}\Lambda,\sup_{y\in\omega(x)}\frac {k(y)}{k(y)+1}\Bigr{\}}.\] Assume for a contradiction that this inequality is strict. Then by Proposition 11.10, the probability measure \(\nu\) is an atomless \(\delta\)-conformal measure on \(\omega(x)\). But every point in \(\omega(x)\setminus\Lambda\) is isolated in \(\omega(x)\), hence \(\nu\) is supported on \(\Lambda.\) Then Theorem 2.27 implies \(\delta=\dim_{\mathrm{H}}\Lambda\), leading to a contradiction. 
Therefore,

\[\delta=\max\Bigl{\{}\dim_{\mathrm{H}}\Lambda,\sup_{y\in\omega(x)}\frac{k(y)}{k(y)+1}\Bigr{\}}.\]

Taking into account Theorem E,

\[\delta=\max\Bigl{\{}\dim_{\mathrm{H}}\Lambda,\sup_{y\in\omega(x)\setminus\Lambda}\frac{k(y)}{k(y)+1}\Bigr{\}}.\]

Recall that \(\omega(x)\) is the union of \(\Lambda\) with at most two \(G\)-orbits (see the discussions before Lemma 11.14), thus the supremum on the right-hand side is just the maximum of at most two values, both \(<1\). Thus, if \(\delta\geqslant 1\), then \(\dim_{\mathrm{H}}\Lambda=\delta=1\) and \(\delta>\frac{k(y)}{k(y)+1}\) for every \(y\in\omega(x)\). By Proposition 11.10 again, \(\nu\) would be an atomless \(1\)-conformal measure on \(\Lambda\), which contradicts Theorem 11.3. We deduce that \(\delta(G,x)<1\).

**Lemma 11.16**.: _There are only finitely many possible values of \(k(y)\) for \(y\in\mathbb{S}^{1}\setminus\Lambda.\)_

Proof.: By [24, Corollary 1.18], \(\mathbb{S}^{1}\setminus\Lambda\) is the union of finitely many orbits of intervals. Hence the family of elements in \(G\) that stabilize a connected component of \(\mathbb{S}^{1}\setminus\Lambda\) is contained in a finite union of conjugacy classes of some infinite cyclic groups, by Theorem 3.5.

Proof of Theorem 2.21.: The set of wandering points is dense in \(\mathbb{S}^{1}\setminus\Lambda\). Thus by Lemma 11.4,

\[\delta(G,\mathbb{S}^{1})=\sup\left\{\,\delta(G,x):x\text{ is wandering}\,\right\}.\]

Note that every fixed point is in the \(\omega\)-limit set of some wandering point. Thus, combined with Proposition 11.15,

\[\delta(G,\mathbb{S}^{1})=\max\left\{\dim_{\mathrm{H}}\Lambda,\sup_{y\in\mathbb{S}^{1}}\frac{k(y)}{k(y)+1}\right\}=\max\,\left\{\dim_{\mathrm{H}}\Lambda,\sup_{y\in\mathbb{S}^{1}\setminus\Lambda}\frac{k(y)}{k(y)+1}\right\}.\]

The last supremum is actually a maximum, by Lemma 11.16. Therefore, by the inequality in Proposition 11.15, \(\delta(G,\mathbb{S}^{1})<1\). Finally, taking into account Corollary 2.18,

\[\dim_{\mathrm{H}}\Lambda=\delta(G)\leqslant\delta(G,\mathbb{S}^{1}).\qed\]

Proof of Corollary 2.22.: It follows immediately from Theorem 2.21.

Proof of Theorem 2.28.: By the assumption and Theorem 2.21, we have \(\delta(G,x)=\delta(G)=\dim_{\mathrm{H}}\Lambda\) for every wandering point \(x\). In particular, \(\delta(G,x)>k(y)/(k(y)+1)\) for every \(y\in\mathbb{S}^{1}\) by Theorem E. The conclusion follows from Proposition 11.10.

Proof of Theorem 5.: The inequality \(\dim_{\rm H}\Lambda<1\) follows immediately from Theorem 2.21. By the existence of a perfect pingpong pair (Proposition 7.8) or [53], \(G\) contains a free sub-semigroup (indeed a free subgroup) freely generated by \(h_{1},h_{2}\in G.\) It follows that

\[\delta(G)\geqslant\frac{\log 2}{\log\max\left\{\|h_{1}^{-1}\|_{C^{1}},\|h_{2}^{-1}\|_{C^{1}}\right\}}>0,\]

and hence \(\dim_{\rm H}\Lambda>0.\)

_Remark 11.17_.: Positivity of \(\dim_{\rm H}\Lambda\) can also be deduced from the positivity of \(\dim_{\rm H}\nu\) for a stationary measure \(\nu\) supported on \(\Lambda\). This follows by combining Theorem 2.4 and \(h_{\rm F}(\mu,\nu)>0,\) or a recent result on Hölder regularity of stationary measures [35].

Proof of Corollary 5.: It follows immediately from Theorem 2.21.

## 12 Additional proofs and further discussions

### Additional proofs of main results

Proof of Corollary 2.11.: Note that \(G\) is locally discrete by Corollary 3.6. Additionally, Theorem 4.7 states that \(\nu\) is supported on a \(T_{\mu}\)-minimal set.
This set is contained in the exceptional minimal set \(\Lambda\) of \(G\) by Proposition 8.7. Applying Theorem 5, we have \(\dim_{\rm H}\nu\leqslant\dim_{\rm H}\Lambda<1.\) Then the conclusion follows by Theorem 2.10. Proof of Theorem 5.: Combine Theorem 2.4 and Corollary 2.11. Proof of Theorem 5.: If \(H\) has no finite orbits, then either \(H\) acts minimally or \(G\) has an exceptional minimal set of dimension \(\delta(G)\). In the latter case, the orbit closure of a point \(x\) equals \(Gx\cup\omega(x),\) where \(\omega(x)\) is the union of \(\Lambda\) with at most two \(G\)-orbits (see the discussions before Lemma 11.14). Therefore, the conclusion follows. Now suppose that \(H\) has a finite orbit \(F=\{x_{i}:i\in[n]\},\) where \(x_{0},\cdots,x_{n-1}\) are arranged in cyclic order and \(H\) acts transitively on \(F.\) For every \(h\in H,\) there exists a unique translation number \(\tau=\tau(h)\in[n]\cong\mathbb{Z}/n\mathbb{Z}\) such that \(h(x_{i})=x_{i+\tau}.\) It induces a surjective group homomorphism \(\tau:H\to\mathbb{Z}/n\mathbb{Z}.\) Let \(H_{1}=\ker\tau,\) which is a normal subgroup of \(H.\) If \(H_{1}\) is not isomorphic to \(\mathbb{Z},\) then by [58, Proposition 3.5], every orbit closure is a finite set, a finite union of closed intervals or the whole circle, hence we are in the second case of the theorem. Now we consider the case that \(H_{1}=\langle f\rangle\) for some nontrivial element \(f\in H\) that fixes every point in \(F\). In this case, every \(H\)-orbit is a finite union of \(\langle f\rangle\)-orbits, and therefore every \(H\)-orbit closure is either a finite set or a countable set of points. Since \(H/H_{1}\cong\mathbb{Z}/n\mathbb{Z},\) there exists \(g\in H\) such that \(g(x_{i})=x_{i+1}\) for every \(i\in[n].\) As \(g^{n}\in\langle f\rangle,\) the group \(H\) can be presented as \[H\cong\left\langle a,b|bab^{-1}=a^{s},b^{n}=a^{m}\right\rangle\] for some integers \(s,m,\) where \(a\) corresponds to \(f\) and \(b\) corresponds to \(g.\) Note that \(b^{n}\) commutes with \(a\) and \(b^{n}ab^{-n}=a^{s^{n}}\). There are only two cases to consider. Case 1.\(s=1.\) Then \(H\) is a quotient of \(\mathbb{Z}^{2}.\) Specifically, \(H\cong\mathbb{Z}\times\mathbb{Z}/k\mathbb{Z}\) where \(k=\gcd(m,n).\) Case 2.\(s=-1\) and \(n=2k\) for some positive integer \(k.\) Let \(I_{i}\) denote the closed interval \([x_{i},x_{i+1}]\) for every \(i\in[2k].\) Define \(f_{i}=f|I_{i}\) and \(g_{i}=g|I_{i},\) where \(f_{i}\) fixes \(I_{i}\) and \(g_{i}\) maps \(I_{i}\) to \(I_{i+1}.\) The group relations can be interpreted as the following identities: \[g_{i}f_{i}g_{i}^{-1}=(gfg^{-1})|_{I_{i+1}}=f^{-1}|_{I_{i+1}}=f^{-1}_{i+1},\quad\forall i\in[2k], \tag{12.1}\] \[(g_{i-1}\cdots g_{i+1}g_{i})=(g^{2k})|_{I_{i}}=(f^{m})|_{I_{i}}=f^{m}_{i},\quad\forall i\in[2k]. \tag{12.2}\] Conjugating (12.2) by \(g_{i}\) and using (12.1), we obtain \[g_{i}g_{i-1}\cdots g_{i+1}=g_{i}f_{i}^{m}g_{i}^{-1}=f_{i+1}^{-m}.\] Taking \(i+1\) in (12.2) gives \(f_{i+1}^{m}=g_{i}g_{i-1}\cdots g_{i+1}=f_{i+1}^{-m}\). Since \(f_{i+1}\) is nontrivial and no element other than the identity has finite order in \(\mathrm{Diff}_{+}^{\omega}(I_{i+1})\), we conclude that \(m=0\). Thus, we have \(H\cong\left\langle a,b|bab^{-1}=a^{-1},b^{2k}=1\right\rangle\). _Remark 12.1_.: The case \(H\cong\left\langle a,b|bab^{-1}=a^{-1},b^{2k}=1\right\rangle\) can indeed be realized in \(\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\). Take \(g=(x\mapsto x+1/2k)\) on \(\mathbb{S}^{1}=\mathbb{R}/\mathbb{Z}.\) We construct \(f\) as follows. 
Let \(X(x)=\sin(2k\pi x),\) which is a real analytic vector field on \(\mathbb{R}/\mathbb{Z}\) satisfying \(X(x+1/2k)=-X(x).\) Let \(\phi_{t}\) be the flow generated by \(X(x)\), then \(\phi_{t}\in\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1}).\) Besides, \(\phi_{t}\) fixes \(\left\{l/2k:l\in[2k]\right\}\) and \(\phi_{t}\circ g=g\circ\phi_{-t}.\) Let \(f=\phi_{1},\) then \(gfg^{-1}=f^{-1}.\) Hence \(f\) and \(g\) generate the desired subgroup of \(\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1}).\) Proof of Corollary I.: Since \(\#\left\{f_{1},\cdots,f_{n},f_{1}^{-1},\cdots,f_{n}^{-1}\right\}^{m}\geqslant(2n-1)^{m},\) we have \(\delta(H)\geqslant 1.\) The conclusion follows immediately by Theorem H. The bound is sharp since one can construct \(\left\{f_{1},\cdots,f_{n}\right\}\subset\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) with a pingpong partition of \(2n\) disjoint intervals satisfying \(\max\{\|(f_{i}^{-1})^{\prime}\|_{\infty},\|f_{i}^{\prime}\|_{\infty}\}\leqslant(2n-1)+\varepsilon\) for every \(\varepsilon>0.\) Proof of Corollary K.: Suppose first that \(T\) preserves an invariant probability measure \(\nu\) on \(\mathbb{S}^{1}.\) Then \(\nu\) is atomless since \(T\) has no finite orbits. Recall the rotation number \(\rho(f)\) of an element \(f\in\mathrm{Homeo}_{+}(\mathbb{S}^{1}).\) Then the conclusion follows from the following lemma. **Lemma 12.2**.: _The rotation spectrum \(\rho(T)\coloneqq\left\{\rho(f):f\in T\right\}\) is a dense sub-semigroup of \(\mathbb{R}/\mathbb{Z}.\) Therefore, \(\Delta=\mathrm{supp}\,\nu\) is the unique minimal set of both \(T\) and \(T^{-1}.\)_ Proof.: For every \(x\in\mathbb{S}^{1},\) we have \(\nu([x,f(x)])=\rho(f)\) for every \(f\in T\cup T^{-1}.\) It follows that \(\rho(fg)=\rho(f)+\rho(g)\) for every \(f,g\in T\) and hence \(\rho(T)\) is a sub-semigroup of \(\mathbb{R}/\mathbb{Z}.\) If \(\rho(T)\) is finite, we get a contradiction as in the proof of Lemma 3.2. Then \(\rho(T)\) is infinite and hence dense in \(\mathbb{R}/\mathbb{Z}.\) In order to show the second statement, we take arbitrary points \(x\in\Delta\) and \(y\in\mathbb{S}^{1}.\) Note that \(\nu([y,f(y)])=\rho(f)\) for every \(f\in T.\) Since \(\rho(T)\) is dense in \(\mathbb{R}/\mathbb{Z},\) for every \(\varepsilon>0\) there exist \(f_{1},f_{2}\in T\) such that \(\rho(f_{1})<\nu([y,x])<\rho(f_{2})\) and \(|\rho(f_{2})-\rho(f_{1})|<\varepsilon.\) Then \(x\in[f_{1}(y),f_{2}(y)]\) and \(\nu([f_{1}(y),f_{2}(y)])<\varepsilon.\) Recalling that \(\nu\) is continuous, we conclude that \(x\in\overline{Ty}\) and hence \(\Delta\subset\overline{Ty}\) for every \(y\in\mathbb{S}^{1}.\) The same argument also holds for \(T^{-1},\) hence \(\Delta\) is the unique minimal set of both \(T\) and \(T^{-1}.\) Suppose now that \(T\) does not preserve any probability measure. The case that \(T\) is finitely generated follows by Theorems 4.7 and 4.12 (2). For general cases, we first take a finitely generated subgroup \(T_{0}\subset T\) such that \(T_{0}\) does not preserve any probability measure. This is because for every \(f\in\mathrm{Homeo}_{+}(\mathbb{S}^{1}),\) the set of all \(f\)-invariant probability measures is a weak* compact subset in the space of all Radon measures on \(\mathbb{S}^{1}\); if every finitely generated subgroup of \(T\) preserved some probability measure, the finite intersection property of these compact sets would yield a \(T\)-invariant probability measure, a contradiction. Now recall Malicet's result [52] which asserts that \(T\) has only finitely many minimal sets. We need the following lemma. **Lemma 12.3**.: _Every \(T\)-minimal set contains at least one \(T_{0}\)-minimal set, and hence the number of \(T_{0}\)-minimal sets is at least the number of \(T\)-minimal sets. 
Furthermore, if the strict inequality holds, then there exists \(f\in T\) such that \(\left\langle T_{0},f\right\rangle,\) the semigroup generated by \(T_{0}\) and \(f\), has strictly fewer minimal sets than \(T_{0}.\)_ Proof.: The first part of this lemma is obvious. For the second part, let \(\widetilde{\Delta}_{1},\cdots,\widetilde{\Delta}_{d}\) denote the minimal sets of \(T.\) For each \(i\in[d],\) let \(\Delta_{i}\) be a \(T_{0}\)-minimal set contained in \(\widetilde{\Delta}_{i}.\) Let \(\Delta\) be another \(T_{0}\)-minimal set. Then \(\overline{T\Delta}\) must contain one of the \(T\)-minimal sets. Without loss of generality, \(\Delta_{1}\subset\widetilde{\Delta}_{1}\subset\overline{T\Delta}.\) Apply Proposition 7.8 to \(T_{0}\) and an open interval \(I\) that intersects \(\Delta_{1}\) and is disjoint from the other minimal sets of \(T_{0}.\) Then there exists \(h\in T_{0}\) with an isolated attracting fixed point \(x\in\Delta_{1}.\) Take \(\varepsilon>0\) such that there is no other fixed point of \(h\) in \(B(x,\varepsilon).\) Take \(f\in T\) and \(y\in\Delta\) such that \(f(y)\in B(x,\varepsilon).\) Then \(h^{n}f(y)\to x\) as \(n\to+\infty.\) We consider the semigroup \(T_{1}\) generated by \(T_{0}\) and \(f.\) Assume that it has the same number of minimal sets as \(T_{0}.\) Then two minimal sets of \(T_{1}\) contain \(\Delta_{1}\) and \(\Delta\), respectively. This is not the case since \(h^{n}f\in T_{1}\) and \(h^{n}f(y)\to x\) as \(n\to+\infty\) where \(x\in\Delta_{1}\) and \(y\in\Delta.\) Starting with the finitely generated sub-semigroup \(T_{0}\subset T\) without invariant probability measures, we can apply this lemma finitely many times to obtain a finitely generated subgroup \(T_{0}\subset T_{1}\subset T\) with the same number of minimal sets as \(T\) and without invariant probability measures. We can then repeat this argument for \(T_{1}^{-1}\) and \(T^{-1}\). Combining with the monotonicity of the number of minimal sets with respect to the semigroup, we can find a finitely generated subgroup \(T_{1}\subset T_{2}\subset T\) such that
1. \(T_{2}\) has no invariant probability measures,
2. \(T_{2}\) and \(T\) have the same number of minimal sets,
3. \(T_{2}^{-1}\) and \(T^{-1}\) have the same number of minimal sets.
Since the conclusion holds for \(T_{2}\), it also holds for \(T.\) Proof of Corollary 1.: For the case when \(T\) is finitely generated, the result follows immediately from Proposition 7.8. For the general case, we can use the proof of Corollary K to find a finitely generated subgroup \(T_{0}\subset T\) that does not preserve any probability measure. ### Comparison with the critical exponent of Fuchsian groups In this subsection, we consider a finitely generated non-elementary Fuchsian group \(G\). This group acts on \(\mathbb{S}^{1}=\partial\mathbb{D},\) where \(\mathbb{D}\) is the Poincaré disk. As such, \(G\) can be viewed as a locally discrete subgroup of \(\mathrm{Diff}_{+}^{\omega}(\mathbb{S}^{1})\). 
Furthermore, \(G\) has no finite orbits on \(\mathbb{S}^{1}\) and its limit set corresponds to the unique minimal set \(\Lambda.\) The critical exponent of the Fuchsian group \(G\) is denoted by \(\widetilde{\delta}(G)\) and can be expressed as \[\widetilde{\delta}(G)=\limsup_{n\to\infty}\frac{1}{n}\log\#\left\{g\in G:\|g\|_{\mathrm{SL}(2,\mathbb{R})}\leqslant 2^{\frac{n}{2}}\right\},\] where \(\|\cdot\|_{\mathrm{SL}(2,\mathbb{R})}\) is the operator norm as \(\mathrm{SL}(2,\mathbb{R})\) acting on \(\mathbb{R}^{2}.\) In fact, we can show that \(\widetilde{\delta}(G)=\delta(G)=\delta_{2}(G)\) in our setting. **Proposition 12.4**.: \(\widetilde{\delta}(G)=\delta(G)=\delta_{2}(G).\) Proof.: By the Cartan decomposition, for every \(g\in\mathrm{SL}(2,\mathbb{R}),\) we can express \(g\) as \(r_{1}\widehat{g}r_{2},\) where \(r_{1},r_{2}\in\mathrm{SO}(2,\mathbb{R})\) and \(\widehat{g}=\mathrm{diag}(\chi,\chi^{-1})\) with \(\chi=\|g\|_{\mathrm{SL}(2,\mathbb{R})}.\) For \(\theta\in\mathbb{R}/(2\pi\mathbb{Z}),\) we have \[\widehat{g}(\theta)=2\arctan\Big{(}\frac{1}{\chi^{2}}\tan\frac{\theta}{2}\Big{)},\] \[\widehat{g}^{\prime}(\theta)=\frac{1}{\chi^{2}\cos^{2}(\theta/2)+\chi^{-2}\sin^{2}(\theta/2)},\] \[(\log\widehat{g}^{\prime})^{\prime}(\theta)=\frac{\widehat{g}^{\prime\prime}(\theta)}{\widehat{g}^{\prime}(\theta)\ln 2}=\frac{1}{2\ln 2}\frac{(\chi^{2}-\chi^{-2})\sin\theta}{\chi^{2}\cos^{2}(\theta/2)+\chi^{-2}\sin^{2}(\theta/2)}.\] For every \(\varepsilon>0,\) \(\widehat{g}^{\prime}(\theta)\asymp_{\varepsilon}\chi^{-2}\) and \((\log\widehat{g}^{\prime})^{\prime}\ll_{\varepsilon}1\) for every \(\theta\notin B(\pi,\varepsilon).\) This implies that \[\limsup_{n\to\infty}\frac{1}{n}\log\#\left\{g\in G:\exists x\in\Lambda,g^{\prime}|_{B(x,\varepsilon)}\geqslant 2^{-n}\right\}\leqslant\widetilde{\delta}(G).\] Hence \(\delta(G),\delta_{2}(G)\leqslant\widetilde{\delta}(G).\) On the other hand, for every \(\varepsilon>0\) small enough that \(\Lambda\) cannot be covered by an interval of length \(5\varepsilon,\) we can find \(x\in\Lambda\) such that \[g^{\prime}|_{B(x,\varepsilon)}\gg_{\varepsilon}\chi^{-2},\quad(\log g^{\prime})^{\prime}|_{B(x,\varepsilon)}\ll_{\varepsilon}1.\] This shows that \(\delta_{2}(G)=\delta(G)=\widetilde{\delta}(G).\) Next, we show that \(G\) satisfies property (\(\star\)) or (\(\Lambda\star\)) when regarding \(G\) as a subgroup of \(\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1}).\) This fact is a direct consequence of the theory of Fuchsian groups and is known to experts. We include a proof for the reader's convenience. **Proposition 12.5**.: _The action of \(G\) on \(\mathbb{S}^{1}=\partial\mathbb{D}\) satisfies property (\(\star\)) or (\(\Lambda\star\))._ Proof.: We fix a base point \(o\in\mathbb{D}.\) Recall that \(\Lambda\) corresponds to the limit set of \(G.\) For every \(x\in\Lambda\) and \(g\in G,\) by the formula of derivatives [61, Lemma 3.4.2], \[g^{\prime}(x)=\frac{1-|g^{-1}o|^{2}}{|x-g^{-1}o|^{2}}\Big{/}\frac{1-|o|^{2}}{|x-o|^{2}}=e^{B_{x}(o,g^{-1}o)}\] where \(B_{x}(\,\cdot\,,\,\cdot\,)\) is the Busemann cocycle and \(|\,\cdot\,|\) denotes the Euclidean norm on \(\mathbb{C}\). Combining with [21, Proposition 3.10], we have \[\big{\{}x\in\Lambda:g^{\prime}(x)\text{ is bounded for all }g\in G\big{\}}\] \[=\,\{x\in\Lambda:B_{x}(o,go)\text{ is upper bounded for all }g\in G\}=\Lambda\setminus\Lambda_{h},\] where \(\Lambda_{h}\) is the set of horocyclic limit points. 
The conclusion follows from the fact that every point in \(\Lambda\setminus\Lambda_{h}\) is fixed by a parabolic element [21, Theorem 4.13]. Combining the above two propositions with Theorem 2.25, we obtain a _dynamical_ proof of the following classic result in hyperbolic geometry. **Corollary 12.6**.: \(\dim_{\mathrm{H}}L(G)=\delta(G),\) _where \(L(G)\) is the limit set of a finitely generated non-elementary Fuchsian group \(G\) and \(\delta(G)\) is the critical exponent of \(G.\)_ ### Some counterexamples There are several other discussions about the definition of dynamical critical exponents. One question is whether the local \(C^{1}\) contracting norm can be replaced with a global one and whether the condition \(x\in\Lambda\) can be removed. While these conditions are not necessary for Fuchsian groups acting on the circle, they are necessary for general circle diffeomorphisms, even when \(G\) is chosen in \(\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1})\). Two examples demonstrate this necessity. The key difference between the action of \(\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) and \(\operatorname{SL}(2,\mathbb{R})\) on the circle is the lack of global rigidity. An element in \(\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) can have multiple attracting and repelling fixed points, generating independent dynamics in different cones. _Example 12.7_.: Recall the example of a \(2\)-perfect pingpong pair \((h_{1},h_{2})\) given in Example 7.5. We illustrate the figure once more but with different notation. The points \(a_{i,j}\) are attracting fixed points of \(h_{i}\) and \(r_{i,j}\) are repelling fixed points of \(h_{i}.\) We assume that the contracting rates at \(a_{1,1}\) and \(a_{2,1}\) are much less than the contracting or repelling rates at the \(a_{i,2}\)'s and \(r_{i,j}\)'s. Specifically, \[-\log(h_{i}^{-1})^{\prime}|_{U_{i}^{-}}>1000,\quad-\log h_{i}^{\prime}|_{U_{i,2}^{+}}>1000,\quad-\log h_{i}^{\prime}|_{U_{i,1}^{+}}<100,\] where \(U_{i,j}^{+}\) is the connected component of \(U_{i}^{+}\) containing \(a_{i,j}.\) This can be achieved in the real analytic setting. We consider a minimal set of the semigroup generated by \((h_{1},h_{2}),\) which is contained in the closed interval \([a_{1,1},a_{2,1}].\) Applying the dimension formula to the random walk induced by \(\mu=\frac{1}{2}\delta_{h_{1}}+\frac{1}{2}\delta_{h_{2}},\) the Hausdorff dimension of this minimal set is at least \(\frac{1}{100}.\) Let \(G\) be the group generated by \((h_{1},h_{2}),\) then \(\dim_{\mathrm{H}}\Lambda\geqslant\frac{1}{100}\) where \(\Lambda\) is the exceptional minimal set of \(G.\) Consider an element \(g=\gamma_{m}\cdots\gamma_{1}\in G,\) where \(\gamma_{i}\in\left\{h_{1},h_{2},h_{1}^{-1},h_{2}^{-1}\right\}\) and \(\gamma_{i+1}\gamma_{i}\neq\mathrm{id}.\) Then there exists \(x\in\mathbb{S}^{1}\) such that at least \(m/2\) points in the sequence \[x,\ \gamma_{1}x,\ \gamma_{2}\gamma_{1}x,\ \cdots,\gamma_{m-1}\cdots\gamma_{1}x\] do not fall in \(U_{1,1}^{+}\cup U_{2,1}^{+}\). Then we have \(\log g^{\prime}(x)<-500m\). Hence if \(g^{\prime}|_{\mathbb{S}^{1}}\geqslant 2^{-n}\), we have \(m\leqslant n/500\). Therefore there are at most \(4^{n/500}=2^{n/250}\) such elements. If we replace the local co-norm by a global co-norm in Definition 1.1, i.e., \[\delta^{\prime}(G)=\limsup_{n\to\infty}\frac{1}{n}\log\#\left\{g\in G:g^{\prime}(x)\geqslant 2^{-n},\forall x\in\mathbb{S}^{1}\right\},\] then \(\delta^{\prime}(G)\leqslant 1/250<\dim_{\rm H}\Lambda\). 
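To make the counting step in Example 12.7 concrete, here is a small enumeration (an illustrative sketch of ours, not part of the original argument; the generator labels are placeholders) confirming that the number of words \(\gamma_{m}\cdots\gamma_{1}\) with \(\gamma_{i+1}\gamma_{i}\neq\mathrm{id}\) over the four letters \(h_{1}^{\pm 1},h_{2}^{\pm 1}\) is exactly \(4\cdot 3^{m-1}\leqslant 4^{m}\), the crude bound used above:

```python
from itertools import product

# Count reduced words gamma_m ... gamma_1 with gamma_{i+1} gamma_i != id,
# over the 4 letters h1, h2 and their inverses (capitals denote inverses).
def count_reduced(m):
    gens = ["h1", "h2", "H1", "H2"]
    inverse = {"h1": "H1", "H1": "h1", "h2": "H2", "H2": "h2"}
    return sum(
        all(w[i + 1] != inverse[w[i]] for i in range(m - 1))
        for w in product(gens, repeat=m)
    )

for m in range(1, 7):
    print(m, count_reduced(m))  # 4, 12, 36, 108, ... = 4 * 3^(m-1) <= 4^m
```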
_Example 12.8_.: We will construct two diffeomorphisms \(g,h\in\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) with
* \(g\) has three fixed points: one hyperbolic attracting, one hyperbolic repelling and one \(2k\)-order tangency parabolic fixed point.
* \(h\) has two fixed points: one hyperbolic attracting and one hyperbolic repelling.
The dynamics of \(g,h\) are illustrated below. To be specific, we take \(g_{0}:\mathbb{R}/\mathbb{Z}\to\mathbb{R}/\mathbb{Z}\) as \[g_{0}(x)=x+\varepsilon\sin^{2k}(\pi x)\cos(2\pi x),\] for some \(\varepsilon>0\) sufficiently small such that \(g_{0}\in\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1})\). Then \(0\) is a \((2k)\)-order tangency fixed point of \(g_{0}\), \(1/4\) is an attracting fixed point, and \(3/4\) is a repelling fixed point. Take \(h_{0}\in\operatorname{Diff}_{+}^{\omega}(\mathbb{S}^{1})\) to be an arbitrary hyperbolic element with only two fixed points at \(3/8\) and \(5/8.\) Take \(n\) to be a sufficiently large positive integer and let \(g=g_{0}^{n},h=h_{0}^{n},\) so that there are disjoint cones \(U_{1}^{+}=B(1/4,\delta),\) \(U_{1}^{-}=B(3/4,\delta),\) \(U_{2}^{+}=B(5/8,\delta),\) \(U_{2}^{-}=B(3/8,\delta)\) satisfying \[h(\mathbb{S}^{1}\setminus U_{2}^{-})\subset U_{2}^{+},\quad h^{-1}(\mathbb{S}^{1}\setminus U_{2}^{+})\subset U_{2}^{-},\] \[g(U_{1}^{+}\cup U_{2}^{+}\cup U_{2}^{-})\subset U_{1}^{+},\quad g^{-1}(U_{1}^{-}\cup U_{2}^{+}\cup U_{2}^{-})\subset U_{1}^{-}.\] Moreover, we can assume that \(g,g^{-1},h,h^{-1}\) have large contracting rates on the corresponding cones, for instance, the derivatives are less than \(2^{-100}\). Let \(G\) be the group generated by \(\left\{g,h\right\}.\) Then \(G\) has an exceptional minimal set \(\Lambda\subset\bigcup U_{i}^{\pm}\) and \(\Lambda\) does not intersect the open arc from the repelling fixed point \(r_{g}=3/4\) through the parabolic point \(p_{g}=0\) to the attracting fixed point \(a_{g}=1/4.\) _Claim 12.9_.: \(\dim_{\mathrm{H}}\Lambda\leqslant\frac{\log 3}{100}.\) Proof.: This estimate can be proved by an estimate on the \(C^{2}\)-dynamical critical exponent \(\delta_{2}(G).\) Note that \(G\) is locally discrete. It suffices to show that for every \(\varepsilon\) sufficiently small and \(C>0,\) \[\limsup_{n\to\infty}\frac{1}{n}\log\#\left\{f\in G:2^{-n}\leqslant f^{\prime}|_{B(x,\varepsilon)}\leqslant 2^{-n+C+1},\,\widetilde{\varkappa}(f,B(x,\varepsilon))\leqslant C\right\}\leqslant\frac{\log 3}{100},\] as we discussed in the proof of Theorem 2.25. For \(n>C+1,\) this requires that \(f^{\prime}|_{B(x,\varepsilon)}<1.\) Write \(f\) in the normal form \(f=\gamma_{m}\cdots\gamma_{1}\in G\), where \(\gamma_{i}\in\left\{g,h,g^{-1},h^{-1}\right\}\) and \(\gamma_{i+1}\gamma_{i}\neq\mathrm{id}.\) Then \(x\) cannot be chosen in the cone that is expanded by \(\gamma_{1},\) since for \(\varepsilon\) small enough we have \(\Lambda^{(\varepsilon)}\subset\bigcup U_{i}^{\pm}.\) Note that the group has Schottky-group-like dynamics on the cones \(U_{i}^{\pm}.\) Combining this with the assumption on the contracting rates, we have \(m\leqslant n/100.\) Hence \[\#\left\{f\in G:2^{-n}\leqslant f^{\prime}|_{B(x,\varepsilon)}\leqslant 2^{-n+C+1},\,\widetilde{\varkappa}(f,B(x,\varepsilon))\leqslant C\right\}\ll 3^{n/100}.\] On the other hand, there exists a \(2k\)-order parabolic fixed point \(p_{g}=0.\) This example demonstrates that it is possible to find \(x\in\mathbb{S}^{1}\setminus\Lambda\) (for instance \(x=p_{g}\)) such that \(\frac{k(x)}{k(x)+1}>\dim_{\mathrm{H}}\Lambda.\) By our discussions on generalized dynamical critical exponents, \(\delta(G,\mathbb{S}^{1})>\delta(G)=\dim_{\mathrm{H}}\Lambda\) in this example.
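As a numeric sanity check of the model map \(g_{0}\) in Example 12.8 (our illustrative sketch; the values \(k=2\) and \(\varepsilon=0.01\) are arbitrary small choices), the following verifies the three fixed points, their hyperbolic types, and the \(2k\)-order tangency at \(0\):

```python
import numpy as np

# Sanity check of g0(x) = x + eps * sin(pi x)^(2k) * cos(2 pi x) on R/Z.
# The values k = 2 and eps = 0.01 are arbitrary choices for illustration.
k, eps = 2, 0.01

def g0(x):
    return x + eps * np.sin(np.pi * x) ** (2 * k) * np.cos(2 * np.pi * x)

def dg0(x, h=1e-6):                      # central-difference derivative
    return (g0(x + h) - g0(x - h)) / (2 * h)

for x in (0.0, 0.25, 0.75):              # the three fixed points
    assert abs(g0(x) - x) < 1e-12

print("g0'(1/4) =", dg0(0.25))           # < 1: hyperbolic attracting
print("g0'(3/4) =", dg0(0.75))           # > 1: hyperbolic repelling

# Near 0 the displacement g0(x) - x behaves like eps * (pi x)^(2k),
# so its log-log slope should be about 2k = 4 (the order of tangency).
xs = np.array([1e-3, 1e-2])
slope = np.diff(np.log(g0(xs) - xs)) / np.diff(np.log(xs))
print("tangency order estimate:", slope[0])
```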
2306.06526
Towards using utility data to quantify how investments would have increased the wind resilience of distribution systems
We quantify resilience with metrics extracted from the historical outage data that is routinely recorded by many distribution utilities. The outage data is coordinated with wind data to relate average outage rates in an area to wind speed measured at a nearby weather station. A past investment in wind hardening would have reduced the outage rates, and the effect of this on metrics can be calculated by sampling a reduced number of the historical outages and recomputing the metrics. This quantifies the impact that the hardening would have had on customers. This is a tangible way to relate an investment in wind resilience to the benefits it would have had on the lived experience of customers that could help make the case for the investment to the public and regulators. We also quantify the impact of earlier or faster restoration on customer metrics and compare this to the impact of investment in hardening. Overall this is a new and straightforward approach to quantify resilience and justify resilience investments to stakeholders that is directly driven by utility data. The approach driven by data avoids complicated models or modeling assumptions.
Arslan Ahmad, Ian Dobson
2023-06-10T21:40:49Z
http://arxiv.org/abs/2306.06526v2
Towards using utility data to quantify how investments would have increased the wind resilience of distribution systems ###### Abstract We quantify resilience with metrics extracted from the historical outage data that is routinely recorded by many distribution utilities. The outage data is coordinated with wind data to relate average outage rates in an area to wind speed measured at a nearby weather station. A past investment in wind hardening would have reduced the outage rates, and the effect of this on metrics can be calculated by sampling a reduced number of the historical outages and recomputing the metrics. This quantifies the impact that the hardening would have had on customers. This is a tangible way to relate an investment in wind resilience to the benefits it would have had on the lived experience of customers that could help make the case for the investment to the public and regulators. We also quantify the impact of earlier or faster restoration on customer metrics and compare this to the impact of investment in hardening. Overall this is a new and straightforward approach to quantify resilience and justify resilience investments to stakeholders that is directly driven by utility data. The approach driven by data avoids complicated models or modeling assumptions. Power distribution systems, outages, resilience, metrics, fragility, data, weather, wind ## I Introduction Resilience generally addresses the response of power systems to extreme weather events, as well as other unusual high stresses such as earthquakes, fires, and epidemics. There are many overlapping frameworks and definitions of resilience [1, 2, 3] and the most concise definition is "Power system resilience is the ability to limit the extent, severity, and duration of system degradation following an extreme event." [4]. There remains much scope for practically quantifying resilience [5] and for justifying investments that improve resilience. In particular, this paper proposes a data-driven approach to quantify the resilience of a distribution system to wind and to help justify investments in hardening the system or speeding up its restoration. Wind-related hazards due to storms and hurricanes are among the most significant for overhead distribution systems; the effects of high winds include tree falls and flying debris as well as direct impacts such as pole toppling and conductor galloping. Investment decisions on upgrading distribution systems or their restoration processes consider many factors, such as cost, reliability, load, distributed generation, deployment of crews and materials, and many engineering and community constraints. In order to also consider resilience in these decisions, it is desirable to quantify the benefits of investments that increase resilience and communicate those benefits to utilities, communities, and regulators. One useful but complex approach is to make detailed models of the extreme weather, its impact on the distribution system, the restoration process, and the impact on customers and then use these models to estimate the future benefits of a proposed upgrade. This model-based approach is challenging because of the extensive approximations and assumptions needed to make practical models of the entire process. Nevertheless, there has been considerable progress in using these models as reviewed in section II. 
The purpose of this paper is to open up another, complementary approach driven directly by utility data that can also help to quantify and communicate the benefits of investments that increase resilience to strong winds. We briefly outline the new data-driven approach in Fig. 1 and as follows: Consider an area inside the distribution system that is close to a weather station measuring wind speed. We process the area outage data and the wind speed together to obtain an "area outage rate curve" that describes how the mean outage rate of the area increases with wind speed. The area outage rate curve quantifies the resilience of the area with respect to the measured wind speed. A previous investment in hardening the area would have shifted the area outage rate curve and reduced the area outage rates. We can go back to the historical outage data and sample a reduced number of outages from it to match the reduced outage rates. This samples the historical outages that would still have occurred if the investment had been made. Computing resilience metrics for the historical outages and comparing these to the improved resilience metrics for the sampled outages quantifies the effect that the investment would have had on the area and its customers. Moreover, we also quantify the impact of improved restoration: Investment in restoration can make the repairs start earlier or increase the rate at which repairs are completed.
Fig. 1: Quantifying the benefits of wind-hardening resilience investments.
We change the historical restoration times to correspond to these improvements and recalculate the metrics to quantify their impacts. This brief outline is elaborated throughout the paper, but we start by discussing the value of the new approach in justifying resilience investments. In addition to the routine reluctance to pay more for improved electrical infrastructure, resilience investments are particularly hard to justify because they concern rarer large events that will recur, but at an indefinite time in the future1. There are indeed technical difficulties in estimating the impact of rare events, but arguably at least as important for practically improving resilience are the difficulties in communicating the impact of rare events and the benefits of investing to reduce their impact. The model-based approach can estimate with some approximations and assumptions the projected future benefits of a resilience investment. The data-driven approach can estimate the benefits that the investment would have had if the investment had previously been made. In some ways, the data-driven approach can be more persuasively tangible because it is related to people's lived past experience. For example, it could be persuasive to say that the community would have had 20% fewer customer minutes of outages over the past 10 years, or even more specifically that a particular, extreme wind event (such as the upper midwest USA derecho in August 2020 that caused \(\sim\)11 billion dollars of damage) would have had 20% fewer customer minutes out. Footnote 1: Note that reliability investments already address the common failures. The overall paper contribution is showing the feasibility of a new, entirely data-driven method of quantifying distribution system resilience to wind, particularly the change in standard metrics that would have occurred if overall investments in hardening or earlier or faster restoration had been made. 
There are no modeling assumptions, the data is readily available to utilities with an outage management system, and the computations are fast and relatively straightforward. The new quantification of resilience benefits of investments can be tangible to stakeholders and help support the case for investments. Section II reviews the literature, and section III describes the outage and weather data used. Section IV describes the area outage rate curves that quantify the wind resilience from data, and how they can be shifted to represent hardening. Section V explains how to extract resilience events from data and calculate their metrics. Section VI describes the sampling that represents the hardening and Section VII shows how improved restoration is represented. Results quantifying the improvement in metrics due to the hardening and improved restoration are presented in Section VIII. Sections IX and X give technical details of constructing area outage rate curves and tracking events after sampling. Finally, the paper contributions and conclusions are given in section XI. The paper elaborates and builds on the initial work presented in the MS thesis [6]. ## II Literature review Since high winds are a significant hazard for overhead power distribution systems [6, 7], there is substantial work studying the resilience of these systems to wind. We begin by briefly surveying a variety of methods that build and use models to quantify distribution system resilience. These methods form a useful complementary approach to our new data-driven quantification of resilience. Xu [8] presents a calculator for hurricanes that computes the costs and benefits of distribution system undergrounding or hardening given utility estimates of input data. The modeling includes exponential fragility models for poles, power law fragility models for span damage, simulated hurricanes, and restoration times estimated from crew availability. Ma [9] formulates system hardening, the impacts of extreme weather, and minimizing load shedding as a multi-level mixed-integer linear program to find an optimal hardening strategy. Arif [10] co-optimizes distribution system operation and repair crew routing for outage restoration after extreme weather events using a two-stage stochastic mixed integer linear program. Tan [11] finds an optimal hardening and repair sequence to minimize the expected energy not served using mixed integer linear programming and associated convex relaxations and heuristics. Tan [12] finds optimal repair sequencing when there is large-scale damage by solving a scheduling problem using approximations to linear programming and accounting for multiple faults obstructing the power to the same customer. Ouyang [13] models the resilience of Harris County, Texas, to Hurricane Ike using exponential fragility curves, a DC power flow network model, and high-level models of restoration resources and sequencing. Ciapessoni [14] evaluates the power grid's resilience in a mountainous area to the combined effects of wind, snow, and trees with a risk-based model. Poudel [15] simulates for a distribution of wind conditions the improvement in resilience risk of planned improvements using measures such as value at risk of energy not served. Wei [16] analyzes resilience to particular severe hurricanes with time-varying Poisson processes that model distribution system outage and repair processes as a queue. We now survey methods that analyze dependencies in distribution system data to describe wind resilience. 
Much of the analysis is for specific hurricanes. Davidson [17] studies resilience to hurricanes in the Carolinas using utility outage data, a combination of interpolated and simulated hurricane wind data, and land cover and rain data. Liu [18] constructs a spatial generalized linear mixed model of the number of hurricane and ice storm outages in zip codes as a function of weather, protection device density, and spatial data. Reed [19] studies the resilience to wind-induced damage in Hurricane Katrina. Jaech [20] estimates individual component repair times using a gamma distribution with parameters predicted using neural networks trained on utility outage records. Carrington [21] establishes methods for extracting events from utility outage data, computes standard resilience metrics for the events, and estimates restoration times. Cerrai [22] and Kezunovic [23] use machine learning methods based on multiple decision trees and logistic regression respectively to combine weather prediction, vegetation, outage and other data to predict the probability of storm outages in small areas of the distribution system. Now we review relevant work extracting fragility curves from wind and outage data for use in resilience models. Dunn [24] develops empirical fragility curves for overhead lines from 11 kV to 132 kV in the UK during wind storms, defined as winds exceeding 38 mph. The fragility curves are fit with power laws, and relate the number of faults per line length in areas of \(\sim\)2000 km\({}^{2}\) to the reanalyzed maximum wind gust over the storm duration in the areas. Reed [25] analyzes the fragility and outage duration of an urban distribution system for four different wind storms. Fragility curves show the fraction of affected feeder length as a function of the peak storm wind gust squared. Bjarnadottir [26] describes lognormal fragility curves for pole design subject to hurricane risk as well as reviewing deterministic pole design with safety factors. Murray [27] correlates reanalyzed wind gust data with transmission system faults in the UK to give an exponential fragility curve for transmission lines. Reliability addresses performance averaged over the year rather than resilience, but we acknowledge the extensive and useful tradition of evaluating distribution system reliability with steady-state Markov models [28] in which extreme weather is modeled with additional Markov states [29]. ## III Outage data and weather data Two different datasets are used, one with outage data recorded by a distribution utility in the USA and the other containing NOAA (National Oceanic and Atmospheric Administration) wind data for weather stations in that utility's service area. Both datasets cover the same time span of six years. The outage dataset contains 32 278 individual outages and has a one-minute resolution; i.e., all outages that occurred within one minute are labeled with the same timestamp. Each outage entry in the dataset corresponds to an outage of a component in the power distribution system and includes the location coordinates of that component, the number of customers affected during the outage, the starting and ending times of the outage, and cause codes.2 Footnote 2: 149 outage records with missing location information are removed. Further details about data cleaning and pre-processing are given in [6]. Each record in the weather dataset provides the average hourly wind speed, and various other weather details. Only the average hourly wind speed data is used here. 
There is data from multiple weather stations in the utility's service area. With the outage and weather stations' geographical locations known, each outage is associated with the weather station closest to the outage. This divides the distribution system into multiple areas, where each area contains all the outages closest to the weather station associated with that area. This paper analyzes the outages in two of these areas, as shown in Fig. 2.
Fig. 2: Geographical location of outages and the associated weather stations in two areas of a distribution system.
Area 1 and area 2 contain 7876 and 12 715 outages, respectively. The typical measured wind speeds at the weather station in area 2 are systematically slower than the typical measured wind speeds at the weather station for area 1. Differences in these two measurements are expected: Weather station 2 measures wind at 5 feet from the ground in a field surrounded by woods, whereas weather station 1 measures wind at 33 feet from the ground in a large flat area with no trees. The weather stations are 13 miles apart. ## IV Area outage rate curves This section explains area outage rate curves, how they quantify wind resilience, and how shifting them represents the effect of distribution system hardening. ### _Quantifying wind resilience with area outage rate curves_ Each area of the distribution system is associated with its weather station. The area outage rate curve specifies the mean outage rate of the area \(\overline{F}(v)\) as a function of the wind speed \(v\) measured at the weather station. The mean outage rate generally increases with wind speed, as shown by Figs. 3 and 4. The dots indicating the mean outage rates at integer wind speeds in Figs. 3 and 4 are calculated from the wind and outage data for areas 1 and 2; the details are given in section IX. The curves in Figs. 3 and 4 are exponential fits of the form \[\overline{F}(v)=ae^{bv} \tag{1}\] The exponential fit uses the Levenberg-Marquardt method (also known as the damped least-squares method) with a 99% confidence level for parameters and predictions. The parameter values obtained for the fit are \(a=1.44\times 10^{-7}\) and \(b=0.6\) for area 1, and \(a=0.006\) and \(b=0.48\) for area 2. The wind station in area 2 measures slower wind speeds than the wind station in area 1. This is expected as discussed in section III. The outage rate curve describes the resilience of the area with respect to the wind measurement at the weather station. The outage rate is low, except that it increases sharply for higher wind speeds. Other authors report a dependence of outage rate data on wind speed of a similar form and use exponential [8, 27, 30] or power law [8, 24] fits to describe this dependence. ### _Hardening shifts the area outage rate curves_ Overhead power line components such as poles are designed to withstand their rated wind speed. Hardening upgrades or reinforces the components. 
Since the mean outage rate generally increases with wind speed, \(\overline{F}_{new}(v)<\overline{F}(v)\) so that the hardening reduces the mean outage rate at wind speed \(v\). This reduction in the outage rate is implemented with sampling in section VI. In the case of the exponential area outage rate curve (1), the reduction in outage rate takes the simple form of multiplying the outage rate by the same factor \(e^{-bx}\) at all wind speeds: Footnote 3: A right-shift is also used to model the failure rate data for hardened transmission structures in [30, Figs. 5-4 and 5-5] \[\overline{F}_{new}(v)=\overline{F}(v-x)=ae^{b(v-x)}=\overline{F}(v)e^{-bx} \tag{2}\] ### _Comparing area outage rate curves and fragility curves_ Fragility curves for wind describe the probability of component failure or failure per kilometer of line as a function of wind speed [24, 25, 27]. Area outage rate curves have some similarities and differences with fragility curves, so it is interesting to compare them. Area outage rate curves give the mean outage rate of an area as a function of the measurement of wind speed at a particular weather station, whereas fragility curves give the probability of failure of a component as a function of the wind speed (at least conceptually) at that component. The wind speed at any given component in the area is correlated with, but different than the wind speed at any particular weather station. The area contains many types of components that can cause an outage (poles, lines, insulators, etc.) and these components can also differ in manufacturer, age, elevation, topography, local environment, tree cover, and condition and are subject to different wind conditions. Since the area outage rate curve is directly obtained from historical data, it incorporates all these types and variations, and describes the aggregated response of the area in terms of outages with respect to the wind speed at the particular weather station. One notable difference is in the use of the two curves: area outage rate curves bypass any component modeling to directly describe the aggregated resilience of the entire distribution system area with respect to a particular wind measurement, whereas fragility curves are used in models of components to design that component or to compute the resilience of many similar components. Indeed, it is not appropriate to directly substitute an area outage rate curve for a component fragility curve in models. ## V Events, processes and resilience metrics A necessary step in the data processing groups outages into resilience events and then calculates several metrics for each event. The particular metrics we use are among the typical resilience metrics proposed and explained in references such as [31, 32], allowing for the observation that in real data the theoretically successive phases of resilience usually overlap [21]. The focus on events, particularly the larger ones, and the metrics for events instead of average performance over a year, make this a resilience analysis. To define the resilience events and automatically extract them from the distribution system data we use the method Fig. 4: Area outage rate curve of area 2. Dots are the mean outage rate at each wind speed from data, and the curve is an exponential fit. Fig. 5: Comparison of original and shifted area outage rate curve of area 2 after 1 mph hardening to wind hazard. Fig. 3: Area outage rate curve of area 1. Dots are the mean outage rate at each wind speed from data, and the curve is an exponential fit. 
in [21, 33]. The start of an event is defined by an initial outage that occurs when all components are operational, and the end of the same event is defined by the first subsequent time when all the components are restored. We write \(n\) for the number of outages in an event. If we write \(o_{1}\) for the start time of the first outage and \(r_{n}\) for the time of the last restore, then the event occurs over the time interval \([o_{1},r_{n}]\). Two example events are shown in Fig. 6. Performance curves that track in time the negative of the cumulative number of unrestored outages or customers or other quantities are routinely used in studies of resilience to track the progress in time of resilience events [1, 2, 3, 15, 31]. Accordingly, we define the component performance curve \(P(t)\) as the negative of the cumulative number of unrestored outages in an event. (\(P(t)\) is also the cumulative number of restores at time \(t\) minus the cumulative number of outages at time \(t\) [21].) Component performance curves for two events are shown in Fig. 6. \(P(t)\) decrements by one when there is an outage and increments by one when there is a restore. In particular, \(P(t)\) is initially zero, the event starts at \(o_{1}\) when the cumulative number of failures \(P(t)\) first decrements from zero, and ends at \(r_{n}\) when \(P(t)\) increases to return to zero. The events group together the successive outages that have some overlap in duration. Events in our distribution utility data (2944 total events in area 1 and 3706 in area 2) are of all sizes, ranging from a single outage that is restored without involving any other outages to the largest event with more than 100 overlapping outages. In an event with \(n\) outages, we write \(o_{1}\leq o_{2}\leq...\leq o_{n}\) for the outage times in the order in which they occur and \(r_{1}\leq r_{2}\leq...\leq r_{n}\) for the restore times in the order in which they occur. The outages happen in the time interval \([o_{1},o_{n}]\) and the restores happen in the time interval \([r_{1},r_{n}]\), and typically in real data the restores start before the outages end so that these time intervals overlap. We write \(c_{k}\) for the number of customers outaged at the \(k\)th outage. The component performance curve \(P(t)\) tracking the number of unrestored outages easily generalizes to a customer performance curve \(P^{\mathrm{cust}}(t)\) that tracks the number of unrestored customers: \(P^{\mathrm{cust}}(t)\) is the negative of the cumulative number of unrestored customers in an event. It is now straightforward to give formulas for the resilience metrics that we evaluate for each event:
* event size = number of outages \(=n\)
* outage hours = area under performance curve \(=r_{1}-o_{1}+r_{2}-o_{2}+...+r_{n}-o_{n}=-\int_{o_{1}}^{r_{n}}P(t)dt\)
* event duration \(=r_{n}-o_{1}\)
* time to first restore \(=r_{1}-o_{1}\)
* restore duration \(=r_{n}-r_{1}\)
* restore rate \(=n/(r_{n}-r_{1})\)
* outage rate \(=n/(o_{n}-o_{1})\)
* customers out \(=c_{1}+c_{2}+...+c_{n}\)
* customer hours = area under customer performance curve \(=c_{1}(r_{1}-o_{1})+c_{2}(r_{2}-o_{2})+...+c_{n}(r_{n}-o_{n})=-\int_{o_{1}}^{r_{n}}P^{\mathrm{cust}}(t)dt\)
The two expressions for outage hours, or for customer hours, are shown to be equal in [34]. Note that dividing the customer hours for an event by the number of customers gives the contribution of that event to SAIDI, assuming for the larger events that major event days are included in SAIDI. 
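The event definition and the metric formulas above translate directly into code. Here is a minimal sketch (ours, not the authors' implementation; the data layout and sample values are hypothetical) that groups outages into events by tracking when all components are simultaneously restored, then evaluates several of the metrics:

```python
from typing import Dict, List, Tuple

# Each outage is (outage_time, restore_time, customers_out); times in hours.
Outage = Tuple[float, float, int]

def group_into_events(outages: List[Outage]) -> List[List[Outage]]:
    """An event starts at an outage when all components are operational and
    ends at the first subsequent time when all components are restored."""
    outages = sorted(outages)                  # sort by outage time o_k
    events: List[List[Outage]] = []
    current: List[Outage] = []
    frontier = float("-inf")                   # latest restore time seen so far
    for o, r, c in outages:
        if current and o > frontier:           # all restored before this outage
            events.append(current)
            current, frontier = [], float("-inf")
        current.append((o, r, c))
        frontier = max(frontier, r)
    if current:
        events.append(current)
    return events

def event_metrics(event: List[Outage]) -> Dict[str, float]:
    o = sorted(x[0] for x in event)            # o_1 <= ... <= o_n
    r = sorted(x[1] for x in event)            # r_1 <= ... <= r_n
    n = len(event)
    return {
        "event size": n,
        "outage hours": sum(ri - oi for oi, ri in zip(o, r)),
        "event duration": r[-1] - o[0],
        "time to first restore": r[0] - o[0],
        "restore duration": r[-1] - r[0],
        "customers out": sum(x[2] for x in event),
        # per-component durations; equal to the area under P_cust(t), cf. [34]
        "customer hours": sum((ri - oi) * c for oi, ri, c in event),
    }

# Example: one isolated outage, then a 3-outage event with overlapping durations.
outages = [(0.0, 1.0, 40), (5.0, 8.0, 10), (6.0, 9.0, 25), (7.0, 10.0, 5)]
for ev in group_into_events(outages):
    print(event_metrics(ev))
```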
## VI Outage sampling to get the average metrics for reduced outage rates This section describes the sampling from the historical outages to select a reduced number of outages that represents hardening. The resilience metrics are recalculated for many such samples and then averaged to find the average improvements in the metrics. This metric calculation is applied separately to small, medium, and large events. Suppose that the mean outage rate \(\overline{F}(v)\) at wind speed \(v\) is calculated from the \(k\) outages \(\{e_{1},...,e_{k}\}\). According to section IV-B, a shift in the area outage rate curve gives the new outage rate \(\overline{F}_{\mathrm{new}}(v)\) at wind speed \(v\). To realize this reduced outage rate, we randomly sample \(k_{\mathrm{new}}\) outages from \(\{e_{1},...,e_{k}\}\) where, in general, and writing \(\lceil\cdot\rceil\) for rounding up to the next integer, \[k_{\mathrm{new}}=\left\lceil k\frac{\overline{F}_{\mathrm{new}}(v)}{\overline{F}(v)}\right\rceil \tag{3}\] But in our case of exponential outage rate curves, (3) simplifies using (2) to \[k_{\mathrm{new}}=\lceil ke^{-bx}\rceil \tag{4}\] The sampling method is simple sampling with replacement and it is applied at each wind speed to obtain a new set of outages at all wind speeds that realizes the new area outage rate curve and the effect of the hardening. We calculate the new resilience metrics for the new set of outages. It is convenient to write \(M\) for one of these resilience metrics, and \(M_{\mathrm{new}}^{(1)}\) for the metric evaluated on the new set of outages at all wind speeds. The entire sampling and metric evaluation procedure is then repeated \(m\) times to obtain the new metrics \(M_{\mathrm{new}}^{(1)}\), \(M_{\mathrm{new}}^{(2)}\),..., \(M_{\mathrm{new}}^{(m)}\). Finally, the average new metric is computed as \[\overline{M}_{\mathrm{new}}=\frac{1}{m}\sum_{i=1}^{m}M_{\mathrm{new}}^{(i)} \tag{5}\] An explanation for this procedure is that while the shift in the outage rate curve determines the new reduced number of outages \(k_{\mathrm{new}}\) at each wind speed, it does not determine which outages are to be omitted when realizing this reduced number of outages. That is, we do not know which outages at each wind speed will be removed by the hardening. Therefore we compute the average new metric \(\overline{M}_{\mathrm{new}}\) for random samples of the reduced number of outages.
Fig. 6: An event with one outage and an event with 3 outages. Above the time axis, each outage start time (open circle) and restore time (dot) are shown. Below the time axis is the performance curve \(P(t)\) for each event.
One complication is that the sampling can sometimes remove outages from an event in such a way that the event splits into smaller events. This complication is handled with super events as explained in section X. For our calculations, the number of repetitions of the sampling procedure is chosen as \(m=2000\) to ensure that the confidence interval \(\overline{M}_{\text{new}}\pm 0.01\) contains the true value of the mean metric with probability 99% or greater. \(m=2000\) is obtained as follows: Since the distributions of the sampled metrics \(M_{\text{new}}\) are observed to be approximately normal, the half width \(d\) of a \(99\%\) confidence interval for the mean \(\overline{M}_{\text{new}}\) satisfies \[d\leq t_{0.005,m-1}\frac{s}{\sqrt{m}} \tag{6}\] where \(s\) is the sample standard deviation of the metric samples and \(t_{0.005,m-1}\) is the 99.5% percentile of the Student-\(t\) distribution with \(m-1\) degrees of freedom. 
We take \(d=0.01\) and increase \(m\) until (6) is satisfied for each metric. There is a clear pattern in the data of far more smaller events and much fewer large events. For example, Fig. 7 shows the empirical probability distribution of event size for area 1 on a log-log scale. This pattern affects the processing of the results, because if one averages all the results together, the smaller events will dominate the average. To address this, and particularly because resilience must have some focus on the large events, we divide the events into small events (1 or 2 outages), medium events (3 to 15 outages), and large events (16 or more outages). Area 1 has 2386 small events, 526 medium events, and 32 large events, and area 2 has 2845 small events, 773 medium events and 88 large events. Average metrics for small, medium, or large events can distinguish the resilience performance for these different sizes of events, while still having enough large events to give usable estimates of the average metrics for large events. ## VII Representing earlier or faster restoration This section represents the effects of improved restoration by modifying the restore times of the historical data. Recall that in an event with \(n\) outages, we write \(r_{1}\leq r_{2}\leq...\leq r_{n}\) for the restore times in the order in which they occur. The outage times of the components that are restored in this restore order are written as \(o_{\pi(1)},o_{\pi(2)},...,o_{\pi(n)}\). The components do not usually outage in the order in which they are restored, so the \(\pi\) function permutes the order to account for this. We represent the improved restoration in two ways. First, the repair can start earlier by providing more resources for identifying, locating, and automatically resolving faults; this includes investing in more sensors, switches, communications, meters, and reclosers as well as more crews to inspect the lines and clear debris. Let the change in start time be specified by \(t_{\text{earlier}}\), then the new restore times for the event are \[r_{k}^{\text{new}}=\max\{r_{k}-t_{\text{earlier}},o_{\pi(k)}\},\quad k=1,...,n. \tag{7}\] Taking the maximum with \(o_{\pi(k)}\) in (7) limits the new restore time so that \(r_{k}^{\text{new}}\geq o_{\pi(k)}\); restoration of a component must occur after its outage. Second, the rate of restoration can be increased and the restoration duration decreased by investing in more repair crews, better stocks of spare parts, and better route scheduling. Let the faster restore duration be specified by multiplying by a factor \(c_{\text{faster}}<1\). Then the restore duration of the \(k\)th restore \(r_{k}-r_{1}\) is reduced by a factor of \(c_{\text{faster}}\), as long as the new restore time occurs after its corresponding outage: \[r_{k}^{\text{new}}=\max\{r_{1}+(r_{k}-r_{1})c_{\text{faster}},o_{\pi(k)}\},\ k=1,...,n. \tag{8}\] ## VIII Results This section presents a case study of the impacts of a hardening and improved restorations on the resilience metrics for areas 1 and 2 of the distribution system. ### _Base case resilience metrics_ The base case is the historical outages without any modifications. Table I shows the base case average values of the resilience metrics for small events (1 or 2 outages), medium events (3-15 outages), and large events (\(\geq\)16 outages). Considering the different sizes of events separately and with some special attention to the large events is needed for this quantification of resilience, as explained at the end of section VI. 
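The sampling of section VI and the restoration changes of section VII are both small transformations of the historical data. Here is a minimal sketch of the two (our illustration, not the authors' code; `outages_by_speed` maps each wind speed to its historical outages, and `metric` stands for any section V metric evaluated on a set of outages):

```python
import math
import random

def sample_hardened(outages_by_speed, b, x):
    """One realization of the outages remaining after an x mph hardening:
    equation (4) keeps k_new = ceil(k * exp(-b * x)) outages at each speed,
    drawn by simple sampling with replacement as in section VI."""
    sampled = []
    for outages in outages_by_speed.values():
        k_new = math.ceil(len(outages) * math.exp(-b * x))
        sampled.extend(random.choices(outages, k=k_new))
    return sampled

def average_metric(outages_by_speed, b, x, metric, m=2000):
    """Equation (5): average the metric over m sampled realizations."""
    return sum(metric(sample_hardened(outages_by_speed, b, x))
               for _ in range(m)) / m

def restore_earlier(o, r, t_earlier):
    """Equation (7): start restoration t_earlier sooner, clamped so that a
    component is never restored before its own outage. o[k] plays the role
    of o_pi(k), the outage time of the component with the (k+1)th restore."""
    return [max(rk - t_earlier, ok) for rk, ok in zip(r, o)]

def restore_faster(o, r, c_faster):
    """Equation (8): compress the restore durations r_k - r_1 by a factor
    c_faster < 1, with the same clamping."""
    return [max(r[0] + (rk - r[0]) * c_faster, ok) for rk, ok in zip(r, o)]

# Example event: components outaged at hours 0, 1, 1.5, restored at 2, 5, 9.
o, r = [0.0, 1.0, 1.5], [2.0, 5.0, 9.0]
print(restore_earlier(o, r, 1.0))   # [1.0, 4.0, 8.0]
print(restore_faster(o, r, 0.5))    # [2.0, 3.5, 5.5]
```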
As expected, all the metrics (except time to first restore) clearly show the increased impact on customers as the event size increases from small to medium to large. The average resilience metrics show that area 1 has greater customer impacts than area 2 for large events. ### _Change in metrics due to hardening_ The hardening for each area with respect to wind is represented by an increased mile per hour wind rating, which gives a percentage reduction in the outage rate (see section IV-B) that is implemented by sampling a reduced number of outages (see section VI). For the case study, it is convenient to consider a hardening that gives a 10% reduction in outage rate for both areas. This 10% reduction in the outage rate corresponds to 0.18 mph wind hardening for area 1 and 0.22 mph wind hardening for area 2. The 10% reduction in outage rate is implemented by sampling 10% fewer outages, so that the hardening reduces the average event size (number of outages) by exactly 10%, as confirmed in Table I. The hardening also reduces the average outage hours by exactly 10%. This result follows from a resilience metric formula in [34].4
Fig. 7: Empirical distribution of event size in area 1 (log-log scale)
Footnote 4: For each event, [34, (17)] gives the outage hours as the number of outages times the average outage duration, so sampling 10% fewer outages reduces the expected outage hours by 10%. ### _Comparing hardening and improved restoration_ Two overall options are available for power system resilience investments. One is to invest in hardening and the other is to invest in improved restoration. The hardening invests in the infrastructure, whereas improved restorations invest in the crews and their resources and automated actions. Our results confirm the general observation that hardening reduces the event size (number of outages) while affecting event durations less, whereas improved restorations decrease the outage durations but do not affect the event size. The results in Table I show that a hardening decreasing outage hours and event size by 10% also decreases customer hours and customers out by approximately 10%, irrespective of small, medium or large events. The customer hours are particularly important in assessing the impact of power outages on consumers. On the other hand, the improved restorations to achieve a 10% reduction in the mean outage hours for large events also provide almost the same percentage decrease in customer hours. However, unlike hardening, the event size and customers out remain unchanged. So the customers would still face power outages but get restored more quickly. All these results show how different overall investments would have changed the various resilience metrics and customer impacts. This quantifies the various benefits that the investments would have made to the utility and their customers. ## IX Constructing area outage rate curves This section explains how to align and process the outage and wind data to construct the area outage rate curves. Since the outage times are recorded to the nearest minute and the wind speeds are recorded hourly5, we need to interpolate the wind speeds. Let \(V(t)\) be the piecewise linear interpolation of the wind data as shown in Fig. 8. \(V^{-1}(v)=\{t\mid V(t)=v\}\) defines the set of time instants with wind speed \(v\). Fig. 8 shows an example of \(V^{-1}(v)=\{t_{1},t_{2}\}\) when the wind speed value is \(v\) at times \(t_{1}\) and \(t_{2}\). Footnote 5: Eighteen outages with a time difference of more than 201 minutes and low wind speeds are omitted from further analysis. 
Some of the wind speed data is at 15-minute intervals or at intervals longer than one hour. The outage times are quantized to the nearest minute. Therefore, if \(t\) is an exact number of minutes, then the outage rate \(R(t)\) per minute is given by the number of outages recorded in the area at minute \(t\). Further, if \(t\) is not an exact number of minutes, the outage rate at \(t\) is given by \(R(\mathrm{round}[t])\), where the rounding is to the nearest minute. The outage rate curve \(\overline{F}(v)\) is the mean outage rate at wind speed \(v\); that is, the mean of the outage rates over all the times at which the wind speed was \(v\): \[\overline{F}(v)=\frac{1}{|V^{-1}(v)|}\sum_{t\in V^{-1}(v)}R(\mathrm{round}[t]) \tag{9}\] ## X Sampling and super events Since the sampling removes outages, the remaining outages that were in the same event before sampling may not all be overlapping after sampling and so can sometimes split into two or more events. We call the set of events arising after sampling in this way from one event before sampling a "super event", and this section explains super events. For example, consider the timeline plot of a typical event \(E=\{e_{1}\), \(e_{2}\), \(e_{3}\), \(e_{4}\), \(e_{5}\), \(e_{6}\), \(e_{7}\), \(e_{8}\), \(e_{9}\}\) with 9 outages before sampling as shown in Fig. 9. If the sampling removes \(e_{1}\), then the remaining outages still form one event and the super event is \(\{\{e_{2}\), \(e_{3}\), \(e_{4}\), \(e_{5}\), \(e_{6}\), \(e_{7}\), \(e_{8}\), \(e_{9}\}\}\). However, if the sampling removes \(e_{4}\) and \(e_{5}\) then the remaining outages form two events since the system is fully restored when outage \(e_{3}\) is restored and the super event is \(\{\{e_{1},e_{2},e_{3}\},\{e_{6},e_{7},e_{8},e_{9}\}\}\). The changes in the sizes and numbers of events due to sampling cause problems when the metrics of events before and after sampling are compared: Basic to the analysis is the classification of events by size based on their number of outages, and the variable reduction in event size, and especially events splitting into multiple smaller events, interferes with tracking the effect of the sampling on event metrics and disrupts the effect of the sampling on the categories of small, medium, and large events, since the events can change categories after sampling. These problems are resolved by keeping track of all the events arising after sampling from one event before sampling in a super event, and appropriately defining the metric of a super event as follows: Consider a metric \(M\) that can be evaluated on an event \(E\) as \(M[E]\), and a super event \(\{E_{1},E_{2},...,E_{p}\}\) that has events \(E_{1},E_{2},...,E_{p}\) after sampling. Then for the metrics event size (number of outages), outage hours, restore duration, number of customers out, and customer hours, we define the metric evaluated on the super event as \[M[\{E_{1},E_{2},...,E_{p}\}]=M[E_{1}]+M[E_{2}]+...+M[E_{p}]\] and for the metrics restore rate, outage rate, and time to first restore, we replace summation by the average to define \[M[\{E_{1},E_{2},...,E_{p}\}]=\tfrac{1}{p}(M[E_{1}]+M[E_{2}]+...+M[E_{p}])\] Events with only one outage can disappear if that outage is removed by sampling. In this case, the super event is the empty set \(\{\,\}\) and all the metrics evaluate to zero. 
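Putting the section IX construction together with the exponential fit of section IV-A, the area outage rate curve computation can be sketched as follows (our illustration; the synthetic wind and outage arrays are placeholders for the utility data, and with no bounds `scipy.optimize.curve_fit` defaults to the Levenberg-Marquardt method used in the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t_wind = np.arange(0.0, 72.0)                  # hours with wind measurements
v_wind = rng.uniform(0.0, 30.0, t_wind.size)   # measured wind speeds (mph)
R = rng.poisson(0.01, 72 * 60)                 # outage counts per minute, R(t)

def mean_outage_rate(v):
    """Equation (9): average R over the times where the piecewise linear
    interpolation V(t) of the hourly wind samples crosses the speed v."""
    rates = []
    for i in range(t_wind.size - 1):
        v0, v1 = v_wind[i], v_wind[i + 1]
        if v0 != v1 and min(v0, v1) <= v <= max(v0, v1):
            t = t_wind[i] + (v - v0) / (v1 - v0)   # crossing time in hours
            rates.append(R[int(np.rint(t * 60))])  # R(round[t]) in minutes
    return np.mean(rates) if rates else np.nan

speeds = np.arange(1, 31)
Fbar = np.array([mean_outage_rate(v) for v in speeds])
ok = np.isfinite(Fbar)

# Equation (1): fit Fbar(v) = a * exp(b * v) to the mean outage rates.
(a, b), _ = curve_fit(lambda v, a, b: a * np.exp(b * v),
                      speeds[ok], Fbar[ok], p0=(1e-3, 0.1), maxfev=20000)
print(f"a = {a:.3g}, b = {b:.3g}")
```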
Fig. 8: Linear interpolation of wind speed data

## XI Conclusions

This paper combines historical outage and weather data to construct area outage rate curves that quantify the resilience of areas of a distribution system to wind. An investment hardening the distribution system would have shifted the area outage rate curves and reduced the outage rates, and the effect of this on resilience metrics is quantified by sampling a reduced number of the historical outages and recalculating the resilience metrics. The effect of an improvement in restoration times is also quantified by advancing or speeding up the historical restoration and recalculating the metrics. The resilience metrics include the event size (number of outages), durations, rates, and customer hours evaluated on resilience events of different sizes. These data-driven calculations quantify the impact on customers that previous investments would have had. Overall, we initiate a new approach towards resilience quantification. Specific contributions and attributes of this new approach are:

* Quantifying the impact on customers that a resilience investment would have made in the past gives a novel and credible way to justify the benefits of the investment that can be tangible to utilities, communities, and regulators, because it clearly shows how the lived past experience of customers would have been improved. This is significant since effective ways to justify resilience investments to stakeholders are essential for practically implementing resilience. Quantifying the effect that the investment would have made in the past complements and augments justifications for resilience investments that rely on projections into the future with models.
* Modifying historical data is an entirely new way to quantify resilience and resilience investments that calculates the changes in standard resilience metrics from the effects that the investments would have had. This approach, directly driven by data, has clear advantages in realism in accounting for all the conditions that the power system experienced over the period of observation, including variations in space and time in weather, load, upgrades, operating procedures and restoration policies, and component design, location, conditions, and maintenance. No modeling assumptions are made.
* We construct area outage rate curves that quantify the wind resilience of an area of a distribution system directly from data by describing how the mean outage rate of the area increases as a specific nearby wind measurement increases. The area outage rate curves have a similar form to component fragility curves, but describe the empirical aggregate area response in terms of outages rather than the response to wind experienced at specific components of the distribution system.
* The overall effects of resilience investments are simply represented by an earlier or faster restoration or by a hardening that increases wind strength by a given number of miles per hour. This enables a novel and credible comparison of investment in hardening versus investment in better restoration in terms of customer impact.
* The technical aspects of the calculations include: (a) the segregation of resilience events into small, medium, and large events to get a meaningful assessment of the resilience of events of different sizes with metrics, (b) super events to track the metrics of events that split into smaller events when outages are removed, and (c) leveraging recent work [21] that automatically extracts events of all sizes from utility outage data and calculates a range of metrics for each event.
* The method is limited to a net reduction in historical outages; it does not synthesize additional new outages. Note that this limitation does not rule out incorporating some future effects. For example, the effect of a future increase in the average wind speed can still be represented, as long as it is offset by sufficient hardening so that the net outage rate decreases. Also, the effect of a percentage increase in the rate of events is straightforward to evaluate because the metrics per event are unchanged, and any metrics that accumulate over a time period increase by the same percentage. The feasibility of these extensions is significant since wind speeds and storm frequencies are expected to increase with climate change.
* The outage data and weather data required to use this approach are easily available to any distribution utility with an outage management system, and the computations are relatively straightforward and fast.

There is promising scope for extending the methods of this paper in future work. Other wind data could be tested and evaluated. Other hazards such as flood or icing could be considered, and outage cause code information could be related to the hazards and leveraged. Detailed model-based approaches could be used to directly link specific engineering improvements to the overall changes in hardening and faster restoration that model the investments in this paper. If cost estimates and probability estimates of rarer events can be improved, then the risk and improvements to risk could be quantified. This paper has established the feasibility and useful characteristics of a new approach to quantifying resilience and the benefits of investments in resilience from historical data, and we are confident that further developments can follow.
2308.10830
Scintillated microlensing: measuring cosmic distances with fast radio bursts
We propose a novel means of directly measuring cosmological distances using scintillated microlensing of fast radio bursts (FRBs). In standard strong lensing measurements of cosmic expansion, the main source of systematic uncertainty lies in modeling the mass profile of galactic halos. Using extra-galactic stellar microlensing to measure the Hubble constant avoids this systematic uncertainty as the lens potential of microlenses depends only on a single parameter: the mass of the lens. FRBs, which may achieve nanosecond precision on lensing time delays, are well-suited to precision measurements of stellar microlensing, for which the time delays are on the order of milliseconds. However, typical angular separations between the microlensed images on the order of microarcseconds make the individual images impossible to spatially resolve with ground-based telescopes. We propose leveraging scintillation in the ISM to resolve the microlensed images, effectively turning the ISM into an astrophysical-scale interferometer. Using this technique, we estimate a 6\% uncertainty on $H_0$ from a single observed scintillated microlensing event, with a sub-percent uncertainty on $H_0$ achievable with only 30 such events. With an optical depth for stellar microlensing of $10^{-3}$, this may be achievable in the near future with upcoming FRB telescopes.
Anna Tsai, Dylan L. Jow, Daniel Baker, Ue-Li Pen
2023-08-18T16:04:39Z
http://arxiv.org/abs/2308.10830v2
# Scintillated microlensing: measuring cosmic distances with fast radio bursts

###### Abstract

We propose a novel means of directly measuring cosmological distances using scintillated microlensing of fast radio bursts (FRBs). In standard strong lensing measurements of cosmic expansion, the main source of systematic uncertainty lies in modeling the mass profile of galactic halos. Using extra-galactic stellar microlensing to measure the Hubble constant avoids this systematic uncertainty as the lens potential of microlenses depends only on a single parameter: the mass of the lens. FRBs, which may achieve nanosecond precision on lensing time delays, are well-suited to precision measurements of stellar microlensing, for which the time delays are on the order of milliseconds. However, typical angular separations between the microlensed images on the order of microarcseconds make the individual images impossible to spatially resolve with ground-based telescopes. We propose leveraging scintillation in the ISM to resolve the microlensed images, effectively turning the ISM into an astrophysical-scale interferometer. Using this technique, we estimate a 6% uncertainty on \(H_{0}\) from a single observed scintillated microlensing event, with a sub-percent uncertainty on \(H_{0}\) achievable with only 30 such events. With an optical depth for stellar microlensing of \(10^{-3}\), this may be achievable in the near future with upcoming FRB telescopes.

## I Introduction

The direct measurement of cosmological distances remains a fundamental challenge in cosmology. Strong lensing is one of the few methods capable of making direct measurements; however, traditional strong lensing techniques suffer from systematic uncertainties due to line-of-sight contamination and difficulty in modeling the lens mass profile. The appeal of strong lensing methods lies in their ability to provide \(H_{0}\) measurements independent of Type Ia supernovae measurements and the cosmic microwave background (CMB): the main drivers of the so-called "Hubble tension". Ref. [1] uses galactic lensing of quasars to measure time delay distances, resulting in a Hubble constant of \(73.3^{+1.7}_{-1.8}\) (km/s)/Mpc. This measurement is in agreement with local Type Ia supernova measurements, but in \(3.1\sigma\) tension with the _Planck_ CMB measurements (\(H_{0}=68.20\pm 0.63\) (km/s)/Mpc [2]). The distance measurements used to infer \(H_{0}\) accumulate errors of between 2% and 8% from uncertainties in the lens-profile modelling and between 2.7% and 6.4% from line-of-sight effects [1].

We propose a new strong lensing method of measuring cosmological distance using stellar microlensed extra-galactic fast radio bursts (FRBs). The term "microlensing", here, refers to lensing by stellar-mass, compact objects whose lensing potential is unambiguously determined by the total mass, thereby avoiding the systematic uncertainties associated with modeling the complicated mass profiles of galactic halos. Fast radio bursts (FRBs) are extremely bright, short flashes of coherent radio emission at cosmological distances. Since FRBs are coherent sources of radiation, uncertainties in measurements of the lensing time-delays are limited by the wavelength of the light, potentially yielding nanosecond precision [3]. This is a many-orders-of-magnitude improvement on the standard quasar time-delay precision of tens of days.
In particular, this remarkable precision will make FRBs sensitive to stellar microlenses, which will have typical time delays of order milliseconds [4]. Time delays alone are not sufficient to derive a distance with which to measure cosmic expansion; a measurement of the angular separation of the microlensed images is also needed. However, typical angular separations for microlensed FRBs are on the order of a microarcsecond, well below the resolving power of even the largest baselines on earth. To resolve the angular separation between the microlensed images, we propose to use the scintillation of the lensed images due to multi-path propagation in the Milky Way interstellar medium (ISM) to effectively turn the ISM into an astrophysical-scale interferometer. A substantial fraction of FRBs are observed to scintillate [5; 6], and therefore, microlensed FRBs will also scintillate, making FRBs amenable to this technique. We estimate that a single scintillated microlensing event can achieve a 6% uncertainty on \(H_{0}\), and anticipate that enough observations of such lensing events are realistically obtainable in the next few years to achieve sub-percent uncertainty on \(H_{0}\). A direct measurement of the Hubble constant to percent accuracy would be competitive with other local probes of cosmic expansion and would inform debates on the Hubble tension [7].

## II Microlensing

We consider a microlensed FRB that is also scintillated due to scattering in the ISM (Fig. 1). The Fermat potential for microlensing is given by [8]:

\[T(\mathbf{\theta},\mathbf{\beta})=\frac{D_{d}D_{s}}{2D_{ds}}|\mathbf{\theta}-\mathbf{\beta}|^{2}-4GM\log|\mathbf{\theta}|, \tag{1}\]

where \(D_{d}\), \(D_{s}\), and \(D_{ds}\) denote distances shown in Fig. 1, \(\beta\) is the two-dimensional angle between source and optical axis, \(\theta\) is the two-dimensional angle between optical axis and image, and \(M\) is the microlens mass. The angular positions of the two microlensed images are given by the stationary points of the Fermat potential:

\[\mathbf{\theta_{\pm}}=\frac{\mathbf{\beta}}{2\beta}\left(\beta\pm\sqrt{\frac{16GM}{D}+\beta^{2}}\right), \tag{2}\]

where \(\beta=|\mathbf{\beta}|\) and \(D\equiv\frac{D_{d}D_{s}}{D_{ds}}\), and the labels \(\pm\) refer to the brighter and dimmer microlensed images, respectively. For stellar microlenses, we are safely in the geometric optics regime [4], and the effect of lensing is to magnify the two images by an amount

\[\mu_{\pm}=\frac{\beta^{2}+8GM/D}{2\beta\sqrt{\beta^{2}+16GM/D}}\pm\frac{1}{2}. \tag{3}\]

There are three observables that one can measure from a microlensing event: the relative time delay, angular separation, and magnification between the two images. We define these observables as follows:

\[\Delta T(D,\beta,M)\equiv T(\mathbf{\theta_{+}})-T(\mathbf{\theta_{-}}), \tag{4}\]
\[\Delta\theta(D,\beta,M)\equiv|\mathbf{\theta_{+}}-\mathbf{\theta_{-}}|, \tag{5}\]
\[\rho(D,\beta,M)\equiv\frac{\mu_{+}}{\mu_{-}}. \tag{6}\]

The effective distance, \(D\), is uniquely determined by these three observables, and is given by:

\[D=\frac{4G}{\Delta\theta^{2}}\left(\frac{(4+y^{2})\Delta T}{8G\log\left(\frac{\sqrt{4+y^{2}}-y}{2}\right)-2G\,y\sqrt{4+y^{2}}}\right), \tag{7}\]

where \(y^{2}=\frac{\rho+1}{\sqrt{\rho}}-2\). Thus, measuring the three observables allows one to infer the effective distance, \(D\).

Figure 1: An FRB at a distance \(D_{s}\) away from the observer, with an angular position given by the two-dimensional angle \(\beta\) from the chosen optical axis, experiences microlensing from a star located at a distance \(D_{d}\) from the observer. The microlensed images pass through a scintillating screen in the Milky Way ISM at a distance from the observer that is much smaller than the distances to the lens and source. The images formed by scattering in the ISM are separated by angle \(\alpha\), whereas the microlensed images are separated by angle \(\Delta\theta\).
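As a concreteness check on Eqs. (2)-(6), the sketch below evaluates the observables for a point-mass lens. Factors of \(c\) are restored (the equations above set \(c=1\)), and the example numbers (a solar-mass lens at an effective distance of 1 Gpc) are illustrative only.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8        # SI units
M_SUN, GPC = 1.989e30, 3.086e25  # kg, m

def microlens_observables(M, D, beta):
    """Observables of Eqs. (2)-(6) for a point-mass lens of mass M [kg],
    effective distance D = Dd*Ds/Dds [m], and source angle beta [rad]."""
    thetaE2 = 4 * G * M / (c**2 * D)                          # Einstein angle squared
    root = np.sqrt(beta**2 + 4 * thetaE2)
    theta_p, theta_m = (beta + root) / 2, (beta - root) / 2   # Eq. (2)
    mu = (beta**2 + 2 * thetaE2) / (2 * beta * root)          # Eq. (3), common part
    mu_p, mu_m = mu + 0.5, mu - 0.5
    # Fermat potential of Eq. (1) in seconds, with factors of c restored
    T = lambda th: D * (th - beta)**2 / (2 * c) - 4 * G * M / c**3 * np.log(abs(th))
    dT = T(theta_m) - T(theta_p)   # magnitude of Eq. (4); the dimmer image arrives later
    return dT, theta_p - theta_m, mu_p / mu_m                 # Eqs. (4)-(6)

M, D = 1.0 * M_SUN, 1.0 * GPC
thetaE = np.sqrt(4 * G * M / (c**2 * D))                      # ~1.4e-11 rad ~ 3 uas
dT, dtheta, rho = microlens_observables(M, D, beta=3 * thetaE)
print(f"dT = {dT * 1e3:.3f} ms, dtheta = {np.degrees(dtheta) * 3.6e9:.1f} uas, rho = {rho:.0f}")
```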
For stellar mass microlenses, the time delays are of order \(\Delta T\sim 1\,\)ms, independent of distance, well within the precision achievable with FRBs. However, for stellar masses and cosmological distances (\(D\sim 1\,\)Gpc), the angular separation is on the order of \(\Delta\theta\sim 1\,\mu\)as, which is far smaller than typical angular resolutions achievable by ground-based radio telescopes such as the Canadian Hydrogen Intensity Mapping Experiment (CHIME). Even with the upcoming addition of very-long baseline interferometry (VLBI) outriggers, the CHIME-TONE array will have a maximum baseline of \(\sim 3300\) km [9] which, at 1 GHz, can only resolve 30 mas. In the following section, we propose leveraging scintillation of microlensed FRBs in the ISM as a way to resolve the angular separation of the microlensed images.

## III Scintillating screens in the ISM as VLBI

Bright sources of coherent radio emission are observed to scintillate due to multi-path propagation in the ISM. Recent VLBI imaging of scintillating pulsars reveals that the scattered images are co-linear on the sky, with an aspect ratio of within one percent [10; 11]. To achieve co-linearity in the images to this degree, the scattering structures in the ISM must be highly anisotropic. In particular, a single, thin screen of effectively one-dimensional inhomogeneities must dominate contributions to the total bending angle. Such screens have been found to accommodate a wide range of pulsar scintillation observations [10; 11; 12; 13]. Just as ground-based VLBI can be used to resolve the angular separation of scattered images of scintillating pulsars, we propose to use the scattered images of scintillating FRBs as an effective astrophysical-scale interferometer to resolve the much smaller angular separations due to microlensing.

Scintillation in the ISM is observed to produce hundreds of images on the sky, with \(b\sim 10\,\)AU separations in the plane of the ISM scattering screen. These enormous baselines can achieve angular resolutions of \(\lambda/b\sim 10^{-2}\,\mu\)as. By using ground-based VLBI to resolve the ISM-scattered images of scintillating FRBs, we can leverage the effective baselines provided by scattering in the ISM to achieve resolutions many orders of magnitude finer than what can be achieved with ground-based VLBI alone. While this technique almost seems too good to be true, it has been successfully implemented to resolve micro- and pico-arcsecond features in pulsar emission regions [14; 15].

To demonstrate the principle for FRB microlensing, consider microlensed FRB rays passing through a single scattering screen in the ISM, using the simple case of only two scattered images. In reality, the ISM screen will result in hundreds of scattered images, but, for the sake of clarity, we will use this simple picture to illustrate the technique and note that a more complete analysis can be carried out in practice. For two scattered images, there is a single angular separation, \(\alpha\) (shown in Fig. 1), which we assume can be measured with ground-based VLBI.
Moreover, using standard techniques from pulsar scintillation, we assume that the distance to the ISM screen can be inferred from ground-based VLBI. Using estimates from [11], typical values of \(\alpha\) will be on the order of \(\sim 10\,\)mas and the distance to the scintillating screen will be \(\sim\,\)kpc, yielding a baseline on the ISM plane of \(b_{12}\sim 10\,\)AU. Using the ISM screen as an effective interferometer, one is sensitive to the combined phase offset, \((\phi_{2}^{-}-\phi_{2}^{+})-(\phi_{1}^{-}-\phi_{1}^{+})\), where \(\phi_{j}^{\pm}\) is the phase of the ray from the microlensed image, \(\theta_{\pm}\), incident on the ISM at the location of the \(j^{th}\) scattered image (see Fig. 2). The technical details of how one can infer this combined phase from scintillation are described in detail elsewhere [14; 15].

We would like to relate the combined phase offset to the angular separation between the two microlensed images so that the latter can be inferred from the former. To proceed, we will make a few simplifying assumptions. Firstly, the distance to the scintillating screen, \(\sim\,\)kpc, is negligible in comparison to \(D_{s}\) and \(D\) (both \(\sim\,\)Gpc). Secondly, the rays incident on the ISM screen from the same microlensed image (i.e. the rays with the same superscript in Fig. 2) are effectively parallel when they hit the ISM screen. This assumption is true in the limit as the microlensing plane becomes infinitely far away, and, thus, follows from the first assumption. Lastly, we assume that the phase offset between the rays is purely geometric. That is, we neglect the contributions to the phase from refraction in the plasma screen and from the gravitational potential of the microlens. Indeed, given the vast distances traversed by the rays, the geometric terms in the time delay typically dominate in both microlensing and scintillation.

Figure 2: Four rays pass through a single ISM plasma screen with phases labeled by \(\phi\). Superscripts (+/-) on the phase correspond to different microlensed images. Numeric subscripts denote scattered images. Rays marked by the same superscript are assumed to be parallel.

With these assumptions in hand, we can now relate a combination of phase differences to the angular separation between microlensed images (\(\Delta\theta\)):

\[(\phi_{2}^{-}-\phi_{2}^{+})-(\phi_{1}^{-}-\phi_{1}^{+})=\frac{2\pi\,\Delta d_{12}}{\lambda}=\frac{2\pi\,\Delta\theta\,b_{12}}{\lambda}. \tag{8}\]

The effective baseline, \(b_{12}\), can be measured using ground-based VLBI, thus allowing Eq. 8 to be inverted to find the angular separation of the microlensed images.

## IV Error on \(H_{0}\)

The main source of statistical error in a measurement of \(H_{0}\) will come from the uncertainty in the measurement of the relative phase offsets. For bursts with high signal-to-noise, the uncertainty in measurements of the magnification ratio, \(\rho\), will be small in comparison. Of the three microlensing observables, the angular separation \(\Delta\theta\) depends on phase measurements through Eq. 8, and \(\Delta T\) depends on the phase offset by \((\phi^{-}-\phi^{+})=2\pi\nu\Delta T\). Using a conservative estimate of instrumental error, one may realistically attain an uncertainty in the phase, \(\phi^{-}-\phi^{+}\), of 3% [11], which yields a 6% uncertainty on the inferred distance, \(D\). The effective distance, \(D=D_{d}D_{s}/D_{ds}\), is a combination of distances. In order to measure \(H_{0}\), we need to be able to measure the lens distance, \(D_{d}\), as a function of redshift.
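In the other direction, here is a sketch of the inference chain: phases to \(\Delta\theta\) via Eq. (8), the three observables to \(D\) via Eq. (7), and \(D\) to \(H_{0}\). The last step extrapolates the paper's low-redshift relation \(z_{d}=D_{d}H_{0}/c\) to \(D\approx(c/H_{0})\,z_{d}z_{s}/(z_{s}-z_{d})\), which is our assumption, as are the input numbers.

```python
import numpy as np

c = 2.998e8  # m/s

def dtheta_from_phase(dphi12, nu, b12):
    """Eq. (8): image separation [rad] from the combined phase offset [rad]
    across an effective ISM baseline b12 [m], with wavelength = c/nu."""
    return dphi12 * (c / nu) / (2 * np.pi * b12)

def effective_distance(dT, dtheta, rho):
    """Eq. (7): D = Dd*Ds/Dds [m] from the time delay [s], image separation
    [rad], and magnification ratio; the factors of G cancel, c is restored."""
    y = np.sqrt((rho + 1) / np.sqrt(rho) - 2)
    s = np.sqrt(4 + y**2)
    return 4 * (4 + y**2) * c * dT / (dtheta**2 * (2 * y * s - 8 * np.log((s - y) / 2)))

def H0_low_z(D, z_d, z_s):
    """Low-z estimate: Dd ~ c*z_d/H0 and Dds ~ c*(z_s - z_d)/H0 give
    D ~ (c/H0)*z_d*z_s/(z_s - z_d); an extrapolation, see the text."""
    return c * z_d * z_s / ((z_s - z_d) * D)

# Hypothetical event, reusing the observables from the sketch above
D = effective_distance(dT=1.54e-4, dtheta=5.0e-11, rho=119.0)
H0 = H0_low_z(D, z_d=0.15, z_s=0.4)
print(f"D = {D / 3.086e25:.2f} Gpc, H0 = {H0 / 3.24e-20:.0f} (km/s)/Mpc")
```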
We will assume that the FRB is well-localized so that a host and lens galaxy, and thereby, a host and lens redshift, can be inferred. Since we already require that the lensing event in question be observed using ground-based VLBI, any such event will automatically be well-localized. With the source and lens redshifts, one can infer \(H_{0}\) from the effective distance [16]. To obtain a rough estimate on the achievable uncertainty in \(H_{0}\), we will consider the low-redshift regime where \(z_{d}=D_{d}H_{0}/c\), independent of other cosmological parameters. In this regime, a single scintillated microlensing event will yield a measurement of \(H_{0}\) with a 6% uncertainty. The uncertainty scales with the number of observed lensing events like \(\sigma_{H_{0}}\propto\frac{1}{\sqrt{N}}\). Thus, with approximately 30 observed events, we can measure \(H_{0}\) to within a 1% uncertainty.

The optical depth for microlensing by extra-galactic stars is roughly \(\tau\approx 10^{-3}\) [17]. This yields an event rate for CHIME (with a rate of \(\sim\)10 FRB detections per day) of approximately 4 stellar microlensed FRBs per year. This event rate will likely increase, potentially by up to two orders of magnitude, as radio telescopes geared towards high event rates are built. For instance, the proposed Packed Ultra-wideband Mapping Array (PUMA) anticipates an FRB detection rate of more than one thousand per day [18]. Moreover, higher sensitivity telescopes will extend sensitivity to lower-magnification microlensing events, thereby simultaneously increasing the optical depth of microlensing. With many next-generation FRB telescopes set to come online in the coming years [19; 20], there will be an abundance of FRB data with which to search for scintillated microlensing events. We know from pulsar scintillation that practically every sight-line through the ISM results in scintillation; however, not every sight-line will be appropriate for our proposed method, as we require the scintillation to be dominated by a single anisotropic screen. Extrapolating from a recent pulsar scintillation survey [21], we expect this to be the case roughly half of the time. Even with \(\sim\)4 lensing events a year (using the current CHIME detection rate), observing a total of thirty scintillated microlensing events in the coming years is not unrealistic.

## V Systematic Uncertainties

Although microlenses are potentially much cleaner systems than the complicated halo profiles of standard strong lensing measurements, they are not totally without systematic effects that may bias parameter estimation. Here we discuss potential sources of systematic uncertainty.

### Binary contamination

Some fraction of stellar systems are binaries. While the exact fraction will depend on the lens' host galaxy, the fraction is typically larger than ten percent and potentially as large as one half. Falsely identifying binary lenses as point-like microlenses may introduce bias in the parameter estimation. Binary lenses differ from point lenses primarily by the presence of extended caustics. The caustic structure of a binary lens is governed only by the ratio of the two masses, \(q=M_{1}/M_{2}\), and the angular separation between the masses, \(s=\Delta\theta/\theta_{E}\), normalized by the Einstein angle for the total mass, \(\theta_{E}=\sqrt{4G(M_{1}+M_{2})/D}\). For typical stellar binaries, the mass ratio is of order one.
At cosmological distances, a separation of \(\sim 1\,\mathrm{AU}\) between the masses yields an angular separation of \(\Delta\theta\sim 0.001\,\mu\mathrm{as}\). A typical Einstein radius for a solar mass system at cosmological distance is \(\theta_{E}\sim 1\,\mu\mathrm{as}\), so that \(s\sim 10^{-3}\). Note that the separation parameter scales inversely with the distance to the lens, \(s\sim D_{d}^{-1/2}\), so that \(s\) will always be much less than unity for cosmological stellar lensing. For such small values of \(s\), the binary lens will have the "close" caustic topology (see Ref. [22] for a classification of binary-lens caustics). The primary caustic is a diamond shape, centred around the optical axis, contained within the Einstein ring. For a binary lens with \(q=1\) and \(s=10^{-3}\), the angular area of the caustic is \(\sim 10^{-12}\theta_{E}^{2}\). In other words, the optical depth of the binary caustics at cosmological distances is negligible. The observables for the binary lens only differ substantially from an equivalent-mass microlens within this caustic region. Indeed, at an impact parameter of \(10^{-3}\theta_{E}\) from the optical axis, the magnification ratio between the brightest images differs from the microlensing predictions by less than one percent, and the relative time delay differs by less than \(10^{-8}\%\). While, in principle, for the binary lens there is always an additional third image that does not exist for the microlens, outside of the central caustic this image is typically below detection thresholds. Moreover, when more than two lensed images are detected, such events can be excluded as non-microlensing events.

### Shear

Stellar microlensing events will be affected by the shear from the lens' host galaxy halo. In principle, if the FRB can be localized, and the galaxy containing the microlens is known, then the shear can be estimated, and its effect fully accounted for via the Chang-Refsdal lens model [23; 24]. In general, however, the external shear of a galaxy halo at cosmological distances is small. For a source at redshift \(z_{s}=1\), and an NFW lens halfway between the source and observer with concentration parameter \(c_{200}=10\) and virial radius \(r_{200}=100\,\mathrm{kpc}\), the shear within the characteristic radius is \(\gamma\sim 0.01\), assuming standard cosmological parameters [25]. For shears of this magnitude, the effect on the bending angle, and hence the magnifications, will be small. The main effect will be on the inferred time delays. The maximum additional time delay due to the shear occurs when the microlensed images are oriented along the direction of the shear, and is on the order of \(\tau_{\gamma}\sim\frac{\gamma}{2}\frac{D}{2c}\theta_{E}^{2}\sim\gamma\Delta T\). Thus, the effect of shear on the observed time delay will typically be below the percent level.

### Line of sight effects

Mass inhomogeneities along the line of sight contribute up to \(6.4\%\) uncertainty on each lensing observation in previous strong lensing studies [1]. However, for stellar microlensing, line-of-sight effects are mitigated by the small angular separation of the lens images. In other words, both images take effectively the same path from source to observer, and are affected by matter along the line of sight in the same way. Since we are only sensitive to differences in arrival times between the two images, stellar microlensing is largely insensitive to line-of-sight inhomogeneities.
Compared to standard macrolensing measurements, for which arcsecond angular separations lead to differences in path length of order \(\sim\,10\,\mathrm{kpc}\), path length differences for the two microlensed images will be \(\sim 100\,\mathrm{AU}\).

## VI Conclusion

We propose a new strong lensing method for measuring the Hubble constant which employs stellar microlensed FRBs that are also scintillated. Scintillation allows for much finer angular resolution than would be possible with ground-based telescopes. The primary advantage of using microlenses, over the galactic halo "macrolenses" of standard strong lensing measurements, is that the lens potential of a microlens depends on only a single parameter (the mass), and therefore does not have any systematic uncertainty related to the mass profile modelling. Using a conservative estimate for instrumental uncertainty in phase measurements, the proposed microlensing method can yield a measurement of \(H_{0}\) to within \(1\%\) uncertainty with approximately \(30\) observed events.
2305.10614
Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions
While there is much recent interest in studying why Transformer-based large language models make predictions the way they do, the complex computations performed within each layer have made their behavior somewhat opaque. To mitigate this opacity, this work presents a linear decomposition of final hidden states from autoregressive language models based on each initial input token, which is exact for virtually all contemporary Transformer architectures. This decomposition allows the definition of probability distributions that ablate the contribution of specific input tokens, which can be used to analyze their influence on model probabilities over a sequence of upcoming words with only one forward pass from the model. Using the change in next-word probability as a measure of importance, this work first examines which context words make the biggest contribution to language model predictions. Regression experiments suggest that Transformer-based language models rely primarily on collocational associations, followed by linguistic factors such as syntactic dependencies and coreference relationships in making next-word predictions. Additionally, analyses using these measures to predict syntactic dependencies and coreferent mention spans show that collocational association and repetitions of the same token largely explain the language models' predictions on these tasks.
Byung-Doh Oh, William Schuler
2023-05-17T23:55:32Z
http://arxiv.org/abs/2305.10614v2
# Token-wise Decomposition of Autoregressive Language Model Hidden States for Analyzing Model Predictions

###### Abstract

While there is much recent interest in studying why Transformer-based large language models make predictions the way they do, the complex computations performed within each layer have made their behavior somewhat opaque. To mitigate this opacity, this work presents a linear decomposition of final hidden states from autoregressive language models based on each initial input token, which is exact for virtually all contemporary Transformer architectures. This decomposition allows the definition of probability distributions that ablate the contribution of specific input tokens, which can be used to analyze their influence on model probabilities over a sequence of upcoming words with only one forward pass from the model. Using the change in next-word probability as a measure of importance, this work first examines which context words make the biggest contribution to language model predictions. Regression experiments suggest that Transformer-based language models rely primarily on collocational associations, followed by linguistic factors such as syntactic dependencies and coreference relationships in making next-word predictions. Additionally, analyses using these measures to predict syntactic dependencies and coreferent mention spans show that collocational association and repetitions of the same token largely explain the language models' predictions on these tasks.

## 1 Introduction

Much of contemporary natural language processing (NLP) is driven by Transformer-based large language models, which are trained to make predictions about words in their context by aggregating representations through their self-attention mechanism. The breakthrough in many NLP tasks these models have achieved has led to active research into interpreting their predictions and probing the knowledge embodied by these models (Manning et al., 2020; Rogers et al., 2021; Belinkov, 2022). One line of such research focuses on quantifying the importance of each input token to the models' final output, but due to the complexity of the computations performed within the Transformer layers, analysis has been limited to studying the self-attention mechanism and the feedforward neural network independently (Kobayashi et al., 2020, 2021; Geva et al., 2021, 2022; Mickus et al., 2022) or has relied on e.g. gradient-based attribution methods (Sanyal and Ren, 2021; Zaman and Belinkov, 2022) that yield measures that are not interpretable in terms of output model probabilities.

To address these limitations, this work presents a linear decomposition of final language model hidden states into the sum of final output representations of each initial input token and a cumulative bias term, which is schematized in Figure 1. This work focuses on decomposing autoregressive language models, in which the final hidden states are used to calculate a probability distribution over the next token.

Figure 1: Schematic of input and output representations from Transformer-based autoregressive language models. Standard models (top) calculate one vector of final hidden states at a given timestep (\(\textbf{x}_{L,i}\)), which in this work (bottom) is decomposed exactly into the sum of output representations of each input token (\(\textbf{x}_{L,i,k}\)) and a cumulative bias term (\(\textbf{b}_{L,i}\)).
The decomposition allows the definition of probability distributions that ablate the contribution of specific input tokens, which can be used to study their impact on next-token probabilities with only one forward pass from the model. This decomposition is exact if the activation function of the feedforward neural network is differentiable almost everywhere,1 and therefore it does not require perturbing the original computations of the language model (e.g. by using approximations) to gauge the influence of input tokens for virtually all contemporary Transformer architectures. Additionally, this work defines an intuitive importance measure for each context token based on the change in next-token log probability, which does not correlate strongly with layer-wise attention weights or gradient norms. Since this measure is defined in terms of log probabilities, these measures can also be summed to quantify importance in predicting an arbitrary sequence of tokens according to the chain rule of conditional probabilities.

Footnote 1: That is, the function is differentiable at all real numbers except a subset of Lebesgue measure zero, such as the rectified linear unit (ReLU; Nair and Hinton, 2010), which is differentiable everywhere except at \(x=0\).

Using the proposed decomposition and associated importance measure, this work characterizes which kinds of context words autoregressive language models leverage most in order to make next-word predictions. Results from stepwise regression analyses suggest that Transformer-based language models rely mainly on collocational associations, followed by linguistic factors such as syntactic dependencies and coreference relationships. Follow-up analyses using these importance measures to predict syntactic dependencies and coreferent mention spans additionally show that collocational association and repetitions of the same token largely explain the language models' predictions on these tasks.

## 2 Background: Transformer Decoder of Autoregressive Language Models

Transformer-based autoregressive language models (e.g. Radford et al., 2019; Brown et al., 2020; Zhang et al., 2022) use a variant of the multi-layer Transformer decoder (Vaswani et al., 2017). Each decoder layer consists of a masked self-attention block and a feedforward neural network, which together calculate a vector \(\mathbf{x}_{l,i}\in\mathbb{R}^{d}\) for token \(w_{i}\) at layer \(l\):

\[\mathbf{x}_{l,i}=\text{FF}_{l}(\text{N}_{l,\text{out}}(\mathbf{x}^{\prime}_{l,i}+\mathbf{x}_{l-1,i}))+(\mathbf{x}^{\prime}_{l,i}+\mathbf{x}_{l-1,i}), \tag{1}\]

where \(\text{FF}_{l}\) is a two-layer feedforward neural network, \(\text{N}_{l,\text{out}}\) is a vector-wise layer normalization operation, and \(\mathbf{x}^{\prime}_{l,i}\in\mathbb{R}^{d}\) is the output representation from the multi-head self-attention mechanism, in which \(H\) heads mix representations from the previous context.
This output \(\mathbf{x}^{\prime}_{l,i}\) can be decomposed into the sum of representations resulting from each attention head \(h\) and a bias vector \(\mathbf{v}_{l}\):

\[\mathbf{x}^{\prime}_{l,i}\!=\!\sum_{h=1}^{H}\!\mathbf{V}_{l,h}\left[\text{N}_{l,\text{in}}(\mathbf{x}_{l-1,1})\,\cdots\,\text{N}_{l,\text{in}}(\mathbf{x}_{l-1,i})\right]\mathbf{a}_{l,h,i}\!+\!\mathbf{v}_{l}, \tag{2}\]

where \(\mathbf{V}_{l,h}\in\mathbb{R}^{d\times d}\) and \(\mathbf{v}_{l}\in\mathbb{R}^{d}\) represent the weights and biases of the composite value-output transformation2 respectively, and \(\mathbf{a}_{l,h,i}\in\mathbb{R}^{i}\) is the vector of self-attention weights from each head.

Footnote 2: For the simplicity of notation, multi-head self-attention is formulated as a sum of 'value-output' transformed representations from each attention head instead of the 'output' transformed concatenation of 'value' transformed representations from each attention head as in Vaswani et al. (2017). To this end, the weights and biases of the 'value' and 'output' transformations are respectively composed into \(\mathbf{V}_{l,h}\) and \(\mathbf{v}_{l}\). Refer to Appendix A for the derivation of \(\mathbf{V}_{l,h}\) and \(\mathbf{v}_{l}\).

\(\text{N}_{l,\alpha}\), where \(\alpha\in\{\text{in},\text{out}\}\),3 is a vector-wise layer normalization operation (Ba et al., 2016) that first standardizes the vector and subsequently conducts elementwise transformations using trainable parameters \(\mathbf{c}_{l,\alpha},\mathbf{b}_{l,\alpha}\in\mathbb{R}^{d}\):

Footnote 3: \(\text{N}_{l,\text{in}}\) is applied before the masked self-attention block, and \(\text{N}_{l,\text{out}}\) is applied before the feedforward neural network.

\[\text{N}_{l,\alpha}(\mathbf{y})=\frac{\mathbf{y}-m(\mathbf{y})}{s(\mathbf{y})}\odot\mathbf{c}_{l,\alpha}+\mathbf{b}_{l,\alpha}, \tag{3}\]

where \(m(\mathbf{y})\) and \(s(\mathbf{y})\) denote the elementwise mean and standard deviation of \(\mathbf{y}\) respectively, and \(\odot\) denotes a Hadamard product. The output representation from the last decoder layer \(L\) is layer-normalized and multiplied by the projection matrix to yield logit scores for the probability distribution over token \(w_{i+1}\):

\[\mathbf{z}_{i}=\mathbf{W}\,\text{N}_{L+1,\text{in}}(\mathbf{x}_{L,i}), \tag{4}\]

where \(\mathbf{z}_{i}\in\mathbb{R}^{V}\) is the vector of logit scores, \(\mathbf{W}\in\mathbb{R}^{V\times d}\) is the projection matrix, \(V\) is the size of the vocabulary, and \(\text{N}_{L+1,\text{in}}\) is the final layer normalization operation with parameters \(\mathbf{c}_{L+1,\text{in}}\) and \(\mathbf{b}_{L+1,\text{in}}\).

## 3 Token-wise Decomposition of Language Model Hidden States

This section provides a mathematical definition of the token-wise decomposition of language model hidden states, which allows the quantification of the contribution of each input token to the conditional probability of the next token.
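For concreteness, here is a toy numpy sketch of the computations in Eqs. (1)-(4) for a single decoder layer. The weights are random and the attention weights are stand-ins (a real model computes them from queries and keys), so this illustrates the dataflow, not any pretrained model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, H, V, n = 8, 2, 20, 5          # model dim, heads, vocab size, sequence length

def layer_norm(y, c, b):
    # Eq. (3): standardize, then elementwise scale and shift
    return (y - y.mean()) / y.std() * c + b

def decoder_layer(X, p):
    """One decoder layer, Eqs. (1)-(2), for all target positions."""
    N_in = np.stack([layer_norm(x, p["c_in"], p["b_in"]) for x in X])
    out = np.empty_like(X)
    for i in range(len(X)):
        a = rng.dirichlet(np.ones(i + 1), size=H)   # stand-in attention weights
        x_prime = sum(p["Vw"][h] @ (N_in[: i + 1].T @ a[h]) for h in range(H)) + p["v"]  # Eq. (2)
        r = x_prime + X[i]                          # residual stream
        ff = p["F2"] @ np.maximum(p["F1"] @ layer_norm(r, p["c_out"], p["b_out"]) + p["f1"], 0) + p["f2"]
        out[i] = ff + r                             # Eq. (1), with a ReLU feedforward
    return out

p = {"Vw": rng.normal(size=(H, d, d)) / d, "v": rng.normal(size=d),
     "c_in": np.ones(d), "b_in": np.zeros(d), "c_out": np.ones(d), "b_out": np.zeros(d),
     "F1": rng.normal(size=(4 * d, d)) / d, "f1": np.zeros(4 * d),
     "F2": rng.normal(size=(d, 4 * d)) / d, "f2": np.zeros(d)}
X0 = rng.normal(size=(n, d))                        # token + positional embeddings
X1 = decoder_layer(X0, p)
W = rng.normal(size=(V, d)) / d
z = W @ layer_norm(X1[-1], np.ones(d), np.zeros(d)) # Eq. (4): logits over w_{i+1}
print(np.exp(z - z.max()) / np.exp(z - z.max()).sum())  # next-token distribution
```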
### Mathematical Definition

In this section, we show that the vector of logits \(\mathbf{z}_{i}\) in Equation 4 can be decomposed into the sum of final output representations of each input token \(w_{k}\) and a 'bias-like' term that accumulates bias vectors throughout the Transformer network, which is exact if the activation function within the feedforward neural network is differentiable almost everywhere:

\[\mathbf{z}_{i}=\sum_{k=1}^{i}\mathbf{z}^{\prime}_{i,k}+\mathbf{b}_{i}, \tag{5}\]

where \(\mathbf{z}^{\prime}_{i,k}\in\mathbb{R}^{V}\) is the final transformed output at timestep \(i\) of the input representation \(\mathbf{x}_{0,k}\)4 at timestep \(k\). This \(\mathbf{z}^{\prime}_{i,k}\) is calculated by aggregating the output of all computations performed on \(\mathbf{x}_{0,k}\) throughout the Transformer layers:

Footnote 4: Throughout this paper, the input representation \(\mathbf{x}_{0,k}\) denotes the sum of the type-specific embedding for token \(w_{k}\) and the positional embedding for position \(k\).

\[\mathbf{z}^{\prime}_{i,k}=\mathbf{W}\,\mathbf{n}_{\mathbf{x},L+1,i,k}, \tag{6}\]

where \(\mathbf{n}_{\mathbf{x},L+1,i,k}\) is a layer-normalized version of \(\mathbf{x}_{L,i,k}\), explained below. Additionally, \(\mathbf{b}_{i}\in\mathbb{R}^{V}\) is the 'bias-like' term resulting from accumulating computations performed on bias vectors that are difficult to attribute to any specific source position \(k\):

\[\mathbf{b}_{i}=\mathbf{W}\,\mathbf{n}_{\mathbf{b},L+1,i}, \tag{7}\]

where \(\mathbf{n}_{\mathbf{b},L+1,i}\) is a layer-normalized version of \(\mathbf{b}_{L,i}\), also explained below. This decomposition is in turn achieved by maintaining input-specific vectors \(\mathbf{x}_{l,i,k}\in\mathbb{R}^{d}\) and a 'bias-like' vector \(\mathbf{b}_{l,i}\in\mathbb{R}^{d}\) throughout the network. The second index of both \(\mathbf{x}_{l,i,k}\) and \(\mathbf{b}_{l,i}\) represents each target position \(i\), and the third index of \(\mathbf{x}_{l,i,k}\) represents each source position \(k\in\{1,...,i\}\). Therefore, when the third index of \(\mathbf{x}_{l,i,k}\) is reduced and the result is added to \(\mathbf{b}_{l,i}\), the undecomposed output representation \(\mathbf{x}_{l,i}\in\mathbb{R}^{d}\) is returned:

\[\mathbf{x}_{l,i}=\sum_{k=1}^{i}\mathbf{x}_{l,i,k}+\mathbf{b}_{l,i}. \tag{8}\]

These decomposed representations are updated by each decoder layer (Eq. 1; Fig. 2) as follows:

\[\mathbf{x}_{l,i,k}=\mathbf{f}_{\mathbf{x},l,i,k}+(\mathbf{x}^{\prime}_{l,i,k}+\mathbf{x}_{l-1,i,k}), \tag{9}\]
\[\mathbf{b}_{l,i}=\mathbf{f}_{\mathbf{b},l,i}+(\mathbf{b}^{\prime}_{l,i}+\mathbf{b}_{l-1,i}), \tag{10}\]

where \(\mathbf{b}_{0,i}=\mathbf{0}\) and \(\mathbf{x}_{0,i,k}\) is a position-sensitive version of \(\mathbf{x}_{0,k}\):

\[\mathbf{x}_{0,i,k}=\begin{cases}\mathbf{x}_{0,k}&\text{if $i=k$},\\ \mathbf{0}&\text{if $i\neq k$},\end{cases} \tag{11}\]

and \(\mathbf{f}_{\mathbf{x},l,i,k}\) and \(\mathbf{f}_{\mathbf{b},l,i}\) are decomposed versions of the output from the feedforward network for \(\mathbf{x}_{l,i,k}\) and \(\mathbf{b}_{l,i}\), defined below. The exact decomposition of hidden states according to each source position is made possible due to the linear nature of computations within the masked self-attention block and a local linear approximation of the activation function within the feedforward neural network. First, layer normalization \(\text{N}_{l,\text{in}}\) (Eq. 3)
is applied to \(\mathbf{x}_{l-1,i,k}\) to yield \(\mathbf{n}_{\text{x},l,i,k}\) by centering it, scaling it by the standard deviation of the undecomposed representation \(s(\mathbf{x}_{l-1,i})\), and obtaining a Hadamard product with trainable vector \(\mathbf{c}_{l,\text{in}}\):

\[\mathbf{n}_{\text{x},l,i,k}=\frac{\mathbf{x}_{l-1,i,k}-m(\mathbf{x}_{l-1,i,k})}{s(\mathbf{x}_{l-1,i})}\odot\mathbf{c}_{l,\text{in}}. \tag{12}\]

Figure 2: Alternative formulation of computations performed within one decoder layer of a Transformer-based autoregressive language model, which allows the contribution of each input token \(w_{k}\) to \(\mathbf{x}_{l,i}\) to be preserved as \(\mathbf{x}_{l,i,k}\).

\(\text{N}_{l,\text{in}}\) is also applied to \(\mathbf{b}_{l-1,i}\) to yield \(\mathbf{n}_{\text{b},l,i}\), except that the bias vector \(\mathbf{b}_{l,\text{in}}\) is accumulated by this term:

\[\mathbf{n}_{\text{b},l,i}=\frac{\mathbf{b}_{l-1,i}-m(\mathbf{b}_{l-1,i})}{s(\mathbf{x}_{l-1,i})}\odot\mathbf{c}_{l,\text{in}}+\mathbf{b}_{l,\text{in}}. \tag{13}\]

Subsequently, the masked self-attention mechanism (Eq. 2) is applied to \([\mathbf{n}_{\text{x},l,1,k}\,\cdots\,\mathbf{n}_{\text{x},l,i,k}]\) to yield \(\mathbf{x}^{\prime}_{l,i,k}\), which updates the total representation from source position \(k\) to target position \(i\) using self-attention weights \(\mathbf{a}_{l,h,i}\):

\[\mathbf{x}^{\prime}_{l,i,k}=\sum_{h=1}^{H}\mathbf{V}_{l,h}\left[\mathbf{n}_{\text{x},l,1,k}\,\cdots\,\mathbf{n}_{\text{x},l,i,k}\right]\mathbf{a}_{l,h,i}. \tag{14}\]

The self-attention mechanism is also applied to \([\mathbf{n}_{\text{b},l,1}\,\cdots\,\mathbf{n}_{\text{b},l,i}]\) to yield \(\mathbf{b}^{\prime}_{l,i}\). Similarly to layer normalization, the bias vector \(\mathbf{v}_{l}\) is accumulated by this term:

\[\mathbf{b}^{\prime}_{l,i}=\sum_{h=1}^{H}\mathbf{V}_{l,h}\left[\mathbf{n}_{\text{b},l,1}\,\cdots\,\mathbf{n}_{\text{b},l,i}\right]\mathbf{a}_{l,h,i}+\mathbf{v}_{l}. \tag{15}\]

After adding the residual representations, layer normalization \(\text{N}_{l,\text{out}}\) is applied to \(\mathbf{x}^{\prime}_{l,i,k}+\mathbf{x}_{l-1,i,k}\) and \(\mathbf{b}^{\prime}_{l,i}+\mathbf{b}_{l-1,i}\) in a similar manner to Equations 12 and 13 to yield \(\mathbf{n}^{\prime}_{\text{x},l,i,k}\) and \(\mathbf{n}^{\prime}_{\text{b},l,i}\) respectively, by centering each vector, scaling them by the standard deviation of their corresponding undecomposed representation \(s(\mathbf{x}^{\prime}_{l,i}+\mathbf{x}_{l-1,i})\), and applying the learned parameters \(\mathbf{c}_{l,\text{out}}\) and \(\mathbf{b}_{l,\text{out}}\):

\[\mathbf{n}^{\prime}_{\text{x},l,i,k}=\frac{\mathbf{x}^{\prime}_{l,i,k}+\mathbf{x}_{l-1,i,k}-m(\mathbf{x}^{\prime}_{l,i,k}+\mathbf{x}_{l-1,i,k})}{s(\mathbf{x}^{\prime}_{l,i}+\mathbf{x}_{l-1,i})}\odot\mathbf{c}_{l,\text{out}}, \tag{16}\]

\[\mathbf{n}^{\prime}_{\text{b},l,i}=\frac{\mathbf{b}^{\prime}_{l,i}+\mathbf{b}_{l-1,i}-m(\mathbf{b}^{\prime}_{l,i}+\mathbf{b}_{l-1,i})}{s(\mathbf{x}^{\prime}_{l,i}+\mathbf{x}_{l-1,i})}\odot\mathbf{c}_{l,\text{out}}+\mathbf{b}_{l,\text{out}}. \tag{17}\]

Finally, if the activation function within the feedforward neural network from Equation 1 is differentiable almost everywhere,5 local linear approximation can be used to calculate its output values:

Footnote 5: Virtually all widely used activation functions such as the rectified linear unit (ReLU; Nair and Hinton, 2010) and the Gaussian error linear unit (GELU; Hendrycks and Gimpel, 2016) satisfy this property.
\[\text{FF}_{l}(\mathbf{y})=\mathbf{F}_{l,2}\,\sigma(\mathbf{F}_{l,1}\,\mathbf{y}+\mathbf{f}_{l,1})+\mathbf{f}_{l,2} \tag{18}\]
\[=\mathbf{F}_{l,2}(\mathbf{s}\odot(\mathbf{F}_{l,1}\,\mathbf{y}+\mathbf{f}_{l,1})+\mathbf{i})+\mathbf{f}_{l,2}, \tag{19}\]

where \(\mathbf{F}_{l,1}\), \(\mathbf{F}_{l,2}\) and \(\mathbf{f}_{l,1}\), \(\mathbf{f}_{l,2}\) are the weights and biases of the feedforward neural network, \(\sigma\) is the activation function, and \(\mathbf{s}\) and \(\mathbf{i}\) are respectively the vector of slopes and intercepts of tangent lines specified by each element of the input vector \(\mathbf{F}_{l,1}\,\mathbf{y}+\mathbf{f}_{l,1}\).6 This reformulation of the activation function allows the feedforward neural network to apply to each decomposed vector \(\mathbf{n}^{\prime}_{\text{x},l,i,k}\) and \(\mathbf{n}^{\prime}_{\text{b},l,i}\) to yield \(\mathbf{f}_{\text{x},l,i,k}\) and \(\mathbf{f}_{\text{b},l,i}\) respectively:

Footnote 6: That is, \(\mathbf{s}=\sigma^{\prime}(\mathbf{F}_{l,1}\,\mathbf{y}+\mathbf{f}_{l,1})\), and \(\mathbf{i}=\sigma(\mathbf{F}_{l,1}\,\mathbf{y}+\mathbf{f}_{l,1})-\sigma^{\prime}(\mathbf{F}_{l,1}\,\mathbf{y}+\mathbf{f}_{l,1})\odot(\mathbf{F}_{l,1}\,\mathbf{y}+\mathbf{f}_{l,1})\).

\[\mathbf{f}_{\text{x},l,i,k}=\mathbf{F}_{l,2}(\mathbf{s}_{l,i}\odot\mathbf{F}_{l,1}\,\mathbf{n}^{\prime}_{\text{x},l,i,k}), \tag{20}\]
\[\mathbf{f}_{\text{b},l,i}=\mathbf{F}_{l,2}(\mathbf{s}_{l,i}\odot(\mathbf{F}_{l,1}\,\mathbf{n}^{\prime}_{\text{b},l,i}+\mathbf{f}_{l,1})+\mathbf{i}_{l,i})+\mathbf{f}_{l,2}, \tag{21}\]

where \(\mathbf{s}_{l,i}\) and \(\mathbf{i}_{l,i}\) are the vector of slopes and intercepts of tangent lines specified by each element of the undecomposed \(\mathbf{F}_{l,1}\,\text{N}_{l,\text{out}}(\mathbf{x}^{\prime}_{l,i}+\mathbf{x}_{l-1,i})+\mathbf{f}_{l,1}\). As with other operations, the bias vectors \(\mathbf{f}_{l,1}\), \(\mathbf{f}_{l,2}\), and \(\mathbf{i}_{l,i}\) are accumulated by \(\mathbf{f}_{\text{b},l,i}\).

### Proposed Importance Measure \(\Delta\mathsf{LP}\): Change in Next-Word Probabilities

Based on the decomposition outlined in Section 3.1, the importance of each input token \(w_{k\in\{1,...,i\}}\) to the probability of the next token \(\mathsf{P}(w_{i+1}\mid w_{1..i})\) can be quantified. To this end, the probability distribution over the next token that ablates the contribution of \(w_{k}\) is defined as follows:

\[\mathsf{P}(w_{i+1}\mid w_{1..i\setminus k})=\underset{w_{i+1}}{\text{SoftMax}}(\mathbf{z}_{i}-\mathbf{z}^{\prime}_{i,k}). \tag{22}\]

Subsequently, the importance measure of \(w_{k}\) to the prediction of \(w_{i+1}\) is calculated as the difference between log probabilities of \(w_{i+1}\) given the full context (\(w_{1..i}\)) and the context without it (\(w_{1..i\setminus k}\)):

\[\Delta\mathsf{LP}(w_{i+1}\mid w_{1..i},w_{k\in\{1,\ldots,i\}})=\log_{2}\mathsf{P}(w_{i+1}\mid w_{1..i})-\log_{2}\mathsf{P}(w_{i+1}\mid w_{1..i\setminus k}). \tag{23}\]

This measure captures the intuition that an input token that is more crucial to predicting the next token \(w_{i+1}\) will result in larger decreases in \(\mathsf{P}(w_{i+1}\mid w_{1..i})\) when its contribution to the logit scores is ablated out. It is also possible for \(\Delta\mathsf{LP}\) to be negative, or in other words, \(\mathsf{P}(w_{i+1}\mid w_{1..i})\) can increase as a result of ablating an input token \(w_{k}\).
However, a preliminary analysis showed that negative \(\Delta\mathsf{LP}\) values were much less commonly observed than positive \(\Delta\mathsf{LP}\) values, and that input tokens with negative \(\Delta\mathsf{LP}\) values were not in an easily interpretable relationship with the predicted token. Therefore, the experiments in this work focus on characterizing input tokens with high \(\Delta\)LP values, which are the tokens that drive a large increase in \(\mathsf{P}(w_{i+1}\mid w_{1..i})\).

## 4 Experiment 1: Correlation with Other Importance Measures

This work first compares the decomposition-based \(\Delta\)LP defined in Section 3.2 with other measures of importance that have been used in the literature to examine the degree to which \(\Delta\)LP may be redundant with them. To this end, Pearson correlation coefficients were calculated between the proposed \(\Delta\)LP and attention weights and gradient norms at a token level.

### Procedures

The first experiment used the English section of the Conference on Natural Language Learning shared task corpus (CoNLL-2012; Pradhan et al., 2012) as well as the Wall Street Journal corpus of the Penn Treebank (WSJ; Marcus et al., 1993). Both corpora include text from the newswire domain, and the CoNLL-2012 corpus additionally includes text from broadcasts, magazines, telephone conversations, weblogs, and the Bible. The development sets of the two corpora were used in this experiment, which consist of 9,603 and 1,700 sentences respectively.

To calculate importance measures on the two corpora, the Open Pre-trained Transformer language model (OPT; Zhang et al., 2022) with \(\sim\)125M parameters was used for efficiency. In addition to \(\Delta\)LP defined in Section 3.2,7 the following importance measures were calculated for each context token \(w_{k\in\{1,...,i\}}\) at timestep \(i\):

Footnote 7: Code for calculating decomposed OPT representations and their associated \(\Delta\)LP is publicly available at [https://github.com/byungdoh/llm_decomposition](https://github.com/byungdoh/llm_decomposition).

* Layer-wise attention weights (Vaswani et al., 2017): Average attention weights over \(w_{k}\) from all heads within each layer, i.e. \(\frac{1}{H}\sum_{h=1}^{H}\delta_{k}^{\top}\mathbf{a}_{l,h,i}\), where \(\delta_{k}\in\mathbb{R}^{i}\) is a Kronecker delta vector consisting of a one at element \(k\) and zeros elsewhere, and \(l\in\{1,...,L\}\).
* Gradient norms (Simonyan et al., 2014): Norm of gradient of next-token log probability w.r.t. the input \(\mathbf{x}_{0,k}\), i.e. \(\|\nabla_{\mathbf{x}_{0,k}}\log\mathsf{P}(w_{i+1}\mid w_{1..i})\|_{n}\), where \(n\in\{1,2\}\).
* Input \(\times\) gradient norms (Shrikumar et al., 2017): \(\|\mathbf{x}_{0,k}\odot\nabla_{\mathbf{x}_{0,k}}\log\mathsf{P}(w_{i+1}\mid w_{1..i})\|_{n}\), where \(n\in\{1,2\}\).

Each article of the CoNLL-2012 and WSJ corpora was tokenized according to OPT's byte-pair encoding (BPE; Sennrich et al., 2016) tokenizer and was provided as input to the OPT model. In cases where each article did not fit into a single context window, the second half of the previous context window served as the first half of a new context window to calculate importance measures for the remaining tokens.8 Finally, Pearson correlation coefficients were calculated between token-level \(\Delta\)LP and attention-/gradient-based importance measures on each corpus (163,309,857 points in CoNLL-2012; 25,900,924 points in WSJ).

Footnote 8: In practice, most articles fit within one context window of 2,048 tokens.
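A sketch of the \(\Delta\)LP computation of Eqs. (22)-(23) and of the token-level correlation analysis follows. The decomposed logits and attention weights here are toy random values, standing in for the \(\mathbf{z}^{\prime}_{i,k}\) rows produced by the decomposition in Section 3 and for real model attention.

```python
import numpy as np

rng = np.random.default_rng(0)
i, V = 6, 50                                   # context length, vocabulary size

def log2_prob(z, target):
    z = z - z.max()                            # numerically stable log-softmax
    return (z[target] - np.log(np.exp(z).sum())) / np.log(2)

def delta_lp(z_full, z_contrib, k, target):
    """Eqs. (22)-(23): drop token k's contribution from the logits and
    measure the change in the log2 probability of the target token."""
    return log2_prob(z_full, target) - log2_prob(z_full - z_contrib[k], target)

z_contrib = rng.normal(size=(i, V))            # toy z'_{i,k} rows
bias = rng.normal(size=V)                      # toy cumulative bias b_i
z_full = z_contrib.sum(axis=0) + bias          # Eq. (5)
target = 7
dlp = np.array([delta_lp(z_full, z_contrib, k, target) for k in range(i)])

attn = rng.dirichlet(np.ones(i))               # stand-in attention weights
print(np.corrcoef(dlp, attn)[0, 1])            # token-level Pearson correlation
```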
### Results

The results in Figure 3 show that across both corpora, the proposed \(\Delta\)LP shows weak correlation with both attention weights and gradient norms, which suggests that \(\Delta\)LP does not capture a quantity redundant with the importance measures that have been used in previous work to examine language model predictions. The gradient norms are more correlated with \(\Delta\)LP, which is likely due to the fact that the gradients calculated with respect to the original input representation \(\mathbf{x}_{0,k}\) accumulate all computations performed within the network like the token-wise decomposition. However, one crucial difference between \(\Delta\)LP and gradient norms is that gradient norms can 'saturate' and approach zero when the model makes accurate predictions, as \(\nabla_{\mathbf{z}_{i}}\log\mathsf{P}(w_{i+1}\mid w_{1..i})\approx\mathbf{0}\) when \(\mathsf{P}(w_{i+1}\mid w_{1..i})\approx 1\). This means that the importance measures of all context tokens will be systematically underestimated for high-probability target tokens, which may be especially problematic for analyzing large language models that have been trained on billions of training tokens. For average attention weights, they seem to correlate with \(\Delta\)LP most at layer 1, where they are calculated over layer-normalized input representations [\(\mathbf{N}_{1,\text{in}}(\mathbf{x}_{0,1})\)\(\cdots\)\(\mathbf{N}_{1,\text{in}}(\mathbf{x}_{0,i})\)]. In contrast, the attention weights at higher layers seem to correlate less with \(\Delta\)LP, as they are calculated over representations that have been 'mixed' by the self-attention mechanism.

## 5 Experiment 2: Characterizing High-Importance Context Words

Having established that \(\Delta\)LP provides a novel method to quantify the importance of each context token to language model predictions, the second experiment conducts a series of regression analyses to characterize high-importance context words (i.e. words with high \(\Delta\)LP values) and shed light on which kinds of context words language models leverage most in order to make predictions about the next word.

### Procedures

In order to characterize high-importance context words that drive next-word predictions, linear regression models were fit in a stepwise manner to \(\Delta\)LP values on the development set of the CoNLL-2012 corpus, which contains manual annotations of both syntactic structures and coreference relationships. To this end, the \(\Delta\)LP values were calculated for each context word at a word level (following the Penn Treebank tokenization conventions such that they align with the annotations) using the OPT model with \(\sim\)125M parameters. Whenever the predicted word consisted of multiple tokens, the \(\Delta\)LP values were added together to calculate:

\[\Delta\text{LP}(w_{i+1},w_{i+2}\mid w_{1..i},w_{k})=\Delta\text{LP}(w_{i+2}\mid w_{1..i+1},w_{k})+\Delta\text{LP}(w_{i+1}\mid w_{1..i},w_{k}), \tag{24}\]

which is well-defined by the chain rule of conditional probabilities. Likewise, when the context word consisted of multiple tokens, the contributions of all component tokens were ablated simultaneously (Eq. 22) to calculate the \(\Delta\)LP of that context word.9 In order to keep the regression models tractable, the \(\Delta\)LP value of the most important context word for each predicted word (i.e. highest \(\Delta\)LP value) provided the response data for this experiment. This resulted in a total of 162,882 observations, which are visualized in Figure 4.
Footnote 9: This ability to quantify the contribution of each context token in predicting multiple target tokens, or the simultaneous contribution of multiple context tokens in model prediction, is another advantage of \(\Delta\)LP over attention weights or gradient norms, which are inherently defined at a single-token level.

Subsequently, a 'baseline' regression model that contains baseline predictors was fit to the set of \(\Delta\)LP values. These baseline predictors include the index of the predicted word (i.e. how many words are in the context), the linear distance between the context word and the predicted word, and \(\log\text{P}(w_{i+1}\mid w_{1..i})\), which may be correlated with \(\Delta\)LP values. Additionally, in order to guide the identification of factors underlying the \(\Delta\)LP values of high-importance context words, each data point was associated with the following predictors of interest that capture associations between the predicted word and the context word:

* Pointwise mutual information (PMI): \(\log_{2}\frac{\text{P}(w_{k},w_{i+1})}{\text{P}(w_{k})\text{P}(w_{i+1})}\), which is calculated using unigram and bigram probabilities estimated from the Gigaword 4 corpus [15]. Two variants of PMI are explored in this work, which capture associations of word pairs in contiguous bigrams (\(\text{PMI}_{\text{bigram}}\)) and document co-occurrences (\(\text{PMI}_{\text{doc}}\)).10

Footnote 10: The corpus was tokenized following the Penn Treebank conventions for consistency. PMI was defined to be 0 for word pairs without unigram or bigram probability estimates.

* Syntactic dependency: A binary variable indicating whether the context word and the predicted word form a syntactic dependency. The CoreNLP toolkit [10] was used to convert annotated constituency structures to dependency representations.
* Coreference relationship: A binary variable indicating whether the context word and the predicted word are in coreferent spans.

These predictors of interest were included in a stepwise manner: at each iteration, the one predictor that contributes most to regression model fit was added, and its statistical significance was tested through a likelihood ratio test (LRT). All predictors were centered and scaled prior to regression modeling, so the regression coefficients \(\beta\) are defined in units of standard deviation and are comparable across predictors.
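The stepwise selection loop just described can be sketched as follows. This is schematic: `statsmodels` OLS stands in for the actual regression software, synthetic data stands in for the 162,882 real observations, and the column names are illustrative; it is not the authors' implementation.

```python
# Schematic stepwise predictor selection with likelihood ratio tests (LRT).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

baseline = ["word_index", "distance", "log_prob"]
candidates = {"pmi_bigram", "pmi_doc", "dependency", "coreference"}

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.standard_normal((1000, 8)),
                  columns=baseline + sorted(candidates) + ["delta_lp"])
preds = df.columns[:-1]
df[preds] = (df[preds] - df[preds].mean()) / df[preds].std()  # center and scale

def fit(cols):
    return sm.OLS(df["delta_lp"], sm.add_constant(df[cols])).fit()

included, current = list(baseline), fit(baseline)
while candidates:
    trials = {c: fit(included + [c]) for c in candidates}
    best = max(trials, key=lambda c: trials[c].llf)  # biggest fit improvement
    delta_ll = trials[best].llf - current.llf
    p = chi2.sf(2 * delta_ll, df=1)                  # LRT, one extra parameter
    if p >= 0.001:
        break
    print(f"include {best}: beta={trials[best].params[best]:.3f}, "
          f"DeltaLL={delta_ll:.1f}, p={p:.2g}")
    included, current = included + [best], trials[best]
    candidates.remove(best)
```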
### Results

The results in Table 1 show that among the predictors of interest, both variants of PMI made the biggest contribution to regression model fit, followed by syntactic dependency and coreference relationship.11 This suggests that Transformer-based autoregressive language models rely primarily on collocational associations in making next-word predictions (e.g. _wedding_ predicting _groom_, _medical_ predicting _hospital_). Linguistic factors like syntactic dependencies and coreference relationships explained additional variance in \(\Delta\)LP values, although their contribution was not as large.

\begin{table}
\begin{tabular}{l|r|r|r} Predictor & \(\beta\) & \(t\)-value & \(\Delta\)LL \\ \hline Word index & 0.034 & 1.919 & - \\ Distance & 1.126 & 62.755 & - \\ Log prob. & -0.083 & -5.350 & - \\ \hline \(\text{PMI}_{\text{bigram}}\) & 1.220 & 70.857 & 6151.262\({}^{*}\) \\ \(\text{PMI}_{\text{doc}}\) & 1.286 & 73.952 & 3194.815\({}^{*}\) \\ Dependency & 1.055 & 63.720 & 1981.778\({}^{*}\) \\ Coreference & 0.123 & 7.195 & 25.883\({}^{*}\) \\ \end{tabular}
\end{table}
Table 1: Regression coefficients from the final stepwise regression model and increase in regression model likelihood (\(\Delta\)LL) from including each predictor of interest. The predictors of interest are presented in the order they were included during stepwise regression (i.e. strongest predictor at each iteration). *: \(p<0.001\).

Footnote 11: Refer to Appendix B for regression results from the first iteration of the stepwise analysis, which evaluates each predictor independently on top of the baseline regression model.

The baseline predictors also shed light on the characteristics of context words that have a large influence on next-word probabilities. Most notably, the linear distance between the predicted word and the context word was a positive predictor of \(\Delta\)LP, which indicates that language models can leverage words far back in the context and that the contribution of such context words is large when they do. Moreover, \(\Delta\)LP values were negatively correlated with log probability, which indicates that the contribution of context words generally decreases when the model is making confident predictions about the next word. Finally, although there was a positive correlation between word index and \(\Delta\)LP values, its strength was too weak to draw conclusive interpretations.

## 6 Experiment 3: Syntactic Dependency and Coreference Prediction Using \(\Delta\)LP

The previous experiment revealed that, compared to measures of collocational association, syntactic dependency and coreference relationships were not as strong predictors of \(\Delta\)LP. Experiment 3 further examines the connection between high-importance context words and syntactic dependency and coreference relationships by using \(\Delta\)LP to predict them independently and analyzing the extent to which each relationship type aligns with \(\Delta\)LP.

### Procedures

This experiment used \(\Delta\)LP to make predictions about context words in syntactic dependency and coreference relationships on the development sets of the WSJ and CoNLL-2012 corpora respectively. First, on the WSJ corpus, the precision scores for syntactic dependency relations were calculated by counting how many times context words with high \(\Delta\)LP match words in syntactic dependency relations. While each word has exactly one incoming typed edge from its head in a typical dependency syntax representation, since autoregressive language models have no access to the forward context, all edges between word pairs were treated as undirected edges and were evaluated at the later word in the pair. For each predicted word \(w_{i+1}\) that is in \(n\) syntactic dependency relationships, the top-\(n\) context words were selected based on \(\Delta\)LP within the same sentence and compared to the \(n\) words that are in syntactic dependency relationships with \(w_{i+1}\). The syntactic dependency representations converted using the CoreNLP toolkit (Manning et al., 2014) were used to evaluate the performance on the WSJ corpus.
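The top-\(n\) selection protocol can be summarized in a few lines; the per-word records below are illustrative stand-ins for the actual WSJ preprocessing, not the paper's data structures.

```python
# Sketch of the top-n Delta-LP precision protocol for dependency relations.
def dependency_precision(sentences):
    """sentences: list of sentences; each word i is a dict with
    'delta_lp': Delta-LP scores of the preceding in-sentence words 0..i-1,
    'heads': set of indices j < i linked to word i by an (undirected) edge."""
    hits = total = 0
    for sent in sentences:
        for word in sent:
            n = len(word["heads"])
            if n == 0:
                continue
            scores = word["delta_lp"]
            top_n = set(sorted(range(len(scores)),
                               key=scores.__getitem__, reverse=True)[:n])
            hits += len(word["heads"] & top_n)
            total += n
    return hits / total if total else 0.0

# Toy check: the predicted word has 2 edges, both among its top-2 scores.
toy = [[{"delta_lp": [0.1, 0.9, 0.5, 0.2], "heads": {1, 2}}]]
print(dependency_precision(toy))  # -> 1.0
```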
As a baseline, the expected precision scores from randomly selecting \(n\) previous words within the same sentence are also reported. Similarly, antecedent selection precision scores for coreference relations were calculated by counting how many times the context word with the highest \(\Delta\)LP value matched words in spans denoting the same entity. For each mention span, the \(\Delta\)LP quantifying the impact of every context word on the prediction of the entire span (Eq. 24) was calculated. Subsequently, the context word with the highest \(\Delta\)LP was evaluated in terms of whether it belonged to any antecedent spans denoting the same entity. As a baseline, precision scores from selecting the most recent word with the same part-of-speech as the head word of the span are reported.

### Results

The syntactic dependency results in Table 2 reveal a discrepancy in performance according to the type of relation that is being predicted. Generally, context words with high \(\Delta\)LP values corresponded most closely to words in adjectival modifier and compound relations, followed by those in subject and direct object relations, which are core arguments in English. Performance on adjunct nouns such as nominal modifiers and oblique nouns, as well as on function words like determiners and case markers, was lower. This trend in turn seems to be generally driven by the strength of collocational associations, as can be seen by the corresponding average PMI values in Table 2. This corroborates the regression results of Experiment 2 and further suggests that the seeming connection between language model predictions and syntactic dependencies may underlyingly be the effect of collocational association. One counterexample to this trend seems to be the syntactic dependency between the main verb and its direct object, which shows close correspondence to \(\Delta\)LP despite not having high average PMI values.

\begin{table}
\begin{tabular}{l|c|c||c|c} Relation & \(\Delta\)LP & Base. & PMI\({}_{b}\) & PMI\({}_{d}\) \\ \hline Nom. subj. & 61.15 & 39.79 & 1.38 & 1.44 \\ Direct obj. & 70.43 & 22.01 & 0.91 & 1.57 \\ Oblique & 52.54 & 24.31 & -0.68 & 1.54 \\ Compound & 80.44 & 39.56 & 4.97 & 2.93 \\ Nom. mod. & 53.84 & 26.09 & -0.41 & 1.84 \\ Adj. mod. & 82.55 & 36.02 & 4.36 & 2.17 \\ Determiner & 52.03 & 36.52 & 1.51 & 1.08 \\ Case marker & 52.38 & 27.96 & -0.29 & 1.08 \\ \hline Microavg. & 56.20 & 29.22 & 1.11 & 1.58 \\ \end{tabular}
\end{table}
Table 2: Precision scores calculated using \(\Delta\)LP, the random word baseline, and average PMI of frequent syntactic dependency relations in the WSJ corpus. The less frequent relations are not presented separately but are included in the microaverage. PMI\({}_{b}\) is average PMI based on contiguous bigrams; PMI\({}_{d}\) is average PMI based on document co-occurrences.

The coreference results in Table 3 show an even larger gap in performance according to the type of entity mention. Generally, context words with high \(\Delta\)LP values corresponded most closely to previous mentions of proper nouns and common nouns. In contrast, they did not correspond well to antecedents of personal and possessive pronouns, showing lower precision scores than a simple baseline that chooses the most recent pronoun. A follow-up analysis of the \(\Delta\)LP values showed that when the language model has to predict a head word that has already been observed in its context, the earlier occurrence of that head word contributes substantially to its prediction. The proportion of mention spans whose head words are repeated from head words of previous coreferent spans in Table 3 shows that the close correspondence between \(\Delta\)LP and previous mentions of proper nouns is driven by the fact that these proper nouns are often repeated verbatim in the corpus. In contrast, the prediction of pronouns does not seem to be mainly driven by context words that denote their antecedents.

\begin{table}
\begin{tabular}{l|c|c||c} Mention head POS & \(\Delta\)LP & Base. & Rep.\% \\ \hline Personal pronoun & 26.55 & 36.80 & 30.92 \\ Possessive pronoun & 23.29 & 36.45 & 30.59 \\ Proper noun (sg.) & 61.21 & 23.19 & 68.80 \\ Proper noun (pl.) & 70.67 & 57.33 & 68.00 \\ Common noun (sg.) & 43.39 & 12.55 & 48.75 \\ Common noun (pl.) & 47.01 & 24.73 & 55.03 \\ Possessive ending & 46.28 & 30.58 & 40.91 \\ \hline Microavg. & 38.21 & 28.65 & 43.26 \\ \end{tabular}
\end{table}
Table 3: Precision scores calculated using \(\Delta\)LP, the most recent head POS baseline, and Rep. % of frequent types of coreferent spans in the CoNLL-2012 corpus. The less frequent types are not presented separately but are included in the microaverage. Rep. % is the proportion of mention spans whose head words are repeated from previous coreferent spans.

## 7 Discussion and Conclusion

This work advances recent efforts to interpret the predictions of Transformer-based large language models. To this end, a linear decomposition of final language model hidden states into the sum of final output representations of each initial input token and a cumulative bias term was presented. This decomposition is exact as long as the activation function of the feedforward neural network is differentiable almost everywhere, and therefore it is applicable to virtually all Transformer-based architectures. Additionally, this decomposition does not require perturbing any intermediate computations nor re-running the language model to examine the impact of each input token. The decomposition in turn allows the definition of probability distributions that ablate the influence of input tokens, which was used to define the importance measure \(\Delta\)LP that quantifies the change in next-token log probability.

The first experiment in this work demonstrated that \(\Delta\)LP does not capture a quantity redundant with importance measures that have been used in previous work to examine language model predictions, such as layer-wise attention weights or gradient norms. Subsequently, based on the proposed \(\Delta\)LP, a stepwise regression analysis was conducted to shed light on the characteristics of context words that autoregressive language models rely on most in order to make next-word predictions. The regression results show that Transformer-based language models mainly leverage context words that form strong collocational associations with the predicted word, followed by context words that are in syntactic dependencies and coreference relationships with the predicted word. The high reliance on collocational associations is consistent with the mathematical analysis of Transformers showing that a layer of self-attention effectively functions as a lookup table that tracks bigram statistics of the input data (Elhage et al., 2021), as well as empirical observations that Transformer-based autoregressive language models have a propensity to 'memorize' sequences from the training data (Carlini et al., 2022).
Finally, as a follow-up analysis, \(\Delta\)LP was used to predict syntactic dependencies and coreferent mentions to further examine their relationship to high-importance context words. The precision scores on both tasks revealed a large discrepancy in performance according to the type of syntactic dependency relations and entity mentions. On syntactic dependency prediction, \(\Delta\)LP corresponded more closely to words in relations with high collocational association such as compounds and adjectival modifiers, providing further support for its importance in a language model's next-word prediction. Moreover, on coreferent antecedent selection, \(\Delta\)LP more accurately identified previous mentions of proper nouns and common nouns that were already observed verbatim in context. This is consistent with the tendency of Transformer-based language models to predict identical tokens from their context (Sun et al., 2021), which seems to be enabled by dedicated 'induction heads' (Elhage et al., 2021; Olsson et al., 2022) that learn such in-context copying behavior. Taken together, these results suggest that collocational association and verbatim repetition strongly drive the predictions of Transformer-based autoregressive language models. As such, the connection drawn between a large language model's computations and linguistic phenomena such as syntactic dependencies and coreference observed in previous work (e.g. Manning et al., 2020) may underlyingly be the effect of these factors.

## Acknowledgments

We thank the reviewers for their helpful comments. This work was supported by the National Science Foundation grant #1816891. All views expressed are those of the authors and do not necessarily reflect the views of the National Science Foundation.

## Limitations

The connection between factors underlying the predictions of Transformer-based autoregressive language models and linguistic factors drawn in this work is based on a model trained on English text and annotated corpora of English text. Therefore, this connection may not generalize to other languages with e.g. more flexible word order. Additionally, although the alternative formulations of Transformer hidden states yielded insights about language model predictions, they are more computationally expensive to calculate, as they rely on an explicit decomposition of the matrix multiplication operation, which in undecomposed form is highly optimized in most packages.

## Ethics Statement

Experiments presented in this work used datasets from previously published research [3, 17], in which the procedures for data collection, validation, and cleaning are outlined. These datasets were used to study a large language model's predictions about coreference resolution and dependency parsing respectively, which is consistent with their intended use. As this work focuses on studying the factors underlying the predictions of large language models, its potential risks and negative impacts on society seem to be minimal.
2303.05829
Anomalous Hall effect in type-I Weyl metals beyond the noncrossing approximation
We study the anomalous Hall effect (AHE) in tilted Weyl metals with Gaussian disorder due to the crossed X and Ψ diagrams in this work. The importance of such diagrams to the AHE has been demonstrated recently in the two-dimensional (2D) massive Dirac model and in Rashba ferromagnets. It has been shown that the inclusion of such diagrams dramatically changes the total AHE in such systems. In this work, we show that the contributions from the X and Ψ diagrams to the AHE in tilted Weyl metals are of the same order as that of the non-crossing diagram we studied in a previous work, but with opposite sign. The total contribution of the X and Ψ diagrams cancels the majority of the contribution from the non-crossing diagram in tilted Weyl metals, similar to the 2D massive Dirac model. We also discuss the difference of the contributions from the crossed diagrams between the 2D massive Dirac model and tilted Weyl metals. Finally, we discuss the experimental relevance of observing the AHE due to the X and Ψ diagrams in type-I Weyl metals such as Co3Sn2S2.
Jia-Xing Zhang, Wei Chen
2023-03-10T10:06:40Z
http://arxiv.org/abs/2303.05829v2
# Anomalous Hall effect in type-I Weyl metals beyond the noncrossing approximation

###### Abstract

We study the anomalous Hall effect (AHE) in tilted Weyl metals with Gaussian disorder due to the crossed \(X\) and \(\Psi\) diagrams in this work. The importance of such diagrams to the AHE has been demonstrated recently in the 2D massive Dirac model and in Rashba ferromagnets, where it was shown that the inclusion of such diagrams dramatically changes the total AHE. In this work, we show that the contributions from the \(X\) and \(\Psi\) diagrams to the AHE in tilted Weyl metals are of the same order as that of the non-crossing diagram we studied in a previous work, but with opposite sign. The total contribution of the \(X\) and \(\Psi\) diagrams cancels the majority of the contribution from the non-crossing diagram in tilted Weyl metals, similar to the 2D massive Dirac model. We also discuss the difference of the contributions from the crossed diagrams between the 2D massive Dirac model and tilted Weyl metals. Finally, we discuss the experimental relevance of observing the AHE due to the \(X\) and \(\Psi\) diagrams in the type-I Weyl metal Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\).

## I. Introduction

The anomalous Hall effect (AHE) has been a topic of interest since it was first observed in ferromagnetic iron by Edwin Hall in 1881 [1]. It is analogous to the usual Hall effect but without the need of an external magnetic field [2; 3]. The transverse motion in anomalous Hall systems originates from the spin-orbit interaction, and to have a net transverse current, the time reversal symmetry (TRS) has to be broken in the system [2; 3; 4; 5; 6]. In insulators or semiconductors, the anomalous Hall conductivity is quantized and insensitive to impurity scattering. In metals, however, impurity scattering affects the AHE significantly, and the AHE in such cases can be divided into the intrinsic contribution, which is due to the non-trivial topology of the electronic band structure and remains in the clean limit, and the extrinsic contribution, which is due to impurity scattering [5]. The anomalous Hall current can be obtained either from the quantum Kubo-Streda (QKS) formula [7] or from a semi-classical Boltzmann equation (SBE) approach [4; 5]. The former approach is more systematic whereas the latter is physically more transparent.

It is well known that the Feynman diagrams with crossed impurity lines result in a longitudinal conductivity which is smaller than that of the non-crossing diagram by a factor of \(1/\epsilon_{F}\tau\), so the crossed diagrams are usually neglected in computing the longitudinal conductivity. For a long time, the crossed diagrams were also ignored for the AHE, and only the Feynman diagrams with non-crossing impurity scattering lines were considered [4; 5; 8]. The anomalous Hall conductivity from the non-crossing diagrams is independent of the impurity scattering rate or strength, i.e., \(\sigma_{H}^{a}\sim\tau^{0}\). However, it was demonstrated recently that the diagrams with two crossed impurity lines, namely the so-called X and \(\Psi\) diagrams, may also contribute to the anomalous Hall conductivity with the same order of magnitude as the non-crossing diagram [9; 10; 11]. The contributions of such crossed diagrams come from the skew scatterings on pairs of closely located impurities with distance of the order of the Fermi wavelength. The account of such crossed diagrams in a number of anomalous Hall systems changes the total AHE in the systems dramatically [9; 10; 11; 12].
For example, the inclusion of the X and \(\Psi\) diagrams in the two-dimensional (2D) Rashba ferromagnetic metal results in a non-vanishing AHE instead of the vanishing result obtained under the non-crossing approximation (NCA) [9; 13; 14]. In the 2D massive Dirac model, the X and \(\Psi\) diagrams almost cancel out the NCA contribution at high energy [10; 11]. It was also shown that the same crossed diagrams play an important role for the AHE on the surface of the topological Kondo insulator [12], for the Kerr effect in chiral p-wave superconductors [15], and for the spin Hall effect in the presence of strong impurities [16]. The above cases show that the crossed diagrams must be included for a complete study of the AHE in a general case. For this reason, we study the contribution of the crossed diagrams, namely the X and \(\Psi\) diagrams, to the AHE in tilted Weyl metals with broken TRS and weak Gaussian disorder in this work [17; 18]. Diagrams with more crossed lines give contributions of higher order in \(1/\epsilon_{F}\tau\) and are negligible.

For untilted Weyl metals it has been shown that impurity scattering has little effect on the AHE provided the Fermi energy is not very far from the Weyl nodes [19]. This is because the low energy effective Hamiltonian of a single Weyl node of the untilted Weyl metal gains an emergent TRS, and the AHE due to impurity scattering vanishes. For tilted Weyl metals, the tilting breaks the TRS of the effective Hamiltonian of a single Weyl node [20; 21], and impurity scattering has significant effects on the AHE in such a system [22; 23]. In a previous paper, we studied the disorder induced AHE in tilted Weyl metals due to the non-crossing diagrams and obtained both the intrinsic and extrinsic contributions for such diagrams from the quantum Kubo-Streda formula [23]. We also separated the two different extrinsic contributions, namely the side jump and skew scattering contributions, from the non-crossing diagrams in this system. The study of the crossed diagrams for tilted Weyl metals in this work is an important supplement to the skew scattering contribution to the AHE in such a system.

We show that the contribution from both the \(X\) and \(\Psi\) diagrams for tilted Weyl metals is of the same order as the contribution from the NCA diagram, i.e., \(\sim\tau^{0}\). This is different from the 2D massive Dirac model, for which the contribution from the \(\Psi\) diagram vanishes for Gaussian disorder [10; 11]. On the other hand, our calculation shows that the total contribution of the \(X\) and \(\Psi\) diagrams cancels the majority of the contribution from the NCA diagram for tilted Weyl metals. This is similar to the 2D massive Dirac model. However, the inclusion of the \(X\) and \(\Psi\) diagrams does not change the dependence of the anomalous Hall conductivity on the Fermi energy, whereas in the 2D massive Dirac model the crossed diagrams change the total anomalous Hall conductivity from \(\sigma_{xy}\sim m/\epsilon_{F}\) for the NCA diagram to \(\sigma_{xy}\sim(m/\epsilon_{F})^{3}\) [11]. We also discuss the experimental relevance of observing the effects of the \(X\) and \(\Psi\) diagrams in tilted Weyl metals such as Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) [24; 25; 26]. We point out that the condition for observing the contributions of the crossed diagrams is much more demanding than for the non-crossing diagram.
Moreover, since the intrinsic AHE from the Chern-Simons term is much larger than the AHE from both the non-crossing and crossed diagrams in Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\), the effects of disorder on the AHE are not easily distinguishable in experiments in this system. We propose that one can observe the effects of the disorder on the AHE by measuring the anomalous Nernst effect (ANE) in such a system, because the contributions of the disorder to the ANE and AHE are proportional to each other whereas the Chern-Simons term has no contribution to the ANE.

This paper is organized as follows. In Sec. II, we present the model and the calculation of the anomalous Hall effect due to the crossed \(X\) and \(\Psi\) diagrams in tilted Weyl metals, and compare the AHE from the crossed diagrams with that from the non-crossing diagram, as well as with other systems such as the 2D massive Dirac model. In Sec. III, we briefly discuss the experimental relevance of observing the effects of the \(X\) and \(\Psi\) diagrams. In Sec. IV, we summarize this work.

## II. AHE in tilted Weyl metals due to the X and \(\Psi\) diagrams

The low energy physics of a type-I Weyl metal with broken TRS can be described by an effective low energy Hamiltonian of two independent Weyl nodes and a topological Chern-Simons term [23; 27]. The Chern-Simons term results in an AHE proportional to the distance between the two Weyl nodes and is not affected by impurity scattering. We will then focus on the low energy effective Hamiltonian of the Weyl nodes in the following, which is

\[H=\sum_{\chi}(\chi v\mathbf{\sigma}\cdot\mathbf{p}+\mathbf{u}_{\chi}\cdot\mathbf{p}), \tag{1}\]

where \(\chi=\pm 1\) is the chirality of the two Weyl nodes, \(\mathbf{\sigma}\) are the Pauli matrices and \(\mathbf{u}_{\chi}\) is a tilting velocity with \(u_{\chi}<v\) for the type-I Weyl metals we consider in this work. Here we assume the tilting \(\mathbf{u}_{+}=-\mathbf{u}_{-}=\mathbf{u}\), i.e., the tilting is opposite for the two valleys. In this case, the AHE contributions of the two Weyl nodes add up instead of canceling out. The Hamiltonian \(H_{\chi}\) for each single valley results in two tilted linear bands \(\epsilon_{\pm}=\pm vp+\mathbf{u}_{\chi}\cdot\mathbf{p}\). The tilting term breaks the TRS of a single Weyl node, so the AHE from each valley does not vanish. The term \(\chi v\mathbf{\sigma}\cdot\mathbf{p}\) breaks the global TRS, so the total AHE of the two valleys is non-zero.

We consider weak Gaussian disorder (white noise) with random potential \(V(\mathbf{r})=V_{0}\sum_{a}\delta(\mathbf{r}-\mathbf{r}_{a})\) and correlation \(\langle V(\mathbf{r})V(\mathbf{r}^{\prime})\rangle=\gamma\delta(\mathbf{r}-\mathbf{r}^{\prime})\), where \(\gamma=n_{i}V_{0}^{2}\) and \(n_{i}\) is the impurity density. We assume that the mean free path of the electrons is much larger than the Fermi wavelength, i.e., \(k_{F}l\gg 1\) or \(\epsilon_{F}\tau\gg 1\). The anomalous Hall conductivity may be written in two parts, \(\sigma_{H}^{\rm I}\) and \(\sigma_{H}^{\rm II}\), in the Kubo-Streda formula [5; 7]. Formally, \(\sigma_{H}^{\rm I}\) takes into account the contribution on the Fermi surface, and \(\sigma_{H}^{\rm II}\) includes the contribution from the whole Fermi sea. Since \(\sigma_{H}^{\rm II}\) is not sensitive to impurity scattering and its contribution in the clean limit has been studied in previous works for tilted Weyl metals [20; 21], we only need to study the \(\sigma_{H}^{\rm I}\) part in this work.
The leading order contribution to the response function \(\Pi_{\alpha\beta}^{\rm I}\) includes the diagrams in Fig. 1, where Fig. 1(a) is the diagram under the NCA and has been studied in our previous work [23]. The NCA diagram includes both the intrinsic and extrinsic contributions, and both contributions to the AHE are independent of the scattering rate \(1/\tau\) in the leading order, i.e., \(\sim\tau^{0}\).

Figure 1: (a) The Feynman diagram of the response function \(\Pi_{\alpha\beta}^{\rm I}\) under the non-crossing approximation (NCA) in the spin basis. The thick solid lines are Green's functions in the spin basis under the Born approximation, and the solid square represents the current vertex \(\Gamma_{\alpha}\) renormalized by the ladder diagram under the NCA. (b) The X diagram with two crossed impurity lines. (c) and (d): The \(\Psi\) diagram with two crossed impurity lines. (e) The recursion equation satisfied by the renormalized current vertex \(\Gamma_{\alpha}\).

For the crossed diagrams in Figs. 1(b)-(d), previous works [9; 11; 12] have shown that for Gaussian disorder the leading order AHE from these diagrams for 2D Rashba ferromagnets and massive Dirac models is of the same order as that of the non-crossing diagram in Fig. 1(a), i.e., \(\sim\tau^{0}\). In the following, we study the contribution of the crossed \(X\) and \(\Psi\) diagrams for tilted Weyl metals and compare their leading order contribution with the non-crossing diagram in Fig. 1(a). Diagrams with more crossed impurity lines have smaller contributions in \(1/\tau\) and so are negligible. We assume that the impurity potential is diagonal in both the spin and valley indices, so the two valleys decouple and one can compute the AHE in each valley separately.

The leading order contribution to the AHE from the NCA diagram of the tilted Weyl metals has been worked out in our previous work [23]. The total dc anomalous Hall conductivity from the non-crossing diagram for the two Weyl nodes is \(\sigma^{\rm NCA}_{xy}=4e^{2}\epsilon_{F}u/3\pi^{2}v^{2}\) in the leading order of \(u/v\) for \({\bf u}\) in the \(z\) direction. As a comparison, we compute the anomalous Hall conductivity in the dc limit due to the crossed \(X\) and \(\Psi\) diagrams in the tilted Weyl metals in the leading order of \(u/v\) in the following. We consider a uniform electric field \({\bf E}=-\partial_{t}{\bf A}\) applied to the system. In the linear response regime \(j^{\rm I}_{\alpha}=\Pi^{\rm I}_{\alpha\beta}A^{\beta}\), where \(A^{\alpha}=(0,{\bf A})\).
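For later reference, the relation between the response function and the conductivity follows from this gauge choice in the standard way: for monochromatic fields \(\sim e^{-i\omega t}\),

\[{\bf E}=-\partial_{t}{\bf A}=i\omega{\bf A}\;\Rightarrow\;j^{\rm I}_{\alpha}=\Pi^{\rm I}_{\alpha\beta}A^{\beta}=\frac{\Pi^{\rm I}_{\alpha\beta}}{i\omega}E_{\beta},\qquad\sigma_{\alpha\beta}=\lim_{\omega\to 0}\frac{\Pi^{\rm I}_{\alpha\beta}(\omega)}{i\omega},\]

which is the form \(\sigma^{X}_{xy}=\Pi^{X,a}_{xy}/i\omega\) used below.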
The response functions \(\Pi^{\rm I}_{\alpha\beta}\) in the dc limit for the X and \(\Psi\) diagrams for a single Weyl node (e.g., \(\chi=1\)) are respectively

\[\Pi^{X}_{\alpha\beta}=\gamma^{2}\omega\sum_{{\bf p}_{1},\ldots,{\bf p}_{4}}\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}\delta({\bf p}_{1}+{\bf p}_{2}-{\bf p}_{3}-{\bf p}_{4})\,{\rm Tr}[\Gamma_{\alpha}G^{R}({\bf p}_{1})G^{R}({\bf p}_{3})G^{R}({\bf p}_{2})\Gamma_{\beta}G^{A}({\bf p}_{2})G^{A}({\bf p}_{4})G^{A}({\bf p}_{1})], \tag{2}\]

and

\[\Pi^{\Psi}_{\alpha\beta}=\gamma^{2}\omega\sum_{{\bf p}_{1},\ldots,{\bf p}_{4}}\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}\delta({\bf p}_{1}+{\bf p}_{4}-{\bf p}_{2}-{\bf p}_{3})\,{\rm Tr}[G^{A}({\bf p}_{1})\Gamma_{\alpha}G^{R}({\bf p}_{1})G^{R}({\bf p}_{3})G^{R}({\bf p}_{4})G^{R}({\bf p}_{2})\Gamma_{\beta}G^{A}({\bf p}_{2})+G^{A}({\bf p}_{1})\Gamma_{\alpha}G^{R}({\bf p}_{1})G^{R}({\bf p}_{2})\Gamma_{\beta}G^{A}({\bf p}_{2})G^{A}({\bf p}_{4})G^{A}({\bf p}_{3})], \tag{3}\]

where \(G^{R/A}\) is the retarded/advanced Green's function (GF) of the tilted Weyl metal, and \(\Gamma_{\alpha}\) is the current vertex renormalized by the non-crossing ladder diagram [23]. We have omitted the argument \(\epsilon\) in \(G^{R/A}(\epsilon,{\bf p})\) in the above equations for brevity. The impurity averaged retarded/advanced GF in a single valley (e.g. with \(\chi=1\)) under the first Born approximation is [23]

\[G^{R/A}(\epsilon,{\bf p})=(\epsilon-v\mathbf{\sigma}\cdot{\bf p}-{\bf u}\cdot{\bf p}-\Sigma^{R/A})^{-1}, \tag{4}\]

where the self-energy due to the impurity scatterings is \(\Sigma^{R/A}=\mp\frac{i}{2\tau}[1+\mathbf{\Delta}({\bf u})\cdot\mathbf{\sigma}]\) with \(1/\tau=\pi\gamma g(\epsilon_{F})\), \(g(\epsilon_{F})=\int\frac{d^{3}p}{(2\pi)^{3}}\delta({\bf u}\cdot{\bf p}+vp-\epsilon_{F})=\frac{\epsilon_{F}^{2}v}{2\pi^{2}(v^{2}-u^{2})^{2}}\) being the density of states at the Fermi energy \(\epsilon_{F}>0\) and \(\mathbf{\Delta}({\bf u})=-{\bf u}/v\). For the calculation in this work, it is convenient to write the \(G^{R/A}\) in Eq. (4) as

\[G^{R/A}(\epsilon,{\bf p})=\frac{(\epsilon\pm\frac{i}{2\tau}-{\bf u}\cdot{\bf p})\sigma^{0}+v{\bf p}\cdot\mathbf{\sigma}\mp\frac{i}{2\tau}(\mathbf{\Delta}\cdot\mathbf{\sigma})}{(\epsilon-\epsilon_{p}^{+}\pm\frac{i}{2\tau^{+}})(\epsilon-\epsilon_{p}^{-}\pm\frac{i}{2\tau^{-}})}, \tag{5}\]

with \(1/\tau^{\pm}=\frac{1}{\tau}(1\pm\frac{{\bf p}\cdot\mathbf{\Delta}}{p})\). The renormalized current vertex \(\Gamma_{\alpha}\) in Eqs. (2) and (3) has been worked out in our previous work [23]. The bare current vertex for the tilted Weyl metals is \(\hat{j}_{\alpha}=e(v\sigma_{\alpha}+u_{\alpha}\sigma_{0})\) (we define \(u_{0}\equiv 0\)).
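As a quick consistency check of the density of states quoted above, one can carry out the integral explicitly for \({\bf u}\) along \(z\), so that \({\bf u}\cdot{\bf p}=up\cos\theta\); substituting \(t=\cos\theta\),

\[g(\epsilon_{F})=\frac{1}{4\pi^{2}}\int_{-1}^{1}dt\int_{0}^{\infty}p^{2}dp\,\delta\big((v+ut)p-\epsilon_{F}\big)=\frac{\epsilon_{F}^{2}}{4\pi^{2}}\int_{-1}^{1}\frac{dt}{(v+ut)^{3}}=\frac{\epsilon_{F}^{2}}{4\pi^{2}}\,\frac{2v}{(v^{2}-u^{2})^{2}}=\frac{\epsilon_{F}^{2}v}{2\pi^{2}(v^{2}-u^{2})^{2}},\]

recovering the expression entering \(1/\tau\) and reducing to the familiar single-node Weyl result \(g=\epsilon_{F}^{2}/2\pi^{2}v^{3}\) at \(u=0\).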
By expressing \(\hat{j}_{\alpha}\) and \(\Gamma_{\alpha}\) in terms of the Pauli matrices as \(\hat{j}_{\alpha}={\cal J}_{\alpha\beta}\sigma_{\beta}\) and \(\hat{\Gamma}_{\alpha}=\Gamma_{\alpha\beta}\sigma_{\beta}\), \(\alpha,\beta=0,x,y,z\), one can solve for the coefficients of the renormalized current vertex as \(\Gamma_{\alpha\beta}={\cal J}_{\alpha\gamma}{\cal D}_{\gamma\beta}\), where the summation over the repeated index \(\gamma\) is implied as usual and \({\cal D}=(1-\gamma{\cal I})^{-1}\) is the \(4\times 4\) diffusion matrix with the polarization operator \({\cal I}\) defined as

\[{\cal I}_{\alpha\beta}=\frac{1}{2}\int\frac{d{\bf p}}{(2\pi)^{3}}{\rm Tr}[\sigma_{\alpha}G^{R}(\epsilon+\omega,{\bf p}+{\bf q})\sigma_{\beta}G^{A}(\epsilon,{\bf p})]. \tag{6}\]

In our previous work [23], we showed that the renormalized current vertex is \(\Gamma_{\alpha\beta}=ev{\cal D}_{\alpha\beta}\), i.e., the tilting term \(u_{\alpha}\sigma_{0}\) in the bare current vertex has no contribution to the AHE, and the main effect of the tilting is to produce an anisotropy of the Fermi surface. We also worked out the \({\cal I}\) matrix and \({\cal D}\) matrix for the tilted Weyl metals in the previous work [23], so we simply apply those results in this work. Denoting the integrand of \({\cal I}_{\alpha\beta}\) in the dc limit as \(I_{\alpha\beta}({\bf p})=\frac{1}{2}{\rm Tr}[\sigma_{\alpha}G^{R}(\epsilon,{\bf p})\sigma_{\beta}G^{A}(\epsilon,{\bf p})]\), one gets \(G^{A}\hat{\Gamma}_{\alpha}G^{R}=\Gamma_{\alpha\beta}I_{\beta\gamma}\sigma_{\gamma}\), \(G^{R}\hat{\Gamma}_{\alpha}G^{A}=\sigma_{\gamma}I_{\gamma\beta}\Gamma_{\alpha\beta}\). The response functions for the X and \(\Psi\) diagrams in Figs. 1(b)-(d) can then be written as

\[\Pi^{X}_{\alpha\beta}=e^{2}\gamma^{2}v^{2}\omega\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}\sum_{{\bf p}_{1},\ldots,{\bf p}_{4}}\delta({\bf p}_{1}+{\bf p}_{2}-{\bf p}_{3}-{\bf p}_{4}){\cal D}_{\alpha\xi}I_{\xi\mu}({\bf p}_{1})F_{\mu\nu}({\bf p}_{3},{\bf p}_{4})I_{\nu\gamma}({\bf p}_{2}){\cal D}^{T}_{\gamma\beta}, \tag{7}\]

\[\Pi^{\Psi}_{\alpha\beta}=e^{2}\gamma^{2}v^{2}\omega\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}\sum_{{\bf p}_{1},\ldots,{\bf p}_{4}}\delta({\bf p}_{1}+{\bf p}_{4}-{\bf p}_{2}-{\bf p}_{3}){\cal D}_{\alpha\xi}I_{\xi\mu}({\bf p}_{1})M_{\mu\nu}({\bf p}_{3},{\bf p}_{4})I_{\nu\gamma}({\bf p}_{2}){\cal D}^{T}_{\gamma\beta}, \tag{8}\]

where we have defined

\[F_{\mu\nu}({\bf p}_{3},{\bf p}_{4})\equiv{\rm Tr}[\sigma_{\mu}G^{R}(\epsilon,{\bf p}_{3})\sigma_{\nu}G^{A}(\epsilon,{\bf p}_{4})], \tag{9}\]

\[M_{\mu\nu}({\bf p}_{3},{\bf p}_{4})\equiv{\rm Tr}[\sigma_{\mu}\sigma_{\nu}G^{A}(\epsilon,{\bf p}_{4})G^{A}(\epsilon,{\bf p}_{3})+\sigma_{\nu}\sigma_{\mu}G^{R}(\epsilon,{\bf p}_{3})G^{R}(\epsilon,{\bf p}_{4})]. \tag{10}\]

The AHE due to the X and \(\Psi\) diagrams corresponds to the anti-symmetric parts of the response functions \(\Pi^{X}_{\alpha\beta}\) and \(\Pi^{\Psi}_{\alpha\beta}\). In the following, we study the AHE in tilted Weyl metals due to the two diagrams respectively.

### AHE due to the X diagram

In this subsection, we study the AHE due to the \(X\) diagram in tilted Weyl metals. To do this, we first compute the anti-symmetric part of the response function \(\Pi^{X}_{\alpha\beta}\) in Eq. (7).
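For orientation, the diffusion matrix is just the geometric series generated by the recursion of Fig. 1(e):

\[\Gamma_{\alpha\beta}={\cal J}_{\alpha\beta}+\gamma\,\Gamma_{\alpha\gamma}{\cal I}_{\gamma\beta}\;\Longrightarrow\;\Gamma={\cal J}(1-\gamma{\cal I})^{-1}={\cal J}\sum_{n=0}^{\infty}(\gamma{\cal I})^{n},\]

with one factor of \(\gamma{\cal I}\) per impurity-line rung of the non-crossing ladder.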
For the matrices \({\cal D}\), \(I\) and \(F\) in Eq. (7), the symmetric parts scale as \({\cal D}_{0}\sim\tau^{0}\), \(I^{s}\sim\tau\), \(F^{s}\sim\tau^{0}\), and the anti-symmetric parts as \({\cal D}^{a}\sim\tau^{-1}\), \(I^{a}\sim\tau^{0}\), \(F^{a}\sim\tau^{0}\). In the leading order of \(1/\epsilon_{F}\tau\), the anti-symmetric part of \(\Pi^{X}_{\alpha\beta}\) is then

\[\Pi^{X,a}_{\alpha\beta}=e^{2}\gamma^{2}\omega\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}\sum_{{\bf p}_{1},{\bf p}_{2},{\bf Q}}{\cal D}_{0,\alpha\gamma}I^{s}_{\gamma\mu}({\bf p}_{1})F^{a}_{\mu\nu}({\bf p}_{1}-{\bf Q},{\bf p}_{2}+{\bf Q})I^{s}_{\nu\eta}({\bf p}_{2}){\cal D}^{T}_{0,\eta\beta}, \tag{11}\]

where \({\bf Q}\equiv{\bf p}_{1}-{\bf p}_{3}={\bf p}_{4}-{\bf p}_{2}\). The vertex correction factors \({\cal D}_{0}\) and \({\cal D}^{T}_{0}\) on the two ends of \(\Pi^{X,a}_{\alpha\beta}\) are constant matrices as functions of \(u\) and \(v\), as given in our previous work [23], and when multiplied with the remaining part of the response function, they result in an extra total factor \(\tilde{\alpha}^{2}\approx 9/4+{\cal O}(u^{2}/v^{2})\), provided the remaining part is an anti-symmetric matrix of linear order in \(u_{i}\), \(i=1,2,3\), which is the case for both the X and \(\Psi\) diagrams. For convenience, we will drop the \({\cal D}_{0}\) factors in Eq. (11) in the following calculation and add the vertex correction factor \(\tilde{\alpha}^{2}\) at the end. Since the symmetric part of the \(I\) matrix in the dc limit is

\[I^{s}_{\alpha\beta}({\bf p})\approx\pi\tau^{+}\delta(\epsilon-{\bf u}\cdot{\bf p}-vp)\frac{1}{p^{2}}\times p_{\alpha}p_{\beta}, \tag{12}\]

the integration over the momenta \({\bf p}_{1}\) and \({\bf p}_{2}\) is bound to the Fermi surface due to the \(\delta\) function in \(I^{s}\). The anti-symmetric part of the \(F\) matrix for the X diagram is \(F^{a}_{\mu\nu}=N_{\mu\nu}(F^{a})/D(F)\), where

\[N_{\mu\nu}(F^{a})=2iv^{2}\{\epsilon^{0\mu\nu k}[(p_{1}p_{2k}-p_{2}p_{1k})+(p_{1}+p_{2})Q_{k}+\frac{{\bf u}\cdot{\bf Q}}{v}(p_{1k}+p_{2k})]-\epsilon^{\mu\nu lk}({\bf p}_{1}-{\bf Q})_{l}({\bf p}_{2}+{\bf Q})_{k}\},\]

\[D(F)=[\epsilon-{\bf u}\cdot({\bf p}_{1}-{\bf Q})-v|{\bf p}_{1}-{\bf Q}|+\frac{i}{2\tau_{3}^{+}}][\epsilon-{\bf u}\cdot({\bf p}_{1}-{\bf Q})+v|{\bf p}_{1}-{\bf Q}|+\frac{i}{2\tau_{3}^{-}}][\epsilon-{\bf u}\cdot({\bf p}_{2}+{\bf Q})-v|{\bf p}_{2}+{\bf Q}|-\frac{i}{2\tau_{4}^{+}}][\epsilon-{\bf u}\cdot({\bf p}_{2}+{\bf Q})+v|{\bf p}_{2}+{\bf Q}|-\frac{i}{2\tau_{4}^{-}}], \tag{13}\]

and \(\frac{1}{\tau_{i}^{\pm}}\equiv\frac{1}{\tau}(1\pm\delta_{i})\), \(\delta_{i}\equiv\frac{{\bf p}_{i}\cdot\mathbf{\Delta}}{p_{i}}\), \(\mu,\nu=0,1,2,3\), and \(l,k=1,2,3\). In \(N_{\mu\nu}(F^{a})\), we have only kept the leading order in \(1/\tau\). We assume \({\bf u}\) in the \(z\) direction for simplicity and \({\bf Q}=Q(\sin\alpha\cos\beta,\sin\alpha\sin\beta,\cos\alpha)\). We rotate the \(z\)-axis to the direction of \({\bf Q}\) by the transformation

\[\left(\begin{array}{c}\hat{\bf x}^{\prime}\\ \hat{\bf y}^{\prime}\\ \hat{\bf z}^{\prime}\end{array}\right)=\left(\begin{array}{ccc}\cos\alpha\cos\beta&\cos\alpha\sin\beta&-\sin\alpha\\ -\sin\beta&\cos\beta&0\\ \sin\alpha\cos\beta&\sin\alpha\sin\beta&\cos\alpha\end{array}\right)\left(\begin{array}{c}\hat{\bf x}\\ \hat{\bf y}\\ \hat{\bf z}\end{array}\right),\]

where \((\hat{\bf x},\hat{\bf y},\hat{\bf z})\) and \((\hat{\bf x}^{\prime},\hat{\bf y}^{\prime},\hat{\bf z}^{\prime})\) are the bases of the old and new frames respectively.
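One can check directly that this transformation is orthogonal and aligns the new \(z\)-axis with \({\bf Q}\):

\[\hat{\bf z}^{\prime}=(\sin\alpha\cos\beta,\,\sin\alpha\sin\beta,\,\cos\alpha)=\hat{\bf Q},\qquad\hat{\bf x}^{\prime}\cdot\hat{\bf z}^{\prime}=\hat{\bf y}^{\prime}\cdot\hat{\bf z}^{\prime}=\hat{\bf x}^{\prime}\cdot\hat{\bf y}^{\prime}=0,\]

so that \({\bf p}_{i}\cdot{\bf Q}=p_{i}Q\cos\theta_{i}\) in the rotated frame, as used below.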
The coordinates of \({\bf p}_{i}\), \(i=1,2\) in the old and new frames are denoted as \((p_{ix},p_{iy},p_{iz})\) and \((p^{\prime}_{ix},p^{\prime}_{iy},p^{\prime}_{iz})\) respectively. Assuming in the rotated frame \({\bf p}_{i}=p_{i}(\sin\theta_{i}\cos\phi_{i}\,\hat{\bf x}^{\prime}+\sin\theta_{i}\sin\phi_{i}\,\hat{\bf y}^{\prime}+\cos\theta_{i}\,\hat{\bf z}^{\prime})\) for \(i=1,2\), we then have \({\bf p}_{1}\cdot{\bf Q}=p_{1}Q\cos\theta_{1}\), \({\bf p}_{2}\cdot{\bf Q}=p_{2}Q\cos\theta_{2}\), \({\bf u}\cdot{\bf Q}=uQ\cos\alpha\). The coordinates \(p_{i\alpha}\), \(i=1,2\) in the old frame may be expressed as

\[\left\{\begin{array}{ll}p_{i,x}=&p_{i}(\cos\phi_{i}\sin\theta_{i}\cos\alpha\cos\beta-\sin\theta_{i}\sin\phi_{i}\sin\beta+\cos\theta_{i}\sin\alpha\cos\beta),\\ p_{i,y}=&p_{i}(\cos\phi_{i}\sin\theta_{i}\cos\alpha\sin\beta+\sin\theta_{i}\sin\phi_{i}\cos\beta+\cos\theta_{i}\sin\alpha\sin\beta),\\ p_{i,z}=&p_{i}(-\sin\theta_{i}\cos\phi_{i}\sin\alpha+\cos\theta_{i}\cos\alpha).\end{array}\right. \tag{14}\]

From the \(\delta\) function in \(I^{s}\), one gets \(p_{i}=\frac{\epsilon}{v+u\hat{z}_{i}}\), \(i=1,2\), where

\[\hat{z}_{i}=p_{iz}/p_{i}=-\sin\theta_{i}\cos\phi_{i}\sin\alpha+\cos\theta_{i}\cos\alpha. \tag{15}\]

Applying the \(\delta\) function in \(I^{s}\) to eliminate \(\epsilon\) in favor of \({\bf p}_{1}\) and \({\bf p}_{2}\) in \(D(F)\), we get

\[\frac{1}{D(F)}=[(v^{2}-u^{2}\cos^{2}\alpha)Q^{2}-2v^{2}p_{1}Q(\cos\theta_{1}+\frac{u}{v}\cos\alpha)-\frac{i}{\tau}(vp_{1}+uQ\cos\alpha+v\delta_{3}|{\bf p}_{1}-{\bf Q}|)]^{-1}\times[(v^{2}-u^{2}\cos^{2}\alpha)Q^{2}+2v^{2}p_{2}Q(\cos\theta_{2}+\frac{u}{v}\cos\alpha)+\frac{i}{\tau}(vp_{2}-uQ\cos\alpha+v\delta_{4}|{\bf p}_{2}+{\bf Q}|)]^{-1}. \tag{16}\]

The AHE due to the \(X\) and \(\Psi\) diagrams is finite only when \({\bf u}\) is non-zero. In this work, we only compute the AHE in the tilted Weyl metals in the leading order of \({\bf u}\) for simplicity. For this reason, we expand \(1/D(F)\) in powers of \(u/v\) and keep only the terms up to linear order in \(u\). We then get

\[\frac{1}{D(F)}\approx[v^{2}Q^{2}-2v^{2}{\bf p}_{1}\cdot{\bf Q}-\frac{i}{\tau}vp_{1}-2vp_{1}{\bf u}\cdot{\bf Q}]^{-1}[v^{2}Q^{2}+2v^{2}{\bf p}_{2}\cdot{\bf Q}+\frac{i}{\tau}vp_{2}+2vp_{2}{\bf u}\cdot{\bf Q}]^{-1}\]
\[\approx\frac{1+\frac{u}{v}\hat{z}_{1}}{v^{2}Q^{2}-2v\epsilon Q(\cos\theta_{1}+\frac{u}{v}\cos\alpha)-\frac{i}{\tau}\epsilon+vuQ^{2}\hat{z}_{1}}\,\frac{1+\frac{u}{v}\hat{z}_{2}}{v^{2}Q^{2}+2v\epsilon Q(\cos\theta_{2}+\frac{u}{v}\cos\alpha)+\frac{i}{\tau}\epsilon+vuQ^{2}\hat{z}_{2}}\]
\[\approx(1+\frac{u}{v}\hat{z}_{1})(1+\frac{u}{v}\hat{z}_{2})\frac{1}{v^{2}Q^{2}-2v\epsilon Q\cos\theta_{1}-\frac{i}{\tau}\epsilon}\Big(1-\frac{vuQ^{2}\hat{z}_{1}-2\epsilon uQ\cos\alpha}{v^{2}Q^{2}-2v\epsilon Q\cos\theta_{1}-\frac{i}{\tau}\epsilon}\Big)\times\frac{1}{v^{2}Q^{2}+2v\epsilon Q\cos\theta_{2}+\frac{i}{\tau}\epsilon}\Big(1-\frac{vuQ^{2}\hat{z}_{2}+2\epsilon uQ\cos\alpha}{v^{2}Q^{2}+2v\epsilon Q\cos\theta_{2}+\frac{i}{\tau}\epsilon}\Big), \tag{17}\]

where we have neglected the linear order \(u\) terms \(\sim iuQ\cos\alpha/\tau\) and \(i\delta_{3}/\tau\), \(i\delta_{4}/\tau\), since they contain an extra small factor \(1/\tau\).
Putting \(I^{s}\) and \(F^{a}\) together and neglecting the vertex correction at the two ends of the \(X\) diagram for the moment, we get the anti-symmetric part of the response function \(\Pi^{X}_{\alpha\beta}\) as

\[\Pi^{X,a}_{\alpha\beta}=\pi^{2}e^{2}\gamma^{2}\omega\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}\int_{0}^{\infty}\frac{dp_{1}}{8\pi^{3}}p_{1}^{2}\int_{0}^{\infty}\frac{dp_{2}}{8\pi^{3}}p_{2}^{2}\int_{0}^{\infty}\frac{dQ}{8\pi^{3}}Q^{2}\int_{0}^{\pi}\sin\theta_{1}d\theta_{1}\int_{0}^{\pi}\sin\theta_{2}d\theta_{2}\int_{0}^{\pi}\sin\alpha\,d\alpha\int_{0}^{2\pi}d\phi_{1}\int_{0}^{2\pi}d\phi_{2}\int_{0}^{2\pi}d\beta\,\frac{4iv^{2}p_{1\alpha}p_{2\beta}(p_{1}+p_{2})({\bf p}_{1}\times{\bf p}_{2})\cdot{\bf Q}}{p_{1}^{2}\,p_{2}^{2}\,D(F)}\prod_{i=1,2}\tau_{i}^{+}\delta(\epsilon-{\bf u}\cdot{\bf p}_{i}-vp_{i}). \tag{18}\]

The scalar factor \(({\bf p}_{1}\times{\bf p}_{2})\cdot{\bf Q}=p_{1}p_{2}Q\sin\theta_{1}\sin\theta_{2}\sin(\phi_{2}-\phi_{1})\) in the rotated frame, and \(\tau_{i}^{+}=\tau/(1-\frac{u}{v}\hat{z}_{i})\), \(i=1,2\). For the integrand in Eq. (18), only the factor \(p_{1\alpha}p_{2\beta}\) includes the angle \(\beta\), and one can easily integrate out this angle. For \({\bf u}\) in the \(z\) direction, if the electric field \({\bf E}\) is also in the \(z\) direction, \(\Pi^{X,a}_{\alpha z}=0\) after the integration over the angle \(\beta\) for \(\alpha\neq z\). For this reason, we only need to consider the case when \({\bf E}\) is perpendicular to \({\bf u}\). Assuming the electric field \({\bf E}\) in the \(y\) direction, \(\Pi^{X,a}_{zy}\) vanishes upon the integration over \(\beta\). We then only need to compute the non-vanishing component \(\Pi^{X,a}_{xy}\). Since

\[\int_{0}^{2\pi}d\beta\,p_{1x}p_{2y}=\pi p_{1}p_{2}[\cos\alpha\sin\theta_{1}\sin\theta_{2}\sin(\phi_{2}-\phi_{1})+\sin\alpha(\cos\theta_{1}\sin\theta_{2}\sin\phi_{2}-\cos\theta_{2}\sin\theta_{1}\sin\phi_{1})], \tag{19}\]

the response function \(\Pi^{X,a}_{xy}\) for the \(X\) diagram becomes

\[\Pi^{X,a}_{xy}=e^{2}v^{2}\gamma^{2}\tau^{2}\omega\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}\frac{1}{(2\pi)^{9}}4iv^{2}\pi^{3}\times\int_{0}^{\infty}Q^{2}dQ\int_{0}^{\pi}\sin\alpha\,d\alpha\int_{0}^{\pi}\sin\theta_{1}d\theta_{1}\int_{0}^{\pi}\sin\theta_{2}d\theta_{2}\int_{0}^{2\pi}d\phi_{1}\int_{0}^{2\pi}d\phi_{2}\ Q\sin\theta_{1}\sin\theta_{2}\sin(\phi_{2}-\phi_{1})\times[\cos\alpha\sin\theta_{1}\sin\theta_{2}\sin(\phi_{2}-\phi_{1})+\sin\alpha(\cos\theta_{1}\sin\theta_{2}\sin\phi_{2}-\cos\theta_{2}\sin\theta_{1}\sin\phi_{1})]\times\frac{v}{v-u\hat{z}_{1}}\frac{v}{v-u\hat{z}_{2}}\Big(\frac{\epsilon}{v+u\hat{z}_{1}}+\frac{\epsilon}{v+u\hat{z}_{2}}\Big)\frac{\epsilon^{2}}{(v+u\hat{z}_{1})^{3}}\frac{\epsilon^{2}}{(v+u\hat{z}_{2})^{3}}\frac{1}{D(F)}. \tag{20}\]

It is easy to check that at \(u=0\), the response function in Eq. (20) vanishes.
Expanding Eq. (20) to linear order in \(u\) and combining with \(1/D(F)\) in Eq. (17), we get

\[\Pi^{X,a}_{xy}(u)=e^{2}v^{2}\gamma^{2}\tau^{2}\omega\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}\times 4iv^{2}\pi^{3}\frac{1}{(2\pi)^{9}}\times\frac{\epsilon^{5}}{v^{7}}\times\int_{0}^{\infty}Q^{2}dQ\int_{0}^{\pi}\sin\alpha\,d\alpha\int_{0}^{\pi}\sin\theta_{1}d\theta_{1}\int_{0}^{\pi}\sin\theta_{2}d\theta_{2}\int_{0}^{2\pi}d\phi_{1}\int_{0}^{2\pi}d\phi_{2}\times Q\sin\theta_{1}\sin\theta_{2}\sin(\phi_{2}-\phi_{1})\times[\cos\alpha\sin\theta_{1}\sin\theta_{2}\sin(\phi_{2}-\phi_{1})+\sin\alpha(\cos\theta_{1}\sin\theta_{2}\sin\phi_{2}-\cos\theta_{2}\sin\theta_{1}\sin\phi_{1})]\times\frac{1}{v^{2}Q^{2}-2v\epsilon Q\cos\theta_{1}-\frac{i}{\tau}\epsilon}\,\frac{1}{v^{2}Q^{2}+2v\epsilon Q\cos\theta_{2}+\frac{i}{\tau}\epsilon}\times[-3\frac{u}{v}(\hat{z}_{1}+\hat{z}_{2})-2\frac{vuQ^{2}\hat{z}_{1}-2\epsilon uQ\cos\alpha}{v^{2}Q^{2}-2v\epsilon Q\cos\theta_{1}-\frac{i}{\tau}\epsilon}-2\frac{vuQ^{2}\hat{z}_{2}+2\epsilon uQ\cos\alpha}{v^{2}Q^{2}+2v\epsilon Q\cos\theta_{2}+\frac{i}{\tau}\epsilon}]. \tag{21}\]

The angular integrations over \(\phi_{1}\), \(\phi_{2}\) and \(\alpha\) can be easily done in the above equation, and the contributions from the terms with the factors \(\hat{z}_{1}\) and \(\hat{z}_{2}\) vanish after these angular integrations. The response function for the \(X\) diagram after the integration over \(\phi_{1}\), \(\phi_{2}\), \(\alpha\) and \(\epsilon\) becomes

\[\Pi^{X,a}_{xy}(u)=-\frac{\omega}{12\pi^{3}}e^{2}v^{3}\epsilon_{F}^{2}u\times\int_{0}^{\infty}Q^{4}dQ\int_{0}^{\pi}\sin\theta_{1}d\theta_{1}\int_{0}^{\pi}\sin\theta_{2}d\theta_{2}\sin^{2}\theta_{1}\sin^{2}\theta_{2}\frac{1}{v^{2}Q^{2}-2v\epsilon_{F}Q\cos\theta_{1}-\frac{i}{\tau}\epsilon_{F}}\times[\frac{1}{v^{2}Q^{2}-2v\epsilon_{F}Q\cos\theta_{1}-\frac{i}{\tau}\epsilon_{F}}-\frac{1}{v^{2}Q^{2}+2v\epsilon_{F}Q\cos\theta_{2}+\frac{i}{\tau}\epsilon_{F}}]\frac{1}{v^{2}Q^{2}+2v\epsilon_{F}Q\cos\theta_{2}+\frac{i}{\tau}\epsilon_{F}}=-i\omega\frac{u}{6\pi^{3}}e^{2}v^{3}\epsilon_{F}^{2}\times{\rm Im}\int_{0}^{\infty}dQ\,S^{X}(Q), \tag{22}\]

where

\[S^{X}(Q)\equiv Q^{4}\int_{0}^{\pi}\sin\theta_{1}d\theta_{1}\int_{0}^{\pi}\sin\theta_{2}d\theta_{2}\sin^{2}\theta_{1}\sin^{2}\theta_{2}\times\frac{1}{(v^{2}Q^{2}+2v\epsilon_{F}Q\cos\theta_{1}-\frac{i}{\tau}\epsilon_{F})^{2}}\times\frac{1}{v^{2}Q^{2}+2v\epsilon_{F}Q\cos\theta_{2}+\frac{i}{\tau}\epsilon_{F}}. \tag{23}\]

The anomalous Hall conductivity from the \(X\) diagram is \(\sigma^{X}_{xy}=\Pi^{X,a}_{xy}/i\omega\), which is then completely real and dissipationless. The integration in Eq. (22) can be done by a change of variables \(x=\cos\theta_{1}\), \(y=\cos\theta_{2}\), as shown in the Appendix. In the leading order of \(1/\tau\) we get

\[I^{X}\equiv{\rm Im}\int_{0}^{\infty}dQ\ S^{X}(Q)\approx\frac{\pi}{\epsilon_{F}v^{5}}. \tag{24}\]

The leading order response function without vertex correction for the \(X\) diagram from a single valley is

\[\Pi^{X,a}_{xy}(u)\approx-i\omega\frac{e^{2}\epsilon_{F}u}{6\pi^{2}v^{2}}. \tag{25}\]

The vertex correction adds a factor of \(9/4\) in the leading order of \(u/v\) to the response function \(\Pi^{X}_{xy}(u)\). For the tilted Weyl metals with two valleys, the response function doubles.
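Collecting these factors amounts to multiplying the single-valley result of Eq. (25) by \(\tilde{\alpha}^{2}\times 2=9/2\), i.e.,

\[\sigma^{X}_{xy}=\frac{\Pi^{X,a}_{xy}(u)}{i\omega}\times\frac{9}{4}\times 2=-\frac{e^{2}\epsilon_{F}u}{6\pi^{2}v^{2}}\times\frac{9}{2}.\]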
We then obtain the leading order anomalous Hall conductivity of the tilted Weyl metals due to this diagram as

\[\sigma^{X}_{xy}\approx-\frac{3e^{2}\epsilon_{F}u}{4\pi^{2}v^{2}}, \tag{26}\]

which is of the same order as the leading order contribution from the NCA diagram, \(\sigma^{\rm NCA}_{xy}=\frac{4e^{2}\epsilon_{F}u}{3\pi^{2}v^{2}}\), but with opposite sign.

### AHE due to the \(\Psi\) diagram

In this subsection, we study the AHE in tilted Weyl metals due to the \(\Psi\) diagram. To do this, we compute the anti-symmetric part of the response function \(\Pi^{\Psi}_{\alpha\beta}\) in Eq. (8). For the matrices \({\cal D}\), \(I\) and \(M\) in Eq. (8), the symmetric parts scale as \({\cal D}_{0}\sim\tau^{0}\), \(I^{s}\sim\tau\), \(M^{s}\sim\tau^{0}\), and the anti-symmetric parts as \({\cal D}^{a}\sim\tau^{-1}\), \(I^{a}\sim\tau^{0}\), \(M^{a}\sim\tau^{0}\). In the leading order of \(1/\epsilon_{F}\tau\), the anti-symmetric part of \(\Pi^{\Psi}_{\alpha\beta}\) is then

\[\Pi^{\Psi,a}_{\alpha\beta}=e^{2}\gamma^{2}\omega\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}\sum_{{\bf p}_{1},{\bf p}_{2},{\bf Q}}{\cal D}_{0,\alpha\gamma}I^{s}_{\gamma\mu}({\bf p}_{1})M^{a}_{\mu\nu}({\bf Q}-{\bf p}_{2},{\bf Q}-{\bf p}_{1})I^{s}_{\nu\eta}({\bf p}_{2}){\cal D}^{T}_{0,\eta\beta}, \tag{27}\]

where \({\bf Q}\equiv{\bf p}_{1}+{\bf p}_{4}={\bf p}_{2}+{\bf p}_{3}\), \(I^{s}\) is given in Eq. (12) and \(M^{a}_{\mu\nu}\) is the anti-symmetric part of

\[M_{\mu\nu}\equiv{\rm Tr}[\sigma_{\mu}\sigma_{\nu}G^{A}({\bf p}_{4})G^{A}({\bf p}_{3})]+{\rm Tr}[\sigma_{\nu}\sigma_{\mu}G^{R}({\bf p}_{3})G^{R}({\bf p}_{4})]. \tag{28}\]

We denote the \(G^{A}G^{A}\) term in Eq. (28) as \(M^{A}\) and the \(G^{R}G^{R}\) term as \(M^{R}\). Since \(M^{R}=(M^{A})^{*}\), the \(M\) matrix is \(M=2\,{\rm Re}\,M^{A}\). The anti-symmetric part of the \(M^{A}\) matrix in the leading order of \(1/\tau\) is \(M^{A,a}_{\mu\nu}=N_{\mu\nu}(M^{A,a})/D(M^{A})\), where

\[N_{\mu\nu}(M^{A,a})=2iv\epsilon^{0\mu\nu k}\{[2(\epsilon-{\bf u}\cdot{\bf Q})+{\bf u}\cdot({\bf p}_{1}+{\bf p}_{2})]Q_{k}-[\epsilon-{\bf u}\cdot({\bf Q}-{\bf p}_{2})]p_{1k}-[\epsilon-{\bf u}\cdot({\bf Q}-{\bf p}_{1})]p_{2k}\}-2v^{2}[(Q-p_{1})_{\mu}(Q-p_{2})_{\nu}-(Q-p_{1})_{\nu}(Q-p_{2})_{\mu}], \tag{29}\]

\[D(M^{A})=[\epsilon-{\bf u}\cdot({\bf Q}-{\bf p}_{1})-v|{\bf Q}-{\bf p}_{1}|-\frac{i}{2\tau_{4}^{+}}][\epsilon-{\bf u}\cdot({\bf Q}-{\bf p}_{1})+v|{\bf Q}-{\bf p}_{1}|-\frac{i}{2\tau_{4}^{-}}]\times[\epsilon-{\bf u}\cdot({\bf Q}-{\bf p}_{2})-v|{\bf Q}-{\bf p}_{2}|-\frac{i}{2\tau_{3}^{+}}][\epsilon-{\bf u}\cdot({\bf Q}-{\bf p}_{2})+v|{\bf Q}-{\bf p}_{2}|-\frac{i}{2\tau_{3}^{-}}]. \tag{30}\]

In the above equation, \(\frac{1}{\tau_{i}^{\pm}}=\frac{1}{\tau}(1\pm\delta_{i})\), \(\delta_{i}=\frac{{\bf p}_{i}\cdot\mathbf{\Delta}}{p_{i}}\) as defined before.
For \({\bf u}\) in the \(z\) direction and \({\bf Q}=Q(\sin\alpha\cos\beta,\sin\alpha\sin\beta,\cos\alpha)\), after the same rotation of the \(z\)-axis to the direction of \({\bf Q}\) as for the \(X\) diagram, and applying the \(\delta\) function in \(I^{s}\), we obtain

\[D(M^{A})\approx[v^{2}Q^{2}-2v^{2}{\bf p}_{1}\cdot{\bf Q}+\frac{i}{\tau}vp_{1}-2vp_{1}{\bf u}\cdot(2{\bf p}_{1}-{\bf Q})][v^{2}Q^{2}-2v^{2}{\bf p}_{2}\cdot{\bf Q}+\frac{i}{\tau}vp_{2}-2vp_{2}{\bf u}\cdot(2{\bf p}_{2}-{\bf Q})], \tag{31}\]

where we have neglected the second order \({\bf u}\) terms as well as the \({\bf u}/\tau\) terms. Putting \(I^{s}\) and \(M^{a}\) together and neglecting the vertex correction at the two ends of the \(\Psi\) diagram for the moment, we get the anti-symmetric part of the response function for the \(\Psi\) diagram as

\[\Pi^{\Psi,a}_{\alpha\beta}=\pi^{2}e^{2}\gamma^{2}\omega\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}\sum_{{\bf p}_{1},{\bf p}_{2},{\bf Q}}\left[\frac{2iv\epsilon^{0\mu\nu k}p_{1,\alpha}p_{1,\mu}p_{2,\nu}p_{2,\beta}[2(\epsilon-{\bf u}\cdot{\bf Q})+{\bf u}\cdot({\bf p}_{1}+{\bf p}_{2})]Q_{k}}{p_{1}^{2}\,p_{2}^{2}\,D(M^{A})}+{\rm c.c.}\right]\prod_{i=1,2}\tau_{i}^{+}\delta(\epsilon-{\bf u}\cdot{\bf p}_{i}-vp_{i})\]
\[=e^{2}v^{2}\gamma^{2}\omega\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}\times 2iv\pi^{2}\frac{1}{(2\pi)^{9}}\int_{0}^{\infty}Q^{2}dQ\int_{0}^{\pi}\sin\alpha\,d\alpha\int_{0}^{2\pi}d\beta\,p_{1\alpha}p_{2\beta}\times\int_{0}^{\infty}dp_{1}\int_{0}^{\infty}dp_{2}\int_{0}^{\pi}\sin\theta_{1}d\theta_{1}\int_{0}^{\pi}\sin\theta_{2}d\theta_{2}\int_{0}^{2\pi}d\phi_{1}\int_{0}^{2\pi}d\phi_{2}\ \tau_{1}^{+}\tau_{2}^{+}({\bf p}_{1}\times{\bf p}_{2})\cdot{\bf Q}\times[2\epsilon+{\bf u}\cdot({\bf p}_{1}+{\bf p}_{2}-2{\bf Q})]\times\delta(\epsilon-vp_{1}-{\bf u}\cdot{\bf p}_{1})\delta(\epsilon-vp_{2}-{\bf u}\cdot{\bf p}_{2})\times[\frac{1}{D(M^{A})}-{\rm c.c.}]. \tag{32}\]

At \({\bf u}=0\), the response function \(\Pi^{\Psi,a}_{\alpha\beta}\) vanishes. We then expand \(\Pi^{\Psi,a}_{\alpha\beta}\) to linear order in \({\bf u}\) and neglect the higher order contributions. Similar to the \(X\) diagram, for \({\bf u}\) in the \(z\) direction and \({\bf E}\) in the \(y\) direction, \(\Pi^{\Psi,a}_{zy}=0\) and we only need to consider \(\Pi^{\Psi,a}_{xy}\) for the AHE.
Keeping only the linear order of \(u\) and integrating out the angle \(\beta\), we get

\[\Pi^{\Psi,a}_{xy}(u)=e^{2}v^{2}\gamma^{2}\tau^{2}\omega\int\frac{d\epsilon}{2\pi i}\frac{dn_{F}(\epsilon)}{d\epsilon}4iv\pi^{3}\frac{1}{(2\pi)^{9}}\frac{\epsilon^{5}}{v^{6}}\times\int_{0}^{\infty}Q^{2}dQ\int_{0}^{\pi}\sin\alpha\,d\alpha\int_{0}^{\pi}\sin\theta_{1}d\theta_{1}\int_{0}^{\pi}\sin\theta_{2}d\theta_{2}\int_{0}^{2\pi}d\phi_{1}\int_{0}^{2\pi}d\phi_{2}\ Q\sin\theta_{1}\sin\theta_{2}\sin(\phi_{2}-\phi_{1})\times[\cos\alpha\sin\theta_{1}\sin\theta_{2}\sin(\phi_{2}-\phi_{1})+\sin\alpha(\cos\theta_{1}\sin\theta_{2}\sin\phi_{2}-\cos\theta_{2}\sin\theta_{1}\sin\phi_{1})]\times\{\frac{1}{v^{2}Q^{2}-2v\epsilon Q\cos\theta_{1}+\frac{i}{\tau}\epsilon}\frac{1}{v^{2}Q^{2}-2v\epsilon Q\cos\theta_{2}+\frac{i}{\tau}\epsilon}[-\frac{u}{\epsilon}Q\cos\alpha+\frac{u}{2v}(\hat{z}_{1}+\hat{z}_{2})-\frac{2uv\hat{z}_{1}(Q^{2}-\frac{\epsilon}{v}Q\cos\theta_{1}-2\frac{\epsilon^{2}}{v^{2}})+2\epsilon uQ\cos\alpha}{v^{2}Q^{2}-2\epsilon vQ\cos\theta_{1}+\frac{i}{\tau}\epsilon}-\frac{2uv\hat{z}_{2}(Q^{2}-\frac{\epsilon}{v}Q\cos\theta_{2}-2\frac{\epsilon^{2}}{v^{2}})+2\epsilon uQ\cos\alpha}{v^{2}Q^{2}-2\epsilon vQ\cos\theta_{2}+\frac{i}{\tau}\epsilon}]-{\rm c.c.}\}. \tag{33}\]

After the integration over \(\phi_{1}\), \(\phi_{2}\) and \(\alpha\), we get the anti-symmetric part of \(\Pi^{\Psi}_{xy}\) as

\[\Pi^{\Psi,a}_{xy}(u)=-\frac{\omega}{12\pi^{3}}e^{2}v^{3}\epsilon_{F}^{2}u\int_{0}^{\infty}Q^{4}dQ\int_{0}^{\pi}\sin\theta_{1}d\theta_{1}\int_{0}^{\pi}\sin\theta_{2}d\theta_{2}\sin^{2}\theta_{1}\sin^{2}\theta_{2}\times\frac{1}{2}\{\frac{1}{v^{2}Q^{2}-2v\epsilon_{F}Q\cos\theta_{1}+\frac{i}{\tau}\epsilon_{F}}\frac{1}{v^{2}Q^{2}-2v\epsilon_{F}Q\cos\theta_{2}+\frac{i}{\tau}\epsilon_{F}}\times[-\frac{1}{2\epsilon_{F}^{2}}-(\frac{1}{v^{2}Q^{2}-2v\epsilon_{F}Q\cos\theta_{1}+\frac{i}{\tau}\epsilon_{F}}+\frac{1}{v^{2}Q^{2}-2v\epsilon_{F}Q\cos\theta_{2}+\frac{i}{\tau}\epsilon_{F}})]-{\rm c.c.}\}=-i\omega\frac{u}{6\pi^{3}}e^{2}v^{3}\epsilon_{F}^{2}\times{\rm Im}\int_{0}^{\infty}dQ\,S^{\Psi}(Q), \tag{34}\]

for which we separate \(S^{\Psi}(Q)\) into two parts as

\[S^{\Psi}(Q)=S^{\Psi,1}(Q)+S^{\Psi,2}(Q),\]
\[S^{\Psi,1}(Q)=-\frac{Q^{4}}{4\epsilon_{F}^{2}}\times\int_{0}^{\pi}\sin^{3}\theta_{1}d\theta_{1}\int_{0}^{\pi}\sin^{3}\theta_{2}d\theta_{2}\frac{1}{v^{2}Q^{2}-2v\epsilon_{F}Q\cos\theta_{1}+\frac{i}{\tau}\epsilon_{F}}\times\frac{1}{v^{2}Q^{2}-2v\epsilon_{F}Q\cos\theta_{2}+\frac{i}{\tau}\epsilon_{F}}, \tag{35}\]
\[S^{\Psi,2}(Q)=-Q^{4}\times\int_{0}^{\pi}\sin^{3}\theta_{1}d\theta_{1}\int_{0}^{\pi}\sin^{3}\theta_{2}d\theta_{2}\frac{1}{(v^{2}Q^{2}-2v\epsilon_{F}Q\cos\theta_{1}+\frac{i}{\tau}\epsilon_{F})^{2}}\times\frac{1}{v^{2}Q^{2}-2v\epsilon_{F}Q\cos\theta_{2}+\frac{i}{\tau}\epsilon_{F}}. \tag{36}\]

As shown in the Appendix, in the leading order of \(1/\tau\) the integrations over \(S^{\Psi,1}(Q)\) and \(S^{\Psi,2}(Q)\) give

\[I^{\Psi,1}\equiv{\rm Im}\int_{0}^{\infty}dQ\ S^{\Psi,1}(Q)\approx\frac{17+16\ln 2}{105}\frac{\pi}{\epsilon_{F}v^{5}}, \tag{37}\]

and

\[I^{\Psi,2}\equiv{\rm Im}\int_{0}^{\infty}dQ\ S^{\Psi,2}(Q)\approx\frac{1+8\ln 2}{15}\frac{\pi}{\epsilon_{F}v^{5}}. \tag{38}\]

The antisymmetric part of the response function for the \(\Psi\) diagram for a single valley without vertex correction is

\[\Pi^{\Psi,a}_{xy}(u)=\Pi^{\Psi,1,a}_{xy}(u)+\Pi^{\Psi,2,a}_{xy}(u)\approx-\frac{4+12\ln 2}{105}\,i\omega\,\frac{e^{2}\epsilon_{F}u}{\pi^{2}v^{2}}. \tag{39}\]
Adding the vertex correction factor \(9/4\) in the leading order of \(u\), and taking into account the two valleys of the tilted Weyl metals, we get the total anomalous Hall conductivity due to the \(\Psi\) diagram in the leading order of \(u\) as \[\sigma^{\Psi}_{xy}\approx-\frac{6+18\ln 2}{35}\frac{e^{2}\epsilon_{F}u}{\pi^{2}v^{2}}\approx-0.53\frac{e^{2}\epsilon_{F}u}{\pi^{2}v^{2}}. \tag{40}\] The contribution from the \(\Psi\) diagram is also of the same order as the contribution from the NCA diagram \(\sigma^{\text{NCA}}_{xy}=\frac{4e^{2}\epsilon_{F}u}{3\pi^{2}v^{2}}\), but with opposite sign. This is different from the 2D massive Dirac model, for which the anomalous Hall conductivity from the \(\Psi\) diagram vanishes for Gaussian disorder [11]. ### Comparison between the crossed and non-crossing diagrams The total contributions of the \(X\) and \(\Psi\) diagrams to the AHE for the tilted Weyl metals with two valleys are \[\sigma_{xy}^{X+\Psi}=\sigma_{xy}^{X}+\sigma_{xy}^{\Psi}\approx-1.28\frac{e^{2}\epsilon_{F}u}{\pi^{2}v^{2}}. \tag{41}\] As a comparison, we plot the different contributions to the anomalous Hall conductivity of the tilted Weyl metals due to both the non-crossing and crossed diagrams in Fig. 2. The anomalous Hall conductivity from the non-crossing diagram was obtained in our previous work [23] and includes three different mechanisms: intrinsic, side jump and skew scattering. The total anomalous Hall conductivity from the non-crossing diagram in the leading order of \(u/v\) is \[\sigma_{xy}^{\rm NCA}\approx\frac{4e^{2}\epsilon_{F}u}{3\pi^{2}v^{2}}. \tag{42}\] This contribution includes the intrinsic part \(\sigma_{int}^{I}=\frac{e^{2}\epsilon_{F}u}{3\pi^{2}v^{2}}\) from the Fermi surface and \(\sigma_{int}^{II}=-\frac{e^{2}\epsilon_{F}u}{6\pi^{2}v^{2}}\) from the Fermi sea. The remaining part is the extrinsic contribution due to impurity scatterings, including the side jump and skew scattering contributions, as shown in Fig. 2. From Eq.(41) and Eq.(42), we see that the inclusion of the \(X\) and \(\Psi\) diagrams cancels most of the contribution from the non-crossing diagram in the leading order of \(u/v\), as shown in Fig. 2. This is similar to the case of the 2D massive Dirac model at large energy with Gaussian disorder. However, for the 2D massive Dirac model, the inclusion of the \(X\) and \(\Psi\) diagrams changes the dependence of the total anomalous Hall conductivity on the energy from \(\sigma_{xy}^{\rm NCA}\sim m/\epsilon_{F}\) to \(\sigma_{xy}^{\rm total}\sim(m/\epsilon_{F})^{3}\), which greatly reduces the total anomalous Hall conductivity in the metallic regime \(m/\epsilon_{F}\ll 1\). For tilted Weyl metals, in contrast, the contributions of the \(X\) and \(\Psi\) diagrams have the same dependence on the Fermi energy as the non-crossing diagram, and the cancellation is due to the opposite signs but close values of the coefficients of the two contributions. ## III. Discussions The expansion of the response functions of the \(X\) and \(\Psi\) diagrams to the second order of \({\bf u}\) reveals that the contributions to the AHE from these diagrams vanish in the second order of \({\bf u}\) for the tilted Weyl metals. The next leading order correction to the AHE we obtained for the \(X\) and \(\Psi\) diagrams is then \(\sim(u/v)^{3}\). The same is true for the contribution to the AHE from the non-crossing diagram [23].
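The coefficient algebra in Eqs. (37)-(41) can be cross-checked symbolically. The following minimal sketch (our own consistency check, assuming sympy is available; it uses only the closed forms quoted above) verifies that Eqs. (37)-(38) combine into the coefficient of Eq. (39) and, after the vertex factor \(9/4\) and the two valleys, into Eq. (40):

```python
# Cross-check of the Psi-diagram coefficients, Eqs. (37)-(40).
import sympy as sp

ln2 = sp.log(2)
I_psi1 = (17 + 16 * ln2) / 105   # Eq. (37), in units of pi/(eps_F v^5)
I_psi2 = (1 + 8 * ln2) / 15      # Eq. (38), same units

# The prefactor of Eq. (34) maps Im(integral) to the coefficient of
# -i*omega*e^2*eps_F*u/(pi^2 v^2), i.e. a division by 6, giving Eq. (39):
coeff_39 = sp.simplify((I_psi1 + I_psi2) / 6)
print(sp.simplify(coeff_39 * 105))        # -> 12*log(2) + 4, matching Eq. (39)

# Vertex correction 9/4 and the two valleys (factor 2) give Eq. (40):
coeff_40 = sp.simplify(coeff_39 * sp.Rational(9, 4) * 2)
print(coeff_40, float(coeff_40))          # -> (18*log(2) + 6)/35 ~ 0.528
```

Subtracting this from the total in Eq. (41) implies \(\sigma^{X}_{xy}\approx-0.75\,e^{2}\epsilon_{F}u/(\pi^{2}v^{2})\) for the \(X\) diagram in the same units.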
For the type-I Weyl metals with not very large \(u/v\), the anomalous Hall conductivity in the linear order of \(u\) we obtained in this work is then accurate enough. The contributions to the AHE from the \(X\) and \(\Psi\) diagrams do not depend on the disorder strength and scattering rate, and have the same dependence on the Fermi energy and the tilting of the Weyl metals as the NCA diagram in the leading order. This makes it hard to distinguish the contributions from the two types of diagrams in experiments. However, the AHE from the crossed diagrams originates from the skew scatterings over pairs of closely located impurities with distances of the order of the electron Fermi wavelength [9; 10; 11; 12]. This is very demanding on the disorder condition. To validate the self-average over the impurities in the response function, every sub-system of the size of the phase coherence length \(l_{\varphi}\) (which is much smaller than the sample size) needs to contain at least one pair of such rare impurities. At not very low temperatures, the phase coherence length \(l_{\varphi}\) is very short. This indicates that the density of the rare impurity pairs needs to be very high to observe the AHE from the crossed diagrams. Considering the random distribution of the disorder, it requires a much higher density of the disorder to observe the AHE from the crossed diagrams than from the non-crossing diagram. This might be the reason why the anomalous Hall conductivity obtained in experiments on the quasi-2D Dirac material Fe\({}_{3}\)Sn\({}_{2}\) is very close to the theoretical result including only the disorder effects from the NCA diagram [22; 28].

Figure 2: The different contributions to the anomalous Hall conductivity from the non-crossing and crossed diagrams for the tilted Weyl metals. The Fermi surface contributions from the non-crossing diagram in the plot come from Ref. [23] and are exact, whereas the results for the crossed diagrams are kept in the linear order of \(u/v\). The intrinsic contribution from the Fermi sea \(\sigma_{H}^{\rm II}\) comes from Ref. [20]. The black solid line represents the total Fermi surface contribution from the non-crossing diagram, and the blue solid line represents the total contribution from both the Fermi surface and Fermi sea, including both non-crossing and crossed diagrams.

For the recently studied type-I Weyl metal Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) in experiments [24, 25], the topological Chern-Simons term [19, 23, 27] gives an extra anomalous Hall conductivity \(\sigma_{H}\sim\frac{e^{2}}{2\pi^{2}}\mathcal{K}\) which is proportional to the distance \(\mathcal{K}\) between the two Weyl nodes. This contribution is independent of the impurity scatterings and constitutes part of the intrinsic AHE. For Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\), the AHE from the Chern-Simons term is one order of magnitude greater than both the contribution from the non-crossing diagram and the crossed diagrams of the low energy effective Hamiltonian [22], so it dominates the total AHE in this system. This makes it hard to distinguish the contribution of the disorder in experiments, either due to the NCA diagram or the crossed diagrams. In Ref. [24], all the AHE measured in Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) was attributed to the intrinsic one. In Ref. [25], the authors measured the AHE in both the clean and dirty samples, but the difference of the anomalous Hall conductivity in the two samples is only about 10% of the clean case.
To better observe the effects of the disorder and the interplay of the non-crossing and crossed diagrams in experiments, one may increase the Fermi level of Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) by doping so as to enhance the weight of the contribution to the AHE due to both the non-crossing and crossed diagrams, since the anomalous Hall conductivity due to the Chern-Simons term does not depend on the Fermi energy. Another way to observe the interplay between the non-crossing and crossed diagrams in Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) in experiments is by the measurement of the anomalous Nernst effect (ANE) [25, 29, 30] in such a system. The ANE only comes from the scatterings on the Fermi surface, so the Chern-Simons term has no contribution to the ANE. The ANE is proportional to the Fermi surface contribution of the AHE, i.e., \(\sigma_{H}^{I}\) we studied in Ref. [23] and this work, with the ratio \(\sim k_{B}T/\epsilon_{F}\) [22]. By measuring the ANE in different disorder conditions, one can tell whether and when the crossed diagrams play a role in both the ANE and AHE in the system. Indeed, in Ref. [25], the ANE in the disordered sample is about three times that of the clean sample, which makes the effects of the disorder much more discernible in the ANE than in the AHE. The large enhancement of the ANE in this experiment seems to indicate that the NCA diagram dominates the contribution in the measured disordered sample, according to our calculation in Ref. [23] for the Gaussian disorder. However, the real system may include more complicated disorder which may affect the AHE and ANE significantly. For example, it was shown in Ref. [11] that for the 2D massive Dirac model with smooth disorder, the anomalous Hall conductivity is enhanced by the \(X\) and \(\Psi\) diagrams instead of being canceled as in the case of Gaussian disorder. A more complete theoretical study including the smooth disorder is then needed to tell whether the crossed diagrams play a role in such experiments. We will leave this for a future study since the study of the tilted Weyl metals with smooth disorder is more complicated than the 2D massive Dirac model due to the increased dimensionality. On the other hand, to observe the AHE or ANE due to the crossed diagrams, more experiments with varying disorder conditions may also need to be carried out in the future. ## IV. Summary To sum up, we studied the AHE due to the crossed \(X\) and \(\Psi\) diagrams in type-I Weyl metals with Gaussian disorder. We show that, similar to the 2D massive Dirac model, the contributions from the crossed diagrams cancel a major part of the contribution from the non-crossing diagram of the low energy effective Hamiltonian. However, the condition to observe the effect of the crossed diagrams is more demanding than for the non-crossing diagram. Moreover, since the AHE from the Chern-Simons term is much greater than the AHE due to the impurity scatterings in the real type-I Weyl metal Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\), it is hard to distinguish the AHE due to the impurity scatterings in this system. To observe the contributions or interplay of the non-crossing and crossed diagrams in this system, one can measure the ANE instead of the AHE, because the Chern-Simons term has no contribution to the ANE and the disorder induced ANE is proportional to that of the AHE. ## Acknowledgement This work is supported by the National Natural Science Foundation of China under Grant No. 11974166.
## Appendix In this appendix, we show the details of the integration in Eq.(24) and Eqs.(37)-(38) for the \(X\) and \(\Psi\) diagrams. We first present the integration \[I^{X} = \mbox{Im}\int_{0}^{\infty}dQ\,S^{X}(Q)\] \[= \mbox{Im}\int_{0}^{\infty}Q^{4}dQ\ \int_{0}^{\pi}\sin^{3}\theta_{1}d\theta_{1}\int_{0}^{\pi}\sin^{3}\theta_{2}d\theta_{2}\frac{1}{(v^{2}Q^{2}+2v\epsilon_{F}Q\cos\theta_{1}-\frac{i}{\tau}\epsilon_{F})^{2}}\frac{1}{v^{2}Q^{2}+2v\epsilon_{F}Q\cos\theta_{2}+\frac{i}{\tau}\epsilon_{F}}.\] By a change of the variables \(x=\cos\theta_{1},y=\cos\theta_{2}\) and denoting \(\epsilon_{F}\) as \(\epsilon\) for brevity, we get \[I^{X} = \int_{0}^{\infty}Q^{4}dQ\int_{-1}^{1}dx\int_{-1}^{1}dy(1-x^{2})(1-y^{2})\] \[\times\{-\frac{\frac{\epsilon}{\tau}}{(v^{2}Q^{2}+2v\epsilon Qy)^{2}+\frac{\epsilon^{2}}{\tau^{2}}}\frac{(v^{2}Q^{2}+2v\epsilon Qx)^{2}-\frac{\epsilon^{2}}{\tau^{2}}}{[(v^{2}Q^{2}+2v\epsilon Qx)^{2}+\frac{\epsilon^{2}}{\tau^{2}}]^{2}}+\frac{2\frac{\epsilon}{\tau}(v^{2}Q^{2}+2v\epsilon Qy)}{(v^{2}Q^{2}+2v\epsilon Qy)^{2}+\frac{\epsilon^{2}}{\tau^{2}}}\frac{v^{2}Q^{2}+2v\epsilon Qx}{[(v^{2}Q^{2}+2v\epsilon Qx)^{2}+\frac{\epsilon^{2}}{\tau^{2}}]^{2}}\}.\] We denote the first and second terms in the above equation as \(I^{X,1}\) and \(I^{X,2}\) respectively and calculate them separately in the following. With the variable substitution \(v^{2}Q^{2}+2v\epsilon Qy=t,\ v^{2}Q^{2}+2v\epsilon Qx=s\) and the relationship \[\frac{\frac{\epsilon}{\tau}}{(v^{2}Q^{2}+2v\epsilon Qy)^{2}+\frac{\epsilon^{2}}{\tau^{2}}}\approx\pi\delta(v^{2}Q^{2}+2v\epsilon Qy), \tag{44}\] we get \[I^{X,1} = -\pi\int_{0}^{\infty}\frac{Q^{4}}{(2\epsilon vQ)^{2}}dQ\int_{v^{2}Q^{2}-2v\epsilon Q}^{v^{2}Q^{2}+2v\epsilon Q}ds[1-(\frac{s-v^{2}Q^{2}}{2v\epsilon Q})^{2}]\frac{s^{2}-\frac{\epsilon^{2}}{\tau^{2}}}{(s^{2}+\frac{\epsilon^{2}}{\tau^{2}})^{2}}\int_{v^{2}Q^{2}-2v\epsilon Q}^{v^{2}Q^{2}+2v\epsilon Q}dt[1-(\frac{t-v^{2}Q^{2}}{2v\epsilon Q})^{2}]\delta(t), \tag{45}\] \[I^{X,2} = \frac{2\epsilon}{\tau}\int_{0}^{\infty}\frac{Q^{4}}{(2\epsilon vQ)^{2}}dQ\int_{v^{2}Q^{2}-2v\epsilon Q}^{v^{2}Q^{2}+2v\epsilon Q}ds[1-(\frac{s-v^{2}Q^{2}}{2v\epsilon Q})^{2}]\frac{s}{(s^{2}+c^{2})^{2}}\int_{v^{2}Q^{2}-2v\epsilon Q}^{v^{2}Q^{2}+2v\epsilon Q}dt[1-(\frac{t-v^{2}Q^{2}}{2v\epsilon Q})^{2}]\frac{t}{t^{2}+c^{2}}.\] We first do the integration of \(I^{X,1}\). The integration over \(s\) and \(t\) for \(I^{X,1}\) in Eq.(45) can be carried out separately at first. To get a non-vanishing integration over the \(\delta(t)\) factor in Eq.(45), \(Q\) must be limited to \(0<Q<2\epsilon/v\).
Denoting \(c\equiv\epsilon/\tau\) and integrating out \(s\) and \(t\), \(I^{X,1}\) becomes \[I^{X,1} = \int_{0}^{2\epsilon/v}dQ\frac{-\pi Q^{4}}{(2\epsilon vQ)^{2}}[1-\frac{v^{2}Q^{2}}{4\epsilon^{2}}]\] \[\times\{-(1-\frac{v^{2}Q^{2}}{4\epsilon^{2}})\frac{s}{s^{2}+c^{2}}+\frac{1}{2\epsilon^{2}}[c\pi\delta(s)+\frac{1}{2}\ln(s^{2}+c^{2})]-\frac{1}{4\epsilon^{2}v^{2}Q^{2}}[s+cs\delta(s)-2c\arctan(\frac{s}{c})]\}\Big{|}_{s=v^{2}Q^{2}-2v\epsilon Q}^{s=v^{2}Q^{2}+2v\epsilon Q}.\] Neglecting the terms small in \(1/\tau\), we get \[I^{X,1} = \int_{0}^{2\epsilon/v}dQ\frac{-\pi Q^{4}}{(2\epsilon vQ)^{2}}[1-\frac{v^{2}Q^{2}}{4\epsilon^{2}}]\times\{-(1-\frac{v^{2}Q^{2}}{4\epsilon^{2}})\frac{s}{s^{2}+c^{2}}+\frac{1}{4\epsilon^{2}}\ln(s^{2}+c^{2})-\frac{1}{4\epsilon^{2}v^{2}Q^{2}}s\}\Big{|}_{s=v^{2}Q^{2}-2v\epsilon Q}^{s=v^{2}Q^{2}+2v\epsilon Q} \tag{48}\] \[= \frac{\pi}{2\epsilon v^{5}}[1-\frac{1}{15}(1+8\ln 2)].\] Similarly, after the integration over \(s\) and \(t\), \(I^{X,2}\) becomes \[I^{X,2} = 2\int_{0}^{\infty}\frac{Q^{4}}{(2\epsilon vQ)^{2}}dQ\;[(1-\frac{v^{2}Q^{2}}{4\epsilon^{2}})\frac{1}{2}\ln(t^{2}+c^{2})+\frac{1}{2\epsilon^{2}}t-\frac{1}{8\epsilon^{2}v^{2}Q^{2}}t^{2}]\bigg{|}_{t=v^{2}Q^{2}-2v\epsilon Q}^{t=v^{2}Q^{2}+2v\epsilon Q} \tag{49}\] \[\times\left.[-(1-\frac{v^{2}Q^{2}}{4\epsilon^{2}})\frac{\pi}{2}\delta(s)+\frac{1}{4\epsilon^{2}}\arctan(\frac{s}{c})]\right|_{s=v^{2}Q^{2}-2v\epsilon Q}^{s=v^{2}Q^{2}+2v\epsilon Q},\] where we have omitted the terms proportional to \(c\) or \(s\delta(s)\). The integration over the terms with \(\delta(s)\) in Eq.(49) is zero because at \(s=0\), \(Q=0\) or \(1-\frac{v^{2}Q^{2}}{4\epsilon^{2}}=0\), and the terms with \(\delta(s)\) in the integrand become zero. For this reason, we only need to consider the terms with \(\frac{1}{4\epsilon^{2}}\arctan(\frac{s}{c})\) after the integration over \(s\). Since \(c\) is small, \(\arctan(\frac{s}{c})\big{|}_{s=v^{2}Q^{2}-2v\epsilon Q}^{s=v^{2}Q^{2}+2v\epsilon Q}\) is non-zero only when \(v^{2}Q^{2}-2v\epsilon Q<0<v^{2}Q^{2}+2v\epsilon Q\), i.e., \(0<Q<2\epsilon/v\). In this regime, \[\arctan(\frac{s}{c})\Big{|}_{s=v^{2}Q^{2}-2v\epsilon Q}^{s=v^{2}Q^{2}+2v\epsilon Q}\approx\pi, \tag{50}\] \[I^{X,2} = \frac{\pi}{2\epsilon^{2}}\int_{0}^{2\epsilon/v}\frac{Q^{4}}{(2\epsilon vQ)^{2}}dQ\times[\frac{1}{2}(1-\frac{v^{2}Q^{2}}{4\epsilon^{2}})\ln\frac{(v^{2}Q^{2}+2v\epsilon Q)^{2}+c^{2}}{(v^{2}Q^{2}-2v\epsilon Q)^{2}+c^{2}}+\frac{vQ}{\epsilon}]\] \[= \frac{\pi}{2\epsilon v^{5}}[1+\frac{1}{15}(1+8\ln 2)].\] Adding \(I^{X,1}\) and \(I^{X,2}\) together, we get \(I^{X}=\pi/(\epsilon_{F}v^{5})\) as in Eq.(24). We next compute \(I^{\Psi}\equiv\mathrm{Im}\int_{0}^{\infty}dQ\,S^{\Psi}(Q)\) for the \(\Psi\) diagram. As shown in the main text, we divide \(I^{\Psi}\) into two parts \(I^{\Psi,1}\) and \(I^{\Psi,2}\) and compute them separately.
From Eq.(35), we get \[I^{\Psi,1} = \mathrm{Im}\int_{0}^{\infty}dQ\,S^{\Psi,1}(Q) \tag{51}\] \[= \int_{0}^{\infty}\frac{Q^{4}}{4\epsilon^{2}}dQ\int_{-1}^{1}dx\int_{-1}^{1}dy(1-x^{2})(1-y^{2})\] \[\times[\frac{\frac{\epsilon}{\tau}}{(v^{2}Q^{2}-2v\epsilon Qy)^{2}+\frac{\epsilon^{2}}{\tau^{2}}}\frac{v^{2}Q^{2}-2v\epsilon Qx}{(v^{2}Q^{2}-2v\epsilon Qx)^{2}+\frac{\epsilon^{2}}{\tau^{2}}}+\frac{\frac{\epsilon}{\tau}}{(v^{2}Q^{2}-2v\epsilon Qx)^{2}+\frac{\epsilon^{2}}{\tau^{2}}}\frac{v^{2}Q^{2}-2v\epsilon Qy}{(v^{2}Q^{2}-2v\epsilon Qy)^{2}+\frac{\epsilon^{2}}{\tau^{2}}}]\] \[= \frac{\pi}{2\epsilon^{2}}\int_{0}^{\infty}\frac{Q^{4}}{(2\epsilon vQ)^{2}}dQ\int_{v^{2}Q^{2}-2v\epsilon Q}^{v^{2}Q^{2}+2v\epsilon Q}ds[1-(\frac{s-v^{2}Q^{2}}{2v\epsilon Q})^{2}]\frac{s}{s^{2}+\frac{\epsilon^{2}}{\tau^{2}}}\int_{v^{2}Q^{2}-2v\epsilon Q}^{v^{2}Q^{2}+2v\epsilon Q}dt[1-(\frac{t-v^{2}Q^{2}}{2v\epsilon Q})^{2}]\delta(t)\] \[= \frac{\pi}{2\epsilon^{2}}\int_{0}^{2\epsilon/v}dQ\frac{Q^{4}}{(2\epsilon vQ)^{2}}(1-\frac{v^{2}Q^{2}}{4\epsilon^{2}})[(1-\frac{v^{2}Q^{2}}{4\epsilon^{2}})\frac{1}{2}\ln\frac{(v^{2}Q^{2}+2v\epsilon Q)^{2}+c^{2}}{(v^{2}Q^{2}-2v\epsilon Q)^{2}+c^{2}}+\frac{vQ}{\epsilon}]\] \[= \frac{\pi}{\epsilon v^{5}}\frac{17+16\ln 2}{105}.\] Similarly, we get \[I^{\Psi,2} = -\int_{0}^{\infty}Q^{4}dQ\times\mathrm{Im}\int_{-1}^{1}dx\int_{-1}^{1}dy(1-x^{2})(1-y^{2})[\frac{1}{(v^{2}Q^{2}-2v\epsilon Qx+\frac{i}{\tau}\epsilon)^{2}}\times\frac{1}{v^{2}Q^{2}-2v\epsilon Qy+\frac{i}{\tau}\epsilon}] \tag{52}\] \[= \int_{0}^{\infty}Q^{4}dQ\int_{-1}^{1}dx\int_{-1}^{1}dy(1-x^{2})(1-y^{2})\] \[\times[\frac{\frac{\epsilon}{\tau}}{(v^{2}Q^{2}-2v\epsilon Qy)^{2}+\frac{\epsilon^{2}}{\tau^{2}}}\frac{(v^{2}Q^{2}-2v\epsilon Qx)^{2}-\frac{\epsilon^{2}}{\tau^{2}}}{[(v^{2}Q^{2}-2v\epsilon Qx)^{2}+\frac{\epsilon^{2}}{\tau^{2}}]^{2}}+\frac{2\frac{\epsilon}{\tau}(v^{2}Q^{2}-2v\epsilon Qy)}{(v^{2}Q^{2}-2v\epsilon Qy)^{2}+\frac{\epsilon^{2}}{\tau^{2}}}\frac{v^{2}Q^{2}-2v\epsilon Qx}{[(v^{2}Q^{2}-2v\epsilon Qx)^{2}+\frac{\epsilon^{2}}{\tau^{2}}]^{2}}]\] \[= -I^{X,1}+I^{X,2}\] \[= \frac{\pi}{\epsilon v^{5}}\frac{1}{15}(1+8\ln 2).\]
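The appendix results can be combined symbolically as a final check. A minimal sketch (our own check, assuming sympy, with all integrals expressed in units of \(\pi/(\epsilon_{F}v^{5})\)):

```python
# Consistency of the appendix integrals in units of pi/(eps_F v^5).
import sympy as sp

k = (1 + 8 * sp.log(2)) / 15   # the recurring coefficient (1 + 8 ln 2)/15
I_X1 = (1 - k) / 2             # I^{X,1}, Eq. (48)
I_X2 = (1 + k) / 2             # I^{X,2}, closed form below Eq. (50)

print(sp.simplify(I_X1 + I_X2))        # -> 1, i.e. I^X = pi/(eps_F v^5), Eq. (24)
print(sp.simplify(-I_X1 + I_X2 - k))   # -> 0, reproducing I^{Psi,2} in Eq. (52)
```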
2310.05911
Energy Management in a Cooperative Energy Harvesting Wireless Sensor Network
In this paper, we consider the problem of finding an optimal energy management policy for a network of sensor nodes capable of harvesting their own energy and sharing it with other nodes in the network. We formulate this problem in the discounted cost Markov decision process framework and obtain good energy-sharing policies using the Deep Deterministic Policy Gradient (DDPG) algorithm. Earlier works have attempted to obtain the optimal energy allocation policy for a single sensor and for multiple sensors arranged on a mote with a single centralized energy buffer. Our algorithms, on the other hand, provide optimal policies for a distributed network of sensors individually harvesting energy and capable of sharing energy amongst themselves. Through simulations, we illustrate that the policies obtained by our DDPG algorithm using this enhanced network model outperform algorithms that do not share energy or use a centralized energy buffer in the distributed multi-nodal case.
Arghyadeep Barat, Prabuchandran. K. J, Shalabh Bhatnagar
2023-10-09T17:57:59Z
http://arxiv.org/abs/2310.05911v1
# Energy Management in a Cooperative Energy Harvesting Wireless Sensor Network ###### Abstract In this paper, we consider the problem of finding an optimal energy management policy for a network of sensor nodes capable of harvesting their own energy and sharing it with other nodes in the network. We formulate this problem in the discounted cost Markov decision process framework and obtain good energy-sharing policies using the Deep Deterministic Policy Gradient (DDPG) algorithm. Earlier works have attempted to obtain the optimal energy allocation policy for a single sensor and for multiple sensors arranged on a mote with a single centralized energy buffer. Our algorithms, on the other hand, provide optimal policies for a distributed network of sensors individually harvesting energy and capable of sharing energy amongst themselves. Through simulations, we illustrate that the policies obtained by our DDPG algorithm using this enhanced network model outperform algorithms that do not share energy or use a centralized energy buffer in the distributed multi-nodal case. Energy Management Policies, Energy Harvesting Wireless Sensor Networks, Cooperative Wireless Sensor Networks, Deep Deterministic Policy Gradient (DDPG) algorithm. ## I Introduction Energy harvesting wireless sensor networks (EHWSNs) are rapidly overshadowing regular WSNs in modern-day IoT applications, for surveillance as well as for monitoring physical and environmental conditions such as temperature, humidity, air pressure, and noise level [1]. Sensor nodes require a continuous supply of energy to detect signals and transmit them. Conventional nodes are battery-operated and hence have a finite lifetime which depends on the individual workload. A large enough number of inactive nodes leaves the network inoperable. EHWSNs, on the other hand, harvest natural energy such as solar, thermal, wind, or vibrational energy; given an optimized energy management policy, this energy source can serve as a sustainable, practically limitless power supply for the sensor nodes, making their lifetime effectively infinite; see [2] and [3]. Such an energy management policy can be further utilized to distribute energy in microgrids for optimized reallocation of power based on varying rates of energy production and consumption in different centers. Recent articles such as [4, 5, 6] discuss efficient energy harvesting and utilization mechanisms that help make EHWSNs a viable alternative. Recent developments in the field of Simultaneous Wireless Information and Power Transfer (SWIPT) have materialized the possibility of developing cooperative sensor networks as well. Technologies are being developed whereby very high efficiency in SWIPT energy transfer, even up to 90% [11], is achieved in the 2.4 GHz frequency range [7, 8, 9, 10], the highest operating frequency of WSNs as per the IEEE 802.15.4 standard. Therefore, SWIPT has been viably utilized as an available energy-sharing mechanism in a variety of systems such as Distributed Antenna Systems (DAS) [12, 13, 14], IoT [15, 16, 17, 18], WSNs [19, 20, 21, 22] and mobile edge computing [23]. Further, articles such as [24, 25] support the usage of cooperative WSNs, and [26, 27, 28, 29, 30] validate the use of SWIPT technology in wireless communication networks and other such distributed systems. Additionally, [31, 32, 33, 34, 35] refer to the usage of SWIPT for sharing energy to and amongst WSNs and find ways to do so more efficiently.
Earlier work [36] for a single sensor node with finite data and energy buffers utilized the Q-Learning and Speedy Q-Learning Reinforcement Learning (RL) algorithms to optimize the node's performance and optimally manage the energy available. While this provides an optimal Energy Management Policy (EMP) for sensing and transmission in a single node, it is clearly suboptimal in the case of a network of sensors. The primary reason behind this is that nodes in the network with low data influx might overflow with energy while others with high data rates might starve, leading to very high packet loss. Therefore, the harvested energy would not be utilized optimally. In [37], a Q-Learning-based algorithm has been proposed for a mote where the energy is harvested and stored in a single central energy buffer and is then distributed among multiple sensors placed on the mote. The model is therefore trained to distribute energy from the central energy buffer based on the individual data queue length of each node. The main reason that [37] fails to scale up to decentralized energy harvesting is that it uses a hand-crafted value function primarily designed to reduce the growth of the state space on a mote. In our work, we utilize RL algorithms that can automatically learn features using neural networks and further share energy, as shown in [38], between sensors, i.e., from nodes with a high influx of energy to lower-energy nodes, leading to more complete utilization of the harvested energy and a cooperative approach that reduces the loss of data packets as well as the average latency in transmission. ### _Our Contributions_

* We propose a model for Energy Harvesting Wireless Sensor Networks (EHWSNs) that has the ability to share energy amongst sensor nodes for obtaining efficient Energy Management Policies (EMPs).
* We formulate an infinite horizon \(\alpha\)-discounted cost minimization problem in the Markov Decision Process (MDP) framework using an appropriate single-stage cost function.
* We solve the MDP for finding an \(\alpha\)-discount optimal EMP using the Deep Deterministic Policy Gradient algorithm because of its ability to handle large state and action spaces, thus making our solution scalable.

Fig. 1: A) Model for an individual sensor node in a sensor network. B) Model for a sensor network consisting of multiple sensor nodes capable of sharing energy.

## II Model and Notation In this section, we describe the model of the energy harvesting sensor network used in our paper. We have considered a discrete time-slotted model for a network consisting of \(N\) sensor nodes. We assume the individual sensor nodes have a finite data buffer and a finite energy buffer. The finite buffer assumption is realistic for small-scale sensor nodes. Each of the nodes in the network has its own energy harvesting mechanism from which the energy is stored directly into its own energy buffer. The information regarding the individual data queue and energy levels from all the sensors is sent to a central controller at the end of each slot. The controller determines the allocation of each node's energy for transmission and sharing with every other node, and notifies each of the nodes accordingly at the beginning of the next slot. The description of a single sensor node and the network of sensors is depicted in Fig. 1 (A) and (B) respectively. In Fig. 1 (A), the sensor detects a random field and generates corresponding data packets for transmission to the central node.
We have assumed a discretized data buffer based on the fact that individual data packets must be transmitted at once at the beginning of a slot, avoiding any fractional data packets. The energy buffer, however, is assumed to be continuous in order to allow maximum efficiency and flexibility for the model to transmit the data as well as share its energy with other sensor nodes. The data and energy buffers have finite capacities, denoted by \(D_{max}\) (packets or bits) and \(E_{max}\) (units of energy) respectively. In a time slot \(k\), the sensor \(i\) captures a random field and generates \(X_{k}^{i}\) units of data. At the same time, \(Y_{k}^{i}\) units of energy are produced by the energy harvesting mechanism and stored in the energy buffer. The cumulative state information of the network before the beginning of slot \(k\), i.e., the queue length of the data buffer \(q_{k-1}^{i}\) and the energy available \(E_{k-1}^{i}\), denoted by \((q_{k-1}^{i},E_{k-1}^{i})\ \forall i\in\{1,\ldots,N\}\), is sent to the central controller at the end of the \((k-1)\)st slot. The controller then determines the energy allocation for the next step: \(T_{k}^{i}\), the amount of energy to be used for transmission of the data packets of node \(i\), and \(A_{k}^{ij}\ \forall j\neq i\), the amount of energy to be shared by node \(i\) with node \(j\) at timestep \(k\). Note that any amount of energy that node \(i\) receives at timestep \(k\) from other nodes will be completely used for transmission in the same slot. The conversion function \(g(\cdot)\) determines the number of bits that can be transmitted, i.e., if \(E\) amount of energy is used, then \(g(E)\) bits of data can be transmitted. Therefore, the state variables for \(i\in\{1,\ldots,N\}\) can be updated as: \[q_{k+1}^{i}=\left[q_{k}^{i}-g\left(T_{k}^{i}+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}A_{k}^{ji}\right)\right]+X_{k}^{i}, \tag{1}\] \[E_{k+1}^{i}=E_{k}^{i}-T_{k}^{i}-\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}A_{k}^{ij}+Y_{k}^{i}. \tag{2}\] The values of \(q_{k+1}^{i}\) and \(E_{k+1}^{i}\) are then bounded to the ranges \(\{0,\ldots,D_{max}\}\) and \([0,E_{max}]\) respectively. In previous literature [39, 40, 41, 42], a logarithmic relation between energy used and data transmitted is assumed based on Shannon's Channel Capacity Theorem. Therefore we have selected the logarithmic conversion function \(g(x)=\log_{2}(1+x)\) to define a simple relation between the energy used and the data transmitted, while keeping a realistic nonlinear relationship between the two variables. Using equations (1) and (2), one can simulate the operation of a network of sensor nodes. We assume (A) below on the data and energy arrivals. 1. \(X_{k}^{i}\) (\(Y_{k}^{i}\)), \(k\geq 1\), is independent of \(\{X_{k-1}^{i},\ldots,X_{0}^{i}\}\) (\(\{Y_{k-1}^{i},\ldots,Y_{0}^{i}\}\)) given \(q_{k}^{i}\) (\(E_{k}^{i}\)), \(T_{k}^{i}\) and \(A_{k}^{ij},\ i,j\in\{1,\ldots,N\}\). Further, \(\{X_{k}^{i}\}\) and \(\{Y_{k}^{i}\}\) are independent of one another for \(k\geq 0\). The sequence \(\{X_{k}^{i}\}_{k\geq 0}\) satisfies \(\sup_{k}E[X_{k}^{i}]\leq r<\infty\ \forall i\in\{1,\ldots,N\}\). Assumption (A) helps establish the Markov property, i.e., all future states of the system are dependent only on the current state and independent of the previous states.
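To make the update concrete, the following minimal NumPy sketch (our own notation, not from the paper) simulates one slot of the network per equations (1) and (2); reading the bracket in equation (1) as clipping the queue at zero is our assumption:

```python
# One simulation step of Eqs. (1)-(2) with finite buffers.
import numpy as np

rng = np.random.default_rng(0)

def g(energy):
    """Energy-to-data conversion g(E) = log2(1 + E), as in the text."""
    return np.log2(1.0 + energy)

def step(q, E, T, A, lam_D, lam_E, D_max=10, E_max=10.0):
    """q, E: length-N queue/energy levels; T: transmission energy per node;
    A: N x N matrix with A[i, j] = energy shared from node i to node j
    (diagonal ignored). Arrivals are Poisson, per the assumptions above.
    The queue is kept real-valued here for simplicity."""
    off = A - np.diag(np.diag(A))
    received = off.sum(axis=0)          # energy received by each node j
    sent = off.sum(axis=1)              # energy given away by each node i
    cleared = g(T + received)           # bits transmitted this slot, Eq. (1)
    q_next = np.maximum(q - cleared, 0.0) + rng.poisson(lam_D)
    q_next = np.clip(q_next, 0, D_max)  # finite data buffer
    E_next = np.clip(E - T - sent + rng.poisson(lam_E), 0.0, E_max)  # Eq. (2)
    return q_next, E_next
```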
We have assumed the random variables corresponding to the amount of data and energy received at each time step to be independent and identically distributed (i.i.d.). Both the data arrivals and energy harvesting are modeled as samples from a Poisson distribution with a predetermined mean, as assumed in [36, 37] and [43]. The mean is a preset constant, assuming that the average rate of energy or data arrival does not change on a small time scale, with all the variability caused by natural noise. In terms of the energy sharing amongst the nodes, for the simulations in this article, we have assumed the network to be distributed in a small radius so that the efficiency of energy sharing is affected negligibly. Our central controller consists of the implementation of an RL algorithm that takes information about the states of every node in the network, i.e., \((q_{k}^{i},E_{k}^{i})\ \forall i\in\{1,\ldots,N\}\), and recommends \((T_{k}^{i},A_{k}^{ij})\ \forall i,j\in\{1,\ldots,N\},i\neq j\). Our RL algorithm is based on the actor-critic model wherein the actor proposes the optimal action or energy allocation strategy. The critic model then evaluates the effectiveness of the action by evaluating a corresponding value function. The details of our deep RL model and the algorithm are provided in Section IV. ## III Energy Management Policy via an MDP A Markov Decision Process (MDP) is a discrete-time stochastic control process where the actions are chosen in each state so as to minimize some predefined long-term cost. In our problem, the queue length and energy level of each node constitute the state variables. The data queue length \(q_{k}^{i}\in\{0,1,\ldots,D_{max}\}\). The energy levels of the nodes are, however, continuous state variables \(E_{k}^{i}\in[0,E_{max}]\ \forall i\in\{1,\ldots,N\},k\geq 0\), where \(N\) is the total number of sensor nodes. For the joint state of the network \(s_{k}=(q_{k}^{i},E_{k}^{i}),\forall i\in\{1,\ldots,N\}\), the action variables are described as \(A_{k}^{ij}\ \forall i,j\in\{1,\ldots,N\}\). The variable \(A_{k}^{ii}\ \forall i\in\{1,\ldots,N\}\) denotes the energy used by the \(i^{th}\) node for transmission of data from its own data queue, whereas \(A_{k}^{ij}\ \forall i,j\in\{1,\ldots,N\},i\neq j\) corresponds to the amount of energy to be shared from node \(i\) to node \(j\) at timestep \(k\). The actions determined must follow the energy constraint \(\sum_{j=1}^{N}A_{k}^{ij}\leq E_{k}^{i}\). This constraint simply enforces that each node can only use a net amount of energy that is bounded by the amount of energy already available for transmission and sharing. A policy \(\pi\) is a sequence of maps \(A_{k}\) from the joint state space to the joint action space such that when the joint state is \(s_{k}=(q_{k}^{1},E_{k}^{1},q_{k}^{2},E_{k}^{2},\ldots,q_{k}^{N},E_{k}^{N})\) at timestep \(k\), \(A_{k}(s_{k})\) specifies the energy allocation or distribution for the transmission and sharing amongst the nodes. By abuse of notation, we denote \(A_{k}(s_{k})\) as \(A_{k}\). Therefore, we can denote the joint stationary policy as \(\pi=\{A,A,\ldots,A\}\). Based on assumption (A) stated in Section II, the joint state \(\{(q_{k}^{i},E_{k}^{i})\}\) satisfies the Markov property for all \(i\in\{1,\ldots,N\}\) and \(\pi\in\Pi\) (the set of all stationary policies).
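Any action emitted by the controller must respect \(\sum_{j=1}^{N}A_{k}^{ij}\leq E_{k}^{i}\). The paper does not prescribe how raw network outputs are mapped onto this feasible set, so the sketch below shows one simple choice (an assumption on our part): per-row rescaling.

```python
# Enforce the per-node energy constraint by rescaling each node's row.
import numpy as np

def enforce_energy_constraint(raw_alloc, E):
    """raw_alloc: nonnegative N x N matrix from the actor; row i holds node
    i's transmission (diagonal) and sharing (off-diagonal) energies. Rows
    are scaled so that node i spends at most its available energy E[i]."""
    totals = raw_alloc.sum(axis=1)
    scale = np.minimum(1.0, E / np.maximum(totals, 1e-12))
    return raw_alloc * scale[:, None]
```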
We define the single-stage cost function as \[c(q_{k}^{1},\ldots,q_{k}^{N},E_{k}^{1},\ldots,E_{k}^{N},A_{k})=\sum_{i=1}^{N}\Phi((q_{k}^{i})^{+}), \tag{3}\] where \((q_{k}^{i})^{+}\) indicates the remaining queue length after the action \(A_{k}^{i}\) has been chosen and \(\Phi\) is any increasing convex function. There are many choices for \(\Phi(\cdot)\), like \(\Phi(x)=x\), \(\Phi(x)=x^{2}\) or \(\Phi(x)=\exp(\alpha x),\alpha\geq 0\). In our experiments, we consider \(\Phi(x)=x^{2}\). A simpler choice for the cost function would be the sum of the queue lengths of each node, in which case \(\Phi(x)=x\). However, such a cost function would minimize the overall sum of queue lengths and would not distinguish between a set of medium queue lengths and a set with highly varying large and small queue lengths. The added benefit of setting the cost function as an increasing convex function, such as the sum of squares of queue lengths, is that along with minimizing the queue lengths, it tries to reduce the difference in the average queue lengths of the individual sensor nodes. Therefore we can define the long-run \(\alpha\)-discounted cost \(w_{\pi}(q_{0},E_{0})\) for a policy \(\pi\) as follows: \[w_{\pi}(q_{0},E_{0})=E_{\pi}\left[\sum_{k=0}^{\infty}\alpha^{k}\sum_{i=1}^{N}((q_{k}^{i})^{+})^{2}\ \Big{|}\ q_{0},E_{0}\right], \tag{4}\] where \(q_{k}\) and \(E_{k}\) denote the collective set of queue lengths and energy levels for each node \(i\ (i\in\{1,\ldots,N\})\) at the \(k^{th}\) step and \(E_{\pi}\left[\cdot\right]\) represents the expectation when actions are selected as per policy \(\pi\). An \(\alpha\)-discounted optimal EMP in this setting minimizes \(w_{\pi}(q_{0},E_{0})\) over all stationary deterministic policies \(\Pi\). Therefore, \[w^{\star}(q_{0},E_{0})=\min_{\pi\in\Pi}w_{\pi}(q_{0},E_{0}). \tag{5}\] The primary benefit of representing this problem as an \(\alpha\)-discounted cost is that a variety of performance objectives can be achieved as per the requirements of the designer through a suitable choice of \(\alpha\). ## IV Reinforcement Learning Algorithms In this section, we describe the RL algorithms that we have utilized to learn the optimal EMPs for a distributed Energy Harvesting Wireless Sensor Network (EHWSN). ### _Deep Q Network (DQN)_ The Deep Q Network or DQN model proposed in [44] is a neural network model based on the Q-Learning algorithm. The neural network (NN) effectively acts as a function approximator of the Q-table used in Q-Learning [45]. Therefore, the weights of the neural network are trained to predict the Q-values of every feasible action associated with a state instead of using the Q-table. Thus, each output value of the DQN model is the Q-value associated with a particular action in that state. The optimal action is determined by finding the action having the minimum predicted Q-value. The update rule for training the model involves gradient descent on the Bellman error loss objective using an NN-based function approximator of the Q-function. The advantage of using the DQN model over Q-Learning is that the number of possible states can be infinite. In our problem, the size of the state space is \((D_{max}\cdot E_{max})^{N}\), where \(D_{max}\) and \(E_{max}\) are the maximum capacities of the data buffer and energy buffer and \(N\) is the number of nodes; even if the data and energy buffers are both considered discretized, the state space increases exponentially with each additional node.
However, since every output node of the DQN corresponds to the Q-value of a unique action, we are still limited by the size of the feasible action space in state \(s\). ### _Deep Deterministic Policy Gradient (DDPG)_ The DDPG algorithm proposed in [46] is a model-free actor-critic-based deep RL algorithm. The model used in our case consists of a pair of actor and critic networks as well as a pair of target actor and critic networks, as shown in Fig. 2. The actor models predict the optimal action whereas the critic models evaluate them. The action suggested by the target actor is implemented. Whenever the actor performs better than the target actor, the latter network is updated. A representation of the DDPG architecture utilized is illustrated in Fig. 2. The biggest advantage of using such a model is the flexibility to operate with both continuous state and action spaces. In our case, although we have taken the data buffer to be quantized, the state space is infinite, since the amount of energy available is a continuous state variable. The actions to be determined are the quantities of energy to be used and shared by each node and hence need to be continuous as well. When the DDPG model starts to learn, the actor network predicts an action to which noise is added in order to explore new actions and evaluate them. The noise magnitude reduces with time as the model converges to the optimal policy. The critic network predicts the Q-value associated with the action proposed by the actor. When the action is implemented in the environment, the transition is stored in the replay buffer in the form of tuples of the current state, action taken, next state, and reward generated by the environment. The critic loss is computed as a function of the observed reward and the predicted reward. Based on the update of the critic model, the actor model is updated. During training, each time the average performance of the actor and critic models surpasses the last recorded best performance, the target actor and critic models are updated with the weights of the trained actor and critic models. ## V Simulation Results The results of our model have been compared to the models given in [36] and [37]. In [36], Q-Learning has been implemented to find the optimal EMP for individual nodes in the network. In [37], a centralized model learns to optimally distribute centrally harvested energy to different nodes in the network utilizing a modified Q-Learning algorithm aided by linear function approximation. However, in order to cope with larger state and action spaces for comparison, we have implemented it with the DQN algorithm. Finally, our model considers a sensor network capable of sharing energy, the management of which is done using a central controller that learns the optimal policy via the DDPG algorithm. _Experimental Setup:_ In order to compare the different models, we have chosen a setup where the network consists of multiple nodes with varying data rates but identical energy rates with \(E[Y_{k}^{i}]=5\). The data and energy arrivals \(X_{k}^{i},Y_{k}^{i}\ \forall k\geq 0\) are i.i.d. sequences and follow Poisson distributions, i.e., \(X_{k}^{i}\sim Poisson(\lambda_{D}^{i})\) and \(Y_{k}^{i}\sim Poisson(\lambda_{E}^{i})\), with \(E[X_{k}^{i}]=\lambda_{D}^{i}\) and \(E[Y_{k}^{i}]=\lambda_{E}^{i}\). We have taken \(D_{max}\) and \(E_{max}\) to be \(10\) each. The conversion function is selected as \(g(x)=\log_{2}(1+x)\), which determines the amount of data transmitted from the amount of energy used.
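For concreteness, a minimal PyTorch sketch of the actor-critic pair is given below. The layer widths (two hidden layers of 2 and 4 units, and \(N^{2}\) actor outputs) follow Section VI; the activations, the Softplus output (to keep allocations nonnegative), and the Gaussian exploration noise are our own assumptions, not specifics from the paper:

```python
# Minimal actor/critic pair for the DDPG controller described above.
import torch
import torch.nn as nn

N = 2                      # number of sensor nodes
state_dim = 2 * N          # (q_i, E_i) for every node
action_dim = N * N         # transmission + sharing energies, A[i, j]

actor = nn.Sequential(
    nn.Linear(state_dim, 2), nn.ReLU(),
    nn.Linear(2, 4), nn.ReLU(),
    nn.Linear(4, action_dim), nn.Softplus(),   # nonnegative allocations
)
critic = nn.Sequential(
    nn.Linear(state_dim + action_dim, 2), nn.ReLU(),
    nn.Linear(2, 4), nn.ReLU(),
    nn.Linear(4, 1),                           # Q-value of (state, action)
)

def act(state, noise_std):
    """Exploratory action: actor output plus decaying Gaussian noise;
    the energy constraint is enforced separately before execution."""
    with torch.no_grad():
        a = actor(state) + noise_std * torch.randn(action_dim)
    return torch.clamp(a, min=0.0)
```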
### _Two Nodes Case_ In order to highlight the advantages of sharing energy, as considered in our model, the average data rate for one node is set at \(E[X_{k}^{1}]=0.5\). The performance of the network is measured using two parameters, the long-run average queue length and the average percentage loss of data packets, measured across different average data arrival rates for the second node. Therefore, \(E[X_{k}^{2}]\) is plotted along the X-axis, and for the two plots in Fig. 3A(i) and (ii), the long-run average queue length and data packet loss are plotted along the Y-axis respectively as performance metrics. In the following figures, the models proposed in [36] and [37] are referred to as the "No Sharing" and the "Centralized" model respectively, whereas our model is referred to as the "Sharing" model.

Fig. 2: Training process of the Deep Deterministic Policy Gradient (DDPG) algorithm.

Fig. 3A(i) illustrates the fact that, compared to the models proposed in [36] and [37], our decentralized model converges to policies having lower average data queue length. Based on Little's Law [47], we can state that the long-term average queue length is equal to the product of the long-term average data arrival rate and the average time delay for transmission. Therefore, we can directly conclude that the policies derived by using our model lead to a lower average time delay in the transmission of the data (being collected) by the network as a whole. Fig. 3A(ii) shows that our model successfully develops an energy split profile for sharing amongst neighboring nodes in order to effectively distribute the varying load of data arrival rates in different nodes. The model presented in [37], although more cost-effective, compromises the accuracy of convergence to the optimal policy, primarily due to the linear function approximation and limitations of the energy distribution mechanism. This leads to a slightly higher average queue length, hence a higher transmission delay, as well as a higher loss of data packets. In clear contrast, our model successfully finds a policy to share energy amongst neighboring nodes in a sensor network to efficiently reduce the transmission delay as well as the loss of data packets beyond the possible margins for nodes operating individually. A correlated equilibrium is established by individual sensor nodes cooperating for the collective gain of the sensor network. Fig. 3B(i) and (ii) demonstrate the variation of the long-run average queue length and percentage loss of data packets for a network consisting of two nodes for mean data arrival rates varying from 0.5 to 4.5 for each node, with an interval of 0.5. Since the expected energy arrival rate has been fixed for each node, a critical data arrival rate can be calculated as \[E[T_{k}]=E\left[g\left(\sum_{i=1}^{N}Y_{k}^{i}\right)\right]. \tag{6}\]

Fig. 3: Comparison with earlier models [36, 37] in terms of Average Queue Length [A(i)] and Average Percentage Loss of Data Packets of Network [A(ii)], and heat-maps of Queue Length [B(i)] and Data Loss Percentage [B(ii)] for the optimal policy learned by the current model. \(X_{k}^{i},Y_{k}^{i}\sim\) Poisson distribution; \(E[Y_{k}^{i}]=5\ \forall i\in\{1,2\}\), with the condition \(E[X_{k}^{1}]=0.5\) assumed for A(i) and A(ii).

Here, \(T_{k}\) is the maximum average rate of data that can be transmitted given the current distribution of \(Y_{k}^{i}\). Hence it is the same as the critical rate, or the maximum average rate at which data arrival can be handled by the arriving energy.
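The critical rate is easy to evaluate numerically. A short sketch (our own check, assuming \(g(x)=\log_{2}(1+x)\) from Section II, so that the total energy arrival of the two-node network is \(Poisson(10)\)):

```python
# Numerical evaluation of Eq. (6) for two nodes with E[Y^i] = 5 each.
import math

lam = 10.0                  # mean of the total energy arrival, Poisson(10)
pmf, E_T = math.exp(-lam), 0.0
for k in range(100):        # the Poisson tail beyond k = 100 is negligible
    E_T += pmf * math.log2(1 + k)
    pmf *= lam / (k + 1)    # recursion p_{k+1} = p_k * lam / (k + 1)
print(round(E_T, 3))        # ~ 3.395, the critical rate quoted below
```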
Therefore, beyond this capacity, the network should become unstable, leading to very high queue lengths as well as a high percentage of packet loss. For a network consisting of two nodes, each receiving energy at a mean rate of 5, the critical rate for the network as a whole comes out to be approximately 3.395. Similarly, in the results corresponding to our model as presented in Fig. 3, we observe a large increase in both metrics beyond the point where \(\sum_{i=1}^{N}E[X_{k}^{i}]\geq E[T_{k}]\). Even then, due to the ability of the model to share energy amongst nodes, it keeps the queue length and packet loss much lower than what is achieved using previously proposed models [36, 37]. The optimal policies derived by our model are such that even when the data arrival rate is about 2.7 times the critical rate, i.e., the maximum rate at which the network is stable, the percentage of data loss is limited to 43%. ### _Scalability_ One of the added benefits of using deep RL algorithms such as DDPG is the possibility of solving for larger state and action spaces, hence allowing for the scalability of our model. We have simulated each of the aforementioned algorithms, i.e., the Q-Learning-based No Sharing model, the DQN-based Centralized model, and our DDPG-based Energy Sharing model. In each case, simulations have been carried out for networks containing multiple nodes, each with \(E[Y_{k}^{i}]=5\), while \(E[X_{k}^{i}]\) is randomly selected in the range of 0 to 4. For a simulation having 10 nodes, the minimum percentage of data loss is given as 43%, 17%, and 11% for each of the models respectively. Therefore, our model is able to handle the same load of data influx with the lowest data loss rate among the three algorithms. Now, for simulations on the same device, the largest possible network size turns out to be 200 nodes, 6 nodes, and 500 nodes for each algorithm respectively. The first model is restricted by the amount of RAM needed to process the Q-table and the space required for storing the entire table. In the second model, using the DQN, every possible combined action has to be considered as an output. For our model, neural networks with just two hidden layers having only 2 and 4 units respectively for the actor and critic networks have been used. Using the same neural architecture, we can reliably solve the optimization problem for even up to 500 nodes. Larger simulations are restricted by RAM. Fig. 4 demonstrates the superior performance of our model compared to the other two models. All the above simulations are done using the software Visual Studio Code \(v1.75.1\) on an Intel \(i5\)-\(10210U\) processor computer with a clock speed of 1.60 GHz. Fig. 4 shows the average percentage of data loss due to the policies learned by the compared algorithms and their variability over multiple runs. The solid line represents the average data loss results obtained over multiple runs on the same device. The dotted lines show the extension of the results with additional computational resources. The results clearly demonstrate that, with the same amount of computational resources, our model is able to optimize to a better result even for a much larger network. In our model, in contrast to the others in comparison, the policies learned are almost equally optimal, with data loss increasing to only 13% in the 500-node network. The average data and energy queue levels maintained at almost every node in the 500-node network were approximately identical to those for the optimized policy in a network of 2 nodes.
Therefore, it can outperform the other methods even for much larger networks. The above results demonstrate the scalability and further optimality of our model in comparison with the other models described above. ## VI Implementation Details The model architecture consists of the DDPG model, which has a pair of training actor and critic networks and a pair of target actor and critic networks. Each actor and critic network has an identical internal structure of just two hidden layers, with two units in the first layer and four units in the second. This makes for a very light model, allowing for scalability. The output layer for the actor network consists of \(N^{2}\) units, where \(N\) is the number of nodes in the network. This is because every node corresponds to \(N\) action variables, since it decides the amount of energy to be used for transmission in order to clear its own data queue and the amount of energy to be shared individually with the other \(N-1\) nodes in the network. The energy left after subtracting this sum from the energy level is the amount stored for future use by the node itself. We have been able to derive our results with such a light architecture because of its ability to generalize a low-level policy learned for energy distribution in smaller WSNs to a much larger scale with equal efficiency. ## VII Conclusion and Future Work We formulated the problem of energy sharing and distribution in Energy Harvesting Wireless Sensor Networks (EHWSNs) as a Markov Decision Process (MDP) and studied an application of the Deep Deterministic Policy Gradient algorithm to find the optimal policy for minimizing the transmission delay for the sensor network. Owing to the energy-sharing capability of the network and the efficient energy usage policy, our model succeeds in minimizing the loss of data packets in case of an overload of individual nodes by rerouting the energy harvested from other nodes. The usage of the DDPG algorithm enables learning the optimal policy without an explicit model of the environment and also improves the joint state and action space handling capacity of the algorithm. The benefits of the proposed model and algorithm have been established via experimental results and simulations that demonstrate significantly lower average queue length, transmission delay, and percentage loss of data in overloaded situations. The results can also be reproduced with limited computational resources at a much larger scale compared to the baseline algorithms.

Fig. 4: Comparison with earlier models [36, 37] in terms of Average Percentage Data Loss for the optimal policy learned by the respective models. \(E[Y_{k}^{i}]=5\ \forall i\in\{1,\ldots,N\}\) (\(N\) = number of sensor nodes in the network); \(E[X_{k}^{i}]\) is randomly selected \(\forall i\in\{1,\ldots,N\}\) such that the range is \([0,4]\) and the mean is \(2\).

The current model becomes computationally inefficient on a larger scale due to the rapidly increasing computational cost as well as the time required for training the model to learn policies over larger state and action spaces. Hence, models can be developed to create a layered structure that classifies the network into clusters to reduce individual computational costs as well as parallelize the learning of optimal policies for different clusters. In the future, we would like to extend the energy distribution protocol so that nodes are classified into smaller clusters and only members of the same cluster can share energy with each other.
This would also help with the decentralization of the control. Furthermore, in future work we would like to incorporate intricacies related to the efficiency of energy sharing, such as losses due to wireless transfer, and the variability of the average data arrival or energy arrival rates.
2308.16299
Conservation of Helium while Maintaining High System Purity
Recent helium shortages and helium price increases have led to an increased emphasis being placed on conserving helium. The need to conserve helium must be balanced with the need to maintain the high levels of purity necessary to prevent operational problems caused by contamination. Helium losses and contamination control are especially important for test stands that have cryogenic distribution systems operating continuously with frequent changeover of cryogenic temperature components that are being tested. This paper describes a mathematical model to estimate the quantity of helium lost and the purity of the helium after the pump and backfill procedure is complete. The process to determine the optimal time during pump down to cut off pumping and start backfilling is described. There is a tradeoff between trying to achieve the lowest possible pressure during pumping and the quantity of air leaking into the volume while pumping is occurring. An additional benefit of careful selection of pump and backfill parameters in conjunction with real-time pressure monitoring is a reduction in the labor and time required to complete a successful pump and backfill procedure. This paper is intended to be a tool for engineers to review their pump and backfill procedures and measured data to optimize helium losses, system purity, and labor required.
M. White, J. Theilacker, M. Barba
2023-08-21T20:39:59Z
http://arxiv.org/abs/2308.16299v1
# Conservation of Helium while Maintaining High System Purity ###### Abstract Recent helium shortages and helium price increases have led to an increased emphasis being placed on conserving helium. The need to conserve helium must be balanced with the need to maintain the high levels of purity necessary to prevent operational problems caused by contamination. Helium losses and contamination control are especially important for test stands that have cryogenic distribution systems operating continuously with frequent changeover of cryogenic temperature components that are being tested. This paper describes a mathematical model to estimate the quantity of helium lost and the purity of the helium after the pump and backfill procedure is complete. The process to determine the optimal time during pump down to cut off pumping and start backfilling is described. There is a tradeoff between trying to achieve the lowest possible pressure during pumping and the quantity of air leaking into the volume while pumping is occurring. An additional benefit of careful selection of pump and backfill parameters in conjunction with real-time pressure monitoring is a reduction in the labor and time required to complete a successful pump and backfill procedure. This paper is intended to be a tool for engineers to review their pump and backfill procedures and measured data to optimize helium losses, system purity, and labor required. ## 1 Introduction Helium is a non-renewable resource and eventually known helium reserves will be depleted. This suggests that helium shortages and price increases will be more severe in the future than they have been in the past. Helium is a byproduct of uranium and thorium decay. Commercially available helium is typically derived from natural gas, since helium released during radioactive decay can be trapped in the same natural formations as natural gas [1]. Reduced fossil fuel extraction to mitigate climate change could also result in a reduced supply of helium. Only a few locations in the United States have natural gas deposits with \(\geq 0.3\%\) helium; such dilute concentrations make separating helium expensive, limiting the number of locations where extracting helium is economically feasible. Helium is an international commodity, and geopolitical conflict can lead to helium shortages and/or price spikes. There are a limited number of helium supply facilities and distributors, so a force majeure closure of a single facility can cause restrictions on the quantities of helium being delivered over a wide area. Finding ways to conserve helium now starts the accrual of financial savings and conserves a finite resource. Reducing helium losses now also mitigates the impact of future helium price increases and/or shortages. One method of reducing helium losses is to carefully evaluate pump & backfill procedures to achieve desired purity levels with the minimum amount of helium loss. Clean-up of a helium cryogenic system is accomplished by vacuum pumping the gas, initially air or nitrogen, out of the system and backfilling with helium. Repeated pump and backfill cycles leave the helium pure enough to connect the volume to a helium system. Unnecessary backfill cycles are a helium loss that can be avoided. A dry nitrogen purge is typically applied prior to pump and backfilling to remove residual water, for two reasons. The first reason is that pumping is not very effective at removing water, since evaporation cools the water and limits the vapor pressure.
The second reason is that the presence of water can limit the ultimate pressure reached during the pumping cycle. At a temperature of 300 K the vapor pressure of water is 35 mbar. If the ultimate pressure is limited to 35 mbar, then additional backfill cycles will be needed and unnecessary helium losses will be incurred. ## 2 Helium Purity Multiple cycles are required to get down to a contamination level acceptable as an input to a LN\({}_{2}\) temperature charcoal adsorber (\(<50\) PPM\({}_{\rm v}\)). More stringent purity requirements may apply if the helium used to sweep the volume after pump and backfill cannot be fully routed to the LN\({}_{2}\) temperature charcoal adsorber. The final purity of an ideal system with no leakage after pump and backfilling can be estimated using equation (1): \[\theta_{Ideal}=\left(\frac{P_{end}}{P_{start}}\right)^{N}\times 10^{6} \tag{1}\] where \(\theta_{Ideal}\) is the contamination concentration in PPM\({}_{\rm v}\) of a system with no leaks, \(P_{start}\) is the starting pressure of the pump cycle, generally atmospheric pressure, \(P_{end}\) is the ending pressure of the pump cycle, and \(N\) is the number of pump and backfill cycles. As shown in Table 1, with an end pressure of 150 mbar it takes 6 cycles and 51.1 m\({}^{3}\) of helium to clean a 10 m\({}^{3}\) volume to less than 50 PPM\({}_{\rm v}\). Reducing the end pressure to 50 mbar decreases the number of cycles to 4 and the helium used to 38 m\({}^{3}\). Reducing the end pressure to 5 mbar further reduces the number of cycles to 2 and the helium used to 19.9 m\({}^{3}\). Tightening the end pressure requirement from 150 mbar to 5 mbar reduces pump and backfill helium usage by 60% while achieving the same purity. Schedule and labor costs also improve due to the reduced overall pump and backfill procedure time required. ## 3 Determining Pump and Backfill Parameters The purpose of this paper is to provide a methodology to quantitatively optimize pump and backfill parameters to achieve the desired purity level with a minimum loss of helium and a minimum time required to complete the procedure. Using the simple formula in equation (1) is insufficient by itself, since real systems have leaks and other limitations on the ultimate pumping pressure. Reviewing cryogenic literature and cryogenic engineering handbooks yielded little useful information on optimizing pump and backfill parameters using calculations and experimental data. Furthermore, different people at different times and locations have selected various pressure and time criteria to stop pumping and various numbers of pump and backfill cycles, even within a single organization such as Fermilab.

\begin{table} \begin{tabular}{l|c c|c c|c c} & \multicolumn{2}{c|}{\(P_{end}=150\) mbar} & \multicolumn{2}{c|}{\(P_{end}=50\) mbar} & \multicolumn{2}{c}{\(P_{end}=5\) mbar} \\ Cycle & Purity & Helium & Purity & Helium & Purity & Helium \\ & PPM\({}_{\rm v}\) & m\({}^{3}\) & PPM\({}_{\rm v}\) & m\({}^{3}\) & PPM\({}_{\rm v}\) & m\({}^{3}\) \\ \cline{2-7} 1 & 148,075 & 8.5 & 49,358 & 9.5 & 4,936 & 10.0 \\ 2 & 21,926 & 17.0 & 2,436 & 19.0 & 24.4 & 19.9 \\ 3 & 3,247 & 25.6 & 120 & 28.5 & 0.1 & 29.9 \\ 4 & 481 & 34.1 & 5.9 & 38.0 & 0.0 & 39.8 \\ 5 & 71.2 & 42.6 & 0.3 & 47.5 & 0.0 & 49.8 \\ 6 & 10.5 & 51.1 & 0.0 & 57.0 & 0.0 & 59.7 \\ \end{tabular} \end{table} Table 1: Expected contamination level after each pump and backfill cycle using equation (1). The pumped volume was arbitrarily selected as 10 m\({}^{3}\).
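Table 1 follows directly from equation (1). A short sketch (assuming \(P_{start}=1013\) mbar and counting the helium used per backfill as \(V(P_{start}-P_{end})/P_{start}\), i.e., the backfilled gas volume at atmospheric conditions):

```python
# Reproduce the purity and helium-usage columns of Table 1.
P_START = 1013.0   # mbar, atmospheric starting pressure (assumed)
VOLUME = 10.0      # m^3, pumped volume as in Table 1

def print_column(p_end, cycles=6):
    for n in range(1, cycles + 1):
        purity_ppmv = (p_end / P_START) ** n * 1e6             # equation (1)
        helium_m3 = n * VOLUME * (P_START - p_end) / P_START   # backfill gas
        print(f"cycle {n}: {purity_ppmv:11.1f} PPMv {helium_m3:5.1f} m^3")

for p_end in (150.0, 50.0, 5.0):
    print(f"--- P_end = {p_end:g} mbar ---")
    print_column(p_end)
```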
The methodology in this paper will help organizations standardize their pump and backfill procedures and reduce the associated helium losses. A critical component necessary for optimizing pump and backfill procedures is the pressure transmitter. The pressure transmitter must be capable of reading accurately with sufficient precision at the desired vacuum level and should be on a periodic calibration schedule. Additionally, transmitter readings should be read through a control system where the reading can be plotted in real time on a semi-log plot. Ideally the transmitter is placed at the opposite end of the pumped volume from the vacuum pump for continuous accurate readings. If it is unavoidable to place the transmitter on the pumping line, then there needs to be an isolation valve between the pump and the transmitter. The drawback is that the isolation valve will need to be periodically closed and enough time must be given for the vacuum pressure to equalize across the pumped volume. Therefore, the procedure will take longer, the additional time at vacuum pressures provides more opportunity for air to leak in, and it becomes more difficult to determine when the pumpdown becomes non-linear on a semi-log plot. The pressure versus time plot on a semi-log scale is ideally a straight line, so having the capability to monitor the pressure transmitter signal in real time is helpful in optimizing when to stop the pumpdown. The pumpdown should be stopped once the pumpdown curve becomes non-linear, since at this point the quantity of air leaking into the system becomes significant relative to the quantity of air or air/helium mixture being pumped out. In addition to air leaks there are three other potential reasons for deviations from a straight line on a semi-log plot. The first is that there is residual water vapor on metal surfaces that is being desorbed. The second is that there is water in the vacuum pump oil degrading its performance. The last reason is that the ultimate pressure of the vacuum pump has been reached. ### Case 1: Perfect System A perfect system uses the assumptions of ideal gas, no pumping resistance, no external leaks into the system, and no residual entrained water vapor. The governing equations are shown in equations (2) through (4) \[\frac{dM}{dt}=\ \dot{m}_{In}-\ \dot{m}_{Out} \tag{2}\] where \(M=\) mass of gas in the system, \(\dot{m}_{In}=\) mass flow rate leaking into the system, \(\dot{m}_{Out}=\) mass flow rate pumping out of the system. The ideal gas law is shown in equation (3) \[M=\frac{PV}{RT} \tag{3}\] where \(P=\) system pressure, \(V=\) system volume, \(R=\) gas constant, and \(T=\) gas temperature. For a system with no leaks \(\dot{m}_{In}=0\). The pumping capacity is calculated per equation (4). \[\dot{m}_{Out}=\ \rho Q=\frac{PQ}{RT} \tag{4}\] where \(\rho=\) system gas density and \(Q=\) vacuum pump volume flow capacity. Combining (2), (3) and (4) results in equation (5) \[\frac{V}{RT}\frac{dP}{dt}=\ -\frac{PQ}{RT} \tag{5}\] Equation (5) is rearranged as an integral in equation (6) and the solution to the integral is shown in equation (7).
Equation (7) is rearranged to solve for \(P_{2}\), which is the pressure at the end of pumpdown, as a function of time in equation (8) \[\int\limits_{P_{1}}^{P_{2}}\frac{dP}{P}=\ -\frac{Q}{V}\int\limits_{0}^{t}dt \tag{6}\] \[\ln\left(\frac{P_{2}}{P_{1}}\right)=\ -\frac{Q}{V}t \tag{7}\] \[P_{2}=P_{1}\ e^{-\frac{Q}{V}t}=P_{1}\ e^{-\frac{t}{\tau}} \tag{8}\] where \(\frac{V}{Q}\) is the time constant, \(\tau\), of the system and \(P_{1}\) is the pressure at the start of pump down. ### Case 2: System with Pumping Resistance The second case is a system that uses the assumptions of ideal gas, pumping resistance, no external leaks into the system, and no residual entrained water vapor. The pumped mass flow rate calculation is shown in equation (9) \[\dot{m}_{Out}=\ \rho_{P}Q=\ \frac{P_{P}Q}{RT} \tag{9}\] where \(\rho_{P}=\) gas density at the vacuum pump inlet, \(P_{P}=\) pressure at the vacuum pump inlet \(=P-\Delta P\), and \(Q=\) vacuum pump volume flow capacity. Combining equations (2), (3) and (9) yields equation (10). \[\frac{V}{RT}\frac{dP}{dt}=\ -\frac{P_{P}Q}{RT}=\ -\frac{(P-\ \Delta P)Q}{RT} \tag{10}\] The pressure drop between the volume and the pump is assumed to be a constant fraction of the volume pressure in order to simplify the integration into an easy-to-use analytical expression. In most cases, users will adjust the \(X\) term in the calculations to match experimental data rather than trying to calculate pumping resistance using equation (11) found in engineering handbooks [2]. \[\Delta P=\ \frac{fL}{D}\ \frac{\rho v^{2}}{2}=\ \frac{fL}{D}\ \frac{Q^{2}}{2A^{2}}\ \frac{P}{RT}=XP \tag{11}\] where \(f=\) Darcy-Weisbach friction factor, \(L=\) effective length of pumping line, \(D=\) inside diameter of the pumping line, \(v=\) flow velocity in the pumping line \(=\frac{Q}{A}\), \(A=\) inside cross-sectional area of the pumping line, and the term \(X=\frac{fL}{D}\ \frac{Q^{2}}{2A^{2}}\ \frac{1}{RT}=\ \frac{\Delta P}{P}\) is used to simplify subsequent formulas. Substituting equation (11) into equation (10) and rearranging as an integral results in equation (12): \[\int\limits_{P_{1}}^{P_{2}}\frac{dP}{P(1-X)}=\ -\frac{Q}{V}\int\limits_{0}^{t}dt \tag{12}\] The solution of the integral in equation (12), assuming the \(X\) term is constant, is shown in equation (13). Equation (13) is rearranged to solve for \(P_{2}\), which is the pressure at the end of pumpdown, as a function of time in equation (14). \[\frac{\ln\left(\frac{P_{2}}{P_{1}}\right)}{1-X}=\ -\frac{Q}{V}t \tag{13}\] \[P_{2}=\ P_{1}\ e^{-\frac{(1-X)Q}{V}t} \tag{14}\] ### Case 3: System with Pumping Resistance and External Leak The third case is a system that uses the assumptions of ideal gas, pumping resistance, external leaks into the system, and no residual entrained water vapor. The flow of atmospheric air into the volume will be modeled as flow through a control valve, characterizing the leak by an effective valve flow coefficient \(C_{v}\). The air leak mass flow rate calculation is shown in equation (15). \[\dot{m}_{In}=\ N_{6}\ C_{v}\ Y\ \sqrt{F_{\gamma}\ X_{T}\ \rho_{A}\ P_{A}} \tag{15}\] where \(N_{6}=\) numerical constant from ISA 75.01.01 or IEC 60534, \(C_{v}=\) equivalent valve C\({}_{v}\) representing the leak, \(Y=\) gas expansion factor from ISA 75.01.01 or IEC 60534, \(\rho_{A}=\) density of atmospheric air, \(P_{A}=\) pressure of atmospheric air, \(F_{\gamma}=\) specific heat ratio factor, \(\gamma/1.4\), which is 1 for air or any diatomic gas, and \(X_{T}=\) ratio of \(\Delta P/P_{in}\) at the choked flow condition.
Note that \(N_{6}^{\prime}=\frac{2.73}{3600\sqrt{1000}}\) for pure SI units, \(\dot{m}_{In}[=]\frac{kg}{s}\), \(\rho[=]\frac{kg}{m^{3}}\), \(P[=]Pa\). The gas expansion factor \(Y\) is a function of downstream pressure until the flow becomes choked and varies from 2/3 to 1. The pump down starts with \(Y\)=1 and decreases until the leak is choked flow where \(Y\)=2/3. The pump down will spend most of the time in the choked flow regime. Since the region of interest is at low volume pressures, it is reasonable to use a constant value of \(Y\)=2/3. If the pressure dependence of \(Y\) is considered, it significantly complicates the integration of the resulting differential equation. Assuming a value of \(X_{T}\) = 0.5 is reasonable for the inefficient leak area geometry and is consistent with ideal gas flow through an orifice without pressure recovery. The factor \(G\) shown in equation (16) is used to simplify subsequent formulas. Note that \(G\) is treated as a constant, although it is not truly constant until the volume pressure is less than or equal to one-half of atmospheric pressure (choked flow). \[G=\ N_{6}^{\prime}\ C_{v}\ Y\ \sqrt{F_{\gamma}\ X_{T}\ \rho_{A}\ P_{A}} \tag{16}\] Combining equations (2), (3), (9), (15) and (16) yields equation (17): \[\frac{V}{RT}\frac{dP}{dt}=\ G\ -\frac{P\ (1-\ X)Q}{RT} \tag{17}\] Rearranging equation (17) yields equation (18), which is then shown in integral form in equation (19). \[\frac{dP}{dt}=\frac{GRT}{V}\ -\frac{P\ (1-\ X)Q}{V} \tag{18}\] \[\int\limits_{P_{1}}^{P_{2}}\frac{dP}{\frac{GRT}{V}\ -\ \frac{P\ (1-\ X)Q}{V}}=\int \limits_{t_{1}}^{t_{2}}dt \tag{19}\] Substituting the expressions \(E=\;GRT/V\) and \(F=\;(1-X)Q/V\) into equation (19) yields equation (20) \[\int_{P_{1}}^{P_{2}}\frac{dP}{E\;-PF}\;=\;\int_{t_{1}}^{t_{2}}dt \tag{20}\] Solving the integral in equation (20) yields equation (21), which can then be rearranged to solve for the end pressure \(P_{2}\) as a function of time as shown in equation (22) \[\ln\left(\frac{FP_{2}-E}{FP_{1}-E}\right)=\;-Ft \tag{21}\] \[P_{2}\;=\;\frac{E\;+\;(FP_{1}-E)\;e^{-\frac{(1-X)Q}{V}t}}{F} \tag{22}\] ### Ultimate Pumping Pressure Any leak into the system means that there is an ultimate pressure at which the pumping flow is equal to the incoming leak. It is not possible to pump below that ultimate pressure without changing the pumping system or repairing the incoming leak. This ultimate pressure can be determined by setting \(\dot{m}_{In}=\;\dot{m}_{out}\) and solving for \(P\) as shown in equations (23) and (24). Conversely, if the ultimate pressure is known after the first pump cycle, the size of the leak can be estimated as a control valve C\({}_{v}\) equivalent. \[\dot{m}_{In}=\;N_{6}^{\prime}\;C_{v}\;Y\;\sqrt{F_{\gamma}\;X_{T}\;\rho_{A}\;P_{A}}\;=\;\dot{m}_{out}=\;\rho(1-X)Q=\;\frac{P(1-X)Q}{RT} \tag{23}\] \[P_{Ultimate}=\;\frac{RT}{(1-X)Q}\;N_{6}^{\prime}\;C_{v}\;Y\;\sqrt{F_{\gamma}\;X_{T}\;\rho_{A}\;P_{A}} \tag{24}\] ### Calculation Results The plot in Figure 1 uses a volume \(V\) of 10 m\({}^{3}\), a pumping capacity \(Q\) of 50 m\({}^{3}\)/hr, assumed \(X\)= \(\Delta P/P\) of 10%, an effective \(C_{v}\) = 0.05, and an ultimate pressure \(P_{ultimate}\) of 13 mbar. As expected, Case 1 and Case 2 result in a straight line on the semi-log plot, with Case 1 having a steeper pump down slope due to the assumption of no pumping resistance. If \(X\) is unknown, then \(X\) can be readily fit to the experimental data at the beginning of the pumpdown.
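The three pumpdown curves are straightforward to generate numerically. The sketch below evaluates equations (8), (14) and (22) with the Figure 1 parameters; note that, at the ultimate pressure, equation (24) implies \(E=F\,P_{Ultimate}\), so the leak term can be folded into the known ultimate pressure rather than carrying the \(C_{v}\) expression explicitly.

```python
import numpy as np

# Sketch of Eqs. (8), (14) and (22) with the Figure 1 parameters.
# The leak term is folded into P_ultimate via Eq. (24): E = F * P_ultimate.
V = 10.0          # m^3, pumped volume
Q = 50.0          # m^3/hr, pump capacity
X = 0.10          # assumed dP/P pumping-resistance fraction
P1 = 1013.0       # mbar, starting (atmospheric) pressure
P_ULT = 13.0      # mbar, ultimate pressure set by the leak

t = np.linspace(0.0, 3.0, 301)                    # hours
case1 = P1 * np.exp(-(Q / V) * t)                 # Eq. (8): no resistance, no leak
case2 = P1 * np.exp(-((1 - X) * Q / V) * t)       # Eq. (14): with resistance
F = (1 - X) * Q / V
case3 = P_ULT + (P1 - P_ULT) * np.exp(-F * t)     # Eq. (22) rewritten via P_ultimate

for hr in (0.5, 1.0, 2.0):
    i = np.searchsorted(t, hr)
    print(f"t={hr:3.1f} hr: case1={case1[i]:8.2f}, "
          f"case2={case2[i]:8.2f}, case3={case3[i]:8.2f} mbar")
```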
If the effective \(C_{v}\) is unknown, then the effective \(C_{v}\) can be readily fit to the experimental data as the pressure approaches the ultimate pressure. The region of most interest for Case 3 is below 100 mbar. The simplifying assumption was made that flow through the leak was always sonic, which is acceptable since flow through the leak is clearly in the sonic range when the volume pressure is less than 100 mbar. By the time the pressure reaches about 35 mbar the vacuum pump operator should be able to see the deviation from a straight line on a semi-log plot. After the first backfill, the operator should stop pumping at this point to keep the remaining helium in the volume as pure as possible. Prior to the first backfill it is also preferable to stop pumping before significant air ingress to keep moisture out of the volume. ### Experimental Results Large volumes and volumes which are frequently pumped and backfilled should be prioritized for optimizing pump and backfill parameters in order to conserve the most helium. An obvious selection for prioritization at Fermilab was the three Vertical Test Stands used for testing bare SRF cavities. Each of these stands typically completes at least one round of pump and backfill cycles per week. Each Vertical Test Stand has two pressure transmitters, one in the range of 0 to 1000 mbar and the other in the range of 0 to 100 mbar, to ensure accurate readings across the full pressure range of a pumpdown. The Vertical Test Stands have all metal seals since the volumes must be stringently leak tight to minimize air ingress during operation at 30 mbar under normal testing conditions. Since very low pressures (\(<3\) mbar) can be readily achieved without significant deviation from a straight line on a semi-log plot, the number of pump and backfill cycles could be reduced to two cycles, as shown in Figure 2. No contamination is detectable on a commercial oxygen analyzer when sweeping the Vertical Test Stand volumes after the two pump and backfill cycles are completed. Note that for the Vertical Test Stand a leak check is performed on the first cycle, then on the second cycle pumping is abruptly cut off at around 2 mbar. Leak tightness checks should be performed before the first backfill since air leaks are only displacing air or nitrogen at that point, whereas after the first backfill any leaks are more detrimental since leaking air is displacing helium. Figure 1: Plot showing Cases 1 through 3 as a function of time. The pumping for the Case 3 system should be cut off by 35 mbar. During the second hour of pumping the air leaking in is displacing the helium-air mixture going out through the pump as the ultimate pressure is asymptotically approached, so the purification effect of previous pump and backfill cycles is being defeated over time. ## 4 Summary This paper presented a methodology for optimizing pump and backfill procedures. This methodology has been verified on frequently used test stands at Fermilab. A summary of best practices for pump and backfilling is the following: * Start by purging with dry nitrogen to remove water and get the best achievable ultimate vacuum for the system * Use a helium leak detector as part of the first pump and backfill cycle and where possible locate and repair leaks.
This minimizes air in-leak during future pumping cycles and possibly reduces the number of backfills required * Use a pressure transmitter and real-time semi-log plot to determine when to stop pumping * Don't pump on the volume overnight during pump & backfill. Instead, have the operator watch volume pressure and cut off pumping before air ingress significantly affects the purity of the remaining helium in the volume. * Based on the pressure when pumping is stopped, calculate the minimum number of cycles necessary to achieve the desired purity level (a short sketch of this calculation follows below) In conclusion, it is possible to improve test schedules, reduce labor costs, and conserve helium simultaneously using optimized pump and backfill procedures.
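As referenced in the last recommendation above, the minimum cycle count follows from inverting equation (1); a minimal sketch, assuming an atmospheric starting pressure and an illustrative 50 PPM\({}_{\rm v}\) target:

```python
import math

# Sketch: minimum number of pump-and-backfill cycles from the stop pressure,
# obtained by inverting Eq. (1). Target purity and P_start are assumptions.
def min_cycles(p_stop_mbar: float, target_ppmv: float = 50.0,
               p_start_mbar: float = 1013.0) -> int:
    ratio = p_stop_mbar / p_start_mbar
    return math.ceil(math.log(target_ppmv / 1e6) / math.log(ratio))

print(min_cycles(150.0))  # -> 6 cycles, consistent with Table 1
print(min_cycles(5.0))    # -> 2 cycles, consistent with Table 1
```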
2301.05088
Extreme mass ratio inspirals in galaxies with dark matter halos
Using the analytic, static and spherically symmetric metric for a Schwarzschild black hole immersed in dark matter (DM) halos with Hernquist type density distribution, we derive analytic formulae for the orbital period and orbital precession, and the evolutions of the semi-latus rectum and the eccentricity, for eccentric EMRIs in the environment of DM halos. We show how orbital precessions are decreased and even reverse direction if the density of the DM halo is large enough. The presence of local DM halos slows down the decrease of the semi-latus rectum and the eccentricity. Comparing the number of orbital cycles with and without DM halos over one-year evolution before the merger, we find that DM halos with compactness as small as $10^{-4}$ can be detected. By calculating the mismatch between GW waveforms with and without DM halos, we show that we can use GWs from EMRIs in the environments of galaxies to test the existence of DM halos and detect compactness as small as $10^{-5}$.
Ning Dai, Yungui Gong, Yang Zhao, Tong Jiang
2023-01-12T15:35:58Z
http://arxiv.org/abs/2301.05088v1
# Extreme mass ratio inspirals in galaxies with dark matter halos ###### Abstract Using the analytic, static and spherically symmetric metric for a Schwarzschild black hole immersed in dark matter (DM) halos with Hernquist type density distribution, we derive analytic formulae for the orbital period and orbital precession, and the evolutions of the semi-latus rectum and the eccentricity, for eccentric EMRIs in the environment of DM halos. We show how orbital precessions are decreased and even reverse direction if the density of the DM halo is large enough. The presence of local DM halos slows down the decrease of the semi-latus rectum and the eccentricity. Comparing the number of orbital cycles with and without DM halos over one-year evolution before the merger, we find that DM halos with compactness as small as \(10^{-4}\) can be detected. By calculating the mismatch between GW waveforms with and without DM halos, we show that we can use GWs from EMRIs in the environments of galaxies to test the existence of DM halos and detect compactness as small as \(10^{-5}\). ## I Introduction The first detection of gravitational waves (GWs) from the merger of a black hole (BH) binary by the LIGO Scientific Collaboration and the Virgo Collaboration in 2015 [1; 2] opened a new window for probing gravitational physics and fundamental physics. Since then, tens of confirmed GW events have been detected by the ground-based GW observatories [3; 4; 5; 6]. The ground-based GW observatories are only sensitive to GWs in the frequency range of \(10-10^{3}\) Hz. The space-based GW observatories such as LISA [7], TianQin [8] and Taiji [9; 10] will usher in a new era in GW astronomy due to their unprecedented accuracy and their sensitive range of mHz [11; 12; 13; 14]. One particularly interesting target of space-based GW detectors is a stellar-mass compact object (SCO) inspiralling onto a massive black hole (MBH), known as an extreme mass ratio inspiral (EMRI) [15]. There are \(10^{5}-10^{6}\) GW cycles in the detector band when the SCO inspirals deep inside the strong field region of the MBH, and rich information about the spacetime geometry around the MBH is encoded in GW waveforms. Therefore, the observations of GWs emitted from EMRIs present us with a good opportunity for the study of astrophysics, gravity in the strong and nonlinear regions, and the nature of BHs [15; 16; 17; 18; 19; 20]. Although the nature of dark matter (DM) is still a mystery in physics, there is a lot of indirect evidence for its existence in the Universe [21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. DM may cluster at the centers of galaxies and around BHs [31; 32; 33; 34], and affect the dynamics of binaries and hence GWs emitted from them. Since EMRIs are believed to reside in stellar clusters and the centers of galaxies, DM may affect the dynamics of EMRIs and the observations of GWs from them; in particular, GWs from EMRIs in DM environments may be used to understand the astrophysical environment surrounding EMRIs, confirm the existence of DM, and uncover its nature [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. In the studies of DM effects discussed above, Newtonian approaches to the problems were applied and the gravitational effects of DM on the dynamical evolution of EMRIs were modeled at the Newtonian level. In Ref.
[50], the authors generalized Einstein clusters [51; 52] to include horizons, solved Einstein's equations sourced by a DM halo of Hernquist type density distribution [34] with a MBH at its center, and obtained analytical formulae for the metric of galaxies harboring MBHs. Exact solutions for the geometry of a MBH immersed in DM halos with different density distributions were then derived [53; 54]. With the fully relativistic formalism, it was found that the leading order correction to the ringdown stage induced by the external matter and fluxes by orbiting particles is a gravitational redshift, and the difference between the number of GW cycles accumulated by EMRIs with and without DM halos over one year before the innermost stable circular orbit can reach about 500 [50]. In galaxies harboring MBHs, tidal forces and geodesic deviation depend on the masses of the DM halos and the typical length scales of the galaxies [55]. Due to the gravitational pull of DM halos, the apsidal precession of the geodesic orbits for EMRIs is strongly affected and even prograde-to-retrograde drift can occur [56]. In prograde-to-retrograde orbital alterations, GWs show transient frequency phenomena around a critical non-precessing turning point [56]. A fully relativistic formalism to study GWs from EMRIs in static, spherically symmetric spacetimes describing a MBH immersed in generic astrophysical environments was established in Ref. [57] and it was shown how the astrophysical environment changes GW generation and propagation. The above discussions are based on circular motions or eccentric cases without GW reaction. In this paper, we study eccentric orbital motions and GWs of EMRIs in galaxies with DM environments. The paper is organized as follows. A review of the spacetime of galaxies harboring MBHs is given first, then we discuss the geodesic motions of EMRIs in the spacetime in Section II. In Section III, we use the "Numerical Kludge" method [58; 59; 60] to calculate GWs from eccentric EMRIs in galaxies with DM environments. To assess the capability of detecting DM halos with LISA, we calculate the mismatch between GWs from EMRIs with and without DM halos along with their signal-to-noise ratios (SNRs) in Section III. We draw conclusions in Section IV. In this paper we use the units \(G=c=1\). ## II The motions of binaries in the environments of galaxies Following [50], we use the Hernquist-type density distribution [34] to describe the profiles observed in the bulges and elliptical galaxies \[\rho_{\rm H}=\frac{Mr_{0}}{2\pi r(r+r_{0})^{3}}, \tag{1}\] where \(M\) is the total mass of the DM halo, and \(r_{0}\) is the typical lengthscale of a galaxy. The energy-momentum tensor of a galaxy harboring a MBH with the mass \(M_{\rm BH}\) is assumed to be an anisotropic fluid \[T_{\nu}^{\mu}={\rm diag}(-\rho_{\rm DM},0,P_{t},P_{t}), \tag{2}\] where the density profile for a MBH residing at the center of the distribution (1) is \[4\pi\rho_{\rm DM}=\frac{m^{\prime}}{r^{2}}=\frac{2M(r_{0}+2M_{\rm BH})(1-2M_{ \rm BH}/r)}{r(r+r_{0})^{3}}, \tag{3}\] the mass function \(m(r)\) is \[m(r)=M_{\rm BH}+\frac{Mr^{2}}{(r_{0}+r)^{2}}\left(1-\frac{2M_{\rm BH}}{r} \right)^{2}, \tag{4}\] and the tangential pressure \(P_{t}\) is \[2P_{t}=\frac{m(r)\rho_{\rm DM}}{r-2m(r)}. \tag{5}\] Obviously, in the absence of the MBH, the density profile (3) reduces to Eq. (1).
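To make the profile concrete, the following minimal Python sketch evaluates Eqs. (3) and (4) in \(G=c=1\) units with masses and radii measured in units of \(M_{\rm BH}\); the halo mass and compactness are illustrative choices also used later in the paper.

```python
import numpy as np

# Sketch of the halo profile around the MBH, Eqs. (3)-(4), in G = c = 1 units
# with masses and radii in units of M_BH. M and M/r0 are illustrative values.
M_BH = 1.0
M = 1.0e2                 # total DM-halo mass
r0 = M / 1.0e-3           # lengthscale fixed by the compactness M/r0 = 1e-3

def mass_function(r):
    """Mass function m(r) of Eq. (4)."""
    return M_BH + M * r**2 / (r0 + r)**2 * (1.0 - 2.0 * M_BH / r)**2

def rho_dm(r):
    """DM density of Eq. (3); it vanishes at the horizon r = 2 M_BH."""
    return (2.0 * M * (r0 + 2.0 * M_BH) * (1.0 - 2.0 * M_BH / r)
            / (4.0 * np.pi * r * (r + r0)**3))

r = np.logspace(1, 6, 6)  # radii from 10 to 1e6 M_BH
print("m(r)   :", mass_function(r))
print("rho_DM :", rho_dm(r))
```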
At large distance, \(r\gg M_{\rm BH}\), the density profile \(\rho_{\rm DM}\) becomes the Hernquist-type distribution (1) for large galaxies with \(r_{0}\gg M_{\rm BH}\), \(\rho_{\rm DM}\sim(M/r_{0})^{2}/(Mr)\), so the DM density \(\rho_{\rm DM}\) is smaller if the compactness \(M/r_{0}\) is smaller with fixed \(M\) or if \(M\) is larger with fixed compactness \(M/r_{0}\). Using the following ansatz for the static, spherically symmetric spacetime [50], \[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{1-2m(r)/r}+r^{2}(d\theta^{2}+\sin^{2}\theta\, d\phi^{2}), \tag{6}\] and solving Einstein equations, we get [50] \[f(r) =\left(1-\frac{2M_{\rm BH}}{r}\right)e^{\Upsilon}, \tag{7}\] \[\Upsilon =-\pi\sqrt{\frac{M}{\xi}}+2\sqrt{\frac{M}{\xi}}\arctan\left( \frac{r+r_{0}-M}{\sqrt{M\xi}}\right),\] \[\xi =2r_{0}-M+4M_{\rm BH}.\] The geometry (6) describes a BH spacetime with an horizon at \(r=2M_{\rm BH}\) and a curvature singularity at \(r=0\), the matter density vanishes at the horizon and the ADM mass of the spacetime is \(M+M_{\rm BH}\). In the absence of DM halo, \(M=0\), the spacetime (6) reduces to Schwarzschild BH with mass \(M_{\rm BH}\). In galaxies, the compactness \(M/r_{0}\) can be as large as \(10^{-4}\)[32]. In general astrophysical environments the compactness \(M/r_{0}\) is usually small. Expanding the function \(f(r)\) in Eq. (7) about \(M/r_{0}=0\) to the second order we get \[f(r) \simeq\left(1-\frac{2M_{\rm BH}}{r}\right)\left(1-\frac{2M}{r_{0}} +\frac{4M^{2}}{3r_{0}^{2}}+\frac{2Mr}{r_{0}^{2}}+\mathcal{O}[r_{0}^{-3}]\right) \tag{8}\] \[=\left(1-\frac{2M_{\rm BH}}{r}\right)(1+\alpha+r\beta),\] where \(\alpha=-2M/r_{0}+4M^{2}/3r_{0}^{2}\) and \(\beta=2M/r_{0}^{2}\). Now we consider a MBH in the center of a DM halo and a SCO moving on geodesics around the MBH in the equatorial plane (\(\theta=\pi/2\)). The geodesic equation is \[\frac{du_{\mu}}{d\tau}=\frac{1}{2}u^{\alpha}u^{\beta}\partial_{\mu}g_{\alpha \beta}, \tag{9}\] where \(u^{\alpha}=dr^{\alpha}/d\tau\), \(\tau\) is the proper time and \(r^{\alpha}=(t,r,\theta,\phi)\). Because the spacetime is static and spherically symmetric, from the geodesic equation (9) we obtain two conserved quantities \(u_{0}=-E/\mu\) and \(u_{\phi}=L/\mu\), \[u_{0} =-E/\mu=-\sqrt{1+2\varepsilon}, \tag{10}\] \[u_{\phi} =L/\mu=h, \tag{11}\] where \(E\) and \(L\) represent the orbital energy and angular momentum of the system, respectively, and the reduced mass \(\mu\) is approximately equal to the mass of the SCO. The radial equation of motion is \[1+\left(\frac{dr}{d\tau}\right)^{2}\left(1-\frac{2m(r)}{r}\right)^{-1}+\frac{h^{ 2}}{r^{2}}=\frac{1+2\varepsilon}{f}. \tag{12}\] For convenience, we introduce the orbital elements, the semi-latus rectum \(p\) and the eccentricity \(e\), to parameterize the orbital motion, \[r=\frac{p}{1+e\cos\chi}, \tag{13}\] where \(\chi\) is a parameter. 
Rewriting the variables \(h\) and \(\varepsilon\) in terms of \(p\) and \(e\), we obtain \[h^{2} =\frac{p\,R_{s}\,(1+\alpha)+p^{3}\beta\,(1-e^{2})^{-1}}{2(1+\alpha )\left(1-\frac{1}{2}\frac{R_{s}}{p}(3+e^{2})\right)+p\,\beta\,\left(1-2\frac{R _{s}}{p}\right)}, \tag{14}\] \[\varepsilon =-\frac{\frac{R_{s}}{2p}(1-e^{2})\left(1-\frac{2R_{s}}{p}\right) +\alpha\,j+\alpha^{2}\,g+\beta\,k}{2\left(1-\frac{1}{2}\frac{R_{s}}{p}(3+e^{2 })\right)(1+\alpha)+p\,\beta\left(1-2\frac{R_{s}}{p}\right)}, \tag{15}\] where \(R_{s}=2M_{\rm BH}\), \[j =-\left(1-\frac{2R_{s}}{p}\right)+\frac{R_{s}}{2p}\left(1-\frac{4 R_{s}}{p}\right)(1-e^{2}),\] \[g =-\left(1-\frac{2R_{s}}{p}\right)-\frac{R_{s}^{2}}{p^{2}}(1-e^{2}),\] \[k =-\frac{p(3+e^{2})}{2(1-e^{2})}\left(1-2\frac{R_{s}}{p}\right)- \frac{2R_{s}^{2}}{p}.\] In terms of \(\chi\), Eqs. (10) and (11) become \[\begin{split}\frac{d\phi}{d\chi}&=\left[\frac{1}{2} \frac{R_{s}}{p}(1+\alpha)+\frac{1}{2}p\beta(1-e^{2})^{-1}\right]^{\frac{1}{2}} \left\{\frac{1}{2}\frac{R_{s}}{p}\left[1-\frac{R_{s}}{p}\left(3+e\cos\chi \right)\right]\right.\\ &\qquad+\alpha\,A+2\alpha^{2}\,A+\beta\,B\right\}^{-\frac{1}{2}} J_{1},\end{split} \tag{16}\] \[\begin{split}\frac{dt}{d\chi}&=\frac{p}{(1+e\cos \chi)^{2}}\left(\left[1-(1+e)\frac{R_{s}}{p}\right]\left[1-(1-e)\frac{R_{s}}{ p}\right]+C\right)^{\frac{1}{2}}\times\\ &\left[1-\frac{R_{s}}{p}(1+e\cos\chi)\right]^{-1}\left(\frac{1}{ 2}\frac{R_{s}}{p}\left[1-\frac{R_{s}}{p}(3+e\cos\chi)+\alpha A+2\alpha^{2}A+ \beta B\right]\right)^{-\frac{1}{2}}J_{2},\end{split} \tag{17}\] where \[A =\frac{R_{s}}{p}\left[1-\frac{R_{s}}{p}(3+e\cos\chi)\right],\] \[B =\frac{p}{2(1-e^{2})(1+e\cos\chi)}\bigg\{2\left(1-\frac{R_{s}}{p }\right)+\] \[\left[1-\frac{4R_{s}}{p}-\left(\frac{R_{s}}{p}\right)^{2}(1-e^{2} )(1+e\cos\chi)-\frac{R_{s}}{p}e^{2}(1+\cos^{2}\chi)\right]\bigg\},\] \[C =\alpha\left[1-\frac{1}{2}(3+e^{2})\frac{R_{s}}{p}\right]+\frac{1 }{2}p\beta\left(1-2\frac{R_{s}}{p}\right)-(\alpha j+\alpha^{2}g+\beta k),\] \[J_{1} =\left(1+\alpha+\frac{\beta p}{1+e\cos\chi}\right)^{\frac{1}{2}} \bigg\{1-\frac{2Mp/(1+e\cos\chi)}{\left[r_{0}+p/(1+e\cos\chi)\right]^{2}}\left[1-\frac{R_{s} }{p}(1+e\cos\chi)\right]\bigg\}^{-\frac{1}{2}},\] \[J_{2} =\left(1+\alpha+\frac{\beta p}{1+e\cos\chi}\right)^{-\frac{1}{2} }\bigg\{1-\frac{2Mp/(1+e\cos\chi)}{\left[r_{0}+p/(1+e\cos\chi)\right]^{2}}\left[1-\frac{R_{s }}{p}(1+e\cos\chi)\right]\bigg\}^{-\frac{1}{2}}.\] Eqs. (16) and (17) can be integrated to obtain \(\phi(\chi)\) and \(t(\chi)\). Taking different compactness and mass for the DM halo, and using Cartesian coordinates \((x,y)=(r\cos\phi,r\sin\phi)\) in the equatorial plane, we show the orbits of EMRIs in galaxies with and without DM in Fig. 1. Due to the gravitational drag of DM halos, the orbits with DM halos are different from those without DM. From Fig. 1, we see that for the same value of \(M\), the effect of DM halos on the orbital precession is larger if the compactness of the DM halo \(M/r_{0}\) is bigger. DM halos decrease the orbital precessions, and can even reverse the direction of precession if the density of the DM halo \(\rho_{\rm DM}\) is large enough. The result of retrograde precessions of the orbital motion in the spacetime (6) is consistent with that found in [56], and the anomalous precessions of binaries in DM environments were also found in [48; 61; 62].
To probe DM halos and study their impact on the orbits of EMRIs, we calculate the time \(T\) and the orbital precession \(\Delta\phi\) over one cycle when the orbital parameter \(\chi\) increases by \(2\pi\), \[T =\int_{0}^{2\pi}\frac{dt}{d\chi}d\chi, \tag{18}\] \[\Delta\phi =\int_{0}^{2\pi}\frac{d\phi}{d\chi}d\chi-2\pi. \tag{19}\] Expanding Eqs. (16) and (17) about \(R_{s}/p=0\) to the second order and substituting the results into Eqs. (18) and (19), we get \[T =2\pi\sqrt{\frac{2p^{3}}{R_{s}}}\frac{1}{(1-e^{2})^{3/2}}\bigg\{1+ \frac{3}{2}(1-e^{2})\frac{R_{s}}{p}+\frac{3}{2}(1-e^{2})\left[1+\frac{5}{4}(1-e ^{2})^{\frac{1}{2}}\right]\left(\frac{R_{s}}{p}\right)^{2}\] \[+\frac{M}{r_{0}}+\frac{5M^{2}}{6r_{0}^{2}}+\frac{Mp}{r_{0}^{2}(1- e^{2})}\left(e^{2}-\frac{11}{2}\right)-\frac{3Mp^{2}/R_{s}}{r_{0}^{2}(1-e^{2})} \bigg\}, \tag{20}\] \[\Delta\phi =3\pi\frac{R_{s}}{p}+\frac{3\pi}{8}(18+e^{2})\left(\frac{R_{s}}{ p}\right)^{2}-\frac{2\pi}{1-e^{2}}\frac{Mp}{r_{0}^{2}}\left[3+\frac{1+e^{2}+2 \frac{R_{s}}{p}}{(1-e^{2})^{1/2}}\right]. \tag{21}\] Figure 1: The orbits of EMRIs in galaxies with and without DM halos. The mass of MBHs is set as \(M_{\rm BH}=10^{6}M_{\odot}\), the eccentricity \(e=0.6\), and the semi-latus rectum \(p=20R_{s}\). We take the compactness \(M/r_{0}\) as \(10^{-2}\) and \(10^{-3}\), and the total mass \(M\) as \(10^{2}M_{\rm BH}\) and \(10^{3}M_{\rm BH}\). The red dashed lines show the trajectories with DM and the blue solid lines show the orbits without DM. The arrows represent the directions of orbital precessions. The terms with \(M\) in the above Eqs. (20) and (21) come from DM halos. In the absence of DM, \(M=0\), the above results (20) and (21) recover those for EMRIs with the central MBH being a Schwarzschild BH. The dominant contribution to the period \(T\) in Eq. (20) is the first term, so \(T\) becomes larger as the semi-latus rectum \(p\) increases. However, there are both positive and negative contributions from the local DM halos: the halos may slow down the increase of \(T\) with \(p\) because of the negative contribution of the last term in Eq. (20), while they enhance the increase of \(T\) with \(p\) if that negative contribution is negligible. From Eq. (21), it is easy to understand that the presence of a DM halo decreases the orbital precession and even retrogrades the orbital precession if the local density of DM halos \(\rho_{\rm DM}\sim M/r_{0}^{2}\) is large enough so that the third term dominates over the first two terms. As the orbit becomes larger, i.e., the semi-latus rectum \(p\) increases, the orbital precession decreases and the prograde precession decreases faster in the presence of DM halos because the third term due to DM halos in Eq. (21) becomes bigger. With DM halos, the prograde-to-retrograde precession transition happens at some critical value of \(p\) and then the prograde precessions change to retrograde precessions as \(p\) increases further; afterwards, the retrograde precessions increase as \(p\) increases. Choosing different values for the compactness \(M/r_{0}\) and the total mass of DM halos \(M\) and using Eqs. (20) and (21), we plot the results of the period \(T\) and the orbital precession \(\Delta\phi\) versus the semi-latus rectum \(p\) in Fig. 2. As expected, the orbital period \(T\) increases with \(p\); the prograde precessions decrease with \(p\), and DM halos accelerate this decrease.
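A minimal sketch evaluating the expanded formulas (20) and (21) is given below, in \(G=c=M_{\rm BH}=1\) units; the parameters mirror one Fig. 2 case. Since these are second-order expansions in \(R_{s}/p\), the location of any precession reversal they predict may differ from the fully relativistic result shown in Fig. 2.

```python
import numpy as np

# Sketch: T and delta-phi from Eqs. (20)-(21), units G = c = M_BH = 1 (R_s = 2).
# e, M and the compactness follow one of the Fig. 2 cases (illustrative only).
Rs, e, M = 2.0, 0.6, 1.0e2
r0 = M / 1.0e-2

def period(p):
    x = Rs / p
    return (2 * np.pi * np.sqrt(2 * p**3 / Rs) / (1 - e**2) ** 1.5 *
            (1 + 1.5 * (1 - e**2) * x
             + 1.5 * (1 - e**2) * (1 + 1.25 * np.sqrt(1 - e**2)) * x**2
             + M / r0 + 5 * M**2 / (6 * r0**2)
             + M * p / (r0**2 * (1 - e**2)) * (e**2 - 5.5)
             - 3 * M * p**2 / (Rs * r0**2 * (1 - e**2))))

def precession(p):
    x = Rs / p
    halo = (2 * np.pi / (1 - e**2) * M * p / r0**2 *
            (3 + (1 + e**2 + 2 * x) / np.sqrt(1 - e**2)))
    return 3 * np.pi * x + (3 * np.pi / 8) * (18 + e**2) * x**2 - halo

p = np.linspace(10 * Rs, 400 * Rs, 4000)
dphi = precession(p)
if np.any(dphi < 0):
    print(f"precession reversal near p = {p[np.argmax(dphi < 0)] / Rs:.0f} R_s")
else:
    print("no precession reversal in the scanned range")
```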
For the case of \(r_{0}=10^{2}M\) and \(M=10^{2}M_{\rm BH}\), the periapsis shifts change from prograde precessions to retrograde precessions at \(p=60R_{s}\), and the retrograde precession increases with \(p\) when \(p\gtrsim 60R_{s}\). From the above discussions, we see that the orbital motions of EMRIs are influenced by DM halos, and we expect that the effects of local DM halos will leave imprints on GWs so that we can probe local DM halos through the observations of GWs emitted from EMRIs. ## III GWs of EMRIs in the environments of galaxies Using the above results for the orbital motions of EMRIs, we get the leading order energy and angular momentum fluxes \[\left\langle\frac{dE}{dt}\right\rangle_{\rm GW} \simeq\frac{32}{5}\left(\frac{\mu}{M_{\rm BH}}\right)^{2}\left( \frac{M_{\rm BH}}{p}\right)^{5}(1-e^{2})^{3/2}\left(1+\frac{73}{24}e^{2}+\frac {37}{96}e^{4}\right)\left(1-6\frac{M}{r_{0}}\right), \tag{22}\] \[\left\langle\frac{dL}{dt}\right\rangle_{\rm GW} \simeq\frac{32}{5}\left(\frac{\mu}{M_{\rm BH}}\right)^{2}M_{\rm BH }\,\left(\frac{M_{\rm BH}}{p}\right)^{7/2}(1-e^{2})^{3/2}\left(1+\frac{7}{8}e^ {2}\right)\left(1-5\frac{M}{r_{0}}\right). \tag{23}\] The last factors \(1-6M/r_{0}\) and \(1-5M/r_{0}\) are the corrections from DM halos around the MBH. Note that the effects of environmental DM halos on the losses of energy and angular momentum only depend on the compactness \(M/r_{0}\), and the energy and angular momentum fluxes become smaller if the compactness is larger. In the absence of local DM halos, \(M=0\), Eqs. (22) and (23) recover the standard results for eccentric binaries [63; 64]. Applying the energy and angular momentum balance equations \[\left\langle\frac{dE}{dt}\right\rangle_{\rm GW} =-\left(\frac{dE}{dt}\right)_{\rm orbit}, \tag{24}\] \[\left\langle\frac{dL}{dt}\right\rangle_{\rm GW} =-\left(\frac{dL}{dt}\right)_{\rm orbit}, \tag{25}\] we get the leading order evolution of the orbital parameters \(p(t)\) and \(e(t)\) due to the emission of GWs, \[\frac{dp}{dt} =-\frac{64}{5}\frac{\mu}{M_{\rm BH}}\left(\frac{M_{\rm BH}}{p} \right)^{3}\left(1-e^{2}\right)^{\frac{3}{2}}\left(1+\frac{7}{8}e^{2}\right) \left(1-5\frac{M}{r_{0}}\right), \tag{26}\] \[\frac{de}{dt} =-\frac{304}{15}\frac{e}{p}\frac{\mu}{M_{\rm BH}}\left(\frac{M_{ \rm BH}}{p}\right)^{3}\left(1-e^{2}\right)^{\frac{3}{2}}\left(1+\frac{121}{304 }e^{2}\right)\left(1-5\frac{M}{r_{0}}\right). \tag{27}\] Figure 2: The results of orbital period and precession for EMRIs in galaxies with and without DM. The mass of central MBHs is set as \(M_{\rm BH}=10^{6}M_{\odot}\) and the eccentricity \(e=0.6\). We take the compactness \(M/r_{0}\) as \(10^{-2}\) and \(10^{-3}\), and the total mass \(M\) as \(10^{2}M_{\rm BH}\), \(10^{3}M_{\rm BH}\) and \(M=0\). The inserts show the evolution in a short time period. Since the right sides of Eqs. (26) and (27) are negative, both the semi-latus rectum \(p\) and the eccentricity decrease with time due to the radiation of GWs. The presence of local DM halos slows down the decrease of \(p\) and \(e\): the bigger the compactness \(M/r_{0}\), the slower the semi-latus rectum \(p(t)\) and the eccentricity decrease. In Fig. 3, we show the evolution of the orbital parameters \(p(t)\) and \(e(t)\) due to the emission of GWs. Comparing with the astrophysical environments without DM, it takes more time for EMRIs with DM halos to evolve from \(p=20R_{\rm s}\) to \(p=3R_{\rm s}\). The larger the compactness \(M/r_{0}\) is, the more time it takes.
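The coupled equations (26)-(27) are straightforward to integrate numerically; a minimal sketch in \(G=c=M_{\rm BH}=1\) units, with the mass ratio and compactness of Fig. 3, is given below (for \(M_{\rm BH}=10^{6}M_{\odot}\), one time unit corresponds to about 4.9 s).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: inspiral evolution of (p, e) from Eqs. (26)-(27), G = c = M_BH = 1.
mu, comp = 1.0e-5, 1.0e-2      # mass ratio (10 Msun / 1e6 Msun) and M/r0
Rs = 2.0

def rhs(t, y):
    p, e = y
    common = mu * p**-3 * (1 - e**2) ** 1.5 * (1 - 5 * comp)
    dp = -(64 / 5) * common * (1 + (7 / 8) * e**2)
    de = -(304 / 15) * (e / p) * common * (1 + (121 / 304) * e**2)
    return [dp, de]

def reached_sep(t, y):         # stop at p = (3 + e) R_s, as in Fig. 3
    return y[0] - (3 + y[1]) * Rs
reached_sep.terminal = True

sol = solve_ivp(rhs, [0.0, 1.0e12], [20 * Rs, 0.6],
                events=reached_sep, rtol=1e-9, atol=1e-12)
print(f"p = (3+e) R_s reached at t = {sol.t[-1]:.3e} M_BH, "
      f"final e = {sol.y[1, -1]:.3f}")
```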
The presence of DM halos also slows down the decrease rate of the eccentricity, and the final eccentricity is a bit larger with larger compactness. As discussed above, the effects of DM halos will be manifested in GW waveforms. The quadrupole formula of GWs is \[h^{jk}=\frac{2}{d_{L}}\ddot{I}^{jk}, \tag{28}\] where \(d_{L}\) is the luminosity distance between the detector and the source and \(I_{jk}\) is the quadrupole moment of EMRIs. Figure 3: The evolution of the orbital parameters \(p\) and \(e\) from the initial \(p=20R_{\rm s}\) to \(p=(3+e)R_{\rm s}\). The mass of central MBHs is chosen as \(M_{\rm BH}=10^{6}M_{\odot}\), the mass of the SCO is \(\mu=10M_{\odot}\) and the initial eccentricity is chosen as \(e_{0}=0.2,0.6\). We consider two different values for the compactness of the DM halo, \(M/r_{0}=10^{-2}\) and \(10^{-3}\). The solid lines correspond to the cases without DM. The tensor modes \(h_{+}\) and \(h_{\times}\) in the transverse-traceless gauge are given by \[h_{+} =\frac{1}{2}\left(e_{X}^{j}e_{X}^{k}-e_{Y}^{j}e_{Y}^{k}\right)h_{jk}, \tag{29}\] \[h_{\times} =\frac{1}{2}\left(e_{X}^{j}e_{Y}^{k}+e_{Y}^{j}e_{X}^{k}\right)h_{ jk}, \tag{30}\] where \(e_{X}\) and \(e_{Y}\) are the orthonormal vectors in the plane that is perpendicular to the direction from the detector to the GW source. Plugging the results for the orbital evolution obtained above into Eq. (28), we numerically calculate the time-domain GW waveforms. The time-domain plus-mode GW waveforms for EMRIs with and without DM halos are shown in Fig. 4. From Fig. 4, we see that initially the difference between GW waveforms with and without DM halos is negligible. One year later, the two waveforms for EMRIs with and without DM halos are quite different. In order to quantify the impact of DM halo environments on the dephasing of GW waveforms, we calculate the number of orbital cycles accumulated from time \(t_{i}\) to \(t_{f}\) [65; 66; 67] \[\mathcal{N}(t)=\int_{t_{i}}^{t_{f}}\dot{\phi}(t)dt. \tag{31}\] Over one-year evolution before the merger, the numbers of orbital cycles for EMRIs with and without DM halos are \(\mathcal{N}_{\rm DM}\) and \(\mathcal{N}_{0}\), respectively. In Fig. 5, we show the difference \(\Delta\mathcal{N}=\mathcal{N}_{\rm DM}-\mathcal{N}_{0}\) between the number of orbital cycles with and without DM halos accumulated over one year before the merger. Following [68], we choose \(\Delta\mathcal{N}\sim 1\,\)rad as the threshold for a detectable dephasing. The results show that we can detect compactness as small as \(10^{-4}\). The results also show that eccentric orbits can help detect DM halos with smaller compactness. To distinguish the waveforms more accurately, we calculate the mismatch between GW signals emitted from EMRIs with and without DM halos. Given two signals \(h_{1}(t)\) and \(h_{2}(t)\), the inner product \((h_{1}|h_{2})\) is defined as \[(h_{1}|h_{2})=2\int_{0}^{+\infty}\frac{\tilde{h}_{1}(f)\tilde{h}_{2}^{*}(f)+ \tilde{h}_{2}(f)\tilde{h}_{1}^{*}(f)}{S_{h}(f)}\,df, \tag{32}\] where \(\tilde{h}(f)\) is the Fourier transformation of the time-domain signal \(h(t)\), \(\tilde{h}^{*}\) denotes the complex conjugate of \(\tilde{h}\), and the SNR for the signal \(h\) is \(\sqrt{(h|h)}\).
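On a discrete frequency grid, the inner product (32) and the overlap defined below reduce to simple sums; a minimal sketch follows, with the detector PSD \(S_h(f)\) passed in as an array (it can be evaluated from the LISA model given in the next paragraph). The maximization over time and phase shifts that enters the mismatch is omitted here.

```python
import numpy as np

# Sketch of the noise-weighted inner product, Eq. (32), and the overlap on a
# discrete frequency grid; Sh is the detector PSD sampled on the same grid.
def inner(h1f, h2f, Sh, df):
    """2 * integral of (h1 h2* + h2 h1*) / Sh df."""
    return 2.0 * df * np.sum((h1f * np.conj(h2f) + h2f * np.conj(h1f)).real / Sh)

def overlap(h1f, h2f, Sh, df):
    return inner(h1f, h2f, Sh, df) / np.sqrt(
        inner(h1f, h1f, Sh, df) * inner(h2f, h2f, Sh, df))

# Toy usage: two slightly detuned sinusoids with a flat (white) PSD.
dt, n = 0.1, 2**16
t = np.arange(n) * dt
f = np.fft.rfftfreq(n, dt)[1:]          # drop the DC bin
df = f[1] - f[0]
h1f = np.fft.rfft(np.sin(2 * np.pi * 5.0e-3 * t))[1:]
h2f = np.fft.rfft(np.sin(2 * np.pi * 5.1e-3 * t))[1:]
Sh = np.ones_like(f)
o = overlap(h1f, h2f, Sh, df)
print("overlap =", o, " mismatch (no maximization) =", 1 - o)
```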
For LISA, the one-sided noise power spectral density is [69] \[S_{h}(f)=\frac{S_{x}}{L^{2}}+\frac{2S_{a}\left[1+\cos^{2}(2\pi\,fL/c)\right]}{(2 \,\pi f)^{4}L^{2}}\left[1+\left(\frac{4\times 10^{-4}\text{Hz}}{f}\right) \right], \tag{33}\] where \(\sqrt{S_{a}}=3\times 10^{-15}\) m s\({}^{-2}\)/Hz\({}^{1/2}\) is the acceleration noise, \(\sqrt{S_{x}}=1.5\times 10^{-11}\) m/Hz\({}^{1/2}\) is the displacement noise and \(L=2.5\times 10^{6}\) km is the arm length of LISA [7]. The overlap between two GW signals is quantified as [60] \[\mathcal{O}(\tilde{h}_{1},\tilde{h}_{2})=\frac{(\tilde{h}_{1}|\tilde{h}_{2})} {\sqrt{(\tilde{h}_{1}|\tilde{h}_{1})(\tilde{h}_{2}|\tilde{h}_{2})}}, \tag{34}\] and the mismatch between two signals is defined as \[\text{Mismatch}=1-\mathcal{O}_{\text{max}}(\tilde{h}_{1},\tilde{h}_{2}), \tag{35}\] where the maximum is evaluated with respect to time and phase shifts. The mismatch is zero if two signals are identical. Figure 4: The time-domain plus mode GW waveforms for EMRIs with and without DM halos. The mass of central MBHs is \(M_{\text{BH}}=10^{6}M_{\odot}\), the mass of the SCO is \(\mu=10M_{\odot}\), the total mass of DM halos is \(M=10^{2}M_{\text{BH}}\), the inclination angle \(\iota=\pi/6\), the luminosity distance \(d_{L}=1\text{Gpc}\), the initial longitude of pericenter \(\omega_{0}=0\) and the initial eccentricity \(e_{0}=0.6\) at \(p_{0}=20R_{s}\). \(M=0\) corresponds to the case without DM halos. The left panels show the initial waveforms. The right panels show the waveforms after one year. The top panels are for \(M/r_{0}=10^{-2}\) and the bottom panels are for \(M/r_{0}=10^{-3}\). Two signals are considered experimentally distinguishable if their mismatch is larger than \(d/(2\,\text{SNR}^{2})\), where \(d=13\) is the number of intrinsic parameters of the GW source [70; 71; 72]. Considering EMRIs with masses \((10^{6}+10)M_{\odot}\) at \(d_{L}=1\) Gpc and an integration time of one year before the coalescence, we calculate the mismatch between GW waveforms with and without DM halos, and the results with LISA are shown in Fig. 6. The SNR is about 32 for the GW signals from EMRIs considered above. The initial eccentricity \(e_{0}\) is chosen at \(p_{0}=20R_{s}\). As shown in Fig. 6, if the compactness of the DM halo \(M/r_{0}\) is larger, then the mismatch between GW waveforms with and without DM halos is bigger, so more compact DM halos can be detected more easily with LISA. Again, eccentric orbits help detect smaller compactness. Therefore, we can use GWs from EMRIs in the environments of galaxies to test the existence of DM halos and detect compactness of the halos \(M/r_{0}\) as small as \(10^{-5}\). Figure 5: The difference between the orbital cycles with and without DM halos \(\Delta\mathcal{N}(t)\) over one-year evolution before the merger for different compactness of halos \(M/r_{0}\). The initial eccentricity \(e_{0}\) is chosen at \(p_{0}=20R_{s}\). The mass of central MBHs is \(M_{\text{BH}}=10^{6}M_{\odot}\) and the mass of the SCO is \(\mu=10M_{\odot}\). The masses of DM halos are \(M=10^{2}M_{\text{BH}}\). The black dashed line corresponds to \(\Delta\mathcal{N}=1\,\text{rad}\). ## IV Conclusions and discussions Using the analytic, static and spherically symmetric metric for a Schwarzschild black hole immersed in DM halos with Hernquist type density distribution, we derive analytic formulae for the orbital period and orbital precession for eccentric EMRIs in the environment of DM halos.
The results show that the presence of a DM halo decreases the orbital precession and even retrogrades the orbital precession if the local density of DM halos \(\rho_{\rm DM}\sim M/r_{0}^{2}\) is large enough. As the orbit becomes larger, the orbital precession decreases and the prograde precession decreases faster in the presence of DM halos. With DM halos, the prograde-to-retrograde precession transition happens at some critical value of \(p\) and then the prograde precessions change to retrograde precessions as \(p\) increases further; afterwards, the retrograde precessions increase as \(p\) increases. Taking the energy and angular momentum fluxes of GWs into consideration, we derive analytic formulae for the evolutions of the semi-latus rectum and the eccentricity. The presence of local DM halos slows down the decrease of the semi-latus rectum and the eccentricity. Comparing the numbers of orbital cycles with and without DM halos over one-year evolution before the merger, we find that DM halos with compactness as small as \(10^{-4}\) can be detected. By calculating the mismatch between GW waveforms with and without DM halos, we show that we can use GWs from EMRIs in the environments of galaxies to test the existence of DM halos and detect compactness as small as \(10^{-5}\). Figure 6: The results of the mismatch between GW waveforms with and without DM halos for different compactness \(M/r_{0}\) and initial eccentricity \(e_{0}\). The black dashed line corresponds to the threshold \(d/(2\,{\rm SNR}^{2})\approx 0.0072\). We also find that eccentric orbits can help detect DM halos with smaller compactness. Binaries in the environments of galaxies are also affected by the dynamical frictions of the surrounding medium [73; 74; 75; 76; 77], and the accretion of the medium [78; 79; 46]. It is necessary to consider the effects of dynamical frictions and accretion when the medium is dense. To distinguish the effects of DM halos from other media (e.g., accretion disks), or modified gravity on GWs, further study is needed [80; 81; 43; 82]. ###### Acknowledgements. The computing work in this paper is supported by the Public Service Platform of High Performance Computing by Network and Computing Center of HUST. This research is supported in part by the National Key Research and Development Program of China under Grant No. 2020YFC2201504.
2307.01160
Optimized experimental optical tomography of quantum states of room-temperature alkali-metal vapor
We demonstrate a novel experimental technique for quantum-state tomography of the collective density matrix. It is based on measurements of the polarization of light traversing the atomic vapor. To assess the technique's robustness against errors, experimental investigations are supported with numerical simulations. This not only allows us to determine the fidelity of the reconstruction, but also to analyze the quality of the reconstruction for specific experimental parameters (light tuning and number of measurements). By utilizing the so-called condition number, we demonstrate that the reconstruction can be optimized for a specific tuning of the system parameters, and further improvement is possible by selective repetition of the measurements. Our results underscore the potential of high-fidelity quantum-state reconstruction while optimizing measurement resources.
Marek Kopciuch, Magdalena Smolis, Adam Miranowicz, Szymon Pustelny
2023-07-03T17:10:27Z
http://arxiv.org/abs/2307.01160v1
# Optimized experimental optical tomography of quantum states of room-temperature alkali-metal vapor ###### Abstract We demonstrate a novel experimental technique for quantum-state tomography of the collective density matrix. It is based on measurements of the polarization of light traversing the atomic vapor. To assess the technique's robustness against errors, experimental investigations are supported with numerical simulations. This not only allows us to determine the fidelity of the reconstruction, but also to analyze the quality of the reconstruction for specific experimental parameters (light tuning and number of measurements). By utilizing the so-called condition number, we demonstrate that the reconstruction can be optimized for a specific tuning of the system parameters, and further improvement is possible by selective repetition of the measurements. Our results underscore the potential of high-fidelity quantum-state reconstruction while optimizing measurement resources. 1 Doctoral School of Exact and Natural Sciences, Jagiellonian University, Faculty of Physics, Astronomy and Applied Computer Sciences, Lojasiewicza 11, 30-348 Krakow, Poland 2 Institute of Physics, Jagiellonian University in Krakow, Lojasiewicza 11, 30-348 Krakow, Poland 3 Institute of Spintronics and Quantum Information, Faculty of Physics, Adam Mickiewicz University, 61-614 Poznan, Poland *[email protected] *[email protected] ## 1 Introduction Quantum technology is built on the precise manipulation and reconstruction of quantum states. When dealing with single microscopic quantum objects, the reconstruction of states becomes challenging. This stems from the (often) destructive nature of the reconstruction and the small amplitudes of recorded signals. To address these difficulties, some researchers have turned their focus towards studying ensembles of quantum objects, which display a collective quantum behavior [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. Atomic vapors serve as a prime example of a medium utilized for the engineering of collective quantum states. In their ultracold form, they allow for precise quantum control through light and other external fields, albeit the implementation of this control requires complex experimental setups. On the other hand, room-temperature vapors can be studied using simpler apparatuses, but they simultaneously present challenges in terms of theoretical understanding [3]. Despite these problems, however, room-temperature atomic vapors were used to demonstrate various quantum-mechanical effects including coherent population trapping [4], spin squeezing [5, 6], macroscopic entanglement [7, 8], spin waves [8, 9], squeezed light generation [10, 5, 11] and entanglement of light modes [12]. Rubidium vapor was also used to construct an on-demand quantum memory [13, 14]. These experiments revived the interest in such media, while also necessitating the development of reliable quantum-state tomography (QST) methods. In this work, we demonstrate the first experimental implementation of the recently proposed QST method of Ref. [15]. The method enables the reconstruction of a collective density matrix of a room-temperature atomic vapor and is based on the illumination of the vapor with an off-resonant probing light and monitoring the properties of the light after it traverses the medium, subjected to an external magnetic field. This enables the reconstruction of a collective quantum state of \({}^{87}\)Rb atoms residing in the \(F=1\) ground state (qutrit).
To evaluate the efficiency of the tomographic technique, we used the so-called condition number [16, 17, 18]. Previously, the parameter was used for a comprehensive comparison of tomographic methods of two polarization qubits [17], NMR tomography of two \({}^{1}H\) spins-1/2 (two qubits) [19], and a single nuclear spin-3/2 (a quartit) in a semiconductor quantum well [20]. We demonstrate that by an appropriate tuning of the probing light, the condition number can be minimized (corresponding to an optimized reconstruction) and values as small as 2.25 can be achieved. We also discuss means of further improvement of the reconstruction efficiency by repeating specific measurements. ## 2 Principles of the optical tomography We begin with a brief overview of the QST technique developed in Ref. [15]. This method relies on measuring the polarization rotation of linearly polarized probe light traversing a medium (e.g., room-temperature alkali metal atoms) subjected to a longitudinal magnetic field. We assume that the amplitude of the light is low, which allows us to describe its interaction with atoms using perturbation theory at the lowest order (linear interaction). At the same time, unlike previous approaches (see, e.g., Refs. [21, 22]), we do not assume a significant detuning of the light from the optical transition. This enables us to consider not only the vector contributions to a polarization rotation [21, 23, 24], but also the tensor one [25], and hence reconstruct the collective density matrix of the atoms. It is noteworthy that this reconstruction is achieved without full control over the system, as successive magnetic sublevels are equally split due to a weak magnetic field (under the conditions of the linear Zeeman effect) [26]. In Ref. [15], the relation between the time-dependent polarization rotation \(\delta\alpha(t)\) and the operators \(\hat{\alpha}_{R,I}\) and \(\hat{\beta}\) was introduced. The operators are associated with coherences and population differences of specific magnetic sublevels, and hence provide access to specific density-matrix elements. In this work, we employ a slightly modified version of that relationship, i.e., \[\delta\alpha(t;\Delta)=\eta(\Delta)\left(e^{-\gamma_{1}t}\left[\langle\hat{ \alpha}_{R}\rangle\sin(2\Omega_{L}t)+\langle\hat{\alpha}_{I}\rangle\cos(2 \Omega_{L}t)\right]-\zeta(\Delta)e^{-\gamma_{2}t}\left\langle\hat{\beta} \right\rangle\right), \tag{1}\] where \(\eta(\Delta)=\chi V_{R}(\Delta)\) and \(\zeta(\Delta)=V_{I}(\Delta)/V_{R}(\Delta)\) are the so-called global and local scaling factors associated with real \(V_{R}\) and imaginary \(V_{I}\) parts of the Voigt profile, and \(\chi\) is related to experimental parameters such as atomic density and transition frequency (for more details see the Supplemental Information - SI). As shown in Eq. (1), the time dependence of the polarization rotation is determined by the Larmor frequency \(\Omega_{L}\) and the relaxation rates \(\gamma_{1}\) and \(\gamma_{2}\). Since a single measurement described by Eq. (1) allows us to extract only limited information about the system (specifically the population difference and coherence between magnetic sublevels with \(\Delta m_{F}=2\)), it is necessary to expand the set of measured signals to obtain a more comprehensive information. To achieve this, we introduce a series of unitary operations known as control pulses, which systematically manipulate a given state in the Hilbert space.
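Before turning to the pulse sequence, the model signal of Eq. (1) is easy to sketch numerically; all numbers below are illustrative placeholders, and \((a_R, a_I, b)\) stand for the expectation values \(\langle\hat{\alpha}_R\rangle\), \(\langle\hat{\alpha}_I\rangle\), \(\langle\hat{\beta}\rangle\) of a hypothetical prepared state.

```python
import numpy as np

# A minimal sketch of the model signal of Eq. (1) with assumed parameters:
# eta/zeta are the global and local scaling factors, g1/g2 the relaxation
# rates, OmL the Larmor frequency; (aR, aI, b) are placeholder expectation
# values of a hypothetical prepared state.
eta, zeta = 1.0e-3, 0.3
g1, g2 = 2.0, 1.0                      # 1/s
OmL = 2.0 * np.pi * 50.0               # rad/s
aR, aI, b = 0.10, -0.05, 0.20

def rotation(t):
    """Polarization-rotation signal delta_alpha(t; Delta) of Eq. (1)."""
    osc = aR * np.sin(2 * OmL * t) + aI * np.cos(2 * OmL * t)
    return eta * (np.exp(-g1 * t) * osc - zeta * np.exp(-g2 * t) * b)

t = np.linspace(0.0, 1.0, 20001)       # ~1 s record, as in the experiment
signal = rotation(t)
```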
These pulses provide access to other density-matrix elements and hence offer a complete characterization of the system [15]. In turn, the reconstruction problem can be presented as \[\mathbb{O}\rho_{V}=\mathbf{b}, \tag{2}\] where \(\mathbb{O}\) represents the coefficient matrix determined by the set of observables, and \(\rho_{V}=\left[\rho_{1\overline{1}}^{R},\rho_{10}^{R},\rho_{10}^{I},\ldots \right]^{T}\) (where \(\rho_{mn}^{R}=\text{Re}\{\rho_{mn}\}\), \(\rho_{mn}^{I}=\text{Im}\{\rho_{mn}\}\) and \(\overline{1}=-1\)) is the vectorized form of a standard-form density matrix \(\rho\) with entries \(\rho_{ij}\) (see SI for more information), and \(\mathbf{b}\) is the observation vector containing the measured values of the observables. In a typical experimental scenario, the set of measurements given in Eq. (2) is often overdetermined, and it is advantageous to rescale it to a more suitable form \[\mathbb{C}\rho_{V}=\tilde{\mathbf{b}}, \tag{3}\] where \(\mathbb{C}=\mathbb{O}^{\dagger}\mathbb{O}\) and \(\tilde{\mathbf{b}}=\mathbb{O}^{\dagger}\mathbf{b}\). This rescaling enables the calculation of the density operator by simply inverting the aforementioned linear problem. ## 3 Experimental details ### Experimental setup The heart of our experimental system is a 3 cm diameter paraffin-coated spherical cell, containing an isotopically enriched sample of \({}^{87}\)Rb atoms. The cell is heated up to 50\({}^{\circ}\)C and is placed inside a cylindrical magnetic shield made of three layers of mumetal and an innermost ferrite layer. Apart from the cell, the shield additionally contains a set of magnetic-field coils, which enables residual-field compensation and generation of magnetic-field pulses in the \(\mathbf{x}\), \(\mathbf{y}\), and \(\mathbf{z}\) directions. Light used for the illumination of the rubidium atoms is provided by three diode lasers, where the pump and probe lasers are distributed-feedback lasers (DFBs), and the repump laser is an external-cavity diode laser (ECDL). All lasers are independently tuned, and the repump laser wavelength is frequency-stabilized using a Dichroic Atomic Vapor Laser Lock (DAVLL) [27]. The wavelengths of the other two lasers are passively maintained due to their inherent temporal stability. Performance of all lasers is monitored using a wavemeter, while the pump and probe lasers are additionally monitored through saturated absorption spectroscopy (SAS). The intensities of the laser beams are dynamically controlled by three acousto-optical modulators (AOMs). To generate a specific quantum state in the vapor, the pump-light polarization is set by polarizers (Pols) and quarter-wave plates (\(\lambda/4\)), while the repump light is linearly polarized orthogonal to the pump-light propagation direction (i.e., along the \(\mathbf{y}\)-axis). To determine the local scaling factor (see discussion below), the intensity of the probe light is monitored, and its \(\mathbf{y}\) linear polarization prior to the shield is provided by a Glan-Thompson polarizer. Finally, the polarization rotation of the probe light is measured after the cell using a balanced polarimeter consisting of a Wollaston prism (Wol) and a balanced photodetector (BPD). The schematic of the setup is shown in Fig. 1(a).
Figure 1: (a) Simplified scheme of the experimental setup used for the quantum-state generation and tomography. SAS – saturation absorption spectroscopy, DAVLL – dichroic atomic vapor laser lock, AOM – acousto-optic modulator, PD – photodiode, Pol – Glan-Thompson polarizer, \(\lambda/4\) – quarter-wave plate, \({}^{87}\)Rb – paraffin-coated vapor cell filled with \({}^{87}\)Rb, Wol – Wollaston prism, BPD – balanced photodetector. (b) Experimental sequence used in our method. The initial state of atoms is prepared with the pump light turned on for about 200 ms (red trace). After the preparation period, we apply a sequence of magnetic pulses to modify the state of the atoms (green trace). Here, the CYCLOPS pulses (see Sec. 3.4) are first used and then the control pulses are implemented for a total time of about 1.5 ms. Finally, the probe light is turned on, alongside the longitudinal magnetic field, for about 1 s (blue) and the polarization rotation signal is recorded. ### Experimental sequence The experimental sequence utilized in our measurements is shown in Fig. 1(b). The sequence begins with a pumping period during which a specific quantum state is engineered. This stage typically consists of a 200 ms light pulse (optical pumping), which is applied simultaneously with the repumping that prevents the atoms from escaping into the dark (\(F=2\)) state, followed by a few short (\(\approx 100\)\(\mu\)s) magnetic-field pulses, enabling generation of a desired complex state. Subsequently, a series of magnetic-field pulses is used to mitigate technical problems (see Sec. 3.4), which is followed by a set of control pulses. Once the pulses are completed, a constant magnetic field along the \(\mathbf{z}\)-direction, ranging from 10 to 100 nT, is established. At the same time, a probe light beam, propagating along \(\mathbf{z}\) with an intensity of 1-10 \(\mu\)W/cm\({}^{2}\), is turned on. In order to improve the signal-to-noise ratio, the intensity of the probe light is modulated at a frequency of 200 kHz and the polarimeter signal is detected using a lock-in amplifier. ### Global and local scaling factor An important element of the reconstruction of the density matrix is the determination of the global scaling factor \(\eta(\Delta)\) [see Eq. (1)]. This can be done by measuring the light absorption in an unpolarized vapor. Using the absorption relationship derived in the SI, the factor can be identified by comparing the absorption of the probe light, tuned to the same wavelength as that during the tomography measurements (i.e., blue-detuned from \(F=1\to F^{\prime}=2\) by 50-400 MHz), with the absorption of far-detuned light (>15 GHz). \[\eta(\Delta)=\frac{27}{16}\left(\sqrt{\frac{U_{2}(\Delta)/U_{1}(\Delta)}{U_{2 }(\infty)/U_{1}(\infty)}}-1\right), \tag{4}\] where \(U_{1}\) is the voltage measured at the transimpedance photodetector placed in front of the medium and \(U_{2}\) the voltage measured after it (see Fig. 1(a) and the SI for more details), with \(\Delta\) indicating the probe light tuned for QST and \(\infty\) far-detuned light. Experimental determination of the local scaling factor \(\zeta(\Delta)\) [see Eq. (1)] presents a greater challenge. It requires preparation of an anisotropic, yet well-defined quantum state. In this work, we select "stretched" states that are generated along the \(\mathbf{x}\)- and \(\mathbf{z}\)-axes. The first state can be created by illuminating the atoms with a circularly polarized pump light propagating along the \(\mathbf{x}\)-axis.
The preparation of the second state is more involved and requires the application of an additional magnetic-field pulse after the pumping, which rotates the atomic \(\mathbf{x}\)-polarization to the \(\mathbf{z}\)-direction (we have experimentally verified that this process did not introduce dephasing, as evidenced by the unchanged signal amplitude for a many-\(\pi\) pulse). Employing this procedure allows us to mitigate potential systematic errors arising from varying polarization levels achieved with the pump light propagating along different directions, while simultaneously simplifying the experimental setup. The formulas for the light polarization rotation corresponding to these two states are (see the SI for more details) \[\delta\alpha^{(z)}(t;\Delta) = -\frac{5(1-\epsilon)}{24}\eta(\Delta)\zeta(\Delta)e^{-\gamma_{2} t}, \tag{5a}\] \[\delta\alpha^{(x)}(t;\Delta) = -\frac{(1-\epsilon)}{48}\eta(\Delta)e^{-\gamma_{1}t}\cos(2\Omega _{L}t), \tag{5b}\] where \(\epsilon\) is the remaining isotropic part of the state. This allows one to calculate the local scaling factor \[\zeta(\Delta)=\frac{1}{10}\frac{\delta\alpha^{(z)}(0;\Delta)}{\delta\alpha^{(x )}(0;\Delta)}. \tag{6}\] ### CYCLOPS-like measurement Equation (1) shows that our reconstruction method is sensitive to the initial phase of the measured signal. As uncontrollable phase delays are present in every experiment, the identification of the quadrature components of the signal becomes difficult. To address this issue, we adapt the CYCLically Ordered Phase Sequence (CYCLOPS) method, commonly utilized in nuclear magnetic resonance experiments [28, 29]. In our approach, we leverage the fact that the \(\pi\)-rotation of the state around the \(\mathbf{y}\)-axis leads to a sign reversal of \(\left\langle\hat{\alpha}_{I}\right\rangle\) and \(\left\langle\hat{\beta}\right\rangle\) (for more information, see the SI). At the same time, by applying the pulse rotating the state by \(\pi/2\) around the \(\mathbf{z}\)-axis and next the pulse rotating the state around the \(\mathbf{y}\)-axis by \(\pi\) (see the SI), the signs of \(\left\langle\hat{\alpha}_{R}\right\rangle\) and \(\left\langle\hat{\beta}\right\rangle\) are reversed. By subtracting these two transformed states from the initial signal, we obtain \[\left(\delta\alpha-\delta\alpha^{(Y)}\right)(t;\Delta) =2\eta(\Delta)\left[-\zeta(\Delta)e^{-\gamma_{2}t}\left\langle \hat{\beta}\right\rangle+e^{-\gamma_{1}t}\left\langle\hat{\alpha}_{I}\right\rangle \cos(2\Omega_{L}t+\varphi)\right], \tag{7a}\] \[\left(\delta\alpha-\delta\alpha^{(ZY)}\right)(t;\Delta) =2\eta(\Delta)\left[-\zeta(\Delta)e^{-\gamma_{2}t}\left\langle \hat{\beta}\right\rangle+e^{-\gamma_{1}t}\left\langle\hat{\alpha}_{R}\right\rangle \sin(2\Omega_{L}t+\varphi)\right], \tag{7b}\] where \(\varphi\) is an unknown phase shift originating from the experimental apparatus. In our CYCLOPS-like measurements, the problem of the unknown phase is alleviated, as the final signals [see Eqs. (7)] depend only on one quadrature (via either sine or cosine time dependence) and, thus, \(\varphi\) becomes insignificant. The procedure also allows us to remove systematic shifts of the signals associated with the imbalance of the polarimeter (for more details, see the SI). ## 4 Reconstruction of states To perform QST, we conducted the above-described nine measurements, consisting of three sets of CYCLOPS-like pulses for each of three control pulses.
To ensure the self-consistency of our reconstruction procedure, we simultaneously fit all of the polarization-rotation signals with shared parameters such as the global phase, relaxation rates, and oscillation frequency. The fitted values are then used to determine the observables and reconstruct the qutrit density-matrix elements using the linear inversion method given in Eq. (3). However, as this method does not guarantee the reconstructed matrices to be positive semidefinite, we utilize the maximum likelihood method with the Euclidean norm [20, 30] to find the closest physical realization of the reconstructed matrix. To validate our tomography technique, we compare the reconstructed density matrices with numerical simulations of the state obtained during the pumping stage. For the simulations, we assume the interaction of an appropriately polarized light with a Doppler-broadened medium consisting of atoms with an energy-level structure similar to that of the \(D_{1}\) line in \({}^{87}\)Rb. As in the real experiment, we assume that there are two distinct regions between which the atoms can freely move. In the first region, the atoms evolve in a homogeneous magnetic field and relax to thermal equilibrium due to collisions with the vapor-cell walls and with one another. This corresponds to the atoms residing outside of the light beams. In the second region, the atoms still interact with the magnetic field but also with the pump and repump light. Moreover, we neglect the wall relaxation in this region. The latter region corresponds to the atoms inside the light beams. All parameters used in the simulations match the parameters of our experimental setup. As representative examples for our reconstruction, we consider two states that can be easily generated experimentally and simulated theoretically. The first state can be pumped with a strong, circularly polarized pumping light, propagating along the \(\mathbf{x}\)-axis [Fig. 2(a)]. The state has a nonuniform population distribution and all its coherences are nonzero. This allows us to demonstrate that our method can reconstruct not only different coherences but also determine their amplitudes and phases with a high accuracy. The results of the experimental reconstruction and simulations are presented in Fig. 2(a). As seen, the results are in very good agreement, revealing a reconstruction fidelity of 0.995. As the second example, we consider a state pumped with \(\pi\)-polarized light, propagating along the \(\mathbf{x}\)-axis. In the ideal case (without experimental artefacts), this scheme leads to the total depletion of the \(m_{F}=\pm 1\) states and no coherences between any sublevels. As shown in Fig. 2(b), our measurements demonstrate a good agreement with numerical simulations, revealing a fidelity of 0.998. Nonetheless, one can notice that the very small amplitude of the coherences can lead to a deterioration of the phase reconstruction. The very high quality of the reconstruction of these two representative states demonstrates the usefulness of our QST technique. ## 5 Conditioning and optimization of quantum state tomography ### Condition number in linear inversion As mentioned above, the condition number \(\kappa\) is a useful parameter to evaluate the reliability of a QST method [see Eq. (3)].
Specifically, to quantify the ability to tolerate errors or the sensitivity to them, we use the condition number of a (nonsingular) matrix \(\mathbb{C}\), which, assuming the spectral norm \(\|\ldots\|_{2}\), can be defined as [31, 32, 33] \[\kappa(\mathbb{C})=\|\mathbb{C}\|_{2}\ \|\mathbb{C}^{-1}\|_{2}=\max[\mathrm{ svd}(\mathbb{C})]\max[\mathrm{svd}(\mathbb{C}^{-1})]=\frac{\max[\mathrm{svd}( \mathbb{C})]}{\min[\mathrm{svd}(\mathbb{C})]}\geq 1, \tag{8}\] where \(\mathrm{svd}(\mathbb{C})\) denotes the singular values of \(\mathbb{C}\). The significance of this error-robustness parameter is well explained by the Gastinel-Kahan theorem [32], which states that the relative distance of a nonsingular square matrix \(\mathbb{C}\) from the set of singular matrices corresponds to the inverse of its condition number. Figure 2: Comparison of two experimentally reconstructed density matrix elements (blue bars) with simulated ones (red bars), including their amplitude (upper plots) and phase (lower plots). Here we chose simple pumping schemes with (a) circularly polarized and (b) linearly \(\mathbf{z}\)-polarized pumps propagating along the \(\mathbf{x}\)-axis. The fidelity achieved between the experimental results and simulations in both cases exceeds 0.99. Utilizing the error \(\delta\tilde{\mathbf{b}}\) in the observation vector \(\tilde{\mathbf{b}}\) and the condition number \(\kappa(\mathbb{C})\), one can estimate the error \(\delta\rho_{V}\) in the reconstructed density matrix \(\rho_{V}\) from the so-called Atkinson inequalities [31] \[\frac{1}{\kappa(\mathbb{C})}\frac{\|\delta\tilde{\mathbf{b}}\|}{\|\tilde{ \mathbf{b}}\|}\leq\frac{\|\delta\rho_{V}\|}{\|\rho_{V}\|}\leq\kappa(\mathbb{C} )\frac{\|\delta\tilde{\mathbf{b}}\|}{\|\tilde{\mathbf{b}}\|}. \tag{9}\] When the condition number approaches 1, it becomes apparent that small relative variations in the observation vector \(\tilde{\mathbf{b}}\) result in correspondingly small relative changes in the reconstructed state \(\rho_{V}\). In order to account for errors \(\delta\mathbb{C}\) present in the coefficient matrix \(\mathbb{C}\), these inequalities can be expanded according to the formulation derived in Ref. [31], giving rise to the expression \[\frac{\|\delta\rho_{V}\|}{\|\rho_{V}\|}\leq\frac{\kappa(\mathbb{C})}{1-\kappa (\mathbb{C})\|\delta\mathbb{C}\|/\|\mathbb{C}\|}\left[\frac{\|\delta\tilde{ \mathbf{b}}\|}{\|\tilde{\mathbf{b}}\|}+\frac{\|\delta\mathbb{C}\|}{\|\mathbb{ C}\|}\right]. \tag{10}\] By referring to the inequalities in Eqs. (9) and (10), we can infer that the quality of a QST method, in terms of its error sensitivity or robustness, can be assessed through its condition number \(\kappa(\mathbb{C})\), which characterizes the degree to which small (large) changes in the observation vector \(\tilde{\mathbf{b}}\) lead to relatively small (large) changes in the reconstructed state \(\rho_{V}\). Thus, if \(\kappa(\mathbb{C})\) is small (large), the QST method is well-conditioned (ill-conditioned), indicating the robustness (sensitivity) of the method to errors in the observation vector \(\tilde{\mathbf{b}}\). In the case of ill-conditioned QST, even slight errors in \(\tilde{\mathbf{b}}\) can cause significant errors in the reconstructed \(\rho_{V}\). In short, the smaller the condition number, the stronger the robustness of a given linear-inversion-based QST method against errors. Thus, one can refer to a method as optimal in this respect if \(\kappa(\mathbb{C})=1\). Numerical examples of ill-conditioned QST problems can be found in Refs. [31, 17].
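To make the role of \(\kappa(\mathbb{C})\) concrete, the following minimal NumPy sketch computes the condition number of Eq. (8) and the Atkinson bounds of Eq. (9); the coefficient matrix here is a random stand-in for illustration, not the actual matrix of our measurement set.

```python
import numpy as np

def condition_number(C):
    """Spectral-norm condition number of Eq. (8): max(svd) / min(svd)."""
    s = np.linalg.svd(C, compute_uv=False)  # singular values, descending order
    return s[0] / s[-1]

def atkinson_bounds(C, rel_err_b):
    """Atkinson bounds of Eq. (9) on the relative error of rho_V,
    given the relative error of the observation vector b-tilde."""
    k = condition_number(C)
    return rel_err_b / k, k * rel_err_b

# Illustrative stand-in for an overdetermined observable set O and C = O^dag O:
rng = np.random.default_rng(0)
O = rng.normal(size=(12, 8))
C = O.T @ O
low, high = atkinson_bounds(C, rel_err_b=1e-3)
print(f"kappa(C) = {condition_number(C):.2f}, relative error within [{low:.1e}, {high:.1e}]")
```

In this picture, the detuning optimization of the next subsection amounts to tuning \(\zeta(\Delta)\) so that \(\kappa(\mathbb{C})\) approaches its minimum.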
### Optimization via probe light tuning In order to optimize a QST process, it is desired to make the coefficient matrix \(\mathbb{C}\) more isotropic, which means that each measurement brings an equal amount of information about the system. A simple example of such an optimized problem is when each measurement brings information about only a specific density-matrix element, with all measurements having the same weight [17, 18]. In this case, the coefficient matrix \(\mathbb{C}\) is proportional to identity. Even though such optimization is intuitive, it is often impractical, as the experimental transformations required to achieve a desired scheme are very complex. Instead, here we propose a scheme where a single experimental parameter is adjusted. In our case, this parameter is the probing-light detuning, which, incorporated in Eq. (1) through \(\zeta(\Delta)\), makes one of the observables detuning-dependent. It is important to note that our method does not guarantee an optimal tomography process, \(\kappa(\mathbb{C})=1\). Therefore, to explore the limit of the method, we calculate the eigenvalues of the coefficient matrix with \(\zeta(\Delta)\) as a free parameter. In our case, the eigenvalues of \(\mathbb{C}\) can be analytically calculated, taking the values: \(\left\{\frac{1}{100},\,\frac{1}{150},\,\frac{1}{225},\,\frac{1}{225},\,\frac{ 1}{225},\,\frac{\zeta^{2}}{18},\,\frac{\zeta^{2}}{9},\,\frac{\zeta^{2}}{9}\right\}\). From this, we obtain the dependence of \(\kappa(\mathbb{C})\) on \(\zeta(\Delta)\) [Fig. 3(a)], and a minimal possible condition number of 2.25 is determined. To further illustrate the effect of the probing-light detuning on the reconstruction uncertainty and, hence, demonstrate the potential of this approach, we perform a series of reconstructions of a state generated under the same conditions but reconstructed using different probing-light detunings. In our experiment, the detuning is changed from 50 to 270 MHz. The results of these investigations are shown in Fig. 3(b). They demonstrate that the reliability of the reconstruction deteriorates with the detuning, with the condition number reaching its minimum in the vicinity of the center of the Doppler-broadened \(f=1\to F=2\) transition. This agrees with our theoretical prediction of the condition-number detuning dependence, which we calculate assuming that \(\zeta(\Delta)=V_{I}(\Delta)/V_{R}(\Delta)\). #### 5.2.1 Condition number versus the number of measurements The repetition of specific measurements offers a straightforward and versatile method for optimizing the relative weights of the observables used in the state-reconstruction procedure. This approach allows for achieving a condition number arbitrarily close to 1, making it particularly valuable when the previous method is infeasible or when the condition number is desired to be smaller than the detuning-optimized bound (e.g., 2.25). However, it should be noted that this technique is associated with a potential drawback: the number of repetitions required to attain \(\kappa(\mathbb{C})=1\) is typically substantial, especially when dealing with initially high condition numbers, as illustrated in Fig. 4. ## 6 Conclusions In this study, we presented the first experimental implementation of a quantum-state tomography technique originally proposed in Ref. [15]. The technique enabled the successful reconstruction of collective quantum states of a qutrit in room-temperature rubidium vapor at the \(f=1\) ground state with a fidelity of 0.99.
To overcome experimental challenges of the reconstruction, we adapted the CYCLOPS technique, which allowed us to achieve reliable reconstruction by mitigating the problem of unknown phase delays present in the measured signals. Additionally, we presented a comprehensive analysis of the technique by introducing the condition number, which quantifies the reliability of the reconstruction. This parameter was investigated versus different experimental factors, including the tuning of the probing light used for the reconstruction. We demonstrated that, by appropriate tuning of the light, condition numbers as low as 2.25 can be achieved (a condition number of 1 corresponds to an ideal reconstruction). We also demonstrated that further improvement of the reconstruction (lowering of the condition number) can be achieved by the repetition of specific measurements. Figure 3: (a) Condition number of a state measured versus the probing-light detuning from the center of the Doppler-broadened \(f=1\to F=2\) transition. The red points indicate the values calculated based on experimental measurements (the horizontal uncertainty comes from the uncertainty of the detuning; the evaluated vertical errors are small and hence not visible), while the blue line shows the theoretical dependence calculated from the absorption measurements. The green dashed line indicates the smallest \(\kappa(\mathbb{C})=2.25\) achievable using this approach. (b) Relative uncertainty of the linear inversion [see Eq. (9)] as a function of the condition number \(\kappa(\mathbb{C})\). Red points correspond to the reconstruction of the state pumped with circularly polarized light propagating along the \(\mathbf{x}\)-axis [see Fig. 2(a)] and blue points correspond to the reconstruction of the state generated with linearly \(\mathbf{z}\)-polarized pump light, propagating along the \(\mathbf{x}\)-axis [see Fig. 2(b)]. The uncertainty of the condition number is significantly smaller than the size of the data points; solid lines are added for clarity. The successful implementation of the presented QST technique opens up avenues for measuring a range of fundamental properties of qutrits. In the future, we plan to focus on exploring different measures of nonclassicality and establishing their ordering for various classes of quantum states. We also plan a further development of the technique to demonstrate quantum-process tomography, expanding the method's capabilities in the characterization of quantum operations and transformations. Finally, the ability to accurately reconstruct the quantum states of atomic ensembles allows for experimental optimization of the generation of metrologically appealing quantum states. This is the research direction that we currently pursue in our work. ## 7 Acknowledgements The authors would like to thank Arash D. Fard for his help in experimental measurements. The work was supported by the National Science Centre, Poland within the SONATA BIS programme (Grant No. 2019/34/E/ST2/00440). MK would like to acknowledge support from the Excellence Initiative - Research University of the Jagiellonian University in Krakow. A.M. is supported by the Polish National Science Centre (NCN) under the Maestro Grant No. DEC-2019/34/A/ST2/00081.
2303.13093
Type-II Saddles and Probabilistic Stability of Stochastic Gradient Descent
Characterizing and understanding the dynamics of stochastic gradient descent (SGD) around saddle points remains an open problem. We first show that saddle points in neural networks can be divided into two types, among which the Type-II saddles are especially difficult to escape from because the gradient noise vanishes at the saddle. The dynamics of SGD around these saddles are thus to leading order described by a random matrix product process, and it is thus natural to study the dynamics of SGD around these saddles using the notion of probabilistic stability and the related Lyapunov exponent. Theoretically, we link the study of SGD dynamics to well-known concepts in ergodic theory, which we leverage to show that saddle points can be either attractive or repulsive for SGD, and its dynamics can be classified into four different phases, depending on the signal-to-noise ratio in the gradient close to the saddle.
Liu Ziyin, Botao Li, Tomer Galanti, Masahito Ueda
2023-03-23T08:17:10Z
http://arxiv.org/abs/2303.13093v4
# The Probabilistic Stability of Stochastic Gradient Descent ###### Abstract A fundamental open problem in deep learning theory is how to define and understand the stability of stochastic gradient descent (SGD) close to a fixed point. Conventional literature relies on the convergence of statistical moments, especially the variance, of the parameters to quantify the stability. We revisit the definition of stability for SGD and use the _convergence in probability_ condition to define the _probabilistic stability_ of SGD. The proposed stability directly answers a fundamental question in deep learning theory: how SGD selects a meaningful solution for a neural network from an enormous number of solutions that may overfit badly. To achieve this, we show that only under the lens of probabilistic stability does SGD exhibit rich and practically relevant phases of learning, such as the phases of the complete loss of stability, incorrect learning, convergence to low-rank saddles, and correct learning. When applied to a neural network, these phase diagrams imply that SGD prefers low-rank saddles when the underlying gradient is noisy, thereby improving the learning performance. This result is in sharp contrast to the conventional wisdom that SGD prefers flatter minima to sharp ones, which we find insufficient to explain the experimental data. We also prove that the probabilistic stability of SGD can be quantified by the Lyapunov exponents of the SGD dynamics, which can easily be measured in practice. Our work potentially opens a new avenue for addressing the fundamental question of how the learning algorithm affects the learning outcome in deep learning. ## 1 Introduction Stochastic gradient descent (SGD) is the main workhorse for optimizing deep learning models. A fundamental problem in deep learning theory is to characterize how SGD selects the solution of a deep learning model, which often exhibits remarkable generalization capability. At the heart of this problem is the _stability_ of SGD, because models trained with SGD stay close to solutions where the dynamics is stable and move away from unstable ones. Solving this problem thus hinges on having a good definition of the stability of SGD. The established literature often defines the stability of SGD as a function of the variance of the model's parameters or gradients during training. The hidden assumption behind the mainstream thought is that if the variance diverges, then the training becomes unstable (Wu et al., 2018; Zhu et al., 2018; Liu et al., 2020, 2021; Ziyin et al., 2022b). In some sense, the thought that the variance of the parameters matters the most is also an underlying assumption of the deep learning optimization literature, where the quantity of utmost importance is how fast the variance and the expected distance of the parameters decay to zero (Vaswani et al., 2019; Gower et al., 2019). We revisit this perspective and show that a variance-based notion of stability is insufficient to understand the empirically observed stability of training with SGD. For example, we demonstrate natural learning settings where the variance of SGD diverges, yet the model still converges with high probability. In this work, we study the _convergence in probability_ condition to understand the stability of SGD.
We then show that this stability condition can be quantified with a stochastic extension of the Lyapunov exponent (Lyapunov, 1992), a quantity rooted in the study of dynamical systems that has been well understood in physics and control theory (Eckmann and Ruelle, 1985). The main contribution of this work is to propose a new notion of stability that sheds light on how SGD selects solutions and on multiple deep-learning phenomena that can only be understood through this notion. Perhaps the most important implication of our theory is the characterization of the highly nontrivial and practically important phase diagram of SGD in a neural-network loss landscape (as illustrated in Figure 1). ## 2 Problem Setting In this section, we introduce the problem setting, describe the standard linear stability theory, and discuss its implications for understanding the stability of minibatch SGD. Definitions. In a standard supervised learning setting, we are given a data distribution \(p(x,y)\) from which independently sampled input-label pairs \((x,y)\) are drawn. For notational concision, we let \(y=y(x)\) be a function of \(x\), so that \(p(x,y)=p(x)\). We allow \(p(x)\) to be a uniform distribution over samples from a given dataset of size \(N\) or a distribution over a continuous space. The training loss is defined as the empirical expectation \(L(w)=\mathbb{E}_{x}[\ell(w,x)]\) of a sample-wise loss function \(\ell(w,x)\), where \(w\) is the vectorized model parameters. Model training proceeds with the stochastic gradient descent (SGD) algorithm with a batch size \(S\) and learning rate \(\lambda\). At each iteration \(t\), SGD computes the gradient by using a randomly drawn mini-batch of \(S\) samples \((x_{j})_{j=1}^{S}\) from \(p(x)\). Figure 1: **SGD exhibits a complex phase diagram through the lens of probabilistic stability.** (a1) Landscape of a simple two-layer tanh neural network: \(f(x)=u\tanh(wx)\). Triangles show the location of the global minima. The star shows the origin, which is a saddle point. (a2) With a large learning rate, SGD converges to the saddle in probability, even though it escapes in expectation. Here, \(m\) is the Lyapunov exponent times \(t\), which agrees well with a typical learning trajectory. The inset shows the distribution of the parameters before and after training. (b) A phase diagram of SGD. For a matrix factorization saddle point, the dynamics of SGD can be categorized into at least five different phases. Phases **I**, **II**, and **IV** correspond to a successful escape from the saddle. Phase **III** is the case where the model converges to a low-rank saddle point. Phase **I** corresponds to the case \(w_{t}\rightarrow_{p}u_{t}\), which signals correct learning. In phase **Ib**, the model also converges in variance. Phase **II** corresponds to stable but incorrect learning, where \(w_{t}\rightarrow_{p}-u_{t}\). Phase **IV** corresponds to complete instability. See Section 4.2 for a detailed discussion. (c1) Numerical results on training with ResNet18 for an image classification task. The color shows the penultimate-layer representation of ResNet18 trained with different levels of label noise. The inset shows the estimated boundary of rank 511 and 1. We observe that the phase boundary agrees qualitatively with simple two-layer networks without nonlinearity (c2) and with the tanh activation (c3), for which phase boundaries can be theoretically computed.
(c4) The phase diagram is not limited to SGD but also applies to models trained with Adam, suggesting a universal effect that may be attributable to all first-order learning algorithms with minibatch sampling. Then, SGD updates the parameters \(w\) according to the following rule: \[w_{t+1}=w_{t}-\frac{\lambda}{S}\sum_{j=1}^{S}\nabla_{w}\ell(w_{t},x_{j}). \tag{1}\] In this definition, we can handily define gradient descent (GD) as the infinite-\(S\) limit of SGD. To study the stability of SGD, we will focus on the notion of convergence in probability, denoted by \(\to_{p}\). The weight parameter \(w_{t}\) is said to converge to \(c\) in probability if, for any \(\epsilon>0\), \(\lim_{t\to\infty}\mathbb{P}(|w_{t}-c|>\epsilon)=0\). A choice of the learning rate \(\lambda\) and that of the batch size constitutes an important practical problem that involves complicated tradeoffs. On the one hand, one wants to use a large learning rate and a small batch size so that the model trains faster and generalizes better (Shirish Keskar et al., 2016; Hoffer et al., 2017; He et al., 2019; Li et al., 2019; Galanti and Poggio, 2022). On the other hand, one wants to use a small learning rate and a large batch size to keep the training stable and convergent. At the core of this tradeoff thus lies the notion of stability. To understand the stability of an interpolation minimum, we follow the standard practice and consider the linearized dynamics of SGD around a local minimum \(w^{*}\): \[w_{t+1}-w^{*}=w_{t}-w^{*}-\frac{\lambda}{S}\sum_{j=1}^{S}\hat{H}(w^{*},x_{j})(w_{t} -w^{*}), \tag{2}\] where \(\hat{H}(w,x)\coloneqq\nabla_{w}^{2}\ell(w,x)\) is the sample-wise Hessian. The previous literature focuses on variants of the following notion of stability. GD is said to be stable at \(w^{*}\) if \(w_{t}-w^{*}\) converges to zero. For SGD, the learning is considered stable if \(\mathbb{E}[\|w_{t}-w^{*}\|^{2}]\to 0\). In probability theory, this condition is equivalent to the condition that \(w_{t}\) converges to \(w^{*}\) in mean square. It is elementary to show that convergence in mean square implies convergence in probability but not vice versa. We will show that these two convergence conditions are dramatically different for SGD and that convergence in mean square is too strong a condition for understanding the actual learning behavior of SGD in deep learning. Stability of a minimal model. To investigate the stability of the SGD algorithm, we examine a simple one-dimensional linear regression problem. The training loss function for this problem is defined as \(L(w)=\frac{1}{N}\sum_{i=1}^{N}(wx_{i}-y_{i})^{2}\). For GD, the dynamics diverges when the learning rate is larger than twice the inverse of the largest eigenvalue of the Hessian. To see this, let \(H=\mathbb{E}_{x}[\hat{H}(w^{*},x)]\) denote the Hessian of \(L\) and \(h\) its largest eigenvalue. The dynamics of GD leads to \(\|w_{t+1}\|=\|w_{0}(I-\lambda H)^{t}\|\propto|1-\lambda h|^{t}\). Divergence happens if and only if \(|1-\lambda h|>1\). The range of viable learning rates is thus given by: \[\lambda_{\text{GD}}\leq 2/h=2/\mathbb{E}_{x}[x^{2}]. \tag{3}\] Naively, one would expect that a similar condition approximates the stability condition for the case when mini-batch sampling is used to estimate the gradient. This has indeed been argued to be the case in recent works (Wu et al., 2018; Liu et al., 2021; Ziyin et al., 2022).
For SGD, the stability condition is the same as the condition that the second moment of SGD decreases after every time step, starting from an arbitrary initialization (see Appendix A): \[\lambda_{\text{DS}}\leq\frac{2S^{2}\mathbb{E}[x^{2}]}{\mathbb{E}[x^{4}]+(S-1 )^{2}\mathbb{E}[x^{2}]^{2}}. \tag{4}\] Also related is the stability condition proposed by Ziyin et al. (2022), who showed that, starting from a stationary distribution, \(w\) stays stationary under the condition \(\lambda_{\text{SS}}<\frac{2}{h}\frac{1}{1+1/S}\), which we call the stationary linear stability condition (SS). Namely, when minibatch sampling is used, one expects the dynamics to be less stable by a factor of \(1+1/S\). When we have batch size \(1\), the stability condition halves: \(\lambda<1/h\). For all stability conditions, we denote the maximum stable learning rate with an asterisk: \(\lambda^{*}\). The most important prediction made by the linear stability theories is that SGD prefers flatter minima to sharper ones because linear stability is lost as one increases the learning rate above \(2/h\), which is believed to lead to better performance (Zhu et al., 2018; Wu et al., 2018; Xie et al., 2020; Wu et al., 2022). On the contrary, we will show that this mechanism of minimum selection is not what probabilistic stability implies. Throughout this work, we use the term "linear stability theory" to refer to any theory of stability that is based on the statistical moments, in order to emphasize their difference from probabilistic stability, which is our main proposal. ## 3 Convergence in Probability Condition for SGD We now present the main theoretical results of this work. We first show that there exist critical learning rates that are far larger than \(2/h\) for which SGD converges in probability to the global minimum. We then show that such learning rates are not special or isolated points but span a rather large space with a nonzero measure. Lastly, we prove the connection of the notion of convergence in probability to a stochastic generalization of the Lyapunov exponent, which could serve as a foundation for future study of probabilistic stability in the context of deep learning. ### Critical Learning Rates We begin by demonstrating a specific case to illustrate our analysis. Let us consider a simple interpolation regime, where all data points \((x,y)\in\mathbb{R}^{2}\) lie on a straight line. In this situation, the loss function has a unique global minimum of \(w=y_{i}/x_{i}\) for any \(x_{i}\). Our initial question is: what is the maximum learning rate that can be used for SGD without causing divergence? We prove that for all \(i\), SGD is convergent in probability if \(\lambda=1/x_{i}^{2}\). Therefore, the largest stable learning rate is roughly given by: \[\lambda_{\max}=1/x_{\min}^{2}. \tag{5}\] However, for these special choices of learning rates, linear stability is not always guaranteed. As mentioned earlier, convergence in mean occurs when \(\lambda\leq\lambda_{DS}^{*}\), but this condition does not hold when \(\lambda=1/x_{\min}^{2}\) and \(x_{\min}^{2}<\mathbb{E}[x^{2}]/2\), which is often the case for standard datasets. This result shows that the maximal learning rate that ensures stable training can be much larger than the maximal learning rate required for convergence in mean (cf. (4)). For a fixed value of \(\mathbb{E}[x^{2}]\), \(x_{\min}^{2}\) can be arbitrarily small, which means that the maximal stable learning rate can be arbitrarily large.
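The gap between these thresholds is easy to see numerically. The sketch below compares the gradient-descent threshold of Eq. (3), the mean-square threshold of Eq. (4), and the probabilistic threshold of Eq. (5) on a synthetic interpolation dataset; the dataset, random seed, and batch size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)   # synthetic 1D inputs in the interpolation regime
S = 1                       # batch size

m2, m4 = np.mean(x**2), np.mean(x**4)
lam_gd  = 2 / m2                                      # Eq. (3)
lam_ds  = 2 * S**2 * m2 / (m4 + (S - 1)**2 * m2**2)   # Eq. (4)
lam_max = 1 / np.min(x**2)                            # Eq. (5)

print(f"lambda_GD*  = {lam_gd:.3g}")
print(f"lambda_DS*  = {lam_ds:.3g}")
print(f"lambda_max  = {lam_max:.3g}  (typically orders of magnitude larger)")
```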
Another consequence of this result is that the stability of SGD depends strongly on individual data points and not just on summary statistics of the whole dataset. ### Are Such Learning Rates Isolated? The important question is whether the critical learning rates in Proposition 3 are isolated points and whether the convergence is robust against perturbations in the learning rate. The answer is that they are not isolated and the convergence is robust: we show that there exists a wide neighborhood close to the critical learning rates such that the model still converges in probability. We note that convergence in probability is sufficient to ensure that SGD is stable at the corresponding learning rate for all practical purposes, as the probability of an unstable trajectory being observed decreases to zero as the number of training steps increases. Let \(v_{t}:=w_{t}-y_{t}/x_{t}=w_{t}-w^{*}\). The SGD dynamics in terms of \(v_{t}\) can be written as \[v_{t+1}=v_{t}-\lambda x_{t}^{2}v_{t}. \tag{6}\] Apparently, the unique global minimum is \(v^{*}=0\), i.e., \(w^{*}=y_{t}/x_{t}\). Furthermore, because the dynamics of Eq. (6) is independent of \(w^{*}\), we can assume \(w^{*}=y_{t}/x_{t}=0\). The following proposition gives the convergence condition. **Proposition 1**.: _Let \(\lambda\) be such that \(\mathbb{E}_{x}[\log|1-\lambda x^{2}|]\neq 0\). Then, for any \(w_{0}\), \(w_{t}\xrightarrow{}_{p}w^{*}\) if and only if \(\mathbb{E}_{x}[\log|1-\lambda x^{2}|]<0\)._ It is worth remarking that this condition is distinctively different from the case when the gradient noise is a parameter-independent random vector. For example, Liu et al. (2021) showed that if the gradient noise is a parameter-independent Gaussian, SGD diverges in distribution if \(\lambda>2/h\). This suggests that the fact that the noise of SGD is \(w\)-dependent is crucial for its probabilistic stability. This result highlights the importance of the quantity \(m:=\mathbb{E}_{x}[\log|1-\lambda x^{2}|]\) and its sign in understanding the convergence of \(w_{t}\) to the global minimum. When \(m\) is negative, convergence to the global minimum occurs. If \(m\) is positive, SGD becomes unstable. We can determine when \(m\) is negative for a training set of finite size by examining the following equation: \[m=\frac{1}{N}\sum_{i}\log|1-\lambda x_{i}^{2}|, \tag{7}\] which is negative when \(\lambda\) is close to \(1/x_{i}^{2}\) for some \(i\in 1,\ldots,N\). What is the range of \(\lambda\) values that satisfy this condition? Suppose that \(\lambda\) is in the vicinity of some \(1/x_{i}^{2}\): \(\lambda=\delta\lambda+1/x_{i}^{2}\), and the instability is caused by a single outlier data point \(x_{\text{out}}\gg 1\). Then, \(m\) is decided by the competing contributions from the outlier, which destabilizes training, and \(x_{i}^{2}\), which stabilizes training, and the condition is approximately \(|1-\lambda x_{i}^{2}|<1/|\lambda x_{\text{out}}^{2}|\). Because \(\lambda\approx 1/x_{i}^{2}\), this condition leads to: \[|\delta\lambda|<x_{i}^{2}/x_{\text{out}}^{2}. \tag{8}\] This is a small quantity. However, if we change the learning rate to the stability region associated with another data point \(x_{j}\) as soon as we exit the stability region of \(x_{i}\), we still maintain stability. Therefore, the global stability region depends on the density of data points near \(x_{i}\). Assuming there are \(N\) data points near \(x_{i}\) with a variance of \(\sigma^{2}\), the average distance between \(x_{i}\) and its neighbors is approximately \(\sigma^{2}/N\).
As long as \(\sigma^{2}/N<x_{i}^{2}/x_{\text{out}}^{2}\), SGD will remain stable in a large neighborhood. In practical terms, this means that when the number of data points is large, SGD is highly resilient to outliers in the data, as shown in Figure 2. We see that the region of convergence in probability is very dramatic, featuring stripes of convergent regions that correspond to \(1/x_{i}^{2}\) for each data point and divergent regions where \(m>0\). Lastly, we comment that the same analysis carries over to the case of large batch size. The difference is that for \(S=1\), the distribution of the gradient only depends on the distribution of \(x^{2}\), whereas for \(S>1\), the gradient depends on a sum of squares: \(\mathbb{P}\big(\frac{1}{S}\sum_{i}^{S}x_{i}^{2}\big)\). When \(x\) is Gaussian, this is a rescaled chi-squared distribution with the degree of freedom \(S\). Let \(g\) denote the gradient on a single data point. When \(S\to\infty\), the distribution tends to a Gaussian with variance \(v:=\text{Var}[g]/S\) and mean \(\mathbb{E}[g]\). In the case of infinite batch size, the dynamics becomes the same as that of GD, and the condition for convergence reduces to the prediction of the linear stability theory. When the batch size is large but not too large (namely, when the distribution is effectively a Gaussian, but its variance is not vanishingly small), the stability condition is nothing but the expectation of the quantity \(\log|1-\lambda x^{2}|\) against a Gaussian distribution, which exists in general and can be both negative and positive depending on its mean, variance, and \(\lambda\). Also, the case when interpolation is no longer possible is more involved. Here, since there is no single parameter that fits all the batches, the location of the local minimum necessarily oscillates across batches. We analyze this case briefly in Appendix Section A.3. We now show that the above discussion carries naturally to the general linearized SGD dynamics for a multidimensional parameter space in Eq. (2). Figure 2: **Stability of SGD against a single outlier data point** in a dataset of size \(N\). Yellow denotes where SGD converges in probability, and dark blue denotes divergence. We control the norm of the first data point (\(x_{1}^{2}\)) while sampling the rest of the data from a standard normal distribution. (a-c) stability of SGD for different sizes of the dataset; (d) zoom-in of (c) at a small learning rate. The grey dashed curves show \(\lambda_{GD}^{*}\), and the green dashed curve shows \(\lambda_{GD}^{*}/N\). The intermediate finite learning rates are robust against outliers in the data, whereas the smallest learning rates are strongly sensitive to outliers in the data. ### The Stochastic Lyapunov Exponent and Probabilistic Stability In practice, it is difficult to check the condition of convergence in probability. A remaining question is thus whether there is an easy-to-measure quantity that captures the probabilistic stability of SGD. Our theory implies that the following quantity is an important metric: \[m_{w}(t)\coloneqq\mathbb{E}_{w_{t}}[\log|w_{t}-w^{*}|^{2}], \tag{9}\] where the expectation is taken over the trajectories of \(w_{t}\) with different samplings of the minibatch. One is interested in its time dependence and whether it is positive or negative. In fact, \(m/t\) is a stochastic generalization of the Lyapunov exponent, a quantity of fundamental importance in the study of dynamical systems.
\(m>0\) corresponds to a chaotic system that loses stability at the rate \(m/t\), and \(m<0\) corresponds to a stabilizing system that converges to a fixed point. When \(w^{*}\) is not accessible, one can instead study the following quantity: \[m_{L}\coloneqq\mathbb{E}_{w_{t}}[\log L(w_{t})], \tag{10}\] which expands to \(m_{w}\) when \(w_{t}\) is close to an interpolating minimum. From a dynamical system point of view, this quantity can be used to investigate the stability of stationary points of \(L\). Alternatively, \(e^{m_{L}}\) can be seen as a robust estimate of the training loss, and our theory suggests that practitioners may use this quantity as an alternative monitoring quantity to gauge the progress of training, because \(m_{L}\) is much less sensitive to the outliers caused by the use of a small batch size or a large learning rate. In fact, for the example in the previous section, one can show that the condition \(\mathbb{E}[\log|1-\lambda x^{2}|]<0\) is identical to the condition \(m_{L}<0\). The formal connection between the stochastic Lyapunov exponent and convergence in probability is established in the following theorem. **Theorem 1**.: _Let \(g(w)\) be a function of the parameters, \(\Delta g(t)\coloneqq\|g(w_{t})-g^{*}\|\), \(\hat{m}_{g}(w_{t}):=\log\Delta g(w_{t})\), and \(m_{g}(t)=\mathbb{E}_{w}[\hat{m}_{g}(t)]\). Let \(\operatorname{Var}[\log\|\Delta g(t)\|]=o(t^{2})\). Then, \(\Delta g(t)\to_{p}0\) if and only if \(\lim_{t\to\infty}m_{g}(t)<0\)._ A detailed analysis proves that the SGD dynamics in Eq. (2) indeed obeys this theorem (Section A). Note that a key assumption here is that the variance of \(\log\Delta g(w_{t})\) does not grow too fast, which is a mild condition that applies to SGD in general. For example, it is much weaker than the assumption that \(g(t)\) has a well-controlled variance, as in the linear stability theory. Generally speaking, if \(g(w_{t})\) follows a multiplicative process, one expects \(\operatorname{Var}[\log\|\Delta g(t)\|]\propto t\), like a Brownian motion. ## 4 Implications ### Abnormal Sensitivity and Robustness to Outliers One important implication of our result is the robustness of SGD to outliers in comparison to gradient descent. As Figure 2 shows, the bulk region of probabilistic stability stays roughly unchanged as the outlier data point becomes larger and larger; in contrast, both \(\lambda_{GD}^{*}\) and \(\lambda_{DS}^{*}\) decrease quickly to zero. In the bulk region of the learning rates, SGD is thus probabilistically stable but not stable in the moments. Meanwhile, in sharp contrast to this bulk robustness is the sensitivity of the smallest branch of learning rates of SGD to the outliers. Assuming that there is an outlier data point with a very large norm \(c\gg N\), the largest \(\lambda_{\text{GD}}\) scales as \(\lambda_{\text{max}}\sim Nc^{-1}\). In contrast, for SGD, the smallest branch of probabilistically stable learning rates scales as \(c^{-1}\), independent of the dataset size. This means that if we only consider the smallest learning rate, SGD is much less stable than GD, and one needs to use a much smaller learning rate to ensure convergence. For \(\lambda_{\text{DS}}\), a detailed analysis in Section A.2 shows that \(\lambda_{\text{DS}}^{*}=(Nc)^{-1}\). Thus, the threshold of convergence in mean square is yet one order of magnitude smaller than that of probabilistic convergence. In the limit \(N\to\infty\), SGD cannot converge in variance but can still converge in probability.
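This robustness can be checked directly by simulating Eq. (6). The sketch below places a single large outlier in an otherwise Gaussian dataset and uses a learning rate far above the mean-square threshold; the particular dataset, seed, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
x = rng.normal(size=N)
x[0] = 30.0                                   # one outlier with x^2 = 900
lam = 0.5                                     # far above lambda_DS* = 2E[x^2]/E[x^4]

m  = np.mean(np.log(np.abs(1 - lam * x**2)))  # Lyapunov exponent, Eq. (7)
g2 = np.mean((1 - lam * x**2)**2)             # per-step growth of the 2nd moment
print(f"m = {m:.3f}, E[(1 - lam x^2)^2] = {g2:.2e}")
# m < 0 indicates probabilistic stability (Proposition 1), even though g2 > 1
# means the second moment grows at every step.

# Independent SGD trajectories v_{t+1} = (1 - lam * x_t^2) v_t, Eq. (6)
T, R = 500, 500
v = np.ones(R)
for _ in range(T):
    v *= 1 - lam * x[rng.integers(N, size=R)]**2
print(f"median |v_T| = {np.median(np.abs(v)):.1e}")  # shrinks toward zero
# The moment divergence is carried by exponentially rare trajectories that
# repeatedly hit the outlier, which typical samples never exhibit.
```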
### Phase Diagram of SGD With this notion of stability, we can study the actual effect of mini-batch noise on a neural network-like landscape. A commonly-studied minimal model of the landscape of neural networks is a deep linear net (or deep matrix factorization) (Kawaguchi, 2016; Lu and Kawaguchi, 2017; Ziyin et al., 2022; Wang and Ziyin, 2022). For these problems, we understand that all local minima are identical copies of each other, and so all local minima have the same generalization capability (Kawaguchi, 2016; Ge et al., 2016). The special and interesting solutions of a deep linear net are the saddle points, which are low-rank solutions, often achieving similar training loss with dramatically different generalization performances. More importantly, these saddle points also appear in nonlinear models with similar geometric properties, and they could be a rather general feature of the deep learning landscape (Brea et al., 2019). It is thus important to understand how the noise of SGD affects the stability of a low-rank saddle here. Let the loss function be \(\ell=(u\sigma(wx)-y)^{2}\), where \(\sigma\) is any nonlinearity that is locally linear at \(0\). We let \(f(x)=u\sigma(wx)\) and focus on the case where both \(u\) and \(w\) are one-dimensional. Locally around \(u=w=0\), the model is either rank-1 or rank-0. The rank-0 point, where \(f(x)=0\) for all \(x\), is a saddle point as long as \(\mathbb{E}[xy]\neq 0\). In this section, we show that the stability of this saddle point features complex and dramatic phase transition-like behaviors as we change the learning rate of SGD. Consider the linearized dynamics around the saddle at \((0,0)\). The expanded loss function takes the form: \[\ell(u,w,x)\approx-2xy\,uw+\text{const}. \tag{11}\] For learning to happen, SGD needs to escape from the saddle point. For analytical tractability, we let \(xy=1\) with probability \(\mu\) and \(xy=-1\) otherwise, for a controllable parameter \(\mu\). When \(\mu>1/2\), _correct_ learning happens when \(w_{t}\to_{p}u_{t}\). We thus focus on the case when \(\mu>1/2\). The case for \(\mu<1/2\) is symmetric to this case up to a rescaling. We solve the probabilistic stability regimes of this saddle in Section A.6. See Figure 1 (b). The two most important observations are: (1) SGD can indeed converge to low-rank saddle points; however, this happens only when the gradient noise is sufficiently strong and when the learning rate is large (but not too large); (2) the region for convergence to saddles (region III) is exclusive with the region for convergence in mean square (Ia), and thus one can only understand the saddle-seeking behavior of SGD within the proposed probabilistic framework. Rigorously, we prove the following proposition, from which it becomes evident that the low-rank solution is reached when both Lyapunov exponents are negative. **Proposition 2**.: _For any \(\lambda>0\) and \(\mu\in(1/2,1]\), \((u_{t},w_{t})\) converges to \((0,0)\) in probability if and only if \(\mathbb{E}[\log|1+2\lambda xy|]<0\) and \(\mathbb{E}[\log|1-2\lambda xy|]<0\)._ We present empirical demonstrations of this effect in Section 4.4. Many recent works have suggested how neural networks could be biased toward low-rank solutions. Theoretically, Galanti and Poggio (2022) showed that with a weak weight decay, SGD is biased towards low-rank solutions. Ziyin et al. (2022) showed that with weight decay, GD converges to a low-rank solution. Therefore, weight decay already induces a low-rank bias in learning, and it is not known if SGD on its own has any bias toward low-rank solutions. Andriushchenko et al. (2022) showed empirical hints of a preference for low-rank solutions when training without SGD. However, it remains to be clarified when or why SGD has such a preference on its own. To the best of our knowledge, our theory is the first to precisely characterize the low-rank bias of SGD in a deep learning setting.
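Under the two-point model above, the linearized updates implied by Eq. (11), \(u_{t+1}=u_{t}+2\lambda xyw_{t}\) and \(w_{t+1}=w_{t}+2\lambda xyu_{t}\), decouple in the variables \(u\pm w\), each of which is multiplied by \(1\pm 2\lambda xy\) at every step. The sketch below evaluates the two resulting Lyapunov exponents and maps their signs to the phases of Figure 1(b); the specific learning rates, the value \(\mu=0.7\), and the phase labels are illustrative readings of the diagram, not exact boundaries.

```python
import numpy as np

def exponents(lam, mu):
    """Lyapunov exponents of u+w and u-w under xy = +1 w.p. mu, -1 otherwise."""
    m_plus  = mu * np.log(abs(1 + 2 * lam)) + (1 - mu) * np.log(abs(1 - 2 * lam))
    m_minus = mu * np.log(abs(1 - 2 * lam)) + (1 - mu) * np.log(abs(1 + 2 * lam))
    return m_plus, m_minus

for lam in (0.1, 0.4, 2.0):
    mp, mm = exponents(lam, mu=0.7)
    if mp < 0 and mm < 0:
        phase = "III: converges to the rank-0 saddle (Proposition 2)"
    elif mm < 0 < mp:
        phase = "I: escapes with w_t -> u_t (correct learning)"
    elif mp < 0 < mm:
        phase = "II: escapes with w_t -> -u_t (incorrect learning)"
    else:
        phase = "IV: complete loss of stability"
    print(f"lam = {lam}: m+ = {mp:+.3f}, m- = {mm:+.3f} -> phase {phase}")
```

Consistent with the discussion above, it is the intermediate learning rate that is attracted to the saddle.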
Compared with the stability diagram of linear regression, this result implies that a large learning rate can both help and hinder optimization. Our theory shows that the phase diagram of SGD is a function of the data distribution, and it is interesting to explore and compare a few different settings. We consider a size-\(N\) Gaussian dataset. Let \(x\sim\mathcal{N}(0,1)\) and noise \(\epsilon\sim\mathcal{N}(0,4)\), and generate a noisy label \(y=\mu x+(1-\mu)\epsilon\). See the phase diagram for this dataset in Figure 3 for an infinite \(N\). The phase diagrams in Figure 4 show the case of a finite \(N\). We see that the phase diagram has a very rich structure at a finite size. We make three rather surprising observations about the phase diagrams: (1) as \(N\to\infty\), the phase diagram tends to something smooth and quite universal; (2) phase II seems to disappear as \(N\) becomes large; (3) the lower part of the phase diagram seems universal, taking the same shape for all samplings of the datasets and across different sizes of the dataset. This suggests that the convergence to low-rank structures can be a universal aspect of SGD dynamics, which corroborates the widely observed phenomenon of collapse in deep learning (Papyan et al., 2020; Wang and Ziyin, 2022; Tian, 2022). The theory also shows that if we fix the learning rate and noise level, increasing the batch size makes it more and more difficult to converge to the low-rank solution (see Section B). This is expected because the larger the batch size, the smaller the effective noise in the gradient. Figure 4: **Phase diagrams of SGD stability for a finite-size dataset**. The sampling of the data is the same as in Figure 3. From upper left to lower right: \(N=3,\ 4,\ 8,\ 10,\ 24,\ 100\). As the dataset size tends to infinity, the phase diagram converges to that in Figure 3. The lower parts of all the phase diagrams look very similar, suggesting a universal structure. Figure 3: **Phase diagrams of SGD stability**. The definitions of the phases are the same as in Figure 1. We sample a dataset of size \(N\) such that \(x\sim\mathcal{N}(0,1)\) and noise \(\epsilon\sim\mathcal{N}(0,4)\), and generate a noisy label \(y=\mu x+(1-\mu)\epsilon\). Left: the \(\lambda-\mu\) phase diagram for \(S=1\) and \(N=\infty\). Right: the \(\lambda-S\) phase diagram for \(\mu=0.06\) and \(N=\infty\). ### How SGD Selects a Solution We now investigate one of the most fundamental problems in deep learning: how SGD selects a solution for a neural network. In this section, we study a two-layer network with a single hidden neuron with the swish activation function: \(f(w,u,x)=u\times\mathrm{swish}(wx)\), where \(\mathrm{swish}(x)=x\times\mathrm{sigmoid}(x)\). Swish is a differentiable variant of ReLU that was discovered by meta-learning techniques and consistently outperforms ReLU in various tasks. We generate \(100\) data points \((x,y)\) as \(y=0.1\mathrm{swish}(x)+0.9\epsilon\), where both \(x\) and \(\epsilon\) are sampled from normal distributions. See Figure 5 for an illustration of the training loss landscape. Here, there are two local minima: solution A at roughly \((-0.7,-0.2)\) and solution B at \((1.1,-0.3)\). Here, the solution with better generalization is A because it captures the correct correlation between \(x\) and \(y\) when \(x\) is small. Solution A is also sharper; its largest Hessian eigenvalue is roughly \(h_{a}=7.7\). Solution B is the worse solution; it is also flatter, with its largest Hessian eigenvalue being \(h_{b}=3.0\). There is also a saddle point C at \((0,0)\), which performs significantly better than B and slightly worse than A in generalization.
If we initialize the model at A, linear stability theory would predict that, as we increase the learning rate, the model moves from the sharper solution A to the flatter minimum B when SGD loses linear stability at A; the model would then lose total stability once SGD becomes linearly unstable at B, as shown by the red arrows in Figure 5. In contrast, probabilistic stability predicts that SGD will move from A to C as C becomes attractive and then lose stability, as indicated by the black arrows. See the right panel of the figure for the comparison with the experiment for the model's generalization performance. The dashed lines show the predictions of the linear stability theory and the probabilistic theory, respectively. We see that the probabilistic theory predicts both the error and the location of the transition correctly, whereas linear stability predicts neither the right transition nor the correct level of performance. If we initialize at B, the flatter minimum, linear stability theory would predict that as we increase the learning rate, the model will only have one jump, from B to divergence. Thus, from linear stability, SGD would have roughly the performance of B until it diverges, and having a large learning rate will not help with performance. In sharp contrast, probabilistic stability predicts that the model will have two jumps: it stays at B for a small \(\lambda\) and jumps to C as C becomes attractive at an intermediate learning rate. The model will ultimately diverge if C loses stability. Thus, our theory predicts that the model will first have a bad performance, then a better performance at an intermediate learning rate, and finally diverge. See the middle panel of Figure 5. We see that the prediction of the probabilistic stability agrees with the experiment and correctly explains why SGD leads to a better performance of the neural network. ### Neural Network Phase Diagrams We start with a controlled experiment where, at every training step, we sample input \(x\sim\mathcal{N}(0,I_{200})\) and noise \(\epsilon\sim\mathcal{N}(0,4I_{200})\), and generate a noisy label \(y=\mu x+(1-\mu)\epsilon\). Note that \(1-\mu\) controls the level of the noise. Figure 5: **How SGD selects a solution. Left: The landscape of a two-layer network with the swish activation function (Ramachandran et al., 2017). Middle, Right: the generalization performance of the model as one increases the learning rate. Middle: initialized at solution B, SGD first jumps to C and then diverges. Right: initialized at A, SGD also jumps to C and diverges. In both cases, the behavior of SGD agrees with the prediction of the probabilistic stability, instead of the linear stability. Instead of jumping between local minima, SGD at a large learning rate transitions from minima to saddles.**
This suggests that the effects we studied in this work are rather universal, not just a special feature of SGD. Lastly, we train independently initialized ResNets on CIFAR-10 with SGD. The training proceeds with SGD without momentum at a fixed learning rate and batch size \(S=32\) (unless specified otherwise) for \(10^{5}\) iterations. Our implementation of Resnet18 contains 11M parameters in total and achieves 94% test accuracy under the standard training protocol, consistent with the established values. To probe the effect of noise, we artificially inject a dynamical label noise during every training step, where, at every step, a correct label is flipped to a random label with probability \(noise\). See Figure 1. We see that the results agree with the theoretical expectation and with the phase diagram of simpler models. We also study the sparsity of the ResNets in different layers in Appendix B, and we observe that the phase diagrams are all qualitatively similar. We also note that the effect is not due to having a dynamical label noise. Our experiments with a static label noise also show the same results with almost the same regime boundaries. ## 5 Discussion In this work, we have demonstrated that the convergence in probability condition serves as an essential notion for understanding the stability of SGD, leading to highly nontrivial and practically relevant phase diagrams at a finite learning rate. We also clarified its connection to Lyapunov exponents, which are conventional and easy-to-measure metrics of stability in the study of dynamical systems. At a small learning rate and large batch size, the proposed stability agrees with the conventional notion of stability. At a large learning rate and a small batch size, we have shown that the proposed notion of stability captures the actual behavior of SGD much better and successfully explained a series of experiment phenomena that had been quite puzzling. Among the many implications that we discussed, perhaps the most fundamental one is a novel understanding of the implicit bias of SGD. When viewed from a dynamical stability point of view, the implicit bias of stochastic gradient descent is thus fundamentally different from the implicit bias of gradient descent. The new perspective we provide is also different from the popular perspective that SGD influences the learning outcome directly by making the model converge to flat minima. In fact, in our construction, the flatter minimum does not have better generalization properties, nor does SGD favor it over the sharper one. Instead, SGD performs a selection between converging to saddles and local minima, which directly and significantly affects its performance. Thus, we believe that the probabilistic stability is an important future direction to investigate the stability of SGD in a deep learning scenario through this new angle we suggested. In the current work, our analysis centers around studying when and why the quantity \(m\coloneqq\mathbb{E}[\log|w_{t}-w^{*}|]\) becomes positive and does not discuss too much what the magnitude of \(m\) means. In fact, the quantity \(m\) tells us that the quantity \(|w_{t}-w^{*}|\) is typically evolving like \[|w_{t}-w^{*}|\propto e^{m}, \tag{12}\] and, therefore, \(m\) can be seen as a robust metric of the convergence rate of the system. Thus, \(m/t\) can be seen as a metric of the time scale of the relevant dynamics in the neighborhood of a stationary point. 
Moreover, it is much more informative than the common metrics of convergence in the theoretical literature, such as the expected regret or the training loss. As we have argued, these quantities are not good metrics of convergence because they are dominated by rare outliers of trajectories and can diverge even if the system is probabilistically stable. What makes the problem worse is the fact that, at a large learning rate, such outlier trajectories lead to a divergence of the fluctuations. In this sense, \(m\) reflects the _typical_ behavior of training much better. See Section B.1, where we compare \(e^{2m}\) with sample trajectories of learning empirically. Practitioners often plot the training loss on a logarithmic scale vs. training iteration to monitor the progress, where the training loss is estimated by minibatch sampling. For example, see Figures 2 and 3 of (Kingma and Ba, 2014). Looking at the logarithmic scale often gives us a much better grasp of how the training is progressing than looking at the raw training loss. This is essentially monitoring the quantity \(m_{L}\). Our theory, thus, offers a first step towards understanding a more practically relevant metric of training progress.
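As a concrete illustration of this monitoring practice, the sketch below tracks a running estimate of \(m_{L}\) [Eq. (10)] while running SGD on a toy interpolable regression problem; the data, initialization, and learning rate are illustrative assumptions, with \(\lambda\) chosen above the mean-square threshold so that the raw loss moments are dominated by rare outlier trajectories while \(m_{L}\) decreases steadily.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * x                    # interpolable data, so w* = 0.5
w, lam = 5.0, 0.8              # lam exceeds 2E[x^2]/E[x^4] ~ 0.67 for this data

log_losses = []
for t in range(1, 1001):
    i = rng.integers(len(x))
    log_losses.append(np.log((w * x[i] - y[i]) ** 2 + 1e-300))  # guard log(0)
    w -= lam * 2 * (w * x[i] - y[i]) * x[i]                     # SGD step, S = 1
    if t % 250 == 0:
        m_L = np.mean(log_losses[-250:])   # running estimate of m_L, Eq. (10)
        print(f"step {t:4d}: m_L = {m_L:+8.1f}")
```

The steady decrease of \(m_{L}\) reflects the typical trajectory, whereas the raw minibatch loss at the same learning rate is dominated by occasional large spikes.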
2306.01249
Transforming ECG Diagnosis: An In-depth Review of Transformer-based Deep Learning Models in Cardiovascular Disease Detection
The emergence of deep learning has significantly enhanced the analysis of electrocardiograms (ECGs), a non-invasive method that is essential for assessing heart health. Despite the complexity of ECG interpretation, advanced deep learning models outperform traditional methods. However, the increasing complexity of ECG data and the need for real-time and accurate diagnosis necessitate exploring more robust architectures, such as transformers. Here, we present an in-depth review of transformer architectures that are applied to ECG classification. Originally developed for natural language processing, these models capture complex temporal relationships in ECG signals that other models might overlook. We conducted an extensive search of the latest transformer-based models and summarize them to discuss the advances and challenges in their application and suggest potential future improvements. This review serves as a valuable resource for researchers and practitioners and aims to shed light on this innovative application in ECG interpretation.
Zibin Zhao
2023-06-02T03:23:16Z
http://arxiv.org/abs/2306.01249v1
Transforming ECG Diagnosis: An In-depth Review of Transformer-based Deep Learning Models in Cardiovascular Disease Detection ###### Abstract The emergence of deep learning has significantly enhanced the analysis of electrocardiograms (ECGs), a non-invasive method that is essential for assessing heart health. Despite the complexity of ECG interpretation, advanced deep learning models outperform traditional methods. However, the increasing complexity of ECG data and the need for real-time and accurate diagnosis necessitate exploring more robust architectures, such as transformers. Here, we present an in-depth review of transformer architectures that are applied to ECG classification. Originally developed for natural language processing, these models capture complex temporal relationships in ECG signals that other models might overlook. We conducted an extensive search of the latest transformer-based models and summarize them to discuss the advances and challenges in their application and suggest potential future improvements. This review serves as a valuable resource for researchers and practitioners and aims to shed light on this innovative application in ECG interpretation. ECG, Deep learning, Transformer ## I Introduction The development of deep learning has led to significant breakthroughs in various fields, including healthcare. One area where it has made a particularly profound impact is in the analysis of electrocardiograms (ECGs) [1, 2]. ECGs are noninvasive tests that measure the electrical activity of the heart and play a critical role in assessing heart health. However, interpreting ECGs requires extensive education and training [3, 4]. The integration of deep learning into ECG analysis has ushered in a new era of improved accuracy. In recent years, there has been a surge of research exploring deep learning's potential in ECG diagnosis [1, 5]. Various architectures, such as Stacked Auto-Encoders (SAE) [6], Deep Belief Networks (DBN) [7], Convolutional Neural Networks (CNN) [8], and Recurrent Neural Networks (RNN) [9], have been developed and have shown better performance than manual classification by experts. However, due to the increasing complexity of ECG data and the need for more accurate and real-time diagnosis, more robust and efficient deep learning architectures are needed. Transformers, originally designed for natural language processing tasks, have been introduced to ECG classification. The transformer's self-attention mechanism [10] allows for the consideration of the entire sequence of an ECG signal, potentially capturing complex temporal relationships that other architectures might miss. However, there are few comprehensive reviews on the application of transformer architectures to ECG classification. This paper aims to provide a detailed overview of the advances and challenges in applying transformer architectures to ECG classification. We will analyze and summarize the technical underpinnings of transformer models and their application to ECG data in terms of accuracy, efficiency, significance, and potential challenges. Additionally, we will discuss the limitations of the current approaches and the potential improvements to be made on a broader scale for the ECG community in the future. We believe this review will be a valuable resource for researchers and practitioners in the field, shedding light on the novel use of transformer architectures in ECG classification and paving the way for future innovations.
This literature review focuses specifically on transformer-based models in the context of electrocardiogram (ECG) interpretation. While conventional machine learning and other deep learning technologies also play important roles in this field, we will briefly introduce the current advancements but will not discuss them extensively in this review, since there are already many excellent reviews that comprehensively cover these methodologies in the context of ECG analysis [1, 5, 11, 12]. Our primary discussion and comparative analysis will be reserved for the innovative use of transformer-based models in ECG interpretation. This paper is organized as follows: the current state-of-the-art ECG deep learning models are summarized in Section 2. Section 3 discusses some novel deployments of transformers in ECG analysis. In Section 4, we present both challenges and opportunities for deep learning in the ECG community. Finally, a brief conclusion is drawn in Section 5.
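As a concrete point of reference for the models surveyed below, the following is a minimal sketch of a transformer-encoder ECG classifier; the patching scheme, layer sizes, and class count are illustrative assumptions of ours, not the details of any specific reviewed model:

```python
import torch
import torch.nn as nn

class ECGTransformer(nn.Module):
    """Minimal transformer encoder for single-lead ECG classification:
    the signal is cut into non-overlapping patches, each patch becomes a
    token, and self-attention operates over the whole sequence."""
    def __init__(self, n_classes=5, patch=20, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(patch, d_model)                   # patch -> token
        self.pos = nn.Parameter(torch.zeros(1, 512, d_model))    # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, signal_len), len % patch == 0
        b, n = x.shape
        tokens = x.view(b, n // self.patch, self.patch)
        h = self.embed(tokens) + self.pos[:, : n // self.patch]
        h = self.encoder(h)                # self-attention over the full signal
        return self.head(h.mean(dim=1))    # mean-pool tokens, then classify

logits = ECGTransformer()(torch.randn(8, 360))   # e.g. one-second beats at 360 Hz
```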
2305.18248
Do Language Models Know When They're Hallucinating References?
State-of-the-art language models (LMs) are notoriously susceptible to generating hallucinated information. Such inaccurate outputs not only undermine the reliability of these models but also limit their use and raise serious concerns about misinformation and propaganda. In this work, we focus on hallucinated book and article references and present them as the "model organism" of language model hallucination research, due to their frequent and easy-to-discern nature. We posit that if a language model cites a particular reference in its output, then it should ideally possess sufficient information about its authors and content, among other relevant details. Using this basic insight, we illustrate that one can identify hallucinated references without ever consulting any external resources, by asking a set of direct or indirect queries to the language model about the references. These queries can be considered as "consistency checks." Our findings highlight that while LMs, including GPT-4, often produce inconsistent author lists for hallucinated references, they also often accurately recall the authors of real references. In this sense, the LM can be said to "know" when it is hallucinating references. Furthermore, these findings show how hallucinated references can be dissected to shed light on their nature. Replication code and results can be found at https://github.com/microsoft/hallucinated-references.
Ayush Agrawal, Mirac Suzgun, Lester Mackey, Adam Tauman Kalai
2023-05-29T17:12:03Z
http://arxiv.org/abs/2305.18248v3
# Do Language Models Know When They're Hallucinating References? ###### Abstract Current state-of-the-art language models (LMs) are notorious for generating text with "hallucinations," a primary example being book and paper references that lack any solid grounding in their training data. However, we find that many of these fabrications can be identified using the same LM, using only black-box queries without consulting any external resources. Consistency checks done with _direct_ queries about whether the generated reference title is real (inspired by Kadavath et al. [10], Lin et al. [12], Manakul et al. [13]) are compared to consistency checks with _indirect_ queries which ask for ancillary details such as the authors of the work. These consistency checks are found to be partially reliable indicators of whether or not the reference is a hallucination. In particular, we find that LMs in the GPT-series will hallucinate _differing_ authors of hallucinated references when queried in independent sessions, while they will _consistently_ identify the authors of real references. This suggests that the hallucination may be more a result of generation techniques than the underlying representation. ## 1 Introduction Language models (LMs) famously hallucinate1, meaning that they fabricate strings of plausible but unfounded text. As LMs become more accurate, their fabrications become more believable and therefore more problematic. A primary example is "hallucinated references" to non-existent articles with titles readily fabricated by the LM. For instance, a real New York Times article entitled "When A.I. Chatbots Hallucinate" leads with a ChatGPT2-fabricated New York Times article titled "Machines Will Be Capable of Learning, Solving Problems, Scientists Predict" [24]. In this work, we study the problem of hallucinated references. Footnote 1: Though it is an anthropomorphism, we use the term _hallucinate_ due to its widespread adoption, following the use-theory of meaning [25]. We use the terms _hallucinate_ and _fabricate_ interchangeably. Footnote 2: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt) The hallucinated reference problem is worth studying for multiple reasons. First, as we discuss, hallucinated references can easily be evaluated and debunked. Second, hallucinated references impact applications, as LMs help generate literature reviews [11] for the exploration and citation of related work and may assist in the writing of paper reviews [15]. Third, due to the deployment of these models, the problem of hallucinated references has pushed beyond an academic curiosity to attract the attention of the masses [e.g., 4, 24, 14, 22, 18] and has been highlighted as a problem in the medical domain [4, 1] where hallucinations could be extremely harmful. Finally, the insights gained from studying hallucinated references may apply to hallucination in domains beyond references. A motivating question for this work is, _why do LMs hallucinate, and what can be done about it?_ Is it a problem of LM _representation_, a problem of _training_ (maximizing next-word likelihood), or a problem due to the way they are used for _generation_? Specifically, we investigate whether an LM itself can be used to detect whether or not an output it has produced is a hallucination, without any external resources. While this does not provide a complete answer to the questions of why and what to do, it does inform the discussion.
In particular, to the extent that LMs can be used to detect their own hallucinations, this suggests that the hallucination problem is not inherently one of training or representation but is rather one of generation, because the models contain enough information to at least reduce the hallucination rate. In this work, by hallucinations we are referring to fabricated text that has _little or no grounding in the training data_. Note that this has been referred to as _open-domain hallucination_ to distinguish it from _closed-domain hallucination_ (see, e.g., [9]), which is often studied in summarization and machine translation, where the fabrications are defined relative to a specific source document to be summarized or translated (as opposed to the training data). The two types of hallucinations are different in terms of what is often considered a hallucination: background information based on the training corpus is often defined to be a hallucination in the study of closed-domain hallucinations (assuming it is not in the source document, e.g., the text to be translated). However, open-domain hallucination has attracted significant recent attention within scientific communities and journalism. In this work, when we refer to hallucinations we are referring to absolute (i.e., open-domain) hallucinations. Groundedness versus correctness. The opposite of fabrication is _groundedness_ in the sense of being based on the training corpus, rather than accuracy in the sense of being a true fact (a genuine publication, in the case of references). We define hallucination to be fabricated text, meaning text that is not grounded in the training set. In contrast, correctness is evaluated with respect to ground-truth answers. This distinction is called _honesty_ versus _truthfulness_ by Evans et al. [6]. For example, the common misconception that "people use 10% of their brains" is grounded because it is almost surely mentioned in the training data, either exactly or in various paraphrased versions. However, it is not scientifically correct. Much work on hallucination conflates groundedness and accuracy, often equating hallucination with fallacy and evaluating hallucinations using accuracy on fact-based assessments, without regard to the training data [9]. We adopt the groundedness definition of hallucination even though it may often be less clear-cut and more difficult to evaluate than factuality. Evaluating groundedness. Perfectly evaluating hallucinations would require access to the LM's training data. An advantage of the hallucinated reference problem is ease of (approximate) evaluation, in that exact-match Web search is a reasonable heuristic for groundedness. This is because the vast majority of article titles present in the training data are included in Web search results; articles are meant to be published and shared, and publishers aim to make their work discoverable by search. Furthermore, references generally have titles that are specific enough not to spuriously occur on the Web. Regarding other types of hallucinations, besides article names, which cannot be as easily evaluated, we still hope that our methodology and findings would apply, even if evaluating those types of hallucinations would require access to the training data. Figure 1: Example direct vs. indirect LM queries for predicting whether a given paper title is hallucinated or grounded. Direct queries are binary, repeated multiple times to estimate a probability.
Indirect queries are open-ended, and their answers are compared to one another, using the LM, to output an agreement fraction. Language model generations are indicated in **boldface**. Prompts in this figure have been shortened for illustrative purposes. Direct queries. Our work builds upon and is inspired by two recent works that show how to use black-box generative LMs to assess confidence in generations, without consulting external references or inspecting weights. In particular, Kadavath et al. [10] introduce multiple direct black-box strategies for using an LM to extract confidence estimates by querying the language models on question-answer problems. Manakul et al. [13] apply a similar direct self-consistency check called SelfCheckGPT to identify relative hallucinations in a summarization context. These queries are direct true/false correctness queries. We test similar approaches in the context of hallucinated references. Black-box generative approaches stand in contrast to work that either introspects the weights of LMs [2] or that consults existing databases [7]. Indirect queries. In addition, we suggest a new approach using what we call _indirect queries_. A direct query may ask, _Is the following paper real?_ while an indirect query may ask, _Who are the authors of this paper?_, as illustrated in Fig. 1. Answers are then generated to the indirect query in \(i>1\) independent sessions and tested for consistency. The motivation for indirect queries comes from investigative interviews, where detectives are advised to interview individuals separately and ask open-ended questions. For instance, consistency may be better evaluated by asking multiple witnesses to _"Describe in detail what the suspect was holding"_ rather than asking, _"Was the suspect holding a gun in their right hand?"_ [23]. In the context of reference hallucination, our hypothesis is that the likelihood of multiple generations agreeing on the same authors for a hallucinated reference would be smaller than the likelihood of multiple responses to a direct query indicating that the reference exists. Contributions. There are several contributions of this work. First, we perform a systematic LM study of hallucinated references, enabling us to compare hallucination rates across LMs. Second, we introduce indirect queries for evaluating hallucinations. Third, we compare these to direct queries, inspired by studies in LM question-answering [10] and summarization-based hallucinations [13]. A conclusion of our work for reducing hallucination is the recognition that changing the generation pipeline can certainly help, while it is less clear if training or representation changes are necessary. ## 2 Related Work Open-domain hallucinations were discussed in the context of GPT-4 [16; 3] due to their prevalence and potential danger; Bubeck et al. [3, page 82] write: _Open domain hallucinations pose more difficult challenges, perhaps requiring more extensive research, including searches and information gathering outside of the session._ We show that open-domain hallucinations can in fact be addressed, at least in part, without consulting external resources. As mentioned, there are multiple definitions of hallucination. In this work, we use the term hallucinations to mean fabricated text that is not grounded in the training data. Factually incorrect generations can be decomposed into two types of errors: grounded errors, which may be due to fallacies in the training data (e.g., that people use only 10% of their brains), and ungrounded errors.
These two types of errors may need different techniques for remedy. The grounded errors may be reduced by curating a training set with fewer errors or by other techniques such as RLHF [17]. However, the ungrounded errors that we study3 are a fascinating curiosity that still challenges the AI community and is not clearly addressable by improving the training data. The distinction is further elucidated by Evans et al. [6]. Footnote 3: One can also imagine ungrounded correct generations, such as a generated paper title that exists but is not in the training data, but we find these to be quite rare. There is comparatively little prior work studying _open-domain groundedness_ like ours. Some work [e.g., 8] in attribution aims to understand which training examples are most influential in a given output. In recent independent work in the health space, Athaluri et al. [1] did an empirical evaluation of hallucinated references within the medical domain. Similar to our approach, they used a Google search for exact string match as a heuristic for evaluating hallucinations. Our study of hallucinated references enables us to estimate the hallucination rates of different models, and, as discussed in prior work, the hallucination problem interestingly becomes more pressing as models become more accurate because users trust them more [16]. Related recent works include black-box techniques for measuring confidence in LM generations. Although these works are targeted at factual confidence, the approaches are highly related to our work. While Kadavath et al. [10] use probability estimates drawn from LMs, it is straightforward to extend their procedures to generation-only LMs like ChatGPT using sampling. Lin et al. [12] show that LMs can be used to articulate estimates by generating numbers or words as we do. Finally, Manakul et al. [13] perform self-checks in the context of summarizing a document. All of these works use direct queries, which influenced the design of our direct queries. Due to space limitations, we do not discuss the work studying closed-domain hallucination (e.g., in translation or summarization) but instead refer the reader to the recent survey of Ji et al. [9]. ## 3 Methodology We now give an overview of our methodology, followed by further details on our direct and indirect queries. Note that this full pipeline is run separately for each of our LMs, so there is no mixing across LMs. We first describe how we generate lists of candidate reference titles. Generating references. The input to our evaluation is a set of topics from which we generate \(k\) references each using the LM by prompting it with temperature \(1\) as illustrated in Fig. 2. The procedure is re-run if the LM fails to generate a list of \(k\) candidate titles. We then run our classification procedures, described below, on each of the candidate titles. Hallucination estimation procedures. Each of our procedures takes three inputs: 1. A candidate reference title. Given that there is generally less ambiguity in the title of a reference than in the spelling or abbreviation of its authors' names, for each reference we chose to use only its title as input. 2. A black-box LM capable of completing a prompt. This is the most general model, which includes dialogue-based models, such as ChatGPT, that offer an API without probabilities. 3. A number of queries. This parameter is slightly different for direct and indirect queries. * Direct queries: parameter \(j\geq 1\) which determines how many judgments to make.
* Indirect queries: parameter \(i\geq 1\) determining how many indirect responses to request. In our experiments, the candidate title will have been generated using the LM, though this is not a requirement. The procedure detects (possibly) hallucinated references by querying the LM to check the existence of the reference. It does so by making black-box completion queries to the same LM. Finally, the procedure outputs a real-valued prediction in \([0,1]\) of the probability the title is grounded (G) or a hallucination (H). We consider both performing a single judgment \(j=1\) per paper title and \(j>1\) to implement a version of the procedure that outputs probabilities rather than just G/H judgments. Since we do not have access to the probability distribution of the LM completions for all models, the above procedure effectively simulates probabilities using sampling at temperature 1. (Note that each query is run independently "from scratch" in a new prompt; one would expect an artificially high degree of consistency if one were to ask the same query repeatedly within a single dialogue.) Labeling. For labeling, we use exact match in a search engine as a heuristic for assigning G/H labels. These labels are treated as ground truth (though like all labels they have some error and ambiguities). Final receiver operating characteristic (ROC) curves and false discovery rates (FDR) are determined by comparing the ground truth labels to the classifications. Figure 2: The prompt used to generate \(k=5\) reference titles. This method generates both grounded and hallucinated references. Topics are chosen from the ACM Computing Classification System. Note that we also experimented with academic reference APIs such as Semantic Scholar. While these gave thorough details about each paper in their indexes, many grounded references (even for real papers) did not appear in their indexes, and we found search engine results to be significantly more complete. ### Direct queries details The direct query (DQ) procedures simply query whether or not the given title exists, following the format shown in Fig. 3. We created three query templates (DQ1, DQ2, and DQ3) based on the multiple direct query approaches advocated by Kadavath et al. [10], Manakul et al. [13]. The first query asks whether the reference exists directly. However, as discussed in prior work, some LMs can be strongly biased in answering the question when phrased this way, e.g., the reference may be presumed real without any context about where it came from. DQ2 and DQ3 establish the context indicating that the reference was generated by an assistant or LM. DQ3 goes further by giving additional comparisons, as advocated for in prior work. For DQ3, all \(k\) queries from our generation step (using the same LM) are shown. For each query, we generate \(j\geq 1\) completions to approximate the probability distribution of the model. These strings are converted to binary judgments as follows: we calculate how many completions contained the word _yes_ and divide it by the total number of completions to get the estimate of groundedness. This means that empty or otherwise invalid answers were assigned _no_. We do not assume that this score is calibrated, as our analysis considers arbitrary probability thresholds. We sample4\(j\) completions for each direct prompt. Temperature \(1\) is used when \(j>1\) and temperature \(0\) is used when \(j=1\) to approximate the most likely LM completion.
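The direct-query scoring just described can be sketched as follows; `query_lm(prompt, temperature)` is a hypothetical stand-in for the black-box completion API, and the prompt wording is illustrative rather than the exact template of Fig. 3:

```python
import re

def direct_query_score(title, query_lm, j=10):
    """DQ1-style direct check: sample j independent yes/no judgments
    and return the fraction of 'yes' answers as the groundedness score."""
    prompt = ('Is the following reference a real paper? Answer yes or no.\n'
              f'Reference: "{title}"\nAnswer:')
    temperature = 1.0 if j > 1 else 0.0        # temperature 0 for a single judgment
    answers = [query_lm(prompt, temperature) for _ in range(j)]
    yes = sum(bool(re.search(r'\byes\b', ans.lower())) for ans in answers)
    return yes / j   # empty or otherwise invalid answers count as 'no'
```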
Footnote 4: For models that support probability computations, these could be used directly for greater accuracy and efficiency. However, for uniformity, since models such as ChatGPT that we employ do not offer probabilities, we employ sampling. ### Indirect queries details The indirect queries proceed in two steps. Step 1: Interrogation. Separately for each reference, an indirect query is made of the LM \(i>1\) times at temperature 1, as shown in Fig. 4 (top). Responses were truncated to 100 characters. Step 2: Overlap estimation. The LM is used to evaluate overlap between the \(i\) responses. For each pair of answers, an estimate is computed by calling the overlap query, as shown in Fig. 4 (bottom). The leading number is extracted, or, if no number is given, then a 0 is used. (We divide by 100 and clip the answer to the interval \([0,1]\) to convert the percentages to fractions.) It is worth noting that LMs may return an answer that does not consist of a list of authors, such as a long response beginning with _"I could not find a specific reference titled..."_. Thus the overlap estimation prompt clarifies that an answer of 0 should be given if either response is not a list. Figure 3: Examples of the three direct prompt templates used for the direct queries, instantiated with candidate reference titles. The rationale for this approach is that we expect consistent responses to indirect questions to indicate the existence of a grounded reference title, while inconsistent responses may be taken as a warning sign for hallucination. Our method does not rely on external resources and uses the same language model for hallucination detection end-to-end. (A minimal code sketch of this two-step procedure is given at the end of the paper.) Of course, parsing and string-matching could be used in place of an LM for the overlap step, though this would require name matching, which is known to be a thorny problem and one well suited for pretrained LMs. ### Ground Truth Labelling We perform a Web search for the reference title surrounded by quotes (e.g., "Language models are few-shot learners"). If no results are retrieved, we label the reference title as hallucinated, and vice versa. We perform a manual inspection of results to determine the efficacy of this proxy for groundedness of reference titles. ## 4 Results and Discussion In this section, we describe our experiment details, discuss the performance of the indirect and direct methods using quantitative metrics, and present interesting qualitative findings. The code and data generated in our experiments will be made available upon publication. ### Experiment details Models. We use the Azure OpenAI API5 for our LMs. We use the three most powerful models, Davinci _(text-davinci-003)_, ChatGPT _(gpt-35-turbo)_, and GPT-4 _(gpt-4)_, for evaluation and generating the datasets. We also experimented with smaller models, but the accuracy with these models was extremely poor, as in the work of Kadavath et al. [10]. As can be seen in our results, even the performance of the Davinci model was of limited accuracy. Footnote 5: [https://azure.microsoft.com/en-us/products/cognitive-services/openai-service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service) Topics. We use the ACM Computing Classification System6 (CCS) [19] for topics. CCS contains 12 high-level categories, 84 second-level concepts, and 543 subconcepts at the third level of granularity.
For generating the dataset, we sample 200 of the 543 subconcepts uniformly at random, describing each by a topic string of the form _concept: subconcept_ (e.g., _Information retrieval: Retrieval models and ranking_). For each topic, we generate \(k=5\) references. In this manner, we generate \(200\times 5=1000\) candidate paper titles using each LM. Figure 4: Top: Example of the Indirect Query prompt templates instantiated with a candidate title. Bottom: An example of how we estimate overlap between a pair of answers using the LM. Parameters. We selected \(i=3\) indirect query results and took the average of the overlap evaluations to compute the final score for each indirect query experiment. For direct query experiments, we sampled \(j=10\) judgments at temperature 1.0 and reported the fraction of _yes_ responses as a final score. Search engine labels. The Bing search engine API7 is used for searching for the candidate title string on the Web. Note that even with exact string match, some flexibility beyond capitalization and punctuation is allowed. A manual inspection of 120 random examples, given in Appendix C, finds the use of the search engine to be a reliable method for detecting hallucinations. Footnote 7: [https://www.microsoft.com/en-us/bing/apis/bing-web-search-api](https://www.microsoft.com/en-us/bing/apis/bing-web-search-api) ### Quantitative metrics First, Table 1 shows the rates of hallucination for the three models studied. As expected, references produced by the newer models (which achieve higher scores on other benchmarks [20]) also exhibit a higher grounding rate or, equivalently, a lower hallucination rate. While this is expected, it is a positive indicator of the validity of our approach of using search engine results to measure hallucination. (This is discussed further in Appendix C.) Since each of our querying strategies outputs a real-valued score, one can trade off accuracy on G (i.e., how often truly grounded references are labeled G) and H (how often truly hallucinated references are labeled H) by thresholding the score to form a G or H classification. The standard receiver operating characteristic (ROC) curves based on these thresholded scores are shown for each approach and model in Figs. 5(a), 5(b), and 5(c). These figures enable one to explore different points on this trade-off for each classifier. For the Davinci and ChatGPT models, the IQ procedure performs best as quantified via the area under the ROC curve (AUC). For GPT-4 (Fig. 5(c)), both the IQ and DQ approaches work well for classifying hallucination and groundedness, with IQ (AUC: \(0.878\)) and DQ1 (AUC: \(0.887\)) performing the best. The performance of each procedure generally improves as the model size increases. For smaller models, where the procedures perform worst, others have found that users are less likely to believe the generated text due to its inaccuracy [16]. We additionally display \(95\%\) confidence bands for each ROC curve using \(100\) bootstrap replicates and a \(95\%\) confidence interval for the AUC using the DeLong et al. [5] estimate of AUC standard error. Each groundedness classifier can also be used as a filter to generate a list of likely grounded references for a literature review based on the raw generations of an LM.
Aside from relevance, which we do not study in this work, two primary quantities of interest to a user of this filter would be the fraction of references preserved (more references provide a more comprehensive review) and the fraction of preserved references which are actually hallucinations. Fig. 7 shows how these two quantities can be traded off. As one varies the threshold of G/H classification and returns only those references classified as grounded, the false discovery rate (FDR) captures the fraction of references produced which are hallucinations. Users may have a certain rate of tolerance for hallucinations, and one would like to maximize the number of generated references subject to that constraint. For Davinci and ChatGPT, the IQ method achieves significantly lower FDR and provides a substantially better FDR-preservation rate trade-off than the other approaches. For GPT-4, both IQ and DQ methods offer low FDR with comparable trade-offs. Fig. 7 also displays 95% FDR prediction intervals (lighter bands) computed from the quantiles of \(100\) bootstrap replicates and 95% confidence intervals (darker bands) for expected FDR computed from the bootstrap mean \(\pm 1.96\) times the bootstrap standard error. Overall, our hypothesis that indirect queries would be more reliable than direct queries appears to hold for ChatGPT and Davinci; for GPT-4 the direct queries were similarly effective. Finally, we observe that one can improve accuracy for all models using a combination of direct and indirect queries. \begin{table} \begin{tabular}{l l} \hline \hline & **H\%** \\ \hline **GPT-4** & 46.8\% \\ **ChatGPT** & 59.6\% \\ **Davinci** & 73.6\% \\ \hline \hline \end{tabular} \end{table} Table 1: The hallucination rate (out of 1000 generated titles), as determined by ground-truth labels assigned using the Bing search API. Ensemble of the approaches. We find that classification performance increases when we take an ensemble of the different approaches, as illustrated by the ROC curves in Fig. 6. To create the ensembles, we simply compute the mean of the scores and threshold the resulting mean score. The ensemble of the three direct query approaches (computed using the equally-weighted mean of the DQ1, DQ2, and DQ3 scores), which we refer to simply as _DQ_, performs slightly better than the best-performing direct query approach. The ensemble of IQ and DQ (computed using the 50-50 mean of IQ and the DQ mean), referred to as _IQ+DQ_, performs the best for every model. The compute costs, which involve \(\approx\)6.6 million tokens and $412, are discussed in Appendix B. ### Qualitative findings A qualitative examination of the titles generated by the LMs and their classifications according to the Bing search API revealed several interesting observations: 1. Many hallucinated titles were combinations of multiple existing titles. 2. The Bing quoted search heuristic is more lenient than exact match, ignoring more than just capitalization and punctuation. However, presumably since Bing quoted search is designed to facilitate title searches, it works well. 3. Some hallucinations were "plausible sounding," such as _A survey on X_ for topic \(X\), even when such a survey did not exist. 4. Direct methods may fail to identify hallucinations on "plausible sounding" titles such as surveys or book chapters. The indirect method also sometimes failed to identify a hallucination because
the LM would consistently produce a "likely author" based on the title, for a given non-existent paper. For example, GPT-4 hallucinated the title _Introduction to Operations Research and Decision Making_, but there is a real book called _Introduction to Operations Research_. In all three indirect queries, it hallucinated the authors of the existing book, _Hillier Frederick S., Lieberman Gerald J._. Similarly, for the hallucinated title _Exploratory Data Analysis and the Role of Visualization_, 2 of 3 indirect queries produced _John W. Tukey_, the author of the classic _Exploratory Data Analysis_. 5. The indirect method may sometimes fail to identify a grounded paper title which it can recognize/generate, as it may simply not be able to generate authors not encoded in its weights. Figure 5: ROC curves for the four procedures, indirect queries and direct queries 1-3, left to right. In (a) the procedures have little effect on mitigating hallucination in the Davinci model, where hallucination was rampant, though IQ does help the most. In (b) again the IQ procedure does help, while most of the DQ procedures are of little value. In (c), for GPT-4, all procedures are significantly effective, with large overlaps in the confidence intervals and AUCs. Since, in many applications, identifying potential hallucinations is more important than recognizing all grounded citations, errors due to falsely marking an H as a G are arguably more problematic than classifying a G as an H. A manual examination of 120 examples is given in Appendix C. ## 5 Conclusions, Limitations, and Future Work This work investigates the hallucinated reference problem in LMs and provides a methodology by which LMs can be used for self-detection of hallucinations. Both direct and indirect queries were found to be effective for language models, and combining multiple methods led to further improvements in accuracy. There are several limitations of this work. First, as discussed earlier, because we used LMs with inaccessible training data, we cannot conclude what is truly grounded versus hallucinated. Second, while we consider a binary notion of hallucination in this work, as is done in much prior work, the notion of hallucination is not entirely black and white. Figure 6: Ensembles combining approaches outperform the best single approach. Left to right: ROC curves for IQ, the best-performing direct query approach, the ensemble DQ averaging the three direct query approaches, and the ensemble of the IQ and DQ approaches. The ensemble of DQ approaches performs a bit better than the best-performing DQ approach; the ensemble IQ+DQ performs the best for all three models. Third, LMs are notoriously sensitive to prompt wording, and some of our findings comparing direct and indirect queries may be sensitive to the specific wording in the prompt. Since we use the ACM Computing Classification System for our topics, the results are heavily biased towards computer science references, though it would be straightforward to re-run the procedure on any given list of topics. Also note that LMs have been shown to exhibit gender and racial biases [21], which may be reflected in our procedure. In particular, our procedure may not recognize certain names as likely authors, or it may perform worse at matching names of people in certain racial groups where there is less variability in names. Since our work compares LMs and hallucination estimation procedures, the risk is lower compared to a system that might be deployed using our procedures to reduce hallucination.
Before deploying any such system, one should perform a more thorough examination of potential biases against sensitive groups and accuracy across different research areas. There are several directions for future work. Of course, an important consequence of our work is the recognition that reducing hallucination may be a problem at generation time; inventing improved (non-black-box) generation procedures is thus a crucial direction for future work. There are also several more immediate ways in which our work may be extended. First, one may improve accuracy by adding more indirect questions such as year or venue. These pose additional challenges, as a paper with the same title and authors may often appear in multiple venues (e.g., arXiv, a workshop, a conference, and a journal) in different years. Second, it would be very interesting to see if the methods we employ could be used to identify other types of open-domain hallucinations beyond references. Even though hallucinated references are often given as a blatant example of hallucination, perhaps due to the ease with which they can be debunked, these other types of hallucination are also important. Following the investigative interviewing analogy, one way to discover general hallucinations would be to query the LM for "notable, distinguishing details" about the item in question. One could then use an LM to estimate the consistency between multiple answers. However, as mentioned, for other domains besides references, it may be impossible to determine whether or not a generation is a hallucination without access to the training set (and unclear even with such access). In summary, open-domain hallucination is an important but slippery concept that is difficult to measure. By studying it in the context of references using search engine results, we can quantitatively compare hallucinations across LMs, and we can also quantitatively compare different black-box detection methods. Of course, for the sole purpose of detection, one could achieve higher accuracy by directly consulting curated publication indexes. However, we hope that our study of black-box self-detection of hallucinated references sheds light on the nature of open-domain hallucination more broadly, where detecting hallucinations is more challenging. It suggests that hallucination is not entirely a problem of training but rather one that can be addressed using only the same internal model representation with different generation procedures. While our direct and indirect query methods are only partially reliable and impractically expensive, we hope they may pave the way towards more efficient methods that generate text with fewer hallucinations and thereby reduce the potential harms of language models. Figure 7: False discovery rate (FDR) vs. fraction of references preserved for each groundedness filter (IQ, DQ1, DQ2, DQ3) and language model. The FDR represents the fraction of preserved references that are actually hallucinations. For unachievable values of the fraction of references preserved (below the minimal fraction achievable by thresholding), we extrapolate each curve by uniformly subsampling references with maximal scores. ## References * Athaluri et al. [2023] Sai Anirudh Athaluri, Sandeep Varma Manthena, V S R Krishna Manoj Kesapragada, Vineel Yarlagadda, Tirth Dave, and Rama Tulasi Siri Duddumpudi. 2023. Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References.
_Cureus_ (April 2023). [https://doi.org/10.7759/cureus.37432](https://doi.org/10.7759/cureus.37432) * Azaria and Mitchell [2023] Amos Azaria and Tom Mitchell. 2023. The Internal State of an LLM Knows When It's Lying. [https://doi.org/10.48550/arXiv.2304.13734](https://doi.org/10.48550/arXiv.2304.13734) arXiv:2304.13734 [cs]. * Bubeck et al. [2023] Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. 2023. Sparks of Artificial General Intelligence: Early experiments with GPT-4. [https://doi.org/10.48550/arXiv.2303.12712](https://doi.org/10.48550/arXiv.2303.12712) arXiv:2303.12712 [cs]. * Dash et al. [2023] Debadutta Dash, Rahul Thapa, Juan M. Banda, Akshay Swaminathan, Morgan Cheatham, Mehr Kashyap, Nikesh Kotecha, Jonathan H. Chen, Saurabh Gombar, Lance Downing, Rachel Pedreira, Ethan Goh, Angel Arnaout, Garret Kenn Morris, Honor Magon, Matthew P. Lungren, Eric Horvitz, and Nigam H. Shah. 2023. Evaluation of GPT-3.5 and GPT-4 for supporting real-world information needs in healthcare delivery. [https://doi.org/10.48550/arXiv.2304.13714](https://doi.org/10.48550/arXiv.2304.13714) arXiv:2304.13714 [cs]. * DeLong et al. [1988] Elizabeth R DeLong, David M DeLong, and Daniel L Clarke-Pearson. 1988. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. _Biometrics_ (1988), 837-845. * Evans et al. [2021] Owain Evans, Owen Cotton-Barratt, Lukas Finnveden, Adam Bales, Avital Balwit, Peter Wills, Luca Righetti, and William Saunders. 2021. Truthful AI: Developing and governing AI that does not lie. [https://doi.org/10.48550/arXiv.2110.06674](https://doi.org/10.48550/arXiv.2110.06674) arXiv:2110.06674 [cs]. * Guo et al. [2022] Zhijiang Guo, Michael Schlichtkrull, and Andreas Vlachos. 2022. A Survey on Automated Fact-Checking. _Transactions of the Association for Computational Linguistics_ 10 (Feb. 2022), 178-206. [https://doi.org/10.1162/tacl_a_00454](https://doi.org/10.1162/tacl_a_00454) * Guu et al. [2023] Kelvin Guu, Albert Webson, Ellie Pavlick, Lucas Dixon, Ian Tenney, and Tolga Bolukbasi. 2023. Simfluence: Modeling the Influence of Individual Training Examples by Simulating Training Runs. [https://doi.org/10.48550/arXiv.2303.08114](https://doi.org/10.48550/arXiv.2303.08114) arXiv:2303.08114 [cs]. * Ji et al. [2023] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of Hallucination in Natural Language Generation. _Comput. Surveys_ 55, 12 (Dec. 2023), 1-38. [https://doi.org/10.1145/3571730](https://doi.org/10.1145/3571730) * Kadavath et al. [2022] Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. 2022. Language Models (Mostly) Know What They Know. [https://doi.org/10.48550/arXiv.2207.05221](https://doi.org/10.48550/arXiv.2207.05221) arXiv:2207.05221 [cs]. * Kung [2023] Janice Y. Kung. 2023. Elicit.
_The Journal of the Canadian Health Libraries Association_ 44, 1 (April 2023), 15-18. [https://doi.org/10.29173/jchla29657](https://doi.org/10.29173/jchla29657) * Lin et al. [2022] Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Teaching Models to Express Their Uncertainty in Words. [https://doi.org/10.48550/arXiv.2205.14334](https://doi.org/10.48550/arXiv.2205.14334) arXiv:2205.14334 [cs] version: 1. * Manakul et al. [2023] Potsawee Manakul, Adian Liusie, and Mark J. F. Gales. 2023. SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models. [http://arxiv.org/abs/2303.08896](http://arxiv.org/abs/2303.08896) arXiv:2303.08896 [cs]. * Moran [2023] Chris Moran. 2023. ChatGPT is making up fake Guardian articles. Here's how we're responding. _The Guardian_ (April 2023). [https://www.theguardian.com/commentsfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article](https://www.theguardian.com/commentsfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article) * Nikiforovskaya et al. [2020] Anna Nikiforovskaya, Nikolai Kapralov, Anna Vlasova, Oleg Shpynov, and Aleksei Shpilman. 2020. Automatic generation of reviews of scientific papers. In _2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA)_. 314-319. [https://doi.org/10.1109/ICMLA51294.2020.00058](https://doi.org/10.1109/ICMLA51294.2020.00058) * OpenAI [2023] OpenAI. 2023. GPT-4 Technical Report. [https://doi.org/10.48550/arXiv.2303.08774](https://doi.org/10.48550/arXiv.2303.08774) arXiv:2303.08774 [cs]. * Ouyang et al. [2022] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. [https://doi.org/10.48550/arXiv.2203.02155](https://doi.org/10.48550/arXiv.2203.02155) arXiv:2203.02155 [cs]. * Pelley [2023] Scott Pelley. 2023. Is artificial intelligence advancing too quickly? What AI leaders at Google say. [https://www.cbsnews.com/news/google-artificial-intelligence-future-60-minutes-transcript-2023-04-16/](https://www.cbsnews.com/news/google-artificial-intelligence-future-60-minutes-transcript-2023-04-16/) * Rous [2012] Bernard Rous. 2012. Major update to ACM's Computing Classification System. _Commun. ACM_ 55, 11 (Nov. 2012), 12. [https://doi.org/10.1145/2366316.2366320](https://doi.org/10.1145/2366316.2366320) * Srivastava et al. [2022] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adria Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek,...(421-others), and Ziyi Wu. 2022. Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. [https://doi.org/10.48550/ARXIV.2206.04615](https://doi.org/10.48550/ARXIV.2206.04615) * Swinger et al. [2019] Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark DM Leiserson, and Adam Tauman Kalai. 2019. What are the biases in my word embedding?. In _Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society_. 305-311. * Tangermann [2023] Victor Tangermann. 2023. Newspaper Alarmed When ChatGPT References Article It Never Published. 
[https://futurism.com/newspaper-alarmed-chatgpt-references-article-never-published](https://futurism.com/newspaper-alarmed-chatgpt-references-article-never-published) * Vredeveldt et al. [2014] Annelies Vredeveldt, Peter J. van Koppen, and Par Anders Granhag. 2014. The Inconsistent Suspect: A Systematic Review of Different Types of Consistency in Truth Tellers and Liars. In _Investigative Interviewing_, Ray Bull (Ed.). Springer, New York, NY, 183-207. [https://doi.org/10.1007/978-1-4614-9642-7_10](https://doi.org/10.1007/978-1-4614-9642-7_10) * Weise and Metz [2023] Karen Weise and Cade Metz. 2023. When A.I. Chatbots Hallucinate. _The New York Times_ (May 2023). [https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html](https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html) * Wittgenstein [2001] Ludwig Wittgenstein. 2001. _Philosophical Investigations: The German Text, with a Revised English Translation_. Blackwell. Google-Books-ID: t_dPcAAACAAJ. ## Appendix A Licenses and Terms of Use According to the OpenAI terms of use Sharing and Publication policy,8 they "welcome research publications related to the OpenAI API." Following the Bing Search API Legal Information9, we do not store the results of the search queries but rather only whether or not there were any results. According to the ACM,10 "The full CCS classification tree is freely available for educational and research purposes." (This section will be included with any published version of our paper.) Footnote 8: [https://openai.com/policies/sharing-publication-policy](https://openai.com/policies/sharing-publication-policy) Footnote 9: [https://www.microsoft.com/en-us/bing/apis/legal](https://www.microsoft.com/en-us/bing/apis/legal) Footnote 10: [https://www.acm.org/publications/class-2012](https://www.acm.org/publications/class-2012) ## Appendix B Computation and cost We use OpenAI API for running the experiments on GPT-4, ChatGPT and Davinci. We show the average tokens consumed for prompt and completion for each of the approaches and data generation per candidate query in Tables 2 to 4. We estimate the cost based on the pricing details available as of May 2023.11 For GPT-4, around 2.2M tokens were used amounting to roughly $74 to evaluate all approaches. For ChatGPT, around 2.3M tokens were used amounting to roughly $5. For Davinci, around 2.1M tokens were used amounting to roughly $258. For Bing Search, we use an S1 instance of the Bing Search API 12. We made 3,000 queries in all to this endpoint amounting to $75. Summing these costs gives a total of $412. The compute requirements of combining these results were negligible. While the exact model sizes and floating point operations are not publicly available for these models, the total cost gives a rough idea on the order of magnitude of computation required in comparison to the hourly cost of, say, a GPU on the Azure platform. Footnote 11: [https://openai.com/policies/sharing-publication-policy](https://openai.com/policies/sharing-publication-policy) Footnote 12: [https://www.microsoft.com/en-us/bing/apis/pricing](https://www.microsoft.com/en-us/bing/apis/pricing)
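For completeness, here is the minimal sketch of the two-step indirect-query procedure (Section 3.3) referenced earlier; as before, `query_lm(prompt, temperature)` is a hypothetical stand-in for the black-box completion API, and the prompt wording is illustrative:

```python
def indirect_query_score(title, query_lm, i=3):
    """Two-step indirect check: ask for the authors i times in fresh
    sessions, then let the LM grade pairwise overlap on a 0-100 scale."""
    ask = f'Who are the authors of the paper "{title}"? List the names.'
    answers = [query_lm(ask, 1.0)[:100] for _ in range(i)]  # truncate to 100 chars
    scores = []
    for a in range(i):
        for b in range(a + 1, i):
            grade = query_lm(
                'On a scale of 0 to 100, how much do these two author lists '
                'overlap? Answer 0 if either is not a list of names.\n'
                f'A: {answers[a]}\nB: {answers[b]}\nScore:', 0.0)
            try:                      # extract the leading number, defaulting to 0
                value = float(grade.strip().split()[0])
            except (ValueError, IndexError):
                value = 0.0
            scores.append(min(max(value / 100.0, 0.0), 1.0))
    return sum(scores) / len(scores)  # mean pairwise agreement in [0, 1]
```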
2308.11290
ShadowNet for Data-Centric Quantum System Learning
Understanding the dynamics of large quantum systems is hindered by the curse of dimensionality. Statistical learning offers new possibilities in this regime by neural-network protocols and classical shadows, while both methods have limitations: the former is plagued by the predictive uncertainty and the latter lacks the generalization ability. Here we propose a data-centric learning paradigm combining the strength of these two approaches to facilitate diverse quantum system learning (QSL) tasks. Particularly, our paradigm utilizes classical shadows along with other easily obtainable information of quantum systems to create the training dataset, which is then learnt by neural networks to unveil the underlying mapping rule of the explored QSL problem. Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems at the inference stage, even with few state copies. Besides, it inherits the characteristic of classical shadows, enabling memory-efficient storage and faithful prediction. These features underscore the immense potential of the proposed data-centric approach in discovering novel and large-scale quantum systems. For concreteness, we present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits. Our work showcases the profound prospects of data-centric artificial intelligence to advance QSL in a faithful and generalizable manner.
Yuxuan Du, Yibo Yang, Tongliang Liu, Zhouchen Lin, Bernard Ghanem, Dacheng Tao
2023-08-22T09:11:53Z
http://arxiv.org/abs/2308.11290v1
# ShadowNet for Data-Centric Quantum System Learning ###### Abstract Understanding the dynamics of large quantum systems is hindered by the curse of dimensionality. Statistical learning offers new possibilities in this regime by neural-network protocols and classical shadows, while both methods have limitations: the former is plagued by the predictive uncertainty and the latter lacks the generalization ability. Here we propose a data-centric learning paradigm combining the strength of these two approaches to facilitate diverse quantum system learning (QSL) tasks. Particularly, our paradigm utilizes classical shadows along with other easily obtainable information of quantum systems to create the training dataset, which is then learnt by neural networks to unveil the underlying mapping rule of the explored QSL problem. Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems at the inference stage, even with few state copies. Besides, it inherits the characteristic of classical shadows, enabling memory-efficient storage and faithful prediction. These features underscore the immense potential of the proposed data-centric approach in discovering novel and large-scale quantum systems. For concreteness, we present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits. Our work showcases the profound prospects of data-centric artificial intelligence to advance QSL in a faithful and generalizable manner. ## I Introduction The precise characterization of quantum systems holds paramount significance in the development, validation, and evaluation of emerging quantum technologies [1, 2]. However, the rapid progress of quantum technologies presents an escalating challenge in fully describing modern quantum devices, since a general \(N\)-qubit state tomography necessitates a number of state replicas exponential in \(N\) for a reliable estimation [3, 4, 5, 6, 7, 8, 9]. Fortunately, many physical scenarios offer a ray of hope, as structural information about the target systems is known, allowing scenario-specific models to learn large-scale quantum systems efficiently. As a result, great efforts have been made in designing effective models for quantum system learning (QSL), encompassing tasks such as low-entangled state reconstruction [10, 11, 12], randomized benchmarking [13, 14, 15], direct fidelity estimation [16, 17, 18], Hamiltonian learning [19, 20, 21], and self-testing [22, 23, 24]. Although diverse in specific tasks, all models share a common target: to acquire the information of interest about the quantum system _fast_ and to _minimize_ the number of required state copies. Statistical learning algorithms are leading candidates to tackle QSL tasks, due to their intrinsic data-driven nature [25, 26]. Two prominent solutions in this context are shadow tomography [27] and neural-network-based QSL (NN-QSL) [28, 29, 30, 31]. Shadow tomography rests on the fact that many practical QSL tasks only require accurate predictions of specific properties of quantum systems, rather than their full classical descriptions. Myriad algorithms have been proposed to accomplish shadow tomography [32, 33, 34]. Among them, the most viable one is classical shadows [17], requiring only polynomially many state copies to predict an exponential number of target functions.
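For readers unfamiliar with the protocol, the following is a minimal single-qubit sketch of classical shadows with random Pauli measurements, written here purely for illustration; it uses the standard inverted measurement channel \(3U^{\dagger}|b\rangle\langle b|U-I\), so that snapshots average to the state:

```python
import numpy as np

rng = np.random.default_rng(7)
I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.diag([1.0, -1j])              # S^dagger
ROTATIONS = [H, H @ Sdg, I2]           # rotate into the X-, Y-, Z-eigenbasis

def classical_shadow(rho, n_snapshots):
    """Single-qubit random-Pauli classical shadows: each snapshot is the
    inverted measurement channel 3 U^dag |b><b| U - I."""
    snaps = []
    for _ in range(n_snapshots):
        U = ROTATIONS[rng.integers(3)]
        p = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0, None)
        b = rng.choice(2, p=p / p.sum())           # simulated measurement outcome
        ket = np.zeros((2, 1), dtype=complex); ket[b] = 1.0
        snaps.append(3 * U.conj().T @ (ket @ ket.conj().T) @ U - I2)
    return snaps

# Estimate <Z> for rho = |0><0| (true value 1) from 2000 snapshots.
Z = np.diag([1.0, -1.0]).astype(complex)
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
print(np.mean([np.trace(Z @ s).real for s in classical_shadow(rho0, 2000)]))
```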
In parallel, NN-QSL makes use of the strong power of deep neural networks (DNNs) to learn a class of quantum states with similar structures by extracting their hidden features [35]. Over the past years, varied architectures of DNNs have been proposed to tackle different QSL tasks, capitalizing on enhanced efficiency [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47] and generalization ability [48, 49, 50, 51, 52]. Despite their promising potential, both classical shadows and NN-QSL exhibit manifest limitations. Classical shadows and their variants lack generalizability, hinting at their inability to extract knowledge from a class of states to reduce the sample complexity towards the desired estimation accuracy. On the other hand, although some NN-QSL protocols address this issue with the aid of the supervised learning framework, concerns arise regarding the faithfulness of their outputs at the inference stage [29, 30]. This deficiency poses a critical challenge for using NN-QSL to learn unseen quantum systems, as the predictions may deviate significantly from the ground truth. In this study, we present a novel learning paradigm, dubbed ShadowNet, which combines the strengths of classical shadows and NN-QSL to _efficiently_, _faithfully_, and _generalizably_ solve various QSL tasks. A fundamental aspect of ShadowNet lies in establishing a generic construction rule for the dataset used to train NN-QSL, incorporating classical shadows with readily accessible information about the explored quantum system. The emphasis on the importance of the dataset aligns with the concept of data-centric AI, highlighting the crucial role of improving datasets to enhance performance in practical machine learning applications [53; 54]. As depicted in Fig. 1(a), training on this dataset provides two attractive benefits: empowered by the generalization ability of DNNs, ShadowNet enables offline training and fast prediction at the inference stage, even when limited state copies are available; moreover, the predictive faithfulness can be evaluated against the estimated properties of classical shadows. These distinctive features hint at the great potential of ShadowNet in dealing with unseen quantum systems, and pave the way for using classical shadows as foundational elements in the development of _data-centric_ QSL. Although the core of the QSL dataset is classical shadows and the system information, the detailed formalism of the training data is contingent upon the specific problem being addressed, and the learning strategy employed by ShadowNet should be meticulously tailored to accommodate such specific data features. To exhibit the effectiveness of ShadowNet, we instantiate it to solve quantum state tomography and direct fidelity estimation tasks. Notably, the proposed formalism and learning strategy can be extended to tackle other substantial QSL tasks, which may be of independent interest. Numerical results exhibit the efficacy of ShadowNet in acquiring the desired information about the system using a reduced number of copies at the scale of up to 60 qubits.

## II Data-centric quantum system learning and ShadowNet

The paradigm of data-centric quantum system learning (QSL) underscores the paramount significance of elevating datasets as a potential avenue to enhance both the efficiency and fidelity of learning models when tackling QSL tasks. This concept stands in contrast to the traditional model-centric QSL approach, which revolves around the utilization of diverse deep learning models for QSL.
Precisely, in the realm of data-centric QSL, the construction of datasets should adhere to three fundamental requirements: datasets designed for QSL should be memory efficient; the dimension of the data features cannot be the sole computational bottleneck for learning models; and the data features should contain ample information to facilitate predictive uncertainty estimation while enabling an efficient collection process. Guided by these principles, ShadowNet creates the training dataset as follows. Let \(\mathcal{D}_{\text{Tr}}=\{\mathbf{x}^{(i)},\mathbf{y}^{(i)}\}_{i=1}^{n}\) be the training dataset, containing \(n\) examples sampled from the underlying distribution \(\mathbb{D}\). For the \(i\)-th example, \(\mathbf{x}^{(i)}\) includes the classical shadows \(\hat{\rho}^{(i)}\) of the target state \(\rho^{(i)}\) with \(M\) snapshots, whose magnitude depends on the desired faithfulness (see SM C for explanations), and other easily obtained information \(\mathbf{z}^{(i)}\) that describes the quantum system. The label \(\mathbf{y}^{(i)}\) depends on the QSL task at hand, e.g., it refers to \(\rho^{(i)}\) in state tomography. Once \(\mathcal{D}_{\text{Tr}}\) is prepared, ShadowNet employs a tailored DNN to learn the mapping rule from \(\mathbf{x}\) to \(\mathbf{y}\) (see Fig. 1).

Figure 1: **The scheme of ShadowNet**. (a) On the relation of ShadowNet, classical shadows, and NN-QSL. The labels 'Target', 'NN-QSL', 'Shadow', and 'ShadNet' refer to the target result, the outputs of NN-QSL, classical shadows, and ShadowNet, respectively. Although NN-QSL keeps the estimation error on the training data below \(\epsilon_{1}\), the predictive faith on new states is unwarranted, highlighted by the dashed green stars. The estimation error of classical shadows, \(\epsilon_{2}\), depends on the number of measurements on each state. ShadowNet incorporates the strengths of classical shadows and NN-QSL to faithfully predict unseen states using very few copies. (b) The basic mechanism of ShadowNet. The upper panel illustrates the training procedure. ShadowNet constructs the training dataset in which each example consists of classical shadows, system information, and ground truth. The classical shadows and system information should be preprocessed before being fed into a handcrafted deep neural network (DNN), highlighted by the dashed cylinder. The detailed processing rule depends on the QSL problem at hand. The handcrafted DNN is optimized for \(T\) epochs to minimize the predefined loss. The lower panel depicts the inference procedure. Given a new input, the same preprocessing rule is applied and the processed data is then fed into the trained DNN. (c) Faithfulness evaluation of ShadowNet. The predictive faith of ShadowNet can be effectively examined by its shadow estimation.

Denote \(\mathcal{A}(\mathbf{x};\mathbf{w})\) as the prediction of the DNN, with \(\mathbf{w}\) being the weights to be optimized. The objective function of ShadowNet is \[\mathcal{L}(\mathbf{w})=\frac{1}{n}\sum_{i=1}^{n}\mathsf{L}\left(\mathcal{A}(\mathbf{x}^{(i)};\mathbf{w}),\mathbf{y}^{(i)}\right), \tag{1}\] where the per-sample loss \(\mathsf{L}(\mathbf{x}^{(i)},\mathbf{y}^{(i)})\) quantifies the prediction error. Throughout the whole study, \(\mathsf{L}\) is specified as the mean-squared loss. The optimization of \(\mathbf{w}\) is completed by gradient descent methods over \(T\) epochs in total. The detailed formalism of \(\mathbf{x}\) and the implementation of \(\mathcal{A}\) are problem-dependent and will be elucidated later.
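To make Eq. (1) concrete, the following is a minimal, hypothetical training-loop sketch in PyTorch. A generic fully connected network stands in for \(\mathcal{A}\); the actual handcrafted architecture (attention mechanism plus a density-matrix constraint layer, described below) and the preprocessing of \(\mathbf{z}^{(i)}\) are not reproduced here, and all sizes are illustrative.

```python
# Hypothetical sketch of minimizing Eq. (1); `net` is a placeholder for the
# handcrafted DNN A(x; w), not the authors' architecture.
import torch
import torch.nn as nn

dim = 2 * (2**5) ** 2            # real+imag entries of a 5-qubit density matrix, flattened
net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, dim))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()           # the per-sample mean-squared loss L

x = torch.randn(200, dim)        # toy stand-ins for flattened shadow states rho_hat
y = torch.randn(200, dim)        # toy stand-ins for flattened exact labels y

for epoch in range(1000):        # T optimization epochs
    opt.zero_grad()
    loss = loss_fn(net(x), y)    # L(w) = (1/n) sum_i L(A(x_i; w), y_i)
    loss.backward()
    opt.step()
```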
During the inference procedure, the optimized ShadowNet can efficiently predict unseen examples \((\mathbf{x},\mathbf{y})\in\mathbb{D}\), i.e., \(\widetilde{\mathbf{y}}=\mathcal{A}(\mathbf{x},\mathbf{w}^{(T)})\). A major difference between ShadowNet and prior supervised NN-QSL methods is that it enables a faithful prediction of \(\mathbf{y}\). The predictive faith is assessed via its classical shadows, as shown in Fig. 1(c). That is, the prediction returned by ShadowNet should fall within the error bounds of its shadow estimation. Otherwise, the prediction is deemed _unfaithful_ and the classical-shadows estimation is preferred. ShadowNet advances classical shadows by harnessing the DNN's capability of distilling knowledge from the training dataset. This acquired knowledge enables ShadowNet to achieve a lower estimation error compared to classical shadows, even when utilizing fewer state copies. The synergy of generalization ability, memory efficiency, and faithfulness positions ShadowNet as a powerful solution for learning novel and large-scale quantum systems. As aforementioned, the formalism of the training data and the design of the DNN are flexible and dominate the performance of ShadowNet. To provide a concrete illustration, we next introduce the implementation details of ShadowNet for two substantial QSL tasks: quantum state tomography (QST) and direct fidelity estimation (DFE). Note that the proposed methods can be extended to solve other QSL tasks, especially those that _can be addressed by classical shadows_ [18].

## III ShadowNet for quantum state tomography

QST is the process by which the density matrix of a quantum state is reconstructed using measurements on a set of \(M\) identical state copies. Conventional QST methods typically optimize each state independently, disregarding the potential relationships among states that may exhibit similar structures and be sampled from the same underlying distribution \(\mathbb{D}\). In contrast, ShadowNet handles QST by reframing it as a learning problem rather than an optimization problem. Its primary objective is to extract valuable knowledge of \(\mathbb{D}\) by learning the mapping rule from finite measurement data to the precise density matrix, and then to exploit this knowledge to reduce the sample complexity \(M\). In this way, ShadowNet possesses the capability of efficiently predicting the density matrix of previously unseen states sampled from \(\mathbb{D}\), using very few measurements. When solving \(N\)-qubit QST, the learning process of ShadowNet resembles the image-denoising task in computer vision [55]. Intuitively, the measured data of each state form a 'noisy image', and the DNN is optimized to acquire a mapping rule for denoising, thereby recovering the 'precise image' (i.e., the density matrix). More formally, the learning procedure involves utilizing the reconstructed shadow state \(\hat{\rho}^{(i)}\) as input and the corresponding exact density matrix \(\rho^{(i)}\) as the label \(\mathbf{y}^{(i)}\). Denote \(\mathcal{C}^{(i)}=\{(U^{(i)}_{j,m},b^{(i)}_{j,m})\}_{j,m=1}^{N,M}\) as the classical shadows of \(\rho^{(i)}\) collected by \(M\) random Pauli-based measurements, where \(U^{(i)}_{j,m}\) is a Pauli operator and \(b^{(i)}_{j,m}\in\{0,1\}\), \(\forall j\in[N]\). The explicit form of the data feature in Eq. (1) is defined as \[\mathbf{x}^{(i)}\equiv\hat{\rho}^{(i)}=\frac{1}{M}\sum_{m=1}^{M}\bigotimes_{j=1}^{N}\Big(3U^{(i)\dagger}_{j,m}|b^{(i)}_{j,m}\rangle\langle b^{(i)}_{j,m}|U^{(i)}_{j,m}-\mathbb{I}_{2}\Big). \tag{2}\]
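As a rough illustration of Eq. (2), the snippet below reconstructs a shadow state from recorded Pauli snapshots with NumPy. It is a dense, exponentially scaling toy (fine for \(N=5\)), and the single-qubit basis-rotation conventions are an assumption that may differ from the authors' implementation.

```python
# Minimal sketch of Eq. (2): rho_hat = (1/M) sum_m kron_j (3 U^dag |b><b| U - I).
import numpy as np

I2 = np.eye(2)
# assumed basis rotations mapping the computational basis to the Pauli eigenbases
U = {"X": np.array([[1, 1], [1, -1]]) / np.sqrt(2),
     "Y": np.array([[1, -1j], [1, 1j]]) / np.sqrt(2),
     "Z": I2.astype(complex)}

def shadow_state(paulis, bits):
    """paulis: (M, N) array of 'X'/'Y'/'Z'; bits: (M, N) array of 0/1 outcomes."""
    M, N = bits.shape
    rho_hat = np.zeros((2**N, 2**N), dtype=complex)
    for m in range(M):
        snap = np.ones((1, 1), dtype=complex)
        for j in range(N):
            u = U[paulis[m, j]]
            ket = u.conj().T[:, bits[m, j]]          # U^dagger |b>
            snap = np.kron(snap, 3 * np.outer(ket, ket.conj()) - I2)
        rho_hat += snap
    return rho_hat / M
```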
In the training process, the reconstructed shadow state \(\hat{\rho}^{(i)}\) is fed into the DNN and the denoised state is denoted by \(\widetilde{\rho}^{(i)}=\mathcal{A}(\hat{\rho}^{(i)};\mathbf{w})\). To ensure that \(\widetilde{\rho}^{(i)}\) is physical, ShadowNet adopts a handcrafted DNN to learn the mapping rule from \(\hat{\rho}\) to \(\rho\), formed by the attention mechanism [56] and a density-matrix constraint layer [39]. The per-sample loss in Eq. (1) yields \(\mathsf{L}(\mathbf{x}^{(i)},\rho^{(i)})=\|\widetilde{\rho}^{(i)}-\rho^{(i)}\|_{2}^{2}\), quantifying the reconstruction error with the Frobenius norm. Note that besides the attention mechanism, convolutional architectures and other advanced protocols can also realize ShadowNet. Refer to SM D for the elaboration. At the inference stage, ShadowNet exhibits an advantage by directly predicting the density matrix of previously unseen states, using only the reconstructed shadow state as input, without requiring additional optimization. This feature pinpoints the superior efficiency of ShadowNet in comparison to optimization-based QST methods, which often entail lengthy post-processing procedures [57]. Besides, unlike prior NN-QST methods, ShadowNet allows for faithful prediction. Note that although fidelity is a standard metric for assessing faithfulness in QST, exponentially many state copies are essential to ensure a reliable estimation. In this regard, a surrogate quantity \(\mathsf{g}\) that leverages prior information about the explored quantum system may be adopted to gauge faithfulness. For example, when learning a class of ground states, \(\mathsf{g}\) corresponds to the ground energies. The reconstructed state is judged to be faithful if the estimated ground state energy \(\widetilde{\mathsf{g}}\) falls within the error bounds of its shadow estimation \(\hat{\mathsf{g}}\). We test ShadowNet on reconstructing ground states of two quantum spin systems: the one-dimensional transverse-field Ising model (TFIM) and the XXZ model. Both systems are crucial for many-body quantum simulations and have been widely explored [28; 30; 36; 45]. The explicit expression of the TFIM is \(H_{\text{TFIM}}=J_{z}\sum_{\langle i,j\rangle}Z_{i}Z_{j}-J_{x}\sum_{i}X_{i}\), where \(\langle i,j\rangle\) denotes neighboring spins, and \(J_{z}\) and \(J_{x}\) stand for the interaction strength and the transverse field, respectively. The explicit form of the XXZ model is \(H_{\text{XXZ}}=-\sum_{i=0}^{N-2}\big(\Delta_{i}(X_{i}X_{i+1}+Y_{i}Y_{i+1})+Z_{i}Z_{i+1}\big)\), where \(\Delta_{i}\) refers to the coupling parameter. In the simulations, the number of qubits is set as \(N=5\) for both Hamiltonians. The performance of ShadowNet is evaluated by two metrics: the fidelity \(F_{Q}(\rho,\widetilde{\rho})=\big(\operatorname{Tr}\sqrt{\sqrt{\rho}\,\widetilde{\rho}\,\sqrt{\rho}}\,\big)^{2}\), and the estimation error of the ground energy \(\mathtt{E}_{1}=\operatorname{Tr}((\widetilde{\rho}-\rho)H)\). The simulation results are illustrated in Fig. 2. In the first task, we apply ShadowNet to learn the ground states of the TFIM. To collect the training and test datasets, we fix \(J_{x}=1\) and sample \(J_{z}\) uniformly from the interval \([-0.5,0.5]\) to obtain different examples. The number of measurements for classical shadows is \(M=10000\). When the size of the training dataset is \(n=200\), the simulation results are exhibited in Fig. 2(a). The left panel indicates that after training for \(T=1000\) epochs, the training fidelity \(F_{Q}\) is near \(1\) and the estimation error of the ground energy \(E_{1}\) is around zero.
Moreover, ShadowNet has a satisfactory generalization ability after \(T=10000\) epochs. The test fidelity on \(200\) unseen ground states is near \(1\). Meanwhile, the ground energy estimation error is almost zero (\(0.02\)). The right panel showcases the predictive faith of ShadowNet on five test instances in terms of \(E_{1}\). The results reflect that the predictions of ShadowNet fall within the error bounds of classical shadows and are closer to the exact results. We then explore how the dataset size \(n\) affects the performance of ShadowNet. All hyper-parameter settings are kept the same as in the above experiment, except for the varied dataset size, i.e., \(n\in\{8,50,200\}\). The simulation results are visualized in Fig. 2(b). When decreasing \(n\) from \(200\) to \(8\), the test fidelity drops from \(1\) to \(0.81\) and the test estimation error increases from \(0.02\) to \(0.94\). These results suggest that a modest dataset size is necessary to guarantee the performance of ShadowNet. The last task related to the TFIM is investigating the role of the shot number \(M\). All hyper-parameter settings are the same as in the previous experiments, except for two modifications. First, the shot number has three varied settings, i.e., \(M\in\{10,1000,10000\}\). Second, when constructing the training and test datasets, the range of the interaction strength \(J_{z}\) expands to \([-2,2]\), increasing the diversity of the examples. The simulation results of ShadowNet are shown in Fig. 2(c). An immediate observation is that when \(M\) exceeds a threshold, the performance of ShadowNet tends to be optimal. In other words, the shot number \(M\) should be carefully selected in ShadowNet, since continuously increasing \(M\) not only incurs expensive computational cost but also narrows the advantage of ShadowNet over classical shadows. We next apply ShadowNet to tackle a more difficult QST task, where the ground states are collected from both the TFIM and the XXZ model. For both the training and test datasets, the construction rule related to the TFIM is identical to the third task introduced above. For the XXZ model, we set \(\Delta_{i}=\Delta_{i^{\prime}}\) for all \(i^{\prime},i\in[N-2]\) and uniformly sample \(\Delta_{i}\) from \([-3,3]\) to generate training and test examples. The training dataset \(\mathcal{D}_{\text{Tr}}\) contains an equal number of ground states from the TFIM and the XXZ model. We fix \(N=5\) and \(T=5000\), and vary the settings of \(n\) and \(M\) to evaluate the performance of ShadowNet. The achieved results are summarized in Table 1. Particularly, when \(M\geq 500\) and \(n=800\), the test fidelity is almost optimal and the estimation error is near zero. These observations accord with the results for the TFIM, i.e., the size of the training dataset \(n\) dominates the performance of ShadowNet once \(M\) exceeds a threshold.

Figure 2: **Results of ShadowNet in reconstructing ground states of TFIM with \(N=5\)**. (a) The left panel shows the train and test performance of ShadowNet with \(n=200\) and \(M=10^{4}\) with respect to the number of epochs \(T\). The notations '\(F_{Q}\)-Train' ('\(F_{Q}\)-Test') and '\(E_{1}\)-Train' ('\(E_{1}\)-Test') represent the fidelity between the averaged output of ShadowNet and the target state, and the estimation error, on the training (test) dataset, respectively. The right panel illustrates the predictive faith of ShadowNet on five test instances. The notation 'CS' refers to classical shadows. The vertical line stands for the error bound of classical shadows.
(b) Test performance of ShadowNet with respect to the varied size of the training dataset. The symbol \(n=a\) indicates that the size of \(\mathcal{D}_{\text{Tr}}\) is \(a\). Vertical bars highlight the variance of the collected results. (c) Test performance of ShadowNet with respect to the varied shot number \(M\). The symbols share the same meaning as those in (a) and (b).

In addition, ShadowNet dramatically outperforms classical shadows with sufficient training data, i.e., 0.044 versus 0.477 when \(n=800\) and \(M=500\). This result suggests the superiority of ShadowNet in learning complex quantum systems, since it can attain the desired estimation using significantly fewer samples than classical shadows. Refer to SM E for more analysis, including the omitted simulation details and the evaluation of ShadowNet under convolutional mechanisms. Before training, the hidden features of the noisy global and local GHZ states are poorly aligned, implying that ShadowNet cannot accurately predict the fidelity of each state. After training, the representations of noisy global and local GHZ states are well-aligned, enabling accurate predictions (see SM E for elaborations). We next systematically examine how the number of qubits \(N\) and the size of the training dataset \(n\) affect the prediction accuracy of ShadowNet. To do so, we vary the number of qubits as \(N\in\{10,30,60\}\) and the dataset size as \(n\in\{20,200,800\}\). The simulation results are demonstrated in Fig. 3(d). When \(n\) exceeds a threshold (e.g., \(n>20\)), ShadowNet attains satisfactory performance for all settings of \(N\). This phenomenon suggests that the proposed dataset construction rule has the potential to dramatically reduce the required number of training examples, a crucial characteristic for applying ShadowNet to large-scale DFE tasks. Refer to SM E for more numerical results.

## V Discussion and Outlook

We present the concept of data-centric QSL, emphasizing the central role played by datasets, in contrast to the model-centric approaches prevalent in previous NN-QSL methods, which primarily focus on the design of DNN architectures. In this respect, we devise ShadowNet as a flexible and robust paradigm to tackle QSL tasks, with a specific emphasis on the synergy of generalizability, efficiency, and faithfulness. Central to ShadowNet is the utilization of classical shadows, known for their memory efficiency and estimation guarantees, together with easily obtained system information, to create datasets. Due to the flexibility of ShadowNet, we devise two distinct approaches for dataset construction and model implementation, tailored to the specific requirements of the task at hand. Numerical simulations confirm the efficacy of ShadowNet and exhibit its potential to advance data-centric QSL. Our work stimulates several important avenues for future research. While our focus has been on Pauli-based random measurements due to their hardware-friendly nature, exploring advanced variants of classical shadows presents an intriguing opportunity to enhance the performance of ShadowNet [60, 61, 62, 63, 64, 65]. Moreover, beyond discrete quantum systems, the applicability of ShadowNet can be extended to facilitate the learning of continuous-variable quantum systems by leveraging continuous-variable classical shadows to construct similar training datasets [66]. Exploring alternative construction rules and expanding the applications of ShadowNet represents a critical avenue for further research.
In the context of DFE, we leverage the noise information inherent in quantum systems as part of the data features. As the majority of QSL problems arise from noisy intermediate-scale quantum devices [67], it is intriguing to investigate whether other easily accessible information pertaining to quantum devices can be utilized to construct datasets that aid the DNN in extracting underlying mapping rules, even with limited data sizes. For instance, considering the prominence of variational quantum algorithms [68, 69, 70, 71, 72] as a major application for near-term quantum devices, the design of novel data-centric QSL models to enhance the characterization and error mitigation of the outputs of these algorithms [73, 74] emerges as a desirable direction.

Figure 3: **Results of ShadowNet in solving DFE with noisy GHZ states**. (a) Circuit implementation for preparing global (left panel) and local (right panel) GHZ states under different levels of noise. (b) Performance of classical shadows (denoted by 'CS') and ShadowNet in predicting 50 noisy global GHZ states with \(N=60\). The X-axis and Y-axis refer to the noise levels \(p_{1}\) and \(p_{2}\), respectively. The color bar stands for the fidelity between the prepared and ideal states. (c) Visualization of the hidden features extracted by ShadowNet before and after training on the whole test set. The label 'Glb' ('Loc') represents the global (local) GHZ states. The color bar refers to the ground-truth fidelity of each state. (d) The scalability of ShadowNet trained on varied training dataset sizes \(n\) and numbers of qubits \(N\). The X-axis refers to the number of qubits and the Y-axis to the test loss. The label \(n=a\) denotes that the number of training examples is \(a\).
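For a small-\(N\) flavor of the DFE target quantity, the fidelity of a noisy state with the ideal GHZ state can be read off a dense shadow state as \(F=\langle\text{GHZ}|\hat{\rho}|\text{GHZ}\rangle\). The sketch below assumes `rho_hat` comes from a reconstruction such as Eq. (2); the paper's 60-qubit experiments necessarily avoid dense matrices, so this is only an illustrative toy.

```python
# Toy fidelity estimate F = <GHZ| rho_hat |GHZ> from a dense shadow state.
import numpy as np

def ghz_fidelity(rho_hat):
    N = int(np.log2(rho_hat.shape[0]))
    psi = np.zeros(2**N, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)   # |GHZ> = (|0...0> + |1...1>)/sqrt(2)
    return float(np.real(psi.conj() @ rho_hat @ psi))
```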
2307.01606
Exponentially long transient time to synchronization of coupled chaotic circle maps in dense random networks
We study the transition to synchronization in large, dense networks of chaotic circle maps, where an exact solution of the mean-field dynamics in the infinite network and all-to-all coupling limit is known. In dense networks of finite size and link probability of smaller than one, the incoherent state is meta-stable for coupling strengths that are larger than the mean-field critical coupling. We observe chaotic transients with exponentially distributed escape times and study the scaling behavior of the mean time to synchronization.
Hans Muller Mendonca, Ralf Tönjes, Tiago Pereira
2023-07-04T09:44:36Z
http://arxiv.org/abs/2307.01606v1
# Exponentially long transient time to synchronization of coupled chaotic circle maps in dense random networks ###### Abstract We study the transition to synchronization in large, dense networks of chaotic circle maps, where an exact solution of the mean-field dynamics in the infinite network and all-to-all coupling limit is known. In dense networks of finite size and link probability smaller than one, the incoherent state is meta-stable for coupling strengths that are larger than the mean-field critical coupling. We observe chaotic transients with exponentially distributed escape times and study the scaling behavior of the mean time to synchronization.

Keywords: synchronization; random networks; chaotic maps; mean-field analysis; finite size effects

## I Introduction

Complex nonlinear systems often exhibit collective synchronization phenomena which can play an important role for the overall functioning of a system [1; 2; 3]. Phase oscillator models can elucidate key aspects of the mechanism that generates the collective motion [4]. The Kuramoto model, for instance, is particularly useful in describing groups of weakly coupled oscillators such as Josephson junctions, and it can be analyzed in almost full detail in the thermodynamic limit of infinitely many oscillators. Indeed, Kuramoto himself initially studied fully connected networks of coupled oscillators with frequency heterogeneity, and obtained the critical value of the coupling strength for the transition from incoherence to synchronized collective oscillations [5]. While such predictions are obtained in the thermodynamic limit, they have been used as fruitful approaches to describe networks with finitely many oscillators [6; 7]. However, recent work has shown that finite size fluctuations or sparse connections in the network can significantly impact the overall dynamics. In fact, in certain models, synchronization cannot, even approximately, be predicted from the mean-field approximation in the thermodynamic limit [8]. That is, in these models, a transition to synchronization occurs or is inhibited because of finite size fluctuations [9; 10]. The interplay between mean-field predictions and finite-size fluctuations for general models remains elusive and requires further investigation. In this work, we study chaotic phase maps in dense networks where the mean-field dynamics can be analyzed exactly in the thermodynamic limit. For small coupling, due to the chaotic phase dynamics, only incoherence is stable. For a range of coupling strengths, mean-field analysis predicts coexistence between complete chaotic synchronization and incoherence, and for strong coupling, the incoherence becomes unstable. Then, complete synchronization is the globally attracting state in our model. Our results are two-fold: (i) For coupling strengths with a stable coexistence of incoherence and synchronization, although incoherence is locally attracting, finite-size fluctuations can take the system into the basin of attraction of the absorbing state of complete synchronization. Starting near incoherence with uniformly distributed random oscillator phases, the distribution of transient times towards synchronization is exponential and scales as a power of the system size.
(ii) Above the critical coupling strength, in dense but incomplete networks, although linear stability analysis of the mean-field equations suggests that any nonzero mean field, e.g., finite size fluctuations of the mean field, will grow exponentially fast, we observe an exponentially long chaotic transient in the incoherent state. Such a delayed transition to synchronization has so far not been described in dense networks of coupled phase oscillators or coupled chaotic maps.

## II Model of coupled chaotic maps

The local phase dynamics in each node is modelled as a Bernoulli map of the circle with time steps \(t\in\mathbb{Z}\) \[\varphi(t+1)=f(\varphi(t))=2\varphi(t)\mod 2\pi, \tag{1}\] or, with a slight abuse of notation on the complex unit circle \(z=\exp(i\varphi)\), we write \(z(t+1)=f(z(t))=z(t)^{2}\). This map is chaotic and structurally stable [11]. That is, the statistical properties of the map persist under small perturbations. Therefore, for small coupling, the maps behave as nearly independent, and no collective dynamics is possible. In [12], the global coupling of the phase dynamics is implemented as a Moebius map on the complex unit circle. The Moebius map has been shown to give exact solutions of sinusoidally forced phase dynamics [13], including the Kuramoto model, Winfree-type phase equations, and, via a nonlinear transformation, the dynamics of theta neurons [14]. It is therefore a meaningful alternative to the sine coupling in the standard circle map. Here, we use a composition of (1) and a Moebius map (see Figure 1) \[z(t+1)=M\left(f(z(t)),\Phi(t),\tau(t)\right), \tag{2}\] where \[M(w,\Phi,\tau)=\frac{e^{i\Phi}\tau+w}{1+e^{-i\Phi}\tau w} \tag{3}\] for a coupling intensity \(-1<\tau<1\), an angle of contraction \(\Phi\in\mathbb{S}^{1}\), and a point \(w\in\mathbb{D}=\{z\in\mathbb{C}:|z|<1\}\) on the open complex unit disc. The family of Moebius maps is a group of biholomorphic automorphisms of \(\mathbb{D}\), and via analytic continuation, these transformations map the boundary of \(\mathbb{D}\) bijectively onto itself. The effect of (3) on the unit disc is a contraction of almost all points towards \(\exp(i\Phi)\) on the boundary, where \(\lim_{\tau\to\pm 1}M(w,\Phi,\tau)=\pm\exp(i\Phi)\) and \(\lim_{\tau\to 0}M(w,\Phi,\tau)=w\). The parameter \(\tau\) characterizes the strength of the contraction. For \(\tau\to 0\), the map (2) approaches the uncoupled dynamics (1). Moreover, the family of wrapped Cauchy distributions \[p(\varphi)=\frac{1}{2\pi}\frac{1-R^{2}}{|1-Re^{i(\varphi-\Theta)}|^{2}} \tag{4}\] which includes incoherence as the uniform distribution when \(R\to 0\) and a delta distribution at \(\varphi=\Theta\) when \(R\to 1\), is invariant under (2) and (3) [12; 13; 15]. This family of continuous phase measures, in the context of phase synchronization, is known as the Ott-Antonsen manifold, and assuming this form of phase distribution is equivalent to the so-called Ott-Antonsen ansatz [16; 17]. The Ott-Antonsen manifold is parameterized using the mean-field amplitude \(R\) and the mean-field angle \(\Theta\) \[Z=Re^{i\Theta}=\int_{0}^{2\pi}e^{i\varphi}p(\varphi)\,d\varphi. \tag{5}\] The mean-field amplitude \(R\) is the Kuramoto order parameter [18], which is zero for incoherence, i.e., a uniform phase distribution, and \(R=1\) for complete synchronization \(\varphi_{n}=\Theta\) (a.s.).
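A small numerical sketch of the Moebius map (3) may help build intuition; under the conventions above, it maps the unit circle to itself and, for \(0<\tau<1\), contracts almost all phases toward \(\exp(i\Phi)\). The parameter values are illustrative.

```python
# Sketch: the Moebius map (3) preserves the unit circle and contracts toward exp(i*Phi).
import numpy as np

def moebius(w, Phi, tau):
    return (np.exp(1j * Phi) * tau + w) / (1 + np.exp(-1j * Phi) * tau * w)

rng = np.random.default_rng(0)
z = np.exp(2j * np.pi * rng.random(8))      # eight random points on the unit circle
for _ in range(20):
    z = moebius(z, Phi=np.pi / 2, tau=0.5)  # repeated contraction with fixed parameters
print(np.abs(z))                            # all ~1: the circle is mapped onto itself
print(np.angle(z))                          # angles cluster near Phi = pi/2
```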
Furthermore, the higher circular moments \(Z_{q}\) on the Ott-Antonsen manifold with \(q\in\mathbb{Z}\) are integer powers of the mean field \[Z_{q}=\int_{0}^{2\pi}e^{iq\varphi}p(\varphi)\,d\varphi=Z^{q}. \tag{6}\] As a consequence, phase doubling maps the circular moments as \(f(Z_{q}(t))=Z_{2q}(t)=Z_{2}^{q}(t)=f(Z_{1}(t))^{q}\), leaving the Ott-Antonsen manifold invariant and mapping the mean-field amplitude and phase as \(R\to R^{2}\) and \(\Theta\to 2\Theta\). To couple the dynamics of the Bernoulli maps (2), the parameters \(\Phi(t)\) and \(\tau(t)\) in (3) should be defined as functions of the ensemble mean field. Following [12], we define the contraction angle \(\Phi(t)\) and the coupling intensity \(\tau(t)\) as \[Z(t) = \frac{1}{N}\sum_{n=1}^{N}z_{n}(t)=R(t)e^{i\Theta(t)} \tag{7}\] \[\Phi(t) = 2\Theta(t) \tag{8}\] \[\tau(t) = \tanh\left(\frac{\varepsilon}{2}R(t)\right), \tag{9}\] where \(\varepsilon\) is a coupling strength. For \(\tau=1\), when \(\varepsilon R\to\infty\), the phases are contracted to a single point \(\exp(2i\Theta)\) on the unit circle. For small values of \(\varepsilon R\), we can expand (2) to linear order and obtain the more familiar form of mean-field coupled circle maps with phase doubling \[\varphi_{n}(t+1)=2\varphi_{n}(t)+\varepsilon R(t)\sin\left(2\Theta(t)-2\varphi_{n}(t)\right)+O(\varepsilon^{2}R^{2}(t)). \tag{10}\] The crucial observation is that on the Ott-Antonsen manifold, the mean field \(Z=R\exp(i\Theta)\) transforms via (2),(3) in exactly the same way as each element \(z=\exp(i\varphi)\) on the unit circle [12; 13]; that is, \[Z(t+1)=M(Z^{2}(t),\Phi(t),\tau(t)). \tag{11}\] It is highly unusual for coupled nonlinear dynamical systems that a closed analytic expression for the dynamics of the mean field can be derived and analyzed. The reduction of the infinite-dimensional microscopic dynamics to the low-dimensional dynamics of the mean field [16] has been tremendously successful in the analysis of synchronization phenomena over the last decade, while the effects of a finite system size \(N\) remain difficult to analyze [19; 20]. We note that the point measure of a finite ensemble of phases is never actually on the Ott-Antonsen manifold, but it can, in some sense, come arbitrarily close to it in the so-called thermodynamic limit, i.e., the limit of infinite system size \(N\to\infty\). Applying the Ott-Antonsen ansatz to networks of phase oscillators is possible if the network structure allows for the partitioning of the vertices into a few classes of equivalent vertices. Assuming that all vertices of a class are subjected to the same sinusoidal forcing, the dynamics of the phases in the network can be reduced to the dynamics of coupled mean fields on the Ott-Antonsen manifold for each vertex class [21; 22; 10; 23; 24]. Additionally, heterogeneity in the oscillators and fluctuations in the forces can be incorporated into the mean-field dynamics if they follow Cauchy distributions [25; 26; 27].

### Mean-Field Analysis

The mean-field dynamics (11) can be written in terms of the polar representation \[\Theta(t+1)=f(\Theta(t))\quad\text{ and }\quad R(t+1)=\frac{\tau(t)+R^{2}(t)}{1+\tau(t)R^{2}(t)}. \tag{12}\] This means that the dynamics of the phase \(\Theta\) decouples from the amplitude and will evolve chaotically.
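The following is a minimal simulation sketch of the globally coupled system (2) with the coupling rule (7)-(9), run alongside the mean-field map (11) started from the same initial mean field. Since a finite ensemble of random phases is not exactly on the Ott-Antonsen manifold, the two trajectories need not agree step by step; the parameters are illustrative.

```python
# Ensemble update (2) with (7)-(9) versus the closed mean-field map (11).
import numpy as np

def moebius(w, Phi, tau):
    return (np.exp(1j * Phi) * tau + w) / (1 + np.exp(-1j * Phi) * tau * w)

rng = np.random.default_rng(0)
N, eps, T = 10_000, 2.5, 30
z = np.exp(2j * np.pi * rng.random(N))       # incoherent initial ensemble
Z_mf = z.mean()                              # mean-field map started from Z(0)
for t in range(T):
    Z = z.mean()
    z = moebius(z**2, 2 * np.angle(Z), np.tanh(0.5 * eps * np.abs(Z)))
    Z_mf = moebius(Z_mf**2, 2 * np.angle(Z_mf), np.tanh(0.5 * eps * np.abs(Z_mf)))
    print(t, abs(z.mean()), abs(Z_mf))       # ensemble R(t) vs. mean-field R(t)
```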
Using Equations (9) and (12), we obtain the amplitude dynamics \[R(t+1)=\frac{\tanh\left(\frac{1}{2}\varepsilon R(t)\right)+R^{2}(t)}{1+\tanh\left(\frac{1}{2}\varepsilon R(t)\right)R^{2}(t)} \tag{13}\] which describes the exact evolution of the order parameter \(R\) in closed form. We can readily determine the fixed points of the mean-field amplitude \(R(t)\) and their linear stability. Both complete synchronization, \(R=1\), and complete desynchronization, \(R=0\), are fixed points of (13), and they change stability at the unique critical points \(\varepsilon_{1}=\ln(2)\approx 0.69\) and \(\varepsilon_{0}=2\), respectively, as determined by the eigenvalues of the Jacobian of Equation (13) at these fixed points. These critical points are connected by an unstable fixed point branch \((\varepsilon(R_{u}),R_{u})\), where \[\varepsilon(R_{u})=\frac{1}{R_{u}}\log\left(\frac{(1+R_{u})^{2}}{1+R_{u}^{2}}\right). \tag{14}\] This expression is derived from (13) by setting \(R(t+1)=R(t)=R_{u}\) and solving the equation for \(\varepsilon\). This means that this system of all-to-all coupled, identical chaotic phase maps will always evolve to complete synchronization or complete desynchronization, with a small region \(\ln(2)<\varepsilon<2\) of bistability (Figure 2a).

### Extension to Networks

Next, we have studied the same phase dynamics on a random network of \(N\) maps which are coupled to exactly \(k\) different, random neighbors. Here, each phase \(\varphi_{n}\) couples to a local mean field \[Q_{n}=R_{n}e^{i\Theta_{n}}=\frac{1}{k}\sum_{m=1}^{N}A_{nm}z_{m} \tag{15}\] where \(A_{nm}\) are the entries of the adjacency matrix, i.e., equal to one if there is a link from vertex \(m\) to vertex \(n\), and zero otherwise, and \(k=\sum_{m=1}^{N}A_{nm}\) is the in-degree of node \(n\), which, for computational simplicity, we assume to be identical for all nodes. Thus, with \(\tau_{n}=\tanh\left(\frac{\varepsilon}{2}R_{n}\right)\), the dynamics of the phases coupled through a network are \[z_{n}(t+1)=\frac{e^{2i\Theta_{n}(t)}\tau_{n}(t)+z_{n}^{2}(t)}{1+e^{-2i\Theta_{n}(t)}\tau_{n}(t)z_{n}^{2}(t)}. \tag{16}\] A class of networks is dense if \(\lim_{N\to\infty}\langle k\rangle/N=p>0\), where \(\langle k\rangle\) is the mean node degree. Therefore, \(p\) is the fraction of nodes, in relation to the system size \(N\), that an oscillator is coupled to. Since dense networks are defined in the limit \(N\to\infty\), there is no sharp distinction between sparse and dense networks of finite size. We refer to a finite network as dense if two nodes share more than one neighbor on average, i.e., \(\langle k\rangle^{2}/N=p^{2}N>1\). In large dense networks, the local mean fields of the oscillators in the neighborhood of each node (15) are equal to the global mean field, with a deviation of \(O(1/\sqrt{k})\), where \(k\) is the size of the neighborhood, i.e., the in-degree of the node. Therefore, mean-field theory should be exact for dense networks in the thermodynamic limit where \(\langle k\rangle\to\infty\).

_The network model._ First, we wish to compare the simulation results directly with our mean-field analysis. For large random networks with a link density \(p=k/N\) and \(0<p<1\), the numerical simulations are time-consuming, since the \(N\) local mean fields at each node in the network need to be computed in each time step. To simplify these computations, we use a random network where each node couples to exactly \(k\) different random neighbors.
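A sketch of this network model is given below: each node draws exactly \(k\) distinct in-neighbors uniformly at random, excluding self-loops, so every row of the adjacency matrix sums to \(k\).

```python
# Random network with constant in-degree k: A[n, m] = 1 if node n receives a link from m.
import numpy as np

def fixed_indegree_adjacency(N, k, rng=np.random.default_rng(0)):
    A = np.zeros((N, N), dtype=np.int8)
    for n in range(N):
        candidates = np.delete(np.arange(N), n)              # no self-loops
        A[n, rng.choice(candidates, size=k, replace=False)] = 1
    return A

A = fixed_indegree_adjacency(N=100, k=10)
assert (A.sum(axis=1) == 10).all()                           # in-degree is exactly k
```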
This model with a unique in-degree of \(k\) for each node is slightly different from the Erdős–Rényi model, whose in-degree distribution is Poissonian with small relative width \(\text{std}(k)/\langle k\rangle\sim 1/\sqrt{k}\). For large \(k\), the results of the simulations in our random network model and in other random networks with uncorrelated node degrees and a vanishing relative width of the degree distribution are expected to be identical.

Figure 1: **Dynamics of phases.** \(N=30\) points on the complex unit circle colored by phase, and the corresponding mean field (red dot inside the unit circle). From left to right: the initial phase configuration at the points \(z_{n}\) with mean-field amplitude \(R=0.5\) and mean-field phase \(\Theta=\pi/4\); after chaotic phase doubling, \(z_{n}^{2}\), with \(R^{2}=0.25\) and \(2\Theta=\pi/2\); and after subsequent contraction toward the angle \(\pi/2\) with intensity \(\tau=0.5\).

## III Results

### Distributions of Transient Times

We perform a large number \(M\) of simulations \(m=1\dots M\) from independent, uniformly distributed random initial phases over a maximum of \(T\) steps and record in each simulation the first time step \(t_{m}\) at which \(R\geq 0.5\), i.e., the transition time from an incoherent state to complete synchronization. Finite-size scaling for such a discontinuous transition is challenging [28]. The exponential distribution of the times \(t_{m}\), according to some characteristic transition rate, can be checked in a rank plot of the time points \(t_{m}\), which gives the sample complementary cumulative distribution \(C(t)=\text{prob}(t_{m}\geq t)=\text{rank}(t_{m})/M\) [27; 28]. An exponential tail distribution \(C(t)\) up to the observation time \(T\) indicates an exponential distribution of transient times. Since the simulation time is finite, transition times \(t_{m}\geq T\) are not observed, which represents a problem when we are interested in the average time to synchronization. However, assuming a discrete exponential, i.e., geometric, distribution, a maximum likelihood estimation of the average transition time is possible up to values considerably exceeding the observation time \(T\) (see Appendix B). Denoting the number of simulations that synchronize at times \(t_{m}<T\) as \(M_{T}\), and defining the observable values \(l_{m}=\min(t_{m},T)\), the maximum likelihood estimate of the expected value \(T_{esc}=\mathrm{E}[t_{m}]\) for the geometric distribution is \[T_{esc}=\frac{\left\langle l_{m}\right\rangle M}{M_{T}}, \tag{17}\] with the sample mean \(\left\langle l_{m}\right\rangle\). If the transition to synchronization is observed in all simulations, i.e., \(M_{T}=M\), the estimator is simply the sample mean of \(t_{m}\), which is an estimator of \(T_{esc}\) for arbitrary transient time distributions. However, when most runs do not synchronize within the finite simulation time \(T\), the ratio \(M/M_{T}\) contains additional information, and the estimated mean escape time can be much larger than the observation time.

### During Coexistence: Escape over the Unstable Branch

In [29], it was reported that the transition from incoherence to collective dynamics in sparse networks of coupled logistic maps is of the mean-field type. The analysis in [30] predicts a shift in the critical coupling strength in random networks of Kuramoto phase oscillators of the order \(\langle k\rangle^{2}/\langle k^{2}\rangle\) due to degree inhomogeneity, and \(1/\langle k\rangle\) due to finite size fluctuations of the local mean fields.
That is, in dense, homogeneous networks with \(\langle k\rangle^{2}/\langle k^{2}\rangle\to 1\) and \(\langle k\rangle\to\infty\), the critical coupling strength does not change. We expected to find similar behavior for network-coupled Bernoulli maps. In complete or almost complete networks, \(k/N=p\approx 1\), for \(\varepsilon<2\) there is a small probability that finite size fluctuations bring the order parameter \(R\) above the unstable branch, leading to a spontaneous transition to complete synchronization, as shown in Figure 3a. We first observe the scaling of the transient time in fully connected networks with \(p=1\). For values of \(\varepsilon<\varepsilon_{0}=2.0\), the transition rate to synchronization scales strongly with the size \(N\) of the system (Figure 4b,c). However, for values \(\varepsilon>\varepsilon_{0}\), the average transition time depends very weakly on \(N\), as the system grows exponentially fast from a state of incoherence, with \(R\approx 1/\sqrt{N}\). We estimate a finite size scaling exponent \(\beta\) below the transition threshold by collapsing the curves \(T_{esc}(\varepsilon,N)\) using the ansatz \(T_{esc}(\varepsilon,N)=T_{esc}\left((\varepsilon-\varepsilon_{0})N^{\beta}\right)\). The data are consistent with an ad hoc exponent of \(\beta=1/3\) (Figure 4c).

### Above the Critical Coupling Strength: Long Chaotic Transient

Above the critical coupling strength \(\varepsilon>\varepsilon_{0}=2\), we expected finite size fluctuations to grow exponentially fast and independently of \(N\), as predicted by linear stability analysis of the mean-field equations (13). Instead, for small connection probabilities \(0<p<1\), we have observed a chaotic transient with seemingly stationary finite size fluctuations \(O(1/\sqrt{N})\) of the mean field (Figure 3). In the large \(N\) limit, the distribution of the transient times depends on the link density \(p\), with increasingly long transients as \(p\) is decreased, but it is otherwise independent of \(N\). A coupling strength for which a transition to complete synchronization could still be observed within the simulation time was considerably larger than the mean-field critical coupling \(\varepsilon_{0}=2\). That is, even in dense networks and above the mean-field critical coupling, finite size fluctuations will not necessarily result in the nucleation and exponential growth of a collective mode. Such a delayed transition to synchronization [31] has so far not been described in systems of coupled phase oscillators [32; 30; 33] or coupled logistic maps [29].

Figure 2: **Bifurcation diagram of the mean-field amplitude and a representation of the network interaction**. (**a**) The bifurcation diagram of the all-to-all coupling mean-field dynamics (12), i.e., on the Ott-Antonsen manifold. Dotted lines show linearly unstable fixed points and solid lines show linearly stable fixed points in the thermodynamic limit. (**b**) Venn diagram of a dense network with \(N\) vertices and connection probability \(p\). The sets of neighbors of nodes \(m\) and \(n\) are of size \(pN\) and their overlap is of size \(p^{2}N\), resulting in correlated local mean fields \(Q_{m}=R_{m}\exp(i\Theta_{m})\) and \(Q_{n}=R_{n}\exp(i\Theta_{n})\) acting on the states \(z_{m}\) and \(z_{n}\). The ratio of the amplitudes of the local mean fields and the global mean field is independent of the network size \(N\).
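The bistable structure in Figure 2a can be reproduced in a few lines by iterating the amplitude recursion (13) from initial amplitudes below and above the unstable branch; the sampled values of \(\varepsilon\) and \(R(0)\) are illustrative.

```python
# Iterating the mean-field amplitude map (13) for representative coupling strengths.
import numpy as np

def R_map(R, eps):
    tau = np.tanh(0.5 * eps * R)
    return (tau + R**2) / (1 + tau * R**2)

for eps in (0.5, 1.2, 2.5):          # below ln(2), inside the bistable region, above 2
    for R0 in (0.05, 0.8):           # start below / above the unstable branch
        R = R0
        for _ in range(200):
            R = R_map(R, eps)
        print(f"eps={eps}: R(0)={R0} -> R={R:.3f}")
```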
In Figure 4f, we plot \(T_{esc}\) over \((\varepsilon-\varepsilon_{0})p\) to demonstrate that the average transition time scales roughly as \(1/p\). We do not look for higher-order corrections, such as a weak dependence of \(\varepsilon_{0}\) on \(p\), although the curves do not collapse perfectly. Note that the escape time is largely independent of the network size (Figure 4e,f). For \(p=0.1\), \(0.05\), and \(0.025\), we have performed simulations with \(N=10^{4}\) (circles) and with \(N=5\times 10^{4}\) (crosses) for comparison. For \(p=0.01\), we compare network sizes \(N=10^{4}\) (circles) with very time-consuming simulations in networks with \(N=10^{5}\) (crosses).

### Discussion of Finite Size Scaling

Mean field theory assumes a phase distribution on the Ott-Antonsen manifold. The characteristic function of a wrapped Cauchy distribution is the geometric sequence \(Z_{q}=Z^{q}\) of circular moments (6). However, in the incoherent state with \(N\) independent uniformly distributed phases \(\varphi_{n}\), the circular moments of an ensemble \[Z_{q}=\frac{1}{N}\sum_{n=1}^{N}e^{iq\varphi_{n}} \tag{18}\] are almost independent complex numbers with a Gaussian distribution of mean zero and variance \(1/N\), by virtue of the central limit theorem. The action of the Bernoulli map on the circular moments is the shift \[Z_{q}\to Z_{2q}, \tag{19}\] that is, it is achieved by discarding all odd circular moments. The exponential growth of the order parameter in accordance with mean field theory is expected after the distribution comes close to the Ott-Antonsen manifold, i.e., when the first few circular moments align by chance sufficiently under the mapping (19); in particular, \(Z_{2}(t)\approx Z_{1}^{2}(t)\). Unless the directions of \(Z_{2}\) and \(Z_{1}^{2}\) align by chance, as they would on the Ott-Antonsen manifold, the subsequent contraction of strength \(\varepsilon R\) in the direction of \(Z_{1}^{2}\) after the phase doubling may even decrease the amplitude of the order parameter. In addition, for coupling strengths \(\varepsilon\) below the critical value, \(R=|Z_{1}|\) must be above the unstable branch \(R>R_{u}(\varepsilon)\sim(\varepsilon_{0}-\varepsilon)\). The rate of such a random event should depend on the ratio between \(R_{u}(\varepsilon)\) and the standard deviation \(1/\sqrt{N}\) of the Gaussian distribution of the complex mean field. Based on this scaling argument, the expected time to synchronize should scale as \(T_{esc}=T_{esc}((\varepsilon-\varepsilon_{0})\sqrt{N})\) below the critical coupling. The best collapse of the estimated escape times in fully connected networks of coupled Bernoulli maps was observed by scaling the distance to \(\varepsilon_{0}\) with \(N^{1/3}\) (Figure 4c), i.e., the exponential divergence of the escape time approaches \(\varepsilon_{0}\) more slowly than \(1/\sqrt{N}\) in the thermodynamic limit. One possibility for this discrepancy is that the scaling argument only considers the chance of \(R>R_{u}\) and not the alignment process of the higher-order circular moments. Above the critical coupling strength, there is only the condition of the alignment of circular moments with the Ott-Antonsen manifold for the initiation of exponential growth. Since in the incoherent state all circular moments are random Gaussian with identical variance, the alignment process (19) is strictly independent of the system size \(N\).
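This scaling argument is easy to probe numerically: for \(N\) independent uniform phases, all circular moments are \(O(1/\sqrt{N})\), and one application of the Bernoulli map realizes the shift (19) exactly, as in the short check below.

```python
# Circular moments (18) of an incoherent ensemble and the shift Z_q -> Z_{2q} of (19).
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
phi = 2 * np.pi * rng.random(N)
Zq = lambda angles, q: np.mean(np.exp(1j * q * angles))

print(abs(Zq(phi, 1)), abs(Zq(phi, 2)), 1 / np.sqrt(N))  # all of comparable magnitude
phi2 = (2 * phi) % (2 * np.pi)                            # one Bernoulli step
print(Zq(phi2, 1), Zq(phi, 2))                            # identical: new Z_1 = old Z_2
```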
Once exponential growth in the direction of the Ott-Antonsen manifold occurs, the time to synchronization is logarithmic, that is, it depends only weakly on \(N\). However, it appears that the alignment with the Ott-Antonsen manifold needs to be stronger for networks with link densities of \(p<1\).

Figure 3: **Transient to synchronization** for \(N=10{,}000\) coupled maps in (**a**,**b**) a fully connected network with coupling strength \(\varepsilon=1.81\) below the critical coupling \(\varepsilon_{0}=2\), and (**c**,**d**) a random network with connection probability \(p=0.1\) at a coupling strength of \(\varepsilon=2.3\) above the critical coupling. The upper panels (**a**,**c**) show the order parameter \(R(t)\), and the lower panels (**b**,**d**) the real part of the ratio of the first two circular moments \(\text{Re}[Z_{1}^{2}/Z_{2}]\). This serves as a visual measure of the alignment of the system state with the Ott-Antonsen manifold, where the ratio is exactly equal to one. The dashed line in (**a**) marks the value of the unstable fixed point of the mean-field dynamics, \(R_{u}=0.098\). Above that value, the state of complete synchronization is attractive on the Ott-Antonsen manifold. In (**c**,**d**), the incoherent state \(R=0\) is unstable; however, finite size fluctuations do not grow exponentially. Instead, we observe a long chaotic transient.

For small link densities, the divergence of the escape time occurs at larger values \(\varepsilon>\varepsilon_{0}\). This is reminiscent of stabilization by noise [34], where a system is driven away from a low-dimensional unstable manifold of a fixed point into more strongly attracting stable directions. In simulations of dense random networks of coupled Bernoulli maps, we could see the independence of the mean escape time from the network size and the scaling of the escape time roughly as \(\sim 1/p\) (Figure 4). To explain this scaling, we argue that mean field theory might be extended to dense networks, where each node couples to a finite neighborhood of \(pN\) nodes in the network, and for every two nodes, these neighborhoods overlap on a set of size \(p^{2}N\) (Figure 2b). The local mean fields are Gaussian random forces of mean value \(Z\), variance \(1/k=1/pN\), and a pairwise correlation of \(p\), which is the relative size of the overlap. The decrease in correlation between the local mean fields in networks with link densities \(p<1\) can be interpreted as individual, finite size noise on the maps, which couple to the global mean field plus some uncorrelated random deviation. Therefore, the contractions of the phases do not occur in the same direction for different nodes in the network. The strength of the contraction in the direction of the mean field is effectively reduced by the factor \(p\), i.e., \[\tau=\tanh\left(\frac{1}{2}\varepsilon R\right)p\approx\frac{1}{2}\varepsilon pR \tag{20}\] shifting the coupling strength dependence of the transition time (above \(\varepsilon_{0}\)) by a factor of \(1/p\).

## IV Conclusions

We have investigated the synchronization of coupled chaotic maps in dense random networks, utilizing mean-field equations and examining network configurations with different link probabilities. Firstly, we noticed the existence of chaotic transients to synchronization within these networks. This means that the incoherent state can persist for extended periods before transitioning into synchronization.
This finding led us to study the statistics of transient times and their scaling behavior in the process of synchronization. The transition times follow exponential distributions, indicating spontaneous transitions at a constant rate. It is noteworthy that the transition from incoherence to complete synchronization only occurs spontaneously in networks of finite size. Additionally, we have observed a remarkable dependence of the transient times to synchronization on the link probability \(p\), represented by the ratio of the in-degree to the total number of nodes, at coupling strengths where an immediate transition to synchrony would be expected from mean field theory. Whether such a delayed transition is due to the specifics of our model or is typical for a more general class of dynamics remains an open question.

Figure 4: **Statistics of transient times \(t_{m}\) to synchronization**. (**a**–**c**) In the fully connected network; (**d**–**f**) in random networks of various link densities \(p=k/N\). The left panels show straight lines in semi-logarithmic plots of cumulative tail distributions of the transient times, demonstrating the rate character of the transition process. The middle panels show the estimated average transient times for various combinations of system sizes \(N\), coupling strengths \(\varepsilon\), and link densities \(p\). The mean-field critical coupling strength \(\varepsilon_{0}=2.0\) and the maximum observation time \(T\) are marked by dashed lines. In the globally coupled system in panels (**a**–**c**), the transient time depends strongly on the system size \(N\), whereas in dense networks and above \(\varepsilon_{0}\) (**d**–**f**), the transient time depends strongly on the link density \(p=k/N\), but not on the system size. We demonstrate the scaling of the transient times in panels (**c**) and (**f**) on the right. In the globally coupled system, the exponential divergence of the transient times below \(\varepsilon_{0}\) appears to be a function of \((\varepsilon-\varepsilon_{0})N^{\frac{1}{3}}\). In dense networks, the exponential divergence is roughly a function of \((\varepsilon-\varepsilon_{0})p\).

This research was funded by the FAPESP CEMEAI Grant No. 2013/07375-0, the Serrapilheira Institute (Grant No. Serra-1709-16124), a Newton Advanced Fellowship of the Royal Society (NAF\(\backslash\)R1\(\backslash\)180236), CAPES, and CNPq Grant No. 166191/2018-3.

## V Appendix

Here, we calculate the maximum likelihood estimate for the mean value of a geometric distribution \(P(t;\alpha)=(1-\alpha)\alpha^{t}\) for discrete time steps \(t=0,1,\ldots\) when only times \(t<T\) can be observed. The expected value of the geometric distribution is \[\mathrm{E}\left[t\right]=(1-\alpha)\sum_{t=0}^{\infty}t\alpha^{t}=\frac{\alpha}{1-\alpha}. \tag{21}\] Since the times \(t_{m}\), \(m=1\ldots M\), are only observable up to step \(T-1\), we define \(l_{m}=\min(t_{m},T)\). The probabilities for the possible values of \(l_{m}\) are \[P(l_{m}=T;\alpha) = 1-(1-\alpha)\sum_{t=0}^{T-1}\alpha^{t}=\alpha^{T} \tag{22}\] \[P(l_{m}=t<T;\alpha) = (1-\alpha)\alpha^{t}. \tag{23}\] The derivative of the log-likelihood of \(M\) independent observations \(l_{m}\) with respect to the parameter \(\alpha\) is \[\frac{\partial_{\alpha}P(l_{1},l_{2},\ldots,l_{M};\alpha)}{P(l_{1},l_{2},\ldots,l_{M};\alpha)}=\sum_{m=1}^{M}\frac{\partial_{\alpha}P(l_{m};\alpha)}{P(l_{m};\alpha)}. \tag{24}\]
For the probabilities (22) and (23), the derivatives are \[\frac{\partial_{\alpha}P(l_{m}=T,\alpha)}{P(l_{m}=T,\alpha)} = \frac{T}{\alpha} \tag{25}\] \[\frac{\partial_{\alpha}P(l_{m}=t<T,\alpha)}{P(l_{m}=t<T,\alpha)} = \frac{t}{\alpha}-\frac{1}{1-\alpha}. \tag{26}\] For a maximum of the log-likelihood of the observed values \(l_{m}\), the sum in (24) is required to be zero. Inserting \(M-M_{T}\) times the term (25), for all observations \(l_{m}=T\), and \(M_{T}\) terms (26), one for each observation \(l_{m}=t<T\), we obtain \[(M-M_{T})\frac{T}{\alpha}+\sum_{l_{m}<T}\frac{l_{m}}{\alpha}-M_{T}\frac{1}{1-\alpha}=0. \tag{27}\] With \[\langle l_{m}\rangle=\frac{1}{M}\sum_{m=1}^{M}l_{m}=\frac{1}{M}\left((M-M_{T})T+\sum_{l_{m}<T}l_{m}\right) \tag{28}\] we can divide (27) by the number \(M\) of observations and reorder the equation to obtain \[\frac{\langle l_{m}\rangle M}{M_{T}}=\frac{\alpha}{1-\alpha}. \tag{29}\] However, this is exactly the expected value \(\mathrm{E}\left[t\right]\) of time steps for the full geometric distribution (21).
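A quick numerical sanity check of the censored estimator (17)/(29): draw geometric transition times, censor them at the observation window \(T\), and compare the estimate with the true mean \(\alpha/(1-\alpha)\). The parameter values are arbitrary.

```python
# Monte Carlo check of T_esc = <l_m> M / M_T for geometrically distributed times.
import numpy as np

rng = np.random.default_rng(2)
alpha, M, T = 0.999, 10_000, 2_000
t = rng.geometric(1 - alpha, size=M) - 1     # geometric on {0, 1, ...}
l = np.minimum(t, T)                         # censored observations l_m = min(t_m, T)
M_T = np.sum(t < T)                          # runs that synchronized before time T
print(l.mean() * M / M_T, alpha / (1 - alpha))   # estimate vs. true mean (= 999)
```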
2310.02918
Learning-Aided Warmstart of Model Predictive Control in Uncertain Fast-Changing Traffic
Model Predictive Control lacks the ability to escape local minima in nonconvex problems. Furthermore, in fast-changing, uncertain environments, the conventional warmstart, using the optimal trajectory from the last timestep, often falls short of providing an adequately close initial guess for the current optimal trajectory. This can potentially result in convergence failures and safety issues. Therefore, this paper proposes a framework for learning-aided warmstarts of Model Predictive Control algorithms. Our method leverages a neural network based multimodal predictor to generate multiple trajectory proposals for the autonomous vehicle, which are further refined by a sampling-based technique. This combined approach enables us to identify multiple distinct local minima and provide an improved initial guess. We validate our approach with Monte Carlo simulations of traffic scenarios.
Mohamed-Khalil Bouzidi, Yue Yao, Daniel Goehring, Joerg Reichardt
2023-10-04T16:00:21Z
http://arxiv.org/abs/2310.02918v1
# Learning-Aided Warmstart of Model Predictive Control in Uncertain Fast-Changing Traffic ###### Abstract Model Predictive Control lacks the ability to escape local minima in nonconvex problems. Furthermore, in fast-changing, uncertain environments, the conventional warmstart, using the optimal trajectory from the last timestep, often falls short of providing an adequately close initial guess for the current optimal trajectory. This can potentially result in convergence failures and safety issues. Therefore, this paper proposes a framework for learning-aided warmstarts of Model Predictive Control algorithms. Our method leverages a neural network based multimodal predictor to generate multiple trajectory proposals for the autonomous vehicle, which are further refined by a sampling-based technique. This combined approach enables us to identify multiple distinct local minima and provide an improved initial guess. We validate our approach with Monte Carlo simulations of traffic scenarios.

## I Introduction

Model Predictive Control (MPC) has established itself as a popular technique in Motion Planning and Control for autonomous driving. This is attributed to its inherent capability to simultaneously account for collision constraints, dynamic feasibility, actuator constraints, and comfort criteria, enabling the generation of optimal trajectories [1, 2, 3]. A notable variant, which we also use, is Model Predictive Contouring Control (MPCC) [4, 5, 6]. It generates consistent lateral and longitudinal control signals and does not require a separate desired velocity specification. However, due to constrained computational resources, MPC relies on local optimization, employing simple models and limited planning horizons, potentially resulting in suboptimal or locally optimal (short-term) solutions. Conversely, learning-based approaches can excel where MPC falls short, e.g., in efficiency and adaptability in complex tasks, without needing physical models [7, 8]. However, they face challenges in interpretability and reliability, especially in unexplored corner cases. This can potentially lead to hazardous behavior, hindering their suitability for critical applications. Hence, due to their complementary attributes, several works propose to combine MPC with learning-based methods. Learning-based MPC can be broadly categorized into two groups. The first group employs a learning-based system to substitute or enhance components of MPC. Simplest are approaches that learn the weights of the cost function [9, 10], as these significantly impact MPC performance and can be challenging to tune manually. A similar technique is cost shaping [11, 12], which adjusts the cost function at each time step, mitigating MPC's limitation of finding only short-term optimal solutions. Other methods learn the state-space model or parts of it [13, 14, 15] to handle unknown or complex dynamics. The second group learns high-level policies whose trajectories are further refined with low-level MPC. Methods such as [16, 17] provide high-level plans as a reference to the MPC. Similarly, the predictive safety filter [18, 19] evaluates constraint satisfaction of the trajectory of the learned system, potentially generating an output that minimizes the discrepancy from it while adhering to constraints. Our approach of a learning-based warmstart also falls into this group, together with [20, 21, 22, 23]. Here, the learned system offers an initial guess to the MPC optimizer, which is then further optimized by the MPC.
This concept is particularly compelling given the inherent limitations of local optimizers/MPC, which become apparent in the context of autonomous driving in complex scenarios. The first well-known deficiency of a local optimizer is that if the initial guess is far from the optimum, many steps are needed until it converges, or the optimization may not converge at all. The strategy of MPC for providing an initial guess is to use the optimal trajectory calculated in the last timestep, assuming little change between the previous and current timestep. This strategy fails in uncertain and rapidly changing environments where the optimization problem can vary considerably between timesteps (see Fig. 1). Fig. 1: Example where our warmstart improves convergence quality compared to warmstarting with the solution of the last timestep \(t_{k-1}\), due to a change of the optimization problem (changing traffic-participant behavior prediction). For instance, due to the unknown intentions of human drivers, predictions of how traffic participants act may vary significantly between timesteps. These abrupt changes can lead the optimizer to struggle to recover or to find a proper solution in time, potentially resulting in fatal behavior where, e.g., collisions cannot be avoided. In the event of optimizer failure, a common approach is to use the same control input as in the last timestep. However, when the environment is changing rapidly, the scene can change even more in the following timestep, and using the solution from two timesteps ago only exacerbates the problem. The second deficiency, only being able to find a local optimum, is especially problematic in dense traffic with (moving) obstacles. These obstacles are generally the cause of nonconvex problems with multiple local minima. Some of these minima lead to undesired behavior, such as overly conservative driving or peculiar overtaking maneuvers. This problem is often mitigated by decomposing the planning problem into a global and a local planner [24, 25]. In this setup, the global planner generates a rough trajectory with significant simplifications for real-time feasibility, potentially sacrificing optimality. Topology-based planners such as [26, 27, 28] can also address this weakness, and we adopt the concept of homotopy classes from them. However, these planners do not address the previously mentioned weakness of conventional warmstarting in fast-changing environments that our method tackles. Moreover, previous works on learning-based warmstarting [20, 21, 22, 23] are mainly designed for simple repeating tasks. For example, they do not consider constraints, especially moving obstacles, or are trained on a limited number of self-generated scenarios (which additionally require retraining when the weights of the MPC cost function change). The main contributions of our work are summarized as follows: * Designing a motion planner based on Model Predictive Contouring Control with Artificial Potential Fields * Developing a learning-aided warmstart strategy which improves convergence quality in fast-changing unknown scenarios and helps to prevent undesired local minima by leveraging the concept of homotopy classes * Devising a time-efficient framework with a novel trajectory refinement process which makes arbitrary multimodal trajectory predictors learned on real-world datasets easily deployable. 
## II Baseline Model Predictive Contouring Control Consider an arbitrary traffic scene with \(O\) traffic participants \(o\in\{0,..,O-1\}\) in which an autonomous vehicle (AV), denoted as \(o=0\), needs to plan and execute a safe trajectory. A reference path and the map \(M_{r}\) with road boundaries are given by sets of waypoints \(p_{l}=\{(x_{j}^{l},y_{j}^{l})\}_{j=0}^{K}\) with \(l\in\{ref,lb,rb\}\), possibly provided by a high-level route planner. The reference path is parameterized by the arclength \(\theta\) and augmented by the path orientation \(\psi_{ref}\) and the distances to the left and right road boundary \(d_{lb}\), \(d_{rb}\): \(\mathcal{P}_{ref}:[0,\theta_{max}]\rightarrow\mathbb{R}^{2}\times[0,2\pi]\times\mathbb{R}^{2},\ \theta\mapsto(x_{ref}(\theta),y_{ref}(\theta),\psi_{ref}(\theta),d_{lb}(\theta),d_{rb}(\theta))\), where \(\theta_{max}\) is the maximum arclength. We model the motion of the AV by a differential equation \(\dot{\mathbf{z}}(t)=f(\mathbf{z}(t),\mathbf{u}(t))\) using the kinematic bicycle model: \[\dot{\mathbf{z}}=\left[v\cos(\psi),\ v\sin(\psi),\ v\frac{\tan(\delta)}{l},\ a,\ j,\ \dot{\delta}\right]^{\top} \tag{1}\] where \(\mathbf{z}=[x,y,\psi,v,a,\delta]^{\top}\) is the state vector, \(\mathbf{u}=[j,\dot{\delta}]^{\top}\) is the control input vector, and the velocity, acceleration, steering angle, jerk, steering angle rate, and wheelbase are denoted as \(v,a,\delta,j,\dot{\delta},l\), respectively. For the MPC formulation, the dynamic model is discretized to \(\mathbf{z}_{k+1}=f(\mathbf{z}_{k},\mathbf{u}_{k})\) with the sampling time \(T_{s}\). The MPCC aims to maximize path progress while minimizing path error, balancing the two objectives. For that, we approximate the arclength (i.e. progress on the path) \(\theta_{k}\), the lag error \(\hat{e}_{k}^{l}\) and the contouring error \(\hat{e}_{k}^{c}\) (see Fig. 2): \[\theta_{k+1}=\theta_{k}+v_{k}^{p}T_{s} \tag{2}\] \[\left[\begin{array}{c}\hat{e}_{k}^{c}\\ \hat{e}_{k}^{l}\end{array}\right]=\left[\begin{array}{cc}\sin(\psi_{ref}\left(\theta_{k}\right))&-\cos(\psi_{ref}\left(\theta_{k}\right))\\ -\cos(\psi_{ref}\left(\theta_{k}\right))&-\sin(\psi_{ref}\left(\theta_{k}\right))\end{array}\right]\Delta\mathbf{p}_{ref}\] where \(\Delta\mathbf{p}_{ref}=[x-x_{ref}\left(\theta_{k}\right),\ y-y_{ref}\left(\theta_{k}\right)]^{\top}\) and \(v_{k}^{p}\) is the virtual speed on the path. Eq. 2 is appended to the dynamic model, i.e. \(v_{k}^{p}\) is an additional control input and \(\theta_{k}\) a further state. This is utilized to define the running cost: \[J_{k}=\left[\begin{array}{c}\hat{e}_{k}^{c}\\ \hat{e}_{k}^{l}\end{array}\right]^{\top}Q\left[\begin{array}{c}\hat{e}_{k}^{c}\\ \hat{e}_{k}^{l}\end{array}\right]-q_{v}v_{k}^{p}+\mathbf{u}_{k}^{\top}R\mathbf{u}_{k} \tag{3}\] where \(Q,q_{v},R\) are the respective weights. We extend the formulation of [4] to account for moving obstacles and lanes using the potential field method [2]: \[\begin{split} J_{k}^{p}&=q_{ob}\sum_{o=1}^{O}\exp\left(-\left(\frac{\Delta x_{k}^{o}}{l^{o}}\right)^{2}-\left(\frac{\Delta y_{k}^{o}}{w^{o}}\right)^{2}\right)\\ &+q_{lm}\sum_{l=0}^{L}\exp\left(-\left(\frac{d_{lm}^{l}-\hat{e}_{k}^{c}\left(\theta_{k}\right)}{\sigma}\right)^{2}\right)\end{split} \tag{4}\] where \(\Delta x_{k}^{o},\Delta y_{k}^{o}\) are the distances to the respective obstacle, \(d_{lm}^{l}\) is the signed distance from the reference path to the respective of the \(L\) lane markers, \(\sigma\) is a scaling factor, \(l^{o},w^{o}\) are conservative estimates of the length and width of the obstacle, and \(q_{ob},q_{lm}\) are the respective weights. Fig. 2: Illustration of the lag error \(e_{k}^{l}\), contouring error \(e_{k}^{c}\) (where \(\theta^{r}\) is the real arclength), and the potential field on obstacles and lane markers. 
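For concreteness, the following minimal sketch evaluates the error transformation of Eq. (2) and the potential-field penalty of Eq. (4) pointwise. It is an illustrative Python re-implementation under stated assumptions, not the authors' code; the callables `x_ref`, `y_ref`, `psi_ref` and all default weights are placeholders.

```python
# Illustrative sketch of Eqs. (2) and (4); parameter values are assumptions.
import numpy as np

def contouring_lag_errors(x, y, theta, x_ref, y_ref, psi_ref):
    """Contouring error e_c and lag error e_l at path parameter theta."""
    dx, dy = x - x_ref(theta), y - y_ref(theta)
    psi = psi_ref(theta)
    e_c = np.sin(psi) * dx - np.cos(psi) * dy    # lateral deviation from path
    e_l = -np.cos(psi) * dx - np.sin(psi) * dy   # deviation along the path
    return e_c, e_l

def potential_field_cost(x, y, e_c, obstacles, lane_offsets,
                         q_ob=1.0, q_lm=0.1, sigma=0.5):
    """Soft penalty of Eq. (4). obstacles: (x_o, y_o, length, width) tuples;
    lane_offsets: signed distances d_lm of lane markers from the path."""
    J = sum(q_ob * np.exp(-((x - xo) / lo) ** 2 - ((y - yo) / wo) ** 2)
            for xo, yo, lo, wo in obstacles)
    J += sum(q_lm * np.exp(-((d - e_c) / sigma) ** 2) for d in lane_offsets)
    return J
```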
Additionally, for the hard constraints we employ ellipses to approximate the area occupied by the obstacles and utilize a union of three circles to approximate the ego vehicle's occupied space. With that, we approximate the Minkowski sum as described in [5]. The trajectories of the obstacles are provided by the prediction module, which will be introduced in the next section. To ensure that the AV stays within the road boundaries, we impose the linear constraints \[-d_{lb}\left(\theta_{k}\right)\leq\hat{e}_{k}^{c}\left(\theta_{k}\right)\leq d_{rb}\left(\theta_{k}\right). \tag{5}\] Box constraints are imposed on the control inputs \(j_{k}\in[j_{\text{min}},j_{\text{max}}]\) and \(\dot{\delta}_{k}\in[\dot{\delta}_{\text{min}},\dot{\delta}_{\text{max}}]\). Additionally, we limit \(\delta\), \(a\), and the lateral acceleration to ensure that the trajectories are feasible for the vehicle [29]. This leaves us with the nonconvex optimization problem: \[\min_{\mathbf{Z},\mathbf{U}} \sum_{k=0}^{N-1}(J_{k}(\mathbf{z}_{k},\mathbf{u}_{k})+J_{k}^{p}(\mathbf{z}_{k}))+J_{N}(\mathbf{z}_{N}) \tag{6}\] \[s.t.\ \mathbf{z}_{k+1}=f(\mathbf{z}_{k},\mathbf{u}_{k})\] \[\mathbf{z}_{0}=\mathbf{z}(0)\] \[\mathbf{z}_{k}\in\mathcal{Z},\ \mathbf{u}_{k}\in\mathcal{U}\] where \(\mathcal{Z}\) is the set of state constraints imposed by road boundaries, obstacles, and lateral acceleration, \(\mathcal{U}\) is the set of box constraints on the control inputs, and \(J_{N}(\mathbf{z}_{N})\) is the terminal cost. The trajectories planned by the MPC are denoted by \(\mathbf{\tau}=[\mathbf{Z},\mathbf{U}]^{\top}\) with \(\mathbf{Z}=[\mathbf{z}_{0},...,\mathbf{z}_{N}]^{\top}\) and \(\mathbf{U}=[\mathbf{u}_{0},...,\mathbf{u}_{N-1}]^{\top}\), where \(N\) is the number of prediction steps. ## III Learning-aided Warmstart This section introduces our learning-aided approach, which we append as a warmstart method to the baseline MPCC introduced in Sec. II (see Fig. 3). The warmstart aims to provide an initial guess \(\mathbf{\tau}^{0}=[\mathbf{Z}^{0},\mathbf{U}^{0}]\) sufficiently close to a satisfactory local optimum of the current timestep \(t_{k}\), i.e., \(||\mathbf{\tau}^{0}-\mathbf{\tau}_{k}^{*}||\leq\mathbf{\epsilon}\), such that the MPCC optimizer locally converges to this optimum. The conventional approach of warmstarting with the optimal trajectory of the last timestep \(\mathbf{\tau}_{k-1}^{*}=[\mathbf{Z}_{k-1}^{*},\mathbf{U}_{k-1}^{*}]\) is still employed in our framework: we compare it to the learning-aided output in terms of minimum cost at timestep \(k\), i.e., considering the new information, e.g., about obstacle motion. This allows for enhancing the convergence quality while still maintaining an upper bound for the cost provided by \(\mathbf{\tau}_{k-1}^{*}\). ### _Motion Predictor for Trajectory Proposals_ Motion prediction models reason about the map, the historical trajectories of objects, and their interactions to forecast objects' future movement. 
However, determining the intentions of other traffic participants, considering the various choices an agent can make (e.g. whether a car will overtake or follow a leading vehicle), is challenging. To address this challenge, many learning-based motion prediction models opt to provide _multimodal_ predictions. Motion Transformer (MTR) [30] and Wayformer [31] are examples of such multimodal predictors trained on large-scale motion prediction datasets, such as Waymo Open Motion (WO) [32]. In our method, we employ MTR, which outputs a Gaussian Mixture Model (GMM) for the object's future position \(\mathcal{N}(\mathbf{\mu}^{m},\mathbf{\Sigma}_{in}^{m})\) at every timestep. Each component \(m\in\mathcal{M}\) of this mixture corresponds to one predicted mode. We capitalize on the necessity of the predictor for obstacle prediction1 and reutilize it for predicting the ego trajectory. The aim of our approach is to leverage the multimodal output of the predictor to identify multiple local optima and select the best one. To elaborate on that, we introduce the concept of homotopy classes in the context of motion planning [33]. Footnote 1: In this study, we use the most probable obstacle prediction. Planning with multimodal obstacle predictions remains future work. **Definition 1**.: _Two trajectories connecting the same start and end position belong to the same homotopy class if they can be continuously deformed into each other without intersecting an obstacle. The set of all trajectories that are homotopic to each other is denoted as a homotopy class._ According to Definition 1, homotopic trajectories share the same start and end point. Due to the initial state constraint, the trajectories always share the same start by definition. However, we relax the end point requirement as suggested in [34]. Fig. 3: The learning-aided warmstart framework for motion planning and control using a multimodal predictor. With the assumption that obstacles are the cause of the existence of multiple minima, it follows that all trajectories \(\mathbf{\tau}_{i}\in\mathcal{A}\) are homotopic, where \(\mathcal{A}=\{\mathbf{\tau}\in\mathcal{X}\mid J(\mathbf{\tau}^{*})\leq J(\mathbf{\tau}_{i}),\ \mathbf{\tau}_{i}=\mathbf{\tau}^{*}+\mathbf{\epsilon},\ \mathbf{\epsilon}\geq 0\}\) denotes the attractive vicinity of the local optimum \(\mathbf{\tau}^{*}\) [27]. Thus, the aim of the learning-aided warmstart can be further specified as providing an initial trajectory from the right homotopy class. One of the main causes of multimodality in motion prediction is the interaction with other traffic participants. Consequently, different modes often correspond to different homotopy classes. Therefore, we make the following assumptions: **Assumption 1**.: _Several of the predicted modes do not share the same homotopy class and cover a subset of the existing homotopy classes \(h\in\mathcal{H}\), i.e. \(|\{[m]\mid m\in\mathcal{M}\}\cap\mathcal{H}|\geq 2\)._ **Assumption 2**.: _The covariance of the components of the GMM, i.e. for the respective mode, is small enough such that trajectories drawn from the same component correspond to the same homotopy class (see Fig. 4)._ Subsequently, we introduce how to utilize and further refine these provided modes to be able to select the best one (in terms of cost) as a warmstart. ### _Bezier Curve Fitting_ Typical trajectory predictors such as MTR predict only the object position distributions at every prediction timestamp. However, we require the complete state and control input trajectories for our warmstart. 
Furthermore, our method is required to sample realistic trajectories from the given distribution to further refine the predicted trajectory. We select 5th-degree Bezier curves to fit the predicted positions. They represent the optimal solutions in terms of travel time, control effort, and jerk [35] and are thus close to the optimal vehicle trajectories output by the MPC. This smooths the often jerky predictions and allows us to calculate derivatives analytically. Further, we can perform this fit in such a way as to match the current kinematic state. This continuity constraint is not directly enforced by MTR. We perform the fitting using Bayesian Linear Regression (BLR) to output a distribution over the curve parameters from which we can sample in the next step. The 5th-degree Bezier curve \(\mathbf{c}(t)\) can be expressed as a linear combination of 6 control points \(\mathbf{P}_{j}\in\mathbb{R}^{2}\) and the Bernstein polynomials \(\phi_{j}(t):\mathbb{R}\rightarrow\mathbb{R}\): \[\mathbf{c}(t)=\sum_{j=0}^{5}\phi_{j}(t)\mathbf{P}_{j} \tag{7}\] From the temporal derivatives of the Bezier curve, we can then calculate the state and control input trajectories. Hence, we first need to estimate the control points \(\mathbf{P}_{j}^{m}\) from the output of the predictor for each mode. The initial guess should ideally satisfy the continuity constraint \(\mathbf{z}_{0}=\mathbf{z}(0)\). For this, we exploit the property of the Bezier curve that the initial conditions can be determined from the control points. The initial condition for the \(d^{\text{th}}\) derivative can be calculated as: \[\mathbf{c}^{(d)}(0)=\frac{5!}{(5-d)!}\Delta^{d}\mathbf{P}_{0} \tag{8}\] where \(\Delta^{k}\) is the forward difference operator recursively defined by \(\Delta^{k}\mathbf{P}_{i}=\Delta^{k-1}\mathbf{P}_{i+1}-\Delta^{k-1}\mathbf{P}_{i}\) with \(\Delta^{0}\mathbf{P}_{i}=\mathbf{P}_{i}\). From this, we determine the first three control points: \[\mathbf{c}(0)= [x_{0},y_{0}]^{\top},\;\;\mathbf{\dot{c}}(0)=[v_{0}\cos(\psi_{0}),v_{0}\sin(\psi_{0})]^{\top},\] \[\mathbf{\ddot{c}}(0)= [a_{0}\cos(\psi_{0})-\frac{v_{0}^{2}}{l}\tan(\delta_{0})\sin(\psi_{0}),\] \[a_{0}\sin(\psi_{0})+\frac{v_{0}^{2}}{l}\tan(\delta_{0})\cos(\psi_{0})]^{\top} \tag{9}\] We utilize these relationships for the control points as a strong Gaussian prior \(\mathcal{N}(\mathbf{P}^{m,0},\mathbf{\Sigma}^{m,0})\) for the BLR. In other words, the first three elements of \(\mathbf{P}^{m,0}\) correspond to eq. 9, and \(\mathbf{\Sigma}^{m,0}\) is derived from the tracked uncertainty of the states from the on-board sensors. For the remaining three elements of \(\mathbf{P}^{m,0}\), we employ an uninformative prior. To formulate the BLR problem we form \(\mathbf{C}^{m}=\mathbf{\Phi}^{\top}\mathbf{P}^{m}\) with the vector of control points \(\mathbf{P}^{m}\in\mathbb{R}^{12}\), the vector of Bezier curve points \(\mathbf{C}^{m}\in\mathbb{R}^{2N}\), and the basis function matrix \(\mathbf{\Phi}\in\mathbb{R}^{12\times 2N}\), as done in [36, 37]. The uncertainties of the predictions \([\mathbf{X}_{m},\mathbf{Y}_{m}]^{\top}\), i.e. the covariance \(\mathbf{\Sigma}_{in}^{m}\) output by the GMM, enter as Gaussian observation noise into the regression: \[\left[\mathbf{X}_{m},\mathbf{Y}_{m}\right]^{\top}=\mathbf{\Phi}^{\top}\mathbf{P}^{m}+\mathbf{e},\;\mathbf{e}\sim N(0,\mathbf{\Sigma}_{in}^{m}) \tag{10}\] Consequently, the posterior covariance and mean of the control points are given by: \[\mathbf{\Sigma}_{PP}^{m}= \left(\mathbf{\Phi}\left(\mathbf{\Sigma}_{in}^{m}\right)^{-1}\mathbf{\Phi}^{\top}+\left(\mathbf{\Sigma}^{m,0}\right)^{-1}\right)^{-1} \tag{11}\] \[\mathbf{P}^{m}= \mathbf{\Sigma}_{PP}^{m}\mathbf{\Phi}\left(\mathbf{\Sigma}_{in}^{m}\right)^{-1}\left[\begin{array}{c}\mathbf{X}_{m}\\ \mathbf{Y}_{m}\end{array}\right]+\mathbf{\Sigma}_{PP}^{m}\left(\mathbf{\Sigma}^{m,0}\right)^{-1}\mathbf{P}^{m,0}\] 
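The curve fit of Eqs. (7), (10), and (11) reduces to a standard conjugate Gaussian update. The sketch below is our own minimal Python rendering of this step; the explicit design matrix and all variable names are assumptions, not the paper's interface.

```python
# Bayesian linear regression for Bezier control points (Eqs. 7, 10, 11).
import numpy as np
from math import comb

def bernstein_basis(ts, degree=5):
    """Rows: timestamps in [0, 1]; columns: Bernstein polynomials phi_j."""
    return np.stack([comb(degree, j) * ts**j * (1 - ts)**(degree - j)
                     for j in range(degree + 1)], axis=1)

def blr_control_points(ts, xy, Sigma_in, P0, Sigma0):
    """Posterior mean/covariance of the stacked control-point vector (12-dim).

    ts: (N,) timestamps scaled to [0, 1]; xy: (2N,) stacked [x; y] targets;
    Sigma_in: (2N, 2N) GMM noise covariance; P0, Sigma0: Gaussian prior.
    """
    B = bernstein_basis(ts)                 # (N, 6)
    A = np.kron(np.eye(2), B)               # (2N, 12) block-diagonal design
    Sin_inv = np.linalg.inv(Sigma_in)
    S0_inv = np.linalg.inv(Sigma0)
    Sigma_pp = np.linalg.inv(A.T @ Sin_inv @ A + S0_inv)
    P_mean = Sigma_pp @ (A.T @ Sin_inv @ xy + S0_inv @ P0)
    return P_mean, Sigma_pp
```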
### _Control Point Sampling and Cost-Weighted Averaging_ Simply calculating the states and control inputs from each mode output by the predictor often leads to an ineffective warmstart. Even if the best homotopy class is chosen from these modes, it can still yield a solution far from the optimum, resulting in a prolonged convergence time. Additionally, comparing modes to select the best homotopy class based on the cost function is inaccurate: for two trajectories \(\mathbf{\tau}_{h,1},\mathbf{\tau}_{h,2}\) in two different homotopy classes, \(J(\mathbf{\tau}_{h,1})>J(\mathbf{\tau}_{h,2})\) does not necessarily imply \(J(\mathbf{\tau}_{h,1}^{*})>J(\mathbf{\tau}_{h,2}^{*})\), since \(\mathbf{\tau}_{h,1}\) and \(\mathbf{\tau}_{h,2}\) can have different distances to their local optima \(\mathbf{\tau}_{h,1}^{*}\) and \(\mathbf{\tau}_{h,2}^{*}\). Our solution for refining the trajectories is to utilize the distributions \(\mathcal{N}(\mathbf{P}^{m},\mathbf{\Sigma}_{PP}^{m})\) from eq. 11 in order to sample \(S\) Bezier curves (see Fig. 4). For each sample \(\tilde{\mathbf{P}}_{s}^{m}\) and the mean \(\mathbf{P}^{m}\), we compute the cost function and obtain our output control points through a cost-weighted average: \[\bar{\mathbf{P}}^{m}=\sum_{s=0}^{S}w_{s}\tilde{\mathbf{P}}_{s}^{m}\text{ with }w_{s}=\frac{\text{e}^{-\lambda J(\tilde{\mathbf{P}}_{s}^{m})}}{\sum_{s=0}^{S}\text{e}^{-\lambda J(\tilde{\mathbf{P}}_{s}^{m})}} \tag{12}\] where \(\lambda\) is a tunable parameter. The chosen weighting factor \(w_{s}\) has the advantageous property of assigning significantly less influence to samples with a markedly higher cost than the minimum-cost sample, i.e., \(J(\tilde{\mathbf{P}}_{i}^{m})\gg J(\tilde{\mathbf{P}}_{j}^{m})\) implies \(w_{i}\to 0\) (softmin normalization [38]). This implies that, in essence, we only give substantial consideration to a limited subset of samples within a similar cost range. Furthermore, recall Assumption 2; consequently, we assume the samples lie in the attractive vicinity of a local minimum. Provided the samples are well distributed around the region of convexity of this local optimum, taking the weighted average of the trajectories gives us a value inside the area spanned by the sample points. Hence, the output is generally a trajectory closer to the minimum. We execute this process for each mode. Subsequently, the costs of all modes are compared, and the best one is employed as the warmstart. While this still does not guarantee the selection of the homotopy class of the global optimum, it at least allows us to choose a satisfactory local minimum. In autonomous driving, various maneuvers are often similarly satisfactory, and only undesired local minima must be prevented. The detailed steps of the trajectory refinement are outlined in Algorithm 1. 
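A compact version of this refinement step, assuming the BLR posterior from above and a user-supplied cost function standing in for the MPCC cost \(J\), could look as follows (a sketch, not the deployed GPU-parallel implementation):

```python
# Sample control points from the posterior and apply the softmin-weighted
# average of Eq. (12). `cost_fn` is a placeholder for the MPCC cost.
import numpy as np

def refine_mode(P_mean, Sigma_pp, cost_fn, n_samples=32, lam=1.0, rng=None):
    rng = np.random.default_rng(rng)
    samples = rng.multivariate_normal(P_mean, Sigma_pp, size=n_samples)
    samples = np.vstack([P_mean, samples])     # include the mean itself
    costs = np.array([cost_fn(P) for P in samples])
    # Softmin weights: markedly more expensive samples get weight ~ 0.
    logits = -lam * (costs - costs.min())      # shift for numerical stability
    w = np.exp(logits) / np.exp(logits).sum()
    P_bar = w @ samples                        # cost-weighted average
    return P_bar, cost_fn(P_bar)
```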
It is important to note that this approach supports parallel computation due to the parallel nature of sampling and the independence of the trajectories from one another, i.e., it can take advantage of the parallel processing capabilities of modern GPUs, making it highly efficient.
```
Input: Measured state values \(\mathbf{z}_{k}=[x_{k},y_{k},\psi_{k},v_{k},a_{k},\delta_{k},\theta_{k}]^{\top}\),
       optimal trajectory of the last timestep \(\mathbf{Z}_{k-1}^{*}\), \(\mathbf{U}_{k-1}^{*}\),
       map information \(\mathbf{M}_{r}\), reference path \(\mathcal{P}_{ref}\),
       pose history \(\mathbf{\eta}_{o}^{k}\) for all agents \(o\), with \(o=0\) denoting the ego vehicle
Output: Initial guess for the MPCC \(\mathbf{Z}^{0}\), \(\mathbf{U}^{0}\)
Initialize: number of refinement samples \(S\), number of used modes \(M\)
// Predict trajectories \(\mathbf{\xi}_{o}^{m}=[\mathbf{x},\mathbf{y},\mathbf{\sigma}_{x},\mathbf{\sigma}_{y}]\)
\(\mathbf{\xi}\leftarrow\mathcal{MTR}(\mathbf{\eta}^{k},\mathbf{M}_{r})\)
foreach \(m\in\{1,...,M\}\) do
    // Fit the predicted ego trajectory to a Bezier curve
    \(\mathbf{P}^{m},\mathbf{\Sigma}_{PP}^{m}\leftarrow\text{BayesReg}(\mathbf{\xi}_{0}^{m},\mathbf{z}_{k})\)  // from (11)
    // Sample from \(\mathcal{N}(\mathbf{P}^{m},\mathbf{\Sigma}_{PP}^{m})\)
    \(\tilde{\mathbf{P}}_{1}^{m},\ldots,\tilde{\mathbf{P}}_{S}^{m}\sim\mathcal{N}(\mathbf{P}^{m},\mathbf{\Sigma}_{PP}^{m})\)
    foreach \(s\in\{1,...,S\}\) do
        // Calculate states, control inputs, and cost
        \(\mathbf{Z}_{s}^{m},\mathbf{U}_{s}^{m},J_{s}^{m}\leftarrow\mathcal{J}(\tilde{\mathbf{P}}_{s}^{m},\mathcal{P}_{ref},\mathbf{\xi}_{1:O}^{0})\)
    // Calculate the cost-weighted average of the samples
    \(\bar{\mathbf{P}}^{m}\leftarrow\) from (12)
    // Calculate states, control inputs, and cost
    \(\mathbf{Z}^{m},\mathbf{U}^{m},J^{m}\leftarrow\mathcal{J}(\bar{\mathbf{P}}^{m},\mathcal{P}_{ref},\mathbf{\xi}_{1:O}^{0})\)
// Select the mode with minimal cost
\(m^{*}\leftarrow\arg\min_{m}J^{m}\)
if \(J^{m^{*}}\leq\) cost of \(\mathbf{Z}_{k-1}^{*},\mathbf{U}_{k-1}^{*}\) at timestep \(k\) then
    return \(\mathbf{Z}^{0}\leftarrow\mathbf{Z}^{m^{*}},\ \mathbf{U}^{0}\leftarrow\mathbf{U}^{m^{*}}\)
else
    return \(\mathbf{Z}^{0}\leftarrow\mathbf{Z}_{k-1}^{*},\ \mathbf{U}^{0}\leftarrow\mathbf{U}_{k-1}^{*}\)
```
**Algorithm 1** Learning-aided Warmstart ## IV Performance Evaluation We compare our MPCC with learning-aided warmstart to the baseline MPCC with conventional warmstart in three experiments. The first two experiments serve as illustrative examples to highlight the two strengths of our approach. The last experiment entails a Monte Carlo simulation of random highway merging scenarios to provide statistical results (see Tab. I). Experiment I involves a scenario with two lanes, where the left lane accommodates oncoming traffic but allows for overtaking. This scenario is well suited to showcase the capability of our framework to escape undesired local minima, as the presence of other traffic participants introduces nonconvexity into the optimization problem. In [34, 28], it is demonstrated that a planner in this scenario may converge towards several distinct local minima/homotopy classes. In our case, the learning-aided warmstart leads to a different behavior than the MPCC without warmstart (see Fig. 5); i.e., the two planners converge towards two distinct local optima. The costs for our planner are significantly lower than those for the baseline planner (see Fig. 5). 
This figure also provides a comparison of the control input trajectories and the minimum time-to-collision (TTC) for both planners in this scene, displaying the shortcomings of the local minimum the baseline converged to. Experiment II involves a scenario where an obstacle crosses the path of the ego vehicle. Initially, the ego vehicle is unaware of this occurrence for the first few moments, causing a sudden shift in the optimization problem for the planner. Such a situation can arise in various scenarios, for instance, when the obstacle is initially occluded or when predictions change (due to unknown intentions of traffic participants or a new decision of an object). Fig. 4: Intuition of the trajectory sampling around the fitted trajectory \(\mathbf{\tau}_{p}\) of the predictor, which is assumed to fall in the attractive vicinity of a local minimum. In this case, the planner must be capable of finding a solution for the new optimization problem in real-time, even though it differs distinctly from that of the last timestep. Hence, we impose a maximum solving time constraint. However, we set this limit relatively high at \(t_{\text{max}}=0.5s\), since there is potential to accelerate the MPC runtime through alternative implementations, hardware enhancements, etc. Despite this high maximum solving time, the baseline planner is unable to converge in time when the change occurs (the new event at \(t_{e}=1.6\) in Fig. 6). The red curve depicts the velocity trajectory output by the solver. This trajectory fails to satisfy both the collision constraints and the initial condition. In such cases, it is customary to utilize the solution from the last time step, which, in this example, leads to further acceleration of the ego vehicle. This behavior ultimately results in an unavoidable collision. In contrast, Fig. 6 shows that our approach can directly provide an appropriate warmstart after the event. For experiment III, we consider a highway merging scenario (see Fig. 1) and utilize the Intelligent Driver Model (IDM) [39] to simulate the behavior of the traffic participants. We generate 100 test runs by randomly sampling the parameters of the IDM model (such as desired velocity, minimum headway, etc.), as well as the initial positions and velocities of the ego vehicle and the other vehicles. As a result, we compare the rates of successful merging, the ego getting stuck in the entrance lane, and collisions. Additionally, we assess the convergence quality in terms of the percentage of successful convergence, failed convergence due to reaching the time limit, and failed convergence due to converging to a point of infeasibility. Further benchmarking parameters are the average cost and solving time2. The significant performance improvement over the baseline becomes evident when considering highway merging. Firstly, each gap between traffic participants potentially corresponds to a local minimum, where one can clearly be better than another, e.g., due to gap size. Secondly, shifts in the motion predictions of the traffic participants during merging often substantially impact the ego vehicle's plan; e.g., if a prediction changes the acceleration slightly, the optimal plan for the ego may shift from merging in front to merging behind. 
Footnote 2: Solving time data is for comparative purposes only and should not be taken as absolute. ## V Conclusions A learning-aided warmstart framework is proposed to address the local-minima and convergence issues that Model Predictive Control exhibits when the conventional warmstart strategy is used in fast-changing, uncertain environments. This framework leverages a multimodal predictor that forecasts trajectories for the traffic participants and the ego vehicle, respectively. The different ego trajectory modes are used to identify multiple homotopy classes, each associated with the attractive vicinity of a different local optimum. To achieve this, we introduced a novel sampling-based trajectory refinement approach using Bayesian Linear Regression for Bezier curve fitting to efficiently optimize the trajectories before selecting the best one as an initial guess. Our Monte Carlo analysis demonstrates that our framework significantly improves the convergence quality in highway merging scenarios. Fig. 5: Illustration of experiment I. Comparison of the local minimum that the baseline converged to with that of our framework. \begin{table} \begin{tabular}{l||c|c|c||c|c|c||c|c} \hline & \multicolumn{3}{c||}{**Merging Execution**} & \multicolumn{3}{c||}{**Convergence Quality**} & & \\ \cline{2-7} & Success & Aborted & Collision & Success & Max. Time Exceeded & Converged to Infeasibility & Average Cost & Solving Time (std) \\ \hline **Baseline MPCC** & 73\% & 11\% & 16\% & 69.3\% & 13.6\% & 17.1\% & 4995 & 106 ms (40 ms) \\ \hline **Our Framework** & **88\%** & **4\%** & **8\%** & **82.7\%** & **7.0\%** & **10.3\%** & **3737** & **94 ms (33 ms)** \\ \hline \end{tabular} \end{table} TABLE I: Results of experiment III. Comparison of the baseline and the learning-aided framework using Monte Carlo analysis. Fig. 6: Illustration of experiment II. Comparison of the baseline to our framework when an event occurs that changes the optimization problem between two timesteps.
2301.12292
Zero-shot causal learning
Predicting how different interventions will causally affect a specific individual is important in a variety of domains such as personalized medicine, public policy, and online marketing. There are a large number of methods to predict the effect of an existing intervention based on historical data from individuals who received it. However, in many settings it is important to predict the effects of novel interventions (e.g., a newly invented drug), which these methods do not address. Here, we consider zero-shot causal learning: predicting the personalized effects of a novel intervention. We propose CaML, a causal meta-learning framework which formulates the personalized prediction of each intervention's effect as a task. CaML trains a single meta-model across thousands of tasks, each constructed by sampling an intervention, its recipients, and its nonrecipients. By leveraging both intervention information (e.g., a drug's attributes) and individual features (e.g., a patient's history), CaML is able to predict the personalized effects of novel interventions that do not exist at the time of training. Experimental results on real world datasets in large-scale medical claims and cell-line perturbations demonstrate the effectiveness of our approach. Most strikingly, CaML's zero-shot predictions outperform even strong baselines trained directly on data from the test interventions.
Hamed Nilforoshan, Michael Moor, Yusuf Roohani, Yining Chen, Anja Šurina, Michihiro Yasunaga, Sara Oblak, Jure Leskovec
2023-01-28T20:14:11Z
http://arxiv.org/abs/2301.12292v4
# Zero-shot causal learning ###### Abstract Predicting how different interventions will causally affect a specific individual is important in a variety of domains such as personalized medicine, public policy, and online marketing. However, most existing causal methods cannot generalize to predicting the effects of previously unseen interventions (_e.g_. a newly invented drug), because they require data for individuals who received the intervention. Here, we consider zero-shot causal learning: predicting the personalized effects of novel, previously unseen interventions. To tackle this problem, we propose CaML, a causal meta-learning framework which formulates the personalized prediction of each intervention's effect as a task. Rather than training a separate model for each intervention, CaML trains a single meta-model across thousands of tasks, each constructed by sampling an intervention and individuals who either did or did not receive it. By leveraging both intervention information (_e.g_., a drug's attributes) and individual features (_e.g_., a patient's history), CaML is able to predict the personalized effects of unseen interventions. Experimental results on real world datasets in large-scale medical claims and cell-line perturbations demonstrate the effectiveness of our approach. Most strikingly, CaML's zero-shot performance exceeds even strong baselines which have direct access to data of the considered target interventions. ## 1 Introduction Personalized predictions about how an intervention will causally affect a specific individual are important across many high impact applications in the physical, life, and social sciences. For instance, consider a doctor deciding whether or not to prescribe a drug to a patient. Depending on the patient, the same drug could either (a) cure the disease, (b) have no effect or (c) elicit a life-threatening adverse reaction. Predicting which effect the drug will have for each patient (_i.e_. the difference between health status with versus without drug exposure) could revolutionize healthcare by enabling personalized treatments for each patient. The challenge with such _causal_ predictions is that the intervention effect is unobserved: it is impossible to observe both outcomes for the same individual (_e.g_., health status _with_ drug exposure and health status _without_ drug exposure). A variety of causal inference methods, referred to as heterogeneous treatment effect estimators, address this challenge to make such personalized causal predictions (Johansson et al., 2016; Green & Kern, 2012; Hill, 2011; Shalit et al., 2017; Alaa & Van Der Schaar, 2017; Curth & van der Schaar, 2021; Kunzel et al., 2019; Kennedy, 2020; Athey & Imbens, 2016). These methods follow a common structure: individuals are described by a set of features (\(X\), _e.g_. medical history). A large natural experiment dataset is then collected, in which some individuals received an intervention (\(W\), _e.g_., a specific drug) and others did not. One or more downstream outcomes (\(Y\), _e.g_., whether an adverse reaction occurred) are monitored for each individual. Finally, a model is trained on this data which takes as input an individual's features \(X\) and estimates the change in an outcome expected from exposure to the intervention for that specific individual. However, a critical limitation of these methods is that they require natural experiment data for each intervention (_e.g_., each drug). 
Thus, these methods are unable to generalize to novel, previously unseen interventions, which is important in many real-world applications. For instance, a single medical intervention often consists of a unique combination of multiple drugs (Tatonetti et al., 2012; Zitnik et al., 2018) for which no prior data is available in a typical hospital's electronic health records. Furthermore, when a new drug is discovered, a government policy is passed, or a web platform's design is altered, it is valuable to know which individuals it will positively and negatively affect. There is thus a need for methods that can predict the effect of an intervention in a zero-shot fashion, _i.e_., without _any_ training data from individuals who received the intervention. Generalizing to unseen interventions is especially challenging because it requires efficiently "aligning" newly observed interventions to the ones previously observed in the training data. A zero-shot causal learning framework thus requires drawing high-level analogies between the properties of interventions, linking these to low-level individual samples and their outcomes. **Present work.** Here, we propose CaML (**C**ausal **M**eta-**l**earning), a general framework for training a single meta-model to predict the effects different interventions will have on an individual. CaML is able to generalize to new interventions not seen during training. Our key insight is to frame the causal prediction for each intervention (comprising one treatment or a combination of treatments applied simultaneously1) as a separate meta-learning task. For each task observed during training, we sample a retrospective natural experiment of individuals who did and did not receive the intervention. This natural experiment data is used to estimate the causal effect of the intervention for each individual (using any off-the-shelf causal inference method), which serves as the training target for the task. Footnote 1: For clarity, we use “treatment” when referring to a fine-grained unit (_e.g._ a single drug) and “intervention” to refer to applying one or more treatments simultaneously. Figure 1: Overview of the zero-shot causal learning problem. Each individual has features (\(X\)), an intervention (\(W\)), and an outcome (\(Y\)). Lightning bolts represent treatments (_e.g._ drugs) that together compose an intervention (_e.g._ a drug combination). The personalized effect of an intervention (\(\tau\)) is always unobserved. The goal is to predict \(\tau\) from an individual’s features, for a previously unseen intervention. In order to achieve zero-shot generalization to new interventions, we include information (\(H\)) about the intervention in the task. We then train a single meta-model which fuses intervention features with individual-level features (\(X\)) to predict the intervention's effect. Our approach allows us to predict the causal effect of previously unseen interventions, _i.e._ interventions without sample-level training data, such as a newly discovered drug or a combination of drugs that the model has never seen prescribed to patients before. We refer to this capability as _zero-shot causal learning_. Our experiments across different real-world settings show that CaML is both scalable and effective, including the application to a large-scale medical dataset featuring tens of millions of patients. Most strikingly, CaML's zero-shot performance exceeds even strong baselines that were trained directly on the full test intervention at hand. We further find that CaML is capable of zero-shot generalization even under challenging conditions: when trained only on interventions consisting of single treatments, at inference time it can accurately predict the effects of interventions consisting of combinations of previously unseen treatments, such as drug combinations. Finally, we explain these findings by proving a zero-shot generalization bound. 
## 2 Related Work We discuss recent work most closely related to zero-shot causal learning and provide an extended discussion of other related work in Appendix B. While most methods for predicting the effects of interventions are unable to generalize to unseen interventions, a notable exception is recent methods which aim to predict the effect of an intervention using structured information about its attributes (Kaddour et al., 2021; Harada and Kashima, 2021). In principle, these methods can also be used for zero-shot predictions. The main drawback of these existing methods is that they are restricted to specific estimators (Nie and Wager, 2021; Kunzel et al., 2019) (_i.e._ the Robinson decomposition and the S-learner, respectively), which have been shown to underperform in many settings (Kunzel et al., 2019; Kennedy, 2020; Chernozhukov et al., 2018). In contrast, CaML is agnostic to the choice of estimation strategy. This design choice allows our framework to achieve strong zero-shot performance (Section 6) by taking advantage of the recent and continuously growing set of methods for estimating the effects of interventions (Kunzel et al., 2019; Kennedy, 2020; Curth and van der Schaar, 2021; Konstantinov et al., 2022; Frauen and Feuerriegel, 2022). ## 3 Background and Definitions We first consider the problem of estimating treatment effects in the simplest case of a single treatment (\(W\)) and
In many applications (_e.g._ clinical trials) it is common to estimate an average treatment effect (ATE) across the population: \[\text{ATE}=\mathbb{E}_{\mathcal{P}}\Big{[}Y(1)-Y(0)\Big{]} \tag{1}\] While ATE is more straight forward to estimate, in order to achieve personalized decision-making, we seek to estimate treatment effects that are tailored to the attributes of individuals. Thus, we focus on estimating \(\tau(x)\), known as the conditional average treatment effect (CATE): \[\text{CATE}=\tau(x)=\mathbb{E}_{\mathcal{P}}\Big{[}Y(1)-Y(0)\mid X=x\Big{]} \tag{2}\] CATE is a specific instance of heterogeneous treatment effect (HTE) estimation, in which heterogeneities in treatment effects are estimated separately for each individual (based on their features \(X\)). **Identifying assumptions.** In order to estimate \(\tau(x)\) from observational data, we make standard assumptions of unconfoundedness, consistency, and overlap (Morgan and Winship, 2015). _Unconfoundedness_: there are no unobserved confounders, _i.e._\(Y(0),Y(1)\perp\!\!\!\perp W\mid X\). _Consistency_: \(Y_{i}=Y_{i}(w)\) if individual \(i\) is assigned treatment \(w\). _Overlap_: Treatment assignment is nondeterministic, such that for all \(x\) in support of \(X\): \(0<P(W=1\mid X=x)<1\). ## 4 Zero-shot causal learning We first extend the setup in Section 3 to multiple outcomes and interventions (consisting of one or more treatments), via a multi-dimensional outcome \(Y_{i}\) realized as a vector \(y\in\mathcal{Y}\subset\mathbb{R}^{m}\) and intervention assignment indicators \(W_{i}\) realized as \(w\in\mathcal{W}:=\{0,1\}^{t}\) for \(m\) outcomes and \(t\) treatments. There are thus up to possible \(2^{t}\) interventions and corresponding potential outcomes \(\{Y_{i}(w)\mid w\in\mathcal{W}\}\) and the observed outcome \(Y_{i}\) for an individual \(i\) is thus \(Y_{i}(W_{i}=w)\), abbreviated with \(Y_{i}(w)\). We aim to estimate the multi-treatment CATE, indexed by the intervention \(w\in\mathcal{W}\): \[\text{CATE}_{w}=\tau_{w}(x)=\mathbb{E}_{\mathcal{P}}\Big{[}Y(w)-Y(0)\mid X=x \Big{]}, \tag{3}\] where \(0\) represents the \(t\)-dimensional vector of zeros. The multi-treatment, multi-outcome CATE is identifiable under analogous conditions to the above. That is, for all \(w\) in \(\mathcal{W}\): \(Y(w)\perp\!\!\!\perp W\mid X\), \(Y_{i}=Y_{i}(w)\) if individual \(i\) is assigned intervention \(w\), and \(0<P(W=w\mid X=x)<1\). In many real-world settings (_e.g._ drugs, online A/B tests) training data to learn \(\tau_{w}(x)\) directly is limited or nonexistent for a large subset of \(\mathcal{W}\), because new interventions are frequently introduced, and an intervention can consist of multiple treatments given simultaneously that exhibit interaction effects, which amount to a combinatorial explosion of possible interventions. These settings motivate few-shot and zero-shot methods to estimate CATE. Here, we focus on the zero-shot setting, as it is the most challenging and useful setting, and it is unclear if zero-shot CATE estimation is possible in many real-world settings. We partition the intervention space \(\mathcal{W}\) into two subsets \(\mathcal{W}_{s}\) (interventions seen during training) and \(\mathcal{W}_{u}\) (interventions unseen during training). 
_Problem 1_ (Zero-shot CATE estimation).: **Given**\(n\) observations drawn from \(\mathcal{P}\) and a set of interventions \(\mathcal{W}_{u}\), all of which are unseen in samples \(1,\ldots,n_{s}\) (training) but are only observed in samples \(n_{s}+1,\ldots,n\) (testing), **estimate**\(\tau_{w^{\prime}}(x^{\prime})\) for all testing samples \((x^{\prime},w^{\prime})\), where \(w^{\prime}\in\mathcal{W}_{u}\), using only the samples \(1,\ldots,n_{s}\) for training. We note that the zero-shot CATE estimation problem here (Figure 1) is distinct from the zero-shot learning settings in prior work. Prior work (Larochelle et al., 2008; Romera-Paredes and Torr, 2015; Jiang et al., 2017; Bucher et al., 2017; Chang et al., 2008; Palatucci et al., 2009) typically aims to learn a classifier \(f:\mathcal{X}\rightarrow\mathcal{Y}\) to predict values (_i.e._ classes) in \(\mathcal{Y}\) that were unseen during training. By contrast, we aim to learn a function \(f:\mathcal{X}\times\mathcal{W}\rightarrow\mathcal{T}_{w}\) to predict the effect of a previously unseen treatment \(w\in\mathcal{W}_{u}\). We propose a novel framework for estimating CATE across multiple interventions, including ones that were never encountered during training. Our framework consists of three key components (Figure 2). First, we formulate CATE estimation as a meta-learning problem in which each task corresponds to the CATE estimation for a unique intervention. Each task is constructed by sampling a natural experiment of individuals who did and did not receive the intervention, and tasks are augmented with a priori information (\(H\)) describing the intervention (_e.g._ a drug's attributes). Second, we compute a noisy CATE label \(\tilde{\tau}\) using any off-the-shelf estimator (\(\tilde{\tau}\) is referred to as a pseudo-outcome in the causal inference literature (Curth and van der Schaar, 2021)). Finally, we train a single meta-model to predict these labels using individual-level (\(X\)) and intervention-level (\(H\)) information, such that it is able to generalize to unseen tasks. ### Meta-dataset We formulate CATE estimation as a meta-learning problem. For this, each task refers to the CATE estimation for a distinct intervention (one or more treatments applied simultaneously). For instance, in a clinical setting, each treatment is a single drug, and an intervention consists of one or more drugs prescribed at the same time. We consider up to \(C\) combined treatments at once, and thus there are \(q=\sum_{c=1}^{C}\binom{t}{c}\) unique tasks. However, the observed number of tasks \(o\) is typically much smaller, _i.e._, \(o\ll q\). Interventions as well as tasks in our meta-dataset are jointly indexed by \(j\in\mathbb{N}\) with \(1\leq j\leq o\), such that we can refer to the \(j\)-th intervention assignment indicator with \(w_{j}\). We then construct a meta-dataset \(D\) in the following way: \[D =\left\{D^{(j)}\right\}_{j=1}^{o},\text{ with} \tag{4}\] \[D^{(j)} =\left(D^{(j)}_{\text{treated}}\cup D^{(j)}_{\text{control}},H^{(j)}\right)\text{ and}\] (5) \[D^{(j)}_{\text{treated}} =\left\{(X_{i},Y_{i},W_{i})\mid W_{i}=w_{j}\right\},\] (6) \[D^{(j)}_{\text{control}} =\left\{(X_{i},Y_{i},W_{i})\mid W_{i}=0\right\}, \tag{7}\] where \(D^{(j)}\) denotes the natural experiment dataset for task \(j\), composed of a treated group (instances which received the intervention, _i.e._\(W_{i}=w_{j}\)) and a control group (instances which did not receive any intervention, _i.e._\(W_{i}=0\)). Each sample \(i\) represents an individual, for which the quantities \((Y_{i},X_{i},W_{i})\) are collected as introduced in Section 3. In practice, we down-sample both groups (_i.e._ to 1 million samples for the treated and control groups) in our large-scale experiments. In addition, for each task dataset \(D^{(j)}\) we include information describing the intervention: \(H^{(j)}\in\mathbb{R}^{e}\). This is necessary for zero-shot generalization to new interventions (Kaddour et al., 2021; DeJong and Mooney, 1986; Yasunaga et al., 2021; Koh et al., 2021). The specific form of intervention information will vary depending on the problem domain. For instance, if the intervention is a block of text (Veitch et al., 2020; Weld et al., 2022; Nilforoshan and Wu, 2018), \(H^{(j)}\) could simply be a text embedding produced by a language model (Devlin et al., 2019; Yasunaga et al., 2022). For biomedical treatments (_e.g._ drugs, procedures), each treatment can be represented as a node in a knowledge graph (Chandak et al., 2022; Li et al., 2022), for which an embedding can be obtained via a variety of knowledge graph embedding methods (Yang et al., 2014; Bordes et al., 2013). Domain experts may also engineer specific features to include in \(H^{(j)}\), such as the category of treatment from an ontology (_e.g._ ATC codes for drugs). 
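For illustration, constructing a single task dataset \(D^{(j)}\) (Eqs. 4-7) might look as follows in Python, assuming a pandas DataFrame of individuals with a hypothetical `intervention` column and a lookup table `H` of intervention embeddings; all names here are ours, not the paper's.

```python
# Illustrative construction of one task dataset D^(j); column names assumed.
import pandas as pd

def build_task(df: pd.DataFrame, w_j, H: dict, max_per_group=1_000_000,
               seed=0):
    treated = df[df["intervention"] == w_j]        # received intervention w_j
    control = df[df["intervention"] == "none"]     # received no intervention
    # Down-sample both groups, as done for the large-scale experiments.
    treated = treated.sample(min(len(treated), max_per_group), random_state=seed)
    control = control.sample(min(len(control), max_per_group), random_state=seed)
    return {"treated": treated, "control": control, "H": H[w_j]}

# meta_dataset = [build_task(df, w_j, H) for w_j in observed_interventions]
```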
### Estimating pseudo-outcomes We next estimate the training targets for each task (_i.e._ intervention) in the meta-dataset. The training target (\(\tilde{\tau}^{(j)}\)) is an unbiased, but noisy, estimate of CATE. More formally, for each task \(j\) (which points to the dataset for intervention \(w_{j}\)), we estimate \(\tilde{\tau}^{(j)}\), where \(\mathbb{E}_{\mathcal{P}}[\tilde{\tau}^{(j)}\mid X=x]=\tau_{w_{j}}(x)\). Thus, \(\tilde{\tau}^{(j)}_{i}\) denotes the target for the \(i\)-th sample in the \(j\)-th task (indexing will be omitted when it is clear from context). We refer to these targets as pseudo-outcomes, following prior literature (Curth and van der Schaar, 2021). For more details on pseudo-outcomes, refer to Section B in the appendix. Figure 2: Visual illustration of the CaML (causal meta-learning) framework. (1) We sample a task (i.e., an intervention) and a natural experiment from the training data consisting of individuals who either received the intervention or did not. Each individual has features (\(X\)) and an outcome (\(Y\)). Each intervention has information (\(H\)) (_e.g._, a drug’s attributes). (2) For each individual, we estimate the effect of the intervention on the outcome (pseudo-outcomes \(\tilde{\tau}\)). (3) We predict an individual’s pseudo-outcomes \(\tilde{\tau}\) using a model that fuses \(X\) and \(H\). CaML is trained by repeating this procedure across many tasks and corresponding natural experiments. CaML is agnostic to the specific choice of pseudo-outcome estimator. Thus, we assume a function \(\eta(D^{(j)})\) which takes as input a task dataset \(D^{(j)}\in D\) and returns a vector containing the pseudo-outcomes \(\tilde{\tau}\) for each sample in the task. We extend each task dataset \(D^{(j)}\) with the pseudo-outcomes, such that each sample holds the elements \((X_{i},Y_{i},W_{i},\tilde{\tau}_{i})\). Our key insight is that by collecting these pseudo-outcomes across multiple tasks, and predicting them using a combination of intervention and individual information (\(H,X\)), we can develop a CATE estimator which generalizes to unseen interventions. 
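As a sketch of the pseudo-outcome step \(\eta(D^{(j)})\), the snippet below implements a simplified regression-adjusted (RA-learner-style) estimator with gradient-boosted outcome regressions; since this component is swappable by design, the choice of base learner here is an assumption, not the paper's exact configuration.

```python
# Simplified RA-learner-style pseudo-outcomes: Y - mu0(X) for treated units
# and mu1(X) - Y for controls; any off-the-shelf estimator could be used.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def pseudo_outcomes(X, y, w):
    """X: (n, d) features; y: (n,) outcome; w: (n,) binary intervention flag."""
    mu0 = GradientBoostingRegressor().fit(X[w == 0], y[w == 0])  # control model
    mu1 = GradientBoostingRegressor().fit(X[w == 1], y[w == 1])  # treated model
    return np.where(w == 1, y - mu0.predict(X), mu1.predict(X) - y)
```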
In practice, we use the RA-learner (Curth & van der Schaar, 2021) and treat pseudo-outcome estimation as a data pre-processing step (Appendix C.6). ### Meta-model training Given \(m\) outcomes of interest, our goal is then to learn a model \(\Psi_{\theta}\colon\mathbb{R}^{d}\times\mathbb{R}^{e}\rightarrow\mathbb{R}^{m}\) that for parameters \(\theta\) minimizes \[\theta^{*}=\operatorname*{argmin}_{\theta}\ \mathbb{E}_{j\sim U(D)}\ \mathbb{E}_{X,H,\tilde{\tau}\sim D^{j}}\left[L\left(\Psi_{\theta}\right)\right], \tag{8}\] where \(U(D)\) denotes the discrete uniform distribution over the tasks of the meta-dataset \(D\), and where \(L(f)\) refers to a standard loss function between the pseudo-outcomes and the model output, _i.e._, \(L(f)=\left(\tilde{\tau}-f\left(X,H\right)\right)^{2}\). To assess whether the model generalizes to unseen tasks, we partition our meta-dataset by task into non-overlapping subsets \(D=D_{\text{train}}\cup D_{\text{val}}\cup D_{\text{test}}\). During training, \(\Psi_{\theta}\) is optimized on the training tasks \(D_{\text{train}}\). We validate and test this model on \(D_{\text{val}}\) and \(D_{\text{test}}\), which are thus composed of unseen tasks. While the CaML framework is agnostic to the specific training strategy, we base our approach (Algorithm 1) on the Reptile meta-learning algorithm (Nichol et al., 2018), which we find performs better than straightforward empirical risk minimization (_c.f._ Section 6). For this, the objective is slightly modified to \[\theta^{*}=\operatorname*{argmin}_{\theta}\ \mathbb{E}_{j\sim U(D)}\ \left[L\left(A_{D^{j}}^{k}\left(\Psi_{\theta}\right)\right)\right], \tag{9}\] where \(A_{D}^{k}\colon\mathcal{F}\rightarrow\mathcal{F}\) represents the operator that updates a model \(f\in\mathcal{F}\) using data sampled from the dataset \(D\) for \(k\) gradient steps. This operator is defined in more detail as the ADAPT routine in Algorithm 1. Note that depending on the choice of CATE estimator, this routine iterates only over treated samples of a task dataset \(D^{(j)}\) (as in our case), or over all samples, including untreated ones. ### CaML architecture To parameterize \(\Psi_{\theta}\), we propose a simple but effective model architecture (see Section 6): \[\Psi_{\theta}\left(X_{i}^{(j)},H^{(j)}\right)=\operatorname{MLP}([\tilde{X}_{i}^{(j)};\tilde{H}^{(j)}]), \tag{10}\] \[\text{with }\tilde{X}_{i}^{(j)}=\operatorname{MLP}(X_{i}^{(j)}),\ \tilde{H}^{(j)}=\operatorname{MLP}(H^{(j)}), \tag{11}\] where \([\cdot\,;\cdot]\) denotes concatenation. Equation 11 shows that the input features \(X\) and intervention features \(H\) are encoded separately into dense vectors \(\tilde{X}\) and \(\tilde{H}\), respectively. Our MLPs consist of layers of the form \(f\left(x\right)=x+\text{ReLU}(\text{Linear}(x))\). 
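A minimal PyTorch sketch of Eqs. (10)-(11) is given below; hidden widths and depths are illustrative assumptions rather than the paper's tuned hyperparameters.

```python
# Sketch of the CaML meta-model: two encoders fused by a prediction head.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One layer of the form f(x) = x + ReLU(Linear(x))."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
    def forward(self, x):
        return x + torch.relu(self.linear(x))

def mlp(in_dim, hidden, depth):
    layers = [nn.Linear(in_dim, hidden)]
    layers += [ResidualBlock(hidden) for _ in range(depth)]
    return nn.Sequential(*layers)

class CaML(nn.Module):
    def __init__(self, x_dim, h_dim, m_outcomes, hidden=128, depth=2):
        super().__init__()
        self.enc_x = mlp(x_dim, hidden, depth)   # individual features X
        self.enc_h = mlp(h_dim, hidden, depth)   # intervention features H
        self.head = nn.Sequential(mlp(2 * hidden, hidden, depth),
                                  nn.Linear(hidden, m_outcomes))
    def forward(self, X, H):
        return self.head(torch.cat([self.enc_x(X), self.enc_h(H)], dim=-1))
```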
```
Require: meta-dataset \(D\), meta-model \(\Psi_{\theta}\) with initialized parameters \(\theta\), hyperparameter \(k\)
for iteration \(=1,2,\ldots,L\) do
    \(j\leftarrow\) SampleTask()
    \(D_{\text{treated}}^{(j)},D_{\text{control}}^{(j)},H^{(j)}\leftarrow\) QueryTaskData(\(j\))
    \(\tilde{\tau}^{(j)}\leftarrow\) EstimatePseudoOutcomes(\(D_{\text{treated}}^{(j)},D_{\text{control}}^{(j)}\))
    \(\theta^{\prime}\leftarrow\) Adapt(\(D_{\text{treated}}^{(j)},D_{\text{control}}^{(j)},\tilde{\tau}^{(j)},H^{(j)},\Psi_{\theta},k\))
    \(g\leftarrow\theta-\theta^{\prime}\)   // Reptile gradient
    \(\theta\leftarrow\theta-\beta g\)   // Gradient step for meta-model \(\Psi_{\theta}\)
end for
return \(\Psi_{\theta}\)
```
**Algorithm 1** The CaML algorithm 
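One outer iteration of Algorithm 1 could be condensed in PyTorch as follows, assuming the `CaML` module sketched above and a hypothetical `sample_task()` that returns batched \((X, H, \tilde{\tau})\) tensors for one intervention; step sizes are illustrative.

```python
# A condensed, Reptile-style outer step of Algorithm 1 (a sketch).
import copy
import torch

def caml_train_step(model, sample_task, k=5, inner_lr=1e-3, beta=0.1):
    X, H, tau = sample_task()                    # one task's tensors
    inner = copy.deepcopy(model)                 # adapted copy -> theta'
    opt = torch.optim.SGD(inner.parameters(), lr=inner_lr)
    for _ in range(k):                           # ADAPT: k gradient steps
        loss = ((inner(X, H) - tau) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                        # Reptile meta-update:
        for p, p_adapt in zip(model.parameters(), inner.parameters()):
            p -= beta * (p - p_adapt)            # theta <- theta - beta * g
```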
Compared to standard generalization bounds, which usually have a \(\sqrt{1/n}\) term, our main technical innovation involves bounding the variance by the smoothness of the function class plus Poincaré-type inequalities. When \(\beta\) is much smaller than \(1\), we achieve a tighter bound.

## 6 Experiments

We explore to what extent zero-shot generalization is practical when predicting the effects of interventions. We thus design two novel evaluation settings using real-world data in domains where zero-shot abilities will be highly impactful: (1) Health Insurance Claims: predicting the effect of a drug on a patient, and (2) LINCS: predicting the effect of a perturbation on a cell. We use new datasets because existing causal inference benchmarks (Hill et al., 2003; Shimoni et al., 2018) focus on a single intervention. By contrast, zero-shot causal learning must be conceptualized in a multi-intervention setting.

**Zero-shot Evaluation**. Each task corresponds to predicting CATE across different individual samples that received the same intervention. We split all tasks into meta-training/meta-validation, and a hold-out meta-testing set for evaluating zero-shot predictions (Table 2, unseen drugs for Claims and Table 3, unseen molecular perturbations in LINCS). For the Claims dataset, we also consider the challenging setting of combinations of unseen drugs (Table 4). The same patient (Claims) or cell line (LINCS) can appear in multiple tasks (if they received different interventions at different times). Thus, to ensure a fair zero-shot evaluation, we exclude all samples who have ever received a meta-testing intervention from meta-val/meta-train. Similarly, we exclude all meta-validation patients from meta-train. Details on holdout selection are provided in Appendix C.2. Table 1 gives an overview of the key terms that specify both benchmarks. In the Claims dataset, we compare zero-shot predictions with strong single-intervention baselines which cannot generalize to unseen interventions. To do so, we further split each task in meta-validation and meta-testing into a train/test (50/50) split of samples. These baselines are trained on a task's train split, and all methods are evaluated on the test split of the meta-testing tasks. On the LINCS dataset, as each task consists of \(<100\) cells, single-intervention baselines performed weakly and are excluded from analysis.

**Baselines.** We compare the zero-shot performance of CaML to two distinct categories of baselines. (1) _Trained on test task_ baselines can only be trained on a single task. Thus, we train a single model for each meta-testing task on its train split, and evaluate performance on its test split. This category includes the T-learner (Kunzel et al., 2019), X-learner (Kunzel et al., 2019), RA-learner (Curth and van der Schaar, 2021), R-learner (Nie and Wager, 2021), DragonNet (Shi et al., 2019), TARNet (Shalit et al., 2017), and FlexTENet (Curth and van der Schaar, 2021). (2) _Zero-shot_ baselines are, in principle, capable of generalizing to unseen interventions. We use GraphITE (Harada and Kashima, 2021) and Structured Intervention Networks (SIN) (Kaddour et al., 2021). We also use a variant of the S-learner (H) (Kunzel et al., 2019) which uses the intervention information as input. We elaborate on implementation details of baselines in Appendix C.7. For details on hyperparameter search and fair comparison, see Appendix C.1.
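To illustrate the leakage rule described above, the following is a minimal sketch (ours, not the paper's code, with hypothetical data structures): every sample that ever received a held-out intervention is dropped from the remaining tasks before meta-training.

```python
def exclude_leakage(task_samples, heldout_interventions):
    """task_samples: dict mapping intervention id -> set of sample ids.
    Removes held-out tasks and any sample that ever received one of them."""
    heldout_samples = set().union(
        *(task_samples[w] for w in heldout_interventions))
    return {w: ids - heldout_samples
            for w, ids in task_samples.items()
            if w not in heldout_interventions}
```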
### Setting 1: Personalized drug side effect prediction from large-scale medical claims

Our first setting (Claims) is to predict the increased likelihood of a life-threatening side effect caused by a drug prescription. We leverage a large-scale insurance claims dataset of over 3.5 billion claims across 30.6 million patients in the United States2. Each datestamped insurance claim contains a set of diagnoses (ICD-10 codes), drug prescriptions (DrugBank ID), procedures (ICD-10 codes), and laboratory results (LOINC codes). Laboratory results were categorized by whether the result was high, low, normal, abnormal (for non-continuous labs), or unknown.

Footnote 2: Insurance company undisclosed per data use agreement.

Interventions (\(W\)) are the administration of one drug (\(n=745\)), or of two drugs (\(n=22{,}883\)) prescribed in combination. The time of intervention corresponds to the _first_ day of exposure. Intervention information (\(H\)) was generated from pre-trained drug embeddings from a large-scale biomedical knowledge graph (Chandak et al., 2022) (Appendix C). We compute drug combination embeddings as the sum of the embeddings of the constituent drugs. We focus on the binary outcome (\(Y\)) of the occurrence of the side effect pancytopenia within 90 days of intervention exposure. Pancytopenia is a deficiency across all three blood cell lines (red blood cells, white blood cells, and platelets). Pancytopenia is life-threatening, with a 10-20% mortality rate (Khunger et al., 2002; Kumar et al., 2001), and is a rare side effect of many common medications (Kuhn et al., 2016) (_e.g._ arthritis and cancer drugs), which in turn require intensive monitoring of the blood work. Following prior work (Guo et al., 2022), patient medical history features (\(X\)) were constructed from time-binned counts of each unique medical code (diagnosis, procedure, lab result, drug prescription) at seven different time scales before the drug was prescribed, resulting in a total of 443,940 features. For more details, refer to Appendix C.1.

**Metrics** We rely on best practices for evaluating CATE estimators in observational data, as established by recent work (Yadlowsky et al., 2021; Chernozhukov et al., 2018), which recommends assessing treatment rules by comparing subgroups across different quantiles of estimated CATE. We follow the high vs. others RATE (rank-weighted average treatment effect) approach from Yadlowsky et al. (Yadlowsky et al., 2021), which computes the difference in average treatment effect (ATE) of the top \(u\) percent of individuals (ranked by predicted CATE), versus all individuals (for more details, see Appendix C.1). For instance, RATE @ 0.99 would be the difference between the top 1% of the samples (by estimated CATE) vs. the average treatment effect (ATE) across all samples, which we would expect to be high if the CATE estimator is accurate. Note that estimates of RATE can exceed 1, and can also be negative if model predictions are inversely associated with CATE. We elaborate on the RATE computation in Appendix C.1. The real-world use case of our model is preventing drug prescription for a small subset of high-risk individuals. Thus, more specifically, for each task \(j\), intervention \(w_{j}\) in the meta-dataset, and meta-model \(\Psi_{\theta}\), we compute \(RATE\mathbin{@}u\) for each \(u\) in \([0.999,0.998,0.995,0.99]\) across individuals who received the intervention.
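To make the metric concrete, here is a minimal sketch (ours, a simplified illustration rather than the exact estimator of Yadlowsky et al.) of the high-vs-others RATE at quantile cutoff \(u\). It assumes per-individual treatment-effect scores (e.g., AIPW scores) in `scores` and predicted CATEs in `cate_pred`; both names are hypothetical.

```python
import numpy as np

def rate_at_u(cate_pred, scores, u):
    """High-vs-others RATE @ u: mean effect score of individuals above the
    u-quantile of predicted CATE, minus the mean over everyone (the ATE)."""
    cutoff = np.quantile(cate_pred, u)        # e.g., u = 0.99 keeps the top 1%
    top_group = scores[cate_pred >= cutoff]   # highest predicted CATE
    return top_group.mean() - scores.mean()   # difference vs. the overall ATE
```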
Additionally, because our meta-testing dataset consists of individuals treated with drugs known to cause pancytopenia, observational metrics of recall and precision are also a rough _proxy_ for successful CATE estimation (and highly correlated to RATE, Table 2). Thus, as secondary metrics, we also compute \(Recall\mathbin{@}u\) and \(Precision\mathbin{@}u\) for the same set of thresholds as RATE, where a positive label is defined as occurrence of pancytopenia after intervention.

### Setting 2: Cellular gene expression response due to perturbation

Our second setting (LINCS) is to predict how a cell's gene expression (\(Y\)) will respond to intervention by a perturbagen (\(W\)) (a small-molecule compound such as a drug). This is a critical problem, as accurately predicting treatment response will accelerate drug discovery. We use data for 10,325 different perturbagens from the LINCS Program (Subramanian et al., 2017). Each perturbagen corresponds to a different small molecule. Molecular embeddings were generated using the RDKit featurizer (Landrum et al., 2006) and used as intervention information (\(H\)). Outcomes (\(Y\)) of interest are post-intervention gene expression across the top-\(50\) and top-\(20\) differentially expressed landmark genes (DEGs) in the LINCS dataset. We did not look at all 978 genes since most do not show significant variation upon perturbation.

\begin{table} \begin{tabular}{p{42.7pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline Dataset & Samples & Features (\(X\)) & Outcome (\(Y\)) & Intervention (\(W\)) & Intervention information (\(H\)) \\ \hline Claims & Patients & Patient history (binned counts of medical codes) & Pancytopenia onset & Drug administration (prescription) & Drug embedding (knowledge graph) \\ LINCS & Cell lines & Cancer cell encyclopedia & Expression of landmark genes (DEG) & Perturbagen (small molecule) & Molecular embeddings (RDKit) \\ \hline \hline \end{tabular} \end{table} Table 1: High-level overview of our two experimental settings. For more details, refer to Appendix C.1.

We use \(19{,}221\) features (\(X\)) from the Cancer Cell Line Encyclopedia (CCLE) (Ghandi et al., 2019) to characterize each cell line (\(n=99\)), each of which corresponds to unperturbed gene expression measured in a different lab environment using a different experimental assay. For more details, see Appendix C.1.

**Metrics**. A key advantage of experiments on cells is that at evaluation time we can observe both \(Y(0)\) and \(Y(1)\) for the same cell line \(X\), through multiple experiments on clones of the same cell line in controlled lab conditions. In the LINCS dataset, \(Y(0)\) is also measured for all cells which received an intervention. Thus, we can directly compute the precision in estimating heterogeneous effects (PEHE) on all treated cells in our meta-testing dataset, an established measure for CATE estimation performance analogous to mean-squared error (Hill, 2011) (see Appendix C.1).

### Key findings

_CaML's zero-shot predictions outperform all baselines._ In the medical claims setting, single-intervention baselines (Table 2, dark grey rows) are the highest-performing baselines, as we train them directly on the meta-test intervention. Still, CaML achieves 13-26% higher RATE than the best single-intervention baseline, the RA-learner, and even 150-160% higher RATE than the best zero-shot baseline, the S-learner (H).
In the LINCS data, multi-intervention learners are strongest as there are only a small number of instances (cell lines) per intervention3. CaML outperforms both single-intervention and multi-intervention learners by drawing from both of their strengths--it allows us to use strong CATE estimation methods (_i.e._ the RA-learner) which previously were restricted to single interventions, while sharing information across multiple interventions.

Footnote 3: Single-task baselines excluded from Table 3: all performed similarly to or worse than the mean baseline due to low task sample size.

_CaML learns to generalize from single treatments to combinations of unseen treatments (drug pairs)._ We evaluate CaML's performance in the challenging setting of predicting the personalized effects of combinations of two drugs which are both unseen during training, while only training on interventions consisting of single drugs. CaML achieves strong performance results (see Appendix Table 4), surpassing the best baseline trained on the test tasks in 11 out of 12 metrics, and outperforming all zero-shot baselines across all metrics.

_Understanding CaML's performance results._ Our ablation studies show that CaML's performance gains are due to (1) our meta-learning formulation and algorithm (in contrast to the w/o meta-learning row, in which ERM is used to train the model), and (2) the flexible CATE estimation strategy, allowing us to take advantage of recently developed CATE estimators previously restricted to single interventions (in contrast to the w/o RA-learner row, in which an alternative pseudo-outcome estimator is used). Lastly, (3) comparison to the RA-learner trained separately on each meta-testing intervention (Table 2, RA-learner, grey) shows that we gain from learning from thousands of interventions. See Appendix C.3 for details on ablations.

\begin{table} \begin{tabular}{l|c c c c|c c c c|c c c c} & \multicolumn{4}{c|}{RATE @\(u\) (\(\uparrow\))} & \multicolumn{4}{c|}{Recall @\(u\) (\(\uparrow\))} & \multicolumn{4}{c}{Precision @\(u\) (\(\uparrow\))} \\ & 0.999 & 0.998 & 0.995 & 0.99 & 0.999 & 0.998 & 0.995 & 0.99 & 0.999 & 0.998 & 0.995 & 0.99 \\ \hline Random & -0.06\(\pm\)0.02 & -0.05\(\pm\)0.02 & -0.01\(\pm\)0.03 & 0.0\(\pm\)0.02 & 0.0 & 0.0 & 0.0 & 0.0 & 0.01 & 0.0 & 0.0 & 0.0 \\ \hline T-learner & 6.46\(\pm\)0.00 & 4.96\(\pm\)0.00 & 2.96\(\pm\)0.00 & 1.83\(\pm\)0.00 & 0.12 & 0.18 & 0.26 & 0.31 & 0.36 & 0.29 & 0.18 & 0.11 \\ X-learner & 0.62\(\pm\)0.00 & 0.59\(\pm\)0.00 & 0.6\(\pm\)0.00 & 0.52\(\pm\)0.00 & 0.02 & 0.04 & 0.08 & 0.12 & 0.09 & 0.07 & 0.06 & 0.05 \\ R-learner & 2.32\(\pm\)0.00 & 2.07\(\pm\)0.00 & 1.83\(\pm\)0.00 & 1.38\(\pm\)0.00 & 0.06 & 0.1 & 0.19 & 0.26 & 0.24 & 0.21 & 0.15 & 0.11 \\ RA-learner & 8.81\(\pm\)0.03 & 6.71\(\pm\)0.01 & 4.08\(\pm\)0.00 & 2.52\(\pm\)0.00 & 0.17 & 0.26 & 0.38 & 0.45 & 0.54 & 0.42 & 0.26 & 0.16 \\ DragonNet & 1.04\(\pm\)0.67 & 0.97\(\pm\)0.68 & 0.73\(\pm\)0.44 & 0.56\(\pm\)0.29 & 0.03 & 0.05 & 0.08 & 0.11 & 0.15 & 0.12 & 0.08 & 0.06 \\ TARNet & 2.84\(\pm\)0.28 & 2.22\(\pm\)0.18 & 1.41\(\pm\)0.11 & 0.9\(\pm\)0.05 & 0.05 & 0.08 & 0.12 & 0.14 & 0.18 & 0.15 & 0.09 & 0.06 \\ FlexTENet & 0.68\(\pm\)0.06 & 0.6\(\pm\)0.07 & 0.43\(\pm\)0.05 & 0.32\(\pm\)0.03 & 0.04 & 0.06 & 0.11 & 0.16 & 0.15 & 0.13 & 0.09 & 0.06 \\ GraphITE & 3.61\(\pm\)0.62 & 2.15\(\pm\)0.38 & 0.99\(\pm\)0.15 & 0.53\(\pm\)0.07 & 0.07 & 0.08 & 0.09 & 0.1 & 0.23 & 0.14 & 0.07 & 0.04 \\ SIN & 0.12\(\pm\)0.09 & 0.11\(\pm\)0.06 & 0.08\(\pm\)0.02 & 0.08\(\pm\)0.02 & 0.0 & 0.0 & 0.01 & 0.02 & 0.01 & 0.01 & 0.01 & 0.01 \\ S-learner (H) & 4.42\(\pm\)0.73 & 3.38\(\pm\)0.75 & 1.88\(\pm\)0.5 & 1.1\(\pm\)0.29 & 0.08 & 0.11 & 0.15 & 0.16 & 0.25 & 0.18 & 0.1 & 0.06 \\ \hline CaML - w/o meta-learning & 7.16\(\pm\)0.58 & 6.0\(\pm\)0.45 & 3.9\(\pm\)0.17 & 2.4\(\pm\)0.12 & 0.15 & 0.22 & 0.32 & 0.39 & 0.45 & 0.35 & 0.22 & 0.14 \\ CaML - w/o RA-learner & 8.63\(\pm\)1.19 & 1.75\(\pm\)0.72 & 4.32\(\pm\)0.24 & 2.57\(\pm\)0.16 & 0.16 & 0.24 & 0.34 & 0.41 & 0.48 & 0.38 & 0.23 & 0.14 \\ CaML (ours) & **11.13**\(\pm\)0.51 & **8.48**\(\pm\)0.22 & **4.89**\(\pm\)0.07 & **2.86**\(\pm\)0.04 & **0.18** & **0.27** & **0.38** & **0.45** & **0.54** & **0.43** & **0.26** & **0.16** \\ \end{tabular} \end{table} Table 2: Performance results for the Claims dataset (predicting pancytopenia onset from drug exposure using patient medical history). Key findings are (1) CaML outperforms all zero-shot baselines (RATE is 150-160% higher than S-learner (H), the strongest zero-shot baseline), and (2) even compared against the best baseline trained on the test tasks (RA-learner), CaML achieves \(13-26\%\) higher RATE. Higher RATE values are better; RATE can exceed 1 (due to AIPW scores) and can also be negative if model predictions are inversely associated with CATE (Appendix C.1). Mean and standard deviation across runs are reported. Precision and recall standard deviations are all \(<0.01\). Analogous trends hold for generalization to _pairs_ of unseen drugs (Table 4). Single-task methods were trained on the meta-testing tasks (best model underlined). Zero-shot methods were trained on meta-training tasks and applied to previously unseen meta-testing tasks (best model in bold).

## 7 Conclusion

We introduce a novel approach to predict the effects of previously unseen interventions. CaML consistently outperforms state-of-the-art baselines, by unlocking zero-shot capacity for many recently developed CATE estimation methods which were previously restricted to studying single interventions in isolation. Exciting directions for future work include designing new model architectures which learn well under the CaML framework, designing novel CATE estimators for CaML, as well as more generally exploring novel learning strategies that enable zero-shot causal learning.

## Acknowledgements

We thank Stefan Wager, Emma Pierson, Kexin Huang, Kaidi Cao, Yanay Rosen, Johann Gaebler, Maria Brbic for helpful conversations. H.N. was supported by a Stanford Knight-Hennessy Scholarship and the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1656518. M.M. was supported by DARPA N660011924033 (MCS), NIH NINDS R61 NS11865, GSK, Wu Tsai Neurosciences Institute. A.S. and S.O. were supported by the American Slovenian Education Foundation (ASEF) fellowship. M.Y. was supported by the Microsoft Research PhD fellowship. Y.R. was supported by funding from GlaxoSmithKline LLC. We also gratefully acknowledge the support of Stanford HAI for Google Cloud Credits, DARPA under Nos. HR00112190039 (TAMI), N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), NIH under No. 3U54HG010426-04S1 (HuBMAP), Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Amazon, Docomo, GSK, Hitachi, Intel, JPMorgan Chase, Juniper Networks, KDDI, NEC, and Toshiba.
2307.11415
Engineering mobility in quasiperiodic lattices with exact mobility edges
We investigate the effect of an additional modulation parameter $\delta$ on the mobility properties of quasiperiodic lattices described by a generalized Ganeshan-Pixley-Das Sarma model with two on-site modulation parameters. For the case with bounded quasiperiodic potential, we unveil the existence of a self-duality relation, independent of $\delta$. By applying Avila's global theory, we analytically derive Lyapunov exponents in the whole parameter space, which enables us to determine mobility edges or anomalous mobility edges exactly. Our analytical results indicate that the mobility edge equation is described by two curves and their intersection with the spectrum gives the true mobility edge. Tuning the strength parameter $\delta$ can change the spectrum of the quasiperiodic lattice, and thus engineers the mobility of quasi-periodic systems, giving rise to completely extended, partially localized, and completely localized regions. For the case with unbounded quasiperiodic potential, we also obtain the analytical expression of the anomalous mobility edge, which separates localized states from critical states. By increasing the strength parameter $\delta$, we find that the critical states can be destroyed gradually and finally vanish.
Zhenbo Wang, Yu Zhang, Li Wang, Shu Chen
2023-07-21T08:19:23Z
http://arxiv.org/abs/2307.11415v1
# Engineering mobility in quasiperiodic lattices with exact mobility edges

###### Abstract

We investigate the effect of an additional modulation parameter \(\delta\) on the mobility properties of quasiperiodic lattices described by a generalized Ganeshan-Pixley-Das Sarma model with two on-site modulation parameters. For the case with bounded quasiperiodic potential, we unveil the existence of a self-duality relation, independent of \(\delta\). By applying Avila's global theory, we analytically derive Lyapunov exponents in the whole parameter space, which enables us to determine mobility edges or anomalous mobility edges exactly. Our analytical results indicate that the mobility edge equation is described by two curves and their intersection with the spectrum gives the true mobility edge. Tuning the strength parameter \(\delta\) can change the spectrum of the quasiperiodic lattice, and thus engineers the mobility of quasi-periodic systems, giving rise to completely extended, partially localized, and completely localized regions. For the case with unbounded quasiperiodic potential, we also obtain the analytical expression of the anomalous mobility edge, which separates localized states from critical states. By increasing the strength parameter \(\delta\), we find that the critical states can be destroyed gradually and finally vanish.

## I Introduction

In condensed matter physics, mobility is a fundamental property of physical systems, which refers to the ability of a particle, such as an electron, to move through a material. In metals, it is responsible for transport properties such as conductivity and resistance. In the context of semiconductors, it is an important parameter that determines the performance of electronic devices. Generally, mobility is influenced by factors like crystal structures, interactions, defects, and impurities, among others. More than sixty years ago, Anderson in his seminal work [1] investigated the role that the disordered on-site potential plays in the mobility of particles in certain random lattices. Since then, Anderson localization [1; 2] has attracted broad attention worldwide. Typically, for three-dimensional systems subjected to disorder of finite strength, localized and extended eigenstates can coexist in the energy band. Two intervals in the energy dimension corresponding to eigenstates with different mobility properties are separated by a critical energy value, namely the mobility edge [3]. Tuning the strength of disorder may shift the value of the mobility edge. Accordingly, the proportion between extended and localized eigenstates may also change, finally leading to the modulation of the system's mobility. While mobility edges are usually absent for the above-mentioned uncorrelated disorder in low-dimensional systems [2; 4], one-dimensional (1D) quasiperiodic systems offer an appealing platform to study the localization-delocalization transition [5; 6; 7; 8; 9; 10; 11] and mobility edges [12; 13; 14; 15]. Among these, the most famous one is the Aubry-Andre (AA) model [5], which analytically demonstrates the existence of a localization-delocalization transition by utilizing the self-duality property. Subsequently, various generalizations of the standard AA model confirmed the existence of mobility edges in 1D quasiperiodic lattices, for example, lattice models with slowly varying quasi-periodic potentials [16; 17], the generalized AA model [12], incommensurate lattices with exponentially decaying hoppings [18], and the recently proposed mosaic model [14].
So far, the existence of mobility edges in low-dimensional systems has been demonstrated in various models [13; 14; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. Very recently, the concept of mobility edge has found new territory in the emerging field of non-Hermitian physics [37; 38; 39; 40; 41; 42; 43; 44; 45]. In this work, we study quasiperiodic lattices described by a generalized Ganeshan-Pixley-Das Sarma (GPD) model with two tunable strength parameters of the quasiperiodic potential. In comparison with the GPD model proposed by Ganeshan et al. [12], also referred to as the generalized AA model in the literature, our model includes an additional modulation parameter \(\delta\) (see Eq.(2)). By applying Avila's global theory, we analytically derive the Lyapunov exponent in the whole parameter space, which enables us to determine the mobility edge exactly. Our analytical results indicate that the mobility edge equation is independent of \(\delta\) and generally described by two curves, whose intersection with the spectrum of the system gives the true mobility edges. Tuning the strength parameter \(\delta\) can change the spectrum of the quasiperiodic lattice, and thus provides a scheme to engineer the mobility of quasi-periodic systems. In this manner, with the anchored mobility edge as a separation, the fraction of eigenstates on either side of it changes, leading to the engineering of the system's mobility. Numerically calculating inverse participation ratios (IPRs) and Lyapunov exponents, we demonstrate that eigenstates of the system with bounded quasiperiodic potential successively cross the stationary mobility edge and undergo three scenarios, namely, completely extended, partially localized, and completely localized. For the case with unbounded quasiperiodic potential, we also obtain the analytical expression of the anomalous mobility edge, which separates localized states from critical states. By increasing the strength of \(\delta\), we find that the critical states are destroyed gradually and finally vanish.

The paper is organized as follows. First, we introduce our model in Sec. II A. Subsequently, in Sec. II B, we unveil the existence of a self-duality relation for the system with bounded quasiperiodic potentials, independent of the modulation parameter \(\delta\). In Sec. II C, by applying Avila's global theory, we derive analytically the expressions of the Lyapunov exponent and the mobility edge. In Sec. II D, we discuss the engineering of mobility and further verify our analytical results by numerically calculating the inverse participation ratios and Lyapunov exponents. The unbounded potential case is discussed in Sec. II E. Finally, we give a summary in Sec. III.

## II Model and results

### Model

We consider a one-dimensional quasiperiodic lattice described by the following eigenvalue equation, \[t\left(\phi_{n-1}+\phi_{n+1}\right)+V_{n}(\lambda,\delta,\alpha)\phi_{n}=E\phi_{n}, \tag{1}\] with \[V_{n}(\lambda,\delta,\alpha)=\frac{\lambda\cos(2\pi nb+\theta)+\delta}{1-\alpha\cos(2\pi nb+\theta)}, \tag{2}\] where \(n\) is the index of the lattice site, and \(t\) is the nearest-neighbour hopping amplitude. The quasiperiodic potential is regulated by two modulation parameters \(\lambda\), \(\delta\) and a deformation parameter \(\alpha\). The parameter \(\theta\) denotes a phase factor and \(b\) is an irrational number responsible for the quasiperiodicity of the on-site potential. To be concrete, in this work we choose \(b=(\sqrt{5}-1)/2\); however, the obtained results are also valid for any other choice of the irrational number \(b\). For convenience, we shall set \(t=1\) as the energy unit in the following calculation.
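For readers who want to reproduce the basic phenomenology, the model of Eqs. (1)-(2) can be diagonalized directly for a finite lattice. The following minimal sketch (ours, not the authors' code) builds the dense Hamiltonian with open boundary conditions, an assumption made here for simplicity.

```python
import numpy as np

def quasiperiodic_hamiltonian(L, lam, delta, alpha,
                              b=(np.sqrt(5) - 1) / 2, theta=0.0, t=1.0):
    """Dense Hamiltonian of Eq. (1) with the potential of Eq. (2),
    on L sites with open boundary conditions."""
    n = np.arange(L)
    phase = 2 * np.pi * n * b + theta
    V = (lam * np.cos(phase) + delta) / (1 - alpha * np.cos(phase))
    return np.diag(V) + t * (np.eye(L, k=1) + np.eye(L, k=-1))

# Example: full spectrum for a bounded potential (|alpha| < 1)
H = quasiperiodic_hamiltonian(L=1000, lam=0.5, delta=2.0, alpha=0.5)
energies, states = np.linalg.eigh(H)
```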
When \(\delta=0\), the model reduces to the generalized AA model (GPD model) studied in ref. [12], for which an exact mobility edge \(\alpha E=2\text{sgn}(\lambda)|t|-\lambda\) is identified by the existence of a generalized duality symmetry for the case of \(\alpha\in(-1,1)\). On the other hand, the limit of \(\lambda=0\) was recently studied in ref. [31] for the unbounded case \(\alpha>1\). The onset of anomalous mobility edges at the energies \(E=\pm 2|t|\) is unveiled via the calculation of the Lyapunov exponent. In this work, we shall consider the general case in the presence of both \(\lambda\) and \(\delta\) terms. For the bounded case with \(\alpha\in(-1,1)\), we unveil the existence of a self-dual symmetry even in the presence of the \(\delta\) term, which enables us to get an expression for the mobility edge. By applying Avila's global theory, we can derive the mobility edges and anomalous mobility edges analytically by calculating the Lyapunov exponents for both cases of \(|\alpha|<1\) and \(|\alpha|>1\).

### Self-duality relation

First, we consider the case of \(\alpha\in(-1,1)\) and demonstrate the existence of a generalized duality symmetry for the model with the quasiperiodic potential (2) under a generalized dual transformation, from which we can derive the exact mobility edges by searching for the self-duality relation. Following ref. [12], we define \[\chi_{n}(\beta,\theta)\equiv\frac{\sinh\beta}{\cosh\beta-\cos(2\pi nb+\theta)}.\] Since Eq.(2) can be represented as \[V_{n}(\lambda,\delta,\alpha)=-\frac{\lambda}{\alpha}+\frac{\frac{\lambda}{\alpha}+\delta}{1-\alpha\cos(2\pi nb+\theta)}, \tag{3}\] the model described by Eqs. (1) and (2) can be straightforwardly rewritten into the form \[t(\phi_{n-1}+\phi_{n+1})+G\chi_{n}(\beta,\theta)\phi_{n}=\left(E+\lambda\cosh\beta\right)\phi_{n}, \tag{4}\] in which \(\beta\) is defined as \(\cosh\beta\equiv 1/\alpha\) for \(\alpha\in(0,1)\), and the parameter \(G\) is given by \(G=(\lambda\cosh\beta+\delta)\coth\beta\). By using a well-established mathematical relation [12] as follows, \[\frac{\sinh\beta}{\cosh\beta-\cos(2\pi nb+\theta)}=\sum_{r=-\infty}^{\infty}e^{-\beta|r|}e^{ir(2\pi nb+\theta)}, \tag{5}\] we can implement three consecutive transformations that recover Eq. (4) in its original form. Define \(u_{p}=\sum_{n}e^{in(2\pi bp+q\pi)}\phi_{n}\), where \(\sum_{n}\) is short for \(\sum_{n=-\infty}^{\infty}\) and \(q\) is an integer. Multiplying both sides of Eq. (4) by \(e^{in(2\pi bp+q\pi)}\) and summing over \(n\), we get \[\omega\chi_{p}^{-1}(\beta_{0},0)e^{ip\theta}u_{p}=G\sum_{r}e^{-\beta|p-r|}e^{ir\theta}u_{r}, \tag{6}\] where \(\beta_{0}\) is defined through the relation \(E+\lambda\cosh\beta\equiv(-1)^{q}2t\cosh\beta_{0}\) and \(\omega\) is defined as \(\omega\equiv(-1)^{q}2t\sinh\beta_{0}\). Subsequently, we move on to the second transformation \(v_{m}=\sum_{p}e^{ip(2\pi bm+\theta+q\pi)}\chi_{p}^{-1}(\beta_{0},0)u_{p}\). Multiplying both sides by \(e^{ip(2\pi bm+q\pi)}\) and summing over \(p\), Eq. (6) is correspondingly transformed into \[\omega\chi_{m}^{-1}(\beta,q\pi)v_{m}=G\sum_{r}e^{-\beta_{0}|m-r|}v_{r}. \tag{7}\] Then it comes to the last step, where the transformation is defined as \(z_{k}=\sum_{m}e^{im(2\pi bk+\theta)}v_{m}\). We multiply Eq.
(7) by \(e^{im(2\pi bk+\theta)}\) and sum over \(m\). Finally, one obtains the following tight-binding model for \(z_{k}\), \[t(z_{k+1}+z_{k-1})+G\frac{\sinh\beta}{\sinh\beta_{0}}\chi_{k}(\beta_{0},\theta)\;z_{k}=(-1)^{q}2t\cosh\beta\;z_{k}. \tag{8}\] It is not difficult to notice that Eq. (8) can be made equivalent to Eq. (4) if one lets \(\beta=\beta_{0}\). Accordingly, we have \(E+\lambda\cosh\beta=(-1)^{q}2t\cosh\beta\), which in terms of the original parameter \(\alpha\) is \[\alpha E=(-1)^{q}2t-\lambda. \tag{9}\] Since \(q\) may take even or odd integers, this actually gives the analytical formula for a pair of exact mobility edges. As for the other case \(\alpha\in(-1,0)\), one can also arrive at Eq. (9) by conducting similar derivations as above.

### Analytical formula of the exact mobility edge

Next we apply Avila's global theory [46] to calculate the Lyapunov exponent and derive the exact mobility edge [47; 48]. For convenience, we will absorb \(t\) into \(\lambda\) and \(E\) in the derivation process by setting \(t=1\). For the spectral problem with incommensurate potential, the Lyapunov exponent \(\gamma(E)\) is defined as \[\gamma(E)=\lim_{L\to\infty}\frac{1}{L}\ln||T_{L}(\theta)||,\] where \(||T_{L}(\theta)||\) is the norm of the \(2\times 2\) transfer matrix \(T_{L}(\theta)\), given by \[T_{L}(\theta)=\prod_{n=1}^{L}M_{n}, \tag{10}\] in which \[M_{n}=\begin{pmatrix}E-V_{n}&-1\\ 1&0\end{pmatrix}, \tag{11}\] with \(V_{n}\) given by Eq.(2). We adopt the conventional procedure to calculate the Lyapunov exponent. First, we complexify the phase, i.e., let \(\theta\to\theta+i\epsilon\). In order to apply the global theory more conveniently, we introduce a new matrix \(\widetilde{M}_{j}\), which can be written as \[\widetilde{M}_{j}(\theta)=[1-\alpha\cos(2\pi jb+\theta)]M_{j}. \tag{12}\] Then the transfer matrix for \(\widetilde{M}_{j}(\theta)\) can be expressed as \[\widetilde{T}_{L}(E,\theta)=\prod_{j=1}^{L}\widetilde{M}_{j}(\theta),\] and the Lyapunov exponent for \(\widetilde{T}_{L}(E,\theta+i\epsilon)\) is \[\tilde{\gamma}(E,\theta+i\epsilon)=\lim_{L\to\infty}\frac{1}{L}\ln||\widetilde{T}_{L}(E,\theta+i\epsilon)||.\] In the limit of \(L\to\infty\), we can replace the sum over \(j\) by an integral over \(\theta\), \[\tilde{\gamma}(E,\epsilon)=\lim_{L\to\infty}\frac{1}{2\pi L}\int\ln||\widetilde{T}_{L}(E,\theta+i\epsilon)||d\theta.\] Then it follows that \[\gamma(E,\epsilon)=\tilde{\gamma}(E,\epsilon)-\frac{1}{2\pi}\int\ln[1-\alpha\cos(\theta+i\epsilon)]d\theta. \tag{13}\] In this part, we focus on the case \(-1<\alpha<1\), and the result of the integral in Eq.(13) is \[\frac{1}{2\pi}\int\ln(1-\alpha\cos(\theta+i\epsilon))d\theta=\ln\frac{1+\sqrt{1-\alpha^{2}}}{2},\] if \(|\epsilon|<\ln|\frac{1+\sqrt{1-\alpha^{2}}}{\alpha}|\). From Eq.(13), we can find that \(\gamma(E,\epsilon)\) and \(\tilde{\gamma}(E,\epsilon)\) have the same slope with respect to \(\epsilon\) when \(|\epsilon|<\ln|\frac{1+\sqrt{1-\alpha^{2}}}{\alpha}|\). In the large-\(\epsilon\) limit, we get \[\widetilde{T}_{L}(E,\epsilon)=\prod_{j=1}^{L}\frac{1}{2}e^{-i2\pi bj}e^{|\epsilon|}\begin{pmatrix}-\alpha E-\lambda&\alpha\\ -\alpha&0\end{pmatrix}+o(1). \tag{14}\] According to Avila's global theory, \(\tilde{\gamma}(E,\epsilon)\) is a convex, piecewise linear function of \(\epsilon\in(-\infty,\infty)\). Combined with the result of Eq.(14), we can see that the slope with respect to \(\epsilon\) equals \(1\) in the large-\(\epsilon\) regime.
Thus, the Lyapunov exponent for \(\widetilde{T}_{L}(E,\theta+i\epsilon)\) can be written as \[\tilde{\gamma}(E,\epsilon)=|\epsilon|+\ln f(E),\] for large enough \(\epsilon\), where \[f(E)=\left|\frac{|\alpha E+\lambda|+\sqrt{(\alpha E+\lambda)^{2}-4\alpha^{2}}}{4}\right|.\] Considering the convexity of the Lyapunov exponent, the slope of \(\gamma(E,\epsilon)\) might be \(1\) or \(0\) in the region \(0\leq|\epsilon|<\ln|\frac{1+\sqrt{1-\alpha^{2}}}{\alpha}|\). Besides, the slope of \(\gamma(E,\epsilon)\) in a neighborhood of \(\epsilon=0\) is nonzero if the energy \(E\) is in the spectrum. Therefore, when \(E\) is in the spectrum, \[\tilde{\gamma}(E,\epsilon)=|\epsilon|+\ln f(E), \tag{15}\] for any \(\epsilon\in(-\infty,\infty)\). Based on Eq.(13) and the non-negativity of the Lyapunov exponent \(\gamma(E,\epsilon)\), we have \[\gamma(E,0)=\max\{\ln\frac{2f(E)}{1+\sqrt{1-\alpha^{2}}},0\}. \tag{16}\] Then the mobility edge can be determined by \(\gamma(E)=0\), which gives rise to \[|\alpha\frac{E}{t}+\frac{\lambda}{t}|=2, \tag{17}\] where we have explicitly restored \(t\). Although Eq.(17) takes a different form from Eq.(9), it can be checked that they are actually equivalent. This result suggests that the mobility edges may be composed of two curves. The appearance of the mobility edge depends on another condition: a true mobility edge exists only if these curves are within the energy spectrum. Therefore, the energy spectrum and the mobility edge equation together determine the mobility properties of the system. In order to determine which curve determines the mobility edge for given parameters, we invoke operator theory and give more precise results. By comparing the expressions of the curves with the range of the physically possible energy spectrum (more details can be found in Appendix A), we arrive at the expression: \[E_{c}=\frac{2\text{sgn}(\lambda+\delta\alpha)|t|-\lambda}{\alpha}. \tag{18}\] When \(\delta=0\), we see that the mobility edge reduces to \(E_{c}=\frac{2\text{sgn}(\lambda)|t|-\lambda}{\alpha}\), consistent with the result in Ref. [12]. In this case, for a given \(\lambda\) parameter, e.g., \(\lambda>0\) and \(t=1\), the mobility edge is only determined by the curve \(E_{c}=\frac{2-\lambda}{\alpha}\). However, in the presence of nonzero \(\delta\), the mobility edge can be given by either \(E_{c}=\frac{2-\lambda}{\alpha}\) or \(E_{c}=\frac{-2-\lambda}{\alpha}\) depending on the value of \(\lambda+\delta\alpha\), as displayed in Fig.1. To gain an intuitive understanding, we display some numerical results in Fig.1 for systems with various parameters \(\lambda\) and \(\delta\), in which we display the energy spectrum versus \(\alpha\) and plot the mobility edges given by Eq.(9) and the inverse participation ratios (IPRs) [49] as a function of \(\alpha\). The IPR for an eigenstate with eigenvalue \(E\) is given as \[\text{IPR}(E_{i})=\frac{\sum_{n}|\phi_{n}(E_{i})|^{4}}{\left(\sum_{n}|\phi_{n}(E_{i})|^{2}\right)^{2}}, \tag{19}\] where \(E_{i}\) is the \(i\)-th energy eigenvalue. For an extended eigenstate, the probability tends to be distributed evenly over the lattice, thus the IPR is expected to be of the order of \(1/L\). For a localized eigenstate, the probability is usually well confined to a few lattice sites, therefore the IPR approaches \(1\) in the limiting case. It is shown that the localized and extended regions are separated by the mobility edge.
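Using the Hamiltonian helper sketched earlier, the IPR of Eq. (19) and the analytical mobility edge of Eq. (18) can be compared directly. This is a minimal illustration (ours), with parameter values chosen only for demonstration.

```python
import numpy as np

def ipr(states):
    """IPR of Eq. (19) for normalized eigenvectors stored column-wise."""
    return np.sum(np.abs(states) ** 4, axis=0)

lam, delta, alpha, t = 0.5, 2.0, 0.5, 1.0
H = quasiperiodic_hamiltonian(L=1000, lam=lam, delta=delta, alpha=alpha, t=t)
energies, states = np.linalg.eigh(H)

E_c = (2 * np.sign(lam + delta * alpha) * abs(t) - lam) / alpha  # Eq. (18)
print("predicted mobility edge:", E_c)
# Localized states (IPR of order 1) should sit on one side of E_c,
# extended states (IPR of order 1/L) on the other.
for E, p in zip(energies[::100], ipr(states)[::100]):
    print(f"E = {E:+.3f}, IPR = {p:.4f}")
```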
In Fig.1(a-b), the mobility edges are determined by different curves because the sign of \(\lambda+\delta\alpha\) changes in the process of adjusting \(\alpha\) from \(-1\) to \(1\). In contrast, the mobility edges in Fig.1(c-d) are determined by only one curve because adjusting \(\alpha\) does not change the sign of \(\lambda+\delta\alpha\).

### Engineering the mobility property

Although the mobility edge equation describes simply two straight lines, \(E=\frac{2}{\alpha}-\frac{\lambda}{\alpha}\) and \(E=-\frac{2}{\alpha}-\frac{\lambda}{\alpha}\), which are independent of \(\delta\), tuning \(\delta\) can change the spectrum of the system dramatically. By tuning \(\delta\), we can access five different regions as shown in Fig.2. By comparing the energy spectrum and the mobility edge equation, we can approximately obtain transition points separating these different regions of \(\delta\) (details about the transition points can be found in Appendix B).

For the case of \(-1<\alpha<0\), as shown in Fig.2(a), the five different regions are: (i) For \(-\infty<\delta<-\frac{\lambda}{\alpha}+\frac{2(1-\alpha)}{\alpha}\), all the eigenstates are localized; (ii) For \(-\frac{\lambda}{\alpha}+\frac{2(1-\alpha)}{\alpha}<\delta<-\frac{\lambda}{\alpha}+\frac{2(1+\alpha)^{2}}{\alpha}\), there is a mobility edge determined by \(E=-\frac{\lambda}{\alpha}+\frac{2}{\alpha}\), below which the states are localized, whereas above which the states are extended; (iii) For \(-\frac{\lambda}{\alpha}+\frac{2(1+\alpha)^{2}}{\alpha}<\delta<-\frac{\lambda}{\alpha}-\frac{2(1+\alpha)^{2}}{\alpha}\), all the eigenstates are extended; (iv) For \(-\frac{\lambda}{\alpha}-\frac{2(1+\alpha)^{2}}{\alpha}<\delta<-\frac{\lambda}{\alpha}-\frac{2(1-\alpha)^{2}}{\alpha}\), there is a mobility edge determined by \(E=-\frac{\lambda}{\alpha}-\frac{2}{\alpha}\), below which the states are extended, whereas above which the states are localized; (v) For \(-\frac{\lambda}{\alpha}-\frac{2(1-\alpha)^{2}}{\alpha}<\delta<+\infty\), all the eigenstates are localized.

For the case of \(0<\alpha<1\), as shown in Fig.2(b), the five different regions are: (i) For \(-\infty<\delta<-\frac{\lambda}{\alpha}-\frac{2(1+\alpha)}{\alpha}\), all the eigenstates are localized; (ii) For \(-\frac{\lambda}{\alpha}-\frac{2(1+\alpha)}{\alpha}<\delta<-\frac{\lambda}{\alpha}-\frac{2(1-\alpha)^{2}}{\alpha}\), there is a mobility edge determined by \(E=-\frac{\lambda}{\alpha}-\frac{2}{\alpha}\), below which the states are localized, whereas above which the states are extended; (iii) For \(-\frac{\lambda}{\alpha}-\frac{2(1-\alpha)^{2}}{\alpha}<\delta<-\frac{\lambda}{\alpha}+\frac{2(1-\alpha)^{2}}{\alpha}\), all the eigenstates are extended; (iv) For \(-\frac{\lambda}{\alpha}+\frac{2(1-\alpha)^{2}}{\alpha}<\delta<-\frac{\lambda}{\alpha}+\frac{2(1+\alpha)}{\alpha}\), there is a mobility edge determined by \(E=-\frac{\lambda}{\alpha}+\frac{2}{\alpha}\), below which the states are extended, whereas above which the states are localized; (v) For \(-\frac{\lambda}{\alpha}+\frac{2(1+\alpha)}{\alpha}<\delta<+\infty\), all the eigenstates are localized.

Figure 1: Numerical spectrum \(E\) of the model in Eq. (1) as a function of \(\alpha\) with model parameters \(L=10000\), \(\theta=0\) and \(t=1\). The IPR of each eigenstate is also calculated, which is indicated by the color of each eigenvalue in the spectrum. The lines in magenta and blue are exact mobility edges predicted by the analytical formula Eq. (9).
(a) \(\lambda=0.5\), \(\delta=2\), (b) \(\lambda=0.5\), \(\delta=-2\), (c) \(\lambda=-1.5\), \(\delta=1\), (d) \(\lambda=1.5\), \(\delta=1\).

To see how the mobility is engineered by the strength of \(\delta\), we show the change of IPRs and Lyapunov exponents of all eigenstates in Fig. 3 by choosing several typical parameters corresponding to Fig.2(a). The Lyapunov exponents [17] (LEs) for finite-size lattices can be numerically calculated by using [50; 51] \[\gamma(E_{i})=\frac{1}{L-1}\sum_{j\neq i}\ln\left|\frac{E_{i}-E_{j}}{t}\right|. \tag{20}\] It is well known that the Lyapunov exponent is the inverse of the localization length; thus, for an extended eigenstate it approaches a vanishing value as the lattice size \(L\) increases. On the other hand, the Lyapunov exponent is non-zero for localized states. The IPRs for all single-particle eigenstates under different strengths of \(\delta\) are shown in Fig. 3(a-d) and the LEs are correspondingly given in Fig. 3(e-h). For all of them the strength of \(\lambda\) is fixed at \(\lambda=-0.5\). The lattice size is \(L=10000\) and other parameters are \(\alpha=-0.36\) and \(\theta=0\). From top to bottom, the corresponding strengths of the second quasi-periodic potential are \(\delta=1.5\), \(\delta=3.0\), \(\delta=5.0\), and \(\delta=7.0\). It is clearly shown that as the strength of \(\delta\) is modulated from \(\delta=1.5\) to \(\delta=7.0\), the system is engineered to pass through different situations: initially wholly extended, then partially localized, and finally completely localized. Notably, during the whole process, the mobility edge denoted by the vertical line in Fig. 3 is fixed and rather robust against the variation of the strength of \(\delta\). As the strength of \(\delta\) is varied, single-particle eigenstates change their mobility properties by crossing the fixed mobility edge consecutively, one by one.

In the above calculation, \(\delta\) is chosen as an independent parameter. Nevertheless, we can also choose \(\delta\) as a function of \(\lambda\). Although the form of \(\delta(\lambda)\) does not change the mobility edge equation, it can modulate the structure of the spectrum and thus enables us to engineer the mobility properties of the quasiperiodic lattices. In Fig. 4(a) and Fig.4(b), we display the energy spectrum and corresponding IPRs versus \(\lambda\) for systems with \(\delta=\frac{1}{\alpha}\sin(\lambda)\) and \(\delta=\frac{\lambda}{\alpha}\sin(\lambda)\), respectively. While the extended states and the mobility edges occur only in a region around \(\lambda=0\) as shown in Fig. 4(a), we find that the mobility edges occur periodically in Fig. 4(b) with the increase of \(\lambda\). Intuitively, periodically occurring mobility edges can be attributed to the periodic occurrence of zeros of \(\frac{\lambda}{\alpha}+\delta(\lambda)\). According to the expression of Eq.(3), when \(\frac{\lambda}{\alpha}+\delta(\lambda)=0\), the quasiperiodic part of the potential vanishes, and the corresponding eigenstates must be extended states. When \(\frac{\lambda}{\alpha}+\delta(\lambda)\neq 0\), localized states may occur if the energy spectrum exceeds the mobility edge curves.

Figure 4: Examples of energy spectrum engineering with the freedom granted by the second quasi-periodic potential. IPR is indicated by the color of the eigenvalue point. The lattice size is \(L=10000\). (a) \(\delta=\frac{1}{\alpha}\sin(\lambda)\), (b) \(\delta=\frac{\lambda}{\alpha}\sin(\lambda)\). Other parameters are \(\alpha=0.5\), \(t=1\), \(\theta=0\).
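The finite-size Lyapunov exponents of Eq. (20) can be evaluated from the eigenvalues alone; the following is a minimal sketch (ours, not the authors' code) of that formula.

```python
import numpy as np

def lyapunov_exponents(energies, t=1.0):
    """Finite-size Lyapunov exponents via Eq. (20):
    gamma(E_i) = (1/(L-1)) * sum_{j != i} ln|(E_i - E_j)/t|."""
    L = len(energies)
    diff = np.abs(energies[:, None] - energies[None, :]) / abs(t)
    np.fill_diagonal(diff, 1.0)  # ln(1) = 0 removes the j = i term
    return np.sum(np.log(diff), axis=1) / (L - 1)
```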
Figure 3: Engineering the system’s mobility by varying the strength \(\delta\) of the second quasi-periodic on-site potential while the mobility edge is kept fixed by the first quasi-periodic on-site potential strength \(\lambda\). The left column shows the inverse participation ratios (IPRs) of all single-particle eigenstates for different values of \(\delta\). The lattice size is \(L=10000\) with parameters \(t=1\), \(\theta=0\), \(\alpha=-0.36\), and \(\lambda=-0.5\). The right column gives the corresponding Lyapunov exponents (LEs). (a,e) \(\delta=1.5\), (b,f) \(\delta=3.0\), (c,g) \(\delta=5.0\), (d,h) \(\delta=7.0\). The vertical line in (a-h) denotes the position of the anchored mobility edge.

Figure 2: Numerical spectrum \(E\) of the model in Eq. (1) as a function of \(\delta\) with different parameters. (a) \(\lambda=-0.5\), \(\alpha=-0.36\). (b) \(\lambda=-0.5\), \(\alpha=0.36\). We choose \(L=10000\), \(\theta=0\) and \(t=1\) in all cases. The lines in magenta and blue are exact mobility edges predicted by the analytical formula Eq. (9). The dashed lines denote transition points separating different regions.

### Anomalous mobility edges for the case of \(|\alpha|>1\)

For the case of \(|\alpha|>1\), the quasiperiodic potential given by Eq.(2) is in principle an unbounded potential, which, however, does not diverge at any lattice site for a finite-size lattice. According to the Simon-Spencer theorem [52], extended states are forbidden for an unbounded quasiperiodic potential, and thus the self-duality mapping does not work. Nevertheless, we can use Avila's global theory for unbounded quasiperiodic operators to derive the analytical expression of anomalous mobility edges [31; 35]. The derivation of mobility edges for \(|\alpha|>1\) is similar to the case of \(|\alpha|<1\) until Eq.(13). The result of the integral in Eq.(13) for \(|\alpha|>1\) is \[\frac{1}{2\pi}\int\ln[1-\alpha\cos(\theta+i\epsilon)]d\theta=|\epsilon|+\ln(\frac{|\alpha|}{2}).\] Thus we can get the Lyapunov exponent \[\gamma(E,\epsilon)=\ln(\frac{2f(E)}{|\alpha|}),\] which holds for any \(\epsilon\); the Lyapunov exponent \(\gamma(E,\epsilon)\) is independent of \(\epsilon\). Similar to the discussion in ref. [31], there is an anomalous mobility edge determined by \(\gamma(E)=0\). Here the anomalous mobility edge means an edge separating localized states and critical states. Through straightforward calculations, we arrive at an exact analytical formula for the anomalous mobility edge as \[E_{c}=\pm 2|t|-\frac{\lambda}{\alpha}. \tag{21}\] Before proceeding further, we set \(t=1\) for convenience. In the regions \(E>2-\frac{\lambda}{\alpha}\) and \(E<-2-\frac{\lambda}{\alpha}\), \(\gamma(E)>0\) and the eigenstates are localized eigenstates with localization length \(\xi=1/\gamma(E)\). In the region \(-2-\frac{\lambda}{\alpha}<E<2-\frac{\lambda}{\alpha}\), the energy spectrum is singular continuous and the eigenstates are critical. Next we carry out a numerical analysis to unveil the existence of anomalous mobility edges in the regime of \(|\alpha|>1\). In Fig.5(a), we display the energy spectrum and corresponding IPRs versus \(\alpha\) for both the regions of \(|\alpha|<1\) and \(|\alpha|>1\). In order to distinguish the extended eigenstates and critical eigenstates displayed in Fig.5(a), we perform a multifractal analysis and calculate the scaling exponent \(\beta_{min}\). The multifractal analysis requires considering a sequence of finite systems with different sizes.
We thus choose the system size \(L\) as the \(m\)th Fibonacci number \(F_{m}\). The scaling exponent \(\beta_{min}\) can be extracted as follows. For a given wave function \(\psi_{n}^{j}\), one can extract a scaling exponent \(\beta_{n}^{j}\) from the \(n\)th on-site probability, \(P_{n}^{j}=|\psi_{n}^{j}|^{2}\sim(1/F_{m})^{\beta_{n}^{j}}\). Here we use the minimum value \(\beta_{\min}^{j}=\min_{n}(\beta_{n}^{j})\) to characterize eigenstate properties. As the system size increases, \(\beta_{\min}^{j}\to 1\) for the extended eigenstates, whereas \(\beta_{\min}^{j}\to 0\) for the localized eigenstates. For the critical eigenstates, \(\beta_{\min}^{j}\) approaches a value in the interval \((0,1)\). In order to reduce the fluctuations among different critical eigenstates, we define an average scaling exponent \(\beta_{\min}=\frac{1}{L^{\prime}}\sum_{j=1}^{L^{\prime}}\beta_{\min}^{j}\), where \(L^{\prime}\) is the number of eigenstates in the corresponding region. The numerical result of this scaling analysis is shown in Fig.5(b).

Figure 5: (a) Mobility edges and anomalous mobility edges. The lattice size is \(L=10000\). Other parameters are \(t=1\), \(\theta=0\), \(\delta=2\) and \(\lambda=0.5\). The lines in magenta and blue are exact mobility edges predicted by the analytical formula Eq. (9). The lines in black are anomalous mobility edges predicted by the analytical formula Eq.(21). (b) \(\beta_{min}\) as a function of the inverse Fibonacci index \(1/m\) for different \(\alpha\). From top to bottom, the data points with different colors represent extended eigenstates (in the energy interval \(\left[\frac{2}{\alpha}-\frac{\lambda}{\alpha},-\frac{2}{\alpha}-\frac{\lambda}{\alpha}\right]\)) with \(\alpha=-0.36\), critical eigenstates (in the energy interval \([-2-\frac{\lambda}{\alpha},2-\frac{\lambda}{\alpha}]\)) with \(\alpha=1.5\), localized eigenstates (outside the energy interval \([\frac{2}{\alpha}-\frac{\lambda}{\alpha},-\frac{2}{\alpha}-\frac{\lambda}{\alpha}]\)) with \(\alpha=-0.36\), and localized eigenstates (outside the energy interval \([-2-\frac{\lambda}{\alpha},2-\frac{\lambda}{\alpha}]\)) with \(\alpha=1.5\). We choose \(\lambda=0.5\), \(\theta=0\) and \(\delta=0.5\) in our calculation.

Figure 6: Modulations of eigenstate properties by varying the strength \(\delta\) of the second quasi-periodic on-site potential while the anomalous mobility edges are kept fixed by the first quasi-periodic on-site potential strength \(\lambda\). The left column shows the inverse participation ratios (IPRs) of all single-particle eigenstates for different values of \(\delta\). The lattice size is \(L=10000\) with parameters \(t=1\), \(\theta=0\), \(\alpha=-2\), and \(\lambda=-0.5\). The right column gives the corresponding Lyapunov exponents (LEs). (a,d) \(\delta=0\), (b,e) \(\delta=2.0\), (c,f) \(\delta=5.7\). The vertical lines in (a-f) denote positions of the anchored anomalous mobility edges given by Eq.(21).

For the regime of \(|\alpha|>1\), anomalous mobility edges appear. On the other hand, normal mobility edges appear in the regime of \(|\alpha|<1\). From Eq.(21), we see that the pair of anomalous mobility edges is completely independent of \(\delta\). Thus, in the unbounded case, one is also granted a degree of freedom to engineer the system's spectrum while the position of the anomalous mobility edge is kept fixed. As the strength of \(\delta\) varies, certain eigenstates may hop across the anomalous mobility edge, and their properties change accordingly.
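The extraction of the scaling exponent \(\beta_{\min}^{j}\) described earlier in this section can be sketched as follows (a minimal illustration, ours; the eigenstate is assumed normalized on a lattice of \(F_m\) sites, and tiny probabilities are clipped to avoid \(\ln 0\)).

```python
import numpy as np

def beta_min(state):
    """Minimal multifractal scaling exponent beta_min^j of one normalized
    eigenstate: P_n = |psi_n|^2 = (1/F_m)^{beta_n}, so
    beta_n = -ln(P_n) / ln(F_m)."""
    P = np.clip(np.abs(state) ** 2, 1e-300, None)  # guard against log(0)
    F_m = state.size  # lattice size, chosen as a Fibonacci number
    return (-np.log(P) / np.log(F_m)).min()
```

In practice one evaluates `beta_min` for a sequence of Fibonacci sizes \(F_m\) and extrapolates versus \(1/m\), as in Fig. 5(b).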
In Fig. 6, we show this modulation of eigenstate properties by numerically calculating IPRs (left column) and LEs (right column) for all eigenstates. The two vertical lines denote the anomalous mobility edges predicted by Eq.(21). Data points in between stand for critical eigenstates, while those outside denote localized eigenstates. From top to bottom, the strengths of the second quasiperiodic potential are \(\delta=0\), \(2\), and \(5.7\). It is clearly shown that as \(\delta\) varies, the critical states are destroyed gradually and finally all critical states vanish. For the unbounded case of \(|\alpha|>1\), we notice that the spectrum is very wide, and thus a region with all eigenstates being critical is hard to access by tuning \(\delta\), in contrast with the bounded case, where a completely extended region is accessible.

## III Summary

In summary, we study 1D quasiperiodic lattices described by a generalized GPD model with an additional tunable parameter \(\delta\) in the whole parameter space, including the cases of bounded and unbounded quasiperiodic potentials. By applying Avila's global theory, we derive the analytical expression of the Lyapunov exponent, which permits us to get the exact expressions of the mobility edges and anomalous mobility edges. Although the mobility edge equation and the anomalous mobility edge equation do not include the introduced parameter \(\delta\) explicitly, the parameter can modulate the energy spectrum and thus provides a way to engineer the mobility properties of the system. By numerically calculating the IPRs and Lyapunov exponents, we show that the mobility can be flexibly engineered by modulating the strength of the new parameter while the mobility edge equation is kept unchanged. For the bounded case, the modulation of \(\delta\) can lead to completely extended, partially localized, and completely localized regions. For the unbounded case, the modulation of \(\delta\) can only lead to partially localized and completely localized states, whereas a completely critical region is hard to access. Our study unveils the richness of quasiperiodic localization and provides a scheme to engineer the mobility properties of quasiperiodic lattices.

###### Acknowledgements.

L.W. is supported by the Fundamental Research Program of Shanxi Province, China (Grant No. 202203021211315), the National Natural Science Foundation of China (Grant Nos. 11404199, 12147215) and the Fundamental Research Program of Shanxi Province, China (Grant Nos. 1331KSC and 2015021012). S. C. is supported by the NSFC under Grants No. 12174436 and No. T2121001 and the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB33000000.

## Appendix A Accurate expression of the model's mobility edge for the case with \(|\alpha|<1\)

The mobility edge can be determined by letting the Lyapunov exponent \(\gamma(E)=0\), which gives \[\left|\frac{\alpha E+\lambda}{t}\right|=2. \tag{24}\] To be specific, it consists of two parts, \[E_{c1}=\frac{2|t|-\lambda}{\alpha}, \tag{25}\] \[E_{c2}=\frac{-2|t|-\lambda}{\alpha}. \tag{26}\] To get a more accurate formula for the mobility edge, one has to resort to operator theory. According to the operator theory, the range of the physically possible energy spectrum \(E\) of the model Eq.(1) can be estimated as \(\{E\}\subseteq[-2|t|+\min(V_{n}),2|t|+\max(V_{n})]\). Before proceeding, we note that the on-site potential can be rewritten as \[V_{n}=\frac{\lambda/\alpha+\delta}{1-\alpha\cos(2\pi nb+\theta)}-\lambda/\alpha.
\tag{27}\] Thus when \(\lambda/\alpha+\delta>0\) and \(\alpha>0\), we have \[\{E\}\subseteq[-2|t|+\frac{\lambda/\alpha+\delta}{1+\alpha}-\lambda/\alpha,2|t|+\frac{\lambda/\alpha+\delta}{1-\alpha}-\lambda/\alpha], \tag{28}\] while when \(\lambda/\alpha+\delta>0\) and \(\alpha<0\), we have \[\{E\}\subseteq[-2|t|+\frac{\lambda/\alpha+\delta}{1-\alpha}-\lambda/\alpha,2|t|+\frac{\lambda/\alpha+\delta}{1+\alpha}-\lambda/\alpha]. \tag{29}\] And when \(\lambda/\alpha+\delta<0\) and \(\alpha>0\), we have \[\{E\}\subseteq[-2|t|+\frac{\lambda/\alpha+\delta}{1-\alpha}-\lambda/\alpha,2|t|+\frac{\lambda/\alpha+\delta}{1+\alpha}-\lambda/\alpha], \tag{30}\] while when \(\lambda/\alpha+\delta<0\) and \(\alpha<0\), we have \[\{E\}\subseteq[-2|t|+\frac{\lambda/\alpha+\delta}{1+\alpha}-\lambda/\alpha,2|t|+\frac{\lambda/\alpha+\delta}{1-\alpha}-\lambda/\alpha]. \tag{31}\] According to the above-obtained ranges of the energy spectrum \(E\) in the four different cases, we can arrive at more accurate mobility edges by excluding the unphysical part. Firstly, we consider the case with \(\lambda/\alpha+\delta>0\) and \(\alpha>0\). In this case, it is obvious that \(E_{c1}>E_{c2}\), and we have the following relation, \[-2|t|\alpha+\frac{\lambda+\delta\alpha}{1+\alpha}>-2|t|. \tag{10}\] Accordingly, one can get \[E_{c2}<-2|t|+\frac{\lambda/\alpha+\delta}{1+\alpha}-\lambda/\alpha. \tag{11}\] This means that \(E_{c2}\) is even below the lower limit of the energy spectrum. So \(E_{c2}\) should be omitted and only \(E_{c1}\) is valid in this case. Secondly, we turn to the case \(\lambda/\alpha+\delta>0\) and \(\alpha<0\), for which we have \(E_{c2}>E_{c1}\). Noting that \(\lambda+\delta\alpha<0\), it is easy to find that the following relation is fulfilled, \[2|t|(1+\alpha)(1-\alpha)>\lambda+\delta\alpha. \tag{12}\] Thus we can see that \(E_{c1}\) is lower than the minimum of the model's energy spectrum \(E\), i.e., \[E_{c1}<-2|t|+\frac{\lambda/\alpha+\delta}{1-\alpha}-\lambda/\alpha. \tag{13}\] So in this case, \(E_{c1}\) is excluded and \(E_{c2}\) is kept. Thirdly, we consider the case \(\lambda/\alpha+\delta<0\) and \(\alpha>0\). In this case, we have \(E_{c1}>E_{c2}\) and the relation \[\frac{\lambda/\alpha+\delta}{1+\alpha}<0<2|t|(\frac{1}{\alpha}-1). \tag{14}\] It is straightforward to arrive at \[E_{c1}>2|t|+\frac{\lambda/\alpha+\delta}{1+\alpha}-\lambda/\alpha, \tag{15}\] which means \(E_{c1}\) is outside the range of the model's energy spectrum. Therefore, in this case, the model's mobility edge is determined by \(E_{c2}\). Fourthly, we check the case \(\lambda/\alpha+\delta<0\) and \(\alpha<0\). Obviously, we have \(E_{c2}>E_{c1}\) in this case. Also noting the following relation \[\frac{\lambda/\alpha+\delta}{1-\alpha}<0<-2|t|(\frac{1}{\alpha}+1), \tag{16}\] we can get \[E_{c2}>2|t|+\frac{\lambda/\alpha+\delta}{1-\alpha}-\lambda/\alpha. \tag{17}\] This means \(E_{c2}\) is above the upper limit of the model's physical energy spectrum \(E\). Therefore, the mobility edge in this case is determined by \(E_{c1}\). In summary, when \(\alpha>0\), the mobility edge can be described by \[E_{c}=\frac{2\text{sgn}(\lambda/\alpha+\delta)|t|-\lambda}{\alpha}, \tag{18}\] and, on the other hand, for \(\alpha<0\), we have \[E_{c}=\frac{-2\text{sgn}(\lambda/\alpha+\delta)|t|-\lambda}{\alpha}. \tag{19}\] Furthermore, the mobility edge can be written in a more compact form, \[E_{c}=\frac{2\,\text{sgn}(\alpha)\,\text{sgn}(\lambda/\alpha+\delta)|t|-\lambda}{\alpha}.
Finally, since \(\text{sgn}(\alpha)\,\text{sgn}(\lambda/\alpha+\delta)=\text{sgn}(\alpha(\lambda/\alpha+\delta))=\text{sgn}(\lambda+\delta\alpha)\), we arrive at \[E_{c}=\frac{2\text{sgn}(\lambda+\delta\alpha)|t|-\lambda}{\alpha}. \tag{21}\] ## Appendix B Transition points by tuning \(\delta\) Here we focus on the interval \(-1<\alpha<0\) and estimate the range of the energy spectrum, while the discussion in the interval \(0<\alpha<1\) is similar. For the discussion below, the hopping amplitude \(t\) is set to be \(1\). Observing the on-site potential Eq.(2), a special point is obvious: \(\frac{\lambda}{\alpha}+\delta=0\). At this point, the range of the energy spectrum is \([-2,2]\) and the eigenstates are always extended. For convenience, we define a new parameter \(\Delta=\frac{\lambda}{\alpha}+\delta\) from now on. In the following, we discuss two cases. (i) \(\Delta>0\). The energy spectrum only has crossing points with the upper mobility edge line \(E=-\frac{2}{\alpha}-\frac{\lambda}{\alpha}\). When \(\Delta\) is small, the approximate range of the energy spectrum is \([-2+\frac{\Delta}{1-\alpha}-\frac{\lambda}{\alpha},2+\frac{\Delta}{1+\alpha}-\frac{\lambda}{\alpha}]\). Thus, a transition point appears when the mobility edge line intersects with the energy spectrum. It is determined by \[-\frac{2}{\alpha}=2+\frac{\Delta}{1+\alpha}. \tag{22}\] So the transition point is given as \[\Delta=-\frac{2(1+\alpha)^{2}}{\alpha}\rightarrow\delta=-\frac{\lambda}{\alpha}-\frac{2(1+\alpha)^{2}}{\alpha}. \tag{23}\] When \(\Delta\) is large, all the eigenstates become localized states. In this regime, the range of the energy spectrum is well approximated as \([\frac{\Delta}{1-\alpha}-\frac{\lambda}{\alpha},\frac{\Delta}{1+\alpha}-\frac{\lambda}{\alpha}]\). And the transition point upon which all the states become localized is determined by \[-\frac{2}{\alpha}=\frac{\Delta}{1-\alpha} \tag{24}\] and the transition point is \[\Delta=-\frac{2(1-\alpha)}{\alpha}\rightarrow\delta=-\frac{\lambda}{\alpha}-\frac{2(1-\alpha)}{\alpha}. \tag{25}\] (ii) \(\Delta<0\). The energy spectrum only has crossing points with the lower mobility edge line \(E=\frac{2}{\alpha}-\frac{\lambda}{\alpha}\). When \(|\Delta|\) is small, the approximate range of the energy spectrum is \([-2+\frac{\Delta}{1+\alpha}-\frac{\lambda}{\alpha},2+\frac{\Delta}{1-\alpha}-\frac{\lambda}{\alpha}]\). So the transition point upon which the mobility edge line meets the energy spectrum is determined by \[\frac{2}{\alpha}=-2+\frac{\Delta}{1+\alpha} \tag{10}\] and the transition point is \[\Delta=\frac{2(1+\alpha)^{2}}{\alpha}\rightarrow\delta=-\frac{\lambda}{\alpha}+\frac{2(1+\alpha)^{2}}{\alpha}. \tag{11}\] When \(|\Delta|\) is large, all the eigenstates become localized states. In this region, \([\frac{\Delta}{1+\alpha}-\frac{\lambda}{\alpha},\frac{\Delta}{1-\alpha}-\frac{\lambda}{\alpha}]\) is a good approximation for the range of the energy spectrum. And the transition point where all the states become localized is determined by \[\frac{2}{\alpha}=\frac{\Delta}{1-\alpha} \tag{12}\] and thus the transition point is given as \[\Delta=\frac{2(1-\alpha)}{\alpha}\rightarrow\delta=-\frac{\lambda}{\alpha}+\frac{2(1-\alpha)}{\alpha}. \tag{13}\] One can find that these transition points are symmetric about \(\delta=-\frac{\lambda}{\alpha}\). As \(\delta\) varies, we can obtain systems which are fully localized, partially localized and fully extended.
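As a quick numerical check, these four transition points can be evaluated directly; the sketch below (ours, with illustrative values \(\lambda=1\), \(\alpha=-0.5\), \(t=1\) that are not taken from the text) also makes the symmetry about \(\delta=-\lambda/\alpha\) explicit.

```python
# Sketch (ours): the four transition points in delta for -1 < alpha < 0,
# evaluated with illustrative parameters (t = 1 as in the text).
lam, alpha = 1.0, -0.5
center = -lam / alpha                                # delta = -lam/alpha: all states extended

d_edge_pos = center - 2 * (1 + alpha) ** 2 / alpha   # Delta > 0: mobility edge appears
d_all_pos  = center - 2 * (1 - alpha) / alpha        # Delta > 0: all states localized
d_edge_neg = center + 2 * (1 + alpha) ** 2 / alpha   # Delta < 0: mobility edge appears
d_all_neg  = center + 2 * (1 - alpha) / alpha        # Delta < 0: all states localized

print(sorted([d_all_neg, d_edge_neg, d_edge_pos, d_all_pos]))  # [-4.0, 1.0, 3.0, 8.0]
print("symmetric about", center)                               # 2.0
```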
For intervals of \(\delta\) possessing true mobility edges, it is worth noting that when \(\Delta<0\) the low-energy eigenstates are localized and the high-energy eigenstates are extended, while the situation is reversed when \(\Delta>0\).
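The localization diagnostics used throughout are equally easy to reproduce. The following minimal sketch (ours; the lattice size and parameter values are illustrative assumptions) diagonalizes the tight-binding Hamiltonian with the on-site potential of Eq. (27), computes the IPR of every eigenstate, and compares against the closed-form mobility edge of Eq. (21).

```python
import numpy as np

# Minimal sketch (ours): IPRs for H psi_n = t(psi_{n+1} + psi_{n-1}) + V_n psi_n,
# with V_n from Eq. (27). Illustrative bounded case -1 < alpha < 0, with
# Delta = lam/alpha + delta = -3 inside the mobility-edge window.
L, t, lam, alpha, delta, theta = 2000, 1.0, 1.0, -0.5, -1.0, 0.0
b = (np.sqrt(5) - 1) / 2                       # golden-mean (irrational) frequency

n = np.arange(L)
V = (lam / alpha + delta) / (1 - alpha * np.cos(2 * np.pi * n * b + theta)) - lam / alpha

H = np.diag(V) + np.diag(t * np.ones(L - 1), 1) + np.diag(t * np.ones(L - 1), -1)
E, psi = np.linalg.eigh(H)                     # columns of psi are normalized eigenstates

ipr = np.sum(np.abs(psi) ** 4, axis=0)         # ~1/L for extended, O(1) for localized
Ec = (2 * np.sign(lam + delta * alpha) * np.abs(t) - lam) / alpha   # Eq. (21)
# For Delta < 0 the text predicts localized states below E_c and extended above:
print(f"E_c = {Ec:.2f};  <IPR> below E_c = {ipr[E < Ec].mean():.4f},"
      f"  above E_c = {ipr[E > Ec].mean():.4f}")
```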
2304.08414
Tame symmetric algebras of period four
In this paper we are concerned with the structure of tame symmetric algebras of period four (TSP4 algebras, for short). We will mostly focus on the case when the Gabriel quiver of $A$ is biserial, i.e. there are at most two arrows ending and at most two arrows starting at each vertex, but some of the results can be easily extended to the general case. This serves as a basis for an upcoming series of articles devoted to solving the problem of classifying all TSP4 algebras with biserial Gabriel quiver. We present a range of properties (with relatively short proofs) which must hold for the Gabriel quiver of a tame symmetric algebra of period four. Amongst other results, we show that triangles (and squares) appear naturally in the Gabriel quivers of such algebras, just as for weighted surface algebras [6, 8, 9].
Karin Erdmann, Adam Hajduk, Adam Skowyrski
2023-04-17T16:29:26Z
http://arxiv.org/abs/2304.08414v1
# Tame symmetric algebras of period four ###### Abstract. In this paper we are concerned with the structure of tame symmetric algebras \(A\) of period four (TSP4 algebras, for short). We will mostly focus on the case when the Gabriel quiver of \(A\) is biserial, i.e. there are at most two arrows ending and at most two arrows starting at each vertex, but some of the results can be easily extended to the general case. Here, we provide a basis for an upcoming series of articles devoted to solving the problem of classifying all TSP4 algebras with biserial Gabriel quiver. We present a range of properties (with relatively short proofs) which must hold for the Gabriel quiver of a tame symmetric algebra of period four. Amongst other results, we show that triangles (and squares) appear naturally in the Gabriel quivers of such algebras, just as for weighted surface algebras [6, 8, 9]. Key words and phrases: Symmetric algebra, tame algebra, periodic algebra, quiver 2020 Mathematics Subject Classification: Primary: 16D50, 16E30, 16G20, 16G60 ## 1. Introduction Classical examples of tame symmetric algebras of period four are 2-blocks of finite-dimensional group algebras with quaternion defect groups. More recently it was discovered that all weighted surface algebras [6] (see also [8] and [9]) are tame symmetric of period four, and so are the virtual mutations investigated in [11] and the so-called weighted generalized triangulation algebras [13], which generalize both of the aforementioned classes. The main result of [7] established the classification of tame symmetric algebras of period four whose Gabriel quiver is 2-regular, which gives evidence that a general classification may be within reach after some work. A full classification in the biserial case seems to be an exciting challenge. This paper is a contribution towards this goal. Here we present a range of properties with short proofs, which will be essential input for the general classification (work in progress). Throughout we fix an algebraically closed field \(K\), and we consider finite-dimensional associative \(K\)-algebras with identity. We also assume that algebras are basic and connected. Recall that an algebra \(\Lambda\) is _self-injective_ provided that \(\Lambda\) is injective as a right \(\Lambda\)-module, i.e. projective modules are also injective (see also [12]). In this paper, we focus our attention on _symmetric_ algebras, that is, those self-injective algebras for which there is a nondegenerate associative symmetric \(K\)-bilinear form \(\Lambda\times\Lambda\to K\). There are many classical examples of symmetric algebras, for instance, blocks of finite-dimensional group algebras [4] or Hecke algebras associated to Coxeter groups [1]. Any algebra \(\Lambda\) is a quotient of its trivial extension \(T(\Lambda)\), which is a symmetric algebra. For an algebra \(\Lambda\) we denote by \(\operatorname{mod}\Lambda\) the category of finitely generated (right) \(\Lambda\)-modules. For a module \(M\) in \(\operatorname{mod}\Lambda\), its _syzygy_ is the module \(\Omega(M)=\ker(\pi)\), where \(\pi:P\to M\) is a projective cover of \(M\) in \(\operatorname{mod}\Lambda\) (so the syzygy is defined up to isomorphism). We call a module \(M\) in \(\operatorname{mod}\Lambda\) a _periodic module_ if \(\Omega^{d}(M)\cong M\), for some \(d\geqslant 1\) (the smallest such \(d\) is the _period_ of \(M\)).
Recall that an algebra \(\Lambda\) is called a _periodic algebra_ if \(\Lambda\) is periodic as a \(\Lambda\)-bimodule, or equivalently, \(\Lambda\) is a periodic module over its enveloping algebra \(\Lambda^{e}=\Lambda\otimes_{K}\Lambda\). Periodicity of an algebra implies periodicity of all non-projective indecomposable \(\Lambda\)-modules (see for example [14, Theorem IV.11.19]). In particular, if \(\Lambda\) is a periodic algebra, then all simple \(\Lambda\)-modules are periodic. Moreover, it is known [10, see Theorem 1.4] that periodicity of the simples in \(\operatorname{mod}\Lambda\) implies that \(\Lambda\) is self-injective, and hence periodic algebras form a subclass of the class of self-injective algebras. Here we work with bound quiver algebras \(\Lambda=KQ/I\), where the Gabriel quiver \(Q\) is biserial: that is, at each vertex at most two arrows start and at most two arrows end. We will consider algebras \(\Lambda\) which are both symmetric and tame, and we assume that \(\Lambda\) is a periodic algebra of period four. Any such algebra is said to be a TSP4 algebra. We will give an overview of general properties of the Gabriel quivers \(Q\) and minimal generators of the ideals \(I\) for such algebras. A full classification by quivers and relations requires much more effort. In particular, we shall see that triangles (and squares) appear naturally; see Section 4. Moreover, in the last section, we present partial results describing some distinguished types of vertices. For the necessary background in representation theory we refer to the books [2, 14]. ## 2. Preliminaries Let \(\Lambda=KQ/I\) be an admissible presentation of \(\Lambda\), where \(\Lambda\) is tame and symmetric and has \(\Omega\)-period \(4\) as an algebra. In particular, all simple modules are \(\Omega\)-periodic as \(\Lambda\)-modules with period dividing \(4\) [14, Theorem IV.11.19]. In fact, we can assume that all simples have period \(4\) (see Remark 2.2). We also assume \(Q\) is connected, that is, \(\Lambda\) is indecomposable as an algebra. For a vertex \(i\in Q\), we denote by \(P_{i}\) the indecomposable projective module in \(\operatorname{mod}\Lambda\) associated to vertex \(i\), and by \(p_{i}\) its dimension vector \(p_{i}:=\underline{\dim}(P_{i})\). Similarly, we write \(S_{i}\) and \(s_{i}\) for the simple module associated to vertex \(i\) and its dimension vector. For a vertex \(i\) of the quiver \(Q\), we let \(i^{-}\) be the set of arrows ending at \(i\), and \(i^{+}\) the set of arrows starting at \(i\). In this paper, we assume the sizes \(|i^{-}|\) and \(|i^{+}|\) are at most \(2\). With this, \(Q\) is said to be _2-regular_ if \(|i^{-}|=|i^{+}|=2\), and _biserial_ if \(1\leq|i^{-}|,|i^{+}|\leq 2\). We say that \(i\in Q_{0}\) is a _regular_ vertex (\(1\)- or \(2\)-regular), provided \(|i^{-}|=|i^{+}|\) (and the common size equals \(1\) or \(2\), respectively). Otherwise, we call \(i\) a _non-regular_ vertex. We will use the following notation and convention for arrows: we write \(\alpha,\bar{\alpha}\) for the arrows starting at vertex \(i\), with the convention that \(\bar{\alpha}\) does not exist in case \(|i^{+}|=1\). Similarly we write \(\gamma,\gamma^{*}\) for the arrows ending at some vertex \(i\), where again \(\gamma^{*}\) may not exist. Then \(Q\) has a subquiver formed by the arrows \(\gamma,\gamma^{*}\) ending at \(i\) and the arrows \(\alpha,\bar{\alpha}\) starting at \(i\). Consider the simple module \(S_{i}\), \(i\in Q_{0}\). We will briefly discuss some basic consequences of the \(\Omega\)-periodicity of \(S_{i}\), mainly the associated exact sequence.
Recall that there are natural isomorphisms \(\Omega(S_{i})=\operatorname{rad}P_{i}=\alpha\Lambda+\bar{\alpha}\Lambda\) and \(\Omega^{-}(S_{i})\cong(\gamma,\gamma^{*})\Lambda\subset P_{x}\oplus P_{y}\). In particular, it follows that the module \(P_{i}^{+}=P_{j}\oplus P_{k}\) is a projective cover of \(\Omega(S_{i})\) and the module \(P_{i}^{-}=P_{x}\oplus P_{y}\) is an injective envelope of \(\Omega^{-}(S_{i})\) (\(\Lambda\) is symmetric). Consequently, invoking the \(\Omega\)-periodicity (period 4) of \(S_{i}\), we conclude that there is an exact sequence in \(\operatorname{mod}\Lambda\) of the form \[0\to S_{i}\to P_{i}\stackrel{d_{3}}{\to}P_{i}^{-}\stackrel{d_{2}}{\to}P_{i}^{+}\stackrel{d_{1}}{\to}P_{i}\to S_{i}\to 0\tag{$*$}\] with \(Im(d_{k})\cong\Omega^{k}(S_{i})\), for \(k\in\{1,2,3\}\). By our convention, \(P_{y}\) or \(P_{k}\) may not exist. Moreover, we denote by \(p_{i}^{+}\) (respectively, \(p_{i}^{-}\)) the dimension vector \(\underline{\dim}(P_{i}^{+})\) (respectively, \(\underline{\dim}(P_{i}^{-})\)). Using the above sequence, one easily gets that \(p_{i}^{+}=p_{i}^{-}\). We use this fact, often without mention, many times in the rest of the paper. Now, we will show a few examples of results obtained by using exact sequences of the form \((*)\). As a first application note the following lemma. **Lemma 2.1**.: _If \(\Lambda\) has infinite type then there is no arrow \(\alpha:i\to j\) with \(i^{+}=\{\alpha\}=j^{-}\)._ Proof.: Suppose there is such an arrow. Then \(\Omega(S_{i})=\alpha\Lambda\cong\Omega^{-1}(S_{j})\) and \(\Omega^{2}(S_{i})\cong S_{j}\). Therefore in the exact sequence for \(S_{i}\) the projective \(P_{i}^{-}\) is isomorphic to \(P_{j}\), and this means that there is a unique arrow ending at \(i\) and it starts at \(j\). Likewise, in the exact sequence for \(S_{j}\) we have \(P_{j}^{+}\cong P_{i}\) since \(\Omega^{2}(S_{j})\cong S_{i}\). Therefore there is a unique arrow starting at \(j\) and it ends at \(i\). Now, \(Q\) is connected and hence has only two vertices and two arrows. Then \(\Lambda\) is a Nakayama algebra of finite representation type, a contradiction (see for example [14, Theorems I.10.3 and 10.7]). **Remark 2.2**.: Actually, the existence of an arrow with the above property implies that \(\Lambda\) is of finite type, as explained in the note [5], where it is also proved that this condition is equivalent to the existence of a simple module with period 2. Hence, when dealing with TSP4 algebras of infinite type, we may assume that all simples have period exactly 4. We have also the following observation. **Lemma 2.3**.: _The quiver \(Q\) does not have a subquiver of the form_ \[j\rightleftarrows i\longleftarrow t\] _where all arrows to and from \(i\) are shown._ Proof.: Assume this happens. Then in the exact sequence for \(S_{i}\) we have \(P_{i}^{+}\cong P_{j}\) and \(P_{i}^{-}\cong P_{j}\oplus P_{t}\). Since \(P_{t}\neq 0\) it follows that \(\underline{\dim}P_{i}^{+}\neq\underline{\dim}P_{i}^{-}\), a contradiction. To end this preliminary section we will give one simple lemma pertaining to the vectors \(p_{i}^{+}=p_{i}^{-}\), for \(i\in Q_{0}\) (this common dimension vector of the two modules \(P_{i}^{+}\) and \(P_{i}^{-}\) will be denoted by \(\hat{p}_{i}\)). It is clear from the exact sequence \((*)\) that \(p_{i}\) is less than or equal to \(\hat{p}_{i}+s_{i}\) (in the product order), since \(p_{i}-s_{i}=\underline{\dim}\Omega^{1}(S_{i})\) is less than \(\underline{\dim}(P_{i}^{+})=\hat{p}_{i}\).
Moreover, \(\hat{p}_{i}\) is strictly greater in total dimension, as the following lemma shows (here we write \(|x|\) for the sum \(|x|=x_{1}+\cdots+x_{n}\), where \(x=(x_{1},\ldots,x_{n})\in\mathbb{N}^{n}\), which corresponds to the \(K\)-dimension of \(X\), if \(x=\underline{\dim}(X)\), for a module \(X\) in \(\operatorname{mod}\Lambda\)). **Lemma 2.4**.: _\(|\hat{p}_{i}|>|p_{i}|\)._ Proof.: Of course, we have an exact sequence \(0\to\Omega^{2}(S_{i})\to P_{i}^{+}\to\Omega^{1}(S_{i})\to 0\), where \(\Omega^{1}(S_{i})=\operatorname{rad}P_{i}\) has dimension vector equal to \(p_{i}-s_{i}\). We claim that \[(\square)\qquad\qquad|\hat{p}_{i}|-\dim_{K}\Omega^{1}(S_{i})>1.\] Indeed, if this is not the case, then the difference is \(1\), and we conclude that \(\Omega^{2}(S_{i})\cong S_{i}\). On the other hand, \(P_{i}^{+}\) (respectively, \(P_{i}^{-}\)) is an injective envelope (respectively, a projective cover) of \(\Omega^{2}(S_{i})\), so this would imply that both are isomorphic to \(P_{i}\). It means that there is a unique arrow in \(Q\) starting at \(i\) which also ends at \(i\), and dually, there is a unique arrow in \(Q\) ending at \(i\) which also starts at \(i\). As a result, \(Q\) admits one vertex and two loops, which is impossible due to our assumptions. Therefore, \((\square)\) holds. In particular, we get \(|\hat{p}_{i}|-\dim_{K}\Omega^{1}(S_{i})=|\hat{p}_{i}|-|p_{i}|+1>1\), and hence \(|\hat{p}_{i}|-|p_{i}|>0\), so we are done. ## 3. Period 4 and minimal relations In this section, we develop further consequences of the structure of the exact sequence \((*)\) associated to the simple module \(S_{i}\), as described in the previous section. Actually, we will focus rather on maps and show their connection with the minimal relations defining the algebra \(\Lambda\). We start with our given presentation \(\Lambda=KQ/I\) and a vertex \(i\in Q_{0}\). We will briefly write \(J\) for the Jacobson radical \(\operatorname{rad}\Lambda\) of \(\Lambda\). Consider the associated exact sequence \[0\to S_{i}\to P_{i}\stackrel{d_{3}}{\to}P_{i}^{-}\stackrel{d_{2}}{\to}P_{i}^{+}\stackrel{d_{1}}{\to}P_{i}\to S_{i}\to 0\tag{$*$}\] where \(P_{i}^{+}=P_{j}\oplus P_{k}\) and \(P_{i}^{-}=P_{x}\oplus P_{y}\). We may assume that \(d_{1}(x,y):=\alpha x+\bar{\alpha}y\), since the induced epimorphism \((\alpha\ \bar{\alpha}):P_{j}\oplus P_{k}\to\Omega(S_{i})=\alpha\Lambda+\bar{\alpha}\Lambda\) is a projective cover of \(\Omega(S_{i})\) in \(\operatorname{mod}\Lambda\). Adjusting the arrows \(\gamma\) or \(\gamma^{*}\) (including the impact on the presentation, i.e. on the generators of \(I\)), we can already say that \(d_{3}(e_{i})=(\gamma,\gamma^{*})\) for some choice of the arrows \(\gamma,\gamma^{*}\) ending at \(i\) (see [7, Proposition 4.3]). The kernel of \(d_{1}\) is then \(\Omega^{2}(S_{i})=Im(d_{2})\), and it has at most two minimal generators. They are the images of the idempotents \(e_{x}\in P_{x}=e_{x}\Lambda\) and \(e_{y}\in P_{y}\) via \(d_{2}:P_{i}^{-}\to P_{i}^{+}\). We may write them as \(\varphi\) and \(\psi\), respectively, and they are contained in \(P_{j}\oplus P_{k}\), so we can also write \[\varphi=d_{2}(e_{x},0)=(\varphi_{jx},\ \varphi_{kx})\ \text{ and }\ \psi=d_{2}(0,e_{y})=(\psi_{jy},\ \psi_{ky}),\] where \(\varphi_{jx}\) belongs to \(e_{j}\Lambda e_{x}\) and similarly for the other components of \(\varphi,\psi\). The exact sequence gives information on minimal generators of the ideal \(I\), which we sometimes refer to as minimal relations. In the sense of the following lemma, arrows of \(Q\) induce minimal relations.
**Lemma 3.1**.: _If there is an arrow \(x\to i\) then there is a minimal generator \(\rho\in e_{i}\Lambda e_{x}\) for the ideal \(I\) (given the presentation)._ Proof.: Consider the generators \(\varphi,\psi\) of the kernel of \(d_{1}\). We have \(\alpha\varphi_{jx}+\bar{\alpha}\varphi_{kx}=0\) in \(\Lambda\); equivalently, the element \(\alpha\varphi_{jx}+\bar{\alpha}\varphi_{kx}\in KQ\) belongs to \(I\). It is a minimal relation since \(\varphi\) is a minimal generator. Similarly the generator \(\psi\) gives a minimal relation in \(e_{i}\Lambda e_{y}\). Recall that any homomorphism \(d:P_{x}\oplus P_{y}\to P_{j}\oplus P_{k}\) in \(\operatorname{mod}\Lambda\) can be represented in the matrix form \[M=\begin{pmatrix}m_{jx}&m_{jy}\\ m_{kx}&m_{ky}\end{pmatrix},\] where \(m_{ab}\) is a homomorphism \(P_{b}\to P_{a}\) in \(\operatorname{mod}\Lambda\), identified with an element \(m_{ab}\in e_{a}\Lambda e_{b}\), for any \(a\in\{j,k\}\) and \(b\in\{x,y\}\). In this way, \(d\) becomes multiplication by \(M\), i.e. \(d(u)=M\cdot u\), for \(u\in P_{i}^{-}\) (using column notation for vectors in \(P_{i}^{-}\) and \(P_{i}^{+}\)). Continuing with the generators of \(\Omega^{2}(S_{i})\), let \(M_{i}\) be the matrix whose columns are the components of \(\varphi\) and \(\psi\), that is, \(d_{2}\) is given by the matrix \[M_{i}=\begin{pmatrix}\varphi_{jx}&\psi_{jy}\\ \varphi_{kx}&\psi_{ky}\end{pmatrix}.\] Rewriting the compositions \(d_{1}d_{2}=0\) and \(d_{2}d_{3}=0\) in matrix form, we get the identities \[(\alpha\ \bar{\alpha})\cdot M_{i}=0\ \text{ and }\ M_{i}\cdot\begin{pmatrix}\gamma\\ \gamma^{*}\end{pmatrix}=0 \tag{1}\] for some choice of arrows \(\gamma,\gamma^{*}\) ending at \(i\) (cf. [7, Proposition 4.3]). **Remark 3.2**.: Basically, the identities (1) determine generators (cogenerators) of \(\Omega^{2}(S_{i})\), which are encoded in the columns (rows) of the matrix \(M_{i}\), satisfying the following _universal properties:_ (i) _if_ \(\theta=\begin{pmatrix}\theta_{1}\\ \theta_{2}\end{pmatrix}\in P_{j}\oplus P_{k}\) _is an element_ \(\theta\in\Lambda e_{z}\setminus J^{2}\) _such that_ \((\alpha\ \bar{\alpha})\cdot\theta=0\)_, then_ \(z=x\) _or_ \(y\) _and there is an exact sequence isomorphic to_ \((*)\) _with_ \(\theta\) _being one of the columns of_ \(M_{i}\)_, (ii) _if_ \(\mu\in P_{x}\oplus P_{y}\) _is an element_ \(\mu\in e_{z}\Lambda\setminus J^{2}\) _such that_ \(\mu\cdot\begin{pmatrix}\gamma\\ \gamma^{*}\end{pmatrix}=0\)_, then_ \(z=j\) _or_ \(k\) _and there is an exact sequence isomorphic to_ \((*)\) _with_ \(\mu\) _being one of the rows of_ \(M_{i}\)_._ Indeed, for \(\theta\) as in (i), by definition \(\theta\in Ker(d_{1})=Im(d_{2})\), so \(\theta\) can be written as \(\theta=M_{i}\cdot\eta\), for some \(\eta=\begin{pmatrix}\eta_{1}\\ \eta_{2}\end{pmatrix}\in P_{x}\oplus P_{y}\), \(\eta\in\Lambda e_{z}\). Note also that all entries of \(M_{i}\) are in \(J\) (equivalently, \(d_{2}\) is in \(\operatorname{rad}_{\Lambda}\)), since otherwise the equality \((\alpha\ \bar{\alpha})\cdot M_{i}=0\) implies that \(\alpha\) or \(\bar{\alpha}\) is in \(J^{2}\) (or \(\alpha\in K\bar{\alpha}\)), which is impossible for an arrow. But \(\theta\notin J^{2}\), i.e. \(\theta_{1}\notin e_{j}J^{2}e_{z}\) or \(\theta_{2}\notin e_{k}J^{2}e_{z}\), hence we infer that \(\eta\notin J\), because \(\eta\in J\) would force \(\theta=M_{i}\cdot\eta\in J^{2}\). As a result, we get that \(\eta_{1}\notin e_{x}Je_{z}\) or \(\eta_{2}\notin e_{y}Je_{z}\).
Since for \(a\neq b\) in \(Q_{0}\), we have \(e_{a}Je_{b}\simeq\operatorname{rad}_{\Lambda}(P_{b},P_{a})=\operatorname{Hom}_{\Lambda}(P_{b},P_{a})\simeq e_{a}\Lambda e_{b}\), we conclude that \(z=x\) or \(y\), and in both cases \(\eta\) is a unit of the local algebra \(e_{z}\Lambda e_{z}\). We may assume that \(z=x\) (the proof in case \(z=y\) is similar). In particular, \(\eta_{1}\) is then a unit in \(e_{x}\Lambda e_{x}\) (i.e. \(\eta_{1}\) is a scalar multiple of \(e_{x}\)), so we obtain the following identity \[\begin{pmatrix}\theta_{1}&\psi_{jy}\\ \theta_{2}&\psi_{ky}\end{pmatrix}=M_{i}\cdot\begin{pmatrix}\eta_{1}&0\\ \eta_{2}&e_{y}\end{pmatrix}.\] Denote by \(M^{\prime}_{i}\) the matrix on the left hand side and let \(N=\begin{pmatrix}\eta_{1}&0\\ \eta_{2}&e_{y}\end{pmatrix}\). Consequently, the above identity \(M^{\prime}_{i}=M_{i}\cdot N\) translates into a commutative diagram in \(\operatorname{mod}\Lambda\) with exact rows, where we identify \(d_{1}=(\alpha\ \bar{\alpha})\), \(d_{2}=M_{i}\), \(d_{3}=\begin{pmatrix}\gamma\\ \gamma^{*}\end{pmatrix}\), \(d_{2}^{\prime}=M^{\prime}_{i}\), \(v=N\) (which is an isomorphism, since \(\eta_{1}\) and \(e_{y}\) are units in the corresponding local algebras), and \(d_{3}^{\prime}=v^{-1}d_{3}\). It follows that the bottom row of this diagram is the required exact sequence. Similarly, if \(z=y\), then we can construct an analogous matrix \(M^{\prime}_{i}\), but with \(\theta\) as the second column. _In other words, one can swap the original sequence for the new one, in which \(M_{i}\) admits a fixed \(\theta\) as the first (or second) column._ In a similar way, we can prove (ii), where we use the cokernel of \(d_{3}\) (instead of the kernel of \(d_{1}\)); indeed, by the universal property of cokernels one can factorize the matrix \(\begin{pmatrix}\mu_{1}&\mu_{2}\\ \varphi_{kx}&\psi_{ky}\end{pmatrix}\) through the cokernel of \(d_{3}\) (\(\cong Im(d_{2})\)), and lift this factorization to a map \(u:P_{j}\oplus P_{k}\to P_{j}\oplus P_{k}\), given by a matrix \(N=\begin{pmatrix}\eta_{1}&\eta_{2}\\ 0&e_{k}\end{pmatrix}\), such that \(N\cdot M_{i}=M^{\prime}_{i}\) and \(\eta_{1}\) is a unit. This means \(ud_{2}=d_{2}^{\prime}\), yielding an analogous commutative diagram with exact, isomorphic rows. Note that the conditions (i)-(ii) mentioned above explain how minimal generators (relations) of \(I\) give rise to generators of \(\Omega^{2}(S_{i})\), and how these two are connected via the exact sequence \((*)\), up to isomorphism (here we mean both isomorphism of exact sequences and isomorphisms of algebras, i.e. changing the presentation of \(\Lambda\)). Namely, we may start with a minimal generator \(\rho\) of the ideal \(I\) of \(KQ\), without loss of generality \(\rho\in e_{i}\Lambda e_{j}\), where \(i,j\) are vertices of \(Q\). Say \(\alpha,\bar{\alpha}\) start at \(i\) and \(\beta,\beta^{*}\) end at vertex \(j\). Then we can write \(\rho\) as an element of \(KQ\) in the following way \[\rho=\alpha x_{1}\beta+\alpha x_{2}\beta^{*}+\bar{\alpha}x_{3}\beta+\bar{\alpha}x_{4}\beta^{*} \tag{2}\] where the \(x_{i}\) are linear combinations of monomials, and the expression is unique if written in terms of the monomial basis of \(KQ\). Consequently, we infer from (i) that the element \[\theta=(x_{1}\beta+x_{2}\beta^{*},\ x_{3}\beta+x_{4}\beta^{*})\] is in the kernel of \(d_{1}\), and it can be taken as a generator for \(\Omega^{2}(S_{i})\) (a column of \(M_{i}\)), for example if \(\theta\notin J^{2}\).
Similarly \(\mu=(\alpha x_{1}+\bar{\alpha}x_{3},\alpha x_{2}+\bar{\alpha}x_{4})\) gives a cogenerator (a row of \(M_{i}\)), if \(\mu\notin J^{2}\). **Remark 3.3**.: _We note that not all minimal relations can be realized in this way. Indeed, if \(\Lambda\) is a weighted surface algebra \(\Lambda=\Lambda(Q,f,m_{\bullet},c_{\bullet})\) with at least one arrow \(\alpha\in Q_{1}\) such that \(m_{\alpha}n_{\alpha}=2\) (virtual arrow), then there exists a minimal zero relation of the form \(\alpha\beta\gamma=0\), which cannot be induced from an element in the second syzygy of a simple module._ ## 4. Triangles and squares In this section we discuss some properties of triangles and squares in \(Q\) with respect to minimal relations. As we will see in Proposition 4.1 below, it is natural to investigate triangles in the quiver \(Q\), which appear together with paths of length \(2\) involved in minimal relations (note that this was an essential tool in [7, see Proposition 4.2]). Similarly, squares come with paths of length \(3\), as shown in the parallel result (Lemma 4.5). If \(p\) is a monomial in \(KQ\), we write \(p\prec I\), provided that \(p\) occurs as a term (summand) in some minimal relation defining \(I\) (i.e. \(p\) is involved in a minimal relation). Very often, paths of length two occur in this way, as shown in [7, Proposition 4.2]. ### Paths of length 2 and triangles **Proposition 4.1**.: _Assume \(\alpha:i\to j\) and \(\beta:j\to k\) are arrows such that \(\alpha\beta\prec I\). Then there is an arrow in \(Q\) from \(k\) to \(i\), so that \(\alpha\) and \(\beta\) are part of a triangle in \(Q\)._ Proof.: Recall we write \(\bar{\alpha}\) for the other arrow starting at \(i\) (if it exists), and write \(\beta^{*}\) for the other arrow ending at \(k\) (if it exists). Then \(\alpha\beta\prec I\) means that \[\alpha\beta+\alpha z_{0}\beta+\alpha z_{1}\beta^{*}+\bar{\alpha}z_{2}\beta+\bar{\alpha}z_{3}\beta^{*}=0\] in \(\Lambda\), where \(z_{0}\in J\) and \(z_{i}\in\Lambda\). We may assume \(z_{0}=0\), otherwise we replace \(\alpha\) by \(\alpha(1+z_{0})\). We use the exact sequence \((*)\). Then the identity above gives an element \(\varphi\) in the kernel of \(d_{1}\), namely \[\varphi=(\beta+z_{1}\beta^{*},z_{2}\beta+z_{3}\beta^{*})\] Clearly, \(\varphi\notin J^{2}\), because its first coordinate involves an arrow. Therefore, using Remark 3.2(i) for \(\theta:=\varphi\) (viewed as a column), we conclude that \(P_{k}\) is a direct summand of \(P_{i}^{-}\), i.e. \(k\) is the source of an arrow ending at \(i\), and the claim follows. Note that this holds for any symmetric periodic algebra of period \(4\) (i.e. also for wild ones). **Example 4.2**.: In [7, Section 11], there is a quiver \(Q\) of an algebra which is symmetric and periodic of period \(4\), but the algebra is wild (so out of our current interest; see also [3] and [7, Corollary 2]). This is mentioned as a consequence of the classification in [7]; however, we can observe it already, as the above proposition implies that the algebra must be wild. Namely, it follows from Proposition 4.1 that any path \(\rho\) in \(Q\) of length two which does not involve a loop satisfies \(\rho\nprec I\). Therefore \(B/J^{3}\) contains a wild subalgebra, given by a quiver of type \(\widetilde{\widetilde{E}}_{7}\) without any relations, as in [7, see the proof of Proposition 4.2].
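Proposition 4.1 gives a purely combinatorial necessary condition that is easy to test mechanically. The following toy sketch (ours, not from the paper; a quiver is given as a plain arrow list, and a length-2 path is encoded as a triple of vertices) checks that every length-2 path involved in a minimal relation closes up to a triangle:

```python
# Toy sketch (ours): test the necessary condition of Proposition 4.1 on a
# quiver given as a list of arrows (source, target). A length-2 path
# i -> j -> k involved in a minimal relation is encoded as a triple (i, j, k).
def triangles_close(arrows, relation_paths):
    arrow_set = set(arrows)
    return all((k, i) in arrow_set for (i, j, k) in relation_paths)

# The 3-cycle quiver 1 -> 2 -> 3 -> 1: the path 1 -> 2 -> 3 closes up via 3 -> 1.
print(triangles_close([(1, 2), (2, 3), (3, 1)], [(1, 2, 3)]))   # True
```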
**Lemma 4.3** (Triangle Lemma).: _Assume \(Q\) contains a triangle with arrows \(\gamma:x\to i\), \(\alpha:i\to j\) and \(\beta:j\to x\). If \(\gamma\alpha\nprec I\) then also \(\alpha\beta\nprec I\)._ Proof.: Consider the exact sequence for the simple module \(S_{x}\) \[0\to S_{x}\to P_{x}\to P_{j}\oplus P_{j^{*}}\to P_{i}\oplus P_{\bar{i}}\to P_{x}\to S_{x}\to 0,\] where \(j^{*}=s(\beta^{*})\) and \(\bar{i}=t(\bar{\gamma})\). Taking minimal generators for \(\Omega^{2}(S_{x})\) gives the columns of the matrix \(M_{x}\), that is \[M_{x}=\begin{pmatrix}\varphi_{ij}&\psi_{ij^{*}}\\ \varphi_{\bar{i}j}&\psi_{\bar{i}j^{*}}\end{pmatrix}.\] It satisfies \((\gamma\ \bar{\gamma})\cdot M_{x}=0\) and \(M_{x}\cdot\begin{pmatrix}\beta\\ \beta^{*}\end{pmatrix}=0\). Suppose \(\gamma\alpha\nprec I\), but (for a contradiction) \(\alpha\beta\prec I\). Then there is a minimal relation of the form \[\alpha\beta+\alpha z_{1}\beta+\alpha z_{2}\beta^{*}+\bar{\alpha}z_{3}\beta+\bar{\alpha}z_{4}\beta^{*}=0\] with \(z_{1}\in J\), and we may assume again \(z_{1}=0\). Now, if we define \[\theta:=(\alpha+\bar{\alpha}z_{3},\alpha z_{2}+\bar{\alpha}z_{4}),\] then \(\theta\cdot\begin{pmatrix}\beta\\ \beta^{*}\end{pmatrix}=0\) and \(\theta\notin J^{2}\) (since it involves an arrow), so by Remark 3.2(ii), we can take \(\theta\) as the first row of \(M_{x}\). In particular, \(\varphi_{ij}=\alpha+\bar{\alpha}z_{3}\), hence it follows that \(\gamma(\alpha+\bar{\alpha}z_{3})+\bar{\gamma}\varphi_{\bar{i}j}=0\), and we obtain \(\gamma\alpha\prec I\), a contradiction. Consequently, for a triangle as above, either all of the paths \(\gamma\alpha,\alpha\beta,\beta\gamma\prec I\), or none of them. Hence triangles in \(Q\) split into two families: triangles, say, _of type R_ (for which all \(\gamma\alpha,\alpha\beta,\beta\gamma\prec I\)) and triangles _of type N_. We will further see (Section 5) a similar distinction between non-regular vertices. Let us finish this part with the following lemma. **Lemma 4.4**.: _Assume \(i\) is a 1-vertex which is part of a triangle with arrows \(\alpha:i\to j\), \(\beta:j\to x\) and \(\gamma:x\to i\). Then both \(x\) and \(j\) must be 2-vertices._ Proof.: By Lemma 2.1, there must be another arrow, say \(\bar{\gamma}\), starting at \(x\), and there must be another arrow, say \(\alpha^{*}\), ending at \(j\). From the exact sequence for \(S_{i}\) we know that \(p_{x}=p_{j}\). (i) Assume vertex \(x\) is not a 2-vertex; then \(\beta\) is the only arrow ending at \(x\). Therefore \[e_{x}\Lambda/S_{x}\cong\beta\Lambda.\] Moreover, again by Lemma 2.1 there must be another arrow starting at \(j\), call it \(\bar{\beta}\). Hence \[\operatorname{rad}(P_{j})=\beta\Lambda+\bar{\beta}\Lambda.\] As a result, we get the following equalities of dimension vectors: \[p_{x}=s_{x}+\underline{\dim}(\beta\Lambda)\text{ and }p_{j}=s_{j}+\underline{\dim}(\beta\Lambda+\bar{\beta}\Lambda)=s_{j}+\underline{\dim}\beta\Lambda+\underline{\dim}(\bar{\beta}\Lambda/\beta\Lambda\cap\bar{\beta}\Lambda).\] Comparing \(p_{x}=p_{j}\), we conclude that \(\underline{\dim}(\bar{\beta}\Lambda/\beta\Lambda\cap\bar{\beta}\Lambda)=s_{x}-s_{j}\), so this must be zero (i.e. \(x=j\)), since otherwise \(s_{x}-s_{j}\) has a negative coordinate, which cannot happen for the dimension vector of a \(\Lambda\)-module. In particular, since \(S_{j},S_{x}\) are simple, the vector space dimension of \(\beta\Lambda\cap\bar{\beta}\Lambda\) is equal to the vector space dimension of \(\bar{\beta}\Lambda\). However, we have an inclusion of these spaces, so they are equal. Now \(\bar{\beta}\Lambda=\beta\Lambda\cap\bar{\beta}\Lambda\subseteq\beta\Lambda\), hence \(\bar{\beta}\in\beta\Lambda\).
But this is not possible since \(\bar{\beta}\) is an arrow \(\neq\beta\). (ii) The proof that \(j\) must be a 2-vertex is dual. ### Paths of length \(\geq 3\) In this short paragraph we will consider slightly longer paths, i.e. of length 3 or 4 (and in particular, the squares they induce). Suppose the quiver of \(\Lambda\) has a subquiver \[u\stackrel{\delta}{\longrightarrow}i\stackrel{\alpha}{\longrightarrow}k\stackrel{\beta}{\longrightarrow}t\stackrel{\gamma}{\longrightarrow}j\] We have the following counterpart of Proposition 4.1. **Lemma 4.5**.: _Suppose \(\alpha\beta\nprec I\) or \(\beta\gamma\nprec I\), and \(\alpha\beta\gamma\prec I\). Then there is an arrow \(j\to i\)._ Proof.: We work in \(KQ\), using the basis consisting of paths. We deal with the case \(\alpha\beta\nprec I\) (the other case is dual, working with the opposite algebra). After possibly adjusting the arrow \(\beta\), there is a minimal relation of the form \[\alpha\beta\gamma+\alpha z_{1}\gamma^{*}+\bar{\alpha}z_{2}\gamma+\bar{\alpha}z_{3}\gamma^{*}\in I\] Consider the exact sequence for the simple module \(S_{i}\), \[0\to S_{i}\to P_{i}\to P_{u}\oplus P_{u^{\prime}}\to P_{k}\oplus P_{l}\to P_{i}\to S_{i}\to 0\] Here \(\alpha:i\to k,\bar{\alpha}:i\to l\) start at \(i\), and \(\delta:u\to i\) and \(\delta^{*}:u^{\prime}\to i\) are the arrows ending at \(i\), where by convention, \(\bar{\alpha}\) or \(\delta^{*}\) may not exist (then we omit \(P_{l}\) and \(P_{u^{\prime}}\)). We take \(\Omega(S_{i})=\alpha\Lambda+\bar{\alpha}\Lambda\) and \(\Omega^{2}(S_{i})=\{(x,y)\in P_{k}\oplus P_{l}\mid\alpha x+\bar{\alpha}y=0\}\). From the exact sequence, this is equal to \(\varphi\Lambda+\psi\Lambda\) where \(\varphi=\varphi e_{u}\) and \(\psi=\psi e_{u^{\prime}}\). Let \(M_{i}\) be the matrix with columns \(\varphi\) and \(\psi\). Then (for some choice of arrows \(\delta,\delta^{*}\)) we have \[(\alpha\ \bar{\alpha})\cdot M_{i}=0\ \text{ and }\ M_{i}\cdot\begin{pmatrix}\delta\\ \delta^{*}\end{pmatrix}=0.\] The minimal relation above gives rise to the following element \(\theta\) which belongs to \(\Omega^{2}(S_{i})\), \[\theta=(\beta\gamma+z_{1}\gamma^{*},\ z_{2}\gamma+z_{3}\gamma^{*})\] If \(\theta\notin J^{2}\), then we may take \(\varphi:=\theta\) as the first column of \(M_{i}\); since its first coordinate involves \(\beta\gamma\), it must lie in \(e_{k}\Lambda e_{u}\) (see also Remark 3.2(i)). It follows that \(j=u\), and hence \(\delta\) is an arrow from \(j=u\) to \(i\). Suppose now \(\theta\in J^{2}\). We will show that this leads to a contradiction. The radical of \(\Omega^{2}(S_{i})\) is equal to \(\Omega^{2}(S_{i})J=\varphi J+\psi J\). So we can write \(\theta=\varphi v+\psi w\) with \(v,w\in J\), and we can take them in \(Je_{j}\). Then \[\beta\gamma+z_{1}\gamma^{*}=\varphi_{ku}v+\psi_{ku^{\prime}}w\] Say \(\beta\gamma\) occurs in \(\varphi_{ku}v\). We can write \(v=ve_{j}=v_{1}\gamma+v_{2}\gamma^{*}\) with \(v_{i}\in\Lambda\) (which need not be in the radical). We can write \(\varphi_{ku}=\beta y_{1}+\bar{\beta}y_{2}\) with \(y_{1},y_{2}\in KQ\). Then \[\varphi_{ku}v=\beta y_{1}v_{1}\gamma+\beta y_{1}v_{2}\gamma^{*}+\bar{\beta}y_{2}v\] and \(\beta\gamma\) is a term of \(\beta y_{1}v_{1}\gamma\). Therefore \(y_{1}v_{1}\) (which we can take equal to \(y_{1}v_{1}e_{t}\)) is equal to \(e_{t}\) modulo the radical. It follows that \(y_{1}v_{1}\) is a unit in \(e_{t}\Lambda e_{t}\). However, it factors through vertex \(u\), and _it follows that \(u=t\)_. So we have a triangle \((\alpha,\beta,\delta)\) and \(\alpha\beta\nprec I\).
It follows (from our Triangle Lemma) that also \(\beta\delta\nprec I\). On the other hand, we exploit the identity for \(\varphi_{ku}v\) a bit further. Since \(y_{1}v_{1}\) is a unit, we may assume \(\beta=\beta y_{1}\). Then \(\varphi_{ku}=\beta+\bar{\beta}y_{2}\), and recall this is the top left entry of the matrix \(M_{i}\) above. We have \(M_{i}\begin{pmatrix}\delta\\ \delta^{*}\end{pmatrix}=0\), which gives \[\beta\delta+\bar{\beta}y_{2}\delta+\psi_{ku^{\prime}}\delta^{*}=0,\] hence \(\beta\delta\prec I\), a contradiction. The above lemma shows that paths of length \(3\) involved in minimal relations induce squares in \(Q\). We have a result similar to the previous Triangle Lemma, stated as follows. **Lemma 4.6** (Square Lemma).: _Assume \(Q\) contains a square, i.e. a four-cycle formed by arrows \(\alpha\), \(\beta\), \(\gamma\) and \(\delta\) (in this cyclic order). If \(\alpha\beta\gamma\prec I\), then \(\beta\gamma\delta\prec I\)._ Proof.: Suppose that \(\alpha\beta\gamma\prec I\), but \(\beta\gamma\delta\nprec I\). In particular, we have also \(\beta\gamma\nprec I\). Consider the exact sequence for \(S_{2}\). Then \(\Omega^{2}(S_{2})\) has generators being the columns of the matrix \[M_{2}=\begin{pmatrix}\varphi_{31}&\psi_{3,1^{*}}\\ \varphi_{\bar{3}1}&\psi_{\bar{3}1^{*}}\end{pmatrix},\] and \((\alpha\ \bar{\alpha})M_{2}=0\) and \(M_{2}\begin{pmatrix}\delta\\ \delta^{*}\end{pmatrix}=0\). By our assumption we have a minimal relation \(\alpha(\beta\gamma+x_{1}\gamma+x_{2}\bar{\gamma})+\bar{\alpha}(x_{3}\gamma+x_{4}\bar{\gamma})=0\) with \(x_{1}\in J^{2}\). This gives an element \[\varphi=(\beta\gamma+x_{1}\gamma+x_{2}\bar{\gamma},\ x_{3}\gamma+x_{4}\bar{\gamma})^{t}\in\Omega^{2}(S_{2})\] which cannot be in the radical of \(\Omega^{2}(S_{2})\), since \(\beta\gamma\nprec I\). So we can take this as the first column of \(M_{2}\). It follows that \((\beta\gamma+x_{1}\gamma+x_{2}\bar{\gamma})\delta+\psi_{31^{*}}\delta^{*}=0\), so \(\beta\gamma\delta\prec I\), and we get a contradiction. The following lemma shows that sometimes one can relate paths of length \(3\) and \(4\). **Lemma 4.7**.: _Suppose we have a path in \(Q\) of the form_ \[u\stackrel{\delta}{\longrightarrow}i\stackrel{\alpha}{\longrightarrow}k\stackrel{\beta}{\longrightarrow}t\stackrel{\gamma}{\longrightarrow}j\] _with \(|k^{+}|=1\) and \(\delta\alpha,\alpha\beta,\beta\gamma\nprec I\). If \(\delta\alpha\beta\gamma\prec I\), then \(\alpha\beta\gamma\prec I\)._ Proof.: Suppose that \(\delta\alpha\beta\gamma\prec I\) and let \(\bar{\delta}:u\to u^{\prime}\) and \(\bar{\alpha}:i\to i^{\prime}\) denote the second arrows starting at \(u\) and \(i\) (if they exist). By the assumptions on \(k\), we conclude that any path \(p\in e_{u}\Lambda\) starting with \(\delta\alpha\) must go through \(\delta\alpha\beta\). Hence \(\delta\alpha\beta\gamma\), as a minimal generator of \(I\), is involved in a minimal relation of the form \[\delta\alpha\beta\gamma+\delta\alpha\beta p+\delta\bar{\alpha}q+\bar{\delta}r=0,\] where \(p,q,r\in J\). After adjusting \(\gamma:=\gamma+p\), we may change the presentation to get \(p=0\). Consequently, the element \(\rho=(\alpha\beta\gamma+\bar{\alpha}q,r)\) belongs to \(\Omega^{2}(S_{u})=\ker([\delta\ \bar{\delta}])\).
Finally, if \(u^{-}=\{\sigma,\sigma^{*}\}\), and \(v=s(\sigma)\), \(v^{\prime}=s(\sigma^{*})\), then \(\Omega^{2}(S_{u})\cong Im(M_{u})\), where \(M_{u}:P_{v}\oplus P_{v^{\prime}}\to P_{i}\oplus P_{i^{\prime}}\) is given by the matrix \[\begin{pmatrix}\varphi_{iv}&\psi_{iv^{\prime}}\\ \varphi_{i^{\prime}v}&\psi_{i^{\prime}v^{\prime}}\end{pmatrix},\] hence \(\rho=M_{u}\cdot\begin{pmatrix}\kappa_{1}\\ \kappa_{2}\end{pmatrix}\), for some \(\kappa_{1}\in P_{v}\) and \(\kappa_{2}\in P_{v^{\prime}}\). But then \[\alpha\beta\gamma+\bar{\alpha}q=\varphi_{iv}\kappa_{1}+\psi_{iv^{\prime}}\kappa_{2},\] and therefore \(\alpha\beta\gamma\) is generated by minimal relations. But \(\alpha\beta\nprec I\) and \(\beta\gamma\nprec I\), hence \(\alpha\beta\gamma\) is also involved in some minimal relation of \(I\), as claimed. ## 5. Non-regular vertices In this section, we give some partial results describing non-regular vertices. Clearly, for \(Q\) biserial, the non-regular vertices \(i\) satisfy either \(|i^{-}|=1\) and \(|i^{+}|=2\) or \(|i^{-}|=2\) and \(|i^{+}|=1\). In the first case \(i\) is called a \((1,2)\)-vertex, whilst in the second a \((2,1)\)-vertex. Let \(i\) be a \((1,2)\)-vertex with the arrow \(\alpha:j\to i\) ending at \(i\) and the arrows \(\beta:i\to k\) and \(\bar{\beta}:i\to l\) starting at \(i\). We call \(i\) a vertex _of type R_ (respectively, _of type N_), provided that both \(\alpha\beta\prec I\) and \(\alpha\bar{\beta}\prec I\) (respectively, both \(\alpha\beta\nprec I\) and \(\alpha\bar{\beta}\nprec I\)). If \(k\neq l\), \(i\) is said to be _proper_. Similar notions can be defined for \((2,1)\)-vertices. Whenever we consider a \((1,2)\)-vertex \(i\), we keep the above notation for the arrows starting and ending at \(i\). **Remark 5.1**.: We recall that there exist infinitely many pairwise non-isomorphic TSP4 algebras \(A\) containing an arbitrarily large number of \((1,2)\)- and \((2,1)\)-vertices of both types R and N. Indeed, one may take any weighted surface algebra \(\Lambda\) (see [8]) containing an arbitrary number of 'blocks' in which \(\xi_{i}\) is a virtual arrow. Then, using results of [11, see also Section 4], we conclude that the virtual mutation \(A=\Lambda(\xi)\) with respect to the sequence \(\xi=(\xi_{1},\ldots,\xi_{n})\) of virtual arrows is a TSP4 algebra, and the vertices \(x_{1},\ldots,x_{n}\) are \((1,2)\)-vertices of type N (in \(Q_{A}\)), whereas \(y_{1},\ldots,y_{n}\) are \((2,1)\)-vertices of type N (in the definition of \(\Lambda\), we have to pick weights \(m_{\xi_{i}}=m_{\alpha_{i}}=1\), for any \(i\in\{1,\ldots,n\}\)). For vertices of type R one has to consider the so-called _weighted generalized triangulation algebras_ [13], given by quivers which are glueings of blocks of five types I-V. Without going into details, we only mention that for any such algebra \(A\) (it is a TSP4 algebra in most cases), its Gabriel quiver contains two \((1,2)\)-vertices and two \((2,1)\)-vertices for each block of type V, and all these vertices are of type R (this follows directly from the shape of the relations in \(A\)). Therefore, one can easily construct a TSP4 algebra with an arbitrarily large number of non-regular vertices of type R (in this case the number of \((1,2)\)-vertices is equal to the number of \((2,1)\)-vertices). But then the Gabriel quiver of \(A\) is not biserial. It turns out that there are no non-regular vertices of type R if the Gabriel quiver is biserial. For proper ones, this is fairly easy to see, as the following lemma shows. **Lemma 5.2**.: _There are no proper non-regular vertices of type R._ Proof.: Suppose \(i\) is a \((1,2)\)-vertex of type R.
In particular, \(\alpha\beta\prec I\) yields an arrow \(\gamma:k\to j\), whereas \(\alpha\bar{\beta}\prec I\) yields an arrow \(\delta:l\to j\), due to Proposition 4.1. Suppose now \(i\) is proper. Then \(j^{-}=\{\gamma,\delta\}\), since \(Q\) is biserial, and hence \(p_{j}^{-}=p_{k}+p_{l}\). But \(p_{i}^{-}=p_{i}^{+}\) gives \(p_{j}=p_{k}+p_{l}\), because \(i\) is a \((1,2)\)-vertex, and therefore we get \(p_{j}=p_{j}^{-}=\hat{p}_{j}\), which is a contradiction with Lemma 2.4. Dual arguments provide the proof for \((2,1)\)-vertices. We complete the claim as follows. **Theorem 5.3**.: _There are no non-regular vertices of type R._ Proof.: By the previous lemma, it is sufficient to prove that there is no \((1,2)\)-vertex \(i\) of type R with \(k=l\) (i.e. a non-proper one). Suppose to the contrary that such a vertex exists. For simplicity, we will use the notation \(1,2,3\) for the vertices \(i\), \(k=l\) and \(j\), respectively. Since \(1=i\) is of type R, we get an arrow \(\gamma:2\to 3\) (see Proposition 4.1), and consequently, \(Q\) admits a subquiver with the arrow \(\alpha:3\to 1\), the pair of arrows \(\beta,\bar{\beta}:1\to 2\), and \(\gamma:2\to 3\). Of course, \(1\) is a non-regular vertex, by the assumption. We claim that also \(2\) is a non-regular vertex. Indeed, if this is not the case, then there is an arrow \(\bar{\gamma}:2\to x\), \(\bar{\gamma}\neq\gamma\), and moreover, \(x\neq 3\), because otherwise we would get a subquiver with two parallel arrows \(2\to 3\); hence \(p_{1}=p_{3}\), because \(p_{2}^{-}=p_{2}^{+}\), and so \(\hat{p}_{1}=p_{1}^{-}=p_{3}=p_{1}\), which gives a contradiction with Lemma 2.4. Consequently, we have no arrows \(x\to 1\), so both \(\beta\bar{\gamma}\nprec I\) and \(\bar{\beta}\bar{\gamma}\nprec I\), due to Proposition 4.1. But then \(\Lambda\) admits a wild (hereditary) factor algebra, a contradiction. This proves that \(2\) is a \((2,1)\)-vertex. In particular, using \(p_{1}^{-}=p_{1}^{+}\) and \(p_{2}^{-}=p_{2}^{+}\) one gets \(p_{1}=p_{2}\) and \(p_{3}=2p_{1}\). It follows also that \(3^{-}\cup 3^{+}\supsetneq\{\gamma,\alpha\}\), and hence \(3\) is a \(2\)-vertex (note: \(p_{3}^{-}=p_{3}^{+}\)). In particular, there may be a loop \(\rho\) at vertex \(3\), and then \(Q_{1}=\{\alpha,\beta,\bar{\beta},\gamma,\rho\}\). Otherwise, instead of \(\rho\) there are arrows starting and ending at \(3\), and other vertices or arrows. We start with properties which hold in both cases. We may assume that \(e_{1}J^{3}=e_{1}\beta\gamma J\) and \(\bar{\beta}\gamma\in e_{1}J^{3}\). By tameness of \(\Lambda\), there must be a minimal relation involving at least one of \(\beta\gamma,\bar{\beta}\gamma\). This means that \(\dim e_{1}J^{2}/e_{1}J^{3}\leq 1\) and clearly it is non-zero. So we may assume it is spanned by the coset of \(\beta\gamma\). Then \(\bar{\beta}\gamma=c\beta\gamma+\psi\) for some \(c\in K\) and \(\psi\in e_{1}J^{3}\). If \(c\neq 0\) then we replace the arrow \(\bar{\beta}\) by \(\bar{\beta}-c\beta\) to get the claim. We write the relation as \[\bar{\beta}\gamma=\beta\gamma\Theta_{1}\gamma+\beta\gamma\Theta_{2}\rho\ \ \ (\Theta_{1}\in J^{2},\ \ \Theta_{2}\in\Lambda).\] Note also that we may assume \(\gamma\alpha=0\). Indeed, using the exact sequence for the simple module \(S_{1}\), one can see that the second syzygy \(\Omega^{2}(S_{1})\) has one minimal generator, say \((\varphi_{23},\psi_{23})\), satisfying both \(\varphi_{23}\alpha=0\) and \(\psi_{23}\alpha=0\). Applying now the above relation, we may fix the generator of the form \((-(\gamma\Theta_{1}\gamma+\gamma\Theta_{2}\rho),\gamma)\) (previously adjusting \(\gamma\), to get \(\psi_{23}=\gamma\)).
In particular, \(\gamma\alpha=0\) for this choice. Consider the exact sequence for \(S_{2}\). This gives \[0\to\Omega^{-1}(S_{2})\cong(\beta,\bar{\beta})\Lambda\to P_{1}\oplus P_{1}\to P_{3}\to\gamma\Lambda\cong\Omega(S_{2})\to 0\] Since \(\gamma\alpha=0\) we have \(\alpha\Lambda\subset\Omega^{2}(S_{2})\). It is not in the radical of \(\Omega^{2}(S_{2})\) since \(\alpha\) is an arrow. From the exact sequence, \(\Omega^{2}(S_{2})\) has one more generator, call it \(\varphi_{31}\), and we have the minimal relation \[\alpha\beta+\varphi_{31}\bar{\beta}=0 \tag{**}\] Assume now that there is a loop at vertex \(3\) and consider the exact sequence for \(S_{3}\). This gives an exact sequence \[0\to\Omega^{-1}(S_{3})\cong(\gamma,\rho^{\prime})\Lambda\to P_{2}\oplus P_{3}\to P_{1}\oplus P_{3}\to\alpha\Lambda+\rho\Lambda\cong\Omega(S_{3})\to 0\] where \(\rho^{\prime}\) is a version of \(\rho\), and the middle map is given by the matrix \(M_{3}=\begin{pmatrix}\varphi_{12}&\psi_{13}\\ \varphi_{32}&\psi_{33}\end{pmatrix}\), which describes the generators of \(\Omega^{2}(S_{3})\). Then \((\alpha\ \rho)M_{3}=0\) and \(M_{3}\begin{pmatrix}\gamma\\ \rho^{\prime}\end{pmatrix}=0\). We can write the minimal relation (**) as \(0=\alpha\beta+\alpha\phi^{\prime}\bar{\beta}+\rho\phi^{\prime\prime}\bar{\beta}\), where \(\phi^{\prime}\in\Lambda\) and \(\phi^{\prime\prime}\in J\), and therefore \(\Omega^{2}(S_{3})\) has a generator \((\beta+\phi^{\prime}\bar{\beta},\phi^{\prime\prime}\bar{\beta})\). This can be taken as the first column in \(M_{3}\). The first row of \(M_{3}\) now gives that we have a minimal relation \((\beta+\phi^{\prime}\bar{\beta})\gamma+\psi_{13}\rho^{\prime}=0\). Now \(\bar{\beta}\gamma\) is in \(\beta\gamma J\) and \(\psi_{13}\rho^{\prime}\) is also in \(J^{3}\). Therefore \(\beta\gamma\in J^{3}\). But we have seen that \(e_{1}J^{3}=\beta\gamma J\), so we deduce \(e_{1}J^{3}\subseteq e_{1}J^{4}\) and hence \(e_{1}J^{3}=0\). As a result, \(\beta\gamma J=0\) and \(\beta\gamma\in\operatorname{soc}(e_{1}\Lambda)\cap e_{1}\Lambda e_{3}=0\), which is not possible for a symmetric algebra (see [4, I.3.5]). In general, we have arrows \(\bar{\alpha}:3\to x\) and \(\gamma^{*}:y\to 3\). The exact sequence for \(S_{3}\) is now of the form \[0\to\Omega^{-1}(S_{3})\cong(\gamma,\gamma^{*})\Lambda\to P_{2}\oplus P_{y}\to P_{1}\oplus P_{x}\to\alpha\Lambda+\bar{\alpha}\Lambda\cong\Omega(S_{3})\to 0\] Exactly as in the first case, we rewrite (**), which gives the first column of the matrix \(M_{3}\). Then from the first row we get the identity \(0=(\beta+\phi^{\prime}\bar{\beta})\gamma+\psi_{1y}\gamma^{*}\), and we get the same contradiction as before.
2306.06240
Imaging the warped dusty disk wind environment of SU Aurigae with MIRC-X
SU Aurigae is a widely studied T Tauri star and here we present original state-of-the-art interferometric observations with better uv and baseline coverage than previous studies. We aim to investigate the characteristics of the circumstellar material around SU Aur, constrain the disk geometry, composition and inner dust rim structure. The MIRC-X instrument at CHARA is a 6-telescope optical beam combiner offering baselines up to 331 m. We undertook image reconstruction for model-independent analysis, and fitted geometric models such as Gaussian and ring distributions. Additionally, the fitting of radiative transfer models constrains the physical parameters of the disk. Image reconstruction reveals a highly inclined disk with a slight asymmetry consistent with inclination effects obscuring the inner disk rim through absorption of incident star light on the near-side and thermal re-emission/scattering of the far-side. Geometric models find that the underlying brightness distribution is best modelled as a Gaussian with a FWHM of $1.53\pm0.01 \mathrm{mas}$ at an inclination of $56.9\pm0.4^\circ$ and minor axis position angle of $55.9\pm0.5^\circ$. Radiative transfer modelling shows a flared disk with an inner radius at 0.16 au which implies a grain size of $0.14 \mathrm{\mu m}$ assuming astronomical silicates and a scale height of 9.0 au at 100 au. In agreement with the literature, only the dusty disk wind successfully accounts for the NIR excess by introducing dust above the mid-plane. Our results confirm and provide better constraints than previous inner disk studies of SU Aurigae. We confirm the presence of a dusty disk wind in the circumstellar environment, the strength of which is enhanced by a late infall event which also causes very strong misalignments between the inner and outer disks.
Aaron Labdon, Stefan Kraus, Claire L. Davies, Alexander Kreplin, Sebastian Zarrilli, John D. Monnier, Jean-Baptiste le Bouquin, Narsireddy Anugu, Benjamin Setterholm, Tyler Gardner, Jacob Ennis, Cyprien Lanthermann, Theo ten Brummelaar, Gail Schaefer, Tim J. Harries
2023-06-09T20:27:05Z
http://arxiv.org/abs/2306.06240v1
# Imaging the warped dusty disk wind environment of SU Aurigae with MIRC-X ###### Abstract Context:T Tauri stars are low-mass young stars whose disks provide the setting for planet formation, one of the most fundamental processes in astronomy. Yet the mechanisms of this process are still poorly understood. SU Aurigae is a widely studied T Tauri star and here we present original state-of-the-art interferometric observations with better uv and baseline coverage than previous studies. Aims:We aim to investigate the characteristics of the circumstellar material around SU Aur, constrain the disk geometry, composition and inner dust rim structure. Methods:The MIRC-X instrument at CHARA is a 6-telescope optical beam combiner offering baselines up to 331 m. We undertook image reconstruction for model-independent analysis, and fitted geometric models such as Gaussian and ring distributions. Additionally, the fitting of radiative transfer models constrains the physical parameters of the disk. Results:Image reconstruction reveals a highly inclined disk with a slight asymmetry consistent with inclination effects obscuring the inner disk rim through absorption of incident star light on the near-side and thermal re-emission/scattering of the far-side. Geometric models find that the underlying brightness distribution is best modelled as a Gaussian with a FWHM of \(1.53\pm 0.01\) mas at an inclination of \(56.9\pm 0.4^{\circ}\) and minor axis position angle of \(55.9\pm 0.5^{\circ}\). Radiative transfer modelling shows a flared disk with an inner radius at 0.16 au which implies a grain size of \(0.14\,\mu\)m assuming astronomical silicates and a scale height of \(9.0\,\)au at 100 au. In agreement with the literature, only the dusty disk wind successfully accounts for the NIR excess by introducing dust above the mid-plane. Conclusions:Our results confirm and provide better constraints than previous inner disk studies of SU Aurigae. We confirm the presence of a dusty disk wind in the circumstellar environment, the strength of which is enhanced by a late infall event which also causes very strong misalignments between the inner and outer disks. ## 1 Introduction Outflows from protoplanetary systems are one of the key mass loss mechanisms during the planet formation process. They remove both excess mass and angular momentum from the system, a crucial process, as the final masses and rotation rates of stars are known to be significantly less than the initial mass of protostellar cores. Within young stellar objects (YSOs) there are several different mechanisms of outflow. Firstly, accretion and magnetically driven jets can emerge from the poles of the star. Secondly, photoevaporative winds caused by the ultra-violet (UV) dissociation of molecules in the upper layers of the outer disk can cause significant mass loss. Such photoevaporative winds are usually associated with higher-mass, hotter objects such as Herbig Ae/Be stars. Finally, magnetospherically driven dusty disk winds can originate from the inner disk whereby material is lifted from the disk plane along inclined magnetic field lines. Magnetospheric winds require the presence of a strong magnetic field, usually associated with T Tauri stars with convective envelopes rather than fully radiative interiors. This allows for optically thick material to exist close enough to the central star to contribute to the Near-Infrared (NIR) emission exterior to the main disk structure.
This model has been shown to successfully account for the NIR excess of the spectral energy distribution (SED) and the basic visibility features of AB Aur, MWC 275 and RY Tau (Konigl & Salmeron, 2011; Petrov et al., 2019). While all these mechanisms have been observed, it is not fully understood why some YSOs only appear to exhibit a sub-selection of outflow mechanisms. One of the first stars observed to have an inner dusty disk wind was the T Tauri star SU Aurigae (Petrov et al., 2019). For a full description of the literature surrounding SU Aurigae and the basic stellar properties, see the previous paper by these authors, Labdon et al. (2019) (henceforth LA19). In this previous work we studied the circumstellar environment of SU Aur using interferometric observations from the CHARA/CLIMB and PTI (Palomar Testbed Interferometer) instruments. The disk was found to be inclined at \(51.2\pm 1.2^{\circ}\) with a position angle of \(61.0\pm 1.0^{\circ}\) and was best modelled with a ring-like geometry with a radius of \(0.17\pm 0.02\) au. Additionally, radiative transfer modelling of visibilities and the SED found that the NIR excess could only be reproduced in the presence of a dusty disk wind, where material is lifted from the disk along magnetic field lines, allowing the reprocessing of additional stellar radiation. Since the publication of LA19, additional relevant pieces of literature have come to light. Spectroscopic and photometric monitoring of SU Aur by Petrov et al. (2019) has revealed that a dusty disk wind is the potential source of the photometric variability in both SU Aur and RY Tau at visible wavelengths. The characteristic time of change in the disk wind outflow velocity and the stellar brightness indicates that the obscuring dust is located close to the sublimation rim of the disk, in agreement with previous theoretical disk wind models (Bans & Konigl, 2012; Konigl & Salmeron, 2011). Recent ALMA and SPHERE observations by Ginski et al. (2021) reveal a significant disk warp between the inner and outer disks of \(\sim 70^{\circ}\). This misalignment is shown to cause large shadows on the outer disk as it blocks light from the central star. Their observations also suggest that SU Aur is currently undergoing a late infall event with significant amounts of material falling inwards from the outermost regions of the disk. Such events can significantly impact the evolution of the disk. This paper presents one of the first 6-telescope optical interferometric studies of a YSO to date, utilising state-of-the-art observations covering a wider range of baseline position angles and lengths (up to 331 m) (other firsts include Kraus et al. (2020); Davies et al. (2022)). Three different modelling methodologies were used to interpret our data and to provide direct comparisons to LA19. (i) Image reconstruction was used to obtain a model-independent representation of the data and to derive the basic object morphology. (ii) Following this, geometric model fitting allowed us to gain an appreciation for the viewing geometry of the disk by fitting Gaussian and ring models to the data. In addition, more complex geometric modelling was used to explore the chromaticity of the data. (iii) Finally, we combine interferometry and photometry to derive physical parameters with radiative transfer analysis, where our focus is on confirming the presence of a dusty disk wind. ## 2 Observations The CHARA array is a Y-shaped interferometric facility that comprises six 1 m telescopes.
It is located at the Mount Wilson Observatory, California, and offers operational baselines between 34 and 331 m (ten Brummelaar et al., 2005). The MIRC-X instrument (Anugu et al., 2020; Kraus et al., 2018), a six-telescope beam combiner, was used to obtain observations in the near-infrared H-band (\(\lambda=1.63\,\mu m,\Delta\lambda=0.35\,\mu m\)) between September and October 2018. We obtained 11 independent pointings of SU Aur, using a mixture of 5 and 6-telescope configurations, due to the short delay line limitations of CHARA. We obtained a maximum physical baseline of 331 m, corresponding to a resolution of \(\lambda/(2B)=0.70\) mas (milliarcseconds), where \(\lambda\) is the observing wavelength and \(B\) is the projected baseline. Details of our observations, and the calibrator(s) observed for the target during each observing session, are summarised in Table 1. The uv plane coverage that we achieved for the target is displayed in Figure 1. Our data cover an exceptionally wide range of baseline lengths and position angles, making them ideally suited to image reconstruction. The MIRC-X data were reduced using the standard python pipeline developed at the University of Michigan by J.B. le Bouquin, N. Anugu, and T. Gardner. The measured visibilities and closure phases were calibrated using interferometric calibrator stars observed alongside the target. Their adopted uniform diameters (UDs) were obtained from JMMC SearchCal (Bonneau et al., 2006, 2011), and are listed in Table 1. Considering the short timescale over which the observations were taken, the effect of time dependencies/variability of the object is thought to be minimal. Nevertheless, care was taken to check for time dependencies in the visibilities of baselines of similar length and position angle. Variability in the H band is known to be minimal, so any time dependence in the visibility amplitudes would likely be geometric in origin. However, no significant time dependencies were discovered. ## 3 Image Reconstruction Image reconstruction techniques require broad and roughly circular uv coverage along as many baseline lengths as possible. Fortunately, the data from these observations lend themselves to this process, as the uv plane has been well sampled, though some small gaps remain in the position angle coverage. This technique is useful for the interpretation of non-zero closure phases, indicative of asymmetric distributions, in a model-independent way. Our closure phase values are shown in Figure 2. There are many different algorithms with which to reconstruct images from interferometric data; the process described here involved the use of the \(SQUEEZE\) algorithm (Baron et al., 2010). \(SQUEEZE\) employs an MCMC approach to image reconstruction and was chosen due to the wide range of available regularisation options and its ability to implement \(SPARCO\), a semi-parametric approach for image reconstruction of chromatic objects (Kluska et al., 2014). In the \(SQUEEZE\)/SPARCO routine, the object is modelled as an unresolved central star with an extended, model-independent environment (Kluska et al., 2014). Both components have different spectral behaviours and so differing spectral indices. Additionally, the type and weight of the regularisation were explored; \(SQUEEZE\) allows for a very wide range of regularisation algorithms to be implemented. The regularisation plays the role of the missing information by promoting a certain type of morphology in the image.
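One widely used choice, total variation (adopted below), can be sketched in a few lines; this stand-alone illustration is only a sketch of the concept, not the regulariser as implemented inside \(SQUEEZE\):

```python
import numpy as np

def total_variation(image):
    """Isotropic total variation of a 2D image: the summed magnitude of the
    local flux gradient. Minimising this term favours piecewise-uniform maps
    with sharp but localised changes."""
    dx = np.diff(image, axis=1)[:-1, :]  # horizontal gradients
    dy = np.diff(image, axis=0)[:, :-1]  # vertical gradients
    return np.sum(np.sqrt(dx**2 + dy**2))

# a smooth disk-like image has lower TV than a noisy one of equal total flux
y, x = np.mgrid[-64:64, -64:64]
disk = np.exp(-(x**2 + 4 * y**2) / 400.0)
print(total_variation(disk), total_variation(disk + 0.05 * np.random.rand(128, 128)))
```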
Total variation (TV) was found to most reliably reproduce the best image: TV aims to minimise the total flux gradient of the image and is useful to describe uniform areas with steep but localised changes. These regularisations are considered to be among the best for optical interferometric image reconstruction (Renard et al. 2011).

Figure 1: Coverage of the uv plane of the interferometric MIRC-X observations obtained with the CHARA array.

The size and number of pixels also play an important role in image reconstruction. One cannot simply use the maximum number of pixels of the smallest size to obtain better resolution; they have to be chosen to match the uv plane sampling. It was found that a quadratic smoothing regularisation with a weight of \(1\times 10^{5}\) and \(252\times 252\) pixels of 0.1 mas in size provides the best-fit image reconstruction when utilising exact Fourier transform methods. The optimal regularisation parameters were determined using the L-curve method. The final image is shown in Figure 3 (top left panel) with the 1, 3 and 5-\(\sigma\) significance levels shown as white, green and blue contours respectively. The inclination of the disk appears to be greater than that found by LA19, with a similar minor-axis position angle. There also appears to be a central bulge along the minor disk axis, likely caused by the over-brightness of the star along this axis. The brightness distribution shows a brighter structure along the north-west of the outer disk, parallel to the major axis of the disk. This is consistent with the asymmetry found by LA19 and is indicative of a highly inclined disk where the far side of the inner rim is directly exposed to the observer, while the near side is obscured by flaring in the outer disk. There are also smaller significant structures to the south-east of the disk; we interpret these to be the shadowed near side of the rim, given their smaller extent than the northern features. We are not confident in the exact shape of these structures given their irregularity. In order to highlight the radial brightness distribution across the disk, Figure 4 shows a flattened profile; the elongation of the bulge can be seen in the NW and SE directions, with the extended rim material at radii out to 1 mas.

\begin{table} \begin{tabular}{c c c c c} \hline Date & Beam Combiner & Stations & Pointings & Calibrator (UD [mas]) \\ \hline 2018-09-13 & CHARA/MIRC-X & S1-S2-E1-E2-W1-W2 & 2 & HD 34499 (\(0.256\pm 0.007\)) \\ 2018-09-16 & CHARA/MIRC-X & S1-S2-E1-E2-W1-W2 & 1 & HD 28855 (\(0.303\pm 0.008\)) \\ 2018-09-17 & CHARA/MIRC-X & S1-S2-E1-W1-W2 & 2 & HD 40280 (\(0.599\pm 0.051\)) \\ 2018-10-26 & CHARA/MIRC-X & S1-S2-E1-E2-W1-W2 & 6 & BD+31 600 (\(0.391\pm 0.011\)), \\ & & & & BD+44 1267 (\(0.317\pm 0.008\)), \\ & & & & BD+43 1350 (\(0.318\pm 0.008\)), \\ & & & & HD 28855 (\(0.303\pm 0.008\)) \\ \hline \end{tabular} \end{table} Table 1: Observing log from 2018 from the CHARA/MIRC-X interferometer.

Figure 2: Visibilities and closure phases of the image reconstruction. Black triangles with error bars are the original calibrated observables (squared visibilities on the left and closure phases on the right); over-plotted as blue markers are the model observables of the reconstructed image. Below each plot, the fit residuals, normalised by the standard deviation, are shown as black circles.

In order to quantitatively measure the size and orientation of the emitting region, a simple ellipse was fitted to the 3, 4, and 5-\(\sigma\) flux significance contours.
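A minimal sketch of one way to perform such an ellipse fit, via a linear least-squares conic, is given below; the position-angle convention and the test values are illustrative assumptions, and the exact procedure applied to the contours here may differ:

```python
import numpy as np

def fit_ellipse(x, y):
    """Least-squares conic  a x^2 + b xy + c y^2 + d x + e y = 1  through
    contour points; returns semi-axes, inclination and major-axis angle."""
    A = np.column_stack([x**2, x * y, y**2, x, y])
    (a, b, c, d, e), *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    Q = np.array([[a, b / 2], [b / 2, c]])        # quadratic form of the conic
    centre = np.linalg.solve(-2 * Q, np.array([d, e]))
    c0 = 1 + centre @ Q @ centre                  # constant after re-centring
    evals, evecs = np.linalg.eigh(Q)              # ascending eigenvalues
    semi_major, semi_minor = np.sqrt(c0 / evals)  # small eigenvalue -> long axis
    incl = np.degrees(np.arccos(semi_minor / semi_major))
    pa = np.degrees(np.arctan2(evecs[1, 0], evecs[0, 0])) % 180.0  # math convention
    return semi_major, semi_minor, incl, pa

# quick self-test on a synthetic inclined circle (hypothetical values)
t = np.linspace(0, 2 * np.pi, 60)
xe, ye = np.cos(t), np.sin(t) * np.cos(np.radians(57.0))
x = xe * np.cos(np.radians(30.0)) - ye * np.sin(np.radians(30.0))
y = xe * np.sin(np.radians(30.0)) + ye * np.cos(np.radians(30.0))
print(fit_ellipse(x, y))  # recovers inclination ~57 deg, major-axis angle ~30 deg
```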
The averaged results find an inclination of \(46\pm 6^{\circ}\) and a position angle of \(53\pm 4^{\circ}\). Fitting an ellipse to lower significance levels is not possible due to their irregular shape. The position angles of the ellipses are in good agreement with the values derived in Section 4; the inclination shows slight deviations from these values. The chromaticity of the object is measured using two variables in the \(SPARCO\) implementation: \(f_{*}^{0}\), the stellar-to-total flux ratio at the central wavelength; and \(d_{env}\), the spectral index of the extended environment. Only the spectral index of the extended environment is needed, as interferometric data are only sensitive to the relative difference in spectral index between the star and the circumstellar material. \(d_{env}\) is found to be \(1.7\pm 0.8\), which corresponds to a temperature of \(1257^{+234}_{-231}\) K assuming the object's NIR emission is in a Rayleigh-Jeans regime. This is within the range of sublimation temperatures of typical astronomical silicates in disks, as expected at such small disk radii. \(f_{*}^{0}\) is found to be \(0.55\pm 0.6\), consistent with values measured from the SED and those found from geometric modelling (see Section 4). The large errors associated with \(f_{*}^{0}\) and \(d_{env}\) are a result of some degeneracy between the parameters; see Kluska et al. (2014) for a full description of the procedure.

Figure 4: Flattened reconstructed image, created from angular slices through the image from the central star. In the full image of Figure 3, north is up and east is left. R is the radial distance from the central star, \(\Theta\) is the polar angular direction.

Figure 3: TOP LEFT: Image reconstruction resultant bootstrapped image, including beam size and orientation. The coloured contours represent flux significance levels of 1\(\sigma\) (white), 3\(\sigma\) (green) and 5\(\sigma\) (blue).

The visibility and closure phase fits of the image reconstruction are shown in Figure 2 (top panels) along with the residuals of the fit (bottom panels). The combined visibility and closure phase reduced chi-squared \(\chi^{2}_{red}\) of the image reconstruction was found to be 4.38. ## 4 Geometric Modelling In order to understand the geometry of the system, one must consider the application of simple geometric models. In this section we explore several different approaches to modelling our data, with both non-chromatic 'grey' models and techniques which explore the chromaticity. ### Basic Geometric Models The fitting of Gaussian and ring-like distributions to the interferometric variables allows highly accurate estimations of the characteristic size, inclination and position angle of the object. In all models the central star is modelled as a point source, which is an acceptable assumption given the expected angular diameter of the star. The disk parameters are then fitted in the RAPIDO (Radiative transfer and Analytic modelling Pipeline for Interferometric Disk Observations) framework (Kreplin et al. 2018), available in-house at the University of Exeter. RAPIDO utilises the Markov chain Monte Carlo (MCMC) sampler _emcee_ to produce a fit and error estimate (Foreman-Mackey 2016). Three disk models were employed: a standard Gaussian brightness distribution, characterised by its full-width-half-maximum (FWHM); and two ring models, a sharp ring with a width fixed to 20% of the disk radius (\(R\)) and a 'skewed' ring with a more diffuse radial profile produced by convolving the ring with a Gaussian of width \(FWHM\).
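As an aside, the visibility function of such a star-plus-inclined-Gaussian model is simple enough to sketch directly. The following is an illustrative stand-alone implementation (the east-of-north angle convention, the prior ranges and the function names are assumptions; the actual RAPIDO code differs):

```python
import numpy as np

MAS = np.radians(1.0 / 3.6e6)  # one milliarcsecond in radians

def v2_star_disk(u, v, fwhm_mas, inc_deg, pa_minor_deg, f_disk):
    """Squared visibility of an unresolved star plus an inclined Gaussian disk.
    u, v are spatial frequencies in rad^-1 (east/north components of B/lambda)."""
    pa_maj = np.radians(pa_minor_deg + 90.0)        # major-axis position angle
    th_maj = fwhm_mas * MAS                          # on-sky FWHM, major axis
    th_min = th_maj * np.cos(np.radians(inc_deg))    # foreshortened minor axis
    q_maj = u * np.sin(pa_maj) + v * np.cos(pa_maj)  # frequency along major axis
    q_min = u * np.cos(pa_maj) - v * np.sin(pa_maj)
    v_disk = np.exp(-(np.pi**2 / (4 * np.log(2)))
                    * ((th_maj * q_maj) ** 2 + (th_min * q_min) ** 2))
    return ((1.0 - f_disk) + f_disk * v_disk) ** 2   # V_star = 1 (point source)

def log_prob(p, u, v, v2_obs, v2_err):
    fwhm, inc, pa, f = p
    if not (0.0 < fwhm < 15.0 and 0.0 <= inc < 90.0 and 0.0 < f < 1.0):
        return -np.inf                               # flat priors on these ranges
    r = (v2_obs - v2_star_disk(u, v, fwhm, inc, pa, f)) / v2_err
    return -0.5 * np.sum(r**2)

# sampled with, e.g., emcee.EnsembleSampler(32, 4, log_prob, args=(u, v, v2_obs, v2_err))
```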
The skewed ring is also capable of modelling azimuthal modulation or disk asymmetries; a detailed description of this model can be found in Lazareff et al. (2017). In addition to the model-specific parameters, we also fitted the inclination (\(INC\)), minor-axis position angle (\(PA\)) and disk-to-total flux ratio (\(f_{disk}\)). As we see no evidence of time variability in the data, we are able to fit all data simultaneously. The results from the simple geometric model fitting are shown in Table 2. Of the geometric models tested, the Gaussian model is considered to be the best fit. Even though the skewed ring produced a slightly smaller \(\chi^{2}\) value for the closure phase and visibility measurements, we do not consider this significant given the additional degrees of freedom in the model. In addition, as shown in Figure 5, the best-fit skewed ring tends towards a Gaussian distribution, with a very small inner radius (0.17 mas) and a wide ring width (FWHM 0.75 mas). The skewed ring fails to reproduce the small but complex asymmetries seen in the image reconstruction. The best-fit Gaussian model finds a disk of FWHM \(1.52\pm 0.01\) mas, inclined at \(56.9\pm 0.4^{\circ}\) with a minor-axis position angle of \(55.9\pm 0.5^{\circ}\). In addition, we find that \(43\pm 1\%\) of the total flux originates from the disk in the H band. This is consistent with measurements based on the infrared excess of the spectral energy distribution (SED) in LA19, and also with the flux ratio found by the image reconstruction algorithm.

\begin{table} \begin{tabular}{c c c c c} \hline Parameter & Explored Parameter Space & Gaussian & Ring & Skewed Ring \\ \hline \(R\) [mas] & \(0.0-10.0\) & – & \(0.83\pm 0.01\) & \(0.17\pm 0.16\) \\ \(FWHM\) [mas] & \(0.0-15.0\) & \(1.52\pm 0.01\) & – & \(0.75\pm 0.04\) \\ \(INC\) [\({}^{\circ}\)] & \(0.0-90.0\) & \(56.9\pm 0.4\) & \(57.4\pm 0.4\) & \(56.9\pm 0.5\) \\ \(PA\) [\({}^{\circ}\)] & \(0.0-360.0\) & \(55.9\pm 0.5\) & \(56.8\pm 0.4\) & \(55.8\pm 0.5\) \\ \(f_{disk}\) & \(0.0-1.0\) & \(0.43\pm 0.01\) & \(0.32\pm 0.01\) & \(0.43\pm 0.01\) \\ \hline \(\chi^{2}_{vis}\) & & 11.63 & 13.87 & 11.62 \\ \(\chi^{2}_{cp}\) & & 6.05 & 6.05 & 6.01 \\ \hline \end{tabular} \end{table} Table 2: Best fit parameters for the simple geometric models investigated.

Figure 5: Geometric model images, corresponding to the best fit parameters described in Table 2. TOP LEFT: Gaussian model brightness distribution. TOP RIGHT: Ring model brightness distribution. BOTTOM LEFT: Skewed Ring model brightness distribution. BOTTOM RIGHT: Full reconstructed image, repeated from Figure 3.

The primary limitation of the simple geometric models described above is that they are intrinsically 'grey' in nature, meaning they contain no spectral information; hence the large \(\chi^{2}\) values obtained in the fitting process. In order to better model the spectral dependency of the visibility, more complex temperature gradient models that are able to account for the observing wavelength must be employed. ### Temperature Gradient Models A more physically motivated model can be applied by considering the temperature gradient of the disk. A temperature gradient model (TGM) allows for the simultaneous fitting of interferometric and photometric observables. The origin of the photometric data used is described in Table 1 in Appendix A. The model is built up from several rings extending from an inner radius \(R_{\rm in}\) to an outer radius \(R_{\rm out}\). Each ring is associated with a temperature and hence a flux.
Therefore, a model SED can be computed by integrating over the resulting blackbody distributions of each of the concentric rings. Such a model allows us not only to build up a picture of the temperature profile, but also to approximate the position of the inner radius. The TGM is based upon a \(T_{R}=T_{0}(R/R_{0})^{-Q}\) profile, where \(T_{0}\) is the temperature at the inner radius of the disk \(R_{0}\), and \(Q\) is the exponent of the temperature gradient (Kreplin et al., 2020; Eisner & Hillenbrand, 2011). A TGM represents an intrinsically geometrically thin disk. A point source is used at the centre of each model to represent the unresolved star, which is a reasonable approximation given the expected angular diameter of 0.05 mas (Perez et al., 2020). Also included in the fitting of the photometric parameters is a treatment of interstellar extinction based on Fitzpatrick (1999) with \(E_{(B-V)}=0.5\) (Bertout et al., 2007). The inclination and position angle of the disk are kept fixed at \(56.9^{\circ}\) and \(55.9^{\circ}\) respectively, the values from the fitting of the Gaussian distribution; this was done to reduce the number of free parameters in the model. The fitting was undertaken using all of the visibility data (shown in Figure 2) and all the SED points simultaneously. The fitting and error computation were once again done using _emcee_ (Foreman-Mackey, 2016). The results of the temperature gradient modelling are shown in Figure 7. We find an inner disk radius of \(0.15\pm 0.04\) au, where the temperature is equivalent to \(2100\pm 200\) K and decreases with an exponent of \(Q=0.62\pm 0.02\). The inner disk radius is the point at which the dusty disk is truncated due to the sublimation of material, in contrast to more extreme objects such as FU Ori, where the inner disk radius is equivalent to the stellar radius, indicating boundary layer accretion (Labdon et al., 2021). Interior to the sublimation radius a hot, dust-free inner gas disk is expected, from which material can be magnetospherically accreted. However, our low spectral resolution continuum observations are not sensitive to these regions. An inner rim temperature of \(2100\pm 200\) K may be considered high, but it is broadly consistent with laboratory sublimation temperatures for silicate grains. An exponent of \(Q=0.62\pm 0.02\) is slightly larger than that expected for a disk heated by stellar radiation alone, and may indicate the presence of additional disk heating mechanisms, such as viscous heating (Pringle, 1981; Kenyon & Hartmann, 1987; Dullemond & Dominik, 2004). The resultant SED of the inner disk is shown in Figure 6 as the orange curve. Beyond 5-6 \(\mu m\) the TGM fails to fit the shape of the SED accurately. This is likely due to the flared nature of the disk, in contrast to the 'flat' TGM, an effect which will be strongest at longer wavelengths. In addition, the strong disk warp reported by Ginski et al. (2021) would also introduce some temperature discontinuity, the modelling of which is beyond the scope of this work. ## 5 Radiative Transfer In order to provide a more physical model and to compare directly to LA19, we used the TORUS Monte-Carlo radiative transfer code (Harries et al., 2019) to simultaneously fit the visibility, closure phase and photometric data of the SU Aurigae system. The models adopted here are based on the disk models used by LA19, adapted to account for the higher inclination and different observing wavelength.
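As an aside, the ring-summed machinery of the temperature gradient models in Sect. 4.2 can be made concrete. The following minimal sketch sums blackbody annuli with \(T(R)=T_{0}(R/R_{\rm in})^{-Q}\); the 158 pc distance and all numerical choices are illustrative assumptions, not values taken from the actual fitting code:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23      # SI constants
AU, PC = 1.496e11, 3.086e16

def planck_nu(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def tgm_flux(nu, t0=2100.0, q=0.62, r_in=0.15, r_out=0.20,
             inc_deg=56.9, dist_pc=158.0, n_rings=200):
    """Disk flux density (W m^-2 Hz^-1) from concentric blackbody rings with
    T(R) = t0 * (R / r_in)^(-q); radii in au, distance an assumed value."""
    edges = np.linspace(r_in, r_out, n_rings + 1) * AU
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    t_mid = t0 * (r_mid / edges[0]) ** (-q)
    # solid angle of each annulus, foreshortened by cos(i)
    d_omega = np.pi * np.diff(edges**2) / (dist_pc * PC) ** 2 \
        * np.cos(np.radians(inc_deg))
    return planck_nu(np.atleast_1d(nu)[:, None], t_mid[None, :]) @ d_omega

lam = np.array([1.6e-6, 10e-6])   # H band and mid-IR, in metres
print(tgm_flux(C / lam))          # disk SED points; star and extinction omitted
```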
In these TORUS simulations, the dust was allowed to settle vertically to the scale height of the gas component, and the dust sublimation radius was left as a free parameter, allowing the inner rim radius and the temperature structure of the whole disk to be determined by the iterative method of Lucy (1999). This is implemented such that the temperature is initially calculated for grid cells in an optically thin disk structure, with dust added iteratively to each cell with a temperature lower than that of sublimation, until the appropriate dust-to-gas ratio (0.01) is reached. We confirmed that stellar photosphere models of Castelli & Kurucz (2004) using these stellar parameters can reproduce the photometry measurements of SU Aur reasonably well across the visible continuum. We adopt a silicate grain species with dust properties and opacities taken from Draine (2003). For a more detailed description of TORUS and the algorithms used, see Davies et al. (2018); Labdon et al. (2019). The dusty disk wind model is adapted from Bans & Konigl (2012). This mechanism is based on the presence of a large-scale, ordered magnetic field which threads the disk and along which disk material is flung out. The high magnetic pressure gradient above the disk surface accelerates the material, which is then collimated through the azimuthal and poloidal field components (Bans & Konigl, 2012). These centrifugally driven winds are highly efficient at distributing density above and below the plane of the disk, carrying angular momentum away from the disk surface.

Figure 6: Spectral energy distribution of SU Aurigae. Green points are photometric data from a variety of instruments. The red line is Spitzer IR data. The black dashed line is direct radiation from the stellar photosphere. The blue line is the best TORUS-computed radiative transfer model inclined at 56\({}^{\circ}\). The orange line is the SED computed from the simple temperature gradient models described in Section 4.2.

A full description of the implementation within the TORUS radiative transfer code can be found in LA19. The disk model adopted follows the curved inner rim prescription of Isella & Natta (2005) with a density-dependent sublimation radius, whereby grains located in the disk midplane are better shielded due to higher densities and so can exist closer to the central star than grains in the less dense upper layers. A full summary of the disk parameters can be found in Table 3. The only parameters we explored with respect to LA19 were the disk scale height (\(h_{0}\)), the grain size (\(a\)) and the flaring index \(\beta_{disk}\). The key difference between the models described here and those of LA19 is the grain size adopted. Here we adopt a smaller grain size of \(0.14\,\mu m\), which in turn leads to a slightly smaller inner radius of \(0.16\,\)au. Additionally, in order to improve the SED fit at longer wavelengths, we also adopt a more modest scale height of \(9.0\,\)au. The resultant SED from the radiative transfer model is shown in Figure 6, in addition to the model stellar photosphere, also calculated within TORUS. The SED is fit well across the optical, IR and longer mm wavelengths; however, it is a relatively poor fit across the 8-40 \(\mu m\) range. We attribute this to the disk warp reported in Ginski et al. (2021), which would result in a physical disk break and temperature discontinuity. The geometry governing disk warps is little understood, in particular how a warp would affect its inner and outer edges.
The shapes of these rims, and how the vertical structure of the disk is affected at this point, are not known. Such a model is well beyond the scope of this paper for two reasons: firstly, we lack high spatial resolution data at mid/far-IR wavelengths which would cover the location of the disk warp; secondly, the complex geometry of the warp would require extensive modelling, which is difficult given the limitations of the radiative transfer code used. The presence of a disk wind is once again required to fit both the visibilities and the SED. Models without a disk wind fail to reproduce the IR excess across both the H and K bands, with insufficient NIR disk flux; this significantly impacts both the SED and the interferometric fit. A disk wind is required to eject more hot dust above the midplane of the disk, where it is directly exposed to stellar radiation which is reprocessed as an IR excess. This is shown in Figure 8, where the squared visibilities are shown for disk models with and without the dusty wind environment. The disk wind model provides a far superior fit to the observations, being able to successfully reproduce the NIR excess. Beginning from the wind parameters found in LA19, we ran a new grid of models exploring the disk wind parameter space. The parameters explored and their ranges are described in Table 3, along with the resulting best fit values. The results are broadly similar to those found in LA19, with the exception of a slightly higher temperature of the material in the disk wind, closer to the dust temperatures found in the inner disk. Of particular note is the into-wind accretion rate of \(10^{-7}\,M_{\odot}\,\mathrm{yr}^{-1}\). Considering the historically accepted ratio of into-wind to onto-star accretion of 0.1, this level of transport is perhaps unphysically high given the age of the star; a discussion of possible mechanisms is included in Section 6. The final computed image is shown in Figure 3 (middle left panel) and shows the clear asymmetry originating from the inclination, as does the asymmetry map in the same figure. In order to approximate what this computed image would look like if observed at the same resolution as the original observations, we computed synthetic visibilities and closure phases based on the radiative transfer images (as described in Davies et al. 2018). Artificial noise and error bars were computed to be representative of the original data and to ensure an accurate representation. These synthetic observables were then reconstructed in the same manner as the original data, as described in Section 3. Care was taken to ensure consistency in the reconstruction parameters for both the real and synthetic observables. The reconstructed TORUS image is also shown in Figure 3 (bottom right panel), and shows clear similarities with both the original TORUS image and the image reconstructed from the original data. ## 6 Discussion Our extensive observations and analysis of the circumstellar environment of SU Aurigae have revealed the inner disk in unprecedented detail. The wide variety of techniques used to analyse our interferometric data allows us to precisely define the disk characteristics. Image reconstruction is a crucial, model-independent method of analysis which is ideally suited to our dataset, with its extensive uv and baseline coverage. Our analysis reveals an elliptical shape, indicative of an object with a high inclination, as shown in Figure 3.
There appears to be a central bulge to the disk; however, this feature is not thought to be physical, but rather a manifestation of the brightness of the central star combined with the width of the disk at this point.

Figure 7: Temperature gradient profile across the inner au of SU Aur. The dusty disk extends down to an inner radius consistent with the expected sublimation radius and temperature. Interior to this, a hot dust-free inner disk is expected, which is not detected in the continuum observations.

The more extended material in the image is thought to be a depiction of the far side of the disk rim, which is un-obscured by the outer disk; this is reinforced by the fact that the extended material is bright in the north-east of the image. There is a significant asymmetric feature in the form of a thin bright region on the north-eastern edge of the disk. The distinctive shape of this feature indicates that it is again caused by the high inclination obscuring the near-side disk rim. The effect of an inclined disk on the observed brightness distribution is described extensively by Jang-Condell & Turner (2013) and accurately matches the observations here. By fitting an ellipse to the significant features in the image, we gain a quantitative measure of the disk geometry. The results find an inclination of \(46\pm 6^{\circ}\) and a position angle of \(53\pm 4^{\circ}\), very similar to those derived from the more reliable geometric model fits described below. The scale and shape of the image are similar to those of LA19, with a slightly more inclined viewing angle. We consider the images of this work to be the more accurate depiction of SU Aur given the higher quality observations, taken over a much shorter timescale, with the added detail and resolution this entails. Although geometric modelling is much more constrained in the geometries it can explore, it does provide a more quantitative view of the disk. It was found that the model which best fit our data was a simple Gaussian distribution with a point source representing the star. A Gaussian model is consistent with other work, both on this object by LA19, but also in other YSO studies such as the survey by Lazareff et al. (2017), who find that a little under half of their 51 objects can be modelled by a Gaussian structure. The Gaussian fitted in this work has a FWHM of \(1.52\pm 0.01\) mas (\(0.239\pm 0.002\) au) at an inclination of \(56.9\pm 0.4^{\circ}\), a minor-axis position angle of \(55.9\pm 0.5^{\circ}\), and a stellar-to-total flux ratio of \(0.57\pm 0.01\). The reduced \(\chi^{2}\) value is 11.63 for the visibilities and 6.05 for the closure phases, which are identically \(0^{\circ}\) for this centro-symmetric model. These values are in agreement with the literature values of Akeson et al. (2005), who find a K band radius of \(0.18\pm 0.04\) au and an inclination of \(62^{+4}_{-8}\) degrees. Similar literature values for the inclination are \(\sim\)\(60^{\circ}\) and \(\sim\)\(50^{\circ}\), found by Unruh et al. (2004) and Jeffers et al. (2014) respectively. The minor-axis position angle derived here is significantly greater than the literature values of \(24\pm 23^{\circ}\) and \(15\pm 5^{\circ}\) found by Akeson et al. (2005) and Jeffers et al. (2014). This difference is likely due either to the poor uv coverage and lack of longer baselines in previous interferometric studies, both of which make estimating the position angle and inclination particularly unreliable, or to the fact that other, non-interferometric studies focus on the outer disk, which is shown by Ginski et al.
(2021) to be misaligned compared to the inner disk. The geometric modelling results are broadly similar to those presented in our previous work LA19, where an inclination of \(50.9\pm 1.0^{\circ}\) and a minor-axis position angle of \(60.8\pm 1.2^{\circ}\) were found, and the data were marginally better described by a ring-like brightness distribution. The values and models presented here are considered to be more accurate due to the high precision of the observations and the significantly smaller potential for temporal variations, as the data of LA19 were coalesced over 14 years. These values are also consistent with observations of the outer disk by Ginski et al. (2021), where dark shadows, originating from a significant disk warp between the inner and outer regions, are observed in scattered light. On larger scales the near side of the disk is seen to the north-east, while in our observations it is seen to the south-west. In modelling the temperature gradient of SU Aur we can gain an appreciation of the spectral dependence of our interferometric variables across the 6 spectral channels of MIRC-X. Our modelling finds a disk which extends down to \(0.15\pm 0.04\) au, where the temperature is equivalent to \(2100\pm 200\) K and decreases with an exponent of \(Q=0.62\pm 0.02\). The outer edge of this temperature regime was found to be \(0.20\pm 0.03\) au, showing that this prescription only covers the innermost regions of the disk. Modelling the outer regions of the disk is beyond the scope of this paper, as our NIR interferometric data do not cover emission from these regions, and larger radii are likely affected by the significant disk warp, causing a temperature gradient discontinuity.

Figure 8: Coloured circles are the MIRC-X squared visibilities, where the colour represents the wavelength across the H-band. Black crosses are the corresponding squared visibilities extracted from TORUS radiative transfer model images. LEFT: Best fit TORUS model including a strong dusty disk wind component. RIGHT: The same base disk model, excluding any dusty disk wind component. Lower panels show the residuals in the fits between data and models.

A temperature gradient exponent of \(Q=0.62\pm 0.02\), as found here, lies between two established models from the literature: that of Pringle (1981), who finds that a steady-state, optically-thick accretion disk heated by viscous processes will exhibit an exponent of 0.75, and those of Kenyon & Hartmann (1987) and Dullemond & Dominik (2004), who show that a flared disk heated by reprocessed stellar radiation alone will exhibit an exponent of \(\leq 0.5\). As such, it is difficult to comment definitively on the heating of the circumstellar environment of SU Aurigae, but it is possible that the disk is not heated by stellar radiation alone and that additional heating processes, such as viscous heating, may also be present. The radiative transfer modelling presented in this paper is heavily based on LA19; for a detailed discussion of the motivation behind certain choices, particularly in relation to the shape of the inner rim, we refer the reader to Section 5 of that paper. In this work we are able to achieve a similar SED fit to LA19, including with the adoption of a smaller \(0.14\,\mu\)m grain size; we note that the smaller grain size is in line with the older radiative transfer work on SU Aur by Akeson et al. (2005). The smaller grain size results in a smaller inner rim, which now extends down to 0.13 au at a sublimation temperature of 2000 K.
This is within the uncertainties of the values predicted by the temperature gradient modelling and is roughly consistent with the older literature values of \(0.18\pm 0.04\) au and \(0.17\pm 0.08\) au by Akeson et al. (2005) and Jeffers et al. (2014) respectively. The flaring parameters \(\alpha_{\rm disk}\) and \(\beta_{\rm disk}\) were fixed such that \(\alpha_{\rm disk}=\beta_{\rm disk}+1\) and found to be 2.3 and 1.3 respectively, a more physical representation than the values adopted in LA19. Similarly to LA19, a dusty disk wind is required in order to fit both the SED across the NIR and the visibilities, as shown in Figure 8. The TORUS implementation of a dusty wind does not depend on the underlying launching mechanism, but simply prescribes a geometry above and below the disk which is populated by dust grains, where they can reprocess stellar radiation and contribute to the NIR excess. This was first put forward by Bans & Konigl (2012) in the context of a magnetospherically driven disk wind.

Figure 9: Black crosses are the closure phase data obtained with MIRC-X. Overlaid as coloured points are the TORUS model closure phases; the colours represent the wavelength of the spectral channels and follow the same convention as other plots in this work. Below, black points show the normalised residual errors of the fit.

Figure 8 highlights how a standard (no-wind) model cannot sufficiently fit the interferometric data, and how the addition of a dusty disk wind provides a significantly better fit. The maximum temperature of the dust in the wind is similar to the temperature of the dust at the sublimation rim of the disk (1900 K and 2000 K respectively), as expected given that the dust is launched from close to the sublimation rim. In Appendix B each baseline is plotted in a separate panel in order to explore the chromatic (temperature) gradient of the data. The discrepancy in the gradient and level of some baselines can be explained by the simplified heating description within the radiative transfer model: TORUS produces a disk heated by reprocessed stellar radiation alone, with no internal disk heating such as viscous heating, whereas the temperature gradient modelling suggests that we might expect some viscous heating within the disk. However, the implementation of the dusty disk wind in this scenario is not completely physical, owing to the high into-wind outflow rate of \(1\times 10^{-7}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\) required. If one assumes the historically accepted outflow-to-accretion ratio of 0.1, the resulting onto-star accretion rate is greater than those typically found in T Tauri stars. In particular, this contrasts with the measured accretion rate of SU Aur (Perez et al., 2020). However, the inflow-to-outflow ratio is the subject of some discussion, with recent works suggesting the ratio may be closer to unity in some situations (Pascucci et al., 2022), particularly by invoking magnetically driven winds originating from a dead zone close to the sublimation radius. Based on our data we cannot differentiate between such specific wind-launching mechanisms. In addition, the suggestion that SU Aur is undergoing a late-stage infall event (Ginski et al. 2021) could be a potential explanation for such a high level of mass transport through the system.
2306.09173
Instabilities in a growing system of active particles: scalar and vector systems
The physics of micron-scale biological colonies usually benefits from different out-of-equilibrium sources. In bacterial colonies and cellular tissues, the growth process is among the important active sources that determine the dynamics. In this article, we study the generic dynamical instabilities associated with the growth phenomena that may arise in both scalar and vectorial systems. In vectorial systems, where the rotational degrees of freedom of particles play a role, a phenomenological growth-mediated torque can affect the rotational dynamics of individual particles. We show that such a growth-mediated torque can result in active traveling waves in the bulk of a growing system. In addition to the bulk properties, we analyze the instabilities in the shape of growing interfaces in both scalar and vectorial systems.
Forouh Maleki, Ali Najafi
2023-06-15T14:50:32Z
http://arxiv.org/abs/2306.09173v1
# Instabilities in a growing system of active particles: scalar and vector systems ###### Abstract The physics of micron-scale biological colonies usually benefits from different out-of-equilibrium sources. In bacterial colonies and cellular tissues, the growth process is among the important active sources that determine the dynamics. In this article, we study the generic dynamical instabilities associated with the growth phenomena that may arise in both scalar and vectorial systems. In vectorial systems, where the rotational degrees of freedom of particles play a role, a phenomenological growth-mediated torque can affect the rotational dynamics of individual particles. We show that such a growth-mediated torque can result in active traveling waves in the bulk of a growing system. In addition to the bulk properties, we analyze the instabilities in the shape of growing interfaces in both scalar and vectorial systems. ## I Introduction. The process of growth is a necessary element that brings the meaning of life to living systems. From a physicist's standpoint, one important and central challenge lies in understanding the mechanism by which a non-equilibrium proliferating system forms its overall functioning shape [1; 2]. Bacterial colonies [3; 4; 5], biofilms [6; 7; 8], and growing tissues [9; 10; 11; 12] are standard examples belonging to the class of active systems in which one can study growth phenomena. Along this general line, self-organization and ordering in active colonies [13; 14; 15; 16; 17; 18], pattern formation in biological systems [13; 19; 20] and nematic ordering in bacterial colonies [21; 22] have been studied extensively. A growing system benefits from chemical, physical, and biological processes at many different time and length scales [23]. Different mechanisms, ranging from behavior at the level of individual cells to cell-cell signaling and environmental feedback, help a growing system perform its job. Most of these processes are based on non-equilibrium reactions that eventually aim to provide mechanical motion. In a simplified mesoscale mechanical picture, out-of-equilibrium forces can be modeled by active terms in a phenomenological description called active nematics [24; 25; 26]. Such continuum descriptions, accompanied by agent-based simulations, have to be compared with experimental observations. Resulting from nonlinearities hidden in continuum models, physical instabilities are among the intriguing phenomena that can help the system find its overall shape. Examples of such instabilities include buckling in the bulk and roughening at boundaries [7; 12; 27; 28; 29; 30; 31]. In this article, we aim to present a generic description of growing active matter that takes growth into account at a phenomenological level. To develop our idea, we consider growing matter in two categories: scalar and vectorial. In a scalar system, the rotational degrees of freedom of individual cells are neglected, while in the vectorial case, rotational motion plays an important role. In the vectorial case, in addition to the density, the director field is also a relevant variable that needs to be taken into account. Based on symmetry arguments, we introduce a growth-mediated torque in our description and investigate the instabilities in both the bulk and the boundaries of a growing system. To this end, we use a minimal model that can capture the mechanics of a growing system. ## II Model As shown in Fig.
1 (a,b), consider a two-dimensional system composed of motile particles with the ability to proliferate. This dense system of active particles lives in an ambient fluid, which could be either an aqueous medium (in bacterial suspensions) or extracellular fluid (in growing tissues). In biological systems, apoptosis and cell division both contribute, resulting in a positive or negative overall growth rate. We denote by \(g(t)\) the rate at which the particles proliferate. In addition to growth, the motility of the particles also acts as a source of mechanical motion in this system. Each self-driven motile particle can exert stress on the fluid. Denoting by \(a\) the amount of stress that each active particle carries, this stress could be either positive or negative: extensile (pusher) and contractile (puller) active particles are described by \(a>0\) and \(a<0\), respectively. The two phenomena of active motility and proliferation can result in large-scale motion in this system. In general, the time scale for motion in the active part of the system may be much smaller than its counterpart in the ambient fluid. As a result of this observation, and to study the long-time behavior of the system, we only consider the dynamics of the active part. At the continuum level the physical state of this system can be described by the coarse-grained fields of density \(\rho\), velocity field \(\mathbf{v}(\mathbf{r},t)\) and director field \(\mathbf{n}(\mathbf{r},t)\) of the active part. These fields are subject to the following dynamical equations [32; 33]: \[\rho\frac{d}{dt}\mathbf{v}=\nabla\cdot\Sigma-\Gamma\mathbf{v},\ \ \ \ \nabla\cdot\mathbf{v}=g(t),\] \[\frac{D}{Dt}\mathbf{n}=\gamma^{-1}\left(\mathbf{h}+\mathbf{n}\times\mathbf{\tau}^{\mathrm{g}}\right), \tag{1}\] where the co-moving and co-rotating derivatives are defined as \(\frac{d}{dt}=\partial_{t}+\mathbf{v}\cdot\nabla\) and \(\frac{D}{Dt}=\frac{d}{dt}+(I-\mathbf{n}\mathbf{n})\cdot\mathcal{D}\cdot\mathbf{n}\), respectively. Here \(I\) denotes the unit tensor of rank two and \(\mathcal{D}=D^{-}+AD^{+}\) with \(D^{\pm}=(1/2)(\nabla\mathbf{v}\pm[\nabla\mathbf{v}]^{\mathrm{T}})\). Spherical particles correspond to \(A=0\), while oblate (prolate) particles correspond to \(A>0\) (\(A<0\)) [34]. In our active system, both the growth-mediated torque \(\mathbf{\tau}^{\mathrm{g}}\), a phenomenological term that we will introduce later, and the thermodynamic current \(\Sigma\) drive the system out of equilibrium. The thermodynamic force can be written as [33]: \[\Sigma=\Sigma^{\mathrm{d}}+\Sigma^{\mathrm{r}}-\frac{\partial\mathcal{F}}{\partial\nabla\mathbf{n}}\cdot\nabla\mathbf{n}-a\mathbf{n}\mathbf{n}-PI. \tag{2}\] The term proportional to the local nematic tensor \(\mathbf{n}\mathbf{n}\) is the active stress resulting from the motility of the particles [35]. Furthermore, the elastic free energy density of the nematic phase and the corresponding molecular field can be written as [32; 36]: \[\mathcal{F}=\frac{K}{2}\sum_{i,j}\partial_{i}n_{j}\partial_{i}n_{j},\ \ \ \ \ h_{i}=-\delta\mathcal{F}/\delta n_{i}.\] Recalling the equation of motion (Eq. 1), the rotational friction coefficient is denoted by \(\gamma\), and the term proportional to \(\gamma^{-1}\) guarantees relaxation to states with zero molecular field in passive systems.
The dissipative part of the stress tensor is given by \(\Sigma^{\mathrm{d}}=2\eta[D^{+}-1/2(\nabla\cdot\mathbf{v})I]\) and the reactive stress is given by \(\Sigma^{\mathrm{r}}=\frac{1}{2}(\mathbf{n}\mathbf{h}-\mathbf{h}\mathbf{n})-\frac{A}{2}(\mathbf{n}\mathbf{h}+\mathbf{h}\mathbf{n})\). Friction with the substrate is described by a single parameter \(\Gamma\). It is important to note that the number density of such a growing system is not conserved; in this case, a detailed model for the pressure should be considered. The growth pressure, denoted by \(P\), is the main place where growth affects the dynamics. Following the well-studied two-fluid model [37; 38], we choose a simple model for this growth pressure. For active systems similar to biological tissues, the concept of homeostasis applies: the static steady state can be described by a characteristic homeostatic pressure denoted by \(P_{\mathrm{h}}\). Slowly growing systems are very near the homeostatic state, and the pressure can be expanded in powers of the growth rate. It can be shown that in this regime the density \(\rho\) can be treated as approximately constant. Furthermore, the pressure obeys the relation [37]: \[P=P_{\mathrm{h}}-\zeta\nabla\cdot\mathbf{v}, \tag{3}\] where the bulk viscosity for the homeostatic state is denoted by \(\zeta\). For \(\zeta>0\), any local increase in the pressure results in the domination of apoptosis over cell division. It should be noted that the low Reynolds number condition, which is relevant for our purposes, excludes any nonlinearity in the velocity field emerging from the co-moving derivatives. Nutrients are also assumed to be accessible everywhere without any limitation. This last simplification works well for 2-D colonies, where the third dimension always provides free space to supply food. How does growth affect the orientational degrees of freedom? As a result of short-range cell-cell communication, the growing state of the cells surrounding a target cell can influence the motion of this target cell. This results in a growth-mediated torque that we denote by \(\mathbf{\tau}^{\mathrm{g}}\). The existence of such a torque was discussed previously [24]. The polarity and geometrical asymmetry of the particles might influence this scenario. At leading order in the growth gradient, and following symmetry considerations, a term like \(\mathbf{\tau}^{\mathrm{g}}\sim\nabla g\times\mathbf{n}\) is a possible term that we will consider. In the homeostatic picture, the growth rate is proportional to the pressure, so the growth-mediated torque can be written as: \[\mathbf{\tau}^{\mathrm{g}}=-\beta A^{2}\nabla P\times\mathbf{n}, \tag{4}\] where \(\beta\) is a phenomenological parameter, and a simple second-order dependence on the asymmetry parameter \(A\) is assumed, meaning that the torque is similar for oblate and prolate particles. For \(\beta>0\), as shown in Fig. 1(d), the growth torque tends to align the particles' polarity with \(-\nabla g\).

Figure 1: (a) Schematic view of a proliferating system composed of anisotropic active particles moving in an ambient fluid. (b) Detailed processes of apoptosis and cell division. (c) A growing front with fluctuating shape. (d) Growth-mediated torque tends to align a particle with the direction of the growth gradient.
Denoting the angle between \(\mathbf{n}\) and \(\nabla g\) by \(\psi\), for a fixed growth gradient \(\psi=\pi\) is an absorbing state, meaning that the particle tends to move toward a region with a lower growth rate. The model described so far has many ingredients describing different physical processes that can affect the dynamics. We consider the effects of the different terms step by step. First, we consider a scalar model in which we neglect the orientational degrees of freedom of the particles by dropping the variable \(\mathbf{n}\). Later on, we will add the effects of the nematic variable \(\mathbf{n}\). ## III Scalar system To study the bulk properties of a growing scalar system, we first consider an unbounded growing system. For a gradually growing system in which the growth rate is very small, we linearize the equations in terms of the velocity field \(\mathbf{u}(\mathbf{r},t)\) and study its dynamics. Defining the Fourier transform of any variable \(f_{i}(\mathbf{r},t)\) as \(\tilde{f}_{i}(\mathbf{q},\omega)=\int d\mathbf{r}\,dt\,f_{i}(\mathbf{r},t)\exp\left[i(\mathbf{q}\cdot\mathbf{r}-\omega t)\right]\), we observe that the dispersion for longitudinal and transverse modes obeys the following relations: \[i\omega_{\mathrm{L,T}}=\rho^{-1}\left(\Gamma+(\eta+A_{\mathrm{L,T}}\zeta)q^{2}\right), \tag{5}\] where \(A_{\mathrm{L}}=1\) and \(A_{\mathrm{T}}=0\). Transverse and longitudinal directions are defined with respect to the direction of the wave vector \(\mathbf{q}\). As can be seen, for this scalar case any fluctuation in the bulk will eventually disappear. This result is rooted in the homeostatic description, where growth-mediated motion is assumed to propagate through pressure variations. For this scalar case, all motion is therefore expected to take place at the boundaries, and it is necessary to analyze the boundary effects. To study the boundary effects, we consider the case where the system is allowed to grow in one dimension. In this case, the system is limited from one side by a rigid wall and is free to expand from the other side. As depicted in Fig. 1(c), we choose a reference frame with the \(z\)-axis along the growth direction. The growing front lies at \(z=0\) while the bounded part of the system sits at \(z=-\infty\). The growing front is assumed to be a permeable and abrupt boundary (at \(z=0\)). On top of this permeable boundary (\(z>0\)), a fluid reservoir with fixed pressure is in contact with the growing system. The upper fluid is in mechanical equilibrium with the ambient fluid in the tissue (the part of the growing system that is passive, with no dynamics in our model). Furthermore, we assume that an additional external mechanical pressure, denoted by \(P^{\mathrm{ext}}\), is exerted on the active part of the system at the position of the boundary [37]. This externally applied pressure helps us capture the physics of growth and homeostasis in living systems. The homeostatic state is a state in which the external pressure is adjusted to a specific value \(P_{\mathrm{h}}\) so that the boundary reaches a stationary state. In this case, the division and apoptosis processes cancel each other and the growth rate vanishes on average. Obviously, any deviation from the homeostatic pressure will result in motion of the boundary. Writing \(P^{\mathrm{ext}}=P_{\mathrm{h}}+\Delta P\), for \(\Delta P<0\) cell division dominates over death and the boundary will move upward.
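As a brief check of Eq. (5): in the scalar case only the dissipative stress and the growth pressure survive in \(\Sigma\), so that \(\nabla\cdot\Sigma=\eta\nabla^{2}\mathbf{v}+\zeta\nabla(\nabla\cdot\mathbf{v})\), and a plane-wave perturbation \(\mathbf{v}\sim e^{i(\mathbf{q}\cdot\mathbf{r}-\omega t)}\) gives \[-i\omega\rho\,\tilde{\mathbf{v}}=-\left(\Gamma+\eta q^{2}\right)\tilde{\mathbf{v}}-\zeta\,\mathbf{q}\,(\mathbf{q}\cdot\tilde{\mathbf{v}}),\] so that the longitudinal component (\(\tilde{\mathbf{v}}\parallel\mathbf{q}\)) relaxes at the rate \(\rho^{-1}[\Gamma+(\eta+\zeta)q^{2}]\) and the transverse component at \(\rho^{-1}[\Gamma+\eta q^{2}]\), reproducing Eq. (5).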
To consider the dynamics, we note that the stress tensor for this scalar system reads: \[\Sigma=\eta(\nabla\mathbf{v}+\nabla\mathbf{v}^{T})-\eta^{-}(\nabla\cdot\mathbf{v})I-P_{h}I,\] and the dynamical equation \(\nabla\cdot\Sigma=\Gamma\mathbf{v}\) takes the following form: \[(\eta\partial_{z}^{2}+\eta^{+}\partial_{x}^{2})v_{x}+\zeta\partial_{x}\partial_{z}v_{z}=\Gamma v_{x},\] \[(\eta^{+}\partial_{z}^{2}+\eta\partial_{x}^{2})v_{z}+\zeta\partial_{x}\partial_{z}v_{x}=\Gamma v_{z},\] where \(\eta^{\pm}=(\eta\pm\zeta)\). To solve these equations, we decompose the velocity field into two parts: \[\mathbf{v}=(v_{z}^{s}(z)+\delta v_{z})\hat{z}+\delta v_{x}\hat{x}, \tag{7}\] where the steady-state solution \(v_{z}^{s}(z)\) corresponds to steady growth with a flat boundary. The remainder describes the possible fluctuations corresponding to time-dependent nonuniformity in the shape of the boundary. Denoting the fluctuating shape of the boundary by the function \(b(x,t)\) (see Fig. 1), the velocity satisfies the following boundary condition: \[v_{z}(x,b)-v_{z}(x,0)=\partial_{t}b(x,t)+v_{x}(x,b)\partial_{x}b(x,t). \tag{8}\] Denoting the surface tension of the growing boundary by \(\gamma_{s}\), the components of the stress tensor should satisfy the following relations: \[\Sigma_{nn}(x,z=b(x,t))=-P_{h}-\Delta P-\gamma_{s}\nabla\cdot\hat{n},\] \[\Sigma_{nt}(x,z=b(x,t))=0, \tag{9}\] where \(\hat{t}\approx\hat{x}\) and \(\hat{n}\approx\hat{z}-\partial_{x}b(x,t)\hat{x}\) are the local tangent and normal vectors. In terms of Cartesian components, the stress tensor can be written as: \[\Sigma_{nn}=\Sigma_{zz}-2(\partial_{x}b)\Sigma_{zx},\] \[\Sigma_{nt}=\Sigma_{zx}+(\partial_{x}b)(\Sigma_{zz}-\Sigma_{xx}), \tag{10}\] with \[\Sigma_{xx}=\eta^{+}\partial_{x}v_{x}-\eta^{-}\partial_{z}v_{z}-P_{h},\] \[\Sigma_{zz}=\eta^{+}\partial_{z}v_{z}-\eta^{-}\partial_{x}v_{x}-P_{h},\] \[\Sigma_{xz}=\eta(\partial_{z}v_{x}+\partial_{x}v_{z}). \tag{11}\] Neglecting the fluctuations, we see that the steady-state solution reads: \[v_{z}^{s}(z)=v_{\mathrm{g}}e^{z/\lambda},\quad v_{\mathrm{g}}=\frac{-\Delta P}{\sqrt{\Gamma\eta^{+}}}, \tag{12}\] where \(\lambda=\sqrt{\eta^{+}/\Gamma}\) is the hydrodynamic screening length and the growth velocity, i.e. the speed at which the boundary proceeds, is denoted by \(v_{\rm g}\). In a small region of thickness \(\lambda\), just below the growing front, a net flow can be observed. Beyond this layer, the pressure has its equilibrium value \(P_{h}\) and no net growth is observed. For \(\Delta P>0\) apoptosis dominates and \(v_{\rm g}<0\), showing that the boundary moves downward. For a system in which growth (cell division) dominates, \(\Delta P<0\), corresponding to a positive growth velocity \(v_{\rm g}>0\), and the boundary moves upward. To examine whether the growing front remains smooth, we consider the fluctuations up to first order in the height function \(b(x,t)\). We consider a traveling wave pattern \(b(x,t)=\tilde{b}e^{i(q_{x}x-\omega t)}\) and investigate the response of the system. Furthermore, we consider the following ansatz for the velocity profile: \[\delta{\bf v}(x,z,t)=\delta\tilde{\bf v}e^{i(q_{x}x-\omega t)+kz},\] where \(k^{-1}\) sets the depth over which the fluctuations penetrate into the system.
Inserting the above velocity pattern into the equations, we arrive at the following equations: \[\begin{bmatrix}(\eta k^{2}-\eta^{+}q_{x}^{2}-\Gamma)&i\zeta kq_{x}\\ i\zeta kq_{x}&(\eta^{+}k^{2}-\eta q_{x}^{2}-\Gamma)\end{bmatrix}\begin{bmatrix}\delta\tilde{v}_{x}\\ \delta\tilde{v}_{z}\end{bmatrix}=0. \tag{13}\] The non-trivial solution to the above set of equations results in two possible values for \(k\). In the limit of very small wave numbers of the fluctuations (\(q\to 0\)), we expect to recover \(k=\lambda^{-1}\). As a result of this requirement, the solution with \(k=\sqrt{q^{2}+\lambda^{-2}}\) is the acceptable one. Now we investigate the boundary conditions to examine the spectrum of fluctuations. Putting the above solutions into the boundary conditions, the dispersion relation reads: \[-i\omega=-\frac{\gamma_{s}\lambda q_{x}^{2}}{\eta^{+}}\left(1-\frac{v_{\rm g}}{2\gamma_{s}}\left(3\eta-\zeta\right)\right). \tag{14}\] As expected, all fluctuations relax to zero for a homeostatic state with \(v_{\rm g}=0\). Far from the homeostatic state, where the system grows, the flat boundary can be unstable depending on the parameters. For \(3\eta>\zeta\), we see that a flat growing boundary (\(v_{\rm g}>0\)) is unstable for \(v_{\rm g}\geq 2\gamma_{s}/(3\eta-\zeta)\). On the other hand, the flat growing boundary is always stable for \(\eta<\zeta/3\). In Fig. 2, in terms of \(\tilde{\zeta}=\zeta/\eta^{+}\) and \(\bar{v}_{\rm g}=\eta^{+}v_{\rm g}/\gamma_{s}\), we have mapped the possible behaviors of the growing boundary. This instability criterion can be understood as a competition between surface tension and growth: the activity corresponding to growth amplifies the surface undulations, whereas the surface tension provides a restoring force. The underlying mechanism for the instability is rooted in the fact that the variation in the growth rate in our homeostatic model is roughly proportional to the shape variations given by the function \(b(x,t)\). As a result of shape fluctuations of the moving front, a local protrusion on the growing front experiences a higher growth rate, which eventually results in a shape instability. ## IV Vectorial system (bulk) From now on we move to the vectorial system, where orientational order plays an important role. In this case the activity parameter \(a\), the growth parameter \(v_{\rm g}\) and \(\beta\), the parameter reflecting the growth-mediated torque, all contribute to the dynamics. To study the dynamics in the bulk, consider an infinite system with coarse-grained fields given by \({\bf n}={\bf n}_{0}\), \(P=P_{\rm h}\) and \({\bf v}=0\).
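Before analyzing the vectorial fluctuations, the scalar-interface result can be made concrete: Eq. (14) and its threshold are easily evaluated numerically. A minimal sketch, with purely illustrative parameter values in arbitrary consistent units:

```python
import numpy as np

def interface_growth_rate(qx, eta, zeta, gamma_s, v_g):
    """Re(-i omega) from Eq. (14), with Gamma = 1 folded into lambda.
    Positive values mean the flat growing front is unstable."""
    eta_p = eta + zeta
    lam = np.sqrt(eta_p)  # hydrodynamic screening length for Gamma = 1
    return -(gamma_s * lam * qx**2 / eta_p) \
        * (1 - v_g * (3 * eta - zeta) / (2 * gamma_s))

eta, zeta, gamma_s = 1.0, 0.5, 1.0
v_crit = 2 * gamma_s / (3 * eta - zeta)  # threshold, valid for 3*eta > zeta
for v_g in (0.5 * v_crit, 2.0 * v_crit):
    s = interface_growth_rate(qx=1.0, eta=eta, zeta=zeta,
                              gamma_s=gamma_s, v_g=v_g)
    print(f"v_g/v_crit = {v_g / v_crit:.1f}: growth rate = {s:+.3f}")
# below threshold the rate is negative (stable front); above it, positive
```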
Denoting the fluctuations by \(\delta{\bf n}\) and \(\delta{\bf v}\), we linearize the dynamical equations and neglect the effect of inertia to reach the following equations for the perturbative fields: \[-\Gamma\delta{\bf v}-\zeta({\bf q}\cdot\delta{\bf v}){\bf q}-\eta q^{2}\delta{\bf v}+\frac{i}{2}(1-A)({\bf q}\cdot\delta{\bf h}){\bf n}_{0}-\frac{i}{2}(1+A)({\bf q}\cdot{\bf n}_{0})\delta{\bf h}-ia({\bf q}\cdot\delta{\bf n}){\bf n}_{0}-ia({\bf q}\cdot{\bf n}_{0})\delta{\bf n}=0,\] \[-i\omega\delta{\bf n}=\frac{i}{2}(1+A)({\bf q}\cdot{\bf n}_{0})\delta{\bf v}-\frac{i}{2}(1-A)(\delta{\bf v}\cdot{\bf n}_{0}){\bf q}+\frac{1}{\gamma}\delta{\bf h}-iA(\delta{\bf v}\cdot{\bf n}_{0})({\bf q}\cdot{\bf n}_{0}){\bf n}_{0}+\frac{\beta\zeta}{\gamma}A^{2}({\bf q}\cdot\delta{\bf v})({\bf q}-({\bf q}\cdot{\bf n}_{0}){\bf n}_{0}),\] where the wave vector and frequency of the perturbations are denoted by \({\bf q}\) and \(\omega\), respectively. We denote by \(\theta_{0}\) the angle between the wave vector and the director field \({\bf n}_{0}\) (see Fig. 3). Furthermore, \(\delta{\bf h}/K=-({\bf q}-{\bf n}_{0}({\bf n}_{0}\cdot{\bf q}))({\bf q}\cdot\delta{\bf n})-({\bf n}_{0}\cdot{\bf q})^{2}\delta{\bf n}\). The solution to the above equations in an unbounded space gives the dispersion relation: \[-i\omega=\tau_{\mathrm{r}}^{-1}-iv(\mathbf{q},\mathbf{n}_{0})q, \tag{15}\] where the relaxation time and group velocity of the perturbations are given by: \[\tau_{\mathrm{r}}^{-1}=\frac{a}{\Gamma/q^{2}+\eta}\left(\frac{1}{2}\cos 2\theta_{0}+\frac{A}{4}g(\theta_{0})\right)-K_{\mathrm{e}}\left(\Gamma/q^{2}+\eta+\gamma/4+\frac{A}{2}\gamma\cos 2\theta_{0}+A^{2}\gamma g(\theta_{0})\right),\] \[v(\mathbf{q},\mathbf{n}_{0})=\frac{\beta\zeta A^{2}\sin 2\theta_{0}}{\gamma(\eta^{+}+\Gamma/q^{2})}\left(a-2AKq^{2}\right), \tag{16}\] where \(K_{\mathrm{e}}=\frac{Kq^{2}}{\gamma(\eta+\Gamma/q^{2})}\), and the function \(g(\theta_{0})\) is given by: \[g(\theta_{0})=\frac{2\Gamma/q^{2}+2\eta+\zeta(1+\cos(4\theta_{0}))}{\eta^{+}+\Gamma/q^{2}}.\] As reflected in the first term of \(\tau_{\mathrm{r}}^{-1}\), for \(a>0\) (\(a<0\)), splay (bend) fluctuations tend to initiate a hydrodynamic instability for geometrically symmetric swimmers (\(A=0\)) [35]. Bend and splay elastic energy can stabilize both modes of perturbation, as shown by the second term in \(\tau_{\mathrm{r}}^{-1}\) [35]. The interesting physics is in the real part of the frequency, \(\omega_{\mathrm{r}}=v(\mathbf{q},\mathbf{n}_{0})q\). When the hydrodynamic instabilities are stabilized by nematic elastic energy or other stabilization mechanisms [39], stable traveling waves can propagate in the system with the corresponding group velocity \(v\). The propagation of such active waves is directly related to the parameter \(\beta\), the growth-mediated torque. Active waves can be observed in systems that either contain motile particles (\(a\neq 0\), \(A\neq 0\)) or contain non-motile particles with finite rotational elasticity (\(K\neq 0\), \(A\neq 0\)). For a system with nematic order, the velocity of such propagating modes crucially depends on the direction of propagation. The maximum velocity of the propagating active waves corresponds to the case where the wave vector makes an angle \(\theta_{0}=\pi/4\) with the nematic direction. Fig.
Fig. 3 shows a snapshot of the active wave propagating along the direction of maximum velocity. It is interesting to note that pure bend and splay waves (\(\theta_{0}=0\) and \(\theta_{0}=\pi/2\)) cannot propagate. In addition to being a director wave, this traveling wave can be regarded as a pressure or growth wave: local fluctuations in the growth rate can propagate through the system. More interesting is the direction of propagation: the wave is right-moving, in the sense that for a fixed angle \(\theta_{0}\) it can only propagate in the \(+\hat{q}\) direction. In Fig. 3, we present an intuitive picture that reveals the physics behind this active wave. As seen in this picture, in a locally ordered nematic phase, a small fluctuation in the direction of a particle can produce a hydrodynamic flow. The divergence of this excess flow initiates a pressure gradient and subsequently gives rise to a gradient in the growth rate. The growth-mediated torque then promotes the fluctuation to propagate.

## VI Vectorial system (boundary)

Having studied the bulk properties of an active nematic system, we now consider a system that is bounded by a rigid wall at \(z=-\infty\) and a freely growing boundary at \(z=0\), see Fig. 1. At very long times the system reaches a steady state in which the boundary moves with velocity \(v_{\mathrm{g}}\). The steady-state velocity profile is similar to that of the scalar case given in Eq. 12, with the growth velocity of the boundary now given by: \[v_{\mathrm{g}}=\frac{-\Delta P+a}{\sqrt{\Gamma\eta^{+}}}. \tag{17}\] This steady state corresponds to the case where all elongated particles are perpendicular to the moving front. To consider the dynamics of fluctuations we put \(\mathbf{n}=\hat{z}+\delta\theta(x,z,t)\hat{x}\) and denote the shape of the interface by the function \(b(x,t)=\tilde{b}e^{i(q_{x}x-\omega t)}\). Similar to the scalar case, we assume that all the bulk fields behave like \(\delta\mathbf{v}(x,z,t)=\delta\tilde{\mathbf{v}}e^{i(q_{x}x-\omega t)+kz}\).

Figure 3: Top: the active traveling wave, shown in terms of the orientation and the local growth rate (encoded in the color of the arrows). We have chosen an angle \(\theta_{0}=\pi/4\), corresponding to a wave with maximum velocity. Bottom: to see how a perturbation in the director field can propagate, we apply a small orientational fluctuation to the middle cell. Cells are assumed to be contractile (pullers), and their corresponding flow pattern is shown by black arrows. As a result of the small orientational fluctuation, a local velocity denoted by \(\delta\mathbf{v}\) emerges. In the homeostatic picture, \(\mathbf{q}\cdot\delta\mathbf{v}\) gives a pressure difference, which in turn produces a gradient in the growth rate. Taking into account the growth-mediated torque, this eventually provides an active source for traveling waves.

Putting this information into the equations, we arrive at the following equation
for the amplitudes: \[\begin{bmatrix}\eta k^{2}-\eta^{+}q_{x}^{2}-\Gamma&iq_{x}\zeta k&-k(a+d^{+})\\ iq_{x}\zeta k&\eta^{+}k^{2}-\eta q_{x}^{2}-\Gamma&-iq_{x}(a+d^{-})\\ (\frac{k}{2}+\beta_{1}q_{x}^{2})&-iq_{x}(\frac{1}{2}+\beta_{1}k)&(\beta_{1} \frac{v_{\rm g}}{\lambda^{2}}+\frac{d}{\gamma})\end{bmatrix}\begin{bmatrix} \delta\tilde{v}_{x}\\ \delta\tilde{v}_{z}\\ \delta\tilde{\theta}\end{bmatrix}=0,\] where \(d^{\pm}=\frac{A\pm 1}{2}d\) with \(d=K(k^{2}-q^{2})\), and \(\beta_{1}=\beta\zeta\gamma^{-1}A^{2}\). The solution to the above homogeneous system of equations reveals the penetration depth \(k^{-1}\). Note that we assume the rotational dynamics of the director field to be fast, so we neglect the unsteady term of the director field in the bulk. Putting the solutions into the boundary conditions, Eqs. 8 and 9, gives: \[\delta\tilde{\theta}=-iq_{x}\tilde{b},\hskip 14.226378pt\delta \tilde{v}_{z}+\tilde{b}\frac{v_{\rm g}}{\lambda}=-i\omega\tilde{b},\] \[-iq_{x}\eta^{-}\delta\tilde{v}_{x}+k\eta^{+}\delta\tilde{v}_{z}+ \left(\gamma_{s}q_{x}^{2}+\eta^{+}\frac{v_{\rm g}}{\lambda^{2}}\right)\tilde{ b}=0,\] \[\eta k\delta\tilde{v}_{x}+i\eta q_{x}\delta\tilde{v}_{z}+2iq_{x} \frac{v_{\rm g}}{\lambda}\tilde{b}-d^{-}\delta\tilde{\theta}=0. \tag{18}\] Again, the above equations can be regarded as a set of homogeneous equations with the field amplitudes as unknowns. Requiring non-zero solutions, we obtain a relation that determines the frequency of the oscillations. Up to leading order in the wave vector \(q\), the dispersion relation reads: \[-i\omega=\frac{\gamma_{s}\lambda q_{x}^{2}}{\eta^{+}}\left(\bar{v}_{g}(\xi+ \frac{\eta^{-}}{\eta^{+}})+\frac{l_{K}}{2}(A-1)(1-\frac{\zeta}{\eta})-1\right), \tag{19}\] where \[\xi=[2\bar{\gamma} +l_{K}l_{a}((1+A)\bar{\gamma}-4(\bar{\eta}^{2}+\bar{\zeta}^{2}+ \bar{\eta}))\] \[+2\bar{\zeta}A^{3}l_{K}l_{a}l_{\beta}(1-\frac{\eta^{-}}{\eta^{+}})\] \[+4\bar{\zeta}A^{2}l_{\beta}(\bar{\zeta}-\bar{\zeta}^{2}\bar{v}_{g }l_{a}+\bar{\zeta}(1-\bar{\eta}\bar{v}_{g}l_{a}))]\] \[\times[4\bar{\gamma}-8A^{2}l_{a}l_{\beta}\bar{v}_{g}\bar{\zeta}^{2 }+2l_{K}l_{a}(-4\bar{\zeta}+(1+A)\bar{\gamma})]^{-1},\] and the dimensionless variables are defined as \(l_{K}=\ell_{K}/\lambda\), \(l_{a}=\ell_{a}/\lambda\), and \(l_{\beta}=\ell_{\beta}/\lambda\), with \(\ell_{K}=K/\gamma_{s}\), \(\ell_{a}=\gamma_{s}/a\), \(\ell_{\beta}=\beta\), and \(\bar{\gamma}=\gamma/\eta^{+}\). Before analyzing the growth-mediated instabilities, we note that in the limit \(\bar{v}_{g}=0\) a passive instability can be observed. As seen from the above relation (setting \(\bar{v}_{g}=0\)), the instability arises from a competition between surface tension and bulk elasticity: for \((A-1)(\eta-\zeta)>2\eta/l_{K}\), the boundary is unstable. It should be noted that this instability is not a general feature of passive systems. Here, the compressibility of the fluid, combined with the special choice of ordered state in which \(\mathbf{n}_{0}\) is perpendicular to the boundary, triggers the instability. Local terms proportional to \(n^{2}\) and \(n^{4}\) in the free energy, which are not considered in our model, can stabilize this passive instability. To analyze the growth-associated instabilities, we consider the case where \(\bar{v}_{g}\neq 0\).
It is seen that the elasticity, the motility, and the growth-mediated torque contribute to the instability through their corresponding length scales, denoted by \(\ell_{K}\), \(\ell_{a}\), and \(\ell_{\beta}\), respectively. Among these parameters, we investigate the phase diagram of the system in terms of the growth speed \(\bar{v}_{g}\), the particle asymmetry \(A\), and the strength of the growth-mediated torque \(\beta\). The phase diagram of the system for a particular choice of parameters is plotted in Fig. 4. It shows how the particle asymmetry parameter \(A\) competes with the growth parameter \(\bar{v}_{g}\) to produce either a smooth or a rough boundary. Similar to the scalar case, there is always a threshold growth speed \(\bar{v}_{g}\) beyond which the moving front becomes rough; growth slower than this threshold results in a flat, smooth interface. Fig. 4 shows how this threshold velocity behaves as a function of the particle asymmetry \(A\) and the growth-mediated parameter \(\beta\).

## VII Discussion

We studied the dynamical instabilities in a growing system. Our model takes into account four different length scales. As a result of friction with a substrate, the hydrodynamic interactions are screened, and \(\lambda\) denotes the corresponding screening length. The elasticity of the bulk introduces another length scale, denoted by \(\ell_{K}\). Two further length scales correspond to the motility and the growth-mediated torque, denoted by \(\ell_{a}\) and \(\ell_{\beta}\), respectively. Our analysis shows that for a scalar system, in which the orientational degrees of freedom are neglected, all dynamical behavior is limited to a boundary layer of thickness \(\lambda\) near the free interface of the system. All variations in the bulk decay rapidly, but fluctuations near the boundary can result in shape instabilities of the interface. In contrast, in the vectorial case, where the rotational degrees of freedom of the particles play an important role, nontrivial results can be observed both in the bulk and at the interface. As a result of a phenomenological growth-mediated torque, we observed an active wave that can propagate in the bulk. The speed of this active wave is roughly proportional to \((\beta\zeta a/\gamma\Gamma)q^{2}\). Increasing either the friction with the substrate or the rotational friction of the particles decreases the propagation speed. This wave can be considered as a wave pattern in the pressure field of the system. As the fluctuation in the pressure is proportional to the growth rate, the active wave can also be thought of as a propagating wave in the pattern of the growth rate. We are not aware of any experimental observation of such a wave, but it might influence the overall dynamics of growing colonies. In addition to the bulk properties, we also studied the interface instabilities in the vectorial case.

## Acknowledgements

Useful discussions with R. Golestanian and F. Jülicher, and help received from M. Setoudeh at an early stage of the work, are acknowledged.
2305.10172
Knowledge-enhanced Mixed-initiative Dialogue System for Emotional Support Conversations
Unlike empathetic dialogues, the system in emotional support conversations (ESC) is expected to not only convey empathy for comforting the help-seeker, but also proactively assist in exploring and addressing their problems during the conversation. In this work, we study the problem of mixed-initiative ESC where the user and system can both take the initiative in leading the conversation. Specifically, we conduct a novel analysis on mixed-initiative ESC systems with a tailor-designed schema that divides utterances into different types with speaker roles and initiative types. Four emotional support metrics are proposed to evaluate the mixed-initiative interactions. The analysis reveals the necessity and challenges of building mixed-initiative ESC systems. In the light of this, we propose a knowledge-enhanced mixed-initiative framework (KEMI) for ESC, which retrieves actual case knowledge from a large-scale mental health knowledge graph for generating mixed-initiative responses. Experimental results on two ESC datasets show the superiority of KEMI in both content-preserving evaluation and mixed initiative related analyses.
Yang Deng, Wenxuan Zhang, Yifei Yuan, Wai Lam
2023-05-17T12:55:52Z
http://arxiv.org/abs/2305.10172v1
# Knowledge-enhanced Mixed-initiative Dialogue System for Emotional Support Conversations ###### Abstract Unlike empathetic dialogues, the system in emotional support conversations (ESC) is expected to not only convey empathy for comforting the help-seeker, but also proactively assist in exploring and addressing their problems during the conversation. In this work, we study the problem of mixed-initiative ESC where the user and system can both take the initiative in leading the conversation. Specifically, we conduct a novel analysis on mixed-initiative ESC systems with a tailor-designed schema that divides utterances into different types with speaker roles and initiative types. Four emotional support metrics are proposed to evaluate the mixed-initiative interactions. The analysis reveals the necessity and challenges of building mixed-initiative ESC systems. In the light of this, we propose a knowledge-enhanced mixed-initiative framework (KEMI) for ESC, which retrieves actual case knowledge from a large-scale mental health knowledge graph for generating mixed-initiative responses. Experimental results on two ESC datasets show the superiority of KEMI in both content-preserving evaluation and mixed-initiative-related analyses. ## 1 Introduction As the world makes efforts to recover from Covid-19 and to plan for reconstruction, emotional support is of great importance in resolving the widespread emotional distress and increased risk of psychiatric illness associated with the pandemic Pfefferbaum and North (2020); Suh et al. (2021). A wide range of emotional support conversation (ESC) systems are emerging to provide prompt and convenient emotional support for help-seekers, including mental health support Sharma et al. (2021); Lokala et al. (2022), counseling Althoff et al. (2016); Shen et al. (2020, 2022), and motivational interviewing Perez-Rosas et al. (2016); Saha et al. (2021, 2022). Generally, the ESC system aims at reducing the user's emotional distress as well as assisting the user to identify and overcome their problem via conversations Liu et al. (2021). Mixed initiative is commonly defined as an intrinsic feature of human-AI interactions where the user and the system can both take the initiative in leading the direction of the interaction Allen et al. (1999); Kraus et al. (2020). For example, mixed-initiative conversational information-seeking (CIS) systems Aliannejadi et al. (2019); Deng et al. (2023) can proactively initiate clarification interactions for resolving ambiguity in the user query, instead of only reacting to the query. Accordingly, a mixed-initiative ESC system can proactively switch the initiative to provide an empathetic response or initiate a problem-solving discussion when appropriate. Many efforts have been made on emotion reasoning for generating empathetic responses (Shen et al., 2020; Zhang and Danescu-Niculescu-Mizil, 2020; Cheng et al., 2022; Peng et al., 2022). Another line of work focuses on identifying the dialogue acts of the utterances (Welivita and Pu, 2020; Malhotra et al., 2022; Svikhnushina et al., 2022) or predicting the next conversational strategies (Perez-Rosas et al., 2017; Liu et al., 2021; Tu et al., 2022) in ESC systems. However, the feature of mixed initiative has not been investigated in existing ESC studies. Figure 1: Examples from the EmpatheticDialogues and ESConv datasets with a similar job-loss problem.
To facilitate the analysis of mixed-initiative ESC systems, we first propose an EAFR schema to annotate the utterances into different types with speaker roles and initiative types, named _Expression_ (User-initiative), _Action_ (System-initiative), _Feedback_ (User Non-initiative), and _Reflection_ (System Non-initiative). Besides, four emotional support metrics are designed to measure the characteristics of initiative and non-initiative interactions in ESC, including _Proactivity_, _Information_, _Repetition_, and _Relaxation_. To analyze the necessity of considering mixed initiative in ESC systems, we conduct a preliminary analysis of the different interaction patterns between ESC and empathetic dialogues (ED). Firstly, the dialogue flow analysis shows that the system in ED generally plays a passive role, while the system in ESC proactively switches the initiative role during the conversation. As shown in Figure 1, the system in ED solely targets comforting the user by reflecting their feelings or echoing their situations, _i.e._, _Non-Initiative_. In contrast, ESC systems are further expected to proactively explore the user's problem by asking clarifying questions and to help the user overcome the problem by providing useful information or supportive suggestions, _i.e._, _Initiative_. Furthermore, the analysis of the conversation progress and the emotional support metrics reveal three challenges in building a mixed-initiative ESC system: _1) When_ should the system take the initiative during the conversation? _2) What_ kind of information is required for the system to initiate a subdialogue? _3) How_ could the system facilitate the mixed-initiative interactions? According to these challenges, we define the problem of mixed-initiative ESC, which includes three sub-tasks: _1) Strategy Prediction_ to determine the mixed-initiative strategy in the next turn, _2) Knowledge Selection_ to collect the necessary knowledge for the next turn, and _3) Response Generation_ to produce emotional support responses with the appropriate mixed-initiative strategy and knowledge. To tackle this problem, we propose a novel framework, named Knowledge Enhanced Mixed-Initiative model (KEMI), to build a mixed-initiative dialogue system for emotional support conversations with external domain-specific knowledge. In detail, KEMI first employs a knowledge acquisition module to acquire emotional support knowledge from a large-scale knowledge graph on mental health dialogues. Specifically, we expand the user utterance with generated commonsense knowledge as a query graph and then perform subgraph retrieval over the knowledge graph. Secondly, a response generation module conducts multi-task learning of strategy prediction and response generation in a sequence-to-sequence manner to generate mixed-initiative responses with external knowledge. The main contributions of this work are summarized as follows: (1) To measure the mixed-initiative interactions in ESC systems, we propose an innovative analysis method, including an EAFR annotation schema and corresponding emotional support metrics. (2) We propose a novel knowledge-enhanced mixed-initiative framework for ESC, which retrieves external knowledge from a mental health knowledge graph by subgraph retrieval using the query graph expanded with commonsense knowledge.
(3) Experimental results show that the mixed initiative is of great importance in ESC, and the proposed method effectively outperforms existing methods on both content-preserving evaluation and mixed initiative analyses. ## 2 Related Works **Emotional Support Conversation** Similar to fine-grained sentiment analysis (Zhang et al., 2022, 2021c, 2022a) in conversations (Li et al., 2022a; Zhang et al., 2021), early works on emotional chatting mainly investigate approaches to detecting user emotions (Li et al., 2017; Zhou et al., 2018) or incorporating emotional signals into response generation (Wei et al., 2019; Song et al., 2019). As for empathetic dialogue systems (Rashkin et al., 2019; Welivita et al., 2021), evolving from emotion-aware response generation (Lin et al., 2019; Majumder et al., 2020) and emotional style transfer (Sharma et al., 2021), more efforts have been made on emotional reasoning techniques (Li et al., 2021; Kim et al., 2021; Gao et al., 2021; Cheng et al., 2022). Some recent studies explore the utilization of external knowledge for enhancing the model capability of emotion reasoning, including commonsense knowledge graphs Zhong et al. (2021); Li et al. (2022), generative commonsense models Sabour et al. (2021), and domain-specific knowledge Shen et al. (2020, 2022). Shen et al. (2022) collectively exploit three kinds of external knowledge. Likewise, many ESC systems also leverage commonsense knowledge for response generation Tu et al. (2022); Peng et al. (2022). However, the commonsense knowledge is rather abstract and lacks detailed information, so it is less helpful for the ESC system to generate meaningful and informative responses. In this work, we employ the generative commonsense model for query expansion to retrieve actual case knowledge from an external knowledge graph. **Mixed-initiative Dialogue** Recent years have witnessed many efforts on developing mixed-initiative conversational systems for various dialogues, such as information-seeking dialogues Zamani et al. (2020); Aliannejadi et al. (2019), open-domain dialogues Wu et al. (2019); Rachna et al. (2021); Lei et al. (2022), recommendation dialogues Deng et al. (2021), and conversational question answering Deng et al. (2022). Despite the importance of mixed initiative in ESC systems, it has not been investigated in ESC. One closely related line of research recognizes the conversation strategies Liu et al. (2021); Perez-Rosas et al. (2017) or the dialogue acts Malhotra et al. (2022); Welivita and Pu (2020); Svikhnushina et al. (2022); Deng et al. (2022) of the utterances in ESC systems. However, these studies only focus on predicting the support strategies, instead of actually involving mixed-initiative interactions in ESC. In addition, measuring mixed initiative is also regarded as an essential perspective for assessing dialogue quality Vakulenko et al. (2021, 2020, 2019). Due to the high expense of human evaluation, Sekulic et al. (2022) and Zhang and Balog (2020) investigate user simulation for evaluating the mixed-initiative interactions in conversational systems. In this work, we investigate several metrics for measuring the characteristics of the mixed initiative in ESC systems. ## 3 Preliminary Analysis ### EAFR Schema & Metrics Inspired by the ConversationShape Vakulenko et al. (2021) for the analysis of mixed-initiative CIS systems, we first propose an EAFR annotation schema to study the mixed initiative in ESC systems.
The EAFR annotation schema classifies the utterances in ESC into four categories w.r.t. the role of speakers and the type of initiative, including _Expression_ (User-initiative), _Action_ (System-initiative), _Feedback_ (User Non-Initiative), and _Reflection_ (System Non-Initiative). Definitions and examples of each type are presented in Table 1. Then, each utterance \(i\) in a dialogue is annotated as a tuple \((r_{i},t_{i},\mathbf{v}_{i},e_{i})\) for analysis. \(r_{i}\in\{\text{User}(U),\text{System}(S)\}\) denotes the speaker role. \(t_{i}\in\{\text{Initiative}(I),\text{Non-Initiative}(N)\}\) denotes the initiative type. \(\mathbf{v}_{i}\in\{0,1\}^{|V|}\) denotes the one-hot vocabulary embeddings. \(e_{i}\in[1,5]\) denotes the level of emotion intensity1. We further design four emotional support metrics for investigating patterns of mixed initiative in ESC systems as follows: Footnote 1: A decrease in the intensity reflects emotion improvement Liu et al. (2021). * **Proactivity**: how proactive is the system in the emotional support conversation? \[\text{Pro}=\frac{1}{\sum_{i=1}^{n}\mathcal{I}(r_{i}=S)}\sum_{i=1}^{n}\mathcal{ I}(r_{i}=S,t_{i}=I)\] (1) denotes the ratio of system-initiative interactions. * **Information**: how much information does the system contribute to the dialogue? \[\text{Inf}=\frac{\sum_{i=1}^{n}\sum_{k=1}^{|V|}\mathcal{I}(r_{i}=S,v_{ik}=1, \sum_{j=1}^{i-1}v_{jk}=0)}{\sum_{i=1}^{n}\mathcal{I}(r_{i}=S)}\] (2) represents the average number of new frequent terms2 that are introduced by the system. Footnote 2: We only consider frequent terms that appear in the dialogue more than once. A standard pre-processing pipeline is adopted: punctuation removal, tokenization, lowercasing, stopword removal, and the English Snowball stemmer. * **Repetition**: how often does the system follow up on the topic introduced by the user? \[\text{Rep}=\frac{\sum\limits_{i=1}^{n}\sum\limits_{k=1}^{|V|}\mathcal{I}(r_{i} =S,v_{ik}=1,\sum\limits_{j=1}^{i-1}v_{jk}[r_{j}=U]>0)}{\sum_{i=1}^{n} \mathcal{I}(r_{i}=S)}\] (3) represents the average number of repeated frequent terms that are introduced by the user and mentioned by the system. * **Relaxation**: how well does the system relax the emotional intensity of the user? \[\text{Rel}_{i}[r_{i}=S]=e_{<i}[r_{<i}=U]-e_{>i}[r_{>i}=U]\] (4) \[\text{Rel}=\frac{1}{\sum_{i=1}^{n}\mathcal{I}(r_{i}=S)}\sum\nolimits_{i=1}^{n} \text{Rel}_{i}[r_{i}=S] \tag{5}\] represents the change of the user's emotion intensity. \(e_{<i}[r_{<i}=U]\) and \(e_{>i}[r_{>i}=U]\) denote the emotion intensity of the first user utterance before and after the utterance \(i\), respectively. ### Analysis of Mixed Initiative in ESC To reveal the necessity of incorporating mixed initiative into ESC systems, we analyze the different interaction patterns between empathetic dialogues (ED) and emotional support conversations (ESC): (i) EmpatheticDialogues (Rashkin et al., 2019), a dataset for ED that aims to provide empathetic responses for comforting the help-seeker, and (ii) ESConv (Liu et al., 2021), a dataset for ESC that aims to not only reduce users' emotional distress, but also help them understand and overcome the issues they face.
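To make the metric definitions concrete, the following is a minimal Python sketch of Eqs. (1)-(5) operating on EAFR-annotated dialogues. The `Utterance` container and its field names are our illustrative choices, not the authors' code, and term sets are assumed to be pre-processed as described in footnote 2.

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class Utterance:
    role: str              # "U" (user) or "S" (system)
    init: str              # "I" (initiative) or "N" (non-initiative)
    terms: Set[str]        # frequent terms after the pre-processing of footnote 2
    emotion: Optional[float] = None   # emotion intensity in [1, 5], user turns only

def proactivity(dialogue):            # Eq. (1)
    sys_turns = [u for u in dialogue if u.role == "S"]
    return sum(u.init == "I" for u in sys_turns) / len(sys_turns)

def information(dialogue):            # Eq. (2): new frequent terms per system turn
    seen, counts = set(), []
    for u in dialogue:
        if u.role == "S":
            counts.append(len(u.terms - seen))
        seen |= u.terms
    return sum(counts) / len(counts)

def repetition(dialogue):             # Eq. (3): user-introduced terms echoed by the system
    user_terms, counts = set(), []
    for u in dialogue:
        if u.role == "S":
            counts.append(len(u.terms & user_terms))
        else:
            user_terms |= u.terms
    return sum(counts) / len(counts)

def relaxation(dialogue):             # Eqs. (4)-(5): emotion change around system turns
    rels = []
    for i, u in enumerate(dialogue):
        if u.role != "S":
            continue
        before = next((x.emotion for x in reversed(dialogue[:i]) if x.role == "U"), None)
        after = next((x.emotion for x in dialogue[i + 1:] if x.role == "U"), None)
        if before is not None and after is not None:
            rels.append(before - after)
    return sum(rels) / len(rels) if rels else 0.0
```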
Due to the space limitation, we present the detailed analysis in Appendix A, including (i) the visualization of the dialogue flow, which indicates the initiative patterns between the user and the system (A.2); (ii) the visualization of the conversation progress, which shows the phased change of the user's emotion intensity (A.3); and (iii) the evaluation of the emotional support metrics, which quantifies different aspects of the mixed-initiative interactions (A.4). ### Challenges of Mixed Initiative in ESC The preliminary analysis reveals the importance of mixed-initiative interactions in ESC systems. Meanwhile, it is also challenging to balance the mixed-initiative interactions, as overdoing one mode of interaction or taking the initiative inappropriately can be harmful to the emotional support conversation. Based on these analyses, we identify three key challenges in building a mixed-initiative ESC system: _1) When_ **should the system take the initiative during the conversation?** The analysis of the conversation progress (A.3) shows that taking the initiative at different phases of the conversation may have different impacts on the user's emotional state. In particular, support strategies or dialogue acts are of great importance to conversational effectiveness in ESC (Zhang and Danescu-Niculescu-Mizil, 2020; Tu et al., 2022). Therefore, it is a crucial capability for the ESC system to determine whether to take the initiative at each conversation turn. _2) What_ **kind of information is required for the system to initiate a subdialogue?** The analysis of the mixed-initiative metrics (A.4) shows that the initiative system utterances are much more informative than the non-initiative ones. Therefore, it is of great importance to discover the necessary information and knowledge for making an appropriate mixed-initiative interaction. Researchers in communication and sociology (Burleson, 2003) state that the helpfulness of a supportive statement is contingent on the following knowledge: (i) _Affective_ Knowledge, the emotion recognition of the user's affective state; (ii) _Causal_ Knowledge, the emotional reasoning of stressors that cause the current affective state of the user; and (iii) _Cognitive_ Knowledge, the cognitive analysis of coping processes to solve the core problematic situation that the user faces. _3) How_ **could the system facilitate the mixed-initiative interactions?** Since the system in ESC ultimately provides a natural language utterance to interact with the user, this challenge can be defined as a function that generates an initiative-aware utterance based on the given information. \begin{table} \begin{tabular}{p{42.7pt} p{42.7pt} p{42.7pt} p{113.8pt} p{113.8pt}} \hline \hline Role & Type & EAFR & Definition & Sample Utterances \\ \hline User & Initiative & Expression & The user describes details or expresses feelings about the situation. & My school was closed due to the pandemic. I feel so frustrated. \\ \hline System & Initiative & Action & The system requests information related to the problem or provides suggestions and information to help the user solve the problem. & How were your feelings at that time? Deep breaths can help people calm down. Some research has found that... \\ \hline User & Non-Initiative & Feedback & The user responds to the system’s request or delivers opinions on the system’s statement. & Okay, this makes me feel better. No, I haven’t. \\ \hline System & Non-Initiative & Reflection & The system conveys empathy for the user’s emotion or shares similar experiences and feelings to comfort the user. & I understand you. I would also have been really frustrated if that happened to me. I’m sorry to hear about that. \\ \hline \hline \end{tabular} \end{table} Table 1: Definitions and examples for the EAFR schema, reflecting patterns of initiative switching between dialogue participants in emotional support conversations. ### Problem Definition Similar to the ED problem, the ESC problem is typically defined as: given the dialogue context
\(\mathcal{C}=\{u_{1},u_{2},...,u_{t}\}\) and the description of the user's problematic situation \(s\), the goal is to estimate a function \(p(r|\mathcal{C},s)\) that generates the target response \(r\). In the light of the challenges discussed in Section 3.3, we further define the mixed-initiative emotional support conversation problem with the following three sub-tasks, corresponding to the above three challenges: 1) _Strategy Prediction_ predicts the support strategy \(y\), which can be regarded as the fine-grained initiative. 2) _Knowledge Selection_ selects appropriate knowledge \(k\) from the available resources \(\mathcal{K}\). 3) _Response Generation_ generates the mixed-initiative response \(r\) based on the predicted strategy and the selected knowledge. ## 4 Method Motivated by the analysis in the last section, we propose the KEMI framework, which aims to generate mixed-initiative responses with external knowledge. As illustrated in Figure 2, KEMI contains two parts: 1) Knowledge Acquisition, and 2) Mixed-initiative Response Generation. ### Knowledge Acquisition Commonsense knowledge is widely adopted to enhance emotion reasoning in ESC systems; however, it is usually succinct and lacks specific context information. We propose an approach to retrieve relevant actual cases of ESC from a large-scale mental health knowledge graph, namely HEAL (Welivita and Pu, 2022), to compensate for the deficiency of commonsense knowledge. #### 4.1.1 Query Expansion with COMET Given the user utterance \(u_{t}\) at the current turn \(t\), a straightforward knowledge acquisition approach is to use \(u_{t}\) as the query to directly retrieve actual cases from the HEAL KG. However, there is limited information provided by the user utterance, which may hinder the preciseness and explainability of the knowledge retrieval. To this end, we exploit COMET (Bosselut et al., 2019), a commonsense knowledge generator, to expand the query with multi-perspective additional information regarding the user's affective and cognitive state. Specifically, the current user utterance \(u_{t}\) is fed into COMET with five special relation tokens, \(p\in\{\texttt{[xReact]},\texttt{[xIntent]},\texttt{[xWant]},\texttt{[xNeed]},\texttt{[xEffect]}\}\), to generate the commonsense inference \(c_{p}\) for each relation \(p\), _i.e._, \(c_{p}=\texttt{COMET}(p,u_{t})\). Definitions of each commonsense relation can be found in Appendix B. Then the original user utterance \(u_{t}\) can be expanded with the commonsense knowledge \(\{c_{p}\}\). #### 4.1.2 Query Graph Construction The actual case in HEAL (Welivita and Pu, 2022) is represented as a graph structure.
Specifically, we consider 4 out of 5 types of nodes in HEAL that are related to response generation: _1) expectation_: questions commonly asked by the user in an emotional support conversation; _2) affective state_: emotional states associated with each speaker; _3) stressor_: the cause of emotional issues; and _4) response_: frequent types of responses by the system to address the user's problems. Edges are constructed to build the connections between nodes according to actual emotional support conversations. More details of HEAL can be found in Appendix C. In accordance with the HEAL knowledge graph, the relation [xReact], which reveals the user's emotional state, provides the same information as nodes in HEAL with the type of _affective state_. The relation [xIntent], which reveals the causes of the user's current situation, also shares the same information as nodes in HEAL with the type of _stressor_. The remaining relations, including [xWant], [xNeed], and [xEffect], which reveal the user's cognitive state, are relevant to the _responses_ for addressing the user's problem. Therefore, the expanded query \(\hat{u}_{t}=\{u_{t},\{c_{p}\}\}\) can be represented as a graph with abstractive entity descriptions, as shown in Figure 2. #### 4.1.3 Subgraph Retrieval To avoid enumerating all the subgraphs in HEAL, which is a densely-connected graph (over 2 million subgraphs), we propose a subgraph retrieval approach to select the top relevant subgraphs to form a candidate set. We first retrieve the top-\(K\) entities relevant to each abstractive entity description in the expanded query graph \(\hat{u}_{t}\). Specifically, we use sentence-BERT (Reimers and Gurevych, 2019) as an embedding-based retriever \(f_{r}(\cdot)\) for modeling the semantic similarity between the entities in the query and HEAL. With the retrieved top-\(K\) entities for each node type, we merge them based on the edge connections in the knowledge graph to induce candidate subgraphs. Finally, we adopt the top-\(N\) candidate subgraphs as the retrieved knowledge \(\mathcal{K}\). The subgraphs are ranked by the sum of the similarity scores of each node in the subgraph \(E=\{e_{\text{exp}},e_{\text{aff}},e_{\text{str}},e_{\text{resp}}\}\): \[\begin{split}\text{Sim}(\hat{u}_{t},E)=& f_{r}(u_{t},e_{ \text{exp}})+f_{r}(c_{\text{xReact}},e_{\text{aff}})+f_{r}(c_{\text{xIntent}},e_{\text{ str}})\\ &+f_{r}([c_{\text{xWant}},c_{\text{xNeed}},c_{\text{xEffect}}],e_{\text{ resp}}).\end{split} \tag{6}\]
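The ranking in Eq. (6) can be sketched as follows. This is an illustrative Python outline, not the authors' implementation: `embed` is a toy stand-in for a Sentence-BERT encoder (in practice one would call, e.g., `SentenceTransformer.encode`), and the dictionary keys for relations and node types are our own naming.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a Sentence-BERT encoder; returns a deterministic
    unit-norm vector so that the sketch is runnable on its own."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def f_r(query_text: str, entity_text: str) -> float:
    """Embedding-based relevance: cosine similarity of the two descriptions."""
    return float(embed(query_text) @ embed(entity_text))

def sim(u_t: str, comet: dict, subgraph: dict) -> float:
    """Sum of node-wise similarities, following the structure of Eq. (6).
    `comet` maps COMET relations to generated inferences; `subgraph` maps
    HEAL node types to entity descriptions."""
    return (f_r(u_t, subgraph["expectation"])
            + f_r(comet["xReact"], subgraph["affective"])
            + f_r(comet["xIntent"], subgraph["stressor"])
            + f_r(" ".join(comet[r] for r in ("xWant", "xNeed", "xEffect")),
                  subgraph["response"]))

def top_n_subgraphs(u_t, comet, candidates, n=1):
    return sorted(candidates, key=lambda g: sim(u_t, comet, g), reverse=True)[:n]
```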
### Mixed-initiative Response Generation Given the dialogue context \(\mathcal{C}\) and the retrieved knowledge \(\mathcal{K}\), we first encode them into distributed representations with contextualized encoders. Specifically, we add special tokens to differentiate the roles of the user and the system as well as the different types of knowledge: <context> = [situ], \(s\), \([\text{usr}],u_{1}\), [sys], \(u_{2},...\) <know.> = [xR.], \(c_{\text{xReact}}\), [xI.],..., [Aff.], \(e_{\text{aff}},...\) Pretrained language models (PLMs), _e.g._, GPT2 Radford et al. (2019), have shown superior capability of generating high-quality responses in many dialogue systems, especially those PLMs pre-trained on dialogue corpora, _e.g._, BlenderBot Roller et al. (2021). To leverage the advantages of these generative PLMs, we reformulate the mixed-initiative emotional support conversation problem as a Seq2Seq problem, which linearizes the input and output as sequences of tokens as follows: \[X=\texttt{[CLS]},\texttt{<context>},\texttt{[know.]},\texttt{<know.>}_{i},...\] \[Y=\texttt{[strategy]},y,\texttt{[response]},r\] where \(X\) and \(Y\) are the linearized input and output sequences for Seq2Seq learning. The model is then trained to minimize the negative log-likelihood: \[\mathcal{L}=-\frac{1}{L}\sum\nolimits_{l=1}^{L}\log P(Y_{l}|Y_{<l};X). \tag{7}\] Figure 2: Overview of KEMI. Each expanded query is represented as a graph to retrieve subgraphs from HEAL, and each subgraph in HEAL can be regarded as an actual case of emotional support conversations. ## 5 Experiment ### Experimental Setups **Datasets** We adopt the following two datasets for the evaluation: (i) ESConv Liu et al. (2021), an emotional support conversation dataset, contains 1,300 dialogues with 38,365 utterances and 8 types of support strategies. We adopt the original train/dev/test split; and (ii) MI Perez-Rosas et al. (2016), a motivational interviewing dataset, contains 284 counseling sessions with 22,719 utterances and 10 types of behavior strategies. We randomly split the dataset for train/dev/test by 8:1:1. Footnote 3: Since there is no speaker label in the MI dataset, it is only adopted for response generation evaluation, while the analysis of mixed initiative is not applicable. **Evaluation Metrics** As for automatic evaluation, we adopt Macro F1 as the strategy prediction metric. Following previous studies Liu et al. (2021); Tu et al. (2022), Perplexity (PPL), BLEU-\(n\) (B-\(n\)), and ROUGE-L (R-L) are included for the evaluation of response generation. **Baselines** We provide extensive comparisons with both non-PLM and PLM-based methods, including three Transformer-based methods (Transformer Vaswani et al. (2017), MoEL Lin et al. (2019), and MIME Majumder et al. (2020)) and four BlenderBot-based methods (BlenderBot Roller et al. (2021), BlenderBot-Joint Liu et al. (2021), GLHG Peng et al. (2022)4, and MISC Tu et al. (2022)5). Details about these baselines can be found in Appendix D. Footnote 4: Since GLHG leverages the problem type as an additional label, we also report the ablation result for a fair comparison, _i.e._, GLHG w/o \(\mathcal{L}_{2}\) Loss. Footnote 5: Due to a different train/test split adopted in Tu et al. (2022), we reproduce the performance of MISC on the standard split of ESConv Liu et al. (2021). **Implementation Details** KEMI is based on the BlenderBot model Roller et al. (2021). Following previous BlenderBot-based models Liu et al. (2021); Peng et al. (2022); Tu et al. (2022), we adopt the small version6 of BlenderBot in the experiments. The learning rate and the warmup steps are set to 3e-5 and 100, respectively. The max input sequence length and the max target sequence length are 160 and 40, respectively. We retrieve the top-\(1\) subgraph from HEAL as the knowledge. The number of training epochs is set to 5 and the best model is saved according to the PPL score on the dev set.7 Footnote 7: [https://github.com/dengyang17/KEMI](https://github.com/dengyang17/KEMI)
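Before turning to the results, the linearization of Section 4.2 can be made concrete with a short sketch. The helper below builds the \(X\) and \(Y\) token sequences; the exact spellings of the special tokens follow the notation above but are assumptions not verified against the released code.

```python
def build_seq2seq_pair(situation, context, knowledge, strategy, response):
    """Build the linearized (X, Y) pair of Section 4.2. `context` is a list of
    (speaker, utterance) tuples; `knowledge` maps field names (COMET relations,
    HEAL node types) to text."""
    x = ["[CLS]", "[situ]", situation]
    for speaker, utt in context:
        x += ["[usr]" if speaker == "user" else "[sys]", utt]
    x.append("[know.]")
    for field, text in knowledge.items():
        x += [f"[{field}]", text]
    y = ["[strategy]", strategy, "[response]", response]
    return " ".join(x), " ".join(y)

# Illustrative usage with made-up values
X, Y = build_seq2seq_pair(
    situation="I lost my job last week.",
    context=[("user", "I feel hopeless about finding work.")],
    knowledge={"xReact": "sad", "Aff.": "hopelessness"},
    strategy="Providing Suggestions",
    response="Have you considered reaching out to a career counselor?",
)
print(X)
print(Y)
```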
GLHG and MISC effectively exploit the commonsense knowledge to improve the performance of response generation. Besides, joint learning with the strategy prediction task is beneficial to the performance of response generation. Finally, KEMI substantially outperforms other methods by a noticeable margin. This indicates that the domain-specific actual case knowledge from HEAL can alleviate the reliance on large-scale PLMs. Compared with commonsense knowledge, the knowledge from HEAL is much more effective in predicting support strategies, as this relevant knowledge can serve as a real example for guiding the system to respond. ### Human Evaluation Following previous studies Liu et al. (2021); Peng et al. (2022), we conduct human evaluation to compare the generated responses from two given models on five aspects: 1) _Fluency_: which model's response is more fluent? 2) _Identification_: which model's response is more skillful in identifying the user's problem? 3) _Comforting_: which model's response is better at comforting the user? 4) _Suggestion_: which model can give more helpful and informative suggestions? 5) _Overall_: which model's response is generally better? We randomly sample 100 dialogues from ESConv and three annotators are asked to determine the _Win/Tie/Lose_ for each comparison. Table 4 presents the human evaluation results. We compare the generated responses from KEMI with those produced by the other two baselines, BlenderBot-Joint and MISC. The results show that KEMI achieves remarkable improvement in initiative interactions, including _Identification_ and _Suggestion_. Consequently, KEMI can generate more satisfactory and helpful responses than other methods, according to the _Overall_ metric. ### Ablation Study In order to investigate the effect of each sub-task and each type of knowledge on the final performance, we report the experimental results of the ablation study in Table 5. In general, both the strategy prediction and the knowledge selection tasks, as well as all types of knowledge, contribute to the final performance to some degree. There are several notable observations in detailed comparisons: (i) The knowledge from HEAL is the key to the improvement on the strategy prediction task, since the actual case knowledge can provide good guidance for the next support strategy. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{vs.} & \multicolumn{3}{c}{BlenderBot-Joint} & \multicolumn{3}{c}{MISC} \\ \cline{2-7} & Win & Tie & Lose & Win & Tie & Lose \\ \hline Flu. & 26\% & **51\%** & 23\% & 37\% & **47\%** & 16\% \\ Ide. & **50\%** & 38\% & 12\% & **46\%** & 30\% & 24\% \\ Com. & **46\%** & 40\% & 14\% & **44\%** & 30\% & 26\% \\ Sug. & **52\%** & 22\% & 26\% & **52\%** & 16\% & 28\% \\ Ove. & **62\%** & 20\% & 18\% & **70\%** & 12\% & 18\% \\ \hline \hline \end{tabular} \end{table} Table 4: Human evaluation results (KEMI vs.). \begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & F1\(\uparrow\) & PPL\(\downarrow\) & B-2\(\uparrow\) & B-4\(\uparrow\) & R-L\(\uparrow\) \\ \hline Transformer Vaswani et al. (2017) & - & 81.55 & 5.66 & 1.31 & 14.68 \\ MoEL Lin et al. (2019) & - & 6.293 & 5.02 & 1.14 & 14.21 \\ MIME* Majumder et al. (2020) & - & 43\% & 4.82 & 1.03 & 14.83 \\ BlenderBot* Roller et al. (2021) & - & 16.23 & 5.45 & - & 15.43 \\ GLHG* Peng et al. (2022) & - & **16.7** & 7.57 & 2.13 & 16.37 \\ GLHG w/o \(\mathcal{L}_{2}\) Loss* Peng et al.
(2022) & - & - & 6.15 & 1.75 & 15.87 \\ BlenderBot-Joint Liu et al. (2021) & 19.23 & 16.15 & 5.52 & 1.29 & 15.51 \\ MISC Tu et al. (2022) & 19.89 & 16.08 & 7.62 & **2.19** & 16.40 \\ \hline KEMI & **24.66** & 15.92 & **8.31\({}^{\dagger}\)** & **2.51\({}^{\dagger}\)** & **17.05\({}^{\dagger}\)** \\ \hline \hline \end{tabular} \end{table} Table 2: Experimental results on ESConv. \({}^{*}\) and \({}^{**}\) indicate the results reported in Peng et al. (2022) and Liu et al. (2021), respectively. Other results are reproduced. \({}^{\dagger}\) indicates statistically significant improvement (\(p\)<0.05) over the best baseline. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & F1\(\uparrow\) & PPL\(\downarrow\) & B-2\(\uparrow\) & B-4\(\uparrow\) & R-L\(\uparrow\) \\ \hline Transformer Vaswani et al. (2017) & - & 65.52 & 6.23 & 1.52 & 15.04 \\ BlenderBot Roller et al. (2021) & - & 16.06 & 6.57 & 1.66 & 15.64 \\ BlenderBot-Joint Liu et al. (2021) & 22.66 & 14.74 & 7.28 & 2.18 & 16.41 \\ MISC Tu et al. (2022) & 22.68 & 14.33 & 7.75 & 2.30 & 17.11 \\ \hline KEMI & **25.91** & **13.84\({}^{\dagger}\)** & **8.52\({}^{\dagger}\)** & **2.72\({}^{\dagger}\)** & **18.00\({}^{\dagger}\)** \\ \hline \hline \end{tabular} \end{table} Table 3: Experimental results on MI Counseling. (ii) Different from discarding the actual case knowledge (w/o HEAL), discarding the commonsense knowledge (w/o COMET) brings a positive effect on the fluency metric (PPL), as the commonsense knowledge is not a natural sentence. However, COMET contributes more to the content-preserving metrics (BLEU and ROUGE) than HEAL does, indicating that the succinct commonsense knowledge can be more precise. (iii) Among the three types of knowledge, cognitive knowledge is the most effective one for both the strategy prediction and response generation tasks. (iv) Using the Oracle strategy and Oracle knowledge substantially improves the overall performance, which demonstrates the effectiveness of considering these two sub-tasks in ESC systems. The performance gap between KEMI and Oracle also shows that knowledge selection is very challenging and there is still much room for improvement. ### Analysis of Mixed Initiative We conduct the mixed-initiative analysis introduced in Section 3.2 over the proposed KEMI method and other baselines. Since the calculation of the Relaxation metric in Eq. (4) requires the emotion intensity score of the user feedback, we adopt a model-based user simulator for automatic evaluation, which is described in Appendix A.1.3. #### 5.5.1 Emotional Support Metrics Table 6 summarizes the results of the four emotional support metrics for the generated responses from four BlenderBot-based methods and for the reference responses in the test set. Note that, for a fair comparison, we also adopt Eq. (9) to calculate the Relaxation metric for the reference responses in the test set (_i.e._, REF). It can be observed that: (i) As for the Proactivity metric, BlenderBot tends to act passively in ESC, while BlenderBot-Joint and MISC take the initiative excessively after simply taking the support strategies into account; KEMI effectively balances the proportion of initiative and non-initiative interactions in ESC. (ii) With the actual case knowledge, KEMI can generate much more informative responses than the other baselines w.r.t. the Information metric. However, there is still a large gap relative to the reference responses. (iii) Indeed, it is relatively easy to generate responses that repeat previous information, as reflected in the Repetition metric.
(iv) KEMI outperforms the other baselines in terms of the Relaxation metric on the initiative interactions by a large margin, which shows the superiority of KEMI in taking the initiative to help the user solve emotional problems. \begin{table} \begin{tabular}{l|c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Proactivity} & \multicolumn{3}{c}{Information} & \multicolumn{3}{c}{Repetition} & \multicolumn{3}{c}{Relaxation} \\ \cline{2-12} & Init. & Non. & Init. & Non. & All & Init. & Non. & All & Init. & Non. & All \\ \hline BB & 0.36 & **0.64** & 1.79 & 1.32 & 1.48 & 1.00 & 1.11 & 1.07 & -0.01 & 0.11 & 0.07 \\ BB-J & **0.68** & 0.32 & 1.89 & 1.18 & 1.66 & 1.18 & **1.09** & **1.15** & 0.01 & 0.07 & 0.03 \\ MISC & 0.61 & 0.39 & 1.91 & 1.25 & 1.65 & 1.16 & **1.12** & 1.14 & 0.00 & 0.04 & 0.02 \\ KEMI & 0.45 & 0.55 & **2.04** & **1.40** & **1.68** & **1.18** & 1.09 & 1.13 & **0.09** & **0.13** & **0.11** \\ \hline REF & 0.51 & 0.49 & 3.09 & 3.01 & 3.05 & 1.12 & 1.06 & 1.09 & 0.10 & 0.13 & 0.11 \\ \hline \hline \end{tabular} \end{table} Table 6: Emotional support metrics. BB and BB-J denote BlenderBot and BlenderBot-Joint. \begin{table} \begin{tabular}{l l c c c c} \hline \hline Strategy & Knowledge & F1\(\uparrow\) & PPL\(\downarrow\) & B-2\(\uparrow\) & R-L\(\uparrow\) \\ \hline - & - & - & 16.23 & 5.45 & 15.43 \\ - & KEMI & - & 16.16 & 6.54 & 16.21 \\ \hline Joint & KEMI & 24.66 & 15.92 & 8.31 & 17.05 \\ Joint & w/o COMET & 23.26 & 15.74 & 7.60 & 16.47 \\ Joint & w/o HEAL & 19.99 & 16.08 & 7.98 & 16.92 \\ Joint & w/o _Affective_ & 22.68 & 16.08 & 8.22 & 16.98 \\ Joint & w/o _Causal_ & 23.14 & 15.94 & 8.16 & 16.92 \\ Joint & w/o _Cognitive_ & 20.24 & 16.22 & 7.62 & 16.64 \\ Joint & Oracle & **32.38** & 12.79 & 18.45 & 28.01 \\ \hline Oracle & KEMI & - & 15.92 & 9.75 & 18.81 \\ Oracle & Oracle & - & **12.78** & **19.11** & **28.88** \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study. Oracle knowledge is obtained by the lexical match between the reference response and the candidate knowledge from HEAL. #### 5.5.2 Conversation Progress We conduct the conversation progress analysis by dividing the whole conversation into five equal-length intervals and observing the change of the users' emotion intensity levels at each conversation phase. As shown in Figure 3, we observe that BlenderBot and MISC have a clear inclination to take non-initiative and initiative interactions, respectively, in all stages of the conversation. Our KEMI method follows a progression more similar to the reference conversations, with a balanced interaction pattern. More importantly, the initiative responses generated by KEMI have a more positive impact on the user's emotional intensity than those of other baselines, especially in the last two stages of the conversation. This result indicates that KEMI effectively takes the initiative to generate responses that provide suggestions or information, relaxing the help-seekers by addressing their emotional problems. ### Case Study To intuitively show the superiority of KEMI over other baselines, Figure 4 presents a case study of generated responses with the scores of the mixed-initiative metrics. In the reference response, the system takes the initiative to provide useful sug
Among the generated responses, BlenderBot and BlenderBot-Joint decide to convey empathy to the user by paraphrasing the previous information, while MISC and KEMI proactively initiate a discussion about potential solutions to the problem. Based on the Relaxation metric, two initiative responses can better comfort the emotional intensity of the user than two non-initiative responses. Furthermore, KEMI can generate more informative and specific responses with actual case knowledge. ## 6 Conclusions In this paper, we design a novel analysis framework for analyzing the feature of mixed initiative in ESC. The analysis demonstrates the necessity and importance of mixed-initiative interactions in ESC systems. To this end, we propose the KEMI framework to tackle the problem of mixed-initiative ESC. KEMI first retrieves actual case knowledge from a large-scale mental health knowledge graph with query expansion and subgraph retrieval. Then KEMI performs multi-task learning of strategy prediction and response generation with the retrieved knowledge. Extensive experiments show that KEMI outperforms existing methods on both automatic and human evaluation. The analysis also shows the effectiveness of incorporating actual case knowledge and the superiority of KEMI on the mixed-initiative interactions. ## Limitations In this section, we analyze the limitations of this work: * As it is the first attempt to analyze the mixed-initiative interactions in emotional support conversations, the proposed metrics can be further improved for more robust evaluation. * Since the knowledge retrieval is not the focus of this work, we did not spend much space on discussing the choice of different retrieval methods. As shown in Table 5, there is still much room for improving the knowledge retrieval from a large scale knowledge graph. It is also worth studying more efficient retrieval methods for retrieving knowledge from a densely connected KG. * The proposed method requires an additional mental health related knowledge graph constructed by experts or knowledgeable workers, which is probably difficult to obtain in some applications. However, different from other knowledge-intensive tasks that can be benefited from open-domain knowledge (_e.g.,_ Wikipedia), it attaches Figure 4: Case study. **Bold** terms denote new (red) and _repeated_ (blue) frequent terms respectively. Figure 3: The distribution of system utterance initiative (the stack plot) and the user’s emotion intensity change (the bar chart) at different conversation progress. Higher scores of the emotion intensity change represent better emotion improvement of users. great importance in the professionals of the knowledge for building a helpful and safe ESC system. ## Ethical Considerations The datasets adopted are publicly available and widely studied benchmarks collected from professionals or well-trained annotators. All personally identifiable and sensitive information, _e.g._, user and platform identifiers, in these dataset has been filtered out. We do not make any treatment recommendations or diagnostic claims. Compared with existing methods for emotional support conversations, the proposed method can be regarded as one step further to a more safer ESC system. The proposed method retrieves knowledge from a well-established mental health knowledge graph, which can be maintained by filtering out harmful information when applying into applications. 
Then the knowledge-enhanced approach can alleviate the randomness during the response generation and provide the guidance towards more positive responses. In order to prevent the happening of unsafe cases, the analysis of emotion intensity prediction can also serve as an alarming mechanism that calls for handoffs to an actual psychologist.
2310.13815
Quantum physics cannot be captured by classical linear hidden variable theories even in the absence of entanglement
Recent experimental tests of Bell inequalities confirm that entangled quantum systems cannot be described by local classical theories but still do not answer the question whether or not quantum systems could in principle be modelled by linear hidden variable theories. In this paper, we study the quantum trajectories of a single qubit that experiences a sequence of repeated generalised measurements. It is shown that this system, which constitutes a Hidden Quantum Markov Model, is more likely to produce complex time correlations than any classical Hidden Markov Model with two output symbols. From this, we conclude that quantum physics cannot be replaced by linear hidden variable theories. Indeed, it has already been recognised that not only entanglement but also non-classical time correlations of quantum systems with quantum feedback are a valuable resource for quantum technology applications.
Kawthar Al Rasbi, Lewis A. Clark, Almut Beige
2023-10-20T21:06:15Z
http://arxiv.org/abs/2310.13815v1
# Quantum physics cannot be captured by classical linear hidden variable theories even in the absence of entanglement ###### Abstract Recent experimental tests of Bell inequalities confirm that _entangled_ quantum systems cannot be described by local classical theories but still do not answer the question whether or not quantum systems could in principle be modelled by linear hidden variable theories. In this paper, we study the quantum trajectories of a single qubit that experiences a sequence of repeated generalised measurements. It is shown that this system, which constitutes a Hidden Quantum Markov Model, is more likely to produce complex time correlations than any classical Hidden Markov Model with two output symbols. From this, we conclude that quantum physics cannot be replaced by linear hidden variable theories. Indeed, it has already been recognised that not only entanglement but also non-classical time correlations of quantum systems with quantum feedback are a valuable resource for quantum technology applications. ## I Introduction Entanglement, as defined by Erwin Schrödinger [1], is considered by many the "essence of quantum physics" [2] and the origin of "spooky action at a distance" [3], and therefore receives a lot of attention, especially in recent research into quantum information processing. Many quantum technology applications, from quantum cryptography to quantum metrology and quantum computing, require entanglement as a resource. It is therefore not surprising that entanglement is also often at the centre of a debate which tries to draw a clear line between quantum and classical physics. Already in 1935, scientists like Einstein, Podolsky and Rosen asked the question of whether or not the dynamics of quantum systems could be described by classical hidden variable theories [4]. They were hoping that quantum physics was simply a way of dealing with a lack of knowledge rather than indicating the need for an alternative, non-deterministic approach to physics. In 1964, Bell constructed an inequality that could be violated by entangled quantum systems, but not by performing measurements on two individual classical particles with local hidden properties [5]. Suddenly, the question of physics being either quantum or classical was no longer just a matter of interpretation. It was now possible to verify and quantitatively measure the strangeness of quantum systems. Over the years, several tests of Bell inequalities [6] have been performed [7; 8; 9; 10] and a strong case has been made for the reality of quantum physics and the existence of entanglement. Eventually, in 2015, additional loopholes of previous Bell tests were closed [11; 12; 13] and quantum physics became widely accepted not only as a highly efficient but also as a necessary approach. Although it is still possible to describe quantum systems by linear hidden variable theories, it was concluded that such classical theories would have to be at least non-local. However, entanglement is not the only strange property of quantum systems. Another characteristic that quantum systems do not share with classical systems is that their measurement outcomes can be discrete even when their internal states are continuous. For example, weak light arriving at a detector either causes a click or no click. Hence performing a measurement on a quantum system must alter its state, thereby causing a quantum jump to occur [15].
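To make the role of repeated generalised measurements concrete, the following Python sketch simulates a single qubit subjected to a sequence of such measurements, recording the two output symbols. The particular Kraus pair is an illustrative choice of ours (any pair satisfying \(K_0^\dagger K_0+K_1^\dagger K_1=\mathbb{1}\) defines a valid measurement) and is not the specific measurement studied in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Kraus operators of a generalised (weak) measurement on a qubit; the pair
# below is an illustrative assumption, chosen only to satisfy the
# completeness relation K0^dag K0 + K1^dag K1 = identity.
theta = 0.3
K0 = np.array([[np.cos(theta), 0.0], [0.0, 1.0]], dtype=complex)
K1 = np.array([[np.sin(theta), 0.0], [0.0, 0.0]], dtype=complex)
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # initial qubit state

symbols = []
for _ in range(20):
    p0 = np.linalg.norm(K0 @ psi) ** 2        # Born probability of outcome 0
    k = 0 if rng.random() < p0 else 1
    psi = (K0 if k == 0 else K1) @ psi
    psi /= np.linalg.norm(psi)                # state update: the "quantum jump"
    symbols.append(k)
print("output sequence:", symbols)
```

Because each outcome updates the state, the resulting binary output sequence carries time correlations that depend on the whole measurement history, which is exactly the Hidden Quantum Markov Model setting described in the abstract.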
Without quantum jumps, repeating a measurement on a quantum system might not yield the same outcome, which would mean that the outcome of the previous measurement was meaningless. This is in contrast to classical physics, where a measurement reveals information but does not cause a physical system to change. In 1975, Dehmelt pointed out that driving a single three-level atom with appropriate laser fields can lead to macroscopic quantum jumps [16]. These are random sequences of long periods of constant fluorescence interrupted by long periods of no fluorescence. The light and dark periods of a blinking atom occur on macroscopic time scales and are a manifestation of very persistent time correlations. Subsequently, the existence of macroscopic quantum jumps has been experimentally verified by several groups [17; 18; 19] and theoretical models have been developed to accurately predict the statistical properties of their trajectories [20; 21; 22; 23]. Since there is a large variety of classical stochastic processes, it is hard to argue that the time correlations in these quantum experiments were non-classical. Quantum jump experiments were mainly seen as an interesting way of illustrating the stochastic nature of quantum physics [24]. Eventually, it was pointed out that macroscopic light and dark periods could be used to herald the generation of entangled atomic states [25; 26]. To capture the non-classicality of time correlations, temporal Bell inequalities [27; 28; 29; 30; 31; 32; 33] were introduced, which
2304.06439
Impact of noise on the instability of spiral waves in stochastic 2D mathematical models of human atrial fibrillation
Sustained spiral waves, also known as rotors, are pivotal mechanisms in persistent atrial fibrillation (AF). Stochasticity is inevitable in nonlinear biological systems such as the heart; however, it is unclear how noise affects the instability of spiral waves in human AF. This study presents a stochastic two-dimensional mathematical model of human AF and explores how Gaussian white noise affects the instability of spiral waves. In homogeneous tissue models, Gaussian white noise may lead to spiral-wave meandering and wavefront break-up. As the noise intensity increases, the spatial dispersion of phase singularity (PS) points increases. This finding indicates the potential AF-protective effects of cardiac system stochasticity by destabilizing the rotors. By contrast, Gaussian white noise is unlikely to affect the spiral-wave instability in the presence of localized scar or fibrosis regions. The PS points are located at the boundary or inside the scar/fibrosis regions. Localized scar or fibrosis may play a pivotal role in stabilizing spiral waves regardless of the presence of noise. This study suggests that fibrosis and scars are essential for stabilizing the rotors in stochastic mathematical models of AF. Further patient-derived realistic modeling studies are required to confirm the role of scar/fibrosis in AF pathophysiology.
Euijun Song
2023-04-13T12:17:07Z
http://arxiv.org/abs/2304.06439v3
Impact of noise on the instability of spiral waves in stochastic 2D mathematical models of human atrial fibrillation ###### Abstract Sustained spiral waves, also known as rotors, are pivotal mechanisms in persistent atrial fibrillation (AF). Stochasticity is inevitable in nonlinear biological systems such as the heart; however, it is unclear how noise affects the instability of spiral waves in human AF. We present a stochastic two-dimensional mathematical model of human AF and explore how Gaussian white noise affects the instability of spiral waves. In homogeneous tissue models, Gaussian white noise may lead to spiral-wave meandering and wavefront break-up. As the noise intensity increases, the spatial dispersion of phase singularity (PS) points increases. This finding indicates the potential AF-protective effects of cardiac system stochasticity by destabilizing the rotors. However, Gaussian white noise is unlikely to affect the spiral-wave instability in the presence of localized scar or fibrosis regions. The PS points are located at the boundary or inside the scar/fibrosis regions. Localized scarring or fibrosis may play a pivotal role in stabilizing spiral waves regardless of the presence of noise. This study suggests that fibrosis/scars are essential for determining the rotors in stochastic mathematical models of AF, and further patient-derived modeling studies are required. + Footnote †: preprint: APS/123-QED ## I Introduction Atrial fibrillation (AF) is the most common cardiac arrhythmia characterized by chaotic electrical wave propagation, and is associated with mortality and morbidity.[1] AF causes electrical and structural remodeling of the atrial tissues, evolving from paroxysmal AF to persistent AF (PeAF). Although the mechanisms of PeAF are poorly understood, recent studies suggest that PeAF is driven by sustained spiral waves ("rotors" or "reentrant drivers") localized within spatially compacted regions.[2] The core of a spiral wave, known as a spiral-wave tip, can be mathematically described as a phase singularity (PS) point,[3] which is the intersection of the depolarizing wavefront and the repolarizing wave tail.[4] Cardiac computational modeling approaches have been widely used to study the complex spiral wave dynamics of human AF. In homogeneous atrial tissue models, the electrical remodeling conditions of PeAF can sustain stable rotor meandering in spatially compacted regions.[5] In the presence of electrophysiological heterogeneities, rotors are frequently found in fibrotic regions or at the boundaries between fibrotic and non-fibrotic tissues.[6; 7; 8] However, most computational models of human AF numerically solve deterministic partial differential equations (PDEs), ignoring the stochastic nature of complex biological systems. Stochasticity is inevitable in complex biological systems such as gene regulatory networks, neuronal networks, and cardiac systems, and plays an important role in the dynamic behavior of nonlinear systems.[9; 10] For example, noise-induced stochastic resonance can be found in the FitzHugh-Nagumo model.[11] In two-dimensional (2D) neuronal networks, certain thresholds of noise intensity can affect the formation and instability of spiral waves.[12; 13] However, it is unknown how noise affects the spiral wave dynamics in human AF models. Does noise affect spiral-wave instability in terms of the meandering of spiral-wave tips and wave break-up? Do PS points also localize near fibrotic regions in stochastic AF models? 
We do not know the answers to these questions based on experimental and clinical studies. Stochastic mathematical modeling of AF is essential for studying the effect of noise on the spiral wave dynamics in human AF models. In this study, we present a stochastic 2D mathematical model of human AF by adding Gaussian white noise to the conventional deterministic reaction-diffusion equation. We use the Courtemanche-Ramirez-Nattel human atrial cell model,[14] which is widely utilized in 2D and three-dimensional (3D) AF models, to study AF mechanisms and personalize ablation treatment strategies.[15; 16] Using the stochastic mathematical model of AF, we numerically simulate spiral waves on 2D isotropic, homogeneous atrial tissues by varying noise intensity levels. To examine whether the PS points localize at fibrotic regions in the stochastic AF model, we further explore the spiral wave dynamics in the presence of localized scar or fibrosis regions (Figure 1). We show that Gaussian white noise can lead to spiral-wave meandering and wavefront break-up in homogeneous atrial tissues, whereas localized scar or fibrotic regions can stabilize spiral waves without generating wavefront break-up. ## II Methods ### Stochastic 2D computational modeling of AF We present a stochastic 2D mathematical model of human AF using the Courtemanche-Ramirez-Nattel human atrial cell model.[14] The conventional deterministic 2D AF model can be described by the following reaction-diffusion equation:[5; 17] \[\frac{\partial V}{\partial t}=-\frac{I_{ion}+I_{stim}}{C_{m}}+D\nabla^{2}V \tag{1}\] where \(V\left(t,\mathbf{x}\right)\) (mV) is the transmembrane potential, \(I_{ion}\) (pA) is the total ionic current, \(I_{stim}\) (pA) is the stimulus current, \(C_{m}\) (pF) is the membrane capacitance, and \(D\) (mm\({}^{2}\)/ms) is the diffusion coefficient. The biophysical details of each ionic current can be found in Courtemanche et al. [14] Here, we add a Gaussian white noise term to obtain the following stochastic PDE: [18; 19] \[\frac{\partial V}{\partial t}=-\frac{I_{ion}+I_{stim}}{C_{m}}+D\nabla^{2}V+\sigma\xi\left(t,\mathbf{x}\right) \tag{2}\] where \(\sigma\) (mV) is the noise intensity and \(\xi\left(t,\mathbf{x}\right)\) is the white noise satisfying \[\left\langle\xi\left(t,\mathbf{x}\right)\right\rangle_{t}=0,\quad\left\langle\xi\left(t,\mathbf{x}\right)\xi\left(t^{\prime},\mathbf{x}\right)\right\rangle_{t}=\delta\left(t-t^{\prime}\right)\] for each \(\mathbf{x}\in\mathbb{R}^{2}\). We can rewrite the above stochastic PDE (Eq. 2) in the differential form as follows: \[dV=\left[-\frac{I_{ion}+I_{stim}}{C_{m}}+D\nabla^{2}V\right]dt+\sigma dW\left(t,\mathbf{x}\right) \tag{3}\] where \(W\left(t,\mathbf{x}\right)\) is the Wiener process satisfying \(dW\left(t,\mathbf{x}\right)=\xi\left(t,\mathbf{x}\right)dt\), which is the differential form of the Brownian motion. We numerically solve this stochastic PDE (Eq. 3) on a 2D isotropic, homogeneous domain of area 75\(\times\)75 mm\({}^{2}\), consisting of a 2D lattice network of 300\(\times\)300 atrial cells. We use the forward Euler method with a fixed time step of \(\Delta t=0.02\) ms and a space step of \(\Delta x=\Delta y=0.25\) mm, and apply the Neumann (no-flux) boundary conditions. The Laplacian \(\nabla^{2}V\) is approximated using the five-point stencil.
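As a concrete illustration, here is a minimal sketch of one Euler-Maruyama step of Eq. (3) on the grid described above. The ionic current is replaced by a toy cubic excitability term (the full Courtemanche-Ramirez-Nattel model has dozens of state variables), the stimulus current is omitted, and the placeholder `I_ion` and the value of `Cm` are assumptions, so this is not the C++ code used in the study.

```python
import numpy as np

# Grid and integration parameters from the text: 300x300 nodes,
# dx = 0.25 mm, dt = 0.02 ms, Neumann (no-flux) boundaries.
N, dx, dt = 300, 0.25, 0.02
D, Cm, sigma = 0.1, 100.0, 5.0   # mm^2/ms, pF (assumed), mV

rng = np.random.default_rng(0)
V = np.full((N, N), -80.0)       # resting transmembrane potential (mV)

def laplacian(V):
    """Five-point stencil with no-flux boundaries via edge padding."""
    Vp = np.pad(V, 1, mode="edge")
    return (Vp[:-2, 1:-1] + Vp[2:, 1:-1] + Vp[1:-1, :-2] + Vp[1:-1, 2:]
            - 4.0 * V) / dx**2

def I_ion(V):
    """Toy cubic excitability term standing in for the full atrial cell model."""
    return 0.1e-3 * (V + 80.0) * (V + 20.0) * (V - 20.0)

def step(V):
    """One Euler-Maruyama step of Eq. (3): dV = [-I_ion/Cm + D*lap(V)]*dt + sigma*dW."""
    dW = np.sqrt(dt) * rng.standard_normal(V.shape)   # Wiener increment
    return V + (-I_ion(V) / Cm + D * laplacian(V)) * dt + sigma * dW

for _ in range(10):
    V = step(V)
```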
The increment of the Wiener process was numerically implemented in the Itô sense as \[W\left(t+\Delta t,\mathbf{x}\right)-W\left(t,\mathbf{x}\right)\approx\sqrt{\Delta t}\ \eta\] where \(\eta\sim\mathcal{N}\left(0,\,1\right)\) is a Gaussian random number with a mean value of 0 and a standard deviation of 1. [13; 20] To reflect the electrical remodeling of PeAF, we reduced the L-type Ca\({}^{2+}\) current (\(I_{CaL}\)) by 70%, the transient outward K\({}^{+}\) current (\(I_{to}\)) by 50%, and the ultrarapid delayed rectifier K\({}^{+}\) current (\(I_{Kur}\)) by 50%, and increased the inward rectifier K\({}^{+}\) current (\(I_{K1}\)) by 100%, as described by Pandit et al. [5] Two diffusion coefficients were tested: \(D\)=0.1 and 0.05 mm\({}^{2}\)/ms, which produce conduction velocities of 0.43 and 0.27 m/s, respectively. \(D\)=0.1 mm\({}^{2}\)/ms is a commonly used diffusion coefficient in 2D AF models that produces a physiological conduction velocity. [5; 17] We also tested the noise intensity levels of \(\sigma=\)0, 1, 5, and 10 mV. Spiral waves were initiated by applying the standard S1-S2 cross-field protocol, and the AF wave dynamics were studied for 5 s. The numerical simulation was performed using C++ code with OpenMP parallelization. The atrial cell model is publicly available at the CellML Physiome Project ([https://models.physiomeproject.org](https://models.physiomeproject.org)). ### Modeling scar and fibrosis regions In addition to the 2D homogeneous model described in the previous section, we simulated the AF wave dynamics on inhomogeneous models in the presence of a localized scar or fibrotic region. The scar and fibrotic regions were applied to the center of the 2D cardiac tissue with a radius of 10 mm (Figure 1). The scar was modeled as an inexcitable and non-conductive region (\(D=0\)). In the fibrotic regions, we reduced \(I_{K1}\) by 50%, \(I_{CaL}\) by 50%, and the sodium current (\(I_{Na}\)) by 40% to reflect the TGF-\(\beta\)1 fibrogenic signaling effects; we also decreased the diffusion coefficient \(D\) by 30% to represent gap junction remodeling. [7; 8] ### Analysis of spiral waves To analyze the spatiotemporal patterns of spiral waves, we generated 2D maps of the transmembrane potential with a sampling time step of 10 ms. PS points were identified using the method proposed by Iyer and Gray. [21] The phase \(\theta\left(t,\mathbf{x}\right)\in\left[-\pi,\pi\right]\) at each node is calculated as \[\theta\left(t,\mathbf{x}\right)=\arctan\left[V\left(t+\tau,\mathbf{x}\right)-V_{mean}\left(\mathbf{x}\right),\ V\left(t,\mathbf{x}\right)-V_{mean}\left(\mathbf{x}\right)\right]\] where \(\tau=30\) ms is the time delay constant and \(V_{mean}\) is the mean of the action potential for the whole fibrillation state. The PS points are identified if \(\oint\nabla\theta\cdot dr=\pm 2\pi\). [21; 22] To evaluate the spatial distribution of the PS points, we computed the PS spatial dispersion as the standard deviation of the PS points as follows: \[\text{PS spatial dispersion}=\sqrt{\frac{\sum_{i}\left\|PS_{i}-PS_{mean}\right\|^{2}}{N_{PS}}}\] where \(PS_{i}\) is the \(i\)th PS point, \(PS_{mean}\) is the average location of the PS points, \(N_{PS}\) is the number of PS points, and \(\left\|\cdot\right\|\) is the \(L^{2}\) norm. The PS spatial dispersion vanishes if the spiral-wave tip is consistent over time. The spiral wave rotation frequency is estimated as the maximum dominant frequency of the action potential signals acquired at the node (50, 50). All signal analyses were performed using MATLAB 2021b (MathWorks, Inc.). Figure 1: Stochastic two-dimensional computational modeling of human atrial fibrillation. Spiral waves were numerically simulated on atrial tissues with and without localized scar and fibrosis regions (see Methods for details).
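The phase and dispersion definitions above translate directly into a few lines of code. The following sketch (an assumed re-implementation, not the MATLAB analysis used in the study) computes the phase map, detects candidate PS points by checking the phase winding around each elementary plaquette of the grid as a discrete stand-in for \(\oint\nabla\theta\cdot dr=\pm 2\pi\), and evaluates the PS spatial dispersion.

```python
import numpy as np

def phase_map(V_t, V_t_delayed, V_mean):
    """theta(t,x) = arctan2(V(t+tau,x) - Vmean, V(t,x) - Vmean)."""
    return np.arctan2(V_t_delayed - V_mean, V_t - V_mean)

def wrap(a):
    """Wrap phase differences into (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def find_ps(theta):
    """Grid plaquettes around which the phase winds by +-2*pi."""
    w = (wrap(theta[1:, :-1] - theta[:-1, :-1])
         + wrap(theta[1:, 1:] - theta[1:, :-1])
         + wrap(theta[:-1, 1:] - theta[1:, 1:])
         + wrap(theta[:-1, :-1] - theta[:-1, 1:]))
    iy, ix = np.where(np.abs(w) > np.pi)   # winding of approximately +-2*pi
    return np.stack([iy, ix], axis=1).astype(float)

def ps_spatial_dispersion(ps_points, dx=0.25):
    """Standard deviation of the PS locations about their mean (in mm)."""
    r = ps_points * dx
    return np.sqrt(np.mean(np.sum((r - r.mean(axis=0)) ** 2, axis=1)))
```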
## III Results ### Noise-induced instability of spiral waves in homogeneous models First, we numerically simulated human AF on 2D homogeneous tissue models. Figure 2 shows the transmembrane potential maps and PS plots. With noise intensity levels of \(\sigma\)=0 and 1 mV, spiral waves were localized near the center of the atrial tissue, indicating sustained stable rotor dynamics. The maximum distance between the PS points was \(<\)20 mm. When a noise intensity level of \(\sigma\)=5 mV was applied, spiral waves continuously meandered (\(>\)30 mm) and wavefront break-up occurred. At a noise intensity level of \(\sigma\)=10 mV, spiral waves meandered over large distances (\(>\)60 mm), and wavefront break-up also occurred. At a noise intensity level of \(\sigma\)=10 mV and a diffusion coefficient of \(D\)=0.1 mm\({}^{2}\)/ms, the spiral waves terminated spontaneously at 4.5 s. All the other cases showed sustained fibrillation states for \(>\)5 s. Sequences of the transmembrane potential maps are shown in Supplementary Figure S1. We quantitatively determined how noise changes the spiral wave rotation frequency and PS spatial dispersion, as shown in Figure 3. Noise changed the spiral wave frequency only marginally (\(<\)2.6%; Figure 3A). As the noise intensity level increased from 0 to 10 mV, the spiral wave rotation frequencies were 8.0, 8.0, 7.8, and 7.8 Hz for the \(D\)=0.1 mm\({}^{2}\)/ms cases, and 7.8, 7.8, 7.8, and 7.6 Hz for the \(D\)=0.05 mm\({}^{2}\)/ms cases, respectively. However, noise dramatically increased the PS spatial dispersion (Figure 3B). As the noise intensity level increased from 0 to 10 mV, the PS spatial dispersions were 4.5, 5.0, 9.5, and 28.1 mm for the \(D\)=0.1 mm\({}^{2}\)/ms cases, and 3.1, 3.5, 14.1, and 23.1 mm for the \(D\)=0.05 mm\({}^{2}\)/ms cases, respectively. This result is consistent with the observations of noise-induced spiral-wave meandering and wavefront break-up, as shown in Figure 2. Figure 3: Spiral wave rotation frequency (A) and phase singularity (PS) spatial dispersion (B) values for stochastic 2D atrial fibrillation simulations on homogeneous tissues, depending on the diffusion coefficients of \(D\)=0.1 and 0.05 mm\({}^{2}\)/ms and the noise intensity levels of \(\sigma\)=0, 1, 5, and 10 mV. Figure 2: Transmembrane potential maps and phase singularity (PS) plots for stochastic 2D atrial fibrillation simulations on homogeneous tissues. The simulations were performed for diffusion coefficients of \(D\)=0.1 and 0.05 mm\({}^{2}\)/ms, and noise intensity levels of \(\sigma\)=0, 1, 5, and 10 mV. The PS points were computed during the whole fibrillation state, and the action potential signals were acquired at the node (50, 50). ### Effects of scar regions Next, we examined the effect of localized scar regions on electrical wave propagation in stochastic AF models. As shown in Figure 4, electrical waves were periodically propagated around the scar region with a radius of 10 mm. The PS points were identified at the boundary of the scar region. There was no wavefront break-up, and the fibrillation states were sustained for \(>\)5 s.
This stable wave propagation pattern is known as an "anatomical reentry" rather than a "spiral wave," which is usually defined in the absence of an anatomic obstacle.[23] Noise changed the spiral wave frequencies only marginally (\(<\)3.3%; Figure 5A). As the noise intensity level increased from 0 to 10 mV, the spiral wave rotation frequencies were 6.0, 6.0, 6.0, and 6.2 Hz for the \(D\)=0.1 mm\({}^{2}\)/ms cases, and 4.0, 4.0, 4.0, and 4.0 Hz for the \(D\)=0.05 mm\({}^{2}\)/ms cases, respectively. In all cases, the PS spatial dispersions were consistently 10.0 mm, which is almost exactly the radius of the scar region (Figure 5B). ### Effects of fibrosis regions Similarly, we examined how localized fibrosis regions affect the spiral wave dynamics in stochastic AF models. As shown in Figure 6, when the diffusion coefficient was \(D\)=0.1 mm\({}^{2}\)/ms, spiral waves meandered around the fibrosis region with a radius of 10 mm, occasionally invading the fibrosis region when those cells had recovered from their refractory periods. When the diffusion coefficient was \(D\)=0.05 mm\({}^{2}\)/ms, spiral waves meandered inside the fibrotic region. The PS points were identified at the boundary and inside the fibrotic region. All cases showed sustained fibrillation states for \(>\)5 s, and there was no wavefront break-up. Noise changed the spiral wave frequencies only marginally (\(<\)3.2%; Figure 7A). As the noise intensity level increased from 0 to 10 mV, the spiral wave rotation frequencies were 6.2, 6.2, 6.2, and 6.4 Hz for the \(D\)=0.1 mm\({}^{2}\)/ms cases, and 5.2, 5.2, 5.1, and 5.2 Hz for the \(D\)=0.05 mm\({}^{2}\)/ms cases, respectively. In all cases, the PS spatial dispersions were consistently below 10.0 mm, implying spiral-wave meandering inside the fibrotic region (Figure 7B). Figure 4: Transmembrane potential maps and phase singularity (PS) plots for stochastic 2D atrial fibrillation simulations in the presence of scar regions. The simulations were performed for diffusion coefficients of \(D\)=0.1 and 0.05 mm\({}^{2}\)/ms, and noise intensity levels of \(\sigma\)=0, 1, 5, and 10 mV. The PS points were computed during the whole fibrillation state, and the action potential signals were acquired at the node (50, 50). Figure 5: Spiral wave rotation frequency (A) and phase singularity (PS) spatial dispersion (B) values for stochastic 2D atrial fibrillation simulations in the presence of scar regions, depending on the diffusion coefficients of \(D\)=0.1 and 0.05 mm\({}^{2}\)/ms and the noise intensity levels of \(\sigma\)=0, 1, 5, and 10 mV. ## IV Discussion In this study, we numerically simulated stochastic 2D models of human AF and explored the effects of Gaussian white noise on the instability of spiral waves. In homogeneous atrial tissue models, the electrical remodeling condition of PeAF can generate stable rotor dynamics in the absence of noise. However, Gaussian white noise can lead to spiral-wave meandering and wavefront break-up without significantly altering the spiral wave frequencies (Figures 2 and 3). This finding indicates the potential AF-protective effect of the stochasticity of cardiac systems by destabilizing rotors. In contrast, Gaussian white noise is unlikely to affect spiral-wave instability in the presence of localized scar and fibrosis regions, and the PS points are located at the scar or fibrosis areas (Figures 4-7). Thus, scarring or fibrosis may play a pivotal role in stabilizing spiral waves regardless of the Gaussian white noise.
The overall results suggest that tissue heterogeneities such as scars and fibrosis are essential for determining the rotors in stochastic 2D mathematical AF models, and further patient-derived stochastic 3D modeling studies are needed. The pathophysiological importance of fibrosis in AF has been extensively studied. In patient-derived 3D computational models, the PS points are associated with fibrotic regions;[7; 8] this spatial relationship between fibrosis and the rotor is robust to the model parameter variability.[6] In addition, fibroblast-myocyte coupling can affect the spiral wave dynamics and extracellular electrograms;[24; 25] however, this coupling effect was not incorporated in this study. The DECAAF clinical study also demonstrated that the degree of atrial tissue fibrosis is associated with the catheter ablation outcomes in AF.[26] In contrast, a recent study using a non-invasive electrophysiology mapping system found that rotors are not directly associated with fibrosis in patients with AF.[27] This discrepancy between computational and clinical studies may be attributed to the model parameter uncertainty and the absence of stochasticity. Our stochastic AF modeling approach must be further tested to examine the noise-induced instability of spiral waves in patient-derived 3D AF models that reflect patient-specific anatomy and electrophysiology. Figure 6: Transmembrane potential maps and phase singularity (PS) plots for stochastic 2D atrial fibrillation simulations in the presence of fibrosis regions. The simulations were performed for diffusion coefficients of \(D\)=0.1 and 0.05 mm\({}^{2}\)/ms, and noise intensity levels of \(\sigma\)=0, 1, 5, and 10 mV. The PS points were computed during the whole fibrillation state, and the action potential signals were acquired at the node (50, 50). Figure 7: Spiral wave rotation frequency (A) and phase singularity (PS) spatial dispersion (B) values for stochastic 2D atrial fibrillation simulations in the presence of fibrosis regions, depending on the diffusion coefficients of \(D\)=0.1 and 0.05 mm\({}^{2}\)/ms and the noise intensity levels of \(\sigma\)=0, 1, 5, and 10 mV. Although \(D\)=0.1 mm\({}^{2}\)/ms is a widely used diffusion coefficient value in 2D AF models,[5; 17] we also tested \(D\)=0.05 mm\({}^{2}\)/ms to examine the spiral wave dynamics in a severely remodeled condition in PeAF. The results from the two diffusion coefficients were similar, except for the spiral wave frequencies. In homogeneous tissues, the spiral wave frequency is known to be primarily dependent on the inverse of the action potential duration and not on the conduction velocity if the curvature effects are negligible.[28; 29] In the present study (Figure 3), the spiral wave frequencies in the homogeneous models were 7.6-8.0 Hz and were not significantly affected by the diffusion coefficient or the noise intensity level. However, in the presence of scar or fibrosis, the spiral wave frequencies in the \(D\)=0.05 mm\({}^{2}\)/ms cases were consistently lower than those in the \(D\)=0.1 mm\({}^{2}\)/ms cases (Figures 5 and 7). As the PS points were identified at the boundary and inside the scar/fibrosis region, the spiral wave frequency may be mainly dependent on the conduction velocity. Although the spiral wave frequency was not the primary focus of the present study, further studies are needed to systemically examine whether noise alters the spiral wave frequency under various electrophysiological conditions.[30] This study has several limitations.
We adopted Gaussian white noise, which is neither structurally correlated nor bounded. Because Gaussian noise is inappropriate for many real complex biological systems, the impact of non-Gaussian noise must be investigated.[13; 31] The action potential and spiral wave dynamics are also sensitive to the model parameter uncertainty/variability.[28; 32] The effects of various AF remodeling conditions, tissue anisotropy, and electrophysiological heterogeneity should be systemically investigated further. We tested only the noise intensity levels of \(\sigma\)=0, 1, 5, and 10 mV because of the large computational cost. It is worthwhile to determine whether there is a critical \(\sigma\) value at which a transition in the instability of spiral waves occurs. In addition, various sizes of atrial tissue and scar/fibrosis regions should be tested because wavelength and tissue size affect the spontaneous termination of cardiac fibrillation.[33] ###### Acknowledgements. This research received no external funding. This study did not produce new animal/clinical data. The author would like to thank the anonymous reviewers for their valuable comments and Editage for English language editing. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request. ## Conflict of Interest The author has no conflicts of interest to declare. ## Authorship Contribution **Euijun Song:** Conceptualization, Methodology, Formal analysis, Software, Investigation, Visualization, Writing - original draft.
2303.15384
Branching Fraction of the Decay $B^+ \to \pi^+ \tau^+ \tau^-$ and Lepton Flavor Universality Test via the Ratio $R_\pi(\tau/\mu)$
Among (semi)leptonic rare $B$-decays induced by the $b \to d$ flavor changing neutral current, the decay $B^+ \to \pi^+ \mu^+ \mu^-$ is the only one observed so far experimentally. Related decays involving the $e^+e^-$ and $\tau^+ \tau^-$ pairs are the targets for the ongoing experiments at the LHC, in particular LHCb, and Belle II. The muonic and electronic semileptonic decays have almost identical branching fractions in the Standard Model (SM). However, the tauonic decay $B^+ \to \pi^+ \tau^+ \tau^-$ differs from the other two due to the higher reaction threshold which lies slightly below the $\psi (2S)$-resonance. We present calculations of the ditauon ($\tau^+ \tau^-$) invariant-mass distribution and the branching fraction ${\rm Br} (B^+ \to \pi^+ \tau^+ \tau^-)$ in the SM based on the Effective Electroweak Hamiltonian approach, taking into account also the so-called long-distance contributions. The largest theoretical uncertainty in the short-distance part of the decay rates is due to the $B \to \pi$ form factors, which we quantify using three popular parametrizations. The long-distance contribution can be minimized by a cut on the ditauon mass $m_{\tau^+ \tau^-} > M_{\psi (2S)}$. Once available, the branching fractions in the tauonic and muonic (and electronic) modes provide a stringent test of lepton flavor universality in the $b \to d$ transitions. We illustrate this by calculating the ratio $R_\pi (\tau/\mu) \equiv {\rm Br} (B^+ \to \pi^+ \tau^+ \tau^-)/{\rm Br} (B^+ \to \pi^+ \mu^+ \mu^-)$ in the SM for the total and binned ratios of the branching fractions.
Ahmed Ali, Alexander Ya. Parkhomenko, Irina M. Parnova
2023-03-27T16:58:02Z
http://arxiv.org/abs/2303.15384v2
# Branching Fraction of the Decay \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) and Lepton Flavor Universality Test via the Ratio \(R_{\pi}(\tau/\mu)\) ###### Abstract Among (semi)leptonic rare \(B\)-decays induced by the \(b\to d\) flavor changing neutral current, the decay \(B^{+}\to\pi^{+}\mu^{+}\mu^{-}\) is the only one observed so far experimentally. Related decays involving the \(e^{+}e^{-}\) and \(\tau^{+}\tau^{-}\) pairs are the targets for the ongoing experiments at the LHC, in particular LHCb, and Belle II. The muonic and electronic semileptonic decays have almost identical branching fractions in the Standard Model (SM). However, the tauonic decay \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) differs from the other two due to the higher reaction threshold which lies slightly below the \(\psi(2S)\)-resonance. We present calculations of the ditauon (\(\tau^{+}\tau^{-}\)) invariant-mass distribution and the branching fraction Br(\(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\)) in the SM based on the Effective Electroweak Hamiltonian approach, taking into account also the so-called long-distance contributions. The largest theoretical uncertainty in the short-distance part of the decay rates is due to the \(B\to\pi\) form factors, which we quantify using three popular parametrizations. The long-distance contribution can be minimized by a cut on the ditauon mass \(m_{\tau^{+}\tau^{-}}>M_{\psi(2S)}\). Once available, the branching fractions in the tauonic and muonic (and electronic) modes provide a stringent test of lepton flavor universality in the \(b\to d\) transitions. We illustrate this by calculating the ratio \(R_{\pi}(\tau/\mu)\equiv{\rm Br}(B^{+}\to\pi^{+}\tau^{+}\tau^{-})/{\rm Br}(B^{+}\to\pi^{+}\mu^{+}\mu^{-})\) in the SM for the total and binned ratios of the branching fractions. keywords: \(B\)-meson, semileptonic decay, \(\tau\)-lepton, transition form factors, branching fraction, lepton flavor + Footnote †: journal: Physics Letters B DESY 23-036 ## 1 Introduction Rare bottom-hadron decays induced by the quark-level Flavor Changing Neutral Current (FCNC) transitions \(b\to s\) and \(b\to d\) are of special interest as they allow us to test the Standard Model (SM) precisely and search for possible deviations from it. FCNC processes in the SM are governed by the GIM mechanism [1], which allows such transitions only through higher-order electroweak (loop) diagrams. In particular, semileptonic rare decays are a very useful tool for testing Lepton Flavor Universality (LFU), a linchpin of the electroweak sector of the SM. Semileptonic decays due to the \(b\to s\) currents such as \(B^{+}\to K^{(*)+}\mu^{+}\mu^{-}\), \(B^{0}\to K^{(*)0}\mu^{+}\mu^{-}\), and \(B^{0}_{s}\to\phi\mu^{+}\mu^{-}\) and their electronic counterparts, while suppressed by the loops, are favored by the quark mixing Cabibbo-Kobayashi-Maskawa (CKM) matrix [2; 3]. Hence, there is plenty of data available on their branching fractions and decay characteristics, such as the lepton-pair invariant-mass and angular distributions [4; 5; 6; 7; 8; 9; 10]. Some of these measurements were found to be not in accord with the SM-based predictions, triggering searches for better models incorporating physics beyond the Standard Model (BSM) [11; 12; 13; 14; 15; 16]. An important issue in these decays is the interference between the short-distance (perturbative) and long-distance (non-perturbative) contributions.
The standard experimental procedure is to exclude the dilepton invariant-mass squared (\(q^{2}\)) spectrum close to the \(J/\psi\)- and \(\psi(2S)\)-resonances, and extract the short-distance part of the spectrum from the rest. Measurements of the phase difference between the short- and long-distance amplitudes in the \(B^{+}\to K^{+}\mu^{+}\mu^{-}\) decay have been undertaken by the LHCb collaboration based on data collected in 2011 and 2012 [17]. Their analysis shows that the phases of the \(J/\psi\)- and \(\psi(2S)\)-mesons are important near the resonance masses, due to their small decay widths, but their influence on the dilepton invariant mass spectrum in other regions is small. In addition, the branching fractions of the higher charmonium states: \(\psi(3770)\), \(\psi(4040)\), \(\psi(4160)\), and \(\psi(4415)\), were measured. This analysis is potentially helpful for studies of other semileptonic decays, \(B^{+}\to\pi^{+}\ell^{+}\ell^{-}\), in particular. For the \(B\to\pi e^{+}e^{-}\) and \(B\to\pi\mu^{+}\mu^{-}\) decays, the light vector mesons \(\rho^{0}\), \(\omega\) and \(\phi\) also give sizable contributions to the dilepton invariant-mass distribution around \(q^{2}\sim 1\) GeV\({}^{2}\). Dedicated searches for possible LFU-violations in rare decays due to the \(b\to s\) currents have been undertaken by the LHCb collaboration [18; 19; 20] in terms of the ratios \(R_{K^{(*)}}(\mu/e)\equiv{\rm Br}(B\to K^{(*)}\mu^{+}\mu^{-})/{\rm Br}(B\to K^{(*)}e^{+}e^{-})\), measured in selected bins in the dilepton invariant mass squared. This data hinted at LFU-violation, typically reaching three standard deviations from the SM. A similar analysis by the Belle collaboration [21; 22], on the other hand, yielded \(R_{K^{(*)}}(\mu/e)=1.03^{+0.28}_{-0.24}\pm 0.01\) for \(q^{2}\in(1.0,\,6.0)\) GeV\({}^{2}\), which, while consistent with the SM, is less conclusive due to larger experimental errors. However, recent measurements of the ratios \(R_{K^{(*)}}(\mu/e)\) in the low- and central-\(q^{2}\) parts of the spectrum by the LHCb collaboration [23; 24] are found in almost perfect agreement with the SM predictions [25; 26]. This data, based on 9 fb\({}^{-1}\) integrated luminosity, with improved understanding of the background and tighter electron particle identification, supersedes the earlier LHCb data. There have also been persistent indications over almost a decade of LFU-violation in the charged current (CC) semileptonic transitions \(B\to D^{(*)}\ell\nu_{\ell}\), comparing the light (\(\ell=e,\,\mu\)) and \(\tau\)-lepton modes via the ratios \(R_{D^{(*)}}\equiv{\rm Br}(B\to D^{(*)}\tau\nu_{\tau})/{\rm Br}(B\to D^{(*)}\ell\nu_{\ell})\) [27; 28; 29; 30]. However, the latest analysis of \(R_{D^{*}}\) by the LHCb collaboration [31; 32], yielding \(R_{D^{*}}={\rm Br}(B^{0}\to D^{*-}\tau^{+}\nu_{\tau})/{\rm Br}(B^{0}\to D^{*-}\mu^{+}\nu_{\mu})=0.247\pm 0.015\,({\rm stat})\pm 0.015\,({\rm syst})\pm 0.012\,({\rm ext})\), is in good agreement with the SM-based estimate \(R_{D^{*}}=0.254\pm 0.005\) [33]. Likewise, the single best measurement of \(R_{D}\), namely \(R_{D}=0.307\pm 0.037\,({\rm stat})\pm 0.016\,({\rm syst})\) by the Belle collaboration [34], is in good agreement with the corresponding ratio in the SM, \(R_{D}=0.298\pm 0.004\) [33]. Thus, the long-standing anomalies in \(R_{D}\) and \(R_{D^{*}}\) in CC decays have receded, thanks to precise data.
We also mention that the 2.6 standard-deviation departure from the LFU observed by the LEP experiments in the branching fractions of the \(W^{\pm}\to\ell^{\pm}\nu_{\ell}\) decays, namely \(R_{\tau/(e\mu)}^{\rm LEP}\equiv 2\,{\rm Br}(W^{\pm}\to\tau^{\pm}\nu_{\tau})/[{\rm Br}(W^{\pm}\to e^{\pm}\nu_{e})+{\rm Br}(W^{\pm}\to\mu^{\pm}\nu_{\mu})]=1.066\pm 0.025\) [35; 36], has now been brought in line with the SM expectation \(R_{\tau/(e\mu)}^{\rm SM}=0.9996\) [37; 38] by precise experiments in proton-proton collisions at the LHC with \(R_{\tau/(e\mu)}^{\rm CMS}=1.002\pm 0.019\) [39]. Measurements by the ATLAS collaboration [40], \(R_{\mu/e}^{\rm ATLAS}=1.003\pm 0.010\) and \(R_{\tau/\mu}^{\rm ATLAS}=0.992\pm 0.013\), are likewise in excellent agreement with the LFU hypothesis. One concludes that there is no experimental evidence of LFU-violation in charged-current processes. Data on the FCNC semileptonic \(b\to d\) transitions is rather sparse. For decays induced by the \(b\to d\ell^{+}\ell^{-}\) transition, where \(\ell=e,\,\mu,\,\tau\), the \(B^{+}\to\pi^{+}\mu^{+}\mu^{-}\) decay is so far the only mode observed in the \(B\)-meson sector, first reported by the LHCb collaboration in 2012 [41] and analyzed in detail in 2015 [42]. The measured dimuon invariant mass distribution in the \(B^{+}\to\pi^{+}\mu^{+}\mu^{-}\) decay is in good agreement with theoretical predictions in the SM [43; 44; 45] in almost all regions of the spectrum, except the lowest \(q^{2}\)-part. In this region, experimental data significantly exceed theoretical predictions based on the short-distance contribution [42]. Taking into account the sub-leading weak annihilation (WA) and long-distance (LD) contributions, however, gives better agreement between theoretical predictions and experimental data [46; 47; 48]. We also note the evidence for the \(B^{0}\to\pi^{+}\pi^{-}\mu^{+}\mu^{-}\) decay with a significance of \(4.8\sigma\) [49], the \(B^{0}_{s}\to K^{*0}\mu^{+}\mu^{-}\) decay at \(3.4\sigma\) [50], reported by the LHCb collaboration, and the observation of the \(\Lambda^{0}_{b}\to p\pi^{-}\mu^{+}\mu^{-}\) decay in the bottom-baryon sector by the same collaboration [51], all of which are mediated by the \(b\to d\ell^{+}\ell^{-}\) transition. A model-independent analysis of the \(|\Delta b|=|\Delta d|=1\) processes to test the SM and probe flavor patterns of new physics was undertaken in [52]. Constraints on Wilson coefficients are obtained from global fits to data on exclusive \(B^{+}\to\pi^{+}\mu^{+}\mu^{-}\), \(B_{s}\to\bar{K}^{*0}\mu^{+}\mu^{-}\), \(B^{0}\to\mu^{+}\mu^{-}\), and inclusive radiative \(B\to X_{d}\gamma\) decays. While consistent with the SM, these fits leave sizable room for new physics. The ratio \(R_{\pi}(\mu/e)\), involving the branching ratios of \(B^{+}\to\pi^{+}\ell^{+}\ell^{-}\) for \(\ell^{+}=e^{+},\mu^{+}\), has been studied theoretically at great length to search for LFU-violation [53] in the semileptonic \(b\to d\) sector. At present, however, there is no data available on the ratio \(R_{\pi}(\mu/e)\). Precision tests of LFU involving the decays \(b\to(s,\,d)\,\tau^{+}\tau^{-}\) remain to be undertaken. Compared to the \(b\to(s,\,d)\,e^{+}e^{-}\) and \(b\to(s,\,d)\,\mu^{+}\mu^{-}\) modes, they have a reduced phase space, in addition to the experimental difficulty of reconstructing the \(\tau^{\pm}\)-leptons. These modes will be targeted at the LHCb and Belle II experiments.
Theoretically, they have the advantage of being relatively free of the LD-contributions, and the form factors involved in the short-distance (SD) piece can eventually be calculated precisely on the lattice. In the \(b\to s\) sector, the decays \(B\to K^{(*)}\tau^{+}\tau^{-}\) have recently been studied theoretically from the BSM physics point of view [54; 55; 56; 57]. Currently only weak experimental upper limits on these decays are available, with \({\rm Br}(B^{+}\to K^{+}\tau^{+}\tau^{-})<2.25\times 10^{-3}\) by the BaBar collaboration [58] and \({\rm Br}(B^{0}\to K^{*0}\tau^{+}\tau^{-})<2.0\times 10^{-3}\) by Belle [59], both obtained at 90% CL. In the \(b\to d\tau^{+}\tau^{-}\) sector, the main decays of interest are \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\), \(B^{0}\to\pi^{+}\pi^{-}\tau^{+}\tau^{-}\), \(B^{0}\to\rho^{0}\tau^{+}\tau^{-}\) and \(B^{+}\to\rho^{+}\tau^{+}\tau^{-}\). In this paper, we present a theoretical analysis of the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) decay in the SM and work out the ditauon invariant-mass distribution and decay width for three popular parametrizations of the \(B\to\pi\) transition form factors [60; 61; 62]. The LD-contributions are calculated using the available data on the decay chain \(B\to\pi V\to\pi\ell^{+}\ell^{-}\) [36]; they can, however, be greatly reduced by imposing a cut on the dilepton invariant mass, \(m_{\ell^{+}\ell^{-}}>M_{\psi(2S)}\). We also estimate the ratio of the tauonic-to-muonic branching fractions, \(R_{\pi}(\tau/\mu)\), which holds also for the ratio \(R_{\pi}(\tau/e)\) in the SM. Their measurements will test LFU-violations involving all three charged leptons in the FCNC \(b\to d\) sector. ## 2 Effective Hamiltonian for the \(b\to d\ell^{+}\ell^{-}\) decays in the SM Our analysis is carried out in the Effective Electroweak Hamiltonian approach [63; 64], in which the SM heavy degrees of freedom (\(W^{+},Z^{0},t\)) are absent. This effective theory also does not contain photons and gluons with energies exceeding the mass of the \(b\)-quark, \(m_{b}\), which represents the largest energy scale of the theory. Photons and gluons with lower energies are included using the QED and QCD Lagrangians. Rare semileptonic decays of the \(B\)-mesons involving the \(b\to s\) and \(b\to d\) FCNC transitions are calculated in this framework, of which the \(b\to d\) part has the form: \[\mathcal{H}_{\rm weak}^{b\to d}=\frac{4G_{F}}{\sqrt{2}}\bigg\{V_{ud}V_{ub}^{*}\left[C_{1}(\mu)\,\mathcal{P}_{1}^{(u)}(\mu)+C_{2}(\mu)\,\mathcal{P}_{2}^{(u)}(\mu)\right] \tag{1}\] \[+V_{cd}V_{cb}^{*}\left[C_{1}(\mu)\,\mathcal{P}_{1}^{(c)}(\mu)+C_{2}(\mu)\,\mathcal{P}_{2}^{(c)}(\mu)\right]\] \[-V_{td}V_{tb}^{*}\sum_{j=3}^{10}C_{j}(\mu)\,\mathcal{P}_{j}(\mu)\bigg\}+\mathrm{h.\,c.},\] where \(G_{F}\) is the Fermi constant, \(V_{q_{1}q_{2}}\) are the CKM matrix elements satisfying the unitarity condition \(V_{ud}V_{ub}^{*}+V_{cd}V_{cb}^{*}+V_{td}V_{tb}^{*}=0\), which can be used to eliminate one of their products, and \(C_{j}(\mu)\) are Wilson coefficients determined at the scale \(\mu\).
For the operators \(\mathcal{P}_{j}(\mu)\), the following basis is chosen [64; 65]: \[\mathcal{P}_{1}^{(p)}=(\bar{d}\gamma_{\mu}LT^{A}p)\,(\bar{p}\gamma^{\mu}LT^{A}b), \tag{2}\] \[\mathcal{P}_{2}^{(p)}=(\bar{d}\gamma_{\mu}Lp)\,(\bar{p}\gamma^{\mu}Lb),\] (3) \[\mathcal{P}_{3}=(\bar{d}\gamma_{\mu}Lb)\,\sum_{q}(\bar{q}\gamma^{\mu}q),\] (4) \[\mathcal{P}_{4}=(\bar{d}\gamma_{\mu}LT^{A}b)\,\sum_{q}(\bar{q}\gamma^{\mu}T^{A}q),\] (5) \[\mathcal{P}_{5}=(\bar{d}\gamma_{\mu}\gamma_{\nu}\gamma_{\rho}Lb)\,\sum_{q}(\bar{q}\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}q),\] (6) \[\mathcal{P}_{6}=(\bar{d}\gamma_{\mu}\gamma_{\nu}\gamma_{\rho}LT^{A}b)\,\sum_{q}(\bar{q}\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}T^{A}q),\] (7) \[\mathcal{P}_{7\gamma}=\frac{e}{16\pi^{2}}\left[\bar{d}\sigma^{\mu\nu}(m_{b}R+m_{d}L)b\right]F_{\mu\nu},\] (8) \[\mathcal{P}_{8g}=\frac{g_{\rm st}}{16\pi^{2}}\left[\bar{d}\sigma^{\mu\nu}(m_{b}R+m_{d}L)T^{A}b\right]G_{\mu\nu}^{A},\] (9) \[\mathcal{P}_{9\ell}=\frac{\alpha_{\rm em}}{2\pi}(\bar{d}\gamma_{\mu}Lb)\sum_{\ell}(\bar{\ell}\gamma^{\mu}\ell), \tag{10}\] \[\mathcal{P}_{10\ell}=\frac{\alpha_{\rm em}}{2\pi}(\bar{d}\gamma_{\mu}Lb)\sum_{\ell}(\bar{\ell}\gamma^{\mu}\gamma_{5}\ell), \tag{11}\] where \(p=u\), \(c\) is the quark flavor, \(T^{A}\) (\(A=1,\,\ldots,\,8\)) are the generators of the color \(SU(3)_{C}\)-group, \(L,R=(1\mp\gamma_{5})\,/2\) are the left- and right-handed fermionic projectors, \(F_{\mu\nu}\) and \(G_{\mu\nu}^{A}\) are the electromagnetic and gluon field strength tensors, respectively, \(m_{b}\) and \(m_{d}\) are the \(b\)- and \(d\)-quark masses, of which the \(d\)-quark mass is neglected, \(\sigma_{\mu\nu}=i\left(\gamma_{\mu}\gamma_{\nu}-\gamma_{\nu}\gamma_{\mu}\right)/2\), and \(\alpha_{\rm em}=e^{2}/(4\pi)\) is the fine structure constant. The summation over \(q\) and \(\ell\) denotes sums over all quarks (except the \(t\)-quark) and charged leptons, respectively. The Wilson coefficients \(C_{j}(\mu)\), which depend on the renormalization scale \(\mu\), are calculated at the matching scale \(\mu_{W}\sim m_{W}\), where \(m_{W}\) is the \(W\)-boson mass, as a perturbative expansion in the strong coupling constant \(\alpha_{s}(\mu_{W})\) [65]: \[C_{j}(\mu_{W})=\sum_{k=0}^{\infty}\left[\frac{\alpha_{s}(\mu_{W})}{4\pi}\right]^{k}C_{j}^{(k)}(\mu_{W}), \tag{12}\] and are then evolved to a lower scale \(\mu_{b}\sim m_{b}\) using the anomalous dimensions of the above operators, which have been calculated to next-to-next-to-leading-log (NNLL) accuracy [65]: \[\gamma_{i}=\frac{\alpha_{s}(\mu)}{4\pi}\,\gamma_{i}^{(0)}+\left(\frac{\alpha_{s}(\mu)}{4\pi}\right)^{2}\,\gamma_{i}^{(1)}+\left(\frac{\alpha_{s}(\mu)}{4\pi}\right)^{3}\gamma_{i}^{(2)}+\ldots \tag{13}\] Numerical values of the Wilson coefficients, calculated to NLL accuracy, are presented in Table 1, where one can see that the Wilson coefficients of the QCD penguin operators, \(C_{j}(m_{b})\) with \(j=3,\,4,\,5,\,6\), have much smaller values than the others. Feynman diagrams for the \(B^{+}\to\pi^{+}\ell^{+}\ell^{-}\) decay are shown in Fig. 1, where the left one denotes the \(\mathcal{P}_{7\gamma}\) contribution, and the right one denotes the \(\mathcal{P}_{9\ell}\) and \(\mathcal{P}_{10\ell}\) contributions.
\begin{table} \begin{tabular}{|c c|c c|} \hline \(C_{1}(m_{b})\) & \(-0.146\) & \(C_{2}(m_{b})\) & \(1.056\) \\ \(C_{3}(m_{b})\) & \(0.011\) & \(C_{4}(m_{b})\) & \(-0.033\) \\ \(C_{5}(m_{b})\) & \(0.010\) & \(C_{6}(m_{b})\) & \(-0.039\) \\ \(C_{7\gamma}(m_{b})\) & \(-0.317\) & \(C_{8g}(m_{b})\) & \(0.149\) \\ \(C_{9\ell}(m_{b})\) & \(4.15\) & \(C_{10\ell}(m_{b})\) & \(-4.26\) \\ \hline \end{tabular} \end{table} Table 1: Wilson coefficients at the scale \(\mu_{b}=m_{b}=4.8\) GeV. The matrix elements for the \(B\to P\) transition, where \(P\) is a pseudo-scalar meson, are expressed in terms of three transition form factors [66]: vector \(f_{+}(q^{2})\), scalar \(f_{0}(q^{2})\), and tensor \(f_{T}(q^{2})\), where \(q^{\mu}=(p_{B}-k)^{\mu}\) is the four-momentum transferred to the lepton pair: \[\langle P(k)|\bar{p}\gamma^{\mu}b|B(p_{B})\rangle=f_{+}(q^{2}) \tag{14}\] \[\times\left[p_{B}^{\mu}+k^{\mu}-\frac{m_{B}^{2}-m_{P}^{2}}{q^{2}}\,q^{\mu}\right]+f_{0}(q^{2})\,\frac{m_{B}^{2}-m_{P}^{2}}{q^{2}}\,q^{\mu},\] \[\langle P(k)|\bar{p}\sigma^{\mu\nu}q_{\nu}b|B(p_{B})\rangle=i\,\frac{f_{T}(q^{2})}{m_{B}+m_{P}}\] (15) \[\times\left[\left(p_{B}^{\mu}+k^{\mu}\right)q^{2}-q^{\mu}\left(m_{B}^{2}-m_{P}^{2}\right)\right],\] where \(m_{B}\) and \(m_{P}\) are the \(B\)- and pseudo-scalar meson masses, respectively. Taking into account the sub-leading contributions, the differential branching fraction is as follows [47]: \[\frac{d\text{Br}\left(B\to P\ell^{+}\ell^{-}\right)}{dq^{2}}=S_{P}\frac{2G_{F}^{2}\alpha_{\text{em}}^{2}\tau_{B}}{3(4\pi)^{5}m_{B}^{3}}|V_{tb}V_{tp}^{*}|^{2}\lambda^{3/2}(q^{2})\] \[\times F^{BP}(q^{2})\sqrt{1-4m_{\ell}^{2}/q^{2}}, \tag{16}\] \[F^{BP}(q^{2})=F_{97}^{BP}(q^{2})+F_{10}^{BP}(q^{2}),\] \[F_{97}^{BP}(q^{2})=\left(1+\frac{2m_{\ell}^{2}}{q^{2}}\right)\left|C_{9}^{\text{eff}}(q^{2})\,f_{+}^{BP}(q^{2})\right.\] \[\left.+\frac{2m_{b}C_{7}^{\text{eff}}(q^{2})}{m_{B}+m_{P}}\,f_{T}^{BP}(q^{2})+L_{A}^{BP}(q^{2})+\Delta C_{V}^{BP}(q^{2})\right|^{2},\] \[F_{10}^{BP}(q^{2})=\left(1-\frac{4m_{\ell}^{2}}{q^{2}}\right)\left|C_{10}^{\text{eff}}\,f_{+}^{BP}(q^{2})\right|^{2}\] \[+\frac{6m_{\ell}^{2}}{q^{2}}\,\frac{\left(m_{B}^{2}-m_{P}^{2}\right)^{2}}{\lambda(q^{2})}\left|C_{10}^{\text{eff}}\,f_{0}^{BP}(q^{2})\right|^{2},\] where \(S_{P}\) is the isospin factor of the final meson (\(S_{\pi^{\pm}}=1\) and \(S_{\pi^{0}}=1/2\) for the \(\pi\)-mesons, the case of our interest in this paper), \(C_{7,9,10}^{\text{eff}}\) are the effective Wilson coefficients including the NLO QCD corrections [67], \(L_{A}^{BP}(q^{2})\) is the Weak-Annihilation (WA) contribution, \(\Delta C_{V}^{BP}(q^{2})\) is the Long-Distance (LD) contribution, and \[\lambda(q^{2})=\left(m_{B}^{2}+m_{P}^{2}-q^{2}\right)^{2}-4m_{B}^{2}m_{P}^{2}, \tag{17}\] is the kinematical function encountered in three-body decays (the triangle function). Note that the differential branching fraction for the decay with the \(\tau^{+}\tau^{-}\)-pair production differs from its counterparts with \(e^{+}e^{-}\) and \(\mu^{+}\mu^{-}\) due to the important role of the scalar form factor, \(f_{0}(q^{2})\). In the electronic and muonic modes, its contribution is chirally suppressed by \(m_{e}^{2}\) and \(m_{\mu}^{2}\), respectively, while this no longer holds for the \(\tau^{+}\tau^{-}\) case.
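To make the structure of Eq. (16) explicit, the following is a structural sketch of the short-distance part of the differential branching fraction. It substitutes the scale-independent Table 1 values for the \(q^{2}\)-dependent effective coefficients \(C_{7,9,10}^{\rm eff}\), drops the WA and LD terms, and takes the form factors as user-supplied callables; the numerical constants (lifetime, couplings) are approximate assumptions, so this reproduces the shape of the spectrum rather than the paper's quoted predictions.

```python
import numpy as np

# Approximate inputs (assumptions for this sketch, not the paper's exact values)
GF, alpha_em = 1.1664e-5, 1.0 / 133.0             # GeV^-2; alpha_em near mu_b
mB, mP, mtau, mb = 5.279, 0.1396, 1.77686, 4.8    # GeV
tauB = 2.49e12                                    # B+ lifetime in GeV^-1
Vtb_Vtd = 8.54e-3                                 # |V_tb V_td*| ~ |V_td|
C7, C9, C10 = -0.317, 4.15, -4.26                 # Table 1 stand-ins for C_i^eff(q^2)

def lam(q2):
    """Triangle function of Eq. (17)."""
    return (mB**2 + mP**2 - q2) ** 2 - 4.0 * mB**2 * mP**2

def dBr_dq2(q2, f_plus, f_zero, f_T, ml=mtau, SP=1.0):
    """Skeleton of Eq. (16) with the WA and LD terms switched off."""
    beta2 = 1.0 - 4.0 * ml**2 / q2
    A97 = C9 * f_plus(q2) + 2.0 * mb * C7 / (mB + mP) * f_T(q2)
    F97 = (1.0 + 2.0 * ml**2 / q2) * abs(A97) ** 2
    F10 = (beta2 * abs(C10 * f_plus(q2)) ** 2
           + 6.0 * ml**2 / q2 * (mB**2 - mP**2) ** 2 / lam(q2)
             * abs(C10 * f_zero(q2)) ** 2)
    pref = SP * 2.0 * GF**2 * alpha_em**2 * tauB / (3.0 * (4.0 * np.pi) ** 5 * mB**3)
    return pref * Vtb_Vtd**2 * lam(q2) ** 1.5 * (F97 + F10) * np.sqrt(beta2)
```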
The WA contribution is calculated in the so-called Large Energy Effective Theory (LEET) [68] and has a significant effect for \(q^{2}\lesssim 1\) GeV\({}^{2}\) only, so its inclusion makes sense for the \(B^{\pm}\to\pi^{\pm}e^{+}e^{-}\) and \(B^{\pm}\to\pi^{\pm}\mu^{+}\mu^{-}\) decays, but it is irrelevant for the \(B^{\pm}\to\pi^{\pm}\tau^{+}\tau^{-}\) case having the \(q^{2}\)-threshold above 12 GeV\({}^{2}\). Two-particle decays, \(B\to V\pi\), where \(V\) is a neutral vector meson, followed by the leptonic decay \(V\to\ell^{+}\ell^{-}\), determine the LD-contributions. They can be represented as follows [46]: \[\Delta C_{V}^{B\pi}=-16\pi^{2}\,\frac{V_{ub}V_{ud}^{*}H^{(u)}+V_{cb}V_{cd}^{*}H^{(c)}}{V_{tb}V_{td}^{*}}, \tag{18}\] \[H^{(p)}(q^{2})=\sum_{V}\frac{\left(q^{2}-q_{0}^{2}\right)k_{V}f_{V}A_{BV\pi}^{p}}{(m_{V}^{2}-q_{0}^{2})(m_{V}^{2}-q^{2}-im_{V}\Gamma_{V}^{\text{tot}})}, \tag{19}\] where \(m_{V}\), \(f_{V}\) and \(\Gamma_{V}^{\text{tot}}\) are the mass, decay constant and total decay width of the vector meson, respectively, \(k_{V}\) is a valence quark content factor, \(A_{BV\pi}^{p}\) (\(p=u\), \(c\)) are the transition amplitudes, and the free parameter \(q_{0}^{2}=-1.0\) GeV\({}^{2}\) is chosen to achieve a better convergence in the denominator of (19). The differential branching fraction (16) involves three \(B\to P\) form factors. They are scalar functions of \(q^{2}\), discussed for the \(B\to\pi\) case in the next section. ## 3 Form Factor Parametrizations Among the available parametrizations of the \(B\to\pi\) transition form factors (FF) known in the literature, we chose those which are based on analyticity, crossing symmetry and the QCD dispersion relations. They are represented as a series in powers of the function \(z(q^{2},q_{0}^{2})\) projecting \(q^{2}\) into the unit ellipse in the complex plane1. Footnote 1: Parameter \(q_{0}^{2}\) used here differs from the one in Eq. (19). The first one is the Boyd-Grinstein-Lebed (BGL) parametrization [60] (\(i=+\), \(0\), \(T\)): \[f_{i}(q^{2})=\frac{1}{P_{i}(q^{2})\,\phi_{i}(q^{2},q_{0}^{2})}\sum_{k=0}^{N}a_{k}^{(i)}\,z^{k}(q^{2},q_{0}^{2}), \tag{20}\] \[z(q^{2},q_{0}^{2})=\frac{\sqrt{m_{+}^{2}-q^{2}}-\sqrt{m_{+}^{2}-q_{0}^{2}}}{\sqrt{m_{+}^{2}-q^{2}}+\sqrt{m_{+}^{2}-q_{0}^{2}}}, \tag{21}\] where \(P_{i=+,T}(q^{2})=z(q^{2},m_{B^{*}}^{2})\) and \(P_{0}(q^{2})=1\) are the Blaschke factors, \(m_{B^{*}}=(5324.71\pm 0.21)\) MeV is the vector \(B^{*}\)-meson mass [36], \(\phi_{i}(q^{2},q_{0}^{2})\) is an outer function [60], depending on three free parameters \(K_{i}\), \(\alpha_{i}\), and \(\beta_{i}\), \(m_{+}=m_{B}+m_{\pi}\), and \(q_{0}^{2}=0.65\,(m_{B}-m_{\pi})^{2}\). The expansion coefficients \(a_{k}^{(i)}\) are non-perturbative parameters, which are determined either phenomenologically or by non-perturbative methods. The second one is the Bourrely-Caprini-Lellouch (BCL) parametrization [61] (\(i=+,\,T\)): \[f_{i}(q^{2})=\frac{1}{1-q^{2}/m_{B^{*}}^{2}}\sum_{k=0}^{N-1}b_{k}^{(i)}\bigg[z^{k}(q^{2},q_{0}^{2})-(-1)^{k-N}\,\frac{k}{N}\,z^{N}(q^{2},q_{0}^{2})\bigg], \tag{22}\] \[f_{0}(q^{2})=\sum_{k=0}^{N-1}b_{k}^{(0)}\,z^{k}(q^{2},q_{0}^{2}),\] (23) \[q_{0}^{2}=m_{+}(\sqrt{m_{B}}-\sqrt{m_{\pi}})^{2}. \tag{24}\] Here, \(z(q^{2},q_{0}^{2})\) is the same as in Eq. (21), and the form factors are calculated by truncating the series at \(N=4\).
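For concreteness, here is a short sketch of the conformal variable of Eq. (21) and the BCL series of Eq. (22) for \(f_{+}(q^{2})\); the coefficients `b_demo` are placeholders for illustration only, not the lattice-fitted values collected in the appendix.

```python
import numpy as np

mB, mpi = 5.279, 0.1396          # GeV (PDG-like values, assumed inputs)
m_plus = mB + mpi
mBstar = 5.32471                 # vector B* mass in GeV
q0_sq = m_plus * (np.sqrt(mB) - np.sqrt(mpi)) ** 2   # optimal t0, Eq. (24)

def z(q2, q0sq=q0_sq):
    """Conformal map of Eq. (21), sending q^2 into the unit disc."""
    a = np.sqrt(m_plus**2 - q2)
    b = np.sqrt(m_plus**2 - q0sq)
    return (a - b) / (a + b)

def f_plus_BCL(q2, b, N=4):
    """BCL series of Eq. (22) with the B* pole factored out."""
    zz = z(q2)
    series = sum(b[k] * (zz**k - (-1) ** (k - N) * k / N * zz**N)
                 for k in range(N))
    return series / (1.0 - q2 / mBstar**2)

b_demo = [0.40, -0.65, -0.50, 0.40]   # illustrative placeholder coefficients
print(f_plus_BCL(15.0, b_demo))       # f_+ evaluated at q^2 = 15 GeV^2
```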
The third type is the modified Bourrely-Caprini-Lellouch (mBCL) parametrization [62] (\(i=+,\,T\)): \[f_{i}(q^{2})=\frac{f_{i}(q^{2}=0)}{1-q^{2}/m_{B^{*}}^{2}} \tag{25}\] \[\times\bigg[1+\sum_{k=1}^{N-1}b_{k}^{(i)}\bigg(\tilde{z}_{k}(q^{2},q_{0}^{2})-(-1)^{k-N}\,\frac{k}{N}\,\tilde{z}_{N}(q^{2},q_{0}^{2})\bigg)\bigg],\] \[f_{0}(q^{2})=\frac{f_{+}(q^{2}=0)}{1-q^{2}/m_{B_{0}}^{2}}\bigg[1+\sum_{k=1}^{N}b_{k}^{(0)}\tilde{z}_{k}(q^{2},q_{0}^{2})\bigg], \tag{26}\] where \(\tilde{z}_{k}(q^{2},q_{0}^{2})=z^{k}(q^{2},q_{0}^{2})-z^{k}(0,q_{0}^{2})\). The function \(z(q^{2},q_{0}^{2})\) is defined in Eq. (21) and \(q_{0}^{2}\) takes the optimal value (24). Here, unlike the other types of \(f_{0}(q^{2})\) parametrizations, this form factor has a pole, but at a higher \(q^{2}\), namely at the scalar \(B_{0}\)-meson mass squared, \(m_{B_{0}}^{2}\). This state has not yet been observed experimentally, and its mass is taken from theory. We set \(m_{B_{0}}=5.54\) GeV, as was used in the determination of the expansion coefficients \(b_{k}^{(0)}\) [62]. Note that the Dispersion Matrix (DM) method was suggested in [69] to describe the FFs, also using analyticity, crossing symmetry and the QCD dispersion relations. This method is based on the non-perturbative determination of the dispersive bounds and describes in a model-independent way the FFs in the full kinematical range, starting from existing Lattice QCD data at large momentum transfer, without a series expansion in powers of \(z(q^{2},q_{0}^{2})\). It was already applied to the semileptonic \(B\to\pi\ell\nu_{\ell}\) decays [70] and can also be used for the analysis of semileptonic FCNC \(B\)-meson decays. ## 4 Numerical Analysis of the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) Decay ### Perturbative Contribution The distribution in the tau-pair invariant mass calculated in perturbation theory for the three types of form factor parametrizations is presented in Fig. 2. The spread shown in these distributions reflects the combined uncertainties from the scale parameter \(\mu\), entering via the Wilson coefficients and varied in the range \(m_{b}/2\leq\mu\leq 2m_{b}\), and from the input value of the CKM matrix element \(V_{td}=(8.54\pm 0.30)\times 10^{-3}\) [36]. Numerical results for the total branching fraction for the three FF parametrizations used in this work are consistent with each other within uncertainties, as shown in Table 2. In working out the numerical values, the expansion coefficients of the BGL parametrization are taken from [43], where the data on the CC-process \(B\to\pi\ell\nu_{\ell}\) decay are fitted, and the relations between the \(B\to K\) and \(B\to\pi\) form factors are used. The values of the BCL parametrization coefficients were obtained within the framework of Lattice QCD (LQCD) [71; 72]. The values of the mBCL parametrization coefficients are obtained by the combined use of the Light-Cone Sum Rules (LCSR) and LQCD [62]. All the expansion coefficients are collected in Appendix A. The entries in Table 2 are also consistent with the existing theoretical predictions in the literature, for example, \(\mathrm{Br}_{\rm th}^{\rm FG}(B^{+}\to\pi^{+}\tau^{+}\tau^{-})=(7.0\pm 0.7)\times 10^{-9}\) [44] and \(\mathrm{Br}_{\rm th}^{\rm WX}(B^{+}\to\pi^{+}\tau^{+}\tau^{-})=(6.0^{+2.6}_{-2.1})\times 10^{-9}\) [73].
It is customary to compare data and theoretical distributions in bins of \(q^{2}\): \[(\Delta\mathrm{Br})_{\pi}^{\tau}(q_{1}^{2},q_{2}^{2})\equiv\int_{q_{1}^{2}}^{q_{2}^{2}}dq^{2}\,\frac{d\mathrm{Br}(B^{+}\to\pi^{+}\tau^{+}\tau^{-})}{dq^{2}}. \tag{27}\] To that end, we plot the theoretical results for the partial branching ratio \((\Delta\mathrm{Br})_{\pi}^{\tau}(q_{1}^{2},q_{2}^{2})\) in bins of the ditauon invariant-mass squared using the three FF parametrizations in Fig. 3, and collect the corresponding values of the partial branching fractions, integrated over the indicated ranges, in Table 3. For comparison, the Lattice results [72] are also shown in the last column. The errors shown are from the CKM matrix element, form factors, variation of the high and low matching scales, and the quadrature sum of all other contributions, respectively. We note that the BGL parametrization predictions are in good agreement with the Lattice-based estimates, both of which are, however, systematically higher than the predictions based on the BCL and mBCL ones in each bin. The same also holds for the total branching fraction (see Table 2). \begin{table} \begin{tabular}{|c|c|c|c|} \hline & BGL & BCL & mBCL \\ \hline \(\mathrm{Br}_{\rm th}\times 10^{9}\) & \(7.56^{+0.74}_{-0.43}\) & \(6.00^{+0.81}_{-0.49}\) & \(6.28^{+0.76}_{-0.46}\) \\ \hline \end{tabular} \end{table} Table 2: Theoretical predictions for the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) total branching fraction, obtained for the three indicated FF parametrizations. ### Long-Distance Contributions Since the \(q^{2}\)-threshold in the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) decay is \(4m_{\tau}^{2}=12.6\) GeV\({}^{2}\), the tauonic invariant-mass distribution would include the \(\psi(2S)\)-meson and higher charmonia decaying into the \(\tau^{+}\tau^{-}\)-pair, estimated below. For the \(\psi(2S)\)-meson contribution to the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) decay, the total branching fractions for the \(B^{+}\to\pi^{+}\psi(2S)\) and \(\psi(2S)\to\tau^{+}\tau^{-}\) decays are as follows [36]: \[\mathrm{Br}(B^{+}\to\pi^{+}\psi(2S))=(2.44\pm 0.30)\times 10^{-5}, \tag{28}\] \[\mathrm{Br}(\psi(2S)\to\tau^{+}\tau^{-})=(3.1\pm 0.4)\times 10^{-3}, \tag{29}\] which yield the following product branching ratio: \[\mathrm{Br}(B^{+}\to\pi^{+}\psi(2S)\to\pi^{+}\tau^{+}\tau^{-})=(7.6\pm 1.3)\times 10^{-8}. \tag{30}\] Being of order \(10^{-7}\), the \(\psi(2S)\)-contribution strongly modifies the SD-based ditauon-mass spectrum but, as the \(\psi(2S)\)-meson is a narrow resonance with \(M_{\psi(2S)}^{2}\simeq 13.6\) GeV\({}^{2}\) and the decay width \(\Gamma_{\psi(2S)}=(294\pm 8)\) keV [36], it affects only the \(q^{2}\)-region in the vicinity of the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) threshold. Experimentally, this contribution can be largely reduced by applying kinematical cuts, say, \(q^{2}\geq 15\) GeV\({}^{2}\). The next vector \(c\bar{c}\) resonance is the \(\psi(3S)\)-meson, also known as \(\psi(3770)\). The total branching fractions of the \(B^{+}\to\pi^{+}\psi(3S)\) and \(\psi(3S)\to\tau^{+}\tau^{-}\) decays are not yet known experimentally. We can estimate the pionic \(B\)-meson decay ratio by using the kaonic \(B\)-meson decay modes, \(B^{+}\to K^{+}\psi(2S)\) and \(B^{+}\to K^{+}\psi(3S)\), which have been measured: \(\mathrm{Br}(B^{+}\to K^{+}\psi(2S))=(6.24\pm 0.20)\times 10^{-4}\) and \(\mathrm{Br}(B^{+}\to K^{+}\psi(3S))=(4.3\pm 1.1)\times 10^{-4}\) [36].
The branching fraction for the decay \(B^{+}\to\pi^{+}\psi(3S)\) can be found with the help of the (approximate) \(SU(3)_{F}\) relation: \[\frac{\mathrm{Br}(B^{+}\to\pi^{+}\psi(3S))}{\mathrm{Br}(B^{+}\to K^{+}\psi(3S))}\simeq\frac{\mathrm{Br}(B^{+}\to\pi^{+}\psi(2S))}{\mathrm{Br}(B^{+}\to K^{+}\psi(2S))}, \tag{31}\] where \(\mathrm{Br}(B^{+}\to\pi^{+}\psi(2S))\) is presented in (28). Taking (31) as a good approximation, we get: \[\mathrm{Br}(B^{+}\to\pi^{+}\psi(3S))=(1.7\pm 0.5)\times 10^{-5}. \tag{32}\] Figure 3: Partial branching fraction of the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) decay, \((\Delta\mathrm{Br})_{\pi}^{\tau}(q^{2}_{\rm min},q^{2}_{\rm max})\), in bins of ditauon invariant mass squared for the BGL (top), BCL (center) and mBCL (bottom) form factor parametrizations. Figure 2: The dilepton invariant-mass distribution for the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) decay for the BGL (top), BCL (center) and mBCL (bottom) parametrizations of the form factors. The green areas indicate the uncertainty due to the factorization scale, FF expansion coefficients and the CKM matrix element \(V_{td}\). However, in contrast to the narrow \(\psi(2S)\)-meson, \(\psi(3770)\) is a broad resonance which decays mainly to \(D^{+}D^{-}\) and \(D^{0}\bar{D}^{0}\), so its purely leptonic decay modes are strongly suppressed. To see this suppression numerically for \(\psi(3S)\to\tau^{+}\tau^{-}\), we use the lepton flavor universality, obeyed by QED and QCD, and the experimentally measured branching fraction \(\text{Br}(\psi(3S)\to e^{+}e^{-})=(9.6\pm 0.7)\times 10^{-6}\) [36]. The branching ratios \(\text{Br}(\psi(3S)\to e^{+}e^{-})\) and \(\text{Br}(\psi(3S)\to\tau^{+}\tau^{-})\) differ from each other only by the phase space factor, and hence their relative rates follow the kinematic relation: \[\frac{\text{Br}(\psi(3S)\to\tau^{+}\tau^{-})}{\text{Br}(\psi(3S)\to e^{+}e^{-})}=\frac{\lambda(M_{\psi(3S)},m_{\tau},m_{\tau})}{\lambda(M_{\psi(3S)},m_{e},m_{e})}, \tag{33}\] where \(\lambda(M,m,m)=M\sqrt{M^{2}-4m^{2}}\) [74]. Taking the masses into account, \(M_{\psi(3S)}=(3773.7\pm 0.4)\) MeV, \(m_{e}=0.511\) MeV, and \(m_{\tau}=(1776.86\pm 0.12)\) MeV [36], we obtain: \[\text{Br}(\psi(3S)\to\tau^{+}\tau^{-})=(3.2\pm 0.2)\times 10^{-6}, \tag{34}\] which in turn yields the LD branching fraction \(\text{Br}(B^{+}\to\pi^{+}\tau^{+}\tau^{-})\) from the \(\psi(3S)\) resonance: \[\text{Br}(B^{+}\to\pi^{+}\psi(3S)\to\pi^{+}\tau^{+}\tau^{-})=(5.4\pm 1.9)\times 10^{-11}. \tag{35}\] This value is three orders of magnitude smaller than the similar decay rate of the \(\psi(2S)\)-meson (30). Compared with the SD (perturbative) contribution (see Table 3), which is of order \(10^{-9}\), the \(\psi(3770)\) contribution, \(\text{Br}(B^{+}\to\pi^{+}\psi(3S)\to\pi^{+}\tau^{+}\tau^{-})\), is subdominant, comparable to the current perturbative errors. There are yet more vector charmonium resonances with masses above the \(\psi(3S)\)-meson mass, \(\psi(4040)\), \(\psi(4160)\), \(\psi(4230)\), \(\psi(4360)\), and \(\psi(4415)\), which also have purely leptonic decay modes. However, as they decay strongly into \(D\bar{D}\) pairs etc., their electronic or muonic branching fractions are also of order \(10^{-5}\) [36], similar to the case of the \(\psi(3S)\)-meson, as shown in Table 4. It follows that their contributions to the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) branching fraction are of the same order of magnitude as that from the \(\psi(3S)\)-meson.
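The chain of estimates in Eqs. (31)-(35) is simple enough to check numerically; the sketch below reproduces the central values, with uncertainties propagated in quadrature on the relative errors (an assumption about the error treatment, which the text does not spell out).

```python
import numpy as np

# Measured inputs [36], as (central value, uncertainty)
br_pi_psi2S = (2.44e-5, 0.30e-5)   # Br(B+ -> pi+ psi(2S))
br_K_psi2S  = (6.24e-4, 0.20e-4)   # Br(B+ -> K+  psi(2S))
br_K_psi3S  = (4.3e-4,  1.1e-4)    # Br(B+ -> K+  psi(3S))
br_ee_psi3S = (9.6e-6,  0.7e-6)    # Br(psi(3S) -> e+ e-)
M3S, m_e, m_tau = 3.7737, 0.000511, 1.77686   # masses in GeV

def lam(M, m):
    """lambda(M, m, m) = M * sqrt(M^2 - 4 m^2), the phase-space factor of Eq. (33)."""
    return M * np.sqrt(M**2 - 4.0 * m**2)

# Eqs. (31)-(32): SU(3)_F estimate of Br(B+ -> pi+ psi(3S))
br_pi_psi3S = br_K_psi3S[0] * br_pi_psi2S[0] / br_K_psi2S[0]

# Eqs. (33)-(34): phase-space rescaling of the e+e- mode to tau+tau-
br_tt_psi3S = br_ee_psi3S[0] * lam(M3S, m_tau) / lam(M3S, m_e)

# Eq. (35): product branching fraction, with quadrature-combined relative errors
br_LD = br_pi_psi3S * br_tt_psi3S
rel = np.sqrt(sum((d / v) ** 2 for v, d in
                  [br_pi_psi2S, br_K_psi2S, br_K_psi3S, br_ee_psi3S]))
print(f"Br(B+ -> pi+ psi(3S))               ~ {br_pi_psi3S:.2e}")   # ~1.7e-5
print(f"Br(psi(3S) -> tau+ tau-)            ~ {br_tt_psi3S:.2e}")   # ~3.2e-6
print(f"Br(B+ -> pi+ psi(3S) -> pi+ tau tau) ~ {br_LD:.2e} +- {br_LD * rel:.1e}")
```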
Consequently, we drop the contribution from all the strongly decaying charmonium resonances (\(\psi(3S)\) and higher), and consider the LD-contribution from the narrow \(\psi(2S)\)-meson only. Since the LD contributions (19) depend on the choice of the amplitude phases \(\delta^{(u)}_{\psi(2S)}\) and \(\delta^{(c)}_{\psi(2S)}\), we present the total branching fraction of the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) decay, including the \(\psi(2S)\) LD-contribution, and its dependence on the assumed values of the strong phases in Tables 5, 6 and 7 for the BGL, BCL and mBCL parametrizations of the form factors, respectively. As can be seen, the variation of the branching fraction with the strong phases is not very marked, and is similar to the errors shown from the SD contribution. The central value including the LD contribution is given for \(\delta^{(u)}_{\psi(2S)}=0\) and \(\delta^{(c)}_{\psi(2S)}=3\pi/4\). The ditauon invariant mass distribution including the LD contribution from the \(\psi(2S)\)-meson is presented in Fig. 4 for the BGL, BCL and mBCL form factors. The vertical solid line at \(q^{2}=15\) GeV\({}^{2}\) in the plots indicates the kinematical cut to exclude the dominant \(\psi(2S)\) contribution.

Figure 4: The ditauon invariant mass distribution with the long-distance contribution from the \(\psi(2S)\)-meson for the BGL (upper plot), BCL (central plot) and mBCL (lower plot) parametrizations. The green areas indicate the uncertainty due to the factorization scale, FF expansion coefficients and the CKM matrix element \(V_{td}\). The vertical solid line at \(q^{2}=15\) GeV\({}^{2}\) indicates the kinematical cut to exclude the \(\psi(2S)\) contribution.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{\(10^{9}\times(\Delta\mathrm{Br})_{\pi}^{\tau}\)} \\ \hline \([q_{\mathrm{min}}^{2},q_{\mathrm{max}}^{2}]\) & BGL & BCL & mBCL & Lattice QCD [72] \\ \hline \([13.0,15.0]\) & \(0.91^{+0.74}_{-0.08}\) & \(0.67^{+0.77}_{-0.07}\) & \(0.71^{+0.78}_{-0.08}\) & – \\ \hline \([15.0,17.0]\) & \(1.2^{+0.08}_{-0.08}\) & \(0.95^{+0.09}_{-0.09}\) & \(1.00^{+0.09}_{-0.09}\) & \(1.11(11,8.2,4.4)\) \\ \hline \([17.0,19.0]\) & \(1.3^{+0.08}_{-0.08}\) & \(1.05^{+0.16}_{-0.10}\) & \(1.10^{+0.05}_{-0.08}\) & \(1.25(8,8,2.3)\) \\ \hline \([19.0,22.0]\) & \(2.00^{+0.19}\) & \(1.62^{+0.17}_{-0.11}\) & \(1.60^{+0.15}_{-0.12}\) & \(1.93(10,4.5)\) \\ \hline \([22.0,25.0]\) & \(1.58^{+0.11}_{-0.09}\) & \(1.34^{+0.11}_{-0.09}\) & \(1.42^{+0.12}_{-0.09}\) & \(1.59(10,7,4.4)\) \\ \hline \end{tabular} \end{table} Table 3: Partial branching ratios for the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) decay, \((\Delta\mathrm{Br})_{\pi}^{\tau}(q_{\mathrm{min}}^{2},q_{\mathrm{max}}^{2})\), obtained using the BGL, BCL and mBCL FF parametrizations in comparison with the Lattice QCD predictions [72]. Invariant mass squared, \(q^{2}\), is given in units of GeV\({}^{2}\). Errors in the last column, obtained by the Lattice QCD calculations, are explained in the text.

### The Ratios \(R_{\pi}(\tau/\ell)\) (\(\ell=e,\mu\)) Since in the SM \(R_{\pi}(\tau/\mu)=R_{\pi}(\tau/e)\) holds to a very high accuracy, we show numerical results only for \(R_{\pi}(\tau/\mu)\), the ratio of the partially-integrated tauonic branching fraction \((\Delta\mathrm{Br})_{\pi}^{\tau}(q_{1}^{2},q_{2}^{2})\) to the muonic one \((\Delta\mathrm{Br})_{\pi}^{\mu}(q_{1}^{2},q_{2}^{2})\).
The partial ratio is defined as follows: \[R_{\pi}^{(\tau/\mu)}(q_{1}^{2},q_{2}^{2})\equiv\frac{(\Delta\mathrm{Br})_{\pi}^{ \tau}(q_{1}^{2},q_{2}^{2})}{(\Delta\mathrm{Br})_{\pi}^{\mu}(q_{1}^{2},q_{2}^{2})}. \tag{36}\] To study the dependence of theoretical results on the choice of the FF parametrization, we plot the partial ratio \(R_{\pi}^{(\tau/\mu)}(q_{\rm min}^{2},q_{\rm max}^{2})\) in bins of the dilepton invariant mass squared, shown in Fig. 5. Numerical values of this ratio, obtained by integrating the partial branching ratios over the indicated \(q^{2}\)-ranges, are shown in Table 8. The errors shown take into account the uncertainties due to the factorization scale, the CKM matrix element \(V_{td}\), and the form factor errors. Theoretical predictions for the total ratio for the BGL, BCL and mBCL parametrizations are as follows: \[R_{\pi}^{\rm BGL}(\tau/\mu)=0.44\pm 0.16, \tag{37}\] \[R_{\pi}^{\rm BCL}(\tau/\mu)=0.31\pm 0.12, \tag{38}\] \[R_{\pi}^{\rm mBCL}(\tau/\mu)=0.37\pm 0.15. \tag{39}\] They agree with each other within the indicated uncertainties. The central values for \(R_{\pi}(\tau/\mu)\) lie in the range \(0.30-0.45\). The main uncertainty on \(R_{\pi}(\tau/\mu)\), as opposed to the very precise ratio \(R_{\pi}(\mu/e)\), is due to the form factors. These results are potentially useful in testing the lepton flavor universality in the \(b\to d\ell^{+}\ell^{-}\) sector. ## 5 Summary and outlook We have presented theoretical predictions for the branching ratio \(\mathrm{Br}(B^{+}\to\pi^{+}\tau^{+}\tau^{-})\) and the ditauon invariant-mass distribution at NLO accuracy in the SM, using three popular parametrizations of the \(B\to\pi\) form factors, known in the literature as the BGL [60], BCL [61], and mBCL [62].

\begin{table} \begin{tabular}{|c|c|c|} \hline \(\delta_{\psi(2S)}^{(u)}\) & \(\delta_{\psi(2S)}^{(c)}\) & \(10^{9}\times\mathrm{Br}(B^{+}\to\pi^{+}\tau^{+}\tau^{-})\) \\ \hline \multicolumn{3}{|c|}{BCL} \\ \hline SDC & \multicolumn{2}{c|}{\(6.00^{+0.81}_{-0.49}\)} \\ \hline 0 & 0 & \(5.79^{+0.78}_{-0.48}\) \\ \hline 0 & \(\pi\) & \(6.23^{+0.84}_{-0.50}\) \\ \hline 0 & \(3\pi/4\) & \(6.05^{+0.80}_{-0.47}\) \\ \hline \(\pi/2\) & \(\pi\) & \(6.24^{+0.84}_{-0.51}\) \\ \hline \(3\pi/2\) & 0 & \(5.78^{+0.78}_{-0.48}\) \\ \hline \end{tabular} \end{table} Table 6: The total branching fraction for the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) decay in the BCL parametrization including the LD contribution from the \(\psi(2S)\)-meson for the various assumed values of the amplitude phases. SDC means the short-distance (perturbative) contribution.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(V\) & \(M_{V}\) [MeV] & \(\Gamma_{V}\) [MeV] & \(10^{5}\times\mathrm{Br}(B^{+}\to VK^{+})\) & \(10^{6}\times\mathrm{Br}(V\to e^{+}e^{-})\) & \(10^{6}\times\mathrm{Br}(V\to\tau^{+}\tau^{-})\) \\ \hline \(\psi(4040)\) & \(4039\pm 1\) & \(80\pm 10\) & \(1.1\pm 0.5\) & \(10.7\pm 1.6\) & \(5.1\pm 0.8\) \\ \hline \(\psi(4160)\) & \(4191\pm 5\) & \(70\pm 10\) & \(51\pm 27\) & \(6.9\pm 3.3\) & \(3.7\pm 1.7\) \\ \hline \(\psi(4230)\) & \(4222.7\pm 2.6\) & \(49\pm 8\) & & \(31\pm 28\) & \(17\pm 15\) \\ \hline \(\psi(4360)\) & \(4372\pm 9\) & \(115\pm 13\) & & \(0.10\pm 0.05\) & \(0.06\pm 0.03\) \\ \hline \(\psi(4415)\) & \(4421\pm 4\) & \(62\pm 20\) & \(2.0\pm 0.8\) & \(9.4\pm 3.2\) & \(5.6\pm 1.9\) \\ \hline \end{tabular} \end{table} Table 4: Experimental data [36] on vector charmonia with masses above the open charm threshold.
The branching fraction of the \(\psi(4360)\to e^{+}e^{-}\) decay follows from the electronic decay width \(\Gamma_{ee}=\left(11.6^{+5.0}_{-4.4}\pm 1.9\right)\) eV, in which the errors are added in quadrature. In getting the \(V\to\tau^{+}\tau^{-}\) branching fractions, Eq. (33) is used.

In the SM, LFU holds, which relates the decay \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) to the observed decay \(B^{+}\to\pi^{+}\mu^{+}\mu^{-}\). In extensions of the SM, the LFU hypothesis can easily be violated, with leptoquark models being the primary candidates [75; 76]. In view of this, we have worked out the ratio of the branching ratios \(R_{\pi}(\tau/\mu)\). Together with the corresponding ratios \(R_{\pi}(\mu/e)\) and \(R_{\pi}(\tau/e)\), their measurement will provide a precision test of the lepton flavor universality in the FCNC \(b\to d\) transitions. Suffice it to say that none of these ratios has been subjected to experimental scrutiny so far. Of these, the ratio \(R_{\pi}(\mu/e)\) has been theoretically investigated in [53]. We have concentrated here on the ratios \(R_{\pi}(\tau/\mu)\) and \(R_{\pi}(\tau/e)\). The decay \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) involves all three form factors, \(f_{+}(q^{2})\), \(f_{0}(q^{2})\), and \(f_{T}(q^{2})\). The uncertainties in \({\rm Br}(B^{+}\to\pi^{+}\tau^{+}\tau^{-})\) arise from the FF parametrizations, the scale-dependence of the Wilson coefficients, and the CKM matrix element. Numerical values of the SD (perturbative) contribution to \({\rm Br}(B^{+}\to\pi^{+}\tau^{+}\tau^{-})\) are listed in Table 2. Partial branching fractions of the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) decay, \((\Delta{\rm Br})^{\tau}_{\pi}(q^{2}_{\rm min},q^{2}_{\rm max})\), in bins of ditauon invariant mass squared, are shown in Fig. 3 and displayed in Table 3. In addition, the LD-contribution from the process \(B\to\pi V\to\pi\ell^{+}\ell^{-}\) is calculated. Experimental data on the masses, partial and total decay widths of the charmonium states are given in Table 4. Due to the small decay width of the \(\psi(2S)\)-meson, its LD-contribution is concentrated close to the \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) reaction threshold and can be largely eliminated by a cut on the ditauon invariant mass squared (\(q^{2}\geq 15\) GeV\({}^{2}\)) (see Fig. 4). We have given arguments why the contributions from the higher charmonium resonances are numerically small, and hence they do not change the SD-contribution in this region perceptibly. Taking into account the LD contribution from the \(\psi(2S)\)-resonance, numerical results for the branching ratio \({\rm Br}(B^{+}\to\pi^{+}\tau^{+}\tau^{-})\) are given in Table 5 for the BGL form factors. The various entries in this table correspond to using the indicated strong phases. The corresponding results for the BCL form factors are given in Table 6 and for the mBCL form factors in Table 7. Since the BGL parametrization and the Lattice-QCD based estimates are rather close to each other, our estimate of the total branching fraction is \({\rm Br}_{\rm th}(B^{+}\to\pi^{+}\tau^{+}\tau^{-})=7.5\times 10^{-9}\), with an uncertainty of about 10%. Numerical values for the ratio \(R_{\pi}(\tau/\mu)\) are given in Eqs. (37)-(39) for the three parametrizations chosen. The central values lie in the range \(0.30-0.45\). The main uncertainty on \(R_{\pi}(\tau/\mu)\), as opposed to the very precise ratio \(R_{\pi}(\mu/e)\), is due to the form factors.
Partial ratios \(R^{(\tau/\mu)}_{\pi}(q^{2}_{\rm min},q^{2}_{\rm max})\) in bins of ditauon invariant mass squared are shown in Fig. 5. These ratios can be calculated more precisely in the future with progress in Lattice QCD. The branching ratio for \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\), integrated over the region \(q^{2}\geq 15\) GeV\({}^{2}\), has a suppression factor of about 3 compared with the \(B^{+}\to\pi^{+}\mu^{+}\mu^{-}\) total branching fraction, which has already been measured. Of course, one has to factor in the experimental efficiency of reconstructing the \(\tau^{+}\tau^{-}\) pair, but a precise measurement is still feasible for the anticipated integrated luminosities at the Belle II and LHCb experiments. Once sufficient data are collected, measurements of the various asymmetries, such as the isospin asymmetry involving \(B^{0}\to\pi^{0}\tau^{+}\tau^{-}\) and \(B^{\pm}\to\pi^{\pm}\tau^{+}\tau^{-}\), and the CP-violating asymmetries involving \(B^{+}\to\pi^{+}\tau^{+}\tau^{-}\) and \(B^{-}\to\pi^{-}\tau^{+}\tau^{-}\), also become interesting; these should be studied theoretically.

Figure 5: Partial ratios \(R^{(\tau/\mu)}_{\pi}(q^{2}_{\rm min},q^{2}_{\rm max})\) in bins of the ditauon invariant mass squared for the BGL (top), BCL (center) and mBCL (bottom) form factor parametrizations.

_Acknowledgments_. I. P. acknowledges the financial support of the German-Russian Foundation G-RISC and the kind hospitality of Christoph Grojean and the theory group at DESY, Hamburg, during her stay in Hamburg in the autumn of 2021. A. P. and I. P. are supported by RSF (Project No. 22-22-00877, [https://rscf.ru/en/project/22-22-00877/](https://rscf.ru/en/project/22-22-00877/)).

## Appendix A Expansion Coefficients of the FF Parametrizations

Values of the expansion coefficients in the BGL FF parametrization are borrowed from [43]. They are obtained as truncated series up to \(k_{\rm max}=2\) and presented in Table 9. Values of the expansion coefficients in the BCL parametrization are calculated from the Lattice-QCD analysis presented in [71; 72]. They are obtained as truncated series up to \(k_{\rm max}=3\) and collected in Table 10. Values of the expansion coefficients in the mBCL parametrization are obtained from the joint Light-Cone Sum Rules and Lattice QCD data [62]. They are presented in Table 11.
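For readers who wish to evaluate such form factors numerically, the following Python sketch illustrates how a BCL-type z-expansion of \(f_{+}(q^{2})\) is typically computed from tabulated expansion coefficients. The structure (the z variable, the \(B^{*}\) pole, and the series truncation) follows the standard BCL construction; the coefficients below are placeholders, not the fitted values of Tables 9-11, and the choice of \(t_{0}\) is one common convention.

```python
# Illustrative sketch of evaluating a BCL-parametrized vector form factor
# f_+(q^2). The b_k below are PLACEHOLDERS, not the values of Table 10.
import math

m_B, m_pi, m_Bst = 5.27934, 0.13957, 5.32470   # GeV
t_plus = (m_B + m_pi) ** 2
t_0 = (m_B + m_pi) * (math.sqrt(m_B) - math.sqrt(m_pi)) ** 2   # a common choice

def z(q2):
    a, b = math.sqrt(t_plus - q2), math.sqrt(t_plus - t_0)
    return (a - b) / (a + b)

def f_plus(q2, b_coeffs):
    """Standard BCL series with K = len(b_coeffs) terms and the B* pole."""
    K = len(b_coeffs)
    zz = z(q2)
    series = sum(
        b * (zz**k - (-1) ** (k - K) * (k / K) * zz**K)
        for k, b in enumerate(b_coeffs)
    )
    return series / (1.0 - q2 / m_Bst**2)

b_placeholder = [0.40, -0.65, -0.50]   # hypothetical coefficients only
for q2 in (15.0, 20.0, 25.0):
    print(q2, round(f_plus(q2, b_placeholder), 4))
```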
2307.07277
Are words equally surprising in audio and audio-visual comprehension?
We report a controlled study investigating the effect of visual information (i.e., seeing the speaker) on spoken language comprehension. We compare the ERP signature (N400) associated with each word in audio-only and audio-visual presentations of the same verbal stimuli. We assess the extent to which surprisal measures (which quantify the predictability of words in their lexical context), generated on the basis of different types of language models (specifically n-gram and Transformer models), predict N400 responses for each word. Our results indicate that cognitive effort differs significantly between multimodal and unimodal settings. In addition, our findings suggest that while Transformer-based models, which have access to a larger lexical context, provide a better fit in the audio-only setting, 2-gram language models are more effective in the multimodal setting. This highlights the significant impact of local lexical context on cognitive processing in a multimodal environment.
Pranava Madhyastha, Ye Zhang, Gabriella Vigliocco
2023-07-14T11:17:37Z
http://arxiv.org/abs/2307.07277v1
# Are words equally surprising in audio and audio-visual comprehension? ###### Abstract We report a controlled study investigating the effect of visual information (i.e., seeing the speaker) on spoken language comprehension. We compare the ERP signature (N400) associated with each word in audio-only and audio-visual presentations of the same verbal stimuli. We assess the extent to which surprisal measures (which quantify the predictability of words in their lexical context), generated on the basis of different types of language models (specifically _n_-gram and Transformer models), predict N400 responses for each word. Our results indicate that cognitive effort differs significantly between multimodal and unimodal settings. In addition, our findings suggest that while Transformer-based models, which have access to a larger lexical context, provide a better fit in the audio-only setting, 2-gram language models are more effective in the multimodal setting. This highlights the significant impact of local lexical context on cognitive processing in a multimodal environment. **Keywords:** Surprisal theory, face-to-face communication setup, multimodal language comprehension, language models. ## Introduction A significant amount of research in language comprehension has been dedicated to examining how humans interpret written or spoken language. These studies have mainly focused on analyzing the verbal form of language [12, 13, 14, 15]. This approach involves building an understanding of the text or speech one word at a time, with some words being more difficult to process than others. Expectation-based theories of sentence processing [12] propose that the difficulty in processing a sentence is driven by the predictability of upcoming lexical material in context. Surprisal, an information-theoretic measure of predictability, is computationally operationalised using language models [1, 13, 15, 16]. Language models (LMs) calculate the probability of a word given its context, which is then used to calculate surprisal. Surprisal has been supported by behavioural and neural measures of processing difficulty [1, 14, 15, 16]. However, a large body of previous work in language comprehension does not consider the visual contextual cues available in face-to-face communication. Language has evolved, is learnt, and is most often used in face-to-face contexts in which comprehenders have access to a multitude of visual cues, such as hand gestures, body movements and mouth movements, that contribute to language processing [10]. In this paper, we follow this line of research and examine how multiple modalities of information impact language comprehension. We present a controlled study comparing the comprehension of language-related stimuli in both audio-only and audio-visual conditions and analyse changes in ERP signals. ### N400, Language Models and Surprisal The N400 is an event-related potential (ERP) component, measured using electroencephalography (EEG), that peaks negatively at \(\approx\)400ms over central-parietal areas during language processing tasks. The N400 is larger in response to semantically incongruent or unexpected words compared to congruent or expected words [16, 17, 18, 19].
This indicates that the N400 is related to semantic processing, and the N400 effect has been interpreted as reflecting the brain's automatic evaluation of incoming linguistic information for semantic coherence. Typically, when an upcoming word is semantically consistent with the context, it leads to a smaller N400 amplitude compared to when it is not. It has been reported in reading-related tasks that words with higher surprisal, thus less predictable and more difficult to process, elicit a more negative N400 [1, 16, 17, 18, 19, 20, 21]. Previous research has demonstrated the robustness of the N400 effect, and surprisal has been shown to predict N400 for various experimental tasks, including cloze-style tasks and semantic relatedness, among others [16]. Recent work observes that surprisal estimates computed using some types of language models may be better predictors than other types. For example, Frank et al. (2015) find that _n_-gram based language models with larger window sizes (4-grams) were best at explaining variance. More recent works have investigated Transformer based language models [20] and show that surprisal from Transformer based models may be a better predictor of N400 than surprisal from other language models. Michaelov et al. (2021) compared surprisal obtained from GPT-2 (Radford et al., 2019) (a Transformer based language model trained over large web-based corpora), a Recurrent Neural Network (RNN) based language model (Elman, 1990), and manual cloze probability in predicting N400 in cloze tasks, where the target words are manipulated to have different cloze probabilities. The authors discovered that all three measures showed a significant association with N400, but surprisal estimates generated from GPT-2 explained the largest amount of variance. Merkx and Frank (2020) conducted a study in which they trained Transformer and RNN based language models in a controlled setting using similar corpora. Under these controlled conditions, the study found that surprisal estimates generated from Transformer-based models, overall, provided a better fit to the EEG data. The increased performance has been hypothesized to be primarily due to the access to a larger lexical context in Transformer-based language models, which helps the model capture longer-range dependencies. Overall, most recent works have shown that surprisal estimates from Transformer based models correlate better with N400 based estimates of cognitive effort. We note that the majority of the N400 and surprisal correlations were found in reading based tasks. Some recent works have shown that surprisal also predicts N400 based cognitive effort in audio (Brennan and Hale, 2019) and audio-visual (Zhang, Ding, et al., 2021; Zhang, Frassinelli, et al., 2021) contexts. However, it remains unclear whether Transformer-based models such as GPT-2 are better predictors in audio or audio-visual settings where multiple sources of information are available. ### Multimodality, Surprisal and N400 Language learning and use are fundamentally face-to-face, involving information from multiple sources (or modalities) such as gestures, facial expression, mouth movements and prosody, in addition to the lexical content of speech (Zhang, Ding, et al., 2021; Zhang, Frassinelli, et al., 2021). These modalities provide additional context and meaning, making communication more effective (Ankener et al., 2018; Grzyb et al., 2022; Zhang, Ding, et al., 2021; Zhang, Frassinelli, et al., 2021).
Recent studies have shown that multiple modalities, such as prosody (the rhythm, stress, and intonation of speech) and gestures, play a key role in shaping language use during face-to-face interactions and in general language use. Crucially, multimodal information, such as prosody and gesture, also modulates the N400. For example, prosodic stress has been shown to mark information structure, with new information more likely to carry prosodic stress than given information (Aylett and Turk, 2004; Cruttenden, 2006). Violations of such patterns elicit a larger N400 (Baumann and Schumacher, 2012; Dimitrova et al., 2012; Magne et al., 2005; Wang et al., 2011), indicating that prosodic information is taken into account in semantic processing. Crucially, visual signals such as iconic gestures (hand movements imagistically related to the content of speech, e.g., "drawing": imitating holding a pen and moving it around) have also been shown to affect the N400. Iconic gestures that mismatch the speech elicit a larger N400 (Holle and Gunter, 2007; Kelly et al., 2004; Ozyurek et al., 2007; Wu and Coulson, 2005), indicating enhanced semantic processing difficulty. Zhang, Frassinelli, et al. (2021) further investigated how multimodal information modulates the N400 in a naturalistic context where different cues co-occur. The authors of this study presented participants with videos in which a speaker produces short passages with naturally occurring prosody, gestures and mouth movements. They then quantified the relation between the ERP and lexical predictability (using 2-gram surprisal estimates), prosody (using mean F0, capturing the pitch of the word), gestures (annotated as meaningful, e.g. "drinking": imitating holding a cup to drink, or beats, the rhythmic hand movements that are not directly meaningful), and the informativeness of mouth movements. This study shows that the ERP between 300-600ms is indeed sensitive to surprisal, extending the previous N400-surprisal effect to the audio-visual modality. However, they also found that the effect of surprisal on N400 is modulated by multimodal information, as pitch prosody, meaningful gestures and informative mouth movements and their combinations reduce the N400, especially for higher surprisal words, indexing easier comprehension than predicted by surprisal alone. Zhang, Ding, et al. (2021) further report similar patterns in highly proficient non-native English comprehenders. These findings indicate that surprisal may not fully capture comprehension in the multimodal context, as the surprisal effect is modulated by multimodal information. However, both these studies only use a 2-gram based language model to compute surprisal estimates. It is unclear whether other models such as Transformers (which have access to a larger window of context) would allow for a better fit for N400 in the audio and audio-visual context. Ankener et al. (2018) presented evidence showing that visual information can impact lexical expectations in reading and listening experiments. They determined the index of cognitive activity by examining the impact of visual uncertainty on word surprisal and cognitive effort. These experiments focused on presenting additional visual stimuli that matched the words in the sentences. These findings suggest that in a controlled environment where visual stimuli are carefully provided, they have a significant effect on cognitive processing.
This indicates the importance of taking into account additional information channels besides lexical content to accurately predict cognitive effort. ### The Present Study We report a controlled study to investigate the effects of visual signals (seeing the speaker) on language comprehension. We compare the effects of audio-only and audio-visual settings using the same language stimuli and analyze the changes in ERP signals. We then evaluate the effectiveness of surprisal estimates, using different language models with varying lexical context windows, in explaining cognitive effort in both unimodal (audio-only) and multimodal (audio-visual) conditions. Our study extends recent observations that indicate that other modalities of information significantly contribute towards the cognitive effort of language processing (Ankener et al., 2018; Zhang, Ding, et al., 2021; Zhang, Frassinelli, et al., 2021, _inter alia_). We provide a comparison of EEG responses to the same lexical context but presented in a unimodal or multimodal manner. Crucially, our analysis of language model surprisal estimates assesses whether language models with different architectures and degrees of complexity provide an equally good fit across unimodal and multimodal contexts. We first present our methodology, followed by the results, and finally discuss the salient observations. ## Methods ### Electrophysiological Data **Participants.** We collected experimental data from two cohorts: a) 27 participants in the audio-only condition and b) 31 participants in the audio-visual condition. All participants were native English speakers with normal hearing, vision, and no known neurological disorder2. Footnote 2: The study was approved by the university ethics committee. Participants gave written consent and were paid £7.5/h for their participation. **Materials.** 103 naturalistic passages were randomly selected from the British National Corpus (BNC) and were evaluated by native English speakers to be semantically and grammatically coherent. They were recorded by a native English-speaking actress with natural prosody and facial expressions. The final corpus of experimental stimuli has a mean duration of 8.50 seconds and an average word count of 23. The onset and offset of each word were automatically detected using a word-phoneme aligner based on a Hidden Markov Model (Rapp, 1995) and were further manually verified (mean=440ms, SD=376ms). Participants watched the videos in the audio-visual setting and listened to the soundtrack of the videos in the audio-only setting. **Procedures.** Participants were seated approximately 1 meter away from a computer and wore earphones during the experiment. After three practice trials, they were presented with audio stimuli in the first experiment and audio-visual stimuli in the second experiment. To ensure comparability between the two experiments, participants in the first experiment also viewed a static snapshot of the same actress taken from the video, to control for the presence of visual input. Each trial was separated by a 2000ms interval, and 35 clips were followed by attention checks to ensure participants were paying attention to the stimuli. Participants were instructed to carefully listen to or watch the stimuli and answer as quickly and accurately as possible. The EEG data was collected for both the audio-only and audio-visual conditions using the same 32-channel Biosemi system with CMS and DRL as ground reference.
Two external electrodes were attached to the left and right mastoids as an offline reference, and two external electrodes captured horizontal and vertical eye movements. Participants were instructed to avoid moving, keep their facial muscles relaxed, and reduce blinking, if possible. The electrode offsets were maintained between \(\pm\) 25mV. The recording was conducted in a shielded room with a temperature of 18 \({}^{\circ}\)C. The EEG session lasted approximately 60 minutes. **EEG Preprocessing.** The data was pre-processed with EEGLAB (Delorme and Makeig, 2004, v.14.1.1) and ERPLAB (Lopez-Calderon and Luck, 2014, v.7.0.0) running under MATLAB 2019a. All electrodes were included. While the N400 has a central-parietal distribution, the scalp distribution of audio and audiovisual speech can be more frontal and may differ between the two due to the modality differences (Kutas and Federmeier, 2011). Therefore, instead of focusing on a predefined region of interest (ROI), we included all electrodes (Zhang, Ding, et al., 2021; Zhang, Frassinelli, et al., 2021), categorized them into ROIs and added them to the statistical model (as in Michaelov et al., 2021; see the Statistical Analysis section below for more description). EEG files were referenced to the mastoids, down-sampled to 512Hz, separated into -100 to 1200ms epochs time-locked to word onset, and filtered with a 0.05-100Hz band-pass filter. Artefacts (e.g., eye movements and muscle noise) were first corrected with ICA. The remaining artefacts were rejected using a moving window peak-to-peak analysis and step-like artefact analysis. Due to the likely overlap between any baseline period (-100 to 0ms) and the EEG signal elicited by the previous word, we did not perform baseline correction, but instead extracted the mean EEG amplitude in this time interval and later used it as a control variable in the statistical analysis (see also Frank et al., 2015). Following previous work (Frank et al., 2015; Zhang, Frassinelli, et al., 2021), we take the mean ERP amplitude between 300-500ms as the N400 signal. ### Computing Surprisal Surprisal theory (Boston et al., 2008; Hale, 2001; Levy, 2008) is rooted in information-theoretic principles (Shannon, 1948), utilising entropy, a core concept in information theory, to assess the predictability of events and the level of surprise they generate. The theory examines the connection between predictability and the processing of lexical information in the human brain. In this framework, lexical units carry information, which is conveyed through a probabilistic measure. The level of predictability of these units influences how the brain processes and evaluates them. When predictability is low, it results in higher levels of surprise and requires more cognitive resources for processing. The exact amount of information conveyed by a unit is hence quantified as its _surprisal_. Formally, consider a linguistic signal **I** made of units \(\{l_{1},\cdots,l_{n}\}\) (where the units could be words, phonemes, etc.); surprisal is then defined as: \[s(l_{t})=-\log p(l_{t}\mid l_{1},\cdots,l_{t-1}) \tag{1}\] which represents the negative log-probability of a unit \(l_{t}\) given its preceding context \((l_{1},\cdots,l_{t-1})\), where \(t\) indicates the sequence time-steps. Surprisal theory asserts that the effort needed to process a linguistic unit is directly proportional to its unexpectedness in its context, which is measured by its surprisal.
Formally, for a linguistic unit \(l_{t}\), the processing effort is linearly proportional to its surprisal: \[\text{effort}(l_{t})\propto s(l_{t})\] As we do not have direct access to the true conditional probabilities of observing linguistic units given their context, we use language models to estimate them instead. We obtain surprisal estimates using log-probabilities (see Equation 1 above) through classical _n_-gram-based language models and more recent Transformer-based models. For _n_-gram models, we cover an entire spectrum of _n_-gram models and construct {2,3,4,5,6}-gram models using modified Kneser-Ney Smoothing (Ney et al., 1994)3. All probability estimates are computed at the word level. For Transformer-based models, we use GPT-2 and BERT 4, and all probability estimates are also computed at the word level. We note that BERT is trained for a cloze-style task and hence the probabilities from this model are considered as pseudo surprisal estimates. Footnote 3: Following Meister et al. (2021), we use Wiki-text 103 as the corpus for estimating the _n_-gram probabilities. Footnote 4: We use openly available pre-trained models from the huggingface library (Wolf et al., 2020). ### Statistical Analysis **Correlation between Audio and Audiovisual N400.** To determine the correlation between N400 in the audio and audio-visual settings, we calculate Pearson's correlation of N400 per word across modalities. N400 was calculated as the mean ERP between 300-500ms minus the baseline ERP mean (as we did not perform baseline correction during preprocessing, as previously mentioned). The variance was reduced by averaging the results across all participants and electrode sites for each word in each modality. **Evaluating model performances across modalities.** We compared the performances of surprisal generated by different computational models using a linear mixed effects regression model conducted in R using the lme4 package (Bates, 2010). We followed a similar approach to Michaelov et al. (2021) by comparing a baseline model with more complex models containing surprisal. The dependent variable was the mean ERPs in the 300-500ms time window extracted from 32 electrodes for all content words (e.g. nouns, verbs, adjectives, as in Frank et al. (2015)). The baseline model contains regions of interest (ROI), which describe the location of each electrode. The 32 electrodes were categorised into six ROIs: prefrontal (Fp1, Fp2, AF3, AF4), fronto-central (F3, F7, Fz, F4, F8, FC5, FC1, FC6, FC2), central (C3, C4, Cz), posterior (CP1, CP5, CP2, CP6, P3, P7, Pz, P4, P8, PO3, PO4, O1, Oz, O2), left temporal (T7) and right temporal (T8). The baseline model also contains the mean EEG amplitude from the baseline interval extracted above. The baseline model includes participant, passage and electrode as random intercepts. Then, we added the main effect of surprisal to create surprisal+ROI models and further the interaction between surprisal and ROI to create surprisal\(\times\)ROI models. We then estimated the improvement of fit by model comparisons, where the surprisal+ROI models are compared with the baseline models and the surprisal\(\times\)ROI models are compared with the surprisal+ROI models, using the anova() function in R (_p_-values FDR adjusted for multiple comparisons). We also calculated the decrease in AIC value for each model compared with the baseline model (\(\Delta\mathrm{AIC}\)). The same analysis was performed for the audio and audio-visual data separately.
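To make the surprisal computation concrete, the following is a minimal Python sketch of word-level surprisal from GPT-2 via Eq. (1). It assumes the HuggingFace transformers and PyTorch libraries and uses simple whitespace word segmentation; it is an illustration, not the authors' released code. Sub-word (BPE) surprisals are summed per word, a standard way to obtain word-level estimates.

```python
# Word-level surprisal from GPT-2, Eq. (1); log is natural log (nats).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def word_surprisals(sentence):
    words = sentence.split()                    # crude word segmentation
    ids, word_idx = [tok.bos_token_id], []
    for i, w in enumerate(words):
        pieces = tok.encode((" " if i else "") + w)
        ids += pieces
        word_idx += [i] * len(pieces)
    x = torch.tensor([ids])
    with torch.no_grad():
        logits = model(x).logits[0]
    logprobs = torch.log_softmax(logits[:-1], dim=-1)   # position t predicts t+1
    piece_s = -logprobs[torch.arange(len(ids) - 1), x[0, 1:]]
    totals = [0.0] * len(words)
    for s, i in zip(piece_s.tolist(), word_idx):
        totals[i] += s                          # sum sub-word surprisals per word
    return list(zip(words, totals))

print(word_surprisals("the speaker nodded and smiled"))
```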
## Results ### N400 is weakly correlated across settings The Pearson correlation coefficient for N400 per word between the audio-only and audio-visual settings is 0.11 (t = 5.16, \(p<.001\)), indicating a weak positive correlation between the two settings. Figure 1 shows a scatter plot of N400 across the two settings, which indicates that while most of the data points are densely populated in the center, there is no meaningful relationship between the two settings. If the lexical information were the most significant contributing factor, we would expect a stronger correlation between the audio-only and audio-visual conditions, since both experiments involve the same verbal stimuli.

Figure 1: Scatter plot of the N400 signal across the audio-only and audio-visual modalities (Pearson correlation \(r=0.11\)). The plot showcases that there is a weak correlation between the two settings for the same lexical input.

### Statistical models behave differently across settings We present the statistical analysis for the audio-only and audio-visual settings in Table 1. We find that the additive models (surprisal\(+\)ROI) provide a better fit for N400 amplitudes than the baseline models in both the audio-only and audio-visual settings, as indicated by the \(\chi^{2}\) and \(p\) values. Furthermore, the multiplicative models (surprisal\(\times\)ROI) almost always improve the model fit compared to the additive models. The improvement of the multiplicative (surprisal\(\times\)ROI) models over the additive models indicates that surprisal generated from all models predicts N400 amplitudes in both the audio and audio-visual conditions in interaction with ROIs. In the auditory setting, we observe that the largest reduction in AIC compared to the baseline model (\(\Delta\mathrm{AIC}\)) is associated with GPT-2, followed by BERT and the _n_-gram models. This suggests that Transformer-based models (especially GPT-2), which have access to the largest lexical context, can better predict N400 amplitudes in a unimodal setting.

\begin{table} \begin{tabular}{l l c c c c} \hline \hline & Model & \(\chi^{2}\) & Df & \(p\) & \(\Delta\mathrm{AIC}\) \\ \hline 2-gram & surprisal\(+\)ROI & 17.76 & 1.00 & \(<\).001 & 15.76 \\ & surprisal\(\times\)ROI & 10.03 & 5.00 & 0.07 & 15.80 \\ \hline 3-gram & surprisal\(+\)ROI & 21.46 & 1.00 & \(<\).001 & 19.46 \\ & surprisal\(\times\)ROI & 20.05 & 5.00 & \(<\).001 & 29.51 \\ \hline 4-gram & surprisal\(+\)ROI & 34.64 & 1.00 & \(<\).001 & 32.64 \\ & surprisal\(\times\)ROI & 20.74 & 5.00 & \(<\).001 & 43.37 \\ \hline 5-gram & surprisal\(+\)ROI & 33.02 & 1.00 & \(<\).001 & 31.02 \\ & surprisal\(\times\)ROI & 10.96 & 5.00 & \(<\).001 & 41.62 \\ \hline 6-gram & surprisal\(+\)ROI & 33.96 & 1.00 & \(<\).001 & 31.96 \\ & surprisal\(\times\)ROI & 21.09 & 5.00 & \(<\).001 & 43.05 \\ \hline BERT & surprisal\(+\)ROI & 66.10 & 1.00 & \(<\).001 & 64.10 \\ & surprisal\(\times\)ROI & 16.19 & 5.00 & 0.01 & 70.28 \\ \hline GPT-2 & surprisal\(+\)ROI & 98.95 & 1.00 & \(<\).001 & 96.95 \\ & surprisal\(\times\)ROI & 63.90 & 5.00 & \(<\).001 & 150.85 \\ \hline \hline \end{tabular} \end{table} Table 1: Model comparisons in the audio-only (left) and audio-visual (right) settings. \(\chi^{2}\) and \(p\)-values of the additive (surprisal\(+\)ROI) and multiplicative (surprisal\(\times\)ROI) models were derived from comparisons with the baseline and additive models respectively, while \(\Delta\mathrm{AIC}\) was always derived from comparisons with the baseline model. We observe that models with surprisal (both additive and multiplicative), in both settings, provide a good fit for N400 amplitudes.
We also observe that the multiplicative models almost always provide a better fit than the additive models.

Figure 2: Reduction of AIC (in the surprisal\(\times\)ROI models) associated with each model across modalities. While models with access to larger lexical context windows provide a better fit in the audio-only setting, models with a smaller, local lexical context provide a better fit in the audio-visual setting for the same verbal stimuli.

Previous work has also observed a similar pattern, where models that consider larger lexical contexts have been shown to provide a better fit [11, 12]. However, in the audio-visual setting, we observe a reversal of this pattern where, strikingly, the 2-gram model shows the largest \(\Delta\mathrm{AIC}\), while GPT-2 shows the smallest \(\Delta\mathrm{AIC}\). In general, we notice that the models with a smaller context window provide a better fit in the audio-visual setting. We present our results in Figure 2, which shows the reduction in AIC (\(\Delta\mathrm{AIC}\)) across models and modalities. We note that we only plot the multiplicative models, as they offer a better overall fit (the additive models showed similar patterns). These observations indicate that local lexical information is more prominent in the multimodal setting. ## Discussion Our results demonstrate that under the same verbal stimuli, cognitive processing, as captured using ERP, differs significantly between unimodal and multimodal experimental settings. We replicate the earlier findings of multiplicative models (surprisal\(\times\)ROI) providing a better fit for the data in comparison to additive (surprisal\(+\)ROI) models. Although we validate earlier findings that Transformer-based models like GPT-2 are better predictors of N400 in unimodal (audio-only) settings, the opposite trend is observed in the multimodal setting. These observations strongly suggest that nonverbal cues contribute significantly to cognitive processing, beyond lexical information alone. In the unimodal setting, the surprisal estimates from the GPT-2 based language model exhibit the best fit compared to other models, consistent with previous research [12] demonstrating the superiority of Transformer-based models over other language models, such as RNNs and traditional _n_-gram models, over a variety of psychometric data. However, BERT displays slightly different results, possibly due to its training objective as a masked language model, which limits access to only pseudo log-probabilities. This difference in objectives between BERT and GPT-2, combined with the limitations in accessing log-probabilities from BERT, could contribute to the differing performance of these models. Similar findings have been reported in previous work [12]. In the multimodal setting, our results reveal a reversal of trends compared to the unimodal setting. Surprisal values derived from the _n_-gram language models, particularly the 2-gram model, provide the best fit for N400 in the multimodal scenario. We note that surprisal only captures word predictability based on the previous lexical context, ignoring any multimodal information in the stimuli.
We posit that, in the multimodal setting, participants utilise multiple sources of information, such as gestures, mouth movements, eye movements, and posture. The increased information content from multiple sources may lead participants to track the local lexical context more closely than the global lexical context. Our findings using language models with different contextual windows provide some validation of this hypothesis. In particular, we observe in Figure 2 that as we increase the context window from 2 to 6, we see an overall degradation in \(\Delta\mathrm{AIC}\), indicating a worse fit in comparison to the 2-gram model. The differences in N400 across the audio and audiovisual modalities indicate that cognitive processing strategies differ across modalities even when the verbal stimuli are identical. Overall, our findings provide strong evidence that multimodal processing of language differs significantly from unimodal processing of language, even under the same verbal stimuli. Our results generally highlight the importance of considering non-verbal cues in language processing. ## Summary and Conclusions In this paper, we present a controlled study investigating the effect of multiple modalities of information on the cognitive processing involved in language comprehension. We conduct experiments over audio-only and audio-visual modalities with the same verbal stimuli. Our findings overall suggest that cognitive effort in a multimodal setting differs significantly from that in a unimodal setting, with nonverbal contextual information playing a significant role. We also observe that the local verbal context influences cognitive processing effort significantly more in the multimodal setting than in the unimodal setting. We believe that our results highlight the importance of modelling non-verbal cues for language comprehension and processing.5 Footnote 5: Source code and data for the replication of our study are made available here: [https://github.com/pmadhyastha/multimodal_comprehension](https://github.com/pmadhyastha/multimodal_comprehension)
2307.00518
DSTCGCN: Learning Dynamic Spatial-Temporal Cross Dependencies for Traffic Forecasting
Traffic forecasting is essential to intelligent transportation systems but is challenging due to the complicated spatial and temporal dependencies within a road network. Existing works usually learn spatial and temporal dependencies separately, ignoring the dependencies crossing the spatial and temporal dimensions. In this paper, we propose DSTCGCN, a dynamic spatial-temporal cross graph convolution network that learns dynamic spatial and temporal dependencies jointly via graphs for traffic forecasting. Specifically, we introduce a fast Fourier transform (FFT) based attentive selector to choose relevant time steps for each time step based on time-varying traffic data. Given the selected time steps, we introduce a dynamic cross graph construction module, consisting of spatial graph construction, temporal connection graph construction, and fusion modules, to learn dynamic spatial-temporal cross dependencies without pre-defined priors. Extensive experiments on six real-world datasets demonstrate that DSTCGCN achieves state-of-the-art performance.
Binqing Wu, Ling Chen
2023-07-02T08:53:10Z
http://arxiv.org/abs/2307.00518v1
# DSTCGCN: Learning Dynamic Spatial-Temporal Cross Dependencies for Traffic Forecasting ###### Abstract Traffic forecasting is essential to intelligent transportation systems but is challenging due to the complicated spatial and temporal dependencies within a road network. Existing works usually learn spatial and temporal dependencies separately, ignoring the dependencies crossing the spatial and temporal dimensions. In this paper, we propose DSTCGCN, a dynamic spatial-temporal cross graph convolution network that learns dynamic spatial and temporal dependencies jointly via graphs for traffic forecasting. Specifically, we introduce a fast Fourier transform (FFT) based attentive selector to choose relevant time steps for each time step based on time-varying traffic data. Given the selected time steps, we introduce a dynamic cross graph construction module, consisting of spatial graph construction, temporal connection graph construction, and fusion modules, to learn dynamic spatial-temporal cross dependencies without pre-defined priors. Extensive experiments on six real-world datasets demonstrate that DSTCGCN achieves state-of-the-art performance. Traffic forecasting, spatial-temporal graph neural networks, fast Fourier transform ## I Introduction Traffic forecasting is an essential part of an intelligent transportation system and a crucial technique for developing a smart city [1, 2]. Accurate traffic forecasting will provide reliable guidance for scheduling transportation resources, mitigating traffic congestion, raising early warnings for public safety, and offering suggestions to citizens for their daily commuting [3]. Since traffic forecasting has a wide range of real-world applications, it has become a popular research focus in academic and industrial communities for decades. Traffic forecasting aims to accurately predict future traffic data, e.g., traffic flow and speed, given historical traffic data recorded by sensors on a road network. It is highly challenging due to the complicated spatial and temporal dependencies within the road network. Spatially, traffic data collected by a sensor are influenced by nearby traffic conditions, as the traffic dynamics propagate along the road. Temporally, the current features of traffic data are influenced by historical features. Moreover, spatial dependencies and temporal dependencies are entangled and time-varying in real-world traffic systems. In the past decades, many works have been proposed for this challenging task, from using shallow machine learning [4, 5] to applying recurrent neural network (RNN) and convolutional neural network (CNN) based deep learning [2, 6, 7, 8]. Although these works make it possible to model temporal dependencies and grid-based spatial dependencies, they cannot capture graph-based spatial dependencies within an irregular road network in reality [9]. Towards this problem, graph neural network (GNN) based works have been proposed to leverage the graph structure of a road network effectively [10, 11]. Specifically, these works use a graph to define a road network, where each node represents a sensor, and each edge represents a spatial dependency between sensors. More recently, researchers have integrated GNNs to capture spatial dependencies with RNNs [9, 12, 13], CNNs [14, 15], or attention mechanisms [16, 17] to model temporal dependencies. These networks, known as spatial-temporal graph neural networks (STGNNs), have shown state-of-the-art performance for traffic forecasting [2, 11].
Despite the success, the performance of many existing STGNNs is highly constrained by utilizing static dependencies. They usually construct static graphs, e.g., distance graphs [12, 14, 18, 19], POI similarity graphs [20, 21], temporal similarity graphs [21, 22], and static adaptive graphs [13, 23, 24], to model spatial dependencies, an approach that neglects the changing nature of the spatial dependencies within road networks. Some explorations have been conducted to model such dynamics. For example, a static graph and dynamic attribute based graphs are integrated to obtain time-varying structures [25], and attention mechanisms are exploited to construct structures changing with time [26]. However, these works only focus on the dynamics of spatial dependencies and ignore dependencies crossing the spatial and temporal dimensions, which may fail to extract some effective features carried by cross dependencies. The effectiveness of spatial-temporal cross dependencies has been empirically shown for traffic forecasting. These works [22, 27, 28] usually represent cross dependencies by a fused graph, e.g., a spatial-temporal synchronous graph [27] constructed by distance graphs and temporal connection graphs, and a spatial-temporal fusion graph [22] constructed by distance graphs, time similarity graphs, and temporal connection graphs. However, these works still rely on static graphs, which cannot capture dynamic cross dependencies. To address the aforementioned problems, we propose a **D**ynamic **S**patial-**T**emporal **C**ross **G**raph **C**onvolution **N**etwork (DSTCGCN). To the best of our knowledge, DSTCGCN is the first work that learns dynamic spatial and temporal dependencies jointly via graphs to explore and utilize time-varying cross dependencies for traffic forecasting. The main contributions of our work are as follows: * Introduce an FFT-based attentive selector to choose the relevant time steps for each time step based on time-varying traffic data (a rough illustrative sketch of such a selector is given below).
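The excerpt ends before the architectural details are given, so no implementation can be quoted from the paper. Purely as a rough illustration of what an "FFT-based attentive selector" over time steps might look like, consider the following speculative PyTorch sketch; all module names, shapes, and design choices here are hypothetical and are not DSTCGCN's actual design.

```python
# Hypothetical sketch: score time steps against each other using
# frequency-domain features of the traffic series, then keep the
# top-k most relevant time steps for each time step.
import torch
import torch.nn as nn

class FFTAttentiveSelector(nn.Module):
    def __init__(self, num_nodes, channels, hidden, k):
        super().__init__()
        feat = num_nodes * channels * 2          # real + imaginary parts
        self.query = nn.Linear(feat, hidden)
        self.key = nn.Linear(feat, hidden)
        self.hidden, self.k = hidden, k

    def forward(self, x):                        # x: (B, T, N, C)
        B, T, N, C = x.shape
        spec = torch.fft.fft(x, dim=1)           # FFT along the time axis
        f = torch.cat([spec.real, spec.imag], dim=-1).reshape(B, T, -1)
        scores = self.query(f) @ self.key(f).transpose(1, 2)
        weights = torch.softmax(scores / self.hidden ** 0.5, dim=-1)  # (B, T, T)
        top_idx = weights.topk(self.k, dim=-1).indices                # (B, T, k)
        return weights, top_idx

x = torch.randn(2, 12, 207, 1)                   # e.g. 12 steps, 207 sensors
sel = FFTAttentiveSelector(num_nodes=207, channels=1, hidden=32, k=3)
w, idx = sel(x)
print(w.shape, idx.shape)                        # (2, 12, 12), (2, 12, 3)
```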
2304.01356
Elliptic cross sections in blood flow regulation
Arterial deformations arise in blood flow when surrounding tissue invades the space available for a blood vessel to maintain its circular cross section, the most immediate effects being a reduction in blood flow and redistribution of shear stress. Here we consider deformations from circular to elliptic cross sections. Solution of this problem in steady flow is fairly straightforward. The focus in the present paper is on pulsatile flow where the change from circular to elliptic cross sections is associated with a transition in the character of the equations governing the flow from Bessel to Mathieu equations. The study of this problem has been hampered in the past because of difficulties involved in the solution of the governing equations. In the present study we describe methods we have used to overcome some of these difficulties and present a comprehensive set of results based on these methods. In particular, vessel deformation is examined under two different conditions relevant to blood flow regulation: (i) keeping cross sectional area constant and (ii) keeping cross sectional circumference constant. The results provide an important context for the mechanism of neurovascular control of blood flow under the pathological conditions of vessel deformation.
Chris Brimacombe, Robert M. Corless, Mair Zamir
2023-01-19T21:57:51Z
http://arxiv.org/abs/2304.01356v1
# Elliptic cross sections in blood flow regulation+ ###### Abstract Arterial deformations arise in blood flow when surrounding tissue invades the space available for a blood vessel to maintain its circular cross section, the most immediate effects being a reduction in blood flow and redistribution of shear stress. Here we consider deformations from circular to elliptic cross sections. Solution of this problem in steady flow is fairly straightforward. The focus in the present paper is on pulsatile flow where the change from circular to elliptic cross sections is associated with a transition in the character of the equations governing the flow from Bessel to Mathieu equations. The study of this problem has been hampered in the past because of difficulties involved in the solution of the governing equations. In the present study we describe methods we have used to overcome some of these difficulties and present a comprehensive set of results based on these methods. In particular, vessel deformation is examined under two different conditions relevant to blood flow regulation: (i) keeping cross sectional area constant and (ii) keeping cross sectional circumference constant. The results provide an important context for the mechanism of neurovascular control of blood flow under the pathological conditions of vessel deformation. **Keywords:** Neurovascular control; Blood vessel deformation; Pulsatile blood flow; Coronary arteries; Mathieu equations/functions ## 1 Introduction Arterial deformations arise in blood flow when surrounding tissue invades the space available for a blood vessel to maintain its circular cross section. This may occur in steady state when the invading tissue is pathological, or in oscillatory state when the invading tissue is driven by the effects of pulsatile blood flow. In the brain, the presence of a tumor may compress surrounding lymphatic and blood vessels, causing flow disruptions, especially within the restrictive environment of the rigid skull [(30, 40, 34)]. In the heart, the coronary vasculature embedded within the ventricular walls undergoes periodic compression and deformation with each contraction of the heart muscle [(43)]. Segments of the aorta near the heart have also been reported to undergo periodic deformations from circular to elliptic cross section with each heart beat [(27)]. Coronary arteries tethered to the surface of the heart undergo a different kind of deformation as they are laterally displaced with each heart beat, causing a lateral acceleration of fluid and a lateral force on the tube wall, resulting in a change in its shape from a circular to an elliptic cross section [(7)]. Flow in tubes of noncircular cross sections, both steady and pulsatile, has also been discussed in relation to the movement of spinal fluids under normal and pathological conditions [(17, 8, 19, 39, 21)]. It is well known that the flow in a tube of circular cross section is singular in the sense that any departure from the circular geometry of the cross section causes a reduction in the flow rate as well as a redistribution of the shear stress along the circumference of the tube wall, whereby the shear stress at some points will be higher than that in an equivalent tube of circular cross section [(12)]. Both of these changes are important in blood flow, the latter in particular in relation to atherosclerosis [(5, 22, 36, 14)].
While blood vessel deformation by surrounding tissue may lead to many different forms of deformation of the vessel cross section, in the present study, to keep the problem mathematically tractable, we consider the limited problem of deformations from circular to elliptic cross sections. Flow within a blood vessel is generally under neurovascular control whereby a change in flow rate is mediated by a change of vessel diameter. The latter in turn is mediated by a change in muscular tension within the vessel wall to the effect of changing the length of the wall circumference [(35)]. If the vessel is deformed by surrounding tissue such that its cross section is transformed from circular to elliptic form, two distinctly different scenarios may follow, which we shall refer to as "passive" and "active" scenarios. Under a passive scenario the neurovascular control is absent, and a change from circular to elliptic cross section occurs with the circumference of the vessel wall remaining constant. Under the active scenario the neurovascular control responds by changing the tension within the vessel wall in an attempt to maintain the flow rate by keeping the cross sectional area available to the flow constant. The aim of the present study is to outline the analyses associated with these two scenarios and to present results illustrating the hemodynamic consequences in the two cases. While from a geometrical perspective the change from circular to elliptic cross sections may seem to be a "smooth" change, from a mathematical perspective it presents a discontinuity in the character of the equations governing pulsatile flow as well as in their solutions. Specifically, in the case of circular cross sections the equations governing the flow are Bessel equations and the solutions involve Bessel functions, while in the case of elliptic cross sections the flow is governed by Mathieu equations and the solutions involve Mathieu functions [13]. The study of pulsatile flow in tubes of elliptic cross sections has been hampered in the past because of difficulties involved in the solution of these equations and in the numerical evaluation of Mathieu functions with complex arguments [12, 33, 3, 8, 45]. In the present study we use a methodology described in [4] to overcome these difficulties and to extend the range of ellipticity at which flow properties can be evaluated. In particular, the effects of vessel deformation on flow rate and on shear stress distribution along the vessel wall are presented. ## 2 Model equations and consequences Consider an ellipse with semi-major and semi-minor axes, \(\alpha\), \(\beta\), respectively. Figure 2 shows ellipses in confocal elliptic \(\xi\), \(\eta\) coordinates, which we will find useful. If the foci are at \((\pm d,0)\) then the normal Cartesian coordinates are \(x=d\cosh\xi\cos\eta\) and \(y=d\sinh\xi\sin\eta\). If the parameter of the outer ellipse is \(\xi_{0}\) then \(\alpha=d\cosh\xi_{0}\) and \(\beta=d\sinh\xi_{0}\). We have \(d^{2}+\beta^{2}=\alpha^{2}\) from elementary geometry. The eccentricity \(\varepsilon\) of the outermost ellipse, at \(\xi=\xi_{0}\), is defined by \(\varepsilon=d/\alpha=\mathrm{sech}\xi_{0}\). Thus the eccentricity of the confocal ellipses changes as \(\xi_{0}\) changes. 
Using the parametrization \(x=\alpha\sin\theta\) and \(y=\beta\cos\theta\), the circumference of the ellipse is given by

\[4\int_{0}^{\pi/2}\sqrt{\left(\frac{dx}{d\theta}\right)^{2}+\left(\frac{dy}{d\theta}\right)^{2}}\ d\theta \tag{2.1}\]
\[=4\alpha\mathrm{E}(\varepsilon) \tag{2.2}\]

where

\[\mathrm{E}(\varepsilon)=\int_{0}^{\pi/2}\sqrt{1-\varepsilon^{2}\sin^{2}\theta}\ d\theta \tag{2.3}\]

is the complete elliptic integral of the second kind (20).

Figure 1: A blood vessel of circular cross section is compressed by surrounding tissue such that its cross section becomes elliptic with semiminor axis \(\beta=f_{e}a\), where \(f_{e}\) is a prescribed fraction of the circle radius \(a\). Under a passive scenario (blue) regulatory control is absent and the length of the circumference of the resulting ellipse is the same as that of the circle. Under an active scenario (red) the regulatory system intervenes in an attempt to keep the cross sectional area of the resulting ellipse the same as that of the circle.

_Passive scenario._ Under this scenario the change from circular to elliptic cross section occurs while keeping the length of the circumference constant. For an ellipse of eccentricity \(\varepsilon\) and a circle of radius \(a\) to have the same length of circumference, we have

\[2\pi a=4\alpha\mathrm{E}(\varepsilon) \tag{2.4}\]

therefore

\[\frac{\alpha}{a}=\frac{\pi}{2\mathrm{E}(\varepsilon)} \tag{2.5}\]
\[\frac{\beta}{a}=\sqrt{1-\varepsilon^{2}}\,\frac{\pi}{2\mathrm{E}(\varepsilon)} \tag{2.6}\]

If the area of the ellipse is denoted by \(\mathrm{S}_{e}\) \((=\pi\alpha\beta)\) and the area of the circle is denoted by \(\mathrm{S}_{c}\) \((=\pi a^{2})\), then the ratio of the two is given by

\[\frac{\mathrm{S}_{e}}{\mathrm{S}_{c}}=\left(\frac{\pi}{2\mathrm{E}(\varepsilon)}\right)^{2}\sqrt{1-\varepsilon^{2}} \tag{2.7}\]

Figure 2: Confocal elliptic coordinate system \(x=d\cosh\xi\cos\eta\), \(y=d\sinh\xi\sin\eta\) used in the solution of the governing equations, where \(\xi=\xi_{0}\) is the outer circumference of a cross section of the tube of elliptic cross section. Foci at \((\pm d,0)\) are indicated by solid black dots. The length of the semimajor axis of the outermost ellipse is \(\alpha=d\cosh\xi_{0}\) and the length of the semiminor axis is \(\beta=d\sinh\xi_{0}\).

In the passive scenario, where the circumference remains constant on deformation from a circle of radius \(a\), the foci of the ellipse are located at \((\pm d,0)\) where

\[d=\frac{\pi\varepsilon}{2\mathrm{E}(\varepsilon)}\,a\,. \tag{2.8}\]

This tends to \(\pi a/2\) as \(\varepsilon\) tends to \(1\). Compressing a circle of original radius \(a\) to an ellipse with semi-minor axis \(\beta=f_{e}a\), with \(f_{e}<1\), while keeping the circumference \(2\pi a=4\mathrm{E}(\varepsilon)\alpha\) constant requires that \(\alpha=g_{e}a\), where \(g_{e}>1\) is given by the following implicit formulae from equations (2.5)-(2.6):

\[f_{e}=\frac{\pi\sqrt{1-\varepsilon^{2}}}{2\mathrm{E}(\varepsilon)} \tag{2.9}\]
\[g_{e}=\frac{\pi}{2\mathrm{E}(\varepsilon)}\,. \tag{2.10}\]

To find \(g_{e}\) for a given \(f_{e}\) one must solve the transcendental equation (2.9) for \(\varepsilon\), and then use that value in the equation for \(g_{e}\). This is straightforward in Maple, by use of the command fsolve. For convenience, we tabulate some fractions in Table 1.
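For readers without Maple, the same computation takes a few lines of Python. This is a minimal sketch (ours), assuming scipy is available; note that scipy's ellipe takes the parameter \(m=\varepsilon^{2}\), not \(\varepsilon\) itself. It reproduces Table 1:

```python
import numpy as np
from scipy.special import ellipe       # complete elliptic integral E(m), m = eps^2
from scipy.optimize import brentq

def passive_axes(f_e):
    """Solve equation (2.9) for the eccentricity eps at a given compression
    fraction f_e = beta/a, and return (eps, g_e) with alpha = g_e * a."""
    resid = lambda eps: np.pi * np.sqrt(1.0 - eps**2) / (2.0 * ellipe(eps**2)) - f_e
    eps = brentq(resid, 1e-9, 1.0 - 1e-12)      # eccentricity lies in (0, 1)
    return eps, np.pi / (2.0 * ellipe(eps**2))  # g_e from equation (2.10)

for f_e in (0.4, 0.5, 0.6, 0.7, 0.8, 0.9):
    eps, g_e = passive_axes(f_e)
    print(f"f_e = {f_e:.1f}   eps = {eps:.4f}   g_e = {g_e:.3f}")
```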
_Active scenario._ Under this scenario the change from circular to elliptic cross section occurs while keeping the cross sectional area constant. If a circle of radius \(a\) is compressed so that its semi-minor axis \(\beta=f_{e}a\) is a given fraction \(f_{e}\) of the original radius, then since the area \(\pi\alpha\beta\) must remain equal to \(\pi a^{2}\), we must have \(\alpha=a/f_{e}\). The ratio of the circumference of an ellipse to the circumference of a circle with the same area is

\[\frac{C_{\rm e}}{C_{\rm c}}=\frac{2\mathrm{E}(\varepsilon)}{\pi(1-\varepsilon^{2})^{1/4}}\;. \tag{2.11}\]

This ratio is plotted in figure 4.

Figure 3: Relationship between the areas of the circular and the elliptic cross sections when the lengths of their circumferences are the same. \(\mathrm{S}_{e}\) and \(\mathrm{S}_{c}\) are the areas of the elliptic and circular cross sections, respectively.

In the limit, as the vessel cross section is flattened at constant circumference such that \(f_{e}\to 0\), the area of the ellipse vanishes (see figure 3). In the active scenario, where on deformation from a circle of radius \(a\) the circumference is stretched by the regulatory system in order to keep the area constant, the foci of the ellipse are located at \((\pm d,0)\) where

\[d=\frac{\varepsilon}{(1-\varepsilon^{2})^{1/4}}\,a\,. \tag{2.12}\]

This is unbounded as \(\varepsilon\to 1^{-}\).

\begin{table} \begin{tabular}{c|c|c} \(f_{e}=\beta/a\) & \(\varepsilon\) & \(g_{e}=\alpha/a\) \\ \hline 0.4 & 0.9611 & 1.448 \\ 0.5 & 0.9334 & 1.392 \\ 0.6 & 0.8925 & 1.330 \\ 0.7 & 0.8314 & 1.260 \\ 0.8 & 0.7359 & 1.182 \\ 0.9 & 0.5698 & 1.096 \\ \end{tabular} \end{table} Table 1: Major and minor semi-axes \(\alpha\), \(\beta\) of an ellipse having the same length of circumference as a circle of radius \(a\), where \(\varepsilon\) is the eccentricity of the ellipse.

Figure 4: The ratio of the circumference of an ellipse to the circumference of a circle with the same area. The formula is \(C_{e}/C_{c}=2\mathrm{E}(\varepsilon)/(\pi(1-\varepsilon^{2})^{1/4})\), with \(\mathrm{E}(\varepsilon)\) being the complete elliptic integral of the second kind. As the fraction \(f_{e}\to 0\), the eccentricity \(\varepsilon\to 1\), and the ratio of circumferences is singular, as one would expect. The figure thus points to an intrinsic limitation of the regulatory system in its attempt to increase the length of the circumference of a blood vessel of elliptic cross section so as to maintain its cross sectional area.

**Steady Flow in Tubes of Elliptic Cross Sections.** The properties of steady flow in a tube of elliptic cross section will be used as reference for the corresponding properties in pulsatile flow. The function governing the axial velocity \(u_{0,e}\) is given by (44):

\[u_{0,e}=-\frac{k_{0}\alpha^{2}\beta^{2}}{2\mu\left(\alpha^{2}+\beta^{2}\right)}\left(1-\frac{x^{2}}{\alpha^{2}}-\frac{y^{2}}{\beta^{2}}\right) \tag{2.13}\]

where \(\mu\) is the viscosity and \(k_{0}\) is the constant pressure gradient driving the flow.
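As a quick symbolic sanity check (ours, not part of the original derivation), sympy confirms that the profile (2.13) satisfies the steady momentum balance \(\mu\nabla^{2}u=k_{0}\) and the no-slip condition on the elliptic boundary:

```python
import sympy as sp

x, y, alpha, beta, mu, k0 = sp.symbols('x y alpha beta mu k0', positive=True)
u = -k0*alpha**2*beta**2/(2*mu*(alpha**2 + beta**2)) * (1 - x**2/alpha**2 - y**2/beta**2)

# mu * laplacian(u) should reduce to the driving pressure gradient k0:
print(sp.simplify(mu*(sp.diff(u, x, 2) + sp.diff(u, y, 2))))   # -> k0
# No-slip: u vanishes on x^2/alpha^2 + y^2/beta^2 = 1, e.g. at (alpha, 0):
print(sp.simplify(u.subs({x: alpha, y: 0})))                   # -> 0
```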
The maximum velocity occurs at \(x=0,y=0\), the center of the ellipse:

\[\hat{u}_{0,e}=-\frac{k_{0}\alpha^{2}\beta^{2}}{2\mu\left(\alpha^{2}+\beta^{2}\right)} \tag{2.14}\]

and the volumetric flow rate is given by (44)

\[q_{0,e}=\frac{\hat{u}_{0,e}\mathrm{S}_{e}}{2} \tag{2.15}\]

Shear stress on the tube wall is given by (31)

\[\tau_{0,e}(x,y)=\frac{k_{0}\alpha^{2}\beta^{2}}{\alpha^{2}+\beta^{2}}\left(\frac{x^{2}}{\alpha^{4}}+\frac{y^{2}}{\beta^{4}}\right)^{1/2} \tag{2.16}\]

Maximum shear occurs at the ends of the minor axis:

\[\hat{\tau}_{0,e}=\frac{k_{0}\alpha^{2}\beta}{\alpha^{2}+\beta^{2}} \tag{2.17}\]

Minimum shear occurs at the ends of the major axis:

\[\check{\tau}_{0,e}=\frac{k_{0}\alpha\beta^{2}}{\alpha^{2}+\beta^{2}} \tag{2.18}\]

Maximum velocity and maximum shear are related by

\[\hat{u}_{0,e}=-\frac{\beta}{2\mu}\hat{\tau}_{0,e} \tag{2.19}\]

The corresponding quantities for a circle are obtained by letting \(\beta\to\alpha\), or equivalently \(f_{e}\to 1\). Then, for instance, the maximum and minimum shear both become \(k_{0}a/2\).

In Figures 5-7 we use these formulas to compare steady flow in the two different scenarios, active (depicted with red curves in the figures) and passive (depicted with black curves). All of the quantities above involve the semimajor and semiminor axes \(\alpha\) and \(\beta\). While \(\beta=f_{e}a\) is the same for both scenarios, the value of \(\alpha\) differs: it is \(a/f_{e}\) in the active scenario and \(g_{e}a\) in the passive scenario, where \(g_{e}\) is computed by solving a transcendental equation. We see that there is indeed some difference in the flow quantities that arises in the two scenarios.

**Pulsatile Flow in Tubes of Elliptic Cross Sections.** For pulsatile flow driven by an oscillatory pressure gradient, the axial velocity \(u_{\phi,e}\) is governed by

\[\rho\frac{\partial u_{\phi,e}}{\partial t}=-\frac{\partial p}{\partial z}+\mu\left(\frac{\partial^{2}u_{\phi,e}}{\partial x^{2}}+\frac{\partial^{2}u_{\phi,e}}{\partial y^{2}}\right) \tag{2.21}\]

where \(\rho\) is the fluid density. We solve this equation in the confocal elliptic coordinates

\[x=d\cosh\xi\cos\eta\,,\qquad y=d\sinh\xi\sin\eta \tag{2.22}\]

where the foci are at \((\pm d,0)\) and \(\xi\), \(\eta\) are the elliptic coordinates. Using these confocal elliptic coordinates, an oscillatory pressure gradient of the form

\[\frac{\partial p}{\partial z}=k_{0}e^{i\omega t}\;, \tag{2.23}\]

and separation of variables

\[u_{\phi,e}(\xi,\eta,t)=w(\xi,\eta)e^{i\omega t}\;, \tag{2.24}\]

equation (2.21) can be formulated as an inhomogeneous Helmholtz equation

\[\frac{2}{d^{2}(\cosh 2\xi-\cos 2\eta)}\left(\frac{\partial^{2}w}{\partial\xi^{2}}+\frac{\partial^{2}w}{\partial\eta^{2}}\right)-\frac{i\rho\omega}{\mu}w=\frac{k_{0}}{\mu}\;. \tag{2.25}\]

Using the translation

\[w(\xi,\eta)=v(\xi,\eta)-\frac{k_{0}}{i\rho\omega}\;, \tag{2.26}\]

the inhomogeneous term of equation (2.25) is eliminated and the equation becomes

\[\left(\frac{\partial^{2}v}{\partial\xi^{2}}+\frac{\partial^{2}v}{\partial\eta^{2}}\right)-\frac{i}{2}\Lambda_{e}\left(\cosh 2\xi-\cos 2\eta\right)v=0\;, \tag{2.27}\]

where

\[\Lambda_{e}=\frac{\rho\omega d^{2}}{\mu} \tag{2.28}\]

is a nondimensional frequency parameter.

Figure 5: Maximum velocity in steady flow in a tube of elliptic cross section from equation (2.14) compared to that in a tube of circular cross section in the two scenarios, active (red) and passive (black).

The boundary conditions are given by

\[v(\xi_{0},\eta)=\frac{k_{0}}{i\rho\omega}\ \text{(no slip at tube wall)} \tag{2.29}\]
\[\frac{\partial v}{\partial\xi}\Big{|}_{\xi=0}=0\ \text{(symmetry)} \tag{2.30}\]
\[v(\xi,0)=v(\xi,\pi)\ \text{($\pi$ periodic in $\eta$)} \tag{2.31}\]

While direct numerical solution of the governing equation (2.27) is also possible, in this paper we pursue a solution based on the use of separation of variables, leading to the use of Mathieu functions.
We do this in order to maintain the analytical connection with the classical solution of pulsatile flow in tubes of circular cross sections based on Bessel functions (44). The use of Mathieu functions is not as straightforward as the use of Bessel functions, however, in part because of numerical difficulties in the evaluation of Mathieu functions of imaginary arguments. Balancing that, this method is spectrally accurate, and does not require many eigenfunctions for the range of \(q\) that we consider here. Typically, we need only terms up to about \(N=6\) or \(N=8\). We postpone discussion of how to construct and evaluate the solution until section 3.

In detail, the method proceeds as follows. The treatment is standard, and we include it mostly for notation and readability. Applying separation of variables to equation (2.27) yields two separate equations:

\[\frac{d^{2}g}{d\eta^{2}}+\left(s-2q\cos 2\eta\right)g=0 \tag{2.32}\]
\[\frac{d^{2}f}{d\xi^{2}}-\left(s-2q\cosh 2\xi\right)f=0 \tag{2.33}\]

where \(s\) is a separating constant and

\[q=-\frac{i\Lambda_{e}}{4}\;. \tag{2.34}\]

Figure 6: Maximum wall shear stress in steady flow in a tube of elliptic cross section from equation (2.17), scaled by the constant shear stress on the boundary of a tube of circular cross section, and compared under the active (red) and passive (black) regulatory scenarios.

There is some risk of notational confusion because flow rate is often denoted by the variable \(q\); here we will use flow variables with subscripts only, and the undecorated symbol \(q\) will refer to the parameter in equation (2.34). This notation is standard for Mathieu functions, and we believe less confusion results when we use symbols in this fashion. Equation (2.32) is the Mathieu equation, and equation (2.33) is the modified Mathieu equation. These equations are equivalent, with the change of variable \(\xi=i\eta\). The character of the solutions of the two equations is quite different, however. Since in the present problem \(\eta\) varies from \(0\) to \(2\pi\), the solution \(g\) must have periodicity \(\pi\) or \(2\pi\), which only occurs for discrete values of \(s\), the eigenvalues of the Mathieu equation, designated by \(s_{m}\) in (28). These eigenvalues are more commonly denoted nowadays with the letters \(a_{m}\) and \(b_{m}\); see the DLMF [https://dlmf.nist.gov/28.2.ii](https://dlmf.nist.gov/28.2.ii). This results in the set of Mathieu eigenfunctions \(\mathrm{ce}_{m}\) and \(\mathrm{se}_{m}\) for equation (2.32). Both \(\mathrm{ce}_{m}\) and \(\mathrm{se}_{m}\) are periodic; \(\mathrm{ce}_{m}\) is even, whereas \(\mathrm{se}_{m}\) is odd. By convention, if \(m\) is odd, then the Mathieu functions have period \(2\pi\), while if \(m\) is even, the Mathieu functions have period \(\pi\). In our problem, we are only interested in even \(m\) values, so as to have \(\pi\)-periodicity, and in even functions, to satisfy symmetry along both axes1.

Footnote 1: Symmetry breaking might very well be possible in a physical situation, and we believe it will be worthwhile to investigate this in future work.

The even \(\pi\)-periodic solutions of Eq. (2.32) are the ordinary Mathieu functions, denoted by \(\mathrm{ce}_{2m}(\eta,q)\).
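scipy's angular Mathieu functions support only real \(q\) (the purely imaginary \(q\) of the present problem is exactly why the machinery of section 3 is needed), but for real \(q\) the orthogonality exploited below can be checked directly. This is our own illustration, not the authors' code; note that scipy takes the angular argument in degrees:

```python
import numpy as np
from scipy.special import mathieu_cem   # angular Mathieu function ce_m
from scipy.integrate import quad

q = 1.0                                  # real q only, for illustration
ce = lambda m, eta: mathieu_cem(m, q, np.degrees(eta))[0]
inner, _ = quad(lambda eta: ce(0, eta) * ce(2, eta), 0.0, 2.0 * np.pi)
print(inner)    # ~ 0: ce_0 and ce_2 are orthogonal over a full period
```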
We will also need the modified Mathieu functions for the same \(m\) and the same value of \(q\), which are solutions of Eq. (2.33).

Figure 7: Flow rate in steady flow in a tube of elliptic cross section from equation (2.15), scaled by the flow rate in a tube of circular cross section, and compared under the active (red) and passive (black) regulatory scenarios.

The solution that we will compute will then be of the form

\[v(\xi,\eta)=\frac{k_{0}}{i\rho\omega}\sum_{m\geq 0}b_{2m}\mathrm{Ce}_{2m}(\xi,q)\mathrm{ce}_{2m}(\eta,q) \tag{2.35}\]

where the coefficients \(b_{2m}\) will be determined by the no-slip boundary condition, and we have taken the opportunity for a convenient scaling by \(k_{0}/(i\rho\omega)\). For certain values of \(q\), however, such as the Mulholland-Goldstein value \(q\approx 1.4688\,i\) (see (4)), the Mathieu equation has double eigenvalues, and at those points special care must be taken, because the ordinary Mathieu functions no longer form a complete set of orthogonal functions for expansion. As \(q\) tends to the Mulholland-Goldstein point, \(\mathrm{ce}_{0}(\eta)\) and \(\mathrm{ce}_{2}(\eta)\) coalesce and become the same function, and to ensure that expansion in these eigenfunctions is possible (also known as "completeness"), a generalized eigenfunction must be added to the set of Mathieu functions. In practice, as we will see, these isolated points make little difference to the solution because the overall problem is continuous (indeed analytic) in \(q\), and so it is only the solution process which must be altered at these points. Again, this is discussed in (4).

Using the no slip boundary condition from equation (2.29), we find

\[v(\xi_{0},\eta)=\frac{k_{0}}{i\rho\omega}=\frac{k_{0}}{i\rho\omega}\sum_{m\geq 0}b_{2m}\mathrm{Ce}_{2m}(\xi_{0},q)\mathrm{ce}_{2m}(\eta,q)\;. \tag{2.36}\]

Since the Mathieu functions are orthogonal under the bilinear form

\[\langle f,g\rangle:=\int_{\eta=0}^{2\pi}f(\eta)g(\eta)\,d\eta \tag{2.37}\]

(and if they have period \(\pi\), the upper limit on the integral can be reduced to \(\pi\)), multiplying equation (2.36) by \(\mathrm{ce}_{2p}(\eta,q)\) and integrating with respect to \(\eta\) gives

\[\int_{0}^{2\pi}\mathrm{ce}_{2p}(\eta,q)\ d\eta=b_{2p}\mathrm{Ce}_{2p}(\xi_{0},q)\int_{0}^{2\pi}\mathrm{ce}_{2p}^{2}(\eta,q)\ d\eta\;. \tag{2.38}\]

We note that the bilinear form does not involve the complex conjugate. Eigenvalues need not be real, and as parameters vary, eigenfunctions can coalesce. Expansion in Mathieu functions is similar to harmonic expansion, but more complicated. In the usual case, when eigenvalues are simple, \(b_{2m}\) is given by

\[b_{2m}=\frac{\int_{0}^{2\pi}\mathrm{ce}_{2m}(\eta,q)\ d\eta}{\mathrm{Ce}_{2m}(\xi_{0},q)I_{2m}}\;. \tag{2.39}\]

Here

\[I_{2m}=\int_{0}^{2\pi}\mathrm{ce}_{2m}^{2}(\eta,q)\ d\eta\;. \tag{2.40}\]

We compute these integrals by doing exact integration of the polynomial "blends" interpolating the solution, as described in section 3. The computational cost of this is trivial.

**Remark 1**: The integral \(I_{2m}\) can be zero. In particular, if \(q=1.4688\ldots i\) (the Mulholland-Goldstein point mentioned earlier) then this integral is zero. In this case, the expansion must be computed by a different method. We ignore this possibility for the moment.

**Remark 2**: The value of \(\mathrm{Ce}_{2m}(\xi_{0},q)\) might a priori be zero. In this case, we would have found a natural frequency of oscillation, and the solution would exhibit resonance. We did not encounter resonance in any of the configurations we tried.
It seems that symmetric, even Mathieu functions with purely imaginary values of \(q\) have no zeros on the imaginary axis, although we have not proved this. With this nonzero integral, equation (2.26) for \(w\) becomes

\[w(\xi,\eta)=-\frac{k_{0}}{i\rho\omega}\left(1-\sum_{m\geq 0}b_{2m}\mathrm{Ce}_{2m}(\xi,q)\mathrm{ce}_{2m}(\eta,q)\right)\;. \tag{2.41}\]

**Oscillatory Velocity.** The oscillatory flow velocity in a tube of elliptic cross section is then (13)

\[u_{\phi,e}(\xi,\eta,t)=\frac{4\hat{u}_{0,e}}{i\lambda_{e}}\left(1-\sum_{m\geq 0}b_{2m}\mathrm{Ce}_{2m}(\xi,q)\mathrm{ce}_{2m}(\eta,q)\right)e^{i\omega t} \tag{2.42}\]

where

\[\lambda_{e}=\frac{1}{2}\sinh 2\xi_{0}\tanh 2\xi_{0}\,\Lambda_{e}=\frac{2(1-\varepsilon^{2})}{\varepsilon^{2}\left(2-\varepsilon^{2}\right)}\Lambda_{e} \tag{2.43}\]

is a second nondimensional frequency parameter.

**Oscillatory Flow Rate.** The flow rate is obtained by integrating the oscillatory velocity over the elliptic cross section:

\[q_{\phi,e}(t)=\iint\limits_{D}u_{\phi,e}(\xi,\eta,t)\ dA \tag{2.44}\]
\[=\left(\iint\limits_{D}v(\xi,\eta)\ dA+\frac{4\hat{u}_{0,e}}{i\lambda_{e}}\iint\limits_{D}\ dA\right)e^{i\omega t} \tag{2.45}\]

where \(D\) is the region enclosed by the bounding ellipse. The second integral on the right side of equation (2.45) can be evaluated analytically:

\[\frac{4\hat{u}_{0,e}}{i\lambda_{e}}\iint\limits_{D}\ dA=\frac{8q_{0,e}}{i\lambda_{e}} \tag{2.46}\]

where \(q_{0,e}\) is the steady flow rate in a tube of elliptic cross section (equation (2.15)). The first integral on the right hand side of equation (2.45) is then evaluated using \(v\) in equation (2.27):

\[\iint\limits_{D}v\left(\xi,\eta\right)\ dA=\frac{\mu}{i\rho\omega}\iint\limits_{D}\frac{2}{d^{2}(\cosh 2\xi-\cos 2\eta)}\left(\frac{\partial^{2}v}{\partial\xi^{2}}+\frac{\partial^{2}v}{\partial\eta^{2}}\right)\ dA \tag{2.47}\]

If \(n\) is an outward pointing normal and \(ds\) is an element of arc length of the boundary, then by Green's theorem it follows that

\[\frac{\mu}{i\rho\omega}\iint\limits_{D}\frac{2}{d^{2}(\cosh 2\xi-\cos 2\eta)}\left(\frac{\partial^{2}v}{\partial\xi^{2}}+\frac{\partial^{2}v}{\partial\eta^{2}}\right)\ dA=\frac{\mu}{i\rho\omega}\int_{\partial D}\frac{\partial v}{\partial n}\ ds \tag{2.48}\]

where \(\partial D\) is the positively oriented bounding curve of \(D\). It is shown in McLachlan (24) that \(ds=\delta\,d\eta\) and \(dn=\delta\,d\xi\), where

\[\delta=d\left(\cosh^{2}\xi-\cos^{2}\eta\right)^{1/2} \tag{2.49}\]

Thus, equation (2.48) becomes

\[\frac{\mu}{i\rho\omega}\int_{\partial D}\frac{\partial v}{\partial n}\ ds=\frac{\mu}{i\rho\omega}\int_{0}^{2\pi}\left(\frac{\partial v}{\partial\xi}\right)_{\xi=\xi_{0}}d\eta \tag{2.50}\]

and

\[\left(\frac{\partial v}{\partial\xi}\right)_{\xi=\xi_{0}}=\frac{k_{0}}{i\rho\omega}\sum_{m\geq 0}b_{2m}\mathrm{Ce}^{\prime}_{2m}(\xi_{0},q)\mathrm{ce}_{2m}(\eta,q) \tag{2.51}\]

where \({}^{\prime}\) denotes differentiation with respect to \(\xi\). Because blends are polynomials, differentiation with them is simple, and the code we use provides for this automatically. We thus get (apart from rounding errors) exact derivatives of the interpolants being used to represent the solutions. Because the interpolants are of such high order, the derivatives are themselves accurate: while interpolants typically lose an order of accuracy for each derivative taken, if one starts with order 16 then taking one derivative does not do much harm.
We remark that with high enough frequency, however, which does occur with large eigenvalues for Mathieu functions, one would need to work in higher precision to maintain this accuracy. For the computations of this paper, we used higher precision only to check the numerics, and found double precision to be perfectly satisfactory.

Integration of this formula with respect to \(\eta\) is straightforward, using the exact quadrature formula for blendstrings. In fact we have already integrated each of these functions, in computing the \(b_{2m}\). (NB: if the integrals were only to \(\pi\) and not to \(2\pi\), one must multiply the following formula by 2.) Using equation (2.40) we find that

\[\frac{\mu}{i\rho\omega}\int_{0}^{2\pi}\left(\frac{\partial v}{\partial\xi}\right)_{\xi=\xi_{0}}d\eta=-\frac{\mu k_{0}}{\rho^{2}\omega^{2}}\sum_{m\geq 0}b_{2m}^{2}I_{2m}\mathrm{Ce}_{2m}^{\prime}(\xi_{0},q)\mathrm{Ce}_{2m}(\xi_{0},q)\:. \tag{2.52}\]

If we further use the relation

\[\frac{\lambda_{e}\mu}{\rho\omega\mathrm{S}_{e}}=\frac{1}{\pi}\tanh 2\xi_{0}\:, \tag{2.53}\]

this implies that the oscillatory flow rate in a tube of elliptic cross section is given by the following (cf. (13)):

\[q_{\phi,e}(t)=\frac{8q_{0,e}}{i\lambda_{e}}\left(1-\frac{1}{i\pi\lambda_{e}}\tanh 2\xi_{0}\sum_{m\geq 0}b_{2m}^{2}I_{2m}\mathrm{Ce}_{2m}^{\prime}(\xi_{0},q)\mathrm{Ce}_{2m}(\xi_{0},q)\right)e^{i\omega t}\:. \tag{2.54}\]

**Oscillatory Wall Shear Stress.** By its definition, the wall shear stress is given by

\[\tau_{\phi,e}\left(\eta,t\right)=\mu\left(\frac{\partial u_{\phi,e}}{\partial n}\right)_{\partial D}=\mu\left(\frac{\partial v}{\partial n}\right)_{\partial D}e^{i\omega t} \tag{2.55}\]

where \(u_{\phi,e}\) is the oscillatory velocity in a tube of elliptic cross section (equation (2.42)). Using the elemental arc length analysis of McLachlan (24), it can be shown that

\[\left(\frac{\partial v}{\partial n}\right)_{\partial D}=\frac{1}{\delta_{0}}\left(\frac{\partial v}{\partial\xi}\right)_{\xi=\xi_{0}} \tag{2.56}\]

where \(\delta_{0}\) is

\[\delta_{0}=d\left(\cosh^{2}\xi_{0}-\cos^{2}\eta\right)^{1/2} \tag{2.57}\]

Substituting from equation (2.51) for the derivative on the right hand side, this becomes

\[\left(\frac{\partial v}{\partial n}\right)_{\partial D}=\frac{1}{\delta_{0}}\left(\frac{k_{0}}{i\rho\omega}\sum_{m\geq 0}b_{2m}\mathrm{Ce}_{2m}^{\prime}(\xi_{0},q)\mathrm{ce}_{2m}(\eta,q)\right)\:. \tag{2.58}\]

Using equation (2.17) we can replace \(k_{0}\) by \(\hat{\tau}_{0,e}(\alpha^{2}+\beta^{2})/(\alpha^{2}\beta)\), or more conveniently by the limiting case of the circle: \(k_{0}=2\tau_{0,c}/a\). Remember that \(a\) is the radius of the original circle, and \(\beta=f_{e}a\). After some algebra we obtain the following expression for the oscillatory wall shear stress in a tube of elliptic cross section:

\[\tau_{\phi,e}\left(\eta,t\right)=\frac{4\beta f_{e}\tau_{0,c}}{i\delta_{0}\lambda_{e}(2-\varepsilon^{2})}\left(\sum_{m\geq 0}b_{2m}\mathrm{Ce}_{2m}^{\prime}(\xi_{0},q)\mathrm{ce}_{2m}(\eta,q)\right)e^{i\omega t} \tag{2.59}\]

For reference and comparison, the (constant) oscillatory wall shear stress in a tube of circular cross section is given by the following (42):

\[\tau_{\phi,c}=\frac{2\tau_{0,c}}{\Lambda_{c}}\frac{J_{1}(\Lambda_{c})}{J_{0}(\Lambda_{c})} \tag{2.60}\]

where \(J_{k}(z)\) for \(k=0\), \(1\) are Bessel functions of the first kind, \(\tau_{0,c}=k_{0}a/2\), and

\[\Lambda_{c}=\left(\frac{i-1}{\sqrt{2}}\right)\sqrt{\frac{\rho\omega}{\mu}}\:a\:. \tag{2.61}\]
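Equations (2.60)-(2.61) for the circular reference case are directly computable, since Bessel functions of complex argument are widely available. A minimal Python sketch (the parameter values below are ours, chosen for illustration only):

```python
import numpy as np
from scipy.special import jv       # Bessel J_0, J_1; accepts complex arguments

rho, mu = 1.05, 0.04               # density (g/cm^3) and viscosity (g/(cm s))
a, omega = 0.1, 2.0 * np.pi        # tube radius (cm) and frequency (rad/s)
k0 = -10.0                         # pressure gradient amplitude (dyn/cm^3)
tau_0c = k0 * a / 2.0              # steady wall shear in the circular tube

Lambda_c = ((1j - 1.0) / np.sqrt(2.0)) * np.sqrt(rho * omega / mu) * a     # (2.61)
tau_phi_c = (2.0 * tau_0c / Lambda_c) * jv(1, Lambda_c) / jv(0, Lambda_c)  # (2.60)
print(Lambda_c, tau_phi_c)
```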
## 3 Computation with Mathieu functions

We will not review all existing numerical methods for computing with Mathieu functions here, but instead refer to (4), which is available as an open-access article. We will, however, summarize the method that we actually used, and give a few more details about the method in a subsection that may be skipped by a reader more concerned with the results than with how we got them.

For notational convenience we refer to values of \(q\) with positive imaginary part; because the eigenvalues are the same for \(q\) and \(-q\) in the even and symmetric case (see e.g. the DLMF [https://dlmf.nist.gov/28.2](https://dlmf.nist.gov/28.2)), this is sufficient for our application (which has negative imaginary part) and saves writing many minus signs.

The previous work of Haslam and Zamir in (12) used truncations of an infinite tri-diagonal eigenvalue-eigenvector problem to obtain approximations to the eigenvalues \(a_{2m}\). This method goes back at least to the work of Ince, and is widely used (4). The matrix in question, for the even and symmetric eigenfunctions, is

\[\left[\begin{array}{cccccc}0&\sqrt{2}q&0&0&0&\cdots\\ \sqrt{2}q&4&q&0&0&\cdots\\ 0&q&16&q&0&\cdots\\ 0&0&q&36&q&\cdots\\ 0&0&0&q&64&\ddots\\ \vdots&\vdots&\vdots&\vdots&\ddots&\ddots\end{array}\right]\left[\begin{array}{c}\sqrt{2}A_{0}\\ A_{2}\\ A_{4}\\ A_{6}\\ A_{8}\\ \vdots\end{array}\right]=\lambda\left[\begin{array}{c}\sqrt{2}A_{0}\\ A_{2}\\ A_{4}\\ A_{6}\\ A_{8}\\ \vdots\end{array}\right]. \tag{13}\]

Truncation at "large enough" dimension gives good estimates of the eigenvalues, but there is a question of exactly how large we should take the matrix, and, once the eigenvalues have been computed, of how accurate they are. Notice that this is a complex symmetric matrix, not a Hermitian matrix.

In our computations, we start with the matrix method, but only to get initial estimates of the eigenvalues \(a_{2m}\). We then apply the continued fraction method of Blanch as described in (4) and use Newton's method to refine the eigenvalues to the desired accuracy. This tells us precisely how accurate each eigenvalue is, and is more efficient than computing larger and larger matrices until the eigenvalues converge. Our procedure works well enough for all simple eigenvalues, although sometimes we have to increase precision. For the double eigenvalues, we proceed differently. Double eigenvalues occur for purely imaginary \(q\), but (as elsewhere in the complex \(q\)-plane) only at isolated points: at the Mulholland-Goldstein point \(q\approx 1.4688i\), the next smallest at \(q\approx 16.47i\), and so on. We have pre-computed several of these by the method of Hunter and Guerrieri (15). They are tabulated in (4) and are also available on-line in the code repository for that paper.

Given numerical values for the semimajor axis \(\alpha\) and semiminor axis \(\beta\), and given a numerical value for the (purely imaginary) parameter \(q\), we computed the Mathieu eigenvalues \(a_{2m}(q)\) for \(m=0,\,1,\,\ldots,\,N\), up to a certain index \(N\) (frequently \(N\) was 6 and sometimes 8; because this is a spectral method, convergence is very rapid).
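Setting up the truncated matrix is simple in any language; here is a Python sketch (ours, not the authors' Maple code). It also exhibits the near-coalescence at the Mulholland-Goldstein point; note that a general eigensolver resolves a defective (double) eigenvalue only to roughly the square root of machine precision, one more reason that careful refinement, and the special treatment of section 4, is worthwhile.

```python
import numpy as np

def even_mathieu_matrix(q, N=20):
    """Truncation of the complex symmetric tridiagonal matrix above, whose
    eigenvalues approximate a_{2m}(q) for the even pi-periodic ce_{2m}."""
    A = np.diag([(2.0 * r) ** 2 for r in range(N)]).astype(complex)
    A += np.diag(np.full(N - 1, q), 1) + np.diag(np.full(N - 1, q), -1)
    A[0, 1] = A[1, 0] = np.sqrt(2.0) * q   # corner entries carry sqrt(2)
    return A

q_star = 1.4687686137851j   # Mulholland-Goldstein point, from (4)
a = np.sort_complex(np.linalg.eigvals(even_mathieu_matrix(q_star)))
print(a[:3])   # first two nearly coalesce at a* = 2.08869890...
```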
If the eigenvalues were distinct (which was usually, but not always, the case) then we computed the Mathieu functions \(\mathrm{ce}_{2m}(\eta)\) on the interval \(0\leq\eta\leq\pi\) and the corresponding modified Mathieu functions \(\mathrm{Ce}_{2m}(\xi)\) on the interval \(0\leq\xi\leq\xi_{0}=\mathrm{invcosh}(\alpha/d)=\mathrm{invsech}(d/\alpha)=\mathrm{invsech}(\varepsilon)\). This is because \(\alpha=d\cosh\xi_{0}\) or, more simply, \(\varepsilon=\mathrm{sech}\,\xi_{0}\) gives the value \(\xi_{0}\) of the parameter \(\xi\) at the tube wall3.

Footnote 3: Here, we are using David Jeffrey's notation for functional inverses: \(y=\mathrm{invcosh}(x)\) means \(x=\cosh(y)\), etc. This notation is superior for branched inverses, and superior pedagogically even for simple functions, to the more common overloading of superscripts or use of the inappropriate word "arc", and we hope that it catches on.

To compute the Mathieu functions and modified Mathieu functions, we used the Hermite-Obreshkov integrator sketched in (4). We worked in double precision (except where noted explicitly here) and typically used an order 30 or 40 method, with Taylor series of grade (that is, degree at most) 15 or 20 computed on each marching step, and "blendstrings" as piecewise polynomial interpolants giving the value of the solution (and whatever derivatives were required).

### 3.1 More details of the numerical method

We treat the Mathieu equation (and the modified Mathieu equation) as an initial-value problem (IVP) for an ordinary differential equation (ODE), once both \(q\) and the eigenvalue \(s\) are fixed. To compute the Mathieu function, we could use almost any standard method to solve the IVP5. But the modified Mathieu equation is related to the Mathieu equation by the change of variables \(\xi=i\eta\). That is, if the standard method chosen for the Mathieu equation could work in the complex plane, then it could also be used for the modified Mathieu equation. This idea restricts us to implementations that work over the complex plane, but because we have a complex parameter \(q\) (in fact, purely imaginary in our application), this is necessary anyway.

Footnote 5: We reassure the reader that we do know and highly value the standard general methods, as described for instance in the classic (10, 11). We are also aware of the truly remarkable advances made since then, such as are described in (32). We have even contributed to the literature and the software ecosystem in the past (37). But while writing a special-purpose solver for the Mathieu equation (when so many good solvers already exist) might seem quixotic, bear with us for a bit: it turns out to be useful and, we believe, interesting; in particular it is reassuring to have the ability to retrospectively measure how accurate the solutions are. Also, there is an opportunity for greater efficiency and control.

Since the Mathieu equation is linear, special-purpose methods appropriate for linear problems might be used. Moreover, since the Mathieu equation can be written in a "D-finite" or "holonomic" form6, Taylor series coefficients can be computed rapidly given the initial values \(y(\eta_{n})\) and \(y^{\prime}(\eta_{n})\). In fact, we do not use the D-finite form even though it does offer the potential of significant speed-up (26); this might be pursued in future work. Straightforward generation of Taylor coefficients by Cauchy convolution with those of \(\cos 2\eta\) was fast enough for our purposes.

Footnote 6: This fact was already known to Mathieu, although the names \(D\)-finite and holonomic had not been invented yet in 1868. But writing the differential equation in this form allows for faster human computation, too.
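A minimal Python sketch of that convolution-based recurrence (ours; the authors work in Maple) may make the simplicity concrete. Writing \(c_{k}=y^{(k)}(\eta_{0})/k!\), the Mathieu equation gives \((k+1)(k+2)\,c_{k+2}=-\sum_{j=0}^{k}w_{j}c_{k-j}\), where the \(w_{j}\) are the Taylor coefficients of \(s-2q\cos 2(\eta_{0}+t)\):

```python
import numpy as np
from math import factorial

def mathieu_taylor(y0, dy0, eta0, a, q, grade):
    """Taylor coefficients c[k] = y^(k)(eta0)/k! of a solution of
    y'' + (a - 2 q cos 2 eta) y = 0, by Cauchy convolution with the
    series of cos 2 eta about eta0."""
    # Coefficients w[j] of (a - 2 q cos 2(eta0 + t)) as a series in t:
    w = np.array([(a if j == 0 else 0.0)
                  - 2.0 * q * 2.0**j * np.cos(2.0 * eta0 + j * np.pi / 2.0) / factorial(j)
                  for j in range(grade + 1)], dtype=complex)
    c = np.zeros(grade + 1, dtype=complex)
    c[0], c[1] = y0, dy0
    for k in range(grade - 1):
        # y'' = -(a - 2 q cos 2 eta) y, matched coefficient by coefficient:
        c[k + 2] = -np.dot(w[:k + 1], c[k::-1]) / ((k + 1) * (k + 2))
    return c

# With q = 0 the recurrence must reproduce cos(sqrt(a) t); a quick check:
c = mathieu_taylor(1.0, 0.0, 0.0, a=4.0, q=0.0, grade=10)
print(c[:6].real)   # 1, 0, -2, 0, 2/3, 0: the series of cos 2t
```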
#### 3.1.1 Blends

We now explain the interpolants that we use. "Blends", or two-point Hermite interpolants, are described in (6). In brief, if one knows Taylor coefficients \(p_{j}\) for \(0\leq j\leq m\) at one end \(z_{k}\) of an interval, and Taylor coefficients \(q_{j}\) for \(0\leq j\leq n\) at the other end \(z_{k+1}\) of the interval, and \(z=z_{k}+sh\) where \(h=z_{k+1}-z_{k}\) is the width of the interval so that \(0\leq s\leq 1\), then the following polynomial (Hermite, Cours d'Analyse, 1873) "blends" the two sets of Taylor coefficients together to form an excellent approximation of the function over the interval:

\[H_{m,n}(s)=\sum_{j=0}^{m}\sum_{k=0}^{m-j}{n+k\choose k}s^{k+j}\left(1-s\right)^{n+1}p_{j}+\sum_{j=0}^{n}\left(-1\right)^{j}\sum_{k=0}^{n-j}{m+k\choose k}s^{m+1}\left(1-s\right)^{k+j}q_{j} \tag{32}\]

This polynomial satisfies \(H^{(j)}(0)/j!=p_{j}\) for \(0\leq j\leq m\) and \(H^{(j)}(1)/j!=q_{j}\) for \(0\leq j\leq n\). In this formula, differentiation is with respect to \(s\), and care must be taken to include the correct factors of \(h\) from the chain rule when using the formula for the interval \([z_{k},z_{k+1}]\). The error in Hermite interpolation is known; the results on the real line are given in (18) (and the complex results were known to Hermite). Here, the general real results simplify to

\[f(s)-H_{m,n}(s)=\frac{f^{(m+n+2)}(\theta)}{(m+n+2)!}s^{m+1}(s-1)^{n+1} \tag{33}\]

for some \(\theta=\theta(s)\) between \(0\) and \(1\).

If we have a sequence of nodes, say \(z_{k}\) for \(0\leq k\leq M\), where Taylor coefficients for an analytic function \(f(z)\) are known up to grade (say) \(m_{k}\) at each node, then it is natural to approximate \(f(z)\) on each segment from \(z=z_{k}\) to \(z=z_{k+1}\) by the blend determined by those two sets of Taylor coefficients. This gives a piecewise polynomial interpolant, which we call a "blendstring" for short. For instance, if Taylor series of only grade \(1\) are used at each node, then the blendstring is just the familiar piecewise cubic Hermite interpolant on each subinterval, and the result is similar to a cubic spline. Taylor series of only grade \(1\) do not give us the needed accuracy, though, and we always use much higher order. As described in (6), these interpolants are remarkably stable numerically, even for ludicrously high order such as \(m=500\), when implemented in a doubly-recursive Horner form. This turns out to be quite convenient for this application, where we typically use grades of \(15\) or so but sometimes as high as \(40\).

Blends can be integrated exactly, as follows, and this is useful (6):

\[\int_{s=0}^{1}H_{m,n}(s)\,ds=\frac{(m+1)!}{(m+n+2)!}\sum_{j=0}^{m}\frac{(n+m-j+1)!}{(j+1)\,(m-j)!}\,p_{j}+\frac{(n+1)!}{(m+n+2)!}\sum_{j=0}^{n}\frac{(n+m-j+1)!}{(j+1)\,(n-j)!}\,\left(-1\right)^{j}q_{j}\;. \tag{10}\]

The numbers appearing in this formula turn out to be smaller for the higher-order Taylor coefficients, as one would expect. Note that the above formula gives (in exact arithmetic) the exact integral of the blend over the whole interval.
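The blend formula is easy to implement directly. The following Python sketch (ours, with illustrative names) evaluates \(H_{m,n}(s)\) from the double sums above and checks it against \(f=\exp\) on \([0,1]\); the observed error is consistent with the error formula (33):

```python
import numpy as np
from math import comb, factorial, exp

def blend(p, q, s):
    """Two-point Hermite interpolant ("blend"): p[j], q[j] are Taylor
    coefficients at s = 0 and s = 1 respectively."""
    m, n = len(p) - 1, len(q) - 1
    left = sum(p[j] * comb(n + k, k) * s**(k + j) * (1 - s)**(n + 1)
               for j in range(m + 1) for k in range(m - j + 1))
    right = sum((-1)**j * q[j] * comb(m + k, k) * s**(m + 1) * (1 - s)**(k + j)
                for j in range(n + 1) for k in range(n - j + 1))
    return left + right

# Sanity check with f = exp on [0, 1]: Taylor coefficients at both ends.
m = n = 5
p = [1.0 / factorial(j) for j in range(m + 1)]
q = [exp(1.0) / factorial(j) for j in range(n + 1)]
print(blend(p, q, 0.5) - exp(0.5))   # tiny: error ~ f^(12)(theta)/12! * 2^(-12)
```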
If the blend is approximating a function \(f(s)\), then integrating the error formula (33) gives us

\[\int_{s=0}^{1}f(s)\,ds-F(1)=(-1)^{n+1}\frac{(m+1)!(n+1)!}{(m+n+3)!}\frac{f^{(m+n+2)}(c)}{(m+n+2)!} \tag{11}\]

where \(F(1)\) denotes the exact integral of the blend, from equation (10), and where, using the Mean Value Theorem for integrals and the fact that \(s^{m+1}(1-s)^{n+1}\) is of one sign on the interval, we replace the evaluation of the derivative at one unknown point \(\theta\) with another unknown point \(c\) on the interval. Indeed, as described in (6), one can construct a new blendstring \(H(z)\) for the antiderivative \(F(z)\) from a blendstring \(h(z)\) for \(f(z)\), so that \(H^{\prime}(z)=h(z)\) exactly (up to roundoff error) and well approximates the antiderivative of \(f(z)\). This is useful for the problem at hand.

The code is available at [https://github.com/rorless/Puiseux-series-Mathieu-double-points](https://github.com/rorless/Puiseux-series-Mathieu-double-points) in the files ActiveLoopc1p0.maple for the simple eigenvalue case and ActiveDoubles.maple for the double eigenvalue case.

#### 3.1.2 Marching

We chose an implicit marching method based on Taylor series generation7, quite standard in outline, as follows. Taylor series coefficients that have been generated at the current node, say \(\eta_{n}\), are supposed to be "known". Specifically, suppose to start with that we have generated a Taylor polynomial of grade \(m\) for our desired solution at this point.

Footnote 7: Taylor series methods for solving IVP for ODE have historically been considered impractical by many people, but in fact this is not so, especially if the series coefficients can be generated easily, as in this case. The quality of the free interpolants that one gets turns out to be a significant benefit. Taylor series methods have other benefits as well: see (29) and its references.

Suppose also that we have chosen a tentative next node, \(\eta_{n+1}=\eta_{n}+h\). If our variable \(\eta\) were time, this would be a time step. The stepsize \(h\) is tentative at this point. We now generate Taylor coefficients for two independent solutions, satisfying (for one solution)

\[y(\eta_{n+1})=1\text{ and }y^{\prime}(\eta_{n+1})=0 \tag{12}\]

and (for the complementary solution)

\[y(\eta_{n+1})=0\text{ and }y^{\prime}(\eta_{n+1})=1\;. \tag{13}\]

Next, we blend the known coefficients at \(\eta_{n}\) with these independent solutions in the following way. Form a blend of the known coefficients at \(\eta_{n}\) with the zero Taylor series at \(\eta_{n+1}\); call the result \(L(\eta)\). Form a blend of the first series above at \(\eta_{n+1}\) with the zero Taylor series at \(\eta_{n}\); call the result \(C(\eta)\). Form a blend of the second series above with the zero Taylor series at \(\eta_{n}\); call the result \(S(\eta)\). Our desired solution will then be a linear combination of these three: say \(y=A\,C(\eta)+B\,S(\eta)+L(\eta)\). This uses the linearity of the equation, and the linear dependence of blends on their constituent Taylor coefficients. We then use collocation at the two points \(\eta_{n}+h/4\) and \(\eta_{n}+3h/4\) (which are Chebyshev-Lobatto points, not that it matters much at this low order) to give us two equations in the two unknowns \(A\) and \(B\).
That is, we compute the residuals

\[r_{L}(\eta):=L^{\prime\prime}+(s-2q\cos 2\eta)L \tag{14}\]
\[r_{C}(\eta):=C^{\prime\prime}+(s-2q\cos 2\eta)C \tag{15}\]
\[r_{S}(\eta):=S^{\prime\prime}+(s-2q\cos 2\eta)S\]

at those two points, and set the residual for \(y\) to zero at those two points:

\[0=A\,r_{C}(\eta_{n}+h/4)+B\,r_{S}(\eta_{n}+h/4)+r_{L}(\eta_{n}+h/4)\]
\[0=A\,r_{C}(\eta_{n}+3h/4)+B\,r_{S}(\eta_{n}+3h/4)+r_{L}(\eta_{n}+3h/4)\;.\]

We solve this two-by-two linear system by the exact formula for the inverse (this is as good a method as any, for such a small system) to obtain the coefficients \(A\) and \(B\). The system is nonsingular because the solutions are linearly independent at the right endpoint, and it is well-scaled and well-conditioned in practice, as we observed experimentally. Collocation is a well-understood technique for boundary-value problems for ODE (1, 2), but it has historically been used successfully for stiff initial-value problems as well (41).

After having computed \(A\) and \(B\) and used them to form our tentative solution \(y\), we then sample the residual of \(y\), namely \(y^{\prime\prime}+(s-2q\cos 2\eta)y\), at the midpoint \(\eta_{n+1/2}=\eta_{n}+h/2\). This is (asymptotically as \(h\to 0\)) the location of the maximum residual over the step. If this is smaller than our tolerance, we accept the step and continue. Note that if the step is accepted, the Taylor coefficients then become known at \(\eta_{n+1}\), being simply the known linear combination of the first and second sets of computed series coefficients. We also use the measured residual (by known step-size control techniques (9)) to predict the next step size \(h_{n+1}\) and thus \(\eta_{n+2}\). If the step is rejected instead, because it does not satisfy the accuracy tolerance, we reduce the stepsize by an amount indicated by the size of the measured residual (taking the order \(2m\) into account), and try again. Known heuristics and safety factors are included in order to be cautious about various contingencies (for instance, the measured residual might be accidentally small, which throws the predicted stepsize off; similarly, the stepsize predictions assume that the derivatives involved in the error coefficients "don't change much" from step to step, but this is sometimes violated in practice). Error messages can be generated if too many stepsize reductions are encountered, if the solver cannot find a good starting stepsize8, or if the maximum number of steps is reached, as is usual with IVP solvers.

Footnote 8: We start with a pure Taylor series to estimate the initial step size \(h\). This has some potential to go wrong, and sometimes does, because it does not benefit from implicitness, but we have found it satisfactory.

#### 3.1.3 Rationale

The reasons we do this, instead of using a more standard method that has already been implemented and tested, include the following.

1. We work from the beginning over the complex plane (most standard implementations put integration over the real line first).

2. We can handle the double-eigenvalue case in a straightforward way. To be fair, other methods can also handle this case, but at least we are not at a disadvantage.

3. The functions are entire, and therefore Taylor series are defined everywhere for them.
Since blendstrings are very smooth (with grade \(m\) Taylor coefficients at each knot, they are \(m\) times continuously differentiable), they may be expected to be accurate and convenient.

4. The problem is linear, so the implicitness of the method is simple to deal with (and there are no convergence issues in solving nonlinear equations at each step).

5. Putting \(\eta=\eta_{n}+sh\), the residual has the error expression9

\[r(\eta)=Kh^{2m}s^{m-1}(s-\tfrac{1}{4})(s-\tfrac{3}{4})(1-s)^{m-1}+O(h^{2m+1}) \tag{10}\]

for some "constant" \(K\) depending on high-order derivatives of the solution, evaluated at some point in the interval. In comparison to an explicit Taylor series method, this gains a factor of \(2^{2m+2}\) in accuracy, because the maximum value of the polynomial in \(s\) is \(2^{-2m-2}\) (a numerical check of this claim is sketched after this list). Since we typically take \(m=15\) or higher, this accuracy gain is noticeable.

Footnote 9: To show this, notice that the residual is \(O((z-z_{k})^{m-1})\) at the left endpoint (not \(O((z-z_{k})^{m+1})\), because we have differentiated twice), is \(O((z-z_{k+1})^{m-1})\) at the right endpoint, and vanishes at the Chebyshev-Lobatto points in between.

6. The effect of the residual on the solution can be analyzed by using the Green's function for the Mathieu equation, which can easily be computed by the same methods:

\[G(\eta,\tau)=w_{I}(\eta)w_{II}(\tau)-w_{I}(\tau)w_{II}(\eta)\;. \tag{11}\]

Here we use the notation for the basic solutions as described in the DLMF [https://dlmf.nist.gov/28.2.ii](https://dlmf.nist.gov/28.2.ii). The change in solution produced by a residual \(r(\eta)\) is

\[\int_{\tau=0}^{\eta}G(\eta,\tau)r(\tau)\,d\tau\,. \tag{3.12}\]

7. We can re-use standard stepsize heuristics, which are well-known to produce "good" meshes which reflect dynamic changes in the solution.

8. The Mathieu equation is not "stiff" with the stepsizes and tolerances we are using (38), but is rather oscillatory, and as such benefits somewhat from the implicitness of this method. There is still a stability restriction, but it is not very important compared to the stepsize restriction needed for accuracy, and this implicit method does perform better than a pure explicit Taylor series method.

9. Using a residual (defect) control is useful even for unstable differential equations. The modified Mathieu equation can be very unstable, exhibiting doubly exponential growth.

10. These Taylor coefficients are very easy to generate (a recurrence for them was sketched in section 3.1 above), and the code is quite simple. The fact that the order of accuracy can be chosen more or less arbitrarily is an advantage for very high-precision computation: the cost of an accurate solution is polynomial in the number of bits of accuracy (16).

11. We do want high-precision computation, because we want to be able to state unequivocally that numerical artifacts are not present, and to verify that any given solution is as accurate as the code claims. This is not a given, without an external check, because of the heuristics and safety factors needed in practice for the solver.

12. Taking derivatives and integrals of blends is very simple, and both of these are needed for subsequent computations with the solution. We are not just interested in the solution, but also in integrals and derivatives of the solution.

13. It might be true that this method is useful for the numerical solution of other, similar equations. In particular it might be of interest for D-finite (holonomic) systems. This application provides a useful test case.
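Item 5's claim about the size of the residual polynomial is easy to confirm numerically (our own check, not part of the original):

```python
import numpy as np

m = 15
s = np.linspace(0.0, 1.0, 200001)
poly = s**(m - 1) * (s - 0.25) * (s - 0.75) * (1 - s)**(m - 1)
print(np.abs(poly).max())     # attained at s = 1/2 ...
print(2.0**(-2 * m - 2))      # ... and equal to 2^(-2m-2)
```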
### 3.2 Testing the numerical solution

Because each computed Mathieu function and modified Mathieu function is a smooth piecewise polynomial, it can be differentiated and substituted back into the differential equation. What is left over is sometimes called the "defect", but the more usual name in numerical analysis is the residual. The solutions always had a residual comparable to the tolerance with which the solver was called: typically about \(10^{-11}\) if we were working in double precision, and about \(10^{-28}\) if we were working in 30 decimal digits. This is, of course, not enough to say that the forward error is small: one needs also to compute the Green's function, or otherwise verify that the condition number10 is small.

Footnote 10: By this we do not mean the condition number of a matrix (there are no matrices here) but rather the condition number of the Mathieu differential equation, which, since the equation is linear, is equivalent to the maximum value of the Green's function.

It turns out that for \(f_{e}\) near 1, i.e. when the ellipse is nearly circular and the eccentricity \(\varepsilon\) is small, \(\xi_{0}\) gets modestly large, and because the modified Mathieu functions grow doubly exponentially, the Green's function does indeed amplify errors in this case. Indeed, the condition number of \(\mathrm{Ce}_{0}(\xi,q)\) for evaluation, namely \(C=\xi\mathrm{Ce}_{0}^{\prime}(\xi,q)/\mathrm{Ce}_{0}(\xi,q)\), always grows exponentially with \(\xi\). For \(q=0.3823i\), a typical value of the parameter, the value of the condition number \(C(4.0)\) is approximately 120. This is tolerable. For all values of the parameters that we used, except for the stress test when \(f_{e}=0.9999\) (more about this below), the condition number was similarly modest. Another way to see this is to vary the parameters (such as \(f_{e}\) or \(\omega\)) and verify that the solution does not change much. A third way is to do the computations again in higher precision. We found in the end that our computation of the Mathieu functions and modified Mathieu functions was very reliable.

It is also possible to verify that the underlying PDE is satisfied: one computes (for instance) \(\partial^{2}w/\partial\xi^{2}\), \(\partial^{2}w/\partial\eta^{2}\), and \(w\) at one or several or a great many points, and substitutes these values back into the partial differential equation (2.25). When we do this we see that what is left over is about the size of the integration tolerance (typically quite near to the unit roundoff level in our runs), over the whole ellipse. Figure 8(a) shows the result of doing this in one case, for \(v(\xi,\eta)\) using equation (2.27). This kind of a posteriori solution validation is a powerful check against numerical errors. What we have proved by this a posteriori computation is that we have computed the exact solution of a perturbed PDE, where the perturbation is smaller than \(10^{-10}\). Compared to modelling errors (for instance, that the true deformed shape is not exactly elliptical, or the even greater modelling error of neglecting the third dimension), this shows definitively that the numerical method has performed satisfactorily. It is a useful guard against blunders, as well: we were reminded to use the chain rule, and also found a typographical error in one equation, when we did this. Finally one needs to check the boundary conditions. In figure 8(b) we see one such check.
The oscillatory nature of the error indicates that it is truncation error we are seeing: the effect of taking only \(N=6\) terms in the expansion. With a high enough \(N\), one sees only rounding errors at this stage.

#### 3.2.1 Difficult cases for the code

If \(q\) is small, then the continued fraction approach of Blanch becomes somewhat fragile. Blanch had performed a good numerical analysis of the method, and using her techniques it can be made to work well in this situation by various adjustments. In contrast, however, the performance of the matrix method improves as \(q\to 0\), so it is simpler just to drop the use of the continued fraction approach when \(q\) is "too small". We chose, somewhat arbitrarily, to use just the matrix method if \(|q|<0.2\). The solver is meant for use by people willing to adjust parameters experimentally and not, in Blanch's words, simply to be run "in a robot-like fashion."

The code has, in particular, an aggressive initial step-size heuristic based on explicit Taylor series. This got into trouble for some of the runs in the passive scenario with circumference 2.0 cm and fractions \(f_{e}\) closer to 1. We could have adjusted the parameters (tolerance and grade \(m\) of the Taylor approximation), but it was simpler to use higher precision for those runs, which we did in 30 decimal digits. The time penalty was slight, even though we have not optimized the code for speed. The first step for that, of course, would be to use a production language instead of a prototyping language such as Maple (which saves our time and not the computer's). Even so, the solver is gratifyingly rapid, even at very high precision.

The only real difficulty that occurs with expansion in Mathieu functions is when the fraction \(f_{e}\) is nearly 1. That is, the nearly-circular case is the difficult one for expansion in terms of Mathieu functions. This is because the coordinate transformation used, namely \(x=d\cosh\xi\cos\eta\) and \(y=d\sinh\xi\sin\eta\), becomes singular as the focal distance \(d\to 0\), which it must as the ellipse becomes a circle. Another way to think about this is to "zoom out" on confocal ellipses: the larger the diameter, the more nearly circular the confocal ellipses are. This singular behaviour shows up in several ways, numerically. For instance, taking \(f_{e}=0.999\) in the active scenario, and choosing \(\omega\) so that \(q\) is the Mulholland-Goldstein point, would seem to pose no problems. But it does, because \(\xi_{0}=\operatorname{invsech}(\varepsilon)\) is then about 4.6. Although that does not seem like much, the modified Mathieu functions grow doubly exponentially11; in this case, \(\operatorname{Ce}_{0}(\xi_{0},q^{*})\) has magnitude \(6.0972\cdot 10^{45}\). The modified Mathieu equation becomes very difficult to integrate accurately for large \(\xi\) because of this doubly exponential growth.

Footnote 11: One is used to exponential growth, but doubly exponential growth is remarkably difficult to deal with. To see the asymptotics for \(\operatorname{Ce}(\xi,q)\), look at the DLMF; see [https://dlmf.nist.gov/28.25](https://dlmf.nist.gov/28.25) in particular.

As a stress-test for the code, we solved the problem at very high precision with \(f_{e}=0.9999\) in both the active and passive scenarios (which wind up being very similar, of course). Using 200 decimal digits of precision, and Taylor series of degree 100 (so the numerical method was of order 200), we were able to solve the problems accurately in only a few seconds.
It is ironic that the "difficult case" resembles so strongly the simple case of a tube of circular cross section, which has a direct and natural solution in terms of Bessel functions.

#### 3.2.2 Comparison with a standard code

When we compare this code with that of (37), we see that the standard code performs very well, in fact. Even just with the default solver (a version of RKF45), all scenarios are rapidly solved. The doubly-exponential growth of the modified Mathieu equation is simply taken in stride by the code. We remark that that code, while more than 20 years old, has undergone steady development since then at the hands of Allan Wittkopf of Maplesoft, who has (without publishing papers on the subject) incorporated many speed and reliability enhancements. However, access to the internal interpolants used by the code is quite awkward, and it is not easily possible to differentiate the interpolant to compute a residual to validate the solution it produces. With the present code, this is simple (indeed automatic). Secondly, if very high precision is wanted, the higher order of the present code lowers the cost. Indeed, using this method, the cost of solution is polynomial in the number of bits of accuracy requested (16), while for fixed-order Runge-Kutta methods the cost is exponential in the number of bits of accuracy requested. At modest accuracy, or even at double precision accuracy, this is not a problem, of course. The third advantage of the present method is the decent numerical properties of the underlying interpolant. In comparison, the monomial basis used by the internal Maple code can suffer more from rounding errors (at high precision), although we have no doubt that the developers have taken steps to minimize the difficulty. Something that might have been a fourth advantage, the ease of combining and integrating blends to compute, for instance, the Green's function, is not much of an advantage after all: Maple's dsolve/numeric interface has several flexible features that let one combine solutions, and integrating the solution of a differential equation is merely a matter of adjoining a differential equation for the integral in question. Still, the present code offers some potential advantages for other applications, and using this problem as a test case for it has proved to be interesting.

## 4 The double-eigenvalue case

In the case of a double eigenvalue, the previous formulae need to be amended. For the Mathieu equation, double eigenvalues are isolated, and there are no higher-order eigenvalues, so the treatment is relatively straightforward. The theory has been known for a long time (25), but in practice it seems to have been ignored. We therefore give a detailed treatment below. We will use a Puiseux series expansion near the double eigenvalue to deduce the analytical form needed for the expansion exactly at the double point. We emphasize that the computations in this section are exact computations of series coefficients, and analytical cancellation of large terms will give us the result that we want.

Now suppose that the coalescing eigenfunctions are \(\mathrm{ce}_{0}(\eta,q)\) and \(\mathrm{ce}_{2}(\eta,q)\). For our computations this is the case that mattered the most, when \(q\approx 1.4688\,i\) is the Mulholland-Goldstein point; but other double points at purely imaginary \(q\) also occur, for larger frequencies or larger-circumference blood vessels, so we give details of the process.
If \(q^{*}=1.4687686137851\ldots i\) is the Mulholland-Goldstein point, then we may expand the eigenvalues \(a_{0}\) and \(a_{2}\) in Puiseux series to get \(a_{0}=a^{*}-\alpha_{1}\sqrt{q-q^{*}}+O(q-q^{*})\) and \(a_{2}=a^{*}+\alpha_{1}\sqrt{q-q^{*}}+O(q-q^{*})\) in a region close to that point. Here \(a^{*}=2.08869890274969540\ldots\) is real, and known to many decimal places (4). Similarly,

\[\alpha_{1}=1.659487804320\ldots+1.659487804320\ldots\,i \tag{4.1}\]

is also known to many decimal places, although as we will see it does not appear in the final formulae for the spectral expansion coefficients at the double eigenvalue.

If we write a Mathieu series expansion for some function, say \(v(\xi,\eta)\), at a point near to the double point, we find that the coefficients of the terms \(\mathrm{Ce}_{0}(\xi,q)\mathrm{ce}_{0}(\eta,q)\) and \(\mathrm{Ce}_{2}(\xi,q)\mathrm{ce}_{2}(\eta,q)\) are large and of opposite sign; indeed they have leading behaviour that is \(O((q-q^{*})^{-1/2})\). Also, all of \(\mathrm{Ce}_{0}\), \(\mathrm{ce}_{0}\), \(\mathrm{Ce}_{2}\), and \(\mathrm{ce}_{2}\) can be written in terms of the fundamental solution \(w_{I}(z,a,q)\) as follows:

\[\mathrm{ce}_{0}(\eta,q)=w_{I}(\eta,a_{0},q)\,,\qquad\mathrm{Ce}_{0}(\xi,q)=w_{I}(i\xi,a_{0},q)\,,\]
\[\mathrm{ce}_{2}(\eta,q)=w_{I}(\eta,a_{2},q)\,,\qquad\mathrm{Ce}_{2}(\xi,q)=w_{I}(i\xi,a_{2},q)\,. \tag{4.2}\]

We will also need the following two new functions:

\[u(\eta):=D_{2}(w_{I})(\eta,a^{*},q^{*}) \tag{4.3}\]
\[U(\xi):=D_{2}(w_{I})(i\xi,a^{*},q^{*})\,. \tag{4.4}\]

Here \(D_{2}(f)(x,y,z)\) means the partial derivative of \(f\) with respect to its second variable, evaluated at the point \((x,y,z)\). We will show below how these can be computed. Now suppose that the solution at the point \(q\) has the expansion

\[v(\xi,\eta)=b_{0}\mathrm{Ce}_{0}(\xi,q)\mathrm{ce}_{0}(\eta,q)+b_{2}\mathrm{Ce}_{2}(\xi,q)\mathrm{ce}_{2}(\eta,q)+\cdots \tag{4.5}\]

where the terms not included have eigenvalues that will not coalesce, and therefore the previous treatment using orthogonality will suffice to identify their coefficients \(b_{2m}\) for \(m>1\).
Putting for brevity \(x=\sqrt{q-q^{*}}\), expanding everything in series in \(x\), and neglecting terms of size \(O(x^{2})\) or smaller, we have the following:

\[a_{0}= a^{*}-\alpha_{1}x+O(x^{2})\]
\[a_{2}= a^{*}+\alpha_{1}x+O(x^{2})\]
\[b_{0}= \frac{A_{0}}{x}+B_{0}+O(x)\]
\[b_{2}= \frac{A_{2}}{x}+B_{2}+O(x)\]
\[\mathrm{Ce}_{0}(\xi,q)= \mathrm{Ce}_{0}(\xi,q^{*})-\alpha_{1}U(\xi)x+O(x^{2})\]
\[\mathrm{ce}_{0}(\eta,q)= \mathrm{ce}_{0}(\eta,q^{*})-\alpha_{1}u(\eta)x+O(x^{2})\]
\[\mathrm{Ce}_{2}(\xi,q)= \mathrm{Ce}_{0}(\xi,q^{*})+\alpha_{1}U(\xi)x+O(x^{2})\]
\[\mathrm{ce}_{2}(\eta,q)= \mathrm{ce}_{0}(\eta,q^{*})+\alpha_{1}u(\eta)x+O(x^{2})\,. \tag{4.6}\]

Now in our case, the coefficients of \(v(\xi,\eta)\) are determined by integration against the constant function \(1\) at the wall \(\xi=\xi_{0}\), so that for \(q\) near to \(q^{*}\) we have

\[\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})\,d\eta-\alpha_{1}\left(\int_{\eta=0}^{2\pi}u(\eta)\,d\eta\right)x+O(x^{2}) \tag{4.7}\]

on the left-hand side, and, expanding everything out and using the fact that

\[\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}^{2}(\eta,q^{*})\,d\eta=0\,, \tag{4.8}\]

we find

\[-2\alpha_{1}A_{0}\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta+L_{1}x+O(x^{2}) \tag{4.9}\]

on the right-hand side, with

\[L_{1}=\alpha_{1}^{2}A_{0}\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}u^{2}(\eta)\,d\eta-2\alpha_{1}\left(B_{0}\mathrm{Ce}_{0}(\xi_{0},q^{*})-\alpha_{1}A_{0}U(\xi_{0})\right)\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta\,. \tag{4.10}\]

Since the squared integral of the generalized eigenfunction \(u(\eta)=D_{2}(w_{I})(\eta,a^{*},q^{*})\) is not zero, since the integral of the product of \(u(\eta)\) with \(\mathrm{ce}_{0}(\eta,q^{*})\) is not zero, and since \(\mathrm{Ce}_{0}(\xi_{0},q^{*})\) is not zero, we may equate the constant terms and the terms linear in \(x\) and solve for \(A_{0}\) and for \(B_{0}\). We get

\[A_{0}=-\frac{\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})\,d\eta}{2\alpha_{1}\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta} \tag{4.11}\]

and

\[B_{0}=\frac{\int_{\eta=0}^{2\pi}u(\eta)\,d\eta+\alpha_{1}A_{0}\left(2U(\xi_{0})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta+\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}u^{2}(\eta)\,d\eta\right)}{2\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta} \tag{4.12}\]

Similarly, we get

\[A_{2}=-A_{0}=\frac{\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})\,d\eta}{2\alpha_{1}\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta} \tag{4.13}\]

and

\[B_{2}=\frac{\int_{\eta=0}^{2\pi}u(\eta)\,d\eta-\alpha_{1}A_{2}\left(2U(\xi_{0})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta+\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}u^{2}(\eta)\,d\eta\right)}{2\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta} \tag{4.14}\]

which resembles the formula for \(B_{0}\).
Putting these formulae into the expansion for \(v(\xi,\eta)\) we get \(v(\xi,\eta)=(B_{0}+B_{2})\mathrm{Ce}_{0}(\xi,q^{*})\mathrm{ce}_{0}(\eta,q^{*})-2\alpha_{1}A_{0}\left(U(\xi)\mathrm{ce}_{0}(\eta,q^{*})+\mathrm{Ce}_{0}(\xi,q^{*})u(\eta)\right)+O(x)\) and thus as \(q\to q^{*}\) the expansion of \(v(\xi,\eta)\) becomes \[v(\xi,\eta)=b_{0}\mathrm{Ce}_{0}(\xi,q^{*})\mathrm{ce}_{0}(\eta,q^{*})+\hat{b}_{2}\left(U(\xi)\mathrm{ce}_{0}(\eta,q^{*})+\mathrm{Ce}_{0}(\xi,q^{*})u(\eta)\right)+\cdots \tag{4.15}\] where \[b_{0}= \frac{\int_{\eta=0}^{2\pi}u(\eta)\,d\eta}{\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta} \tag{4.16}\] \[+\frac{\left(\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})\,d\eta\right)\left(\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}u^{2}(\eta)\,d\eta+U(\xi_{0})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta\right)}{\left(\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta\right)^{2}}\] (4.17) \[\hat{b}_{2}=\frac{\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})\,d\eta}{\mathrm{Ce}_{0}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})u(\eta)\,d\eta}\;,\] and the omitted terms can all be calculated by orthonormality as before. _Remark 4.1_: This result can be derived a different way, by differentiating the original formula with respect to \(a\). With that method, the appearance of \(U(\xi)\mathrm{ce}_{0}(\eta)+\mathrm{Ce}_{0}(\xi)u(\eta)\), being the derivative of \(\mathrm{Ce}_{0}(\xi)\mathrm{ce}_{0}(\eta)\), seems natural. Then one can use the orthogonality of \(\mathrm{ce}_{0}(\eta)\) with all \(\mathrm{ce}_{2m}(\eta)\) (including itself) to compute \(\hat{b}_{2}\), and then integrate against \(u(\eta)\) and solve the resulting equation for \(b_{0}\) using the known \(\hat{b}_{2}\). This leads to the same result, but we feel that the detailed derivation above is more convincing, and explains what happens to the expansion coefficients as \(q\to q^{*}\). All that remains is the computation of \(u(\eta)\) and \(U(\xi)\). To do this, we compute the Fréchet derivatives of the Mathieu equation and the modified Mathieu equation: \[\frac{d^{2}u}{d\eta^{2}}+(a^{*}-2q^{*}\cos 2\eta)u+y =0 \tag{4.18}\] \[\frac{d^{2}U}{d\xi^{2}}-(a^{*}-2q^{*}\cosh 2\xi)U-y =0\;. \tag{4.19}\] In the first equation, replace \(y\) by \(\mathrm{ce}_{0}(\eta,q^{*})\) and solve (we use the Green's function for the Mathieu equation to do so, because algebraic operations and integration are accurate and efficient with blendstrings) and similarly in the second equation replace \(y\) by \(\mathrm{Ce}_{0}(\xi,q^{*})\) and solve. As for initial or boundary conditions, we need to take \(u(0)=u(2\pi)=0\) to ensure periodicity, and we need to take \(U(0)=U^{\prime}(0)=0\) to ensure symmetry at the line \(\xi=0\). This analysis is implicit in the treatment in (25), but does not seem to be widely pursued, and so we have written it down in some detail here.
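As a rough cross-check on the Green's-function computation of \(u\), note that in Fourier space equation (4.18) with \(y=\mathrm{ce}_{0}\) is exactly a Jordan-chain computation: the coefficient vector of \(u\) is a generalized eigenvector of the Hill matrix from the earlier sketch. The following is a minimal sketch of ours (assuming numpy and the `even_pi_hill_matrix` helper defined above); it is numerically delicate, since in floating point the matrix is only near-defective, which is precisely what the blendstring Green's-function route avoids.

```python
import numpy as np
# Reuses even_pi_hill_matrix() from the earlier sketch.

q_star = 1.4687686137851j
M = even_pi_hill_matrix(q_star, N=40)

w, V = np.linalg.eig(M)
k = int(np.argmin(w.real))          # (near-)double eigenvalue a*
a_star, c_ce0 = w[k], V[:, k]       # Fourier coefficients of ce_0(eta, q*)

# Jordan-chain equation in coefficient space: (M - a* I) c_u = c_ce0 is the
# Fourier form of u'' + (a* - 2 q* cos 2 eta) u + ce_0 = 0, i.e. eq. (4.18).
# We take a least-squares solution; it matches u only up to normalization and
# a multiple of ce_0, which the paper pins down via u(0) = u(2 pi) = 0.
c_u, *_ = np.linalg.lstsq(M - a_star * np.eye(M.shape[0]), c_ce0, rcond=None)

def eval_even_series(c, eta):
    """Evaluate sum_r c[r] * cos(2 r eta)."""
    r = np.arange(len(c))[:, None]
    return (c[:, None] * np.cos(2 * r * eta[None, :])).sum(axis=0)

eta = np.linspace(0.0, 2 * np.pi, 201)
u_approx = eval_even_series(c_u, eta)   # candidate for u(eta)
```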
Finally, we must amend the formulas for flow rate and oscillatory wall shear stress. Equation (2.54) becomes \[q_{\phi,e}(t)=\frac{8q_{0,e}}{i\lambda_{e}}\Big{(}1-\frac{1}{i\pi\lambda_{e}}\tanh 2\xi_{0}\Big{(}b_{0}\mathrm{Ce}_{0}^{\prime}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})\,d\eta\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\hat{b}_{2}\left(U^{\prime}(\xi_{0})\int_{\eta=0}^{2\pi}\mathrm{ce}_{0}(\eta,q^{*})\,d\eta+\mathrm{Ce}_{0}^{\prime}(\xi_{0},q^{*})\int_{\eta=0}^{2\pi}u(\eta)\,d\eta\right) \tag{4.20}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{m\geq 2}b_{2m}^{2}I_{2m}\mathrm{Ce}_{2m}^{\prime}(\xi_{0},q^{*})\mathrm{Ce}_{2m}(\xi_{0},q^{*})\Big{)}\Big{)}e^{i\omega t}\;.\] The integrals appearing in the formula above have already been calculated, in order to find the \(b_{2m}\), but the relationships used to simplify to get \(b_{2m}^{2}I_{2m}\) as in the rest of the sum no longer obtain because \(I_{0}=0\). Equation (2.59) becomes \[\tau_{\phi,e}\left(\eta,t\right)=\frac{2\hat{\tau}_{0,e}}{i\delta_{0}\lambda_{e}}\left(\hat{b}_{2}\left(U^{\prime}(\xi_{0})\mathrm{ce}_{0}(\eta,q^{*})+\mathrm{Ce}_{0}^{\prime}(\xi_{0},q^{*})u(\eta)\right)+\sum_{m\neq 1}b_{2m}\mathrm{Ce}_{2m}^{\prime}(\xi_{0},q^{*})\mathrm{ce}_{2m}(\eta,q^{*})\right)e^{i\omega t}\,. \tag{4.21}\]

### The value of being able to solve the double eigenvalue case

Because the solution to the original model equations is continuous (indeed analytic) in the parameters involved in \(q=-i\Lambda_{e}/4\), the underlying solution changes continuously as \(q\) passes through a value where a double eigenvalue of the Mathieu equation occurs. Therefore, the double eigenvalue produces only a discontinuity in the representation of the solution, not in the solution itself. This means that sampling \(q\) "near enough" to the double point would give solutions that are "near enough" to the solution at that point. The only difficulty, and this is rather mild, is that the expansion coefficients in Mathieu functions become large and of opposite sign, which might incur some visible rounding error owing to cancellation. Because the size of the coefficients is only \(O((q-q^{*})^{-1/2})\) this is not typically very severe. Nonetheless we feel that it is worthwhile to be able to give the precise solution exactly at a double point for comparison to simple solutions nearby, to be assured that the solutions shown are representative of the model.

Figure 8: Guarding against blunders and errors: On the left, the computed residual \(\delta=\partial^{2}v/\partial\xi^{2}+\partial^{2}v/\partial\eta^{2}-i\rho\omega d^{2}v/(2\mu)\) with \(N=6\) terms, \(f=0.6\), \(c=1.0\)cm, active scenario, \(\omega=3.83722019332829\) (a double eigenvalue case), in double precision. This shows that the differential equation is satisfied to better than \(10^{-10}\). On the right, we plot our computed \(\epsilon=k_{0}(v(\xi_{0},\eta)-1)/(i\rho\omega)\), which ideally should be zero at the boundary, for the same parameter values, real part in black and imaginary part in red, on \(0\leq\eta\leq 2\pi\). We see the effect of truncating our series expansion at \(N=6\). Because this is a double eigenvalue case and the Green's function was used, the piecewise polynomial approximation for the solution is not quite periodic: there is a tiny jump between the values at \(\eta=0\) and at \(\eta=2\pi\). Indeed the error is not quite periodic with period \(\pi\), which it would be ideally.

## 5 Results and discussion

In the results to follow we consider a change in the cross section of a tube from circular to elliptic, under both passive and active scenarios, and examine the effects of this on the properties of oscillatory flow in a tube of elliptic cross section under the same oscillatory pressure gradient as that in a tube of circular cross section. The effects of physiological interest are those on flow rate and on the distribution of shear stress around the circumference of the tube. The main focus of our study is therefore on these two properties, as well as on the form of the velocity profiles in a tube of elliptic cross section under the passive scenario. As noted, the properties of oscillatory flow in a tube of elliptic cross section depend on the nondimensional parameter \(\Lambda_{e}\) (Eq. 28) which involves the frequency of oscillation, \(\omega\), as well as the focal distance, \(d\). Thus the effects of tube dimension on oscillatory flow in the tube of elliptic cross section are different at different frequencies and, similarly, the effects of frequency on oscillatory flow in the tube of elliptic cross section are different at different tube dimensions. As a consequence, the effects of tube dimension and of frequency cannot be scaled out and, in the results to follow, we examine three specific values of the circumference and thus radii, deformed by forcing them to different fractions \(f_{e}\) of their original radii, and several specific values of frequency, as shown in the figures. All the results to follow are based on the real part of the oscillatory pressure gradient (equation (23)). We note that the absolute value of the various complex quantities must appear at some point in the flow, possibly with a different phase lag for different locations in the vessel. In our animations (not given here) the differences in phase lags were never very significant. The fluid density and viscosity in all the results were taken as 1.0 g/cm\({}^{3}\) and 0.04 g/(cm-s), respectively. We note that we only examine tubes equivalent to those of radius of 0.5cm, 1.0cm, and 2.0cm in both active and passive scenarios for maximum flow rate and maximum wall shear stress. The primary factor in the transition of pulsatile blood flow in a vessel of circular cross section to one in a vessel of an elliptic cross section is the loss of radial symmetry of the circular cross section. While from a geometrical perspective this loss of symmetry appears to occur fairly smoothly, from both a mathematical and a hemodynamical perspective it represents a significant change. Geometrically, the change from a circular to an elliptic cross section, however small, introduces "poles" in the cross section, places where the curvature is maximum (at the ends of the major axis) and where the curvature is minimum (at the ends of the minor axis). Further, the most convenient coordinate system, namely confocal elliptical coordinates, has singular behaviour in the limit as the focus distance \(d\to 0\). This in turn induces a change in the governing equations of pulsatile flow from Bessel equations to Mathieu equations. Hemodynamically, the change causes a redistribution of shear stress on the vessel boundary, from a uniform distribution in the case of circular cross section to a polarized distribution in the case of elliptic cross section, with maximum shear occurring at the two ends of the minor axis of the ellipse and minimum shear at the two ends of the major axis.
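For steady flow, this polarization of the wall shear can be verified directly from the classical solution for Poiseuille flow in an elliptic pipe, \(w\propto 1-x^{2}/\alpha^{2}-y^{2}/\beta^{2}\). The following short sketch is ours (assuming numpy; the pressure-gradient constant and viscosity scale out):

```python
import numpy as np

alpha, beta = 1.0, 0.6            # semimajor and semiminor axes (beta < alpha)

# Steady elliptic-pipe flow: w(x, y) = C * (1 - x^2/alpha^2 - y^2/beta^2).
# Wall shear magnitude is proportional to |grad w| on the boundary.
theta = np.linspace(0.0, 2 * np.pi, 721)
x, y = alpha * np.cos(theta), beta * np.sin(theta)
shear = 2 * np.hypot(x / alpha**2, y / beta**2)   # |grad w| / C on the wall

i_max, i_min = np.argmax(shear), np.argmin(shear)
print("max shear at theta =", np.degrees(theta[i_max]))  # ~90 deg: minor-axis end
print("min shear at theta =", np.degrees(theta[i_min]))  # ~0 deg: major-axis end
```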
From the perspective of blood flow regulation, which our study was aimed at, the transition from flow in a vessel of circular cross section to one in an elliptic cross section represents a departure from well known physiological rules of blood flow regulation to a somewhat uncharted territory. A simple change in the diameter of a vessel is well known as the physiological (neurovascular) mechanism used to change the cross sectional area of a blood vessel in order to effect a required change in flow rate. Our study was aimed at the question of how this well established rule of blood flow regulation might be altered in the case of an elliptic cross section. Our results indicate that if the regulatory system does not respond to the change from circular to elliptic cross section, which we have dubbed a "passive scenario", the change from circular to elliptic cross section will occur with no change in the length of circumference of the changing cross section. As a consequence, the cross sectional area available to the flow will then be reduced under this scenario. If, on the other hand, the regulatory system intervenes in an attempt to maintain the cross sectional area available to the flow, as it does in the case of a circular cross section, the transition from circular to elliptic cross section will occur while keeping the cross sectional area constant and hence, necessarily, by increasing the length of circumference of the changing cross section. We have dubbed this an "active scenario". This makes a difference even in steady flow in tubes of elliptic cross section, as already seen in Figures 5-7. The flow quantities tend to be higher under the active scenario. This persists for pulsatile flow, as we will see; but we measure the pulsatile flow quantities relative to their steady counterparts, and so the increase may be hidden. This must be kept in mind. While what has been said so far applies to steady flow, the effects of ellipticity in oscillatory flow are further complicated by the acceleration and deceleration of the fluid within the oscillatory cycle. The effects of acceleration and deceleration depend on the volume of fluid being accelerated and decelerated, which in turn depends on the cross sectional area of the tube in which the flow is taking place. We recall that as the circular cross section of a tube becomes elliptic in the passive scenario, the cross sectional area of the elliptic cross section becomes smaller than that of the circular cross section, and therefore a smaller volume of fluid will be accelerated and decelerated in the tube of elliptic cross section than in the circular one. It follows that the maximum flow rate reached at the peak of each oscillatory cycle might actually be higher in the tube of elliptic cross section. This was shown, however, not to be the case for steady flow in Figure 7. The reason for this is that the acceleration and deceleration peaks depend not only on the volume of fluid being accelerated but also on the opposition to that acceleration by the level and distribution of shear stress on the boundary. However, even for steady flow, shear stress at the tube wall can be higher in the tube of elliptic cross section, and it is in both scenarios for \(f_{e}\geq 0.6\), as shown in Figure 6. Figure 14 shows that this remains true for pulsatile flow as well. The pulsatile flow rates shown in Figures 15-17 do not become higher for tubes of elliptic cross section than in circular cross section, even in the active scenario.
They show moderate dependence on imposed frequency \(\omega\) with a general downward trend with increasing frequency. They also show a decrease in maximum flow for tubes with smaller fractions \(f_{e}\) of the original radius. They show little effect of the choice of scenario, active versus passive: the maximum flow rate differs very little between the two. The maximum velocity profiles show similar behaviour, and are not plotted here, being redundant. Tubes of elliptic cross section with smaller \(f_{e}\) show greater dependence of maximum pulsatile flow rate (and similarly velocity, not shown) on the imposed frequency. Conversely, tubes of smaller \(f_{e}\) show weaker dependence of shear stress on the imposed frequency \(\omega\). As a final consideration, the dependence of oscillatory properties on frequency \(\omega\) seen in the figures can be interpreted as the way the first few harmonics of a composite pressure wave would be individually affected under the two scenarios being considered. We do not otherwise pursue here the idea of a composite pressure wave.

Figure 9: Left: A blood vessel of circular cross section with circumference \(1.0\)cm and radius \(a=1.0/(2\pi)\)cm (blue dashed line) is deformed by imposed vertical forces to ellipses with semiminor axis \(\beta=f_{e}a\) with \(f_{e}=0.6\), in two different ways: under the active scenario, neuromuscular control relaxes the vessel wall so it stretches in order to maintain the initial cross-sectional area (red curves), and under the passive scenario, the vessel wall stays at \(1.0\)cm in circumference (black curves), thereby losing some cross-sectional area. Right: The resulting peak shear stress over one cycle of oscillation from equation (10), scaled by the steady wall stress \(\hat{\tau}_{0,c}\) from equation (18) in the case \(\alpha=\beta=a\), the radius of the circle. Both scenarios are graphed at their peaks in time, together with the wall stress from equation (11) (blue dashed line) from the original circle. The wider elliptical vessel has the smallest minimum oscillatory wall shear stress, although the maximum is greater than that of the original circle.

## 6 Concluding remarks

The problem of pulsatile flow in tubes of elliptic cross sections is important from a physiological as well as mathematical perspective, and the aim of our study was to examine this problem from both of these perspectives, using a tube of elliptic cross section as a model of a deformed blood vessel. While this is clearly a simplified model of the many ways in which a blood vessel may be deformed, it allowed us to explore a full range of distortions of a tube of circular cross section, from being fully open to almost closed. More important than the final form into which a vessel is deformed are the constraints and scenarios under which the transformation from circular to elliptic cross section takes place. The two scenarios which we have considered highlight the mathematical and physiological aspects of the problem and provide useful information on the way the neurovascular control system may respond to the deformation of a blood vessel in the physiological setting. In particular, the ability of the control system to maintain a constant cross sectional area under the active scenario is clearly limited to only small or moderate departures from the circular cross section.
When the departure from circular cross section is large (high ellipticity, low fraction \(f_{e}\)), a prohibitively large increase in the circumference of the wall would be required to maintain the cross sectional area available for the flow, as illustrated in Figure 4. We have extended both the scope and the range of results currently available for this problem by using new methodology to overcome difficulties encountered in the solution of the governing Mathieu equations and in the numerical evaluation of Mathieu functions in the past. Specifically, we used a careful spectral method, including explicit solution in the case of double eigenvalues, for the solution of the governing equations. We used extended precision where necessary to overcome issues of ill-conditioning, for \(f_{e}\) very close to 1, which is paradoxically the difficult case. We believe that this novel approach offers a useful new tool in further study of pulsatile blood flow under various pathological conditions.

Figure 10: One quarter of a cycle of an oscillation starting at a maximum, with each curve showing (half) the velocity profile from equation (42) at successive instants. The top curve is at time \(t=0\). As the time \(t\) is sampled over the quarter cycle, the curves go down. In the next quarter cycle (not shown) they would go further down to the minimum; then they would come up over the next half-cycle (not shown, because the overlapping lines would be harder to interpret) to the beginning again. The particulars of this picture are that the original circumference was \(c=2.0\)cm, the semiminor axis of the ellipse is a fraction \(f_{e}=0.95\) of the original circular radius, the scenario is active so that the semimajor axis is \(1/f_{e}=1.053\) of the original circular radius, and the frequency is \(\omega=11.2847265810330\)Hz (which is slightly higher than for most of our simulations). This is, as it happens, at a double eigenvalue, which is why so many decimal places of the frequency are printed. Fractions \(f_{e}\) very close to 1 are harder for the Mathieu expansion to model, as indicated in the text, but this fraction, \(f_{e}=0.95\), is routine, and slight deformations from circular cross section are more likely to occur, so this is a reasonably realistic case.
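To quantify the concluding remark about prohibitively large circumference increases, the exact ellipse perimeter shows how fast the wall must stretch under the active scenario as \(f_{e}\) decreases. A small sketch of ours, assuming scipy; here \(\beta=f_{e}a\) is the imposed semiminor axis and \(\alpha=a/f_{e}\) preserves the cross-sectional area, as in the caption of Figure 10:

```python
import numpy as np
from scipy.special import ellipe

def perimeter(alpha, beta):
    """Exact ellipse perimeter 4*alpha*E(m), with m = 1 - (beta/alpha)^2."""
    return 4 * alpha * ellipe(1 - (beta / alpha) ** 2)

a = 1.0                            # radius of the undeformed circular vessel
for fe in (0.95, 0.8, 0.6, 0.4, 0.2, 0.1):
    beta = fe * a                  # imposed semiminor axis
    alpha = a / fe                 # active scenario: area pi*alpha*beta = pi*a^2
    ratio = perimeter(alpha, beta) / (2 * np.pi * a)
    print(f"f_e={fe}: circumference must grow by factor {ratio:.3f}")
# The required circumference grows roughly like 1/f_e as f_e -> 0,
# which is what makes the active scenario untenable for strong deformations.
```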
2301.10485
LTL Reactive Synthesis with a Few Hints
We study a variant of the problem of synthesizing Mealy machines that enforce LTL specifications against all possible behaviours of the environment including hostile ones. In the variant studied here, the user provides the high level LTL specification {\phi} of the system to design, and a set E of examples of executions that the solution must produce. Our synthesis algorithm works in two phases. First, it generalizes the decisions taken along the examples E using tailored extensions of automata learning algorithms. This phase generalizes the user-provided examples in E while preserving realizability of {\phi}. Second, the algorithm turns the (usually) incomplete Mealy machine obtained by the learning phase into a complete Mealy machine that realizes {\phi}. The examples are used to guide the synthesis procedure. We provide a completeness result that shows that our procedure can learn any Mealy machine M that realizes {\phi} with a small (polynomial) set of examples. We also show that our problem, that generalizes the classical LTL synthesis problem (i.e. when E = {\emptyset}), matches its worst-case complexity. The additional cost of learning from E is even polynomial in the size of E and in the size of a symbolic representation of solutions that realize {\phi}. This symbolic representation is computed by the synthesis algorithm implemented in Acacia-Bonzai when solving the plain LTL synthesis problem. We illustrate the practical interest of our approach on a set of examples.
Mrudula Balachander, Emmanuel Filiot, Jean-François Raskin
2023-01-25T09:45:06Z
http://arxiv.org/abs/2301.10485v1
# LTL Reactive Synthesis with a Few Hints

###### Abstract

We study a variant of the problem of synthesizing Mealy machines that enforce LTL specifications against all possible behaviours of the environment including hostile ones. In the variant studied here, the user provides the high level LTL specification \(\varphi\) of the system to design, and a set \(E\) of examples of executions that the solution must produce. Our synthesis algorithm works in two phases. First, it generalizes the decisions taken along the examples \(E\) using tailored extensions of automata learning algorithms. This phase generalizes the user-provided examples in \(E\) while preserving realizability of \(\varphi\). Second, the algorithm turns the (usually) incomplete Mealy machine obtained by the learning phase into a complete Mealy machine that realizes \(\varphi\). The examples are used to guide the synthesis procedure. We provide a completeness result that shows that our procedure can learn any Mealy machine \(M\) that realizes \(\varphi\) with a small (polynomial) set of examples. We also show that our problem, that generalizes the classical LTL synthesis problem (i.e. when \(E=\emptyset\)), matches its worst-case complexity. The additional cost of learning from \(E\) is even polynomial in the size of \(E\) and in the size of a symbolic representation of solutions that realize \(\varphi\). This symbolic representation is computed by the synthesis algorithm implemented in Acacia-Bonzai when solving the plain LTL synthesis problem. We illustrate the practical interest of our approach on a set of examples.

## 1 Introduction

Reactive systems are notoriously difficult to design and even to specify correctly [1, 15]. As a consequence, formal methods have emerged as useful tools to help designers build reactive systems that are correct. For instance, model-checking asks the designer to provide a model, in the form of a Mealy machine \(\mathcal{M}\), that describes the reactions of the system to events generated by its environment, together with a description of the _core correctness properties_ that must be enforced. Those properties are expressed in a logical formalism, typically as an LTL formula \(\varphi_{\mathsf{CORE}}\). Then an algorithm decides if \(\mathcal{M}\models\varphi_{\mathsf{CORE}}\), i.e. if all executions of the system in its environment satisfy the specification. Automatic reactive synthesis is more ambitious: it aims at automatically generating a model from a high level description of _what_ needs to be done instead of _how_ it has to be done. Thus the user is only required to provide an LTL specification \(\varphi\) and the algorithm automatically generates a Mealy machine \(\mathcal{M}\) such that \(\mathcal{M}\models\varphi\) whenever \(\varphi\) is _realizable_. Unfortunately, it is most of the time not sufficient to provide the core correctness properties \(\varphi_{\mathsf{CORE}}\) to obtain a Mealy machine \(\mathcal{M}\) that is useful in practice, as illustrated next. Example 1: [Synthesis from \(\varphi_{\mathsf{CORE}}\) - Mutual exclusion] Let us consider the classical problem of _mutual exclusion_. In the simplest form of this problem, we need to design an arbiter that receives requests from two processes, modeled by two atomic propositions \(r_{1}\) and \(r_{2}\) controlled by the environment, and that grants access to the critical section, modeled as two atomic propositions \(g_{1}\) and \(g_{2}\) controlled by the system.
The core correctness properties (the _what_) are: \((i)\) mutual access, i.e. it is never the case that the access is granted to both processes at the same time, \((ii)\) fairness, i.e. processes that have requested access eventually get access to the critical section. These core correctness specifications for mutual exclusion (ME) are easily expressed in LTL as follows: \(\varphi_{\mathsf{CORE}}^{\mathsf{ME}}\equiv\Box(\neg g_{1}\vee\neg g_{2})\wedge\Box(r_{1}\to\lozenge g_{1})\wedge\Box(r_{2}\to\lozenge g_{2})\). Indeed, this formula expresses the core correctness properties that we would model check no matter _how_ \(\mathcal{M}\) implements mutual exclusion, e.g. Peterson, Dekker, Bakery algorithms, etc. Unfortunately, if we submit \(\varphi_{\mathsf{CORE}}^{\mathsf{ME}}\) to an LTL synthesis procedure, implemented in tools like Acacia-Bonzai[10], BoSy[19], or Strix[27], we get the solution \(\mathcal{M}\) depicted in Fig. 1-(left) (all three tools return this solution). While this solution is perfectly correct and realizes the specification \(\varphi_{\mathsf{CORE}}^{\mathsf{ME}}\), the solution ignores the inputs from the environment and grants access to the critical sections in a round robin fashion. Arguably, it may not be considered an efficient solution to the mutual exclusion problem. This illustrates the limits of solving the design problem by providing the synthesis procedure with _only_ the core correctness specification, i.e. the _what_. To produce useful solutions to the mutual exclusion problem, more guidance must be provided. The main question is now: _how should we specify these additional properties?_ Obviously, if we want to use the "plain" LTL synthesis algorithm, there is no choice: we need to reinforce the specification \(\varphi_{\mathsf{CORE}}^{\mathsf{ME}}\) with additional lower level properties \(\varphi_{\mathsf{LOW}}^{\mathsf{ME}}\). Let us go back to our running example. Example 2: [Synthesis from \(\varphi_{\mathsf{CORE}}^{\mathsf{ME}}\) and \(\varphi_{\mathsf{LOW}}^{\mathsf{ME}}\)] To avoid solutions with _unsolicited grants_, we need to reinforce the core specification. The Strix online demo website proposes to add the following 3 LTL formulas \(\varphi_{\mathsf{LOW}}^{\mathsf{ME}}\) to \(\varphi_{\mathsf{CORE}}^{\mathsf{ME}}\) (see Full arbiter \(n=2\), at [https://meyerphi.github.io/strix-demo/](https://meyerphi.github.io/strix-demo/)): (1) \(\bigwedge_{i\in\{1,2\}}\Box((g_{i}\wedge\Box\neg r_{i})\to\lozenge\neg g_{i})\), (2) \(\bigwedge_{i\in\{1,2\}}\Box(g_{i}\wedge\bigcirc(\neg r_{i}\wedge\neg g_{i})\to\bigcirc(r_{i}\mathsf{R}\neg g_{i}))\), and (3) \(\bigwedge_{i\in\{1,2\}}(r_{i}\mathsf{R}\neg g_{i})\). Now, while the specification \(\varphi_{\mathsf{CORE}}^{\mathsf{ME}}\wedge\varphi_{\mathsf{LOW}}^{\mathsf{ME}}\) allows Strix to provide us with a better solution, it is more complex than needed (it has 9 states and can be seen in App. C) and clearly does not look like an optimal solution to our mutual exclusion problem. For instance, the model of Fig. 1-(right) is arguably more natural. How can we get this model without coding it into the LTL specification, which would greatly diminish the interest of using a synthesis procedure in the first place? In general, higher level properties are concerned with safety and are the ones that need to be verified on all implementations.
In contrast, lower level properties are more about a specific implementation, i.e., they talk more about expected behaviour and are concerned with the efficiency of the implementation. At this point, it is legitimate to question the adequacy of LTL as a specification language for _lower level_ properties, and so as a way to guide the synthesis procedure towards relevant solutions to realize \(\varphi_{\mathsf{CORE}}\). In this paper, we introduce an alternative to guide synthesis toward useful solutions that realize \(\varphi_{\mathsf{CORE}}\): we propose to use examples of executions that illustrate behaviors of expected solutions. We then restrict the search to solutions that _generalize_ those examples. Examples, or scenarios of executions, are accepted in requirement engineering as an adequate tool to elicit requirements about complex systems [14]. For reactive system design, examples are particularly well-suited as they are usually much easier to formulate than full blown solutions, or even partial solutions. This is because, when formulating examples, the user controls _both_ the inputs _and_ the outputs, avoiding the main difficulty of reactive system design: having to cope with _all_ possible environment inputs. We illustrate this on our running example. Example 3: [Synthesis from \(\varphi_{\mathsf{CORE}}^{\mathsf{ME}}\) and examples] Let us keep, as the LTL specification, \(\varphi_{\mathsf{CORE}}^{\mathsf{ME}}\) only, and let us consider the following simple prefixes of executions that illustrate how solutions to mutual exclusion should behave:

1. \(\{!r_{1},!r_{2}\}\).\(\{!g_{1},!g_{2}\}\#\{r_{1},!r_{2}\}\).\(\{g_{1},!g_{2}\}\#\{!r_{1},r_{2}\}\).\(\{!g_{1},!g_{2}\}\)
2. \(\{r_{1},r_{2}\}\).\(\{g_{1},!g_{2}\}\#\{!r_{1},!r_{2}\}\).\(\{!g_{1},g_{2}\}\)

These prefixes of traces prescribe reactions to typical _fixed_ finite sequences of inputs: (1) if there is no request initially, then no access is granted (note that this already excludes the round robin solution), and if process 1 requests and subsequently process 2 requests, process 1 is granted first and then process 2 is granted after; (2) if both processes request simultaneously, then process 1 is granted first and then process 2 is granted after. Given those two simple traces together with \(\varphi_{\mathsf{CORE}}\), our algorithm generates the solution of Fig. 1-(right). Arguably, the solution is now simple and natural.

Figure 1: (Left) The solution provided by Strix to the mutual exclusion problem for the specification \(\varphi_{\mathsf{CORE}}^{\mathsf{ME}}\). Edge labels are of the form \(\varphi/\psi\) where \(\varphi\) is a Boolean formula on the input atomic propositions (the Boolean variables controlled by the environment) and \(\psi\) is a maximally consistent conjunction of literals over the set of output propositions (the Boolean variables controlled by the system). (Right) A natural solution that we would write by hand, and is automatically produced by our learning and synthesis algorithm for the same specification together with two simple examples.
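To make the traces concrete, here is a small sanity check of ours (not the paper's algorithm) that verifies, on finite prefixes, the safety part of \(\varphi_{\mathsf{CORE}}^{\mathsf{ME}}\) together with the no-unsolicited-grant convention of Example 2; the tuple encoding of the two examples above is our own:

```python
# Input/output valuations per step: ((r1, r2), (g1, g2)); our encoding of the
# two example prefixes of Example 3 ('#' separates steps in the paper).
E = [
    [((0, 0), (0, 0)), ((1, 0), (1, 0)), ((0, 1), (0, 0))],
    [((1, 1), (1, 0)), ((0, 0), (0, 1))],
]

def check_prefix(trace):
    """Finite-prefix safety checks: mutual exclusion (never g1 and g2) and no
    unsolicited grants (a grant must answer a pending, not-yet-granted request).
    Liveness cannot be refuted on a finite prefix, so it is not checked."""
    pending = [False, False]
    for (r1, r2), grants in trace:
        if grants == (1, 1):
            return False                      # violates G(!g1 | !g2)
        pending[0] |= bool(r1)
        pending[1] |= bool(r2)
        for k, g in enumerate(grants):
            if g:
                if not pending[k]:
                    return False              # unsolicited grant
                pending[k] = False
    return True

assert all(check_prefix(t) for t in E)
```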
Contributions. First, we provide a synthesis algorithm SynthLearn that, given an LTL specification \(\varphi_{\mathsf{CORE}}\) and a finite set \(E\) of prefixes of executions, returns a Mealy machine \(\mathcal{M}\) such that \(\mathcal{M}\models\varphi_{\mathsf{CORE}}\), i.e. \(\mathcal{M}\) realizes \(\varphi_{\mathsf{CORE}}\), and \(E\subseteq\mathsf{Prefix}(L(\mathcal{M}))\), i.e. \(\mathcal{M}\) is compatible with the examples in \(E\), if such a machine \(\mathcal{M}\) exists. It returns _unrealizable_ otherwise. Additionally, we require SynthLearn to _generalize_ the decisions illustrated in \(E\). This learnability requirement is usually formalized in automata learning with a _completeness criterion_ that we adapt here as follows: for all specifications \(\varphi_{\mathsf{CORE}}\), and for all Mealy machines \(\mathcal{M}\) such that \(\mathcal{M}\models\varphi_{\mathsf{CORE}}\), there is a small set of examples \(E\) (polynomial in \(|\mathcal{M}|\)) such that \(L(\textsc{SynthLearn}(\varphi_{\mathsf{CORE}},E))=L(\mathcal{M})\). We prove this completeness result in Theorem 4 for safety specifications and extend it to \(\omega\)-regular and LTL specifications in Section 4, by reduction to safety. Second, we prove that the worst-case execution time of SynthLearn is 2ExpTime (Theorem 4.1), and this is worst-case optimal as the plain LTL synthesis problem (when \(E=\emptyset\)) is already known to be 2ExpTime-Complete [29]. SynthLearn first _generalizes_ the examples provided by the user while maintaining realizability of \(\varphi_{\mathsf{CORE}}\). This generalization leads to a Mealy machine with possibly missing transitions (called a preMealy machine). Then, this preMealy machine is extended into a (full) Mealy machine that realizes \(\varphi_{\mathsf{CORE}}\) against all behaviors of the environment. During the completion phase, SynthLearn reuses as much as possible decisions that have been generalized from the examples. The generalization phase is essential to get the most out of the examples. Running classical synthesis algorithms on \(\varphi_{\mathsf{CORE}}\wedge\varphi_{E}\), where \(\varphi_{E}\) is an LTL encoding of \(E\), often leads to more complex machines that fail to generalize the decisions taken along the examples in \(E\). While the overall complexity of SynthLearn is 2ExpTime and optimal, we show that it is only polynomial in the size of \(E\) and in a well-chosen symbolic representation of a set of Mealy machines that realize \(\varphi_{\mathsf{CORE}}\), see Theorem 4.2. This symbolic representation takes the form of an antichain of functions and tends to be compact in practice [21]. It is computed by default when Acacia-Bonzai is solving the plain LTL synthesis problem of \(\varphi_{\mathsf{CORE}}\). So, generalizing examples while maintaining realizability only comes at a marginal polynomial cost. We have implemented our synthesis algorithm in a prototype, which uses Acacia-Bonzai to compute the symbolic antichain representation. We report on the results we obtain on several examples.

Related works. Scenarios of executions have been advocated by researchers in requirements engineering to elicit specifications, see e.g. [14, 16] and references therein. In [30], learning techniques are used to transform examples into LTL formulas that generalize them. Those methods are complementary to our work, as they can be used to obtain the high level specification \(\varphi_{\mathsf{CORE}}\). In non-vacuous synthesis [7], examples are added automatically to an LTL specification in order to force the synthesis procedure to generate solutions that are non-vacuous in the sense of [25]. The examples are generated directly from the syntax of the LTL specification and they cannot be proposed by the user. This makes our approach and this approach orthogonal and complementary.
Indeed, we could use the examples generated automatically by the non-vacuous approach and ask the user to validate them as desirable or not. Our method is more flexible: it is semi-automatic and user centric, since the user can provide any example he/she likes, and so it is easier to drive the synthesis procedure towards solutions that the user deems interesting. Furthermore, our synthesis procedure is based on learning algorithms, while the algorithm in [7] is based on constraint solving and it does not offer guarantees of generalization contrary to our algorithm (see Theorem 4). Supplementing the formal specification with additional user-provided information is at the core of the _syntax-guided synthesis_ framework (SyGuS [3]), implemented for instance in _program by sketching_ [33]: in SyGuS, the specification is a logical formula and candidate programs are syntactically restricted by a user-provided grammar, to limit and guide the search. The search is done by using counter-example guided inductive synthesis techniques (CEGIS) which rely on learning [34]. In contrast to our approach, examples are not user-provided but automatically generated by model-checking the candidate programs against the specification. The techniques are also orthogonal to ours: SyGuS targets programs syntactically defined by expressions over a decidable background theory, and heavily relies on SAT/SMT solvers. Using examples to synthesise programs (_programming by example_) has been for instance explored in the context of string processing programs for spreadsheets, based on learning [32], and is a current trend in AI (see for example [28] and the citations therein). However this approach only relies on examples and not on logical specifications. [4] explores the use of formal specifications and scenarios to synthesize distributed protocols. Their approach also follows two phases: first, an incomplete machine is built from the scenarios and second, it is turned into a complete one. But there are two important differences with our work. First, their first phase does not rely on learning techniques and does not try to generalize the provided examples. Second, in their setting, all actions are controllable and there is no adversarial environment, so they are solving a satisfiability problem and not a realizability problem as in our case. Their problem is thus computationally less demanding than the problem we solve: Pspace versus 2ExpTime for LTL specs. The synthesis problem targeted in this paper extends the LTL synthesis problem. Modern solutions for this problem use automata constructions that avoid Safra's construction as first proposed in [26], and simplified in [31, 20], and more recently in [18]. Efficient implementations of Safraless constructions are available, see e.g. [8, 19, 27, 17]. Several previous works have proposed alternative approaches to improve on the quality of solutions that synthesis algorithms can offer. A popular research direction, orthogonal and complementary to the one proposed here, is to extend the formal specification with quantitative aspects, see e.g. [5, 9, 24, 2], and only synthesize solutions that are optimal. The first phase of our algorithm is inspired by automata learning techniques based on state merging algorithms like RPNI [23, 22]. Those learning algorithms need to be modified carefully to generate partial solutions that preserve realizability of \(\varphi_{\mathsf{CORE}}\).
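To fix intuitions about that first phase, here is a minimal RPNI-flavoured sketch of ours: a prefix tree built from example words given as (input, output) pairs, and a fold-style merge that rejects merges with clashing outputs. The crucial ingredient of SynthLearn, checking before each merge that the resulting preMealy machine keeps \(\varphi_{\mathsf{CORE}}\) realizable, is deliberately omitted here:

```python
def prefix_tree(examples):
    """Prefix-tree preMealy machine from example words (lists of (i, o) pairs).
    States are integers, 0 is initial; delta maps (state, i) -> (o, state)."""
    delta, n_states = {}, 1
    for word in examples:
        s = 0
        for i, o in word:
            if (s, i) not in delta:
                delta[(s, i)] = (o, n_states)
                n_states += 1
            elif delta[(s, i)][0] != o:
                raise ValueError("examples disagree on an output")
            s = delta[(s, i)][1]
    return delta, n_states

def try_merge(delta, n_states, s, t):
    """Attempt to merge states s and t, propagating merges to successors
    (RPNI-style fold). Returns the merged transition map, or None if two
    merged states disagree on an output."""
    rep = list(range(n_states))               # union-find representatives
    def find(x):
        while rep[x] != x:
            x = rep[x]
        return x
    trans = {}                                # rep state -> {i: (o, successor)}
    for (p, i), (o, q) in delta.items():
        trans.setdefault(p, {})[i] = (o, q)
    work = [(s, t)]
    while work:
        a, b = work.pop()
        a, b = find(a), find(b)
        if a == b:
            continue
        rep[b] = a
        ta, tb = trans.get(a, {}), trans.pop(b, {})
        for i, (o, q) in tb.items():
            if i in ta:
                o2, q2 = ta[i]
                if o2 != o:
                    return None               # output clash: merge rejected
                work.append((q2, q))          # successors must merge too
            else:
                ta[i] = (o, q)
        trans[a] = ta
    return {(find(p), i): (o, find(q))
            for p, d in trans.items() for i, (o, q) in d.items()}
```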
Proving completeness as well as termination of the completion phase in this context requires particular care.

## 2 Preliminaries on the reactive synthesis problem

Words, languages and automata. An alphabet is a finite set of symbols. A _word_ \(u\) (resp. \(\omega\)-word) over an alphabet \(\Sigma\) is a finite (resp. infinite) sequence of symbols from \(\Sigma\). We write \(\epsilon\) for the empty word, and denote by \(|u|\in\mathbb{N}\cup\{\infty\}\) the length of \(u\). In particular, \(|\epsilon|=0\). For \(1\leq i\leq j\leq|u|\), we let \(u[i{:}j]\) be the infix of \(u\) from position \(i\) to position \(j\), both included, and write \(u[i]\) instead of \(u[i{:}i]\). The set of finite (resp. \(\omega\)-) words over \(\Sigma\) is denoted by \(\Sigma^{*}\) (resp. \(\Sigma^{\omega}\)). We let \(\Sigma^{\infty}=\Sigma^{*}\cup\Sigma^{\omega}\). Given two words \(u\in\Sigma^{*}\) and \(v\in\Sigma^{\infty}\), \(u\) is a _prefix_ of \(v\), written \(u\preceq v\), if \(v=uw\) for some \(w\in\Sigma^{\infty}\). The set of prefixes of \(v\) is denoted by \(\mathsf{Prefs}(v)\). Finite words are linearly ordered according to the length-lexicographic order \(\preceq_{ll}\), assuming a linear order \(<_{\Sigma}\) over \(\Sigma\): \(u\preceq_{ll}v\) if \(|u|<|v|\) or \(|u|=|v|\) and \(u=p\sigma_{1}u^{\prime}\), \(v=p\sigma_{2}v^{\prime}\) for some \(p,u^{\prime},v^{\prime}\in\Sigma^{*}\) and some \(\sigma_{1}<_{\Sigma}\sigma_{2}\). In this paper, whenever we refer to the order \(\preceq_{ll}\) for words over some alphabet, we implicitly assume the existence of an arbitrary linear order over that alphabet. A _language_ (resp. \(\omega\)-language) over an alphabet \(\Sigma\) is a subset \(L\subseteq\Sigma^{*}\) (resp. \(L\subseteq\Sigma^{\omega}\)). In this paper, we fix two alphabets \(\mathcal{I}\) and \(\mathcal{O}\) whose elements are called inputs and outputs respectively. Given a word \(u\in(\mathcal{I}\mathcal{O})^{\infty}\), we let \(\mathsf{in}(u)\in\mathcal{I}^{\infty}\) be the word obtained by erasing all \(\mathcal{O}\)-symbols from \(u\). We define \(\mathsf{out}(u)\) similarly and naturally extend both functions to languages.

Automata over \(\omega\)-words. A _parity automaton_ is a tuple \(\mathcal{A}=(Q,Q_{\mathsf{init}},\Sigma,\delta,d)\) where \(Q\) is a finite non empty set of states, \(Q_{\mathsf{init}}\subseteq Q\) is a set of initial states, \(\Sigma\) is a finite non empty alphabet, \(\delta:Q\times\Sigma\to 2^{Q}\setminus\{\emptyset\}\) is the transition function, and \(d:Q\to\mathbb{N}\) is a parity function. The automaton \(\mathcal{A}\) is _deterministic_ when \(|Q_{\mathsf{init}}|=1\) and \(|\delta(q,\sigma)|=1\) for all \(q\in Q\) and all \(\sigma\in\Sigma\). The transition function is extended naturally into a function \(\mathsf{Post}^{*}:Q\times\Sigma^{*}\to 2^{Q}\setminus\{\emptyset\}\) inductively as follows: \(\mathsf{Post}^{*}(q,\epsilon)=\{q\}\) for all \(q\in Q\) and for all \((u,\sigma)\in\Sigma^{*}\times\Sigma\), \(\mathsf{Post}^{*}(q,u\sigma)=\bigcup_{q^{\prime}\in\mathsf{Post}^{*}(q,u)}\delta(q^{\prime},\sigma)\). A run of \(\mathcal{A}\) on an \(\omega\)-word \(w=w_{0}w_{1}\dots\) is an infinite sequence of states \(r=q_{0}q_{1}\dots\) such that \(q_{0}\in Q_{\mathsf{init}}\), and for all \(i\in\mathbb{N}\), \(q_{i+1}\in\delta(q_{i},w_{i})\). The run \(r\) is said to be _accepting_ if the minimal colour it visits infinitely often is even, i.e. \(\liminf(d(q_{i}))_{i\geq 0}\) is _even_.
We say that \(\mathcal{A}\) is a _Buchi automaton_ when \(d\) takes its values in \(\{0,1\}\) (1-coloured states are called accepting states), a _co-Buchi automaton_ when \(d\) takes its values in \(\{1,2\}\), a _safety automaton_ if it is a Buchi automaton such that the set of 1-coloured states, called _unsafe states_ and denoted \(Q_{\mathsf{usf}}\), forms a _trap_: for all \(q\in Q_{\mathsf{usf}}\), for all \(\sigma\in\Sigma\), \(\delta(q,\sigma)\subseteq Q_{\mathsf{usf}}\), and a _reachability automaton_ if it is \(\{0,1\}\)-coloured and the set of 0-coloured states forms a trap. Finally, we consider the existential and universal interpretations of nondeterminism, leading to two different notions of \(\omega\)-word languages: under the _existential (resp. universal) interpretation_, a word \(w\in\Sigma^{\omega}\) is in the language of \(\mathcal{A}\) if there exists a run \(r\) on \(w\) such that \(r\) is accepting (resp. for all runs \(r\) on \(w\), \(r\) is accepting). We denote the two languages defined by these two interpretations \(L^{\exists}(\mathcal{A})\) and \(L^{\forall}(\mathcal{A})\) respectively. Note that if \(\mathcal{A}\) is deterministic, then the existential and universal interpretations agree, and we write \(L(\mathcal{A})\) for \(L^{\forall}(\mathcal{A})=L^{\exists}(\mathcal{A})\). Sometimes, for a deterministic automaton \(\mathcal{A}\), we change the initial state to a state \(q\in Q\), and write \(\mathcal{A}[q]\) for the deterministic automaton \(\mathcal{A}\) where the initial state is fixed to the singleton \(\{q\}\). For a _co-Buchi automaton_, we also define a strengthening of the acceptance condition, called \(K\)-co-Buchi, which requires, for \(K\in\mathbb{N}\), that a run visits at most \(K\) times a state labelled with \(1\) to be accepting. Formally, a run \(r=q_{0}q_{1}\ldots q_{n}\ldots\) is _accepting_ for the \(K\)-co-Buchi acceptance condition if \(|\{i\geq 0\mid d(q_{i})=1\}|\leq K\). The language defined by \(\mathcal{A}\) for the \(K\)-co-Buchi acceptance condition and universal interpretation is denoted by \(L^{\forall}_{K}(\mathcal{A})\). Note that this language is a _safety_ language because if a prefix of a word \(p\in\Sigma^{*}\) is such that \(\mathcal{A}\) has a run prefix on \(p\) that visits states labelled with colour \(1\) more than \(K\) times, then all possible extensions \(w\in\Sigma^{\omega}\) of \(p\) are rejected by \(\mathcal{A}\).

(Pre)Mealy machines. Given a (partial) function \(f\) from a set \(X\) to a set \(Y\), we denote by \(\mathsf{dom}(f)\) its domain, i.e. the set of elements \(x\in X\) such that \(f(x)\) is defined. A _preMealy machine_ \(\mathcal{M}\) on an input alphabet \(\mathcal{I}\) and output alphabet \(\mathcal{O}\) is a triple \((M,m_{\mathsf{init}},\Delta)\) such that \(M\) is a non-empty set of states, \(m_{\mathsf{init}}\in M\) is the initial state, \(\Delta:M\times\mathcal{I}\to\mathcal{O}\times M\) is a partial function. A pair \((m,\mathsf{i})\) is a hole in \(\mathcal{M}\) if \((m,\mathsf{i})\not\in\mathsf{dom}(\Delta)\). A _Mealy machine_ is a preMealy machine such that \(\Delta\) is total, i.e., \(\mathsf{dom}(\Delta)=M\times\mathcal{I}\). We define two semantics of a preMealy machine \(\mathcal{M}=(M,m_{\mathsf{init}},\Delta)\) in terms of the languages of finite and infinite words over \(\mathcal{I}\cup\mathcal{O}\) they define.
First, we define two (possibly partial) functions \(\mathsf{Post}_{\mathcal{M}}:M\times\mathcal{I}\to M\) and \(\mathsf{Out}_{\mathcal{M}}:M\times\mathcal{I}\to\mathcal{O}\) such that \(\Delta(m,\mathsf{i})=(\mathsf{Out}_{\mathcal{M}}(m,\mathsf{i}),\mathsf{Post}_{\mathcal{M}}(m,\mathsf{i}))\) for all \((m,\mathsf{i})\in M\times\mathcal{I}\) if \(\Delta(m,\mathsf{i})\) is defined. We naturally extend these two functions to any sequence of inputs \(u\in\mathcal{I}^{+}\), denoted \(\mathsf{Post}^{*}_{\mathcal{M}}\) and \(\mathsf{Out}^{*}_{\mathcal{M}}\). In particular, for \(u\in\mathcal{I}^{+}\), \(\mathsf{Post}^{*}_{\mathcal{M}}(m,u)\) is the state reached by \(\mathcal{M}\) when reading \(u\) from \(m\), while \(\mathsf{Out}^{*}_{\mathcal{M}}(m,u)\) is the last output in \(\mathcal{O}\) produced by \(\mathcal{M}\) when reading \(u\). The subscript \(\mathcal{M}\) is omitted when \(\mathcal{M}\) is clear from the context. Now, the language \(L(\mathcal{M})\) of finite words in \((\mathcal{IO})^{*}\) accepted by \(\mathcal{M}\) is defined as \(L(\mathcal{M})=\{\mathsf{i}_{1}\mathsf{o}_{1}\ldots\mathsf{i}_{n}\mathsf{o}_{n}\mid\forall 1\leq j\leq n,\ \mathsf{Post}^{*}_{\mathcal{M}}(m_{\mathsf{init}},\mathsf{i}_{1}\ldots\mathsf{i}_{j})\) is defined and \(\mathsf{o}_{j}=\mathsf{Out}^{*}_{\mathcal{M}}(m_{\mathsf{init}},\mathsf{i}_{1}\ldots\mathsf{i}_{j})\}\). The language \(L_{\omega}(\mathcal{M})\) of infinite words accepted by \(\mathcal{M}\) is the topological closure of \(L(\mathcal{M})\): \(L_{\omega}(\mathcal{M})=\{w\in(\mathcal{IO})^{\omega}\mid\mathsf{Prefs}(w)\cap(\mathcal{IO})^{*}\subseteq L(\mathcal{M})\}\).

The reactive synthesis problem. A _specification_ is a language \(\mathcal{S}\subseteq(\mathcal{IO})^{\omega}\). The _reactive synthesis problem_ (or just synthesis problem for short) is the problem of constructing, given a specification \(\mathcal{S}\), a Mealy machine \(\mathcal{M}\) such that \(L_{\omega}(\mathcal{M})\subseteq\mathcal{S}\) if it exists. Such a machine \(\mathcal{M}\) is said to _realize_ the specification \(\mathcal{S}\), also written \(\mathcal{M}\models\mathcal{S}\). We also say that \(\mathcal{S}\) is _realizable_ if some Mealy machine \(\mathcal{M}\) realizes it. The induced decision problem is called the _realizability problem_. It is well-known that if \(\mathcal{S}\) is \(\omega\)-regular (recognizable by a parity automaton [35]) the realizability problem is decidable [1] and moreover, a Mealy machine realizing the specification can be effectively constructed. The realizability problem is \(2\textsc{ExpTime-Complete}\) if \(\mathcal{S}\) is given as an LTL formula [29] and ExpTime-Complete if \(\mathcal{S}\) is given as a universal coBuchi automaton. Theorem 3.1 ([6]): _The realizability problem for a specification \(\mathcal{S}\) given as a universal coBuchi automaton \(\mathcal{A}\) is ExpTime-Complete. Moreover, if \(\mathcal{S}\) is realizable and \(\mathcal{A}\) has \(n\) states, then \(\mathcal{S}\) is realizable by a Mealy machine with \(2^{O(n\log_{2}n)}\) states._ We generalize this result to the following realizability problem which we describe first informally. Given a specification \(\mathcal{S}\) and a preMealy machine \(\mathcal{P}\), the goal is to decide whether \(\mathcal{P}\) can be completed into a Mealy machine which realizes \(\mathcal{S}\). We now define this problem formally.
Given two preMealy machines \(\mathcal{P}_{1},\mathcal{P}_{2}\), we write \(\mathcal{P}_{1}\preceq\mathcal{P}_{2}\) if \(\mathcal{P}_{1}\) is a subgraph of \(\mathcal{P}_{2}\) in the following sense: there exists an injective mapping \(\Phi\) from the states of \(\mathcal{P}_{1}\) to the states of \(\mathcal{P}_{2}\) which preserves the initial state (\(s_{0}\) is the initial state of \(\mathcal{P}_{1}\) iff \(\Phi(s_{0})\) is the initial state of \(\mathcal{P}_{2}\)) and the transitions (\(\Delta_{\mathcal{P}_{1}}(p,\mathsf{i})=(\mathsf{o},q)\) iff \(\Delta_{\mathcal{P}_{2}}(\Phi(p),\mathsf{i})=(\mathsf{o},\Phi(q))\)). As a consequence, \(L(\mathcal{P}_{1})\subseteq L(\mathcal{P}_{2})\) and \(L_{\omega}(\mathcal{P}_{1})\subseteq L_{\omega}(\mathcal{P}_{2})\). Given a preMealy machine \(\mathcal{P}\), we say that a specification \(\mathcal{S}\) _is \(\mathcal{P}\)-realizable_ if there exists a Mealy machine \(\mathcal{M}\) such that \(\mathcal{P}\preceq\mathcal{M}\) and \(\mathcal{M}\) realizes \(\mathcal{S}\). Note that if \(\mathcal{P}\) is a (complete) Mealy machine, \(\mathcal{S}\) is \(\mathcal{P}\)-realizable iff \(\mathcal{P}\) realizes \(\mathcal{S}\). Theorem 3.2: _Given a universal co-Buchi automaton \(\mathcal{A}\) with \(n\) states defining a specification \(\mathcal{S}=L^{\forall}(\mathcal{A})\) and a preMealy machine \(\mathcal{P}\) with \(m\) states and \(n_{h}\) holes, deciding whether \(\mathcal{S}\) is \(\mathcal{P}\)-realizable is ExpTime-hard and in ExpTime (in \(n\) and polynomial in \(m\)). Moreover, if \(\mathcal{S}\) is \(\mathcal{P}\)-realizable, it is \(\mathcal{P}\)-realizable by a Mealy machine with \(m+n_{h}2^{O(n\log_{2}n)}\) states. Hardness holds even if \(\mathcal{P}\) has two states and \(\mathcal{A}\) is a deterministic reachability automaton._ Before proving Theorem 3.2, let us note that the \(\mathcal{P}\)-realizability problem generalizes the classical realizability, as the latter is equivalent to the \(\mathcal{P}_{0}\)-realizability where \(\mathcal{P}_{0}\) is the preMealy machine composed of a single state (which is initial) without any transition. So, we inherit the ExpTime lower bound of Theorem 3.1. However, we prove that the \(\mathcal{P}\)-realizability problem is intrinsically harder: indeed, we show that the ExpTime hardness holds even if \(\mathcal{P}\) is a fixed preMealy machine and \(\mathcal{S}\) is given as a _deterministic reachability_ automaton. This is in contrast to the classical realizability problem: deciding the realizability of a specification given as a deterministic reachability automaton is in PTime [11]. Our synthesis algorithm from specifications and examples extensively relies on successive calls to a \(\mathcal{P}\)-realizability checker, for various preMealy machines \(\mathcal{P}\). However, we show in Sec. 4 that modulo pre-computing, in worst-case exponential time, some symbolic (and in practice compact) representation of some realizable configurations of the specification automaton, all those calls can be done in polynomial time in this representation. Proof of Theorem 3.2: We first prove the upper bound. Let \(Q_{\mathcal{P}}\) be the set of states of \(\mathcal{P}\), \(\Delta_{\mathcal{P}}\) its transition function and \(p_{0}\) its initial state.
For any \(p\in Q_{\mathcal{P}}\), we define its _left language_ \(\mathsf{Left}_{p}\) as \[\mathsf{Left}_{p}=\{u\in(I.O)^{*}\mid\mathsf{Post}_{\mathcal{P}}^{*}(p_{0},u)=p\}\] Then, \(\mathcal{P}\)-realizability is characterized by the following property: Claim: \(\mathcal{S}\) is \(\mathcal{P}\)-realizable iff \(L_{\omega}(\mathcal{P})\subseteq\mathcal{S}\) and for every hole \(h=(p,\mathsf{i})\) of \(\mathcal{P}\), there exists \(\mathsf{o}_{h}\in\mathcal{O}\) and a Mealy machine \(\mathcal{M}_{h}\) such that for all \(u\in\mathsf{Left}_{p}\), \(\mathcal{M}_{h}\) realizes \((u\mathsf{io}_{h})^{-1}\mathcal{S}\) (for an alphabet \(\Sigma\), a set \(A\subseteq\Sigma^{\omega}\) and \(u\in\Sigma^{*}\), \(u^{-1}A=\{v\in\Sigma^{\omega}\mid uv\in A\}\)). Proof of claim: For the 'if' direction, we prove that \(\mathcal{P}\) can be extended into a Mealy machine \(\mathcal{M}\) which \(\mathcal{P}\)-realizes \(\mathcal{S}\) as follows: \(\mathcal{M}\) consists of \(\mathcal{P}\) taken in disjoint union, for all holes \(h\) of \(\mathcal{P}\), with the Mealy machine \(\mathcal{M}_{h}\), extended with the transition \(\Delta_{\mathcal{M}}(h)=(\mathsf{o}_{h},\mathsf{init}_{h})\) where \(\mathsf{init}_{h}\) is the initial state of \(\mathcal{M}_{h}\). Clearly, \(\mathcal{P}\) is a subgraph of \(\mathcal{M}\). We prove that \(\mathcal{M}\) realizes \(\mathcal{S}\). Let \(w\in L_{\omega}(\mathcal{M})\). Suppose that \(w\not\in\mathcal{S}\) and let us derive a contradiction. Since \(L_{\omega}(\mathcal{P})\subseteq L^{\forall}(\mathcal{A})\), \(w\not\in L_{\omega}(\mathcal{P})\). It implies that the execution of \(\mathcal{M}\) on \(w\) necessarily visits a hole \(h=(p,\mathsf{i})\) of \(\mathcal{P}\). So, \(w\) can be decomposed as \(w=u\mathsf{io}_{h}v\) where \(u\) is the longest prefix of \(w\) such that \(u\in\mathsf{Left}_{p}\). Since \(w\not\in\mathcal{S}\), we get that \(v\not\in(u\mathsf{io}_{h})^{-1}\mathcal{S}\). By definition of \(\mathcal{M}\), we have \(v\in L_{\omega}(\mathcal{M}_{h})\), so \(\mathcal{M}_{h}\) does not realize \((u\mathsf{io}_{h})^{-1}\mathcal{S}\), which is a contradiction. Conversely, suppose that \(\mathcal{S}\) is \(\mathcal{P}\)-realizable by some Mealy machine \(\mathcal{M}\). Since \(L_{\omega}(\mathcal{P})\subseteq L_{\omega}(\mathcal{M})\) and \(L_{\omega}(\mathcal{M})\subseteq\mathcal{S}\), we get \(L_{\omega}(\mathcal{P})\subseteq\mathcal{S}\). Now, consider a hole \(h=(p,\mathsf{i})\). Since \(\mathcal{P}\) is a subgraph of \(\mathcal{M}\), \(p\) is a state of \(\mathcal{M}\) and since \(\Delta_{\mathcal{M}}\) is total, there exists \(\mathsf{o}_{h}\in\mathcal{O}\) such that \(\Delta_{\mathcal{M}}(h)=(\mathsf{o}_{h},p^{\prime})\) for some state \(p^{\prime}\) of \(\mathcal{M}\). Consider the machine \(\mathcal{M}_{p^{\prime}}\) which is identical to \(\mathcal{M}\) except that its initial state is \(p^{\prime}\): \(\mathcal{M}_{p^{\prime}}\) is a Mealy machine which realizes \((u\mathsf{io}_{h})^{-1}\mathcal{S}\) for all \(u\in\mathsf{Left}_{p}\). Indeed, let \(v\in L_{\omega}(\mathcal{M}_{p^{\prime}})\). By definition of \(\mathcal{M}_{p^{\prime}}\), we have \(u\mathsf{io}_{h}v\in L_{\omega}(\mathcal{M})\subseteq\mathcal{S}\). Hence, \(v\in(u\mathsf{io}_{h})^{-1}\mathcal{S}\). It remains to show that the characterization of the claim can be decided in ExpTime. First, deciding whether \(L_{\omega}(\mathcal{P})\subseteq L^{\forall}(\mathcal{A})=\mathcal{S}\) is a standard automata inclusion problem.
Indeed, \(\mathcal{P}\) can be viewed as a deterministic Buchi automaton all states of which are accepting, and \(\mathcal{A}\) is a universal co-Buchi automaton, which can be complemented in linear-time into a non-deterministic Buchi automaton \(\mathcal{B}\). Then, it suffices to test whether \(L_{\omega}(\mathcal{P})\cap L^{\exists}(\mathcal{B})=\varnothing\). This is doable in PTime in the size of both machines. So, testing whether \(L_{\omega}(\mathcal{P})\subseteq L^{\forall}(\mathcal{A})=\mathcal{S}\) can be done in PTime. Now, we want to decide the second part of the characterization. Note that given a hole \(h=(p,\mathsf{i})\) and \(\mathsf{o}_{h}\in\mathcal{O}\), there exists a Mealy machine \(\mathcal{M}_{h}\) such that for all \(u\in\mathsf{Left}_{p}\), \(\mathcal{M}_{h}\) realizes \((u\mathsf{io}_{h})^{-1}\mathcal{S}\), iff the specification \(\bigcap_{u\in\mathsf{Left}_{p}}(u\mathsf{io}_{h})^{-1}\mathcal{S}\) is realizable. Given \(h\) and \(\mathsf{o}_{h}\), we construct in linear-time a universal co-Buchi automaton recognizing \(\bigcap_{u\in\mathsf{Left}_{p}}(u\mathsf{io}_{h})^{-1}\mathcal{S}\). First, we compute the set of states \[R_{p}^{\mathcal{A},\mathcal{P}}=\{q\in Q_{\mathcal{A}}\mid\exists u\in(I.O)^{*},\mathsf{Post}_{\mathcal{P}}^{*}(p_{0},u)=p\land\mathsf{Post}_{\mathcal{A}}^{*}(q_{0},u\mathsf{io}_{h})=q\}\] This can be done in PTime. Then, we define the universal co-Buchi automaton denoted \(\mathcal{A}_{p}\) which is exactly \(\mathcal{A}\) where the set of initial states is set to \(R_{p}^{\mathcal{A},\mathcal{P}}\). We have \(L^{\forall}(\mathcal{A}_{p})=\bigcap_{u\in\mathsf{Left}_{p}}(u\mathsf{i}\mathsf{o}_{h})^{-1}\mathcal{S}\), and then we use Theorem 3.1 to decide, in ExpTime in the size of \(\mathcal{A}_{p}\), which is linear in the size of \(\mathcal{A}\), whether \(L^{\forall}(\mathcal{A}_{p})\) is realizable. If \(\mathcal{S}\) is \(\mathcal{P}\)-realizable, then it is \(\mathcal{P}\)-realizable by the machine \(\mathcal{M}\) as constructed in the proof of the claim. For each hole \(h=(p,\mathsf{i})\) of \(\mathcal{P}\), by Theorem 3.1, we can bound the size of the machine \(\mathcal{M}_{h}\) by \(2^{O(n\log_{2}n)}\) where \(n\) is the number of states of \(\mathcal{A}_{p}\), which is exactly the number of states of \(\mathcal{A}\). So, if \(\mathcal{P}\) has \(n_{h}\) holes, \(\mathcal{S}\) is \(\mathcal{P}\)-realizable by a Mealy machine with \(m+n_{h}2^{O(n\log_{2}n)}\) states. For the lower bound, we reduce from the problem of deciding whether the intersection of \(n\) languages of finite trees is non-empty, when those languages are defined by deterministic top-down tree automata. This problem is known to be ExpTime-c [13]. This allows us to show the lower bound for \(\mathcal{P}\)-realizability even for specifications given by deterministic reachability automata. This is in contrast to plain realizability, which is solvable in PTime for this class of specifications [12]. Intuitively, the high-level reason why \(\mathcal{P}\)-realizability is harder than realizability is because \(\mathcal{P}\) imposes strong constraints on the solution. In particular, it enforces that the system which \(\mathcal{P}\)-realizes \(\mathcal{S}\) behaves the same after any prefix which reaches the same state of \(\mathcal{P}\).
We now give the detailed proof to obtain the lower bound. It reduces from the following ExpTime-c problem [13]: given \(n\) deterministic top-down tree automata \((\mathcal{T}_{i})_{i=1}^{n}\), decide whether \(\bigcap_{i=1}^{n}L(\mathcal{T}_{i})\neq\varnothing\). The main idea is already captured by the restricted problem where the \(\mathcal{T}_{i}\) are DFAs, which is known to be PSpace-c, so we first present that case. Let \((\mathcal{D}_{i}=(Q_{i},in_{i},F_{i},\delta_{i}))_{i=1}^{n}\) be \(n\) DFAs over some alphabet \(\Sigma\). We let \(\mathcal{I}=\{\mathsf{i}_{1},\ldots,\mathsf{i}_{n}\}\) and \(\mathcal{O}=\Sigma\cup\{\mathsf{skip},\mathsf{exit}\}\). For all \(j\in\{1,\ldots,n\}\), we let \(\mathcal{I}\otimes L(\mathcal{D}_{j})\) denote the set of words of the form \(\lambda_{1}\sigma_{1}\lambda_{2}\sigma_{2}\ldots\lambda_{k}\sigma_{k}\in(\mathcal{I}\mathcal{O})^{*}\) such that \(\sigma_{1}\ldots\sigma_{k}\in L(\mathcal{D}_{j})\). Consider the following specification: \[\mathcal{S}=\bigcup_{j=1}^{n}\{\mathsf{i}_{j}.\mathsf{skip}.u.\mathsf{i}.\mathsf{exit}.x\mid u\in\mathcal{I}\otimes L(\mathcal{D}_{j}),\mathsf{i}\in\mathcal{I},x\in(\mathcal{I}\mathcal{O})^{\omega}\}\] We also define the following 2-state preMealy machine \(\mathcal{P}\): from its initial state \(m_{0}\), whenever it reads \(\mathsf{i}_{j}\) for any \(j=1,\ldots,n\), it outputs \(\mathsf{skip}\) and moves to its second state \(m\), which is a hole. We prove that: 1. \(\mathcal{S}\) is recognizable by a deterministic reachability automaton \(\mathcal{A}_{\mathcal{S}}\) of polynomial size; 2. \(\mathcal{S}\) is \(\mathcal{P}\)-realizable iff \(\bigcap_{i=1}^{n}L(\mathcal{D}_{i})\neq\varnothing\). For the first assertion, each automaton \(\mathcal{D}_{i}\) is modified in such a way that any input symbol from \(\mathcal{I}\) can be read in between two output letters, so that it recognizes \(\mathcal{I}\otimes L(\mathcal{D}_{i})\). Let us write \(\mathcal{I}\otimes\mathcal{D}_{i}\) for the modified automaton, and assume all the automata \(\mathcal{I}\otimes\mathcal{D}_{i}\) have disjoint sets of states. From its single initial state, \(\mathcal{A}_{\mathcal{S}}\) can read, for all \(j=1,\ldots,n\), the sequence of two symbols \(\mathsf{i}_{j}.\mathsf{skip}\) and go to the initial state of \(\mathcal{I}\otimes\mathcal{D}_{j}\). Additionally, we add a single state \(q_{reach}\), the unique accepting state (in the sense that it has colour \(0\) while any other state has colour \(1\)). From \(q_{reach}\), any sequence is accepting (it is a trap). Finally, for all accepting states \(q_{f}\) of \(\mathcal{D}_{j}\), and all inputs \(\mathsf{i}\in\mathcal{I}\), we make \(\mathcal{A}_{\mathcal{S}}\) transition to \(q_{reach}\) when reading \(\mathsf{i}.\mathsf{exit}\) from state \(q_{f}\). For the second assertion, the main intuitive idea behind its proof is that \(\mathcal{P}\) transitions to the same state \(m\) for any possible initial input, while \(\mathcal{A}_{\mathcal{S}}\) transitions to different states.
Therefore, \(\mathcal{P}\) enforces that whatever the initial input \(\mathsf{i}_{j}\), the same strategy should be played afterwards, while on the other hand, the definition of \(\mathcal{S}\) depends on the initial input. Formally, suppose that \(\mathcal{M}\) is a Mealy machine \(\mathcal{P}\)-realizing \(\mathcal{S}\). Then, since \(\mathcal{P}\) is a subgraph of \(\mathcal{M}\), the language of \(\mathcal{M}\) is necessarily of the form \[L_{\omega}(\mathcal{M})=\mathcal{I}.\mathsf{skip}.L^{\prime}\qquad\qquad(1)\] for some \(L^{\prime}\) such that \(\mathcal{I}.\mathsf{skip}.L^{\prime}\subseteq\mathcal{S}\). Let \(w\in L_{\omega}(\mathcal{M})\). It is necessarily of the form \(w=\mathsf{i}_{j}.\mathsf{skip}.u.\mathsf{i}.\mathsf{exit}.x\) for some \(j=1,\ldots,n\), \(u\in\mathcal{I}\otimes L(\mathcal{D}_{j})\), \(\mathsf{i}\in\mathcal{I}\) and \(x\in(\mathcal{I}\mathcal{O})^{\omega}\). From (1), we get that for any other \(j^{\prime}\neq j\), \(w^{\prime}=\mathsf{i}_{j^{\prime}}.\mathsf{skip}.u.\mathsf{i}.\mathsf{exit}.x\in L_{\omega}(\mathcal{M})\) and therefore \(u\in\mathcal{I}\otimes L(\mathcal{D}_{j^{\prime}})\). So, the \(\Sigma\)-projection of \(u\) belongs to every \(L(\mathcal{D}_{j})\) and thus \(\bigcap_{j=1}^{n}L(\mathcal{D}_{j})\neq\varnothing\). The converse is proved similarly: if \(v\in\bigcap_{j=1}^{n}L(\mathcal{D}_{j})\), then to \(\mathcal{P}\)-realize \(\mathcal{S}\), it suffices for the system to play \(\mathsf{skip}\), then \(v\), and then \(\mathsf{exit}\) forever. This strategy can easily be described by a Mealy machine extending \(\mathcal{P}\). This shows \(\mathsf{PSpace}\)-hardness. The extension of the latter reduction to deterministic top-down tree automata (over finite binary \(\Sigma\)-trees) is standard: the environment picks the direction in \(\{1,2\}\) in the tree while the system picks the labels. We let \(\mathcal{I}=\{\mathsf{i}_{1},\ldots,\mathsf{i}_{n}\}\cup\{1,2\}\) and \(\mathcal{O}=\Sigma\cup\{\mathsf{exit},\mathsf{skip}\}\) as before. The specification \(\mathcal{S}\) is modified as follows: \(\mathcal{S}=\bigcup_{j=1}^{n}\mathcal{S}_{j}\) where each \(\mathcal{S}_{j}\) is the set of words of the form \(\mathsf{i}_{j}.\mathsf{skip}.u.\mathsf{i}.\mathsf{exit}.x\) such that there exists a finite binary tree \(t\in L(\mathcal{T}_{j})\) such that \(u\) is a root-to-leaf branch of \(t\), i.e. \(u=d_{1}\sigma_{1}\ldots d_{k}\sigma_{k}\) where each \(d_{i}\in\{1,2\}\) is a direction, and each \(\sigma_{i}\) is the label of the node of \(t\) identified by the root-to-node path \(d_{1}\ldots d_{i}\). The preMealy machine \(\mathcal{P}\) is the same as before, and it is easily seen that the new specification \(\mathcal{S}\) is definable by a deterministic reachability automaton of polynomial size: this is due to the fact that the tree automata are deterministic top-down, and the path languages of deterministic top-down tree automata are regular, recognizable by DFAs of polynomial size [13]. Let us sketch the correctness of the construction. If \(t\in\bigcap_{i=1}^{n}L(\mathcal{T}_{i})\), then \(\mathcal{P}\) can be extended into a full Mealy machine which, after the state \(m\), exactly mimics the structure of \(t\): states are paths in \(t\) and, when getting a new direction as input, it outputs the label of \(t\) reached by following that direction. If instead the current path is a leaf of \(t\), then the Mealy machine outputs \(\mathsf{exit}\) forever, whatever it receives as input in the future.
This machine is guaranteed to realize the specification, because whatever the initial input, all the branches of the tree induced by the choices of the environment are accepted by all the tree automata. Conversely, if there is a Mealy machine \(\mathcal{M}\) extending \(\mathcal{P}\) and realizing the specification, then whatever the initial input, it plays the same strategy afterwards. It is then possible to reconstruct a tree accepted by all the tree automata using the choices made by the environment (directions), which describe paths in the tree, and the choices made by the system, which correspond to the labels of the nodes identified by those paths. Since exit must eventually be output on all outcomes, the tree constructed in this way is guaranteed to be finite.

## 3 Synthesis from safety specifications and examples

In this section, we present the learning framework we use to synthesise Mealy machines from examples and safety specifications. Its generalization to any \(\omega\)-regular specification is described in Section 4 and solved by reduction to safety specifications. It is a two-phase algorithm that is informally described here: (1) it tries to generalize the examples as much as possible while maintaining realizability of the specification, and outputs a preMealy machine; (2) it completes the preMealy machine into a full Mealy machine.

### Phase 1: Generalizing the examples

This phase exploits the examples by generalizing them as much as possible while maintaining realizability of the specification. It outputs a preMealy machine which is consistent with the examples and realizes the specification, if it exists. It is an RPNI-like learning algorithm [23, 22] which includes specific tests to maintain realizability of the specification. The first step of this phase involves building a tree-shaped preMealy machine whose accepted language is exactly the set of prefixes \(\mathsf{Prefs}(E)\) of the given set of examples \(E\), called a _prefix-tree acceptor_ (PTA). Formally, we define the PTA as follows:

Prefix Tree Acceptor. A set \(E\subseteq(\mathcal{IO})^{*}\) (not necessarily finite) is _consistent_ if for all \(e\in\mathsf{Prefs}(E)\cap(\mathcal{IO})^{*}\mathcal{I}\), there exists a unique output denoted \(\mathsf{o}_{E}(e)\in\mathcal{O}\) such that \(e.\mathsf{o}_{E}(e)\in\mathsf{Prefs}(E)\). When \(E\) is consistent and finite, we can canonically associate with \(E\) a tree-shaped preMealy machine denoted \(\mathsf{PTA}(E)\) such that \(L(\mathsf{PTA}(E))=\mathsf{Prefs}(E)\cap(\mathcal{IO})^{*}\), as follows: \[\mathsf{PTA}(E)=(\mathsf{Prefs}(E)\cap(\mathcal{IO})^{*},\epsilon,(e,\mathsf{i})\mapsto(\mathsf{o}_{E}(e\mathsf{i}),e\mathsf{i}\mathsf{o}_{E}(e\mathsf{i})))\]

Example 4: Let \(\mathcal{I}=\{\mathsf{i},\mathsf{i}^{\prime}\}\) and \(\mathcal{O}=\{\mathsf{o},\mathsf{o}^{\prime}\}\) and consider \(E_{0}=\{\mathsf{i}^{\prime}\mathsf{o},\mathsf{ioioi}^{\prime}\mathsf{o}^{\prime}\}\). Then \(E_{0}\) is consistent and \(\mathsf{PTA}(E_{0})\) is depicted on the left of Figure 2. For conciseness, we denote its states by \(0,\ldots,4\) where \(0=\epsilon\), \(1=\mathsf{i}^{\prime}\mathsf{o}\), \(2=\mathsf{io}\), \(3=\mathsf{ioio}\) and \(4=\mathsf{ioioi}^{\prime}\mathsf{o}^{\prime}\).
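The construction of \(\mathsf{PTA}(E)\) is easy to make executable. The following is a minimal sketch (our own, not the tool's code; it assumes examples encoded as tuples of alternating input/output symbols, and the function name `build_pta` is ours):

```python
def build_pta(examples):
    """PTA(E) for a consistent, finite E: states are the even-length
    prefixes of E, the initial state is the empty prefix, and
    delta[(state, i)] = (o_E(state.i), state.i.o_E(state.i))."""
    states, delta = {()}, {}
    for e in examples:
        for k in range(1, len(e), 2):        # odd-length prefixes end in I
            src, i, o = tuple(e[:k - 1]), e[k - 1], e[k]
            tgt = tuple(e[:k + 1])
            if delta.get((src, i), (o, tgt)) != (o, tgt):
                raise ValueError("E is not consistent")
            delta[(src, i)] = (o, tgt)
            states.add(tgt)
    return states, (), delta                 # (states, initial state, Delta)
```

On Example 4, `build_pta([("i'", "o"), ("i", "o", "i", "o", "i'", "o'")])` yields exactly the five states \(0,\ldots,4\) and the transitions depicted on the left of Figure 2.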
In the next step of this phase, the algorithm tries to merge as many states of the PTA as possible. The strategy used to select the state with which a given state is merged is a parameter of the algorithm, called a _merging strategy_ \(\sigma_{G}\). Formally, a _merging_ strategy \(\sigma_{G}\) is defined over \(4\)-tuples \((\mathcal{M},m,E,X)\) where \(\mathcal{M}\) is a preMealy machine, \(m\) is a state of \(\mathcal{M}\), \(E\) is a set of examples and \(X\) is a subset of states of \(\mathcal{M}\) (the candidate states to merge \(m\) with), and returns a state of \(X\), i.e., \(\sigma_{G}(\mathcal{M},m,E,X)\in X\). The formal definition of merging is as follows:

State merging. We now define the classical state-merging operation of RPNI adapted to Mealy machines. An equivalence relation \(\sim\) over \(M\) is called _a congruence_ for \(\mathcal{M}\) if for all \(x\sim x^{\prime}\) and \(\mathsf{i}\in\mathcal{I}\), if \(\Delta_{\mathcal{M}}(x,\mathsf{i})\) and \(\Delta_{\mathcal{M}}(x^{\prime},\mathsf{i})\) are both defined, then \(\mathsf{Post}_{\mathcal{M}}(x,\mathsf{i})\sim\mathsf{Post}_{\mathcal{M}}(x^{\prime},\mathsf{i})\). It is a _Mealy-congruence_ for \(\mathcal{M}\) if additionally \(\mathsf{Out}_{\mathcal{M}}(x,\mathsf{i})=\mathsf{Out}_{\mathcal{M}}(x^{\prime},\mathsf{i})\). When \(\mathcal{M}\) is clear from the context, we simply say congruence and Mealy-congruence. If \(\sim\) is a Mealy-congruence, then the following preMealy machine, called the quotient of \(\mathcal{M}\) by \(\sim\), is well defined (it does not depend on the choice of representatives): \(\mathcal{M}/_{\sim}=(M/_{\sim},[m_{\mathsf{init}}],([s],\mathsf{i})\mapsto(\mathsf{Out}(s,\mathsf{i}),[\mathsf{Post}(s,\mathsf{i})]))\). In this definition, \([s]\) denotes the class of \(s\) by \(\sim\), and we take a representative \(s\) such that \(\Delta(s,\mathsf{i})\) is defined. If no such representative exists, the transition is undefined on \(\mathsf{i}\).

The pseudo-code for Phase 1 is given by Algo 1. We provide here a running example to better illustrate the working of the algorithm. Initially, the algorithm tests whether the set of examples \(E\) is consistent2 and, if that is the case, whether \(\mathsf{PTA}(E)\) can be completed into a Mealy machine realizing the given specification \(\mathcal{S}\), thanks to Theorem 2. Footnote 2: \(E\) is consistent if the outputs uniquely depend on the prefixes. Formally, it means that for all prefixes \(u\in\mathsf{Prefs}(E)\cap(\mathcal{IO})^{*}\mathcal{I}\), there is a unique output \(\mathsf{o}\in\mathcal{O}\) such that \(u\mathsf{o}\in\mathsf{Prefs}(E)\).

Example 5: [Synthesis from \(\varphi^{\mathsf{ME}}_{\mathsf{CORE}}\) and examples] Let us consider the classical problem of mutual exclusion described in Example 3 with the LTL specification \(\varphi^{\mathsf{ME}}_{\mathsf{CORE}}\) and the prefixes of executions:

1. \(\{!r_{1},!r_{2}\}\).\(\{!g_{1},!g_{2}\}\#\{r_{1},!r_{2}\}\).\(\{g_{1},!g_{2}\}\#\{!r_{1},r_{2}\}\).\(\{!g_{1},g_{2}\}\)
2. \(\{r_{1},r_{2}\}\).\(\{g_{1},!g_{2}\}\#\{!r_{1},!r_{2}\}\).\(\{!g_{1},g_{2}\}\)

We begin by building the \(\mathsf{PTA}\) as shown in Fig. 3 and then check if \(\varphi^{\mathsf{ME}}_{\mathsf{CORE}}\) is \(\mathsf{PTA}\)-realizable.

Figure 2: The preMealy machine \(\mathsf{PTA}(\{\mathsf{i}^{\prime}\mathsf{o},\mathsf{ioioi}^{\prime}\mathsf{o}^{\prime}\})\) of Example 4 and its quotient by the equivalence relation induced by the partition \(\{\{0,1\},\{2,3,4\}\}\) as described in Example 6.
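To illustrate the \(\mathsf{Mergeable}\) and \(\mathsf{MergeClass}\) operations used in Algo 1, here is a sketch (ours; the name `try_merge` and the encoding of \(\sim\) as a state-to-representative map are our assumptions) of an RPNI-style merge with propagation. It returns the coarsened relation, or `None` when the merge would break the Mealy-congruence condition:

```python
def try_merge(delta, classes, x, y):
    """Attempt to merge the classes of states x and y of a preMealy
    machine given as delta[(state, i)] = (output, successor); `classes`
    maps each state to a class representative.  Returns the coarsened
    map, or None if some merged class would carry two different outputs
    on the same input (i.e. the result is not a Mealy-congruence)."""
    classes = dict(classes)
    pending = [(x, y)]
    while pending:
        s, t = pending.pop()
        a, b = classes[s], classes[t]
        if a == b:
            continue
        for u in classes:                      # fold class b into class a
            if classes[u] == b:
                classes[u] = a
        per_input = {}                         # input -> (output, successors)
        for (u, i), (o, v) in delta.items():
            if classes[u] != a:
                continue
            if i in per_input and per_input[i][0] != o:
                return None                    # output conflict on input i
            per_input.setdefault(i, (o, []))[1].append(v)
        for o, succs in per_input.values():    # successors must merge too
            for v in succs[1:]:
                pending.append((succs[0], v))
    return classes
```

In this encoding, \(\mathsf{Mergeable}(\mathsf{PTA}(E),\sim,e,e^{\prime})\) holds iff `try_merge` returns a map rather than `None`; the realizability filter of line 6 of Algo 1 would then be applied to the resulting quotient machine.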
If that is the case, then it takes all prefixes of \(E\) as the set of examples, and enters a loop which iteratively coarsens a congruence \(\sim\) over the states of \(\mathsf{PTA}(E)\), by merging some of its classes. The congruence \(\sim\) is initially the finest equivalence relation. It does the coarsening in a specific order: examples (which are states of \(\mathsf{PTA}(E)\)) are taken in length-lexicographic order. When entering the loop with example \(e\), the algorithm computes at line 5 all the states, i.e., all the examples \(e^{\prime}\) which have been processed already by the loop (\(e^{\prime}\prec_{ll}e\)) and whose current class can be merged with the class of \(e\) (predicate \(\mathsf{Mergeable}(\mathsf{PTA}(E),\sim,e,e^{\prime})\)). State merging is a standard operation in automata learning algorithms which intuitively means that merging the \(\sim\)-class of \(e\) and the \(\sim\)-class of \(e^{\prime}\), and propagating this merge to the descendants of \(e\) and \(e^{\prime}\), does not result in any conflict. At line 6, it filters the previous set by keeping only the states which, when merged with \(e\), produce a preMealy machine which can be completed into a Mealy machine realizing \(\mathcal{S}\) (again by Theorem 2). If after the filtering there are still several candidates for the merge, one of them is selected with the merging strategy \(\sigma_{G}\) and the equivalence relation is then coarsened via class merging (operation \(\mathsf{MergeClass}(\mathsf{PTA}(E),\sim,e,e^{\prime})\)). At the end, the algorithm returns the quotient of \(\mathsf{PTA}(E)\) by the computed Mealy-congruence. As a side remark, when \(\mathcal{S}\) is universal, i.e. \(\mathcal{S}=(\mathcal{IO})^{\omega}\), then it is realizable by _any_ Mealy machine and therefore line 6 does not filter any of the candidates for the merge. So, when \(\mathcal{S}\) is universal, Algo 1 can be seen as an RPNI variant for learning preMealy machines.

Example 5 contd: [Synthesis from \(\varphi^{\mathsf{ME}}_{\mathsf{CORE}}\) and examples] We note that each state \(m\) of the \(\mathsf{PTA}\) in Fig. 3 is the \(\sim\)-class of \(e\), where \(e\) is the shortest prefix such that \(\Delta(q_{\mathsf{init}},e)=m\). We then check if \(\varphi^{\mathsf{ME}}_{\mathsf{CORE}}\) is \(\mathsf{PTA}\)-realizable3, which we find to be the case. We note that the states are labelled in the length-lexicographic order. We then begin the process of merging states in the aforementioned order, as shown in Fig. 4 and Fig. 5.

### Phase 2: completion of preMealy machines into Mealy machines

As it only constructs the PTA and tries to merge its states, the generalization phase might not return a (complete) Mealy machine. In other words, the machine it returns might still contain some holes (missing transitions). The objective of this second phase is to complete those holes into a Mealy machine, while realizing the specification. More precisely, when a transition is not defined from some state \(m\) and some input \(\mathsf{i}\in\mathcal{I}\), the algorithm must select an output symbol \(\mathsf{o}\in\mathcal{O}\) and a state \(m^{\prime}\) to transition to, which can be either an existing state or a new state to be created (in that case, we write \(m^{\prime}=\mathsf{fresh}\) to denote the fact that \(m^{\prime}\) is a fresh state).
In our implementation, if it is possible to reuse a state \(m^{\prime}\) that was created during the generalization phase, it is favoured over other states, in order to exploit the examples. However, the algorithm for the completion phase that we describe now does not depend on any particular strategy to pick states. Therefore, it is parameterized by a _completion strategy_ \(\sigma_{C}\), defined over all \(4\)-tuples \((\mathcal{M},m,\mathsf{i},X)\) where \(\mathcal{M}\) is a preMealy machine with set of states \(M\), \((m,\mathsf{i})\) is a hole of \(\mathcal{M}\), and \(X\subseteq\mathcal{O}\times(M\cup\{\mathsf{fresh}\})\) is a list of candidate pairs \((\mathsf{o},m^{\prime})\). It returns an element of \(X\), i.e., \(\sigma_{C}(\mathcal{M},m,\mathsf{i},X)\in X\).

Figure 4: The merging phase of the preMealy machine \(\mathsf{PTA}\) of Example 5.

In addition to \(\sigma_{C}\), the completion algorithm takes as input a preMealy machine \(\mathcal{M}_{0}\) and a specification \(\mathcal{S}\), and outputs a Mealy machine which \(\mathcal{M}_{0}\)-realizes \(\mathcal{S}\), if it exists. The pseudo-code is given in Algo 2. Initially, it tests whether \(\mathcal{S}\) is \(\mathcal{M}_{0}\)-realizable, otherwise it returns UNREAL. Then, it keeps on completing holes of \(\mathcal{M}_{0}\). The computation of the list of output/state candidates is done at the loop of line 5. Note that the **for**-loop iterates over \(M\cup\{\mathsf{fresh}()\}\), where \(\mathsf{fresh}()\) is a procedure that returns a fresh state not in \(M\). The algorithm maintains the invariant that at any iteration of the **while**-loop, \(\mathcal{S}\) is \(\mathcal{M}\)-realizable, thanks to the test at line 7, based on Theorem 2. Therefore, the list of candidates is necessarily non-empty. Amongst those candidates, a single one is selected and the transition on \((m,\mathsf{i})\) is added to \(\mathcal{M}\) accordingly at line 10.

### Two-phase synthesis algorithm from specifications and examples

The two-phase synthesis algorithm for safety specifications and examples, called \(\textsc{SynthSafe}(E,\mathcal{S},\sigma_{G},\sigma_{C})\), works as follows: it takes as input a set of examples \(E\), a specification \(\mathcal{S}\) given as a deterministic safety automaton, a generalization strategy \(\sigma_{G}\) and a completion strategy \(\sigma_{C}\). It returns a Mealy machine \(\mathcal{M}\) which realizes \(\mathcal{S}\) and such that \(E\subseteq L(\mathcal{M})\), if it exists. In a first step, it calls \(\textsc{Gen}(E,\mathcal{S},\sigma_{G})\). If this call returns UNREAL, then \(\textsc{SynthSafe}\) returns UNREAL as well. Otherwise, the call to Gen returns a preMealy machine \(\mathcal{M}_{0}\). In a second step, SynthSafe calls \(\textsc{Comp}(\mathcal{M}_{0},\mathcal{S},\sigma_{C})\). If this call returns UNREAL, so does SynthSafe; otherwise SynthSafe returns the Mealy machine computed by Comp. The pseudo-code of SynthSafe can be found in Algo. 3.

Figure 5: The merging phase of the preMealy machine \(\mathsf{PTA}\) of Example 5, contd.

The completion procedure may not terminate for some completion strategies. This is because the completion strategy could, for instance, keep on selecting pairs of the form \((\mathsf{o},m^{\prime})\) where \(m^{\prime}\) is a fresh state. However, we prove that it always terminates for _lazy_ completion strategies. A completion strategy \(\sigma_{C}\) is said to be _lazy_ if it favours existing states, which formally means that if \(X\setminus(\mathcal{O}\times\{\mathsf{fresh}\})\neq\varnothing\), then \(\sigma_{C}(\mathcal{M},m,\mathsf{i},X)\not\in\mathcal{O}\times\{\mathsf{fresh}\}\).
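A skeleton of the completion loop of Algo 2 may look as follows (a sketch under our own interface assumptions: `M.add`/`M.remove` mutate the machine, a `FRESH` sentinel stands for \(\mathsf{fresh}()\), and `realizable` is the decision procedure of Theorem 2):

```python
FRESH = object()  # sentinel standing for fresh()

def complete(M, holes, candidates, realizable, sigma_C):
    """Complete the preMealy machine M into a Mealy machine realizing
    the specification, following the structure of Algo 2."""
    if not realizable(M):
        return None                       # UNREAL
    while (hole := holes(M)) is not None: # pick any missing (state, input)
        m, i = hole
        X = []
        for o, t in candidates(M, m, i):  # t ranges over states(M) + [FRESH]
            M.add(m, i, o, t)             # tentatively add the transition
            if realizable(M):             # keep only realizability-preserving
                X.append((o, t))          # candidates (test of line 7)
            M.remove(m, i)
        o, t = sigma_C(M, m, i, X)        # X is non-empty by the invariant
        M.add(m, i, o, t)                 # commit (line 10)
    return M
```

With a lazy `sigma_C` (one that returns a pair with `t is not FRESH` whenever such a pair exists in `X`), this loop terminates, as established next.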
The first theorem establishes correctness and termination of the algorithm for lazy completion strategies (we assume that the functions \(\sigma_{G}\) and \(\sigma_{C}\) are computable in worst-case exponential time in the size of their inputs). **Theorem 3** (termination and correctness).: _For all finite sets of examples \(E\subseteq(\mathcal{I}.\mathcal{O})^{*}\), all specifications \(\mathcal{S}\subseteq(\mathcal{I}.\mathcal{O})^{\omega}\) given as a deterministic safety automaton \(\mathcal{A}\) with \(n\) states, all merging strategies \(\sigma_{G}\) and all completion strategies \(\sigma_{C}\), if SynthSafe\((E,\mathcal{S},\sigma_{G},\sigma_{C})\) terminates, then it returns a Mealy machine \(\mathcal{M}\) such that \(E\subseteq L(\mathcal{M})\) and \(\mathcal{M}\) realizes \(\mathcal{S}\), if it exists; otherwise it returns UNREAL. Moreover, SynthSafe\((E,\mathcal{S},\sigma_{G},\sigma_{C})\) terminates if \(\sigma_{C}\) is lazy, in worst-case exponential time (polynomial in the size4 of \(E\) and exponential in \(n\))._ Footnote 4: The size of \(E\) is the sum of the lengths of the examples of \(E\). The proof of the latter theorem is a consequence of several results proved on the generalization and completion phases, and is given in App. D.9. Intuitively, the complexity is dominated by the complexity of checking \(\mathcal{P}\)-realizability (Theorem 2) and the termination time of the completion procedure, which we prove to be worst-case exponential in \(n\). The assumption that the specification is a deterministic safety automaton \(\mathcal{A}\) is used when proving termination of the completion algorithm. Intuitively, to any state \(m\) of the preMealy machine \(\mathcal{M}\) constructed so far, we associate the subset of states \(Q_{m}\) of \(\mathcal{A}\) which are reachable in \(\mathcal{A}\) when reading prefixes that reach \(m\) in \(\mathcal{M}\). We prove that when a transition to a fresh state \(m^{\prime}\) is added to \(\mathcal{M}\) and \(Q_{m^{\prime}}\subseteq Q_{m}\) for some existing state \(m\), then \(m\) could have been reused instead of \(m^{\prime}\) (Lemma 11 in App. D.6). This is possible as such subsets are sufficient to summarize the behaviour of \(\mathcal{A}\) on infinite suffixes, because it is a safety condition. We also show a monotonicity property of the subsets \(Q_{m}\) when more transitions are added to \(\mathcal{M}\), allowing us to bound the termination time by the length of the longest chain of \(\subseteq\)-antichains of subsets, which is worst-case exponential in the number of states of \(\mathcal{A}\) (Lemma 13 in App. D.6). A Mealy machine \(\mathcal{T}\) is minimal if for all Mealy machines \(\mathcal{M}\) such that \(L(\mathcal{T})=L(\mathcal{M})\), the number of states of \(\mathcal{M}\) is at least that of \(\mathcal{T}\). The next result, proved in App. D.10, states that any minimal Mealy machine realizing a specification \(\mathcal{S}\) can be returned by our synthesis algorithm, provided representative examples are given.
**Theorem 4** (Mealy completeness).: _For all specifications \(\mathcal{S}\subseteq(\mathcal{I}.\mathcal{O})^{\omega}\) given as a deterministic safety automaton, for all minimal Mealy machines \(\mathcal{M}\) realizing \(\mathcal{S}\), there exists a finite set of examples \(E\subseteq(\mathcal{I}.\mathcal{O})^{*}\), of size polynomial in the size of \(\mathcal{M}\), such that for all generalizing strategies \(\sigma_{G}\) and completion strategies \(\sigma_{C}\), and all sets of examples \(E^{\prime}\) s.t. \(E\subseteq E^{\prime}\subseteq L(\mathcal{M})\), \(\textsc{SynthSafe}(E^{\prime},\mathcal{S},\sigma_{G},\sigma_{C})=\mathcal{M}\)._

The polynomial upper bound given in the statement of Theorem 4 is more precisely the following: the cardinality of \(E\) is \(O(m+n^{2})\) where \(n\) is the number of states of \(\mathcal{M}\) while \(m\) is its number of transitions. Moreover, each example \(e\in E\) has length \(O(n^{2})\). More details can be found in Remark 1.

```
Input:  A specification \(\mathcal{S}\subseteq(\mathcal{I}.\mathcal{O})^{\omega}\) given as a deterministic safety automaton,
        a finite set of examples \(E\subseteq(\mathcal{I}.\mathcal{O})^{*}\), a generalization strategy \(\sigma_{G}\)
        and a completion strategy \(\sigma_{C}\)
Output: A Mealy machine \(\mathcal{M}\) such that \(E\subseteq L(\mathcal{M})\) and \(\mathcal{M}\) realizes \(\mathcal{S}\)
        if it exists, otherwise UNREAL.
1  if \(\textsc{Gen}(E,\mathcal{S},\sigma_{G})\neq\) UNREAL then
2      \(\mathcal{M}_{0}\leftarrow\textsc{Gen}(E,\mathcal{S},\sigma_{G})\)   // returns a preMealy machine generalizing the set of
                                           // examples according to \(\sigma_{G}\), such that \(\mathcal{S}\) is \(\mathcal{M}_{0}\)-realizable
3  else
4      return UNREAL
5  if \(\textsc{Comp}(\mathcal{M}_{0},\mathcal{S},\sigma_{C})\neq\) UNREAL then
6      \(\mathcal{M}\leftarrow\textsc{Comp}(\mathcal{M}_{0},\mathcal{S},\sigma_{C})\)   // complete \(\mathcal{M}_{0}\) by creating new states
                                           // or reusing states according to \(\sigma_{C}\)
7      return \(\mathcal{M}\)
8  else
9      return UNREAL
```
**Algorithm 3** SynthSafe(\(E\),\(\mathcal{S}\),\(\sigma_{G}\),\(\sigma_{C}\)) – synthesis algorithm from specification and examples

Remark 1: We bound here the size of the characteristic sample \(E_{\mathcal{T}}\). Let \(n\) and \(m\) be the number of states and transitions of \(\mathcal{T}\) respectively. Then, for all states \(t\), \(s_{t}\) has length at most \(n-1\), and so for all \(p=(t,\mathsf{i})\) such that \(\Delta(p)\) is defined, \(e_{p}=f_{\mathsf{i}\mathsf{o}}^{\mathcal{T}}(s_{t})\) has length at most \(2n\). Given two different states \(t\neq t^{\prime}\), \(d_{t,t^{\prime}}\) has length at most \(n^{2}\). Therefore, \(v_{t,t^{\prime}}\) has length at most \(2(n+n^{2})\). There are at most \(m\) words \(e_{p}\) and \(n^{2}\) words \(v_{t,t^{\prime}}\). So overall, the cardinality of \(E_{\mathcal{T}}\) is bounded by \(m+n^{2}\) and its size is bounded by \(mn+2(n^{3}+n^{4})\).

## 4 Synthesis from \(\omega\)-regular specifications and examples

We now consider the case where the specification \(\mathcal{S}\) is given as a universal coBuchi automaton. We consider this class of specifications as it is complete for \(\omega\)-regular languages and allows for compact symbolic representations. Later in this section, we consider the case of LTL specifications.

Specifications given as universal coBuchi automata. Our solution for \(\omega\)-regular specifications relies on a reduction to the safety case treated in Sec. 3.
It relies on previous works that develop so-called Safraless algorithms for \(\omega\)-regular reactive synthesis [26, 31, 20]. The main idea is to strengthen the acceptance condition of the automaton from coBuchi to \(K\)-coBuchi, which is a safety acceptance condition. This is complete for the plain synthesis problem (without examples) if \(K\) is large enough (in the worst case exponential in the number of states of the automaton, see for instance [20]). Moreover, it allows for incremental synthesis algorithms: if the specification defined by the automaton with a \(k\)-coBuchi acceptance condition is realizable, for \(k\leq K\), so is the specification defined by taking the \(K\)-coBuchi acceptance condition. Here, as we also take examples into account, we need to slightly adapt the results.

**Theorem 5**.: _Given a universal co-Buchi automaton \(\mathcal{A}\) with \(n\) states defining a specification \(\mathcal{S}=L^{\forall}(\mathcal{A})\) and a preMealy machine \(\mathcal{P}\) with \(m\) states, we have that \(\mathcal{S}\) is \(\mathcal{P}\)-realizable if and only if \(\mathcal{S}^{\prime}=L^{\forall}_{K}(\mathcal{A})\) is \(\mathcal{P}\)-realizable for \(K=nm|\mathcal{I}|2^{O(n\log_{2}n)}\)._

Proof.: By Theorem 2 (more precisely, by the bound established in its proof), given a universal co-Buchi automaton \(\mathcal{A}\) with \(n\) states defining a specification \(\mathcal{S}\), and a preMealy machine \(\mathcal{P}\) with \(m\) states and \(n_{h}\) holes, \(\mathcal{S}\) is \(\mathcal{P}\)-realizable iff it is \(\mathcal{P}\)-realizable by a Mealy machine with \(m+n_{h}2^{O(n\log_{2}n)}\) states. Let \(\mathcal{M}\) be such a Mealy machine. The rest of the proof relies on the following lemma:

Lemma 1 ([20]).: _Let \(\mathcal{A}\) be a universal coBuchi automaton with \(\alpha\) states and \(\mathcal{M}\) a Mealy machine with \(\beta\) states. Then \(L_{\omega}(\mathcal{M})\subseteq L^{\forall}(\mathcal{A})\) iff \(L_{\omega}(\mathcal{M})\subseteq L^{\forall}_{k}(\mathcal{A})\) for \(k=\alpha\times\beta\)._

Therefore, we get that \(\mathcal{M}\) realizes \(L^{\forall}_{K}(\mathcal{A})\) for \(K=n\times(m+n_{h}2^{O(n\log_{2}n)})\leq nm|\mathcal{I}|2^{O(n\log_{2}n)}\). Conversely, any machine realizing \(L^{\forall}_{k}(\mathcal{A})\), for any \(k\), also realizes \(L^{\forall}(\mathcal{A})\).

The lemma below follows immediately:

Lemma 2.: _For all co-Buchi automata \(\mathcal{A}\), for all preMealy machines \(\mathcal{P}\), and for all \(k_{1}\leq k_{2}\), we have that \(L^{\forall}_{k_{1}}(\mathcal{A})\subseteq L^{\forall}_{k_{2}}(\mathcal{A})\), and so if \(L^{\forall}_{k_{1}}(\mathcal{A})\) is \(\mathcal{P}\)-realizable then \(L^{\forall}_{k_{2}}(\mathcal{A})\) is \(\mathcal{P}\)-realizable. Furthermore, for all \(k\geq 0\), if \(\mathcal{S}^{\prime}=L^{\forall}_{k}(\mathcal{A})\) is \(\mathcal{P}\)-realizable then \(\mathcal{S}=L^{\forall}(\mathcal{A})\) is \(\mathcal{P}\)-realizable._

Thanks to the latter two results applied to \(\mathcal{P}=\mathsf{PTA}(E)\) for a set \(E\) of examples of size \(m\), we can design an algorithm for synthesising Mealy machines from a specification defined by a universal coBuchi automaton \(\mathcal{A}\) with \(n\) states and \(E\): it calls SynthSafe on the safety specification \(L^{\forall}_{k}(\mathcal{A})\) and \(E\) for increasing values of \(k\), until it concludes positively, or reaches the bound \(K=2^{O(mn\log_{2}mn)}+1\). In the latter case, it returns UNREAL. However, to apply SynthSafe properly, \(L^{\forall}_{k}(\mathcal{A})\) must be represented by a deterministic safety automaton. This is possible as \(k\)-coBuchi automata are determinizable [20].
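The incremental loop just described can be summarized as follows (a sketch; `determinize(A, k)` stands for the automaton \(\mathcal{D}(\mathcal{A},k)\) defined in the next paragraph, and `synth_safe` for Algo 3; the function names are ours):

```python
def synth_learn(E, A, synth_safe, determinize, K):
    """Try the safety under-approximations L^forall_k(A) for growing k
    (Lemma 2); by Theorem 5 it suffices to go up to the bound K."""
    k = 0
    while k <= K:
        M = synth_safe(E, determinize(A, k))
        if M is not None:            # M also realizes L^forall(A) (Lemma 2)
            return M
        k += 1                       # increase (or double) k and retry
    return None                      # UNREAL
```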
Determinization. The determinization of \(k\)-co-Buchi automata \(\mathcal{A}\) relies on a simple generalization of the subset construction: in addition to remembering the set of states that can be reached by a prefix of a run while reading an infinite word, the construction counts the maximal number of times a run prefix that reaches a given state \(q\) has visited states labelled with colour \(1\) (remember that a run can visit at most \(k\) such states to be accepting). The states of the deterministic automaton are so-called _counting functions_, formally defined for a co-Buchi automaton \(\mathcal{A}=(Q,q_{\mathsf{init}},\Sigma,\delta,d)\) and \(k\in\mathbb{N}\) as the set, denoted \(CF(\mathcal{A},k)\), of functions \(f:Q\to\{-1,0,1,\ldots,k,k+1\}\). If \(f(q)=-1\) for some state \(q\), it means that \(q\) is inactive (no run of \(\mathcal{A}\) reaches \(q\) on the current prefix). The initial counting function \(f_{\mathsf{init}}\) maps a \(1\)-coloured initial state to \(1\), a \(2\)-coloured initial state to \(0\), and all other states to \(-1\). We denote by \(\mathcal{D}(\mathcal{A},k)=(Q^{\mathcal{D}}=CF(\mathcal{A},k),q_{\mathsf{init}}^{\mathcal{D}}=f_{\mathsf{init}},\Sigma,\delta^{\mathcal{D}},Q_{\mathsf{usf}}^{\mathcal{D}})\) the deterministic automaton obtained by this determinization procedure. We now provide a formal description below:

Definition 1 (Determinization with \(CF(\mathcal{A},k)\)): Let \(\mathcal{A}=(Q,q_{\mathsf{init}},\Sigma,\delta,d)\) be a co-Buchi automaton and \(k\in\mathbb{N}\). We associate to the pair \((\mathcal{A},k)\) the deterministic safety automaton \(\mathcal{D}(\mathcal{A},k)=(Q^{\mathcal{D}},q_{\mathsf{init}}^{\mathcal{D}},\Sigma,\delta^{\mathcal{D}},Q_{\mathsf{usf}}^{\mathcal{D}})\) where:

1. \(Q^{\mathcal{D}}=CF(\mathcal{A},k)\) is the set of \(k\)-counting functions for \(\mathcal{A}\).
2. \(q_{\mathsf{init}}^{\mathcal{D}}=f_{0}\) where \(f_{0}(q)=-1\) for all \(q\neq q_{\mathsf{init}}\), and \(f_{0}(q)=0\) for \(q=q_{\mathsf{init}}\) and \(d(q_{\mathsf{init}})=2\), and \(f_{0}(q)=1\) for \(q=q_{\mathsf{init}}\) and \(d(q_{\mathsf{init}})=1\). Informally, the states that have been assigned the value \(-1\) are inactive. Initially, only \(q=q_{\mathsf{init}}\) is active. If it is labelled with colour \(1\), its counter equals \(1\), otherwise it equals \(0\).
3. For all \(f\in CF(\mathcal{A},k)\) and \(\sigma\in\Sigma\), the transition function \(\delta^{\mathcal{D}}\) is defined as follows: \(\delta^{\mathcal{D}}(f_{1},\sigma)=f_{2}\) where for all \(q\in Q\), \(f_{2}(q)=\) \[\min\left(\left(\max_{q^{\prime}\in Q:f_{1}(q^{\prime})\geq 0\wedge q\in\delta(q^{\prime},\sigma)}f_{1}(q^{\prime})\right)+x,k+1\right),\text{ with }x=1\text{ if }d(q)=1\text{, and }x=0\text{ if }d(q)=2.\]
4. The set of unsafe counting functions is defined5 as \(Q_{\mathsf{usf}}^{\mathcal{D}}=\{f\mid\exists q\in Q\cdot f(q)=k+1\}\). Footnote 5: It is easy to check that \(Q_{\mathsf{usf}}^{\mathcal{D}}\) is a trap as required.

The language defined by \(\mathcal{D}(\mathcal{A},k)\) is the set of infinite words \(w\in\Sigma^{\omega}\) such that the unique run of \(\mathcal{D}(\mathcal{A},k)\) on \(w\) never visits an unsafe counting function, i.e., a state of \(Q_{\mathsf{usf}}^{\mathcal{D}}\). This (safety) language of infinite words is denoted by \(L(\mathcal{D}(\mathcal{A},k))\). The size of \(\mathcal{D}(\mathcal{A},k)\) is bounded by \(k^{O(|\mathcal{A}|)}\).
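The transition function \(\delta^{\mathcal{D}}\) of Definition 1 is straightforward to compute. The following sketch (ours, not the tool's code) implements one step, with `delta[(q, sigma)]` the set of \(\sigma\)-successors in \(\mathcal{A}\) and `d(q)` the colour of \(q\):

```python
def cf_successor(f, sigma, delta, d, k):
    """delta^D: advance the counting function f on letter sigma.
    f maps each state of A to a counter in {-1, 0, ..., k+1},
    where -1 marks inactive states."""
    g = {q: -1 for q in f}
    for q, c in f.items():
        if c < 0:
            continue                        # q carries no run prefix
        for q2 in delta.get((q, sigma), ()):
            x = 1 if d(q2) == 1 else 0      # entering colour 1 costs 1
            g[q2] = max(g[q2], min(c + x, k + 1))
    return g

def unsafe(f, k):
    """f is in Q_usf^D iff some run prefix exceeded k colour-1 visits."""
    return any(c == k + 1 for c in f.values())
```

Note that \(\max_{q'}\min(f(q')+x,\,k+1)=\min(\max_{q'}f(q')+x,\,k+1)\), so accumulating with `max` per successor matches the formula of item 3 of Definition 1.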
Lemma 3 (\(\mathcal{D}(\mathcal{A},k)\) correctness, [20]): _For all universal co-Buchi automata \(\mathcal{A}\) and all \(k\in\mathbb{N}\), \(L_{k}^{\forall}(\mathcal{A})=L(\mathcal{D}(\mathcal{A},k))\)._

We can now give the algorithm SynthLearn, in pseudo-code, as Algo 4.

Complexity considerations and improving the upper-bound. As the automaton \(\mathcal{D}(\mathcal{A},k)\) is in the worst case exponential in the size of the automaton \(\mathcal{A}\), a direct application of Theorem 3 yields a doubly exponential time procedure. This complexity is a consequence of the fact that the \(\mathcal{P}\)-realizability problem is ExpTime in the size of the deterministic automaton as shown in Theorem 2, and that the termination of the completion procedure is also worst-case exponential in the size of the deterministic automaton. We show that we can improve the complexity of each call to SynthSafe and obtain an optimal worst-case (singly) exponential complexity. First, we provide an algorithm to check \(\mathcal{P}\)-realizability of a specification \(\mathcal{S}=L_{k}^{\forall}(\mathcal{A})\) that runs in time singly exponential in the size of \(\mathcal{A}\) and polynomial in \(k\) and the size of \(\mathcal{P}\). Second, we provide a finer complexity analysis for the termination of the completion algorithm, which exhibits a worst-case exponential time in \(|\mathcal{A}|\). Those two improvements lead to an overall complexity of SynthLearn which is exponential in the size of the specification \(\mathcal{A}\) and polynomial in the size \(|E|\) of the set of examples. This is provably worst-case optimal because for \(E=\emptyset\) the problem is already ExpTime-c.

We start with the second improvement, the upper bound on termination. We establish an upper bound on the number of iterations needed to complete the preMealy machine output by the procedure Gen at the end of the first phase of our synthesis algorithm (in the case where the specification, given by a \(k\)-coBuchi automaton \(\mathcal{A}\), is realizable). To obtain the required exponential bound, we rely on the maximal length of chains of antichains of counting functions, partially ordered as follows: let \(A\in\mathcal{AC}_{\preceq}(CF(\mathcal{A},k))\) and \(B\in\mathcal{AC}_{\preceq}(CF(\mathcal{A},k))\); then \(A\trianglelefteq_{\mathsf{CF}}B\) if and only if \(\forall f\in A\cdot\exists g\in B\cdot f\preceq g\). The length of those chains is bounded by \(k^{O(n)}\):

Lemma 4.: _Any \(\lhd_{\mathsf{CF}}\)-chain in \((\mathcal{AC}_{\preceq}(CF(\mathcal{A},k)),\trianglelefteq_{\mathsf{CF}})\) has length at most \(k^{O(n)}\) where \(n\) is the number of states of \(\mathcal{A}\)._

Proof.: Just as in the proof of Lemma 12, for an antichain \(X=\{f_{1},\ldots,f_{n}\}\) of counting functions, we define \(\downarrow\!X\) as its downward closure with respect to \(\preceq\). Then, given another antichain \(Y\), we get that \(X\lhd_{\mathsf{CF}}Y\) iff \(\downarrow\!X\subsetneq\downarrow\!Y\). Therefore the maximal length of a \(\lhd_{\mathsf{CF}}\)-chain is bounded by the number of counting functions, which is \(k^{O(n)}\).

Checking \(\mathcal{P}\)-realizability of a specification \(\mathcal{S}=L^{\forall}_{k}(\mathcal{A})\). To obtain a better complexity, we exploit some structure that exists in the deterministic automaton \(\mathcal{D}(\mathcal{A},k)\).
First, the set of counting functions \(CF(\mathcal{A},k)\) forms a complete lattice for the partial order \(\preceq\) defined by \(f_{1}\preceq f_{2}\) if \(f_{1}(q)\leq f_{2}(q)\) for all states \(q\). We denote by \(f_{1}\bigsqcup f_{2}\) the least upper bound of \(f_{1},f_{2}\), and by \(W^{\mathcal{A}}_{k}\) the set of counting functions \(f\) such that the specification \(L(\mathcal{D}(\mathcal{A},k)[f])\) is realizable (i.e. the specification defined by \(\mathcal{D}(\mathcal{A},k)\) with initial state \(f\)). It is known that \(W_{k}^{\mathcal{A}}\) is downward-closed for \(\preceq\) [20], because for all \(f_{1}\preceq f_{2}\), any machine realizing \(L(\mathcal{D}(\mathcal{A},k)[f_{2}])\) also realizes \(L(\mathcal{D}(\mathcal{A},k)[f_{1}])\). Therefore, \(W_{k}^{\mathcal{A}}\) can be represented compactly by the antichain \(\lceil W_{k}^{\mathcal{A}}\rceil\) of its \(\preceq\)-maximal elements. Now, the first improvement is obtained thanks to the following result:

Lemma 5: _Given a preMealy machine \(\mathcal{P}=(M,m_{0},\Delta)\), a co-Buchi automaton \(\mathcal{A}\), and \(k\in\mathbb{N}\), for all states \(m\in M\), we let \(F^{*}(m)=\bigsqcup\{f\mid\exists u\in(\mathcal{I}\mathcal{O})^{*}\cdot\mathsf{Post}_{\mathcal{P}}^{*}(m_{0},u)=m\wedge\mathsf{Post}_{\mathcal{D}}^{*}(f_{0},u)=f\}\). Then, \(L(\mathcal{D}(\mathcal{A},k))\) is \(\mathcal{P}\)-realizable iff there does not exist \(m\in M\) such that \(F^{*}(m)\not\in W_{k}^{\mathcal{A}}\)._

It is easily shown that the operator \(F^{*}\) can be computed in PTime. Thus, the latter lemma implies that there is a polynomial-time algorithm in \(|\mathcal{P}|\), \(|\mathcal{A}|\), \(k\), and the size of \(\lceil W_{k}^{\mathcal{A}}\rceil\) to check the \(\mathcal{P}\)-realizability of \(L^{\forall}(\mathcal{A})\). Formal details can be found in App. E.1. We end this subsection by summarizing the behavior of our synthesis algorithm for \(\omega\)-regular specifications defined as universal co-Buchi automata.

**Theorem 6**.: _Given a universal coBuchi automaton \(\mathcal{A}\) and a set of examples \(E\), the synthesis algorithm SynthLearn returns, if it exists, a Mealy machine \(\mathcal{M}\) such that \(E\subseteq L(\mathcal{M})\) and \(L_{\omega}(\mathcal{M})\subseteq L^{\forall}(\mathcal{A})\), in worst-case exponential time in the size of \(\mathcal{A}\) and polynomial time in the size of \(E\). Otherwise, it returns UNREAL._

Notice that Alg. 4 calls Alg. 1, which itself calls the procedure that checks \(\mathcal{P}\)-realizability; each such check runs in polynomial time, as we compute the fixpoint \(F^{*}\) and check that it is safe.

Specifications given as an LTL formula. We are now in a position to apply Alg. 4 to a specification given as an LTL formula \(\varphi\). Indeed, thanks to the results of the subsection above, to provide an algorithm for LTL specifications, we only need to translate \(\varphi\) into a universal co-Buchi automaton. This can be done as follows: it is well-known (see [26]) that given an LTL formula \(\varphi\) over two sets of atomic propositions \(P_{\mathcal{I}}\) and \(P_{\mathcal{O}}\), we can construct in exponential time a universal co-Buchi automaton \(\mathcal{A}_{\varphi}\) such that \(L^{\forall}(\mathcal{A}_{\varphi})=[\![\varphi]\!]\), i.e. \(\mathcal{A}_{\varphi}\) recognizes exactly the set of words \(w\in(2^{P_{\mathcal{I}}}2^{P_{\mathcal{O}}})^{\omega}\) that satisfy \(\varphi\).
We then get the following theorem that gives the complexity of our synthesis algorithm for a set of examples \(E\) and an LTL formula \(\varphi\), a complexity which is provably worst-case optimal, as deciding if \([\![\varphi]\!]\) is realizable with \(E=\emptyset\), i.e. the plain LTL realizability problem, is already 2ExpTime-complete [29].

**Theorem 7**.: _Given an LTL formula \(\varphi\) and a set of examples \(E\), the synthesis algorithm SynthLearn returns a Mealy machine \(\mathcal{M}\) such that \(E\subseteq L(\mathcal{M})\) and \(L_{\omega}(\mathcal{M})\subseteq[\![\varphi]\!]\) if it exists, in worst-case doubly exponential time in the size of \(\varphi\) and polynomial in the size of \(E\). Otherwise it returns UNREAL._

## 5 Implementation and Case study

We have implemented the algorithm SynthLearn of the previous section in a prototype tool, written in Python, using the tool Acacia-Bonzai [10] to manipulate antichains of counting functions. We first explain the heuristics we have used to define the state-merging and completion strategies, and then demonstrate how our implementation behaves on a case study whose goal is to synthesize the controller for an elevator. The interested reader can find in App. A other case studies, including a controller for an e-bike and two variations on mutual exclusion.

### Merging and completion strategies

To implement the algorithms of the previous sections, we need to fix strategies to choose among candidates for possible merges during the generalization phase and among possible choices of outputs during the completion phase. The strategies that we have implemented are as follows. First, we consider a _merging_ strategy \(\sigma_{G}\) which is defined over \(4\)-tuples \((\mathcal{M},m,E,X)\) where \(\mathcal{M}\) is a preMealy machine, \(m\) is a state of \(\mathcal{M}\), \(E\) is a set of examples and \(X\) is a subset of states of \(\mathcal{M}\) for which a merge is possible, and returns a state of \(X\), with the following properties. Given an example \(e\) that leads in the current preMealy machine to a state \(m\) and a set of candidates \(\{m_{1},m_{2},\ldots,m_{k}\}\) for merging, as computed in line 7 of Algorithm 1, we associate to each state \(m_{i}\) the counting function \(F^{*}(m_{i})\) computed by the fixpoint \(F^{*}\) on the current preMealy machine. Our merging strategy then chooses a state \(m_{i}\) labelled with a \(\preceq\)-minimal element in this set. Intuitively, favouring minimal counting functions preserves as much as possible the set of behaviors that are possible after the example \(e\). Indeed, by Lemma 15, we know that if \(f_{1}\preceq f_{2}\) then \(L(\mathcal{D}(\mathcal{A},k)[f_{2}])\subseteq L(\mathcal{D}(\mathcal{A},k)[f_{1}])\). Second, we consider a _completion strategy_ \(\sigma_{C}\) which is a function defined over all \(4\)-tuples \((\mathcal{M},m,\mathfrak{i},X)\) where \(\mathcal{M}\) is the current preMealy machine with set of states \(M\), \((m,\mathfrak{i})\) is a hole of \(\mathcal{M}\), and \(X\subseteq\mathcal{O}\times(M\cup\{\mathsf{fresh}\})\) is a list of candidate pairs \((\mathsf{o},m^{\prime})\). It returns an element of \(X\), i.e., \(\sigma_{C}(\mathcal{M},m,\mathfrak{i},X)\in X\), with the following properties. Remember that, for ensuring termination, the completion strategy \(\sigma_{C}\) must be _lazy_, i.e. if \(X\setminus(\mathcal{O}\times\{\mathsf{fresh}\})\neq\varnothing\), then \(\sigma_{C}(\mathcal{M},m,\mathfrak{i},X)\not\in\mathcal{O}\times\{\mathsf{fresh}\}\). Then, among the set of possible candidates \(\{(\mathsf{o}_{1},m_{1}),(\mathsf{o}_{2},m_{2}),\ldots,(\mathsf{o}_{k},m_{k})\}\), we again favour states associated with \(\preceq\)-minimal counting functions computed by \(F^{*}\) on the current preMealy machine.
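Both strategies consult the fixpoint \(F^{*}\) of Lemma 5. A compact way to compute it is by Kleene iteration over the transitions of the preMealy machine, joining counting functions pointwise (a sketch, ours; it reuses the `cf_successor` step from the earlier sketch via a `step` argument such as `lambda f, a: cf_successor(f, a, delta, d, k)`):

```python
def fstar(P_delta, m0, f0, step):
    """F*(m): pointwise max over all counting functions that D(A, k)
    reaches on words leading to m in P.  P_delta[(m, i)] = (o, m2)."""
    F = {m0: dict(f0)}
    changed = True
    while changed:                           # terminates: counters bounded
        changed = False
        for (m, i), (o, m2) in P_delta.items():
            if m not in F:
                continue
            g = step(step(F[m], i), o)       # read input, then output
            old = F.get(m2)
            new = g if old is None else {q: max(old[q], g[q]) for q in g}
            if new != old:
                F[m2] = new
                changed = True
    return F
```

By Lemma 5, \(L(\mathcal{D}(\mathcal{A},k))\) is then \(\mathcal{P}\)-realizable iff every computed \(F^{*}(m)\) belongs to \(W_{k}^{\mathcal{A}}\), which is checked against the antichain \(\lceil W_{k}^{\mathcal{A}}\rceil\).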
### Case Studies

Lift Controller Example. We illustrate how to use our tool to construct a suitable controller for a two-floor elevator system. Considering two floors is sufficient to illustrate most of the main difficulties of a more general elevator. Inputs of the controller are given by two atomic propositions b0 and b1, which are true whenever the button at floor 0 (resp. floor 1) is pressed by a user. Outputs are given by the atomic propositions f0 and f1, true whenever the elevator is at floor 0 (resp. floor 1), and ser, true whenever the elevator is _serving_ the current floor (i.e. doors are opened). This controller should ensure the following core properties:

1. **Functional Guarantee:** whenever a button of floor 0 (resp. floor 1) is pressed, the elevator must eventually _serve_ floor 0 (resp. floor 1): G(b0 -> F (f0 & ser)) & G(b1 -> F (f1 & ser))
2. **Safety Guarantee:** The elevator is always at exactly one floor: G(f0 <-> !f1)
3. **Safety Guarantee:** The elevator cannot transition between two floors when doors are opened: G((f0 & ser) -> X(!f1)) & G((f1 & ser) -> X(!f0))
4. **Initial State:** The elevator should be at floor 0 initially: f0

Additionally, we make the following **assumption**: whenever a button of floor 0 (or floor 1) is pressed, it must remain pressed until the floor has been served, i.e., G(b0 -> (b0 W (f0 & ser))) & G(b1 -> (b1 W (f1 & ser))).

Before going into the details of this example, let us explain the methodology that we apply to use our tool. We start by providing only the high-level specification \(\varphi_{\texttt{CORE}}\) for the elevator given above. We obtain a first Mealy machine from the tool. We then inspect the machine to identify prefixes of behaviours that we are unhappy with, and for which we can provide better alternative decisions. Then we run the tool on \(\varphi_{\texttt{CORE}}\) and the examples that we have identified, we get a new machine, and we proceed like that up to the point where we are satisfied with the synthesized Mealy machine.

Let us now give details. When our tool is provided with this specification without any examples, we get the machine depicted in Fig. 6. This solution makes the controller switch between floor 0 and floor 1, sometimes unnecessarily. For instance, consider the trace s # {!b0 &!b1}{!f0 & f1 &!ser} # {!b0 &!b1}{f0 &!f1 &!ser}, where we let s = {!b0 & b1}{f0 &!f1 &!ser} # {!b0 & b1}{!f0 & f1 & ser}.
For instance, consider the trace s # {!b0 &!b1}{!f0 & f1 &!ser} # {!b0 &!b1}{f0 &!f1 &!ser}, where we let s = {!b0 & b1}{f0 &!f1 &!ser} Figure 6: Machine returned by our tool on the elevator specification w/o examples. Here, \(q0\) represents the state where f0 is served when required, \(q1\) represents the state where b1 is pending, \(q2\) represents state where f1 is served, \(q3\) represents the state where b0 is pending. !b0 & b1}{!f0 & f1 & ser}. Here, we note that the transition goes back to state \(q_{0}\), where the elevator is at floor 0, when the elevator could have remained at floor 1 after serving floor 1. The methodology described above allows us to identify the following three examples: 1. The 1st trace states that after serving floor 1, the elevator must remain at floor 1 as b0 is false: s # {!b0 &!b1}{!f0 & f1 &!ser} # {!b0 &!b1}{!f0 & f1 &!ser} 2. The 2nd trace states that the elevator must remain at floor 0, as b1 is false: {!b0 &!b1}{f0 &!f1 &!ser} # {!b0 &!b1}{f0 &!f1 &!ser} 3. The 3rd trace ensures that after s, there is no unnecessary delay in serving floor 0 after floor 1 is served in s: s # {b0 &!b1}{!f0 & f1 &!ser} # {b0 &!b1}{f0 &!f1 & ser} With those additional examples, our tool outputs the machine of fig. 7, which generalizes them and now ensures that moves of the elevator occur only when required. For example, the end of the first trace has been generalized into a loop on state \(q_{1}\) ensuring that the elevator does not go to floor 0 from floor 1 unless b0 is pressed. We note that the number of examples provided here is much smaller than the theoretical (polynomial) upper bound proved in Theorem 4. ## 6 Conclusion In this paper, we have introduced the problem of _synthesis with a few hints_. This variant of the synthesis problem allows the user to guide synthesis using examples of expected executions of high quality solutions. Existing synthesis tools may not provide natural solutions when fed with high-level specifications only, and as providing complete specification goes against the very goal of synthesis, we believe that our algorithm has a greater potential in practice. On the theoretical side, we have studied in details the computational complexity of problems that need to be solved during our new synthesis procedure. We have proved that our algorithm is _complete_ in the sense that any Mealy machine \(\mathcal{M}\) that realizes a specification \(\varphi\) can be obtained by our algorithm from \(\varphi\) Figure 7: Mealy machine returned by our tool on the elevator specification with additional examples. The preMealy machine obtained after generalizing the examples and before completion is highlighted in red. This took 3.10s to be generated. and a sufficiently rich example set \(E\), whose size is bounded polynomially in the size of \(\mathcal{M}\). On the practical side, we have implemented our algorithm in a prototype tool that extends Acacia-Bonzai [10] with tailored state-merging learning algorithms. We have shown that only a small number of examples are necessary to obtain high quality machines from high-level LTL specifications only. The tool is not fully optimized yet. While this is sufficient to demonstrate the relevance of our approach, we will work on efficiency aspects of the implementation. As future works, we will consider extensions of the user interface to interactively and concisely specify sets of (counter-)examples to solutions output by the tool. 
Along the same lines, an interesting future direction is to handle parametric examples (e.g. an elevator with the number of floors given as a parameter). This would require providing a concise syntax to define parametric examples and designing efficient synthesis algorithms in this setting. We will also consider the possibility of formulating negative examples, as our theoretical results readily extend to this case and their integration in the implementation should be easy.
2306.05991
Approximate information state based convergence analysis of recurrent Q-learning
In spite of the large literature on reinforcement learning (RL) algorithms for partially observable Markov decision processes (POMDPs), a complete theoretical understanding is still lacking. In a partially observable setting, the history of data available to the agent increases over time so most practical algorithms either truncate the history to a finite window or compress it using a recurrent neural network leading to an agent state that is non-Markovian. In this paper, it is shown that in spite of the lack of the Markov property, recurrent Q-learning (RQL) converges in the tabular setting. Moreover, it is shown that the quality of the converged limit depends on the quality of the representation which is quantified in terms of what is known as an approximate information state (AIS). Based on this characterization of the approximation error, a variant of RQL with AIS losses is presented. This variant performs better than a strong baseline for RQL that does not use AIS losses. It is demonstrated that there is a strong correlation between the performance of RQL over time and the loss associated with the AIS representation.
Erfan Seyedsalehi, Nima Akbarzadeh, Amit Sinha, Aditya Mahajan
2023-06-09T15:59:39Z
http://arxiv.org/abs/2306.05991v1
# Approximate information state based convergence analysis of recurrent Q-learning

###### Abstract

In spite of the large literature on reinforcement learning (RL) algorithms for partially observable Markov decision processes (POMDPs), a complete theoretical understanding is still lacking. In a partially observable setting, the history of data available to the agent increases over time, so most practical algorithms either truncate the history to a finite window or compress it using a recurrent neural network, leading to an agent state that is non-Markovian. In this paper, it is shown that in spite of the lack of the Markov property, recurrent Q-learning (RQL) converges in the tabular setting. Moreover, it is shown that the quality of the converged limit depends on the quality of the representation, which is quantified in terms of what is known as an approximate information state (AIS). Based on this characterization of the approximation error, a variant of RQL with AIS losses is presented. This variant performs better than a strong baseline for RQL that does not use AIS losses. It is demonstrated that there is a strong correlation between the performance of RQL over time and the loss associated with the AIS representation.

## 1 Introduction

In recent years, Reinforcement Learning (RL) has witnessed many successes such as achieving human-level performance in Go [21], learning to play Atari [18, 19], as well as solving many control problems arising in engineering and robotics [22, 23, 24, 25, 26]. These successes are achieved by algorithms with a strong theoretical basis [27, 28]. However, RL theory, for the most part, is limited to models with full state information. In various applications such as finance, healthcare, and robotics, the agent does not observe the full state of the environment. Such partially observed systems are mathematically modeled as partially observable Markov decision processes (POMDPs). When the system model is known, POMDPs can be viewed as MDPs by considering the belief state (i.e., the posterior distribution of the partially observed environmental state) as an information state [2]. Furthermore, there are various efficient algorithms to compute approximately optimal planning solutions [29]. However, it is not possible to generalize these planning results to develop learning algorithms because constructing the belief state requires knowledge of the model. So, an agent operating in an unknown partially observed environment cannot construct a belief state based on its observations. Two approaches are commonly used in the literature to circumvent this conceptual difficulty: (i) use a finite window of observations (rather than the full history), and (ii) use a recursively updateable (or recurrent) _agent state_. A key difficulty in analyzing these learning algorithms is that the state of the agent may evolve in a non-Markovian manner. Furthermore, for the recurrent algorithm, the representation mapping histories to agent states needs to be learnt in parallel, which is especially difficult in sparse-reward environments. So, even though there is a rich and large literature on RL theory for POMDPs (see [14; 15; 16; 17; 18; 19; 20; 21; 22] and the follow-up literature), much of it either analyzes the case where the agent does not have a memory, or only provides empirical evidence but does not include a detailed convergence or approximation analysis.
In this paper, we investigate one of the most popular RL algorithms for POMDPs: recurrent Q-learning (RQL), which uses a recurrent neural network (RNN) for approximating a history-based Q-function. RQL was initially proposed by [14; 15] with a substantial follow-up literature [16; 17; 18; 21]. There is growing empirical evidence suggesting that variants of RQL work well in practice [14; 15; 16; 17; 18; 19]. However, a detailed theoretical understanding of the algorithm is lacking.

Review of theoretical papers analyzing variants of Q-learning for POMDPs. There are a few recent papers which analyze closely related problems. A general framework of approximation for POMDPs based on the notion of approximate information state (AIS) is proposed in [14]. It is shown that a planning policy computed using an AIS is approximately optimal with bounded loss of optimality. Furthermore, an actor-critic algorithm which uses the AIS-approximation losses as an auxiliary loss is presented, and it is demonstrated that the proposed algorithm has good empirical performance. Even though an AIS may be viewed as a recurrent agent state, the analysis presented in [14] is for actor-critic algorithms and is not directly applicable to RQL. Approximate planning and Q-learning for POMDPs with a finite window of observations are presented in [13; 14], where the approximation error and convergence are quantified. The special case of just using the current observation has also been analyzed in [15]. Even though our analysis uses similar technical tools to [15; 16; 17], the analysis of these papers is for Q-learning with a finite window of observations and is not directly applicable to RQL. Regret guarantees for RL agents operating in non-Markovian environments and using an optimistic variant of Q-learning are presented in [18]. Even though the agent state in [18] is a recurrent state, the analysis of [18] is tuned for an optimistic variant of Q-learning and is not directly applicable to RQL. The papers closest to our work are [19; 18], which establish convergence of Q-learning in a non-Markovian environment under the assumption that the state-observation-action process is stationary and ergodic (an additional technical assumption of state uniformity is also imposed in [19]). Asymptotic rates of convergence are also characterized in [18]. However, [19; 18] do not present explicit approximation bounds. In our analysis, we do not assume that the system is stationary, and we provide approximation bounds.

Contributions. Our main contributions are as follows. First, we show that in spite of the non-Markovian evolution of the agent state, RQL converges. As far as we are aware, this is the first result that establishes the convergence of RQL without making any assumptions on the stationarity of the agent state. Second, using ideas from the approximate information state (AIS) [14], we quantify the quality of the converged limit of RQL in terms of the error in representation. Third, we propose a variant of RQL called RQL-AIS which incorporates AIS losses. We illustrate via detailed numerical experiments that RQL-AIS learns better than R2D2 [15], which is the state-of-the-art RQL algorithm for POMDPs. We also empirically demonstrate that there is a strong correlation between the performance of RQL over time and the loss associated with the AIS representation.
## 2 Background

**Partially observable Markov decision processes (POMDPs).** A partially observable Markov decision process (POMDP) is a tuple \(\langle\mathcal{S},\mathcal{Y},\mathcal{A},P,O,r,\gamma\rangle\) where \(\mathcal{S}\) denotes the state space, \(\mathcal{Y}\) denotes the observation space, \(\mathcal{A}\) denotes the action space, \(P\colon\mathcal{S}\times\mathcal{A}\to\Delta(\mathcal{S})\) denotes the state transition matrix, \(O\colon\mathcal{S}\times\mathcal{A}\to\Delta(\mathcal{Y})\) denotes the observation probability matrix, \(r\colon\mathcal{S}\times\mathcal{A}\to\mathds{R}\) is the reward function, and \(\gamma\in[0,1)\) denotes the discount factor.

We follow the standard notation from probability theory and use uppercase letters to denote random variables and lowercase letters to denote their realizations. In particular, we use \(S_{t}\), \(Y_{t}\), \(A_{t}\) to denote the state, observation, and action at time \(t\) and \(H_{t}=(Y_{1},A_{1},Y_{2},A_{2},\dots,Y_{t})\) to denote the history of observations and actions until time \(t\). Let \(\mathcal{H}_{t}=\mathcal{Y}^{t}\times\mathcal{A}^{t-1}\) denote the space of all histories until time \(t\). We use \(R_{t}=r(S_{t},A_{t})\) to denote the random reward received at time \(t\). A policy \(\pi=(\pi_{1},\pi_{2},\dots)\) is a collection of history-dependent randomized decision rules \(\pi_{t}\colon\mathcal{H}_{t}\to\Delta(\mathcal{A})\) such that the action at time \(t\) is chosen according to \(A_{t}\sim\pi_{t}(H_{t})\). The performance of any policy \(\pi\) starting from history \(h_{t}\in\mathcal{H}_{t}\) at time \(t\) is given by the value function \(V_{t}^{\pi}(h_{t})\) defined as

\[V_{t}^{\pi}(h_{t})\coloneqq\mathrm{E}^{\pi}\Big[\sum_{\tau=t}^{\infty}\gamma^{\tau-t}r(S_{\tau},A_{\tau})\ \Big|\ H_{t}=h_{t}\Big]. \tag{1}\]

The corresponding action-value function or Q-function \(Q_{t}^{\pi}(h_{t},a_{t})\) is defined as

\[Q_{t}^{\pi}(h_{t},a_{t})\coloneqq\mathrm{E}^{\pi}\big[r(S_{t},A_{t})+\gamma V_{t+1}^{\pi}(H_{t+1})\ \big|\ H_{t}=h_{t},A_{t}=a_{t}\big]. \tag{2}\]

A policy \(\pi^{\star}\) is called _optimal_ if for every other policy \(\pi\), we have \(V_{t}^{\pi^{\star}}(h_{t})\geq V_{t}^{\pi}(h_{t})\) for all \(t\in\mathds{Z}_{>0}\) and \(h_{t}\in\mathcal{H}_{t}\). The value function and action-value function of optimal policies are denoted by \(V_{t}^{\star}\) and \(Q_{t}^{\star}\).

**Integral probability metrics (IPMs).** Integral probability metrics (IPMs) are a family of semi-metrics on probability measures defined in terms of a dual relationship [10].

**Definition 1**.: Let \((\mathcal{X},\mathcal{G})\) be a measurable space and \(\mathfrak{F}\) be a class of measurable real-valued functions on \((\mathcal{X},\mathcal{G})\). The integral probability metric (IPM) between two probability distributions \(\mu,\nu\in\mathscr{P}(\mathcal{X})\) with respect to the function class \(\mathfrak{F}\) is defined as \(d_{\mathfrak{F}}(\mu,\nu)\coloneqq\sup_{f\in\mathfrak{F}}\bigl|\int_{\mathcal{X}}f\,d\mu-\int_{\mathcal{X}}f\,d\nu\bigr|\).

A key property of IPMs is that for any function \(f\) (not necessarily in \(\mathfrak{F}\)), we have

\[\Bigl|\int_{\mathcal{X}}f\,d\mu-\int_{\mathcal{X}}f\,d\nu\Bigr|\leq\rho_{\mathfrak{F}}(f)\cdot d_{\mathfrak{F}}(\mu,\nu), \tag{3}\]

where \(\rho_{\mathfrak{F}}(f)\coloneqq\inf\{\rho\in\mathds{R}_{>0}:\rho^{-1}f\in\mathfrak{F}\}\) is called the Minkowski functional of \(f\).
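As a small numerical illustration of Definition 1 and property (3), the sketch below evaluates the IPM for the total-variation function class \(\{f:\mathrm{span}(f)\leq 1\}\) (the first example below) on a finite space, where the supremum admits the closed form \(\frac{1}{2}\|\mu-\nu\|_{1}\); the distributions and test function are arbitrary choices made only for illustration.

```python
import numpy as np

# Minimal sketch: on a finite space, the IPM for the function class
# {f : span(f) <= 1} equals 0.5 * ||mu - nu||_1, and property (3)
# holds with rho(f) = span(f) = max(f) - min(f).
rng = np.random.default_rng(0)

def d_tv(mu, nu):
    return 0.5 * np.abs(mu - nu).sum()

mu = rng.dirichlet(np.ones(5))   # two arbitrary distributions
nu = rng.dirichlet(np.ones(5))
f = rng.normal(size=5)           # arbitrary test function (not in the class)

lhs = abs(f @ mu - f @ nu)                # |int f dmu - int f dnu|
rhs = (f.max() - f.min()) * d_tv(mu, nu)  # rho(f) * d_F(mu, nu)
assert lhs <= rhs + 1e-12                 # inequality (3)
```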
Some examples of IPMs are as follows: (i) **Total variation distance**, where \(\mathfrak{F}=\mathfrak{F}_{\mathrm{TV}}\coloneqq\{f:\mathrm{span}(f)\leq 1\}\) (where \(\mathrm{span}(f)\) is the span semi-norm of a function). For this case, \(\rho_{\mathrm{TV}}(f)=\mathrm{span}(f)\). (ii) **Wasserstein distance**, where \(\mathfrak{F}=\mathfrak{F}_{\mathrm{Was}}\coloneqq\{f:\mathrm{Lip}(f)\leq 1\}\) (where \(\mathcal{X}\) is a metric space and \(\mathrm{Lip}(f)\) is the Lipschitz constant of the function \(f\), computed with respect to the metric on \(\mathcal{X}\)). For this case, \(\rho_{\mathrm{Was}}(f)=\mathrm{Lip}(f)\). (iii) **Maximum mean discrepancy (MMD)**, where \(\mathfrak{F}=\mathfrak{F}_{\mathrm{MMD}}\coloneqq\{f\in\mathcal{H}\colon\|f\|_{\mathcal{H}}\leq 1\}\) (where \(\mathcal{H}\) is a reproducing kernel Hilbert space of real-valued functions on \(\mathcal{X}\) and \(\|f\|_{\mathcal{H}}\) is the Hilbert space norm of \(f\)). For this case, \(\rho_{\mathrm{MMD}}(f)=\|f\|_{\mathcal{H}}\).

**Approximate information state (AIS).** An approximate information state (AIS) is a self-predictive representation for POMDPs, first proposed in [11].

**Definition 2**.: Given a function class \(\mathfrak{F}\) and a measurable space \(\mathcal{Z}\), an \((\varepsilon_{t},\delta_{t})_{t\geq 1}\) AIS-generator is a tuple \(\langle\{\sigma_{t}\}_{t\geq 1},\tilde{P},\tilde{r}\rangle\) of history compression functions \(\sigma_{t}\colon\mathcal{H}_{t}\to\mathcal{Z}\), a transition approximator \(\tilde{P}\colon\mathcal{Z}\times\mathcal{A}\to\Delta(\mathcal{Z})\), and a reward approximator \(\tilde{r}\colon\mathcal{Z}\times\mathcal{A}\to\mathds{R}\) such that for all \(h_{t}\in\mathcal{H}_{t}\) and \(a_{t}\in\mathcal{A}\),

\[\big|\mathbb{E}\left[R_{t}\mid H_{t}=h_{t},A_{t}=a_{t}\right]-\tilde{r}(\sigma_{t}(h_{t}),a_{t})\big|\leq\varepsilon_{t},\]
\[d_{\mathfrak{F}}\big(\mathbb{P}(Z_{t+1}=\cdot\mid H_{t}=h_{t},A_{t}=a_{t}),\tilde{P}(\cdot\mid\sigma_{t}(h_{t}),a_{t})\big)\leq\delta_{t}.\]

Given an AIS generator, consider the following dynamic program:

\[\tilde{Q}(z,a)=\tilde{r}(z,a)+\gamma\int_{\mathcal{Z}}\tilde{P}(dz^{\prime}\mid z,a)\max_{\tilde{a}\in\mathcal{A}}\tilde{Q}(z^{\prime},\tilde{a}). \tag{4}\]

Let \(\tilde{Q}^{\star}\) denote the unique fixed point of (4). Define \(\tilde{V}^{\star}\colon\mathcal{Z}\to\mathds{R}\) to be the value function corresponding to \(\tilde{Q}^{\star}\) and \(\tilde{\pi}^{\star}\colon\mathcal{Z}\to\mathcal{A}\) to be the greedy policy1 with respect to \(\tilde{Q}^{\star}\), i.e.,

Footnote 1: To avoid ambiguity due to the non-uniqueness of the arg-max, we assume that a deterministic tie-breaking rule is pre-specified so that the arg-max is always unique.

\[\tilde{V}^{\star}(z)=\max_{a\in\mathcal{A}}\tilde{Q}^{\star}(z,a),\quad\text{and}\quad\tilde{\pi}^{\star}(z)=\arg\max_{a\in\mathcal{A}}\tilde{Q}^{\star}(z,a).\]
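When \(\mathcal{Z}\) and \(\mathcal{A}\) are finite, the fixed point of (4) can be computed by standard value iteration, since the right-hand side is a \(\gamma\)-contraction. The sketch below assumes the AIS generator is given as arrays; it is an illustration, not part of the learning algorithm.

```python
import numpy as np

# Minimal sketch: solve the AIS dynamic program (4) on finite Z and A by
# value iteration. r_tilde has shape (n_z, n_a); P_tilde has shape
# (n_z, n_a, n_z), with P_tilde[z, a] a distribution over next AIS values.
def solve_ais_dp(P_tilde, r_tilde, gamma=0.9, tol=1e-8):
    Q = np.zeros_like(r_tilde)
    while True:
        V = Q.max(axis=1)                      # V~(z') = max_a Q(z', a)
        Q_new = r_tilde + gamma * P_tilde @ V  # Bellman update of (4)
        if np.abs(Q_new - Q).max() < tol:
            return Q_new                       # approximate fixed point Q~*
        Q = Q_new

# The greedy policy pi~*(z) is then solve_ais_dp(P, r).argmax(axis=1).
```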
The following result is a generalization of [11, Theorem 27].

**Theorem 1**.: Let \(\tilde{\pi}=(\tilde{\pi}_{1},\tilde{\pi}_{2},\dots)\) be a time-varying and history-dependent policy given by \(\tilde{\pi}_{t}(h_{t})=\tilde{\pi}^{\star}(\sigma_{t}(h_{t}))\). Then, for any time \(t\), any history \(h_{t}\in\mathcal{H}_{t}\), and any action \(a_{t}\in\mathcal{A}\), we have:

* **Bounds on value approximation:**
\[\big|Q_{t}^{\star}(h_{t},a_{t})-\tilde{Q}^{\star}(\sigma_{t}(h_{t}),a_{t})\big| \leq(1-\gamma)^{-1}\big[\bar{\varepsilon}_{t}+\gamma\bar{\delta}_{t}\rho_{\mathfrak{F}}(\tilde{V}^{\star})\big], \tag{5}\]
\[\big|V_{t}^{\star}(h_{t})-\tilde{V}^{\star}(\sigma_{t}(h_{t}))\big| \leq(1-\gamma)^{-1}\big[\bar{\varepsilon}_{t}+\gamma\bar{\delta}_{t}\rho_{\mathfrak{F}}(\tilde{V}^{\star})\big], \tag{6}\]
where \(\bar{\varepsilon}_{t}=(1-\gamma)\sum_{\tau=t}^{\infty}\gamma^{\tau-t}\varepsilon_{\tau}\) and \(\bar{\delta}_{t}=(1-\gamma)\sum_{\tau=t}^{\infty}\gamma^{\tau-t}\delta_{\tau}\).
* **Bounds on policy approximation:**
\[\big|V_{t}^{\star}(h_{t})-V_{t}^{\tilde{\pi}}(h_{t})\big| \leq 2(1-\gamma)^{-1}\big[\bar{\varepsilon}_{t}+\gamma\bar{\delta}_{t}\rho_{\mathfrak{F}}(\tilde{V}^{\star})\big]. \tag{7}\]

**Recurrent neural networks (RNNs).** Recurrent neural networks (RNNs) are neural networks with feedback connections that are used to process sequential data by keeping track of a state. At an abstract level, we may model an RNN with a hidden state2 \(z_{t}\) as a function of the past sequence of inputs \(x_{1},\dots,x_{t}\), which is updated recursively using a non-linear activation function: \(z_{t}=f(z_{t-1},x_{t})\). Typically, \(f(\cdot)\) is a parameterized family of functions; e.g., in a vanilla RNN, \(f(z_{t-1},x_{t})=\tanh(W_{zz}z_{t-1}+W_{zx}x_{t}+W_{b})\), where \((W_{zz},W_{zx},W_{b})\) are parameters. In practice, one uses more sophisticated RNN architectures such as long short-term memory (LSTM) [10] or gated recurrent units (GRUs) [13], which avoid the problem of vanishing gradients.

Footnote 2: Normally, the hidden state of an RNN is denoted by \(h_{t}\). However, we are using \(h_{t}\) to denote the history of a POMDP. So, we use \(z_{t}\) to denote the hidden state of an RNN.

**Recurrent Q-learning.** Recurrent Q-learning (RQL) is a variant of the Q-learning algorithm for POMDPs which uses an RNN to estimate the Q-function \(Q_{t}(h_{t},a_{t})\) [12, 13]. In particular, an RNN with input \((Y_{t},A_{t-1})\) is used to generate a hidden state \(z_{t}\in\mathcal{Z}\) which is updated recursively as \(z_{t}=f(z_{t-1},y_{t},a_{t-1})\), where \(f(\cdot)\) is the update function of an RNN. We will sometimes write \(z_{t}=\sigma_{t}(h_{t})\) to highlight the fact that \(z_{t}\) is a function of the history \(h_{t}\). In RQL, the learning agent uses an exploration policy \(\pi_{\text{expl}}\) to generate experience and updates an estimate of the Q-function using the following recursion:

\[\widehat{Q}_{t+1}(z_{t},a_{t})=\widehat{Q}_{t}(z_{t},a_{t})+\alpha_{t}(z_{t},a_{t})\big[R_{t}+\gamma\max_{\bar{a}\in\mathcal{A}}\widehat{Q}_{t}(z_{t+1},\bar{a})-\widehat{Q}_{t}(z_{t},a_{t})\big], \tag{8}\]

where \(\{\alpha_{t}(z_{t},a_{t})\}_{t\geq 1}\) is the learning rate. Define \(\hat{V}_{t}\colon\mathcal{Z}\to\mathds{R}\) to be the value function corresponding to \(\widehat{Q}_{t}\) and \(\hat{\pi}_{t}\colon\mathcal{Z}\to\mathcal{A}\) to be the greedy policy w.r.t. \(\widehat{Q}_{t}\), i.e.,

\[\hat{V}_{t}(z)=\max_{a\in\mathcal{A}}\widehat{Q}_{t}(z,a)\quad\text{and}\quad\hat{\pi}_{t}(z)=\arg\max_{a\in\mathcal{A}}\widehat{Q}_{t}(z,a).\]
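To make the recursion (8) concrete, the following is a minimal sketch of tabular RQL with the visit-count learning rate used in assumption (A3) of the next section; the environment interface, the RNN update \(f\), and the exploration rule are assumed placeholders, with \(z\) and \(a\) indexing finite sets.

```python
import numpy as np

# Minimal sketch of tabular RQL (8). `env`, `f`, and `explore` are assumed
# interfaces: env.reset() / env.step(a) return observations (and reward),
# f(z, y, a) is the recurrent agent-state update, and explore(Q, z) is the
# exploration policy (e.g., epsilon-greedy on the agent state z).
def rql(env, f, explore, n_z, n_a, gamma=0.99, steps=100_000):
    Q = np.zeros((n_z, n_a))
    visits = np.zeros((n_z, n_a))    # visit counts for the (A3) step size
    y = env.reset()
    z = f(0, y, 0)                   # initial agent state (assumed convention)
    for _ in range(steps):
        a = explore(Q, z)
        y_next, r = env.step(a)
        z_next = f(z, y_next, a)     # non-Markovian agent-state update
        visits[z, a] += 1
        alpha = 1.0 / (1.0 + visits[z, a])
        Q[z, a] += alpha * (r + gamma * Q[z_next].max() - Q[z, a])
        z = z_next
    return Q
```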
## 3 Theoretical results

The key challenge in characterizing the convergence of RQL is that the agent state \(\{Z_{t}\}_{t\geq 1}\) is not a controlled Markov process. Therefore, the standard results on the convergence of Q-learning [11] are not directly applicable. In Sec. 3.1, we show that it is possible to adapt the standard convergence arguments to show that RQL converges. The quality of the converged solution depends on the choice of the exploration policy as well as the representation. The dependence on the representation is not surprising. For example, it is clear that when the representation is bad (e.g., a representation that maps all histories to a single agent state), then RQL will converge to a limit which is far from optimal. So, it is important to quantify the degree of sub-optimality of the converged limit. We do so in Sec. 3.2.

### Establishing the convergence of RQL

**Lemma 1**.: Under any policy \(\pi\colon\mathcal{Z}\to\Delta(\mathcal{A})\), the process \(\{(S_{t},Y_{t},Z_{t},A_{t})\}_{t\geq 1}\) is a Markov chain.

We impose the following assumptions:

**(A1)**: The state space, action space, and the recurrent state space are finite.

**(A2)**: The exploration policy \(\pi_{\text{expl}}\colon\mathcal{Z}\to\Delta(\mathcal{A})\) is such that the Markov chain \(\{(S_{t},Y_{t},Z_{t},A_{t})\}_{t\geq 1}\) has a unique stationary distribution \(\xi\). Moreover, for every \((s,y,z,a)\), \(\xi(s,y,z,a)>0\).

**(A3)**: The learning rate \(\alpha_{t}(z,a)\) is given by \(\alpha_{t}(z,a)=\mathds{1}_{\{Z_{t}=z,A_{t}=a\}}/\big(1+\sum_{\tau=1}^{t}\mathds{1}_{\{Z_{\tau}=z,A_{\tau}=a\}}\big)\).

We impose assumption **(A1)** to analyze the simplest version of RQL. Assumption **(A2)** is a mild assumption on the exploration policy and is commonly assumed in several variations of Q-learning with function approximation [14]. Assumption **(A3)** is a common assumption on the step size of stochastic approximation algorithms.

For ease of notation, we continue to use \(\xi\) to denote marginal and conditional distributions w.r.t. \(\xi\). For example, \(\xi(y,z,a)=\sum_{s\in\mathcal{S}}\xi(s,y,z,a)\), and similar notation holds for other marginals. Similarly, \(\xi(s|z)=\xi(s,z)/\xi(z)\), and similar notation holds for other conditional distributions. Given the steady-state distribution \(\xi\) corresponding to the exploration policy, define a reward function \(r_{\xi}\colon\mathcal{Z}\times\mathcal{A}\to\mathds{R}\) and a transition probability \(P_{\xi}\colon\mathcal{Z}\times\mathcal{A}\to\Delta(\mathcal{Z})\) as follows:

\[r_{\xi}(z,a) =\sum_{s\in\mathcal{S}}r(s,a)\xi(s\mid z,a),\]
\[P_{\xi}(z^{\prime}\mid z,a) =\sum_{s\in\mathcal{S}}\xi(s\mid z,a)\sum_{s^{\prime}\in\mathcal{S}}P(s^{\prime}\mid s,a)\sum_{y^{\prime}\in\mathcal{Y}}O(y^{\prime}\mid s^{\prime},a)\mathds{1}_{\{z^{\prime}=f(z,y^{\prime},a)\}}.\]

Furthermore, define \(Q_{\xi}^{\star}\) to be the unique fixed point of the following fixed point equation:

\[Q_{\xi}^{\star}(z,a)=r_{\xi}(z,a)+\gamma\sum_{z^{\prime}\in\mathcal{Z}}P_{\xi}(z^{\prime}\mid z,a)\max_{\tilde{a}\in\mathcal{A}}Q_{\xi}^{\star}(z^{\prime},\tilde{a}). \tag{9}\]

Define \(V_{\xi}^{\star}\colon\mathcal{Z}\to\mathds{R}\) to be the value function corresponding to \(Q_{\xi}^{\star}\) and \(\pi_{\xi}^{\star}\colon\mathcal{Z}\to\mathcal{A}\) to be the greedy policy with respect to \(Q_{\xi}^{\star}\), i.e.,

\[V_{\xi}^{\star}(z)=\max_{a\in\mathcal{A}}Q_{\xi}^{\star}(z,a),\quad\text{and}\quad\pi_{\xi}^{\star}(z)=\arg\max_{a\in\mathcal{A}}Q_{\xi}^{\star}(z,a).\]
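For intuition, the induced model \((r_{\xi},P_{\xi})\) can be computed explicitly when all spaces are finite and \(\xi\) is known; equation (9) can then be solved with the same value iteration as in the sketch for (4). The array layout below is an assumption made for illustration.

```python
import numpy as np

# Minimal sketch of the induced agent-state model. Assumed shapes:
# xi_s_za[z, a, s] = xi(s | z, a), P[s, a, s'], O[s_next, a, y], r[s, a],
# and f(z, y, a) -> z' is the deterministic recurrent update.
def induced_model(xi_s_za, P, O, r, f, n_z, n_a, n_y):
    # r_xi(z, a) = sum_s r(s, a) * xi(s | z, a)
    r_xi = np.einsum("sa,zas->za", r, xi_s_za)
    # P_xi(z' | z, a): push xi(. | z, a) through P, then O, then f
    P_xi = np.zeros((n_z, n_a, n_z))
    for z in range(n_z):
        for a in range(n_a):
            p_y = xi_s_za[z, a] @ P[:, a] @ O[:, a]  # distribution of Y_{t+1}
            for y in range(n_y):
                P_xi[z, a, f(z, y, a)] += p_y[y]
    return r_xi, P_xi
```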
**Theorem 2**.: Under Assumptions **(A1)**-**(A3)**, the iterates \(\{\widehat{Q}_{t}\}_{t\geq 1}\) of (8) converge almost surely to \(Q_{\xi}^{\star}\) given by (9). Therefore, \(\{\hat{\pi}_{t}\}_{t\geq 1}\) converges to \(\pi_{\xi}^{\star}\) (see footnote 1 for uniqueness of the arg-max).

Proof outline.: The main idea of the proof is inspired by [11, 14]. To establish that \(\widehat{Q}_{t}\to Q_{\xi}^{\star}\), a.s., we will show that \(\Delta_{t}\coloneqq\widehat{Q}_{t}-Q_{\xi}^{\star}\to 0\), a.s. Define \(\hat{V}_{t}(z)=\max_{a\in\mathcal{A}}\widehat{Q}_{t}(z,a)\). Combining (8) and (9), we get that

\[\Delta_{t+1}(z,a)=(1-\alpha_{t}(z,a))\Delta_{t}(z,a)+\alpha_{t}(z,a)\big[F_{t}^{1}(z,a)+F_{t}^{2}(z,a)\big], \tag{10}\]

where

\[F_{t}^{1}(z,a) =\gamma\hat{V}_{t}(Z_{t+1})-\gamma V_{\xi}^{\star}(Z_{t+1}), \tag{11}\]
\[F_{t}^{2}(z,a) =R_{t}-r_{\xi}(z,a)+\gamma V_{\xi}^{\star}(Z_{t+1})-\gamma\sum_{z^{\prime}\in\mathcal{Z}}P_{\xi}(z^{\prime}\mid z,a)V_{\xi}^{\star}(z^{\prime}). \tag{12}\]

Following [11], we view (10) as a linear system with two inputs and perform "state splitting" to write

\[\Delta_{t}(z,a)=W_{t}^{1}(z,a)+W_{t}^{2}(z,a), \tag{13}\]

where for \(i\in\{1,2\}\), each "state component" \(W_{t}^{i}(z,a)\) is initialized to \(0\) and evolves for \(t\geq 1\) as

\[W_{t+1}^{i}(z,a)=(1-\alpha_{t}(z,a))W_{t}^{i}(z,a)+\alpha_{t}(z,a)F_{t}^{i}(z,a),\quad i\in\{1,2\}.\]

From (11), we have that

\[F_{t}^{1}(z,a)=\gamma\big[\hat{V}_{t}(Z_{t+1})-V_{\xi}^{\star}(Z_{t+1})\big]\leq\gamma\|\hat{V}_{t}-V_{\xi}^{\star}\|_{\infty}\leq\gamma\|\widehat{Q}_{t}-Q_{\xi}^{\star}\|_{\infty}=\gamma\|\Delta_{t}\|_{\infty}. \tag{14}\]

Using assumptions **(A1)**-**(A3)**, we can show that \(W_{t}^{2}(z,a)\to 0\), a.s., for all \((z,a)\); see the supplementary material for the proof. Therefore, there exists a set \(\Omega_{0}\) with \(\mathds{P}(\Omega_{0})=1\) such that for every \(\omega\in\Omega_{0}\) and any \(\epsilon>0\), there exists a \(T(\omega,\epsilon)\) such that for all \(t>T(\omega,\epsilon)\), \(|W_{t}^{2}(z,a)|<\epsilon\) for all \((z,a)\).

Now, pick \(C\) such that \(\gamma(1+1/C)<1\). For any \(t>T(\omega,\epsilon)\), if \(\|W_{t}^{1}\|_{\infty}>C\epsilon\), then

\[F_{t}^{1}(z,a)\leq\gamma\|\Delta_{t}\|_{\infty}\leq\gamma\|W_{t}^{1}\|_{\infty}+\gamma\epsilon<\gamma\big(1+\tfrac{1}{C}\big)\|W_{t}^{1}\|_{\infty}<\|W_{t}^{1}\|_{\infty}, \tag{15}\]

where the first inequality uses (13), the second uses the triangle inequality, and the others follow from the definition of \(C\) and \(\epsilon\). Consequently, for any \(t>T(\omega,\epsilon)\) with \(\|W_{t}^{1}\|_{\infty}>C\epsilon\), we have that

\[W_{t+1}^{1}(z,a)=(1-\alpha_{t}(z,a))W_{t}^{1}(z,a)+\alpha_{t}(z,a)F_{t}^{1}(z,a)<\|W_{t}^{1}\|_{\infty}. \tag{16}\]

Hence, when \(\|W_{t}^{1}\|_{\infty}>C\epsilon\), it decreases monotonically. So, there are two possibilities: either it gets below \(C\epsilon\) or it never goes below \(C\epsilon\). In the supplementary material, we show that the process cannot stay above \(C\epsilon\) all the time; hence, it must hit below \(C\epsilon\) at some point. In the supplementary material, we also show that once the process hits below \(C\epsilon\), it stays there. Thus, we have shown that for all sufficiently large \(t\), \(\|W_{t}^{1}\|_{\infty}<C\epsilon\), a.s. Since \(\epsilon\) is arbitrary, this implies that \(W_{t}^{1}(z,a)\to 0\), a.s., for all \((z,a)\). Combining this with the fact that \(W_{t}^{2}(z,a)\to 0\), we get that \(\Delta_{t}(z,a)=W_{t}^{1}(z,a)+W_{t}^{2}(z,a)\to 0\), a.s., for all \((z,a)\).

Let \(\mathcal{F}_{t}=\sigma(W_{1:t}^{1},F_{1:t}^{1},\alpha_{1:t})\).
Then, \(\|\mathbb{E}[F_{t}^{1}(z,a)\mid\mathcal{F}_{t}]\|_{\infty}\leq\mathbb{E}[\|F_{t}^{1}(z,a)\|_{\infty}\mid\mathcal{F}_{t}]\leq\gamma\|\Delta_{t}\|_{\infty}\). Moreover, since the state space is finite, the per-step reward is bounded. Therefore, both \(\hat{V}_{t}(Z_{t+1})\) and \(V_{\xi}^{\star}(Z_{t+1})\) are bounded, so there exists a constant \(C\) such that \(\text{var}(F_{t}^{1}(z,a)\mid\mathcal{F}_{t})\leq C\). Thus, the iteration for \(W_{t}^{1}(z,a)\) satisfies all conditions of [13, Theorem 1], and by that result, \(W_{t}^{1}(z,a)\to 0\), a.s., for all \((z,a)\). For convenience, we restate [13, Theorem 1] in the supplementary material.

**Remark 1**.: In the special case when \(z_{t}=(y_{t-n:t},a_{t-n:t-1})\) consists of the last \(n\) observations and actions (i.e., frame-stacking), the result of Theorem 2 recovers the result of [10, Theorem 4.1, (i)].

**Remark 2**.: In [14, Theorem 1], it was shown that the results of Theorem 2 hold if **(A2)** and **(A3)** are replaced by the following:

**(A2')**: Assumption **(A2)** holds and the initial distribution satisfies \(\mathbb{P}(S_{1}=s,Z_{1}=z)=\xi(s,z)\).

**(A3')**: The learning rate \(\alpha_{t}(z,a)\) satisfies \(\sum_{t\geq 1}\alpha_{t}(z,a)=\infty\) and \(\sum_{t\geq 1}\alpha_{t}^{2}(z,a)<\infty\), a.s.

Note that **(A2')** is stronger than **(A2)** and implies that the process \(\{(S_{t},Y_{t},Z_{t},A_{t})\}_{t\geq 1}\) is stationary and ergodic. However, the learning rate condition **(A3')** is weaker than **(A3)**.

Theorem 2 addresses the main challenge in the convergence analysis of RQL: the non-Markovian dynamics. We have shown that RQL converges, so the next question is: how good is the converged solution compared to the optimal one? We address this question in the next section.

### Characterizing the approximation error of the converged value

Our key observation is that for any \(\mathfrak{F}\), \((\sigma_{t},P_{\xi},r_{\xi})\) is an \((\varepsilon_{t},\delta_{t})_{t\geq 1}\) AIS-generator, where

\[\varepsilon_{t} \coloneqq\max_{h_{t}\in\mathcal{H}_{t},a_{t}\in\mathcal{A}}\bigl|\mathbb{E}[r(S_{t},A_{t})\mid H_{t}=h_{t},A_{t}=a_{t}]-r_{\xi}(\sigma_{t}(h_{t}),a_{t})\bigr|,\]
\[\delta_{t} \coloneqq\max_{h_{t}\in\mathcal{H}_{t},a_{t}\in\mathcal{A}}d_{\mathfrak{F}}\bigl(\mathbb{P}(Z_{t+1}=\cdot\mid H_{t}=h_{t},A_{t}=a_{t}),P_{\xi}(\cdot\mid\sigma_{t}(h_{t}),a_{t})\bigr).\]

Therefore, an immediate implication of Theorem 1 is the following.

**Theorem 3**.: Let \(\tilde{\pi}=(\tilde{\pi}_{1},\tilde{\pi}_{2},\dots)\) be a time-varying and history-dependent policy given by \(\tilde{\pi}_{t}(h_{t})=\pi_{\xi}^{\star}(\sigma_{t}(h_{t}))\). Then, for any time \(t\), any history \(h_{t}\in\mathcal{H}_{t}\), and any action \(a_{t}\in\mathcal{A}\), we have:

* **Bounds on value approximation:**
\[\bigl|Q_{t}^{\star}(h_{t},a_{t})-Q_{\xi}^{\star}(\sigma_{t}(h_{t}),a_{t})\bigr| \leq(1-\gamma)^{-1}\bigl[\bar{\varepsilon}_{t}+\gamma\bar{\delta}_{t}\rho_{\mathfrak{F}}(V_{\xi}^{\star})\bigr], \tag{17}\]
\[\bigl|V_{t}^{\star}(h_{t})-V_{\xi}^{\star}(\sigma_{t}(h_{t}))\bigr| \leq(1-\gamma)^{-1}\bigl[\bar{\varepsilon}_{t}+\gamma\bar{\delta}_{t}\rho_{\mathfrak{F}}(V_{\xi}^{\star})\bigr], \tag{18}\]
where \(\bar{\varepsilon}_{t}=(1-\gamma)\sum_{\tau=t}^{\infty}\gamma^{\tau-t}\varepsilon_{\tau}\) and \(\bar{\delta}_{t}=(1-\gamma)\sum_{\tau=t}^{\infty}\gamma^{\tau-t}\delta_{\tau}\).
* **Bounds on policy approximation:**
\[\bigl|V_{t}^{\star}(h_{t})-V_{t}^{\tilde{\pi}}(h_{t})\bigr| \leq 2(1-\gamma)^{-1}\bigl[\bar{\varepsilon}_{t}+\gamma\bar{\delta}_{t}\rho_{\mathfrak{F}}(V_{\xi}^{\star})\bigr]. \tag{19}\]

**Remark 3**.: We may upper bound \(\bar{\varepsilon}_{t}\) by \(\bar{\varepsilon}_{t}^{\circ}\coloneqq\sup_{\tau\geq t}\varepsilon_{\tau}\) and \(\bar{\delta}_{t}\) by \(\bar{\delta}_{t}^{\circ}\coloneqq\sup_{\tau\geq t}\delta_{\tau}\).

The bound of Theorem 3 is _instance dependent_ because it depends on the value function \(V_{\xi}^{\star}\). In the supplementary material, we illustrate _instance independent_ bounds by upper bounding \(\rho_{\mathfrak{F}}(V_{\xi}^{\star})\) in terms of properties of the transition and reward functions.

## 4 RQL with AIS losses

The results of Theorem 3 suggest that the performance of RQL could be enhanced by improving the representation function \(f\) so as to minimize the approximation losses \(\varepsilon=(\varepsilon_{1},\varepsilon_{2},\dots)\) and \(\delta=(\delta_{1},\delta_{2},\dots)\). A similar idea was proposed in [14] to improve the performance of actor-critic algorithms. In this section, we verify that adding such _AIS losses_ improves the performance of RQL.

### Adding AIS losses to RQL

The key idea is to model each component of the AIS generator as a parametric family of functions/distributions and then use stochastic gradient descent to update these parameters. Note that in RQL, we use an RNN to model the state update function \(f\). As explained in [14, Proposition 8], in this case we can use an observation predictor \(\tilde{P}^{y}\colon\mathcal{Z}\times\mathcal{A}\to\Delta(\mathcal{Y})\) as a proxy for the state predictor \(\tilde{P}\) and replace \(\delta_{t}\) by \(\tilde{\delta}_{t}/\bar{\kappa}_{\mathfrak{G}}(f)\), where

\[\tilde{\delta}_{t}\coloneqq\max_{h_{t}\in\mathcal{H}_{t},a_{t}\in\mathcal{A}}d_{\mathfrak{G}}\big(\mathds{P}(Y_{t+1}=\cdot\mid H_{t}=h_{t},A_{t}=a_{t}),\tilde{P}^{y}(\cdot\mid\sigma_{t}(h_{t}),a_{t})\big)\]

and \(\bar{\kappa}_{\mathfrak{G}}(f)=\sup_{z\in\mathcal{Z},a\in\mathcal{A}}\kappa_{\mathfrak{G}}(f(z,\cdot,a))\).

Let \(\psi\) denote the combined parameters of the AIS-generator. We could choose the AIS loss function \(\mathcal{L}_{\text{AIS}}\) as any monotonic function of \((\varepsilon_{t},\tilde{\delta}_{t})\) because reducing \((\varepsilon_{t},\tilde{\delta}_{t})\) reduces the upper bound of Theorem 3. As suggested in [14], we choose the AIS loss as

\[\mathcal{L}_{\text{AIS}}(\psi)=\frac{1}{T}\sum_{t=1}^{T}\big(\lambda\varepsilon_{t}^{2}+(1-\lambda)\tilde{\delta}_{t}^{2}\big),\]

where \(\lambda\) is a hyper-parameter and \(T\) is the batch size. A detailed discussion of the choice of IPM is presented in [14]. We choose \(d_{\mathfrak{G}}\) as the \(\ell_{2}\)-distance-based MMD [see 14, Proposition 32], for which the AIS loss can be simplified as [14, Proposition 33]:

\[\mathcal{L}_{\text{AIS}}(\psi)=\frac{1}{T}\sum_{t=1}^{T}\Bigl[\lambda\big|R_{t}-\tilde{r}(Z_{t},A_{t})\big|^{2}+(1-\lambda)(M_{t}^{y}-2Y_{t})^{\intercal}M_{t}^{y}\Bigr]+\text{const}(\psi),\]

where \(M_{t}^{y}\) is the mean of the distribution \(\tilde{P}^{y}(\cdot\mid Z_{t},A_{t})\) and \(\text{const}(\psi)\) collects terms which do not depend on \(\psi\) and can therefore be ignored. We then update the parameters \(\psi\) of the AIS generator using stochastic gradient descent. The update of the representation using AIS losses is carried out in parallel with RQL.
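For concreteness, a minimal PyTorch sketch of this simplified loss is given below; the batch layout, tensor shapes, and the networks producing \(\tilde{r}(Z_{t},A_{t})\) and \(M_{t}^{y}\) are assumptions, and the \(\psi\)-independent constant of the squared MMD is dropped, as in the expression above.

```python
import torch

# Minimal sketch of the simplified AIS loss. For a batch of T steps:
# reward_pred = r~(Z_t, A_t), obs_mean = M_t^y (mean of P~^y(. | Z_t, A_t)),
# and obs_next = Y_t encoded as a vector. Constant-in-psi terms are dropped.
def ais_loss(reward, reward_pred, obs_next, obs_mean, lam=0.5):
    reward_err = (reward - reward_pred).pow(2)                  # |R_t - r~|^2
    obs_err = ((obs_mean - 2.0 * obs_next) * obs_mean).sum(-1)  # (M - 2Y)^T M
    return (lam * reward_err + (1.0 - lam) * obs_err).mean()

# Example with assumed shapes: T = 32 steps, 16-dimensional observations.
T, d = 32, 16
reward_pred = torch.randn(T, requires_grad=True)
obs_mean = torch.randn(T, d, requires_grad=True)
loss = ais_loss(torch.randn(T), reward_pred, torch.randn(T, d), obs_mean)
loss.backward()  # gradients flow to the AIS-generator parameters psi
```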
For our experiments, we choose R2D2 [14], which generalizes Double Q-learning (DQL) with replay buffers to RNNs [10]. Note that, as in [14], we do not backpropagate the Q-learning losses to the AIS generator. A block diagram showing the network architecture is shown in Figure 1. The complete implementation details are presented in the supplementary material.

Figure 1: Network architecture of RQL with AIS losses.

### Empirical evaluation

In this section, we compare the performance of RQL-AIS (the algorithm described in the previous section) with a non-distributed3 variant of R2D2 proposed in [11], which we label ND-R2D2. The exact implementation details, including the choice of hyper-parameters, are presented in the supplementary material. We want to emphasize that the two implementations are identical, except that RQL-AIS includes the AIS loss block and updates the parameters \(\psi\) of the representation by backpropagating the AIS loss \(\mathcal{L}_{\text{AIS}}(\psi)\) rather than backpropagating the Q-learning losses.

Footnote 3: In [11], the distributed setup is introduced to allow for parallel data collection and training in environments requiring a very high number of interactions. This is not the case in our setup, and both RQL-AIS and R2D2 are implemented in a non-distributed manner.

We evaluate the two algorithms on 22 environments from the MiniGrid benchmark [12], which are partially observed MiniGrid environments with tasks of increasing complexity, described in detail in the supplementary material. In all environments, the layout at the start of each episode is randomly chosen, so the agent cannot solve the task by memorizing an exact sequence of actions. Similar to [14, 15], before running the experiments, we train an autoencoder on a dataset of random agent observations to compress the observations of the agent into compact vectors. The weights of the autoencoder are kept frozen during the RL experiments. We train both RQL-AIS and ND-R2D2 for \(T=4\cdot 10^{6}\) environment steps with \(N=5\) seeds. During the data-gathering phase, the agent behaves according to the epsilon-greedy approach [13] to allow for exploration of the environment, with the epsilon value decreasing exponentially over time. We run two sets of studies: one with uniform sampling from the replay buffer and the other using prioritized experience replay (PER) [14]. We report the results for uniform sampling here and the results for PER in the supplementary material.
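An exploration schedule of this kind can be written, for instance, as below; the start/end values and decay constant are illustrative assumptions, not the exact settings used in the experiments.

```python
import math
import random

# Minimal sketch of epsilon-greedy data gathering with an exponentially
# decaying epsilon. eps_start, eps_end, and decay are assumed values.
def epsilon(step, eps_start=1.0, eps_end=0.05, decay=2e5):
    return eps_end + (eps_start - eps_end) * math.exp(-step / decay)

def act(q_values, step):
    if random.random() < epsilon(step):
        return random.randrange(len(q_values))                   # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```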
The mean and standard deviation of the final performance of both algorithms are shown in Table 1. Figure 2 shows the training curves for 8 representative environments. Training curves for all environments are included in the supplementary material.

| **Environment** | **RQL-AIS** | **ND-R2D2** |
| --- | --- | --- |
| SimpleCrossingS9N1 | 0.870 ± 0.0059 | 0.969 ± 0.0051 |
| SimpleCrossingS9N2 | 0.820 ± 0.0103 | 0.858 ± 0.020 |
| SimpleCrossingS9N3 | 0.924 ± 0.0103 | 0.924 ± 0.010 |
| SimpleCrossingS11N5 | 0.771 ± 0.010 | 0.407 ± 0.233 |

Table 1: Comparison of RQL-AIS and ND-R2D2 on the MiniGrid benchmark.

Figure 2: Results from 8 selected MiniGrid environments. RQL-AIS successfully solves all 8, while ND-R2D2 fails at solving 4 environments. This demonstrates the effectiveness of AIS state representations in empirical settings.

**Discussion of the results.** For the simpler environments, both RQL-AIS and ND-R2D2 learn to solve the task, with ND-R2D2 performing slightly better. However, there are seven environments where ND-R2D2 fails to learn. This is not surprising, as the MiniGrid environments are sparse-reward environments which are used as a benchmark for research on exploration methods in RL [11; 12; 13; 14; 15; 16]. ND-R2D2, which does not include any exploration bonuses, fails to learn in some of the larger environments. What is surprising is that RQL-AIS is still able to learn in such environments without any exploration bonuses. We show in the supplementary material that adding prioritized experience replay boosts the performance of RQL-AIS in the harder environments.

**What is the impact of AIS losses on learning?** To understand the impact of AIS losses on learning, we compare the evolution of the MMD distance and the episodic return with the number of environment steps. The results are shown in Figure 3, where the environment steps are plotted on a log scale. In each environment, when the MMD loss is initially high, the episodic return does not improve considerably. However, when the MMD loss becomes smaller (\(\approx 0.5\)), the episodic return starts improving. This suggests that it takes some time for the agent to learn a good representation through the AIS losses; once a good representation has been obtained (small MMD loss), the RL losses through Q-learning are much more effective in training policies, allowing policies to improve much more quickly, i.e., with fewer samples. This makes sense because optimizing the AIS losses effectively assigns the same representation to several histories that are different but identical from the perspective of performance, which then allows Q-learning updates to propagate through all these possible histories instead of just a single history trajectory. The ability to accurately map more histories to fewer representations improves with the quality of the AIS approximation. Thus, AIS losses not only help in establishing theoretical performance bounds; it is evident that they also help learning in empirical experiments.

Figure 3: Episodic return (left axis) and MMD loss (right axis, on a log scale) for the RQL-AIS method on 8 selected MiniGrid environments. There is a close correlation between the drop in the MMD loss value and the improvement in episodic return. Both environment steps and MMD loss are on a logarithmic scale.

## 5 Conclusion

In this work, we establish the convergence of recurrent Q-learning (RQL) in the tabular setting for POMDPs using a representation of the history called an approximate information state (AIS). We also establish upper bounds on the degree of sub-optimality of the converged solution. These bounds quantify the relationship between the quality of the representation and the quality of the converged solution of RQL. Based on these bounds, a variant of RQL called RQL-AIS was proposed, and it was observed that RQL-AIS performs better than the state-of-the-art baseline ND-R2D2 on the MiniGrid benchmark. A detailed comparison of the time evolution of AIS losses and performance strongly suggests that the improvement in performance is correlated with the decrease in AIS losses. In conclusion, the results of this paper show that AIS is a useful theoretical tool for analyzing the performance of RQL in POMDPs and also an effective building block for algorithm design for POMDPs. Our theoretical analysis is restricted to the tabular setting.
Interesting future work includes generalizing the analysis to more general models, including using function approximation for the Q-function. Another interesting avenue is formally establishing the convergence of an algorithm which learns the AIS and the Q-function in parallel using two-timescale methods.

## Acknowledgements

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada through Discovery Grant RGPIN-2021-03511 and Alliance International Catalyst Grant ALLRP 571054-21. The numerical experiments were enabled in part by compute resources provided by the Digital Research Alliance of Canada.
2310.18940
Language Agents with Reinforcement Learning for Strategic Play in the Werewolf Game
Agents built with large language models (LLMs) have shown great potential across a wide range of domains. However, in complex decision-making tasks, pure LLM-based agents tend to exhibit intrinsic bias in their choice of actions, which is inherited from the model's training data and results in suboptimal performance. To develop strategic language agents, i.e., agents that generate flexible language actions and possess strong decision-making abilities, we propose a novel framework that powers LLM-based agents with reinforcement learning (RL). We consider Werewolf, a popular social deduction game, as a challenging testbed that emphasizes versatile communication and strategic gameplay. To mitigate the intrinsic bias in language actions, our agents use an LLM to perform deductive reasoning and generate a diverse set of action candidates. Then an RL policy trained to optimize the decision-making ability chooses an action from the candidates to play in the game. Extensive experiments show that our agents overcome the intrinsic bias and outperform existing LLM-based agents in the Werewolf game. We also conduct human-agent experiments and find that our agents achieve human-level performance and demonstrate strong strategic play.
Zelai Xu, Chao Yu, Fei Fang, Yu Wang, Yi Wu
2023-10-29T09:02:57Z
http://arxiv.org/abs/2310.18940v3
# Language Agents with Reinforcement Learning for Strategic Play in the Werewolf Game

###### Abstract

Agents built with large language models (LLMs) have recently achieved great advancements. However, most of the efforts focus on single-agent or cooperative settings, leaving more general multi-agent environments underexplored. We propose a new framework powered by reinforcement learning (RL) to develop strategic language agents, i.e., LLM-based agents with strategic thinking ability, for a popular language game, Werewolf. Werewolf is a social deduction game with hidden roles that involves both cooperation and competition and emphasizes deceptive communication and diverse gameplay. Our agent tackles this game by first using LLMs to reason about potential deceptions and generate a set of strategically diverse actions. Then an RL policy, which selects an action from the candidates, is learned by population-based training to enhance the agents' decision-making ability. By combining LLMs with the RL policy, our agent produces a variety of emergent strategies, achieves the highest win rate against other LLM-based agents, and stays robust against adversarial human players in the Werewolf game.

## 1 Introduction

Developing agents that are capable of logical thinking, strategic planning, and communicating with humans has been a longstanding aspiration (Wooldridge and Jennings, 1995; Goodwin, 1995). Due to their remarkable reasoning power and emergent generalization ability, large language models (LLMs) have shown great potential in constructing intelligent agents and have already led to many recent advancements (Ouyang et al., 2022; Wei et al., 2022; OpenAI, 2023). These LLM-based agents demonstrate proficiency in solving tasks in web surfing (Nakano et al., 2021; Yao et al., 2022), complex video games (Wang et al., 2023; Zhu et al., 2023), and real-world applications (Ahn et al., 2022; Brohan et al., 2023). Moreover, when interacting with other players, LLM-based agents exhibit the ability to generate human-like behaviors (Park et al., 2023; Gao et al., 2023) and achieve zero-shot multi-agent cooperation (Li et al., 2023; Chen et al., 2023).

Although much progress has been made in designing LLM-based agents, most works focus on single-agent or fully cooperative tasks. Some other works (Meta et al., 2022) build language agents for more general environments but rely on predefined atomic actions. By contrast, real-world communication between humans is based on natural language and requires both cooperation and competition. Existing research efforts can thus be limited when deploying agents in these more complex multi-agent scenarios.

We consider the Werewolf game as a challenging mixed cooperative-competitive multi-agent testbed for LLM-based agents and examine their performance by playing against other LLM-based agents and human players. Werewolf is one of the most popular social deduction games, where two teams of players with hidden roles need to communicate in natural language to discover each other's identities and eliminate their opponents. During gameplay, the Werewolves need to conceal or lie about their roles to avoid suspicion, while the Villagers aim to gather information and find the hidden opponents. The game is characterized by discussions, debates, and accusations as agents try to figure out others' true identities, which requires strong communication and strategic thinking that challenge the ability of LLM-based agents.
One key challenge of the Werewolf game is to identify the hidden roles from unreliable information with potential deceptions. With the existence of opponents with unknown identities, the communication between players can be uninformative or even deceptive. Agents must distinguish between truths and lies and reason carefully to deduce the true identities of other players. Prior work on LLM-based agents for single-agent (Yao et al., 2022b) or cooperative tasks (Mandi et al., 2023) cannot handle this challenge well, as their reasoning and actions are grounded in credible information, and they can be misled by manipulative statements into making wrong decisions (Wang et al., 2023b).

Moreover, the competitive nature of this game requires agents to employ strategically diverse actions to avoid being exploited by their opponents. If agents always adopt the same strategy, the fixed patterns in their play can be perceived and leveraged by skilled players to gain a significant advantage. For example, if the Werewolves invariably defend their accused teammates and follow their votes, the Villagers can easily reveal them through this pattern and cooperate to defeat them. This requires agents to have strong diversity in their gameplay strategies to reduce predictable behavior patterns, so that they are less exploitable in the game. Unfortunately, existing LLM-based agents (Xu et al., 2023a) rarely consider the exploitability of their behavior and tend to take actions with clear strategic patterns, making them vulnerable to real human players.

In this work, we propose a framework that combines LLMs and reinforcement learning (RL) to build strategic language agents, i.e., LLM-based agents with strategic thinking ability, to tackle the aforementioned challenges in the Werewolf game. Our agent uses an LLM to organize key information, reason about hidden roles, and generate a diverse set of action candidates. Then we learn an RL policy by population-based training to output final actions from the candidates and achieve strong strategic play. More specifically, our agent consists of three components. The first component, deduction, performs reasoning over the game history. It categorizes the whole game history into a list of atomic information according to importance and reliability and uses an LLM to deduce the hidden roles of other players. The categorized information and deduction result are used as the input for the diverse action generation component to prompt the LLM for a set of strategically diverse action candidates. These candidates provide our agent with a variety of play styles, making it possible for the agent to learn diverse and unpredictable behaviors. The last component is an RL policy that selects the output actions and optimizes overall decision-making performance. To make the final policy less exploitable, we generate a pool of fixed LLM-based agents with different styles and use population-based training to further improve the RL policy by playing against itself, its past versions, and the agents in the pool.

To demonstrate the effectiveness of our framework, we perform a round-robin tournament evaluation between our agent and three other Werewolf baseline agents, where our agent consistently achieves the highest win rate. We then evaluate our agent against real humans and find it is robust to adversarial human players and achieves higher win rates than average humans in single-human settings where the rest of the players are all LLM-based agents.
Moreover, we show that the RL policy trained with one LLM can be directly deployed to other LLMs to improve their decision-making performance, demonstrating the zero-shot transfer capability of the RL policy. We also perform an empirical behavior analysis and find that our agent exhibits a diverse range of emergent strategies like concealment, cooperation, bluffing, and sacrificing, which are often utilized by skilled human players.

## 2 Related work

**Building agents with large language models.** There is a recent trend of developing agents with large language models (LLMs) for various domains including web environments (Nakano et al., 2021; Yao et al., 2022a; Deng et al., 2023), games and simulations (Wang et al., 2023c;a; Zhu et al., 2023; Huang et al., 2022a), real-world scenarios (Ahn et al., 2022; Brohan et al., 2023; Vemprala et al., 2023; Huang et al., 2022b), and multi-agent interaction (Park et al., 2023; Li et al., 2023; Chen et al., 2023; Mandi et al., 2023). A shared foundation of these works is to utilize LLMs for planning and decision-making. One widely used approach to improve these abilities is task decomposition. Chain-of-thought (CoT) (Wei et al., 2022b) decomposes harder tasks into simpler ones by asking the model to think step-by-step. Tree-of-thoughts (ToT) (Yao et al., 2023) extends CoT by generating multiple thoughts at each step to create a tree structure and planning by searching in the tree. Work by Gandhi et al. (2023) further combines CoT with few-shot examples to enable strategic reasoning in matrix games and negotiation games. Another line of work uses self-reflection, which allows agents to reflect on previous mistakes and refine their actions (Yao et al., 2022b; Shinn et al., 2023), or uses LLMs to design reward functions for training RL agents (Kwon et al., 2023; Ma et al., 2023). Our work takes a different approach by utilizing LLMs to generate candidate actions and training an RL policy to optimize decision-making.

While many LLM-based agents have been built for single-agent or cooperative scenarios, there has been limited effort in developing agents with LLMs in multi-agent mixed cooperative-competitive environments like the Werewolf game. One representative work is Cicero (Meta et al., 2022), which combines LLMs with RL to achieve human-level play in the game of Diplomacy. The main difference between Cicero and our method is that Cicero uses the RL policy to choose from a predefined action set. By contrast, the actions in our method are natural language generated by LLMs during the game, and the RL policy is used to choose from these actions, which are not known in advance. Another closely related work is the concurrent study (Xu et al., 2023a) that also builds an LLM-based Werewolf agent. Their agent is purely based on LLMs and uses heuristic retrieval of key information and reflection on past experiences to enhance its ability, while our agent combines LLMs with an RL policy to further optimize performance. Some other work (Guo et al., 2023; Wang et al., 2023) also develops pure LLM-based agents for games like Leduc Hold'em and Avalon.

**Reinforcement learning in non-cooperative games.** Applying reinforcement learning (RL) to non-cooperative games has achieved great success in the game of Go (Silver et al., 2016, 2018), poker (Moravčík et al., 2017; Brown and Sandholm, 2018, 2019), and video games (Vinyals et al., 2019; Berner et al., 2019).
The most popular method that underlies these achievements is self-play and its variants (Heinrich et al., 2015; Heinrich and Silver, 2016; Hennes et al., 2020; Xu et al., 2023), which learn a policy by training against itself and past checkpoints. Population-based training methods like policy-space response oracles (PSRO) (Lanctot et al., 2017; Muller et al., 2019) and league training (Vinyals et al., 2019) generalize self-play by maintaining a pool of different policies and training against the population. Another notable line of work is based on regret minimization techniques such as counterfactual regret minimization (CFR) (Zinkevich et al., 2007; Lanctot et al., 2009; Brown et al., 2019). DeepRole (Serrino et al., 2019) integrates deductive reasoning into CFR to solve the hidden role game named Avalon. Werewolf and Avalon are alike in that they both feature hidden roles, but Werewolf depends more heavily on natural language communication. In fact, DeepRole plays Avalon without communication and still outperforms human players. By contrast, it is almost impossible to achieve strong play in the Werewolf game without communication.

## 3 The Werewolf game

**Setup.** We consider a seven-player version of the Werewolf game with two Werewolves, one Seer, one Doctor, and three Villagers. An example of this game is shown in Fig. 1 and the detailed rules can be found in Appendix B. At the beginning of the game, each player is randomly assigned a hidden role, which divides the players into the Werewolves and the Villagers. The two Werewolves know each other's identity and hence also know which players are the Villagers. Their goal is to kill every innocent player and avoid being discovered. The Villagers' side consists of three Villagers without special abilities and two special roles, the Seer and the Doctor. None of these players know the hidden roles of other players, and their goal is to identify and eliminate the secret Werewolves.

**Gameplay.** The game alternates between night and day rounds, starting with the night. In the night round, everyone closes their eyes to let the Werewolves and special roles take secret actions. The Werewolves pick one player to kill. The Seer chooses one player to check whether this player is a Werewolf. The Doctor chooses one player to save without knowing who is the target of the Werewolves. If the Doctor chooses the same player as the Werewolves, the player is successfully saved and no one is killed in the night round. Otherwise, the player is eliminated from the game. In the day round, an announcement is first made to every player about who was killed, or that no one was killed, last night. Then the remaining players take turns to speak in a discussion to express their opinions about who might be the Werewolves. Players can choose to claim or lie about their true identities, share or withhold information they have discovered, and accuse or defend other players to achieve their purposes. After all players have participated in a round of discussion, a vote is held to choose one suspicious player. Each player can vote for one player or not vote at all, and the player with the most votes is eliminated. The game then continues to the next night round until the Werewolves or the Villagers win the game.

**Winning.** The Villagers win the game if both Werewolves are eliminated. The Werewolves win the game if the number of Werewolves is equal to the number of Villagers left. The winning condition is checked after every night and day round.
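As a small illustration of these rules, the check below takes the hidden roles of the surviving players and returns the winner, if any; the role encoding is an assumption of this sketch.

```python
# Minimal sketch of the winning-condition check, run after every night
# and day round. roles_alive lists the hidden roles of surviving players,
# e.g., ["Werewolf", "Seer", "Villager", ...] (an assumed encoding).
def check_winner(roles_alive):
    n_wolves = sum(role == "Werewolf" for role in roles_alive)
    n_village = len(roles_alive) - n_wolves  # Seer and Doctor count as Villagers
    if n_wolves == 0:
        return "Villagers"
    if n_wolves == n_village:  # per the stated rule, equal numbers end the game
        return "Werewolves"
    return None                # game continues to the next round
```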
**Observations and actions.** We implement a pure text-based Werewolf game environment that does not consider external factors like the players' tone or facial expressions. The observation of each player is a text that records the current game history. This includes their ID and hidden role, their secret night actions (if any), the announcements, the discussions, and the voting results. For Werewolves, the IDs and secret actions of their teammates are also in the observation. The actions of players can be divided into three types. The first is the secret actions at night, including killing, seeing, and saving, each of which chooses a specific target. The second type is the statement actions during discussion, which are natural language utterances that convey the player's opinion and information. The last type is the voting actions, which can target any surviving player or choose not to vote.

Figure 1: An example of the Werewolf game with seven players. Players are randomly assigned a hidden role and are divided into the Werewolves and the Villagers. The game alternates between night and day rounds until the Werewolves or the Villagers achieve the winning condition.

## 4 Strategic language agents

The main challenges of the Werewolf game come from the adversarial opponents and the ubiquitous deceptions in their claims during discussions. Players are required to deduce the hidden roles of other players from unreliable information and adopt a diverse range of actions to avoid exploitation. To achieve strong play in the game, we propose to build LLM-based agents with strategic thinking abilities using reinforcement learning (RL), which we call _strategic language agents_. Our agent uses an LLM to first distinguish credible information from potential deceptions and apply deductive reasoning to analyze the hidden roles of other players. Then the categorized information and reasoning results are used to prompt the LLM for strategically diverse action candidates. To optimize the final decision-making, an RL policy that selects the output from the candidates is learned by population-based training against itself and an opponent pool. An overview is shown in Fig. 2.

Figure 2: Overview of our agent. (1) Deductive reasoning: classify key information and apply deductive reasoning with the LLM. (2) Diverse action generation: prompt the LLM for a set of strategically diverse action candidates. (3) Population-based RL training: learn an RL policy by playing against itself, its past versions, and an agent pool.

### Deductive reasoning

The performance of Werewolf agents is largely determined by their judgment of other players' identities. However, when faced with a large amount of mixed truth and deception, the agents can easily be misled by manipulative claims or overwhelmed by unimportant details. Consider the situation when player_3 is the Seer and they saw that player_0 is a Werewolf in the first night round. This is the most important and credible information, which should have a decisive impact on their reasoning and decision-making. Nevertheless, this information is surrounded by an abundance of other information, like who was killed last night and the discussions of other players, making it hard for this crucial information to be discovered. This problem becomes more pronounced in the later stages of the game as agents acquire more and more information.

To address this issue, we maintain an organized information record as well as a deductive reasoning result. The information record keeps key information and distinguishes truthful and deceptive statements, while the deduction result deduces the hidden role of each player and rates their reliability with an LLM. The information record is initialized by itemizing the current observation into a list of atomic information like ["you are player_3, your role is the Seer", "you saw player_0 is a Werewolf", "player_0 says...", ...].
These atomic pieces of information are further classified into three types: facts, potential truths, and potential deceptions. All available information except for the players' statements is included in facts, which cover established facts like the current player's role, their secret actions, the announcements, and the voting results. The statements are classified into potential truths or potential deceptions according to the players' reliability in the deduction result. In our implementation, each player's reliability is rated on a scale from 1 to 10, and the statements of players with reliability larger than 6 are regarded as potential truths; otherwise, they are potential deceptions.

With the organized information record, we then prompt the LLM to deduce the hidden roles of others. For each player, the LLM is asked to generate four attributes: reasoning, role, confidence, and evidence. The reasoning attribute is an auxiliary one that explicitly shows the deduction process of the LLM, which has been widely used to improve performance in a variety of applications such as knowledge-intensive tasks (Yao et al., 2022b) and decision-making tasks (Shinn et al., 2023). The role attribute corresponds to the most likely hidden role of the specific player, and the confidence attribute is an integer ranging from 5 to 10 that rates the certainty of the current deduction, where 5 means a random guess and 10 means absolutely sure. These two attributes are then used to determine the reliability of the player. If the deduced role is Werewolf, the reliability is calculated as \(11-\text{confidence}\); otherwise, the reliability is equal to the confidence. Note that the reliability of a player deduced as a Werewolf cannot be larger than \(11-5=6\), which means that the statements of this player are always classified as potential deceptions. The last attribute, evidence, is a list of integers citing items from the information record that support the current deduction. This is used to identify the key information that contributes to the deduction of hidden roles. If a statement is never cited as evidence for deductions, it is regarded as an uninformative item and removed from the information record.

During gameplay, the information record and the deduction result are updated alternately, each using the other. When new information arrives and the agent needs to make a decision, the information record is first updated by itemizing and categorizing the new information according to the previous deduction result. Then the updated information record is used as the input to prompt the LLM for a new deduction result. This result loops back to revise the record by removing uncited items and reclassifying potential truths and deceptions. The combination of the information record and the deduction result transforms the raw observation into structured and informative data, which serves as the foundation for subsequent decision-making.
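The reliability bookkeeping described above can be summarized by the following sketch; the data layout of the deduction result is an assumption made for illustration.

```python
# Minimal sketch of the reliability rule and statement classification.
# deduction maps player_id -> (role, confidence) with confidence in [5, 10],
# as produced by the LLM; the dictionary layout is an illustrative assumption.
def reliability(role, confidence):
    return 11 - confidence if role == "Werewolf" else confidence

def classify_statements(statements, deduction):
    truths, deceptions = {}, {}
    for pid, text in statements.items():
        role, conf = deduction[pid]
        bucket = truths if reliability(role, conf) > 6 else deceptions
        bucket[pid] = text
    return truths, deceptions
```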
### Diverse action generation

Utilizing a range of strategically diverse actions is a crucial ability in the Werewolf game due to its zero-sum property. Players usually have many possible actions to take in the same situation, but no single action leads to the optimal outcome, because a fixed action can be easily discerned and exploited by adversarial players. Suppose the agent is a Werewolf and the remaining players are the agent, the Seer, and the Doctor. All players know each other's identity, and the agent's only chance to win is to successfully kill a player in the coming night. If the agent always takes the same action to kill the Seer or the Doctor, an observant Doctor can remember the deterministic pattern and save the target. To achieve optimal results, the agent's best strategy is to randomly choose a player to kill. This makes the agent less exploitable, and no Doctor can achieve a win rate higher than 50%.

However, given the same situation, directly using LLMs often leads to a clear preference for a specific action. In the aforementioned Seer-or-Doctor example, we independently prompt gpt-3.5-turbo to choose a target 10 times and find that it chooses the Doctor 9 times and the Seer only once. This shows a clear bias toward the Doctor, which is inherited inevitably from the model's training data. This lack of strategic diversity is also observed in other scenarios of the game, where LLMs tend to produce conservative actions, like the Werewolf trying to stay unnoticed and the Seer hesitating to share information, which can be leveraged by adversarial opponents to gain a significant advantage.

To enhance diversity and reduce exploitation, our agent prompts LLMs to produce a set of action candidates instead of a single action. We use the concatenation of the information record and the deduction result as input and consider two ways to generate \(N\) action candidates with strategic diversity. The first method is to produce all candidates in a single round by prompting LLMs to "propose \(N\) diverse actions that correspond to different strategies". This takes just one inference and works well for simpler actions like the secret actions and the voting actions. For more complex actions like the statement actions in the discussion, we consider a second way that iteratively generates one action for \(N\) rounds by prompting LLMs to "consider a new action that is strategically different from existing ones". By having more interactions with LLMs, the second method is empirically found to produce more diverse actions of higher quality for the statement actions. In our implementation, we use the second way to prompt statement actions for quality and the first way to prompt the secret actions and vote actions for efficiency. We also ask LLMs to output reasonings along with actions for better performance (Yao et al., 2022b). The detailed prompts can be found in Appendix C.
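A minimal sketch of the iterative generation loop for statement actions is given below; the `llm` callable and the prompt wording are illustrative assumptions rather than the exact prompts from Appendix C.

```python
# Minimal sketch of iterative candidate generation for statement actions.
# `llm` is an assumed text-completion callable; the prompt wording is
# illustrative, not the exact prompt used by the agent.
def generate_candidates(llm, record_and_deduction, n=5):
    candidates = []
    for _ in range(n):
        prompt = (
            record_and_deduction
            + "\nExisting actions:\n" + "\n".join(candidates)
            + "\nConsider a new action that is strategically different "
              "from existing ones. Output your reasoning and the action."
        )
        candidates.append(llm(prompt))
    return candidates
```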
### Population-based RL training With the diverse set of candidates at hand, the agents can choose from a variety of different actions to take. Although random sampling already leads to unpredictable play, the optimal policy in most cases is a non-uniform distribution over the candidates and depends on the game state. To optimize decision-making and achieve strategic play, we use reinforcement learning to train a policy that selects the final action from the candidates. The main difference between our setting and classic RL environments is that our action space is a discrete set of natural-language actions generated by LLMs. Because this action space is not predefined, we cannot use typical policy networks that only take the state as input and produce a distribution over a fixed action set. Instead, we first convert the game state and all candidate actions from natural language to vector embeddings using LLMs. Then we adopt a self-attention (Vaswani et al., 2017) network that takes all embeddings as input to produce a distribution over the action candidates. More specifically, the game state is the concatenation of the information record and deduction result described in Section 4.1, and an action candidate is the concatenation of the reasoning and action generated by LLMs as described in Section 4.2. These natural-language inputs are converted into vector embeddings by the LLM. We also use a vector that contains player information like ID, role, etc., and pass the vector through an MLP encoder to produce a player embedding. The player embedding and language embeddings are passed through a residual self-attention block without position embeddings, and the probability of sampling an action candidate is calculated as the normalized dot-product attention between the output state embedding and the output action embedding (a sketch of this network is given at the end of this section). To learn in this mixed cooperative-competitive game, we draw inspiration from fictitious play and follow-up work on MARL (Heinrich et al., 2015; Hennes et al., 2020) to train the policy by playing against itself and its past checkpoints. Moreover, real-world games are usually non-transitive (Czarnecki et al., 2020) and the learned policies may cycle like Rock-Paper-Scissors. The agents can benefit from playing with a variety of teammates and opponents with different styles to achieve a higher level of play. To this end, we generate a pool of fixed LLM-based agents with diverse styles to serve as teammates and opponents. These manually designed agents also apply the information organizing and deductive reasoning step in Section 4.1 but only generate one final action according to their predefined personalities. We design three common Werewolf styles: a quiet follower that lays low and follows others' opinions to avoid drawing attention to themselves, an active contributor that pretends to be one of the Villagers by actively engaging in discussion and looking for Werewolves, and an aggressive accuser that accuses others to create chaos and divert suspicion from themselves. We also set three styles for the Villagers: a secretive player that hides their role to gather more information, a proactive player that reveals their identity once they obtain crucial information, and a default player that uses the regular LLM output without setting the style. The detailed prompts can be found in Appendix C. These fixed agents constitute a population of potential teammates and opponents. At the beginning of each game, four players are set to be the learning agent and the remaining three players are randomly sampled from the population and the past checkpoints of the learning policy. This population-based training makes our agent more robust to different types of teammates and opponents. A desirable feature of the learned RL policy in our approach is that it is decoupled from the LLMs used in the previous steps for deductive reasoning and diversity prompting. This makes it possible to combine the learned RL policy with any other LLMs to improve their decision-making ability for the Werewolf game in a zero-shot manner. 
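The following PyTorch sketch illustrates the action-selection network described above. The embedding dimension, number of heads, and encoder depth are illustrative assumptions; only the overall structure (a player embedding from an MLP, a residual self-attention block without position embeddings, and dot-product scoring between the output state and action embeddings) follows the description.

```python
# Minimal sketch of a variable-size action policy (assumed hyperparameters).
import torch
import torch.nn as nn

class CandidatePolicy(nn.Module):
    def __init__(self, dim: int = 256, player_feat: int = 16, heads: int = 4):
        super().__init__()
        self.player_enc = nn.Sequential(nn.Linear(player_feat, dim), nn.ReLU(),
                                        nn.Linear(dim, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, player_vec, state_emb, action_embs):
        # player_vec: (player_feat,), state_emb: (dim,), action_embs: (N, dim)
        tokens = torch.cat([self.player_enc(player_vec)[None],
                            state_emb[None], action_embs], dim=0)[None]
        out, _ = self.attn(tokens, tokens, tokens)  # no position embeddings
        out = self.norm(out + tokens).squeeze(0)    # residual block
        state_out, action_out = out[1], out[2:]
        logits = action_out @ state_out             # dot-product scores
        return torch.softmax(logits, dim=0)         # distribution over N candidates

policy = CandidatePolicy()
probs = policy(torch.randn(16), torch.randn(256), torch.randn(5, 256))
```

Because the candidates enter only through their embeddings, the same network handles any number of actions per decision, which is what makes the learned policy reusable with unseen LLMs.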
## 5 Experiments Strategic language agents aim to achieve strong and strategic play in the Werewolf game. To comprehensively evaluate the ability of our agent, we conduct experiments from four different aspects. We first assess the performance of our agent by comparing it with other LLM-based Werewolf agents in a round-robin tournament, where our agent achieves the highest win rates against all agents. Then we let humans play against our agent to evaluate its robustness, and also against ablated versions of our agent to examine the effectiveness of our design. In addition, we show the zero-shot transfer ability of the learned action-selection policy by combining it with unseen LLMs and observing improved performance compared to using the LLMs directly. Finally, we exhibit and analyze the emergent human-like behaviors generated by our agent. Unless otherwise stated, the LLM used by all agents in our experiment is gpt-3.5-turbo. More experiment details can be found in Appendix E. ### Round-robin tournament To evaluate the performance of our agent in this two-team zero-sum game, we compare it with three other language agents: a vanilla LLM-based agent (vanilla), the LLM-based agent developed by concurrent work (Xu et al., 2023a) (concurrent), and an RL agent trained on predefined atomic actions (atomic). The vanilla LLM-based agent directly prompts the LLM with natural language observations to produce reasoning and actions. The concurrent agent built by Xu et al. (2023a) takes a step forward by heuristically retrieving key information and reflecting on past experiences to improve the agent's ability. The atomic agent predefines a set of high-level atomic actions and trains an RL policy with this fixed action space. The RL policy takes the embeddings of the information record and deduction result as input and selects the atomic action based on the game history. Then the natural language actions used in gameplay are generated by prompting the LLM to follow the selected atomic actions. In our case, the atomic action set consists of 13 actions: idle, target player_{0,..., 6}, claim to be the {Werewolf, Seer, Doctor, Villager}, and do not reveal role. We perform a round-robin tournament between these four agents, which runs evaluations between all 16 ordered pairs of agents. For each pair of agents, the Werewolf game is played 100 times with the first agent being the Villagers (including the Seer and the Doctor) and the second agent being the Werewolves. This leads to a \(4\times 4\) cross-play matrix that records the Villagers' win rate, as shown in Fig. 3. A row in the matrix corresponds to the agent's performance as the Villagers against different opponents, and a row with higher values means stronger performance. As shown by the bold numbers in the last row, our agent achieves the highest win rates against all agents when playing as the Villagers. Similarly, a column with smaller values represents lower loss rates for the Werewolves, and the rightmost column with underlined numbers shows our agent also achieves the best performance when playing as the Werewolves. Figure 3: Win rate matrix. One thing worth noting is that, although both our agent and the atomic agent combine RL with LLMs, our agent achieves much better performance. This is because the predefined atomic actions can be too general and fail to generate more fine-grained actions. Consider the situation where the agent is a Werewolf and their teammate is accused. The agent can choose to defend the teammate, avoid discussing the accusation, or support the accusation, but none of these actions can be reliably elicited by prompting LLMs with any of the predefined atomic actions. 
By contrast, our agent produces the action set during gameplay and can generate diverse actions at any granularity, which greatly improves the performance. We provide a detailed example of the sacrificing behavior generated by our agent in the emergent behavior section. ### Human evaluation Playing against human players is a strong test for robustness. By playing with the same agent for multiple games, humans can gradually learn the agent's behavior pattern from past experiences and adaptively change their strategy to exploit the agent. We evaluate the robustness of our agent against adversarial opponents by playing with human players. We compare our agent with three ablated versions of itself that add key components to the vanilla agent one by one. More specifically, the first version (Vanilla) removes all three components, namely deductive reasoning, diverse action generation, and RL training. The second version (+ded) only uses the deductive reasoning component and generates a single action to play. The third version (+{ded, div}) further uses the diverse action generation component and uses the LLM instead of the RL policy to select from the action candidates. We show by human evaluation that our agent is more robust to adversarial opponents than all ablated versions, and each component in our design contributes to its robustness. In our experiment, we recruited 80 human players and randomly divided them into 4 groups of 20 people to play against different agents. Half of the human players in each group are assigned to be the Villagers (including the Seer and the Doctor) and the other half are assigned to be the Werewolves. Each human player is paired with 6 agents of the same type to play 10 consecutive matches. Human players know they are playing with AI agents, and the hidden roles and secret actions of all agents are revealed after each match so that human players can use this information to understand the agents' behavior patterns and change their strategies accordingly for better performance in upcoming matches. This leads to 200 games played between humans and each type of agent, and Table 1 shows the averaged win rates of human players against different agents. As shown by the bold numbers in the table, our agent with all three steps is least exploited by human players both as the Villagers and the Werewolves. Moreover, the mean win rates of human players increase almost monotonically as more steps are removed, which indicates that all three components help make our agent more robust. We further compare our agent with average humans in this single-human setting by running the same experiments but replacing human players with our agent to report its mean win rates. As shown by the underlined numbers in Table 1, our agent consistently obtains higher win rates than humans against all four agents, which indicates that our agent achieves stronger performance than average human players in the single-human evaluation. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multicolumn{2}{c}{Row Player Win Rate} & Vanilla & +ded & +{ded, div} & **Ours** \\ \hline \multirow{2}{*}{The Villagers} & Humans & 0.42 & 0.35 & 0.36 & **0.27** \\ & Ours & 0.46 & 0.41 & 0.39 & **0.30** \\ \hline \multirow{2}{*}{The Werewolves} & Humans & 0.80 & 0.75 & 0.71 & **0.63** \\ & Ours & 0.84 & 0.78 & 0.77 & **0.70** \\ \hline \hline \end{tabular} \end{table} Table 1: Win rates of human players and our agent against different agents. Bold numbers show that our agent is more robust than all ablated versions. Underlined numbers show that our agent achieves higher win rates than average humans in single-human evaluation. 
\begin{table} \begin{tabular}{c c c c} \hline \hline Win Rate & GPT-4 & LLaMA-7B & ChatGLM-6B \\ \hline w.o. policy & 0.25 & 0.13 & 0.12 \\ **w. policy** & **0.36** & **0.19** & **0.20** \\ \hline \hline \end{tabular} \end{table} Table 2: Win rates of agents with and without the RL policy learned with gpt-3.5-turbo. Bold numbers show that our RL policy improves the performance of agents built with unseen LLMs. The current single-human setting is a starting point to evaluate the robustness of our agent. A more comprehensive way is to play with multiple humans in one game and evaluate the performance of our agent. We discuss the limitations of single-human evaluation and future work on multi-human evaluation in Appendix H.4. ### Zero-shot transfer Since our RL policy takes natural language states and actions as input and is decoupled from the LLM used in previous steps, it can be directly combined with any other LLMs and improve the performance of the LLM-based agent. We evaluate this zero-shot transfer ability of our RL policy trained with gpt-3.5-turbo by applying it to unseen LLMs including GPT-4, LLaMA-7B, and ChatGLM-6B. We implemented two agents for each LLM, one using our RL policy learned with gpt-3.5-turbo and the other without the policy. The agent with our RL policy (w. policy) follows the design of our agent and uses the RL policy to select actions, while the agent without the policy (w.o. policy) uses the LLM instead of the RL policy to select actions. These two agents are evaluated by playing against our agents for 100 games and their average win rates are shown in Table 2. Although not trained with any of these LLMs, the RL policy is shown to improve the performance of all these LLMs, from stronger models like GPT-4 to weaker models like LLaMA-7B, as shown by the bold numbers in the table. This is because we use natural language as a general interface between LLMs and the RL policy. As long as the LLMs can produce a set of language actions, the RL policy can be used to improve the strategic ability in a zero-shot way. ### RL-induced emergent behaviors RL training makes the agents stronger and less exploitable against adversarial opponents. To intuitively show the benefit of RL training, we compare our agents' action distributions with and without the RL policy and analyze their behaviors in three situations to show the differences. Please see Appendix F and Appendix G for more discussions and examples of emergent behaviors. **Werewolf first night action.** The Werewolves need to choose a player to kill on the first night without any information. Their optimal policy is to randomly select a player other than themselves. However, as shown in Fig. 4(a), the agents without the RL policy have a high probability of 0.38 to kill player_0 and a very low probability of 0.01 to kill player_5. This pattern can be exploited by an adversarial Doctor that always saves player_0 and achieves a success rate of 0.38. By contrast, our agent with the RL policy produces an almost uniform distribution, and no Doctor can achieve a success rate higher than 0.17 (the uniform rate of \(1/6\approx 0.17\) over the six possible targets), less than half the exploitability of the agents without the RL policy. **Doctor first night action.** The Doctor also needs to choose a player to save without any information on the first night. 
Randomly saving a player could waste the action on a Werewolf, and the optimal action for the Doctor is to save themselves. As shown in Fig. 4(b), the agents without the RL policy choose to save themselves with a probability of 0.62, while the agents with the RL policy almost always save themselves, with a probability of 0.94, which is close to the optimal policy. Figure 4: Comparison of LLM-based agents’ action distributions with and without RL policy. **Villager voting action with two self-proclaimed Seers.** Consider the case where two players claim to be the Seer and a Villager should choose their action in the voting phase. Because there is only one Seer in the game, one of the two self-proclaimed Seers must be a Werewolf, and the Villager should identify the fake Seer and vote them out. Not voting for anyone is a bad action because it makes it easier for the Werewolves to control the voting result and eliminate the real Seer. Unfortunately, the agents without the RL policy are likely to choose not to vote, with a probability of 0.69, as shown in Fig. 4(c). This is because the actions chosen by the LLM are conservative when the agents are not sure who is the real Seer. In comparison, the agents with the RL policy have a low probability of choosing not to vote and learn to vote out the Werewolf who is pretending to be the Seer. ## 6 Conclusion We propose a framework that combines LLMs and RL to build strategic language agents that achieve strong and diverse gameplay in the Werewolf game. Our agent extracts important and reliable information by using LLMs to distinguish potential deceptions and analyze the hidden roles of other players. To reduce exploitation by adversarial opponents, our agent uses LLMs to generate a set of strategically diverse actions and learns an RL policy by population-based training for optimal decision-making. In evaluations against other LLM-based agents and human players, our agent produces a range of emergent behaviors, achieves the highest win rates, and stays robust against adversarial human players in the Werewolf game.
2303.07958
Atomic relaxation and electronic structure in twisted bilayer MoS2 with rotation angle of 5.09 degrees
It is now well established theoretically and experimentally that a moiré pattern, due to a rotation of two atomic layers with respect to each other, creates low-energy flat bands. First discovered in twisted bilayer graphene, these new electronic states are at the origin of strong electronic correlations and even of unconventional superconductivity. Twisted bilayers (tb) of transition metal dichalcogenides (TMDs) also exhibit flat bands around their semiconductor gap at small rotation angles. In this paper, we present a DFT study to analyze the effect of the atomic relaxation on the low-energy bands of tb-MoS2 with a rotation angle of 5.09 degrees. We show that in-plane atomic relaxation is not essential here, while out-of-plane relaxation dominates the electronic structure. We propose a simple and efficient atomic model to predict this relaxation.
Somepalli Venkateswarlu, Ahmed Missaoui, Andreas Honecker, Guy Trambly de Laissardière
2023-03-14T14:59:47Z
http://arxiv.org/abs/2303.07958v2
Atomic relaxation and electronic structure in twisted bilayer MoS\({}_{2}\) with rotation angle of 5.09 degrees ###### Abstract It is now well established theoretically and experimentally that a moiré pattern, due to a rotation of two atomic layers with respect to each other, creates low-energy flat bands. First discovered in twisted bilayer graphene, these new electronic states are at the origin of strong electronic correlations and even of unconventional superconductivity. Twisted bilayers (tb) of transition metal dichalcogenides (TMDs) also exhibit flat bands around their semiconductor gap at small rotation angles. In this paper, we present a DFT study to analyze the effect of the atomic relaxation on the low-energy bands of tb-MoS\({}_{2}\) with a rotation angle of 5.09\({}^{\circ}\). We show that in-plane atomic relaxation is not essential here, while out-of-plane relaxation dominates the electronic structure. We propose a simple and efficient atomic model to predict this relaxation. ## 1 Introduction The broad family of transition metal dichalcogenides (TMDs) [1; 2; 3] offers the possibility to stack two layers with a small angle of rotation \(\theta\) to each other, thus forming a moiré pattern superstructure. These twisted bilayers have given rise to numerous experimental [4; 5; 6; 7; 8; 9; 10; 11; 12; 13] and theoretical [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35] studies to understand electronic states that are confined by a moiré pattern in semiconductor materials. Many of these studies analyze the interlayer distances, the possible atomic relaxation, the transition from a direct band gap in the monolayer system to an indirect band gap in bilayer systems, and more generally the effect of interlayer coupling in these twisted bilayer 2D systems at various rotation angles \(\theta\). For small values of \(\theta\), the emergence of flat bands has been established from first-principles density functional theory (DFT) calculations [21; 31] and tight-binding (TB) calculations [28; 29; 30] in twisted bilayer MoS\({}_{2}\) (tb-MoS\({}_{2}\)), and observed in a 3\({}^{\circ}\) twisted bilayer WSe\({}_{2}\) sample by using scanning tunneling spectroscopy [13]. It has also been shown numerically [26] that lithium intercalation in tb-MoS\({}_{2}\) increases interlayer coupling and thus promotes flat bands around the gap. There is also experimental evidence that moiré patterns may give rise to confined states due to the mismatch of the lattice parameters in MoS\({}_{2}\)-WSe\({}_{2}\) heterobilayers [12]. Most theoretical investigations of the electronic structure of bilayer MoS\({}_{2}\) are density-functional theory (DFT) studies [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 36; 37; 38; 39], in some cases combined with a Wannier wave-function analysis [15]. To provide a systematic analysis as a function of the rotation angle \(\theta\), in particular for small angles, _i.e._, very large moiré pattern cells for which DFT calculations are not feasible, several TB models, based on Slater-Koster (SK) parameters [40], have been proposed for monolayer MoS\({}_{2}\)[41; 42; 43; 44; 45] and multi-layer MoS\({}_{2}\)[41; 15; 29; 30; 41; 43]. Following these efforts, we have proposed [28] a SK-TB set of parameters for non-relaxed structures, _i.e._, rigidly twisted bilayers, that correctly matches the DFT bands around the gap of tb-MoS\({}_{2}\) with rotation angles \(\theta>7^{\circ}\). 
This SK-TB model, with the same parameters, is then used for smaller angles in order to describe the states confined by the moiré pattern. For \(\theta\lesssim 5^{\circ}\), the valence band with the highest energy is separated from the other valence states by a minigap of a few meV. In addition, the width of this band decreases as \(\theta\) decreases, so that the average velocity of these electronic states reaches 0 for \(\theta\lesssim 2^{\circ}\), such that almost flat bands emerge at these angles. This is reminiscent of the vanishing of the velocity at certain "magic" rotation angles in twisted bilayer graphene [46; 47; 48]. However, in bilayer MoS\({}_{2}\) it arises for an interval of angles rather than a set of specific values. Other minigaps and flat bands are also found in the conduction band. The confined states that are closest to the gap are localized in the AA stacking regions of the moiré pattern, as in twisted bilayer graphene. However, for small angles, it has been shown [21; 31; 33] that atomic relaxation strongly modifies these low-energy bands, in particular their approximate degeneracy. A better understanding of atomic relaxation and its effects on the electronic structure is therefore essential not only for very small angles but also for larger ones (\(\theta\simeq 5^{\circ}\)). In this paper, we present a DFT study of tb-MoS\({}_{2}\) with a rotation angle of \(\theta=5.09^{\circ}\). This angle is approximately the angle below which the highest-energy valence band (just below the gap) is isolated from the rest of the valence states by a minigap for non-relaxed structures. We show that, unlike the case of very small angles [21; 31; 33], atomic relaxation amounts to essentially out-of-plane atomic displacements along the \(z\)-direction that can be simulated simply as a function of the atomic positions in the moiré cell. This simple atomic model allows us to understand the origin of the modifications of the electronic structure induced by the relaxation. ## 2 Structure and DFT calculations The construction of the commensurate twisted bilayer we study is explained in detail in Refs. [28; 49]. It corresponds to \((n,m)=(6,7)\) with 762 atoms per unit cell. Starting from an AA stacked bilayer (where Mo atoms of a layer lie above a Mo atom of the other layer, and S atoms of a layer lie above an S atom of the other layer), one layer (layer "+") is rotated with respect to the other layer (layer "\(-\)") by the angle \(\theta=5.09^{\circ}\) around an axis containing two Mo atoms. Thus, AA stacking regions are located at the corners of the moiré cell (Fig. 1(bottom)). BA' stacking regions (where S\(-\) atoms lie above a Mo\(+\), and Mo\(-\) (S\(+\)) do not lie above any atom of layer \(+\) (layer \(-\))) and AB' stacking regions (regions where Mo\(-\) atoms lie above an S\(+\), and S\(-\) (Mo\(+\)) do not lie above any atom of layer \(+\) (layer \(-\))) are located at 1/3 and 2/3 of the long diagonal of the moiré cell, respectively. 
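As a quick consistency check on this construction, the standard commensuration relations for twisted hexagonal bilayers reproduce both the quoted rotation angle and the number of atoms per moiré cell. The short script below is illustrative and assumes the usual \((n,m)\) formulas, with three atoms (one Mo and two S) per monolayer unit cell.

```python
# Check of the (n, m) = (6, 7) commensurate cell (standard formulas assumed).
import numpy as np

def twist_angle_deg(n: int, m: int) -> float:
    # cos(theta) = (n^2 + 4 n m + m^2) / (2 (n^2 + n m + m^2))
    return np.degrees(np.arccos((n**2 + 4*n*m + m**2) /
                                (2.0 * (n**2 + n*m + m**2))))

def atoms_per_moire_cell(n: int, m: int, atoms_per_layer_cell: int = 3) -> int:
    # Each layer contributes (n^2 + n m + m^2) MoS2 units of 3 atoms each.
    return 2 * atoms_per_layer_cell * (n**2 + n*m + m**2)

print(round(twist_angle_deg(6, 7), 2))   # 5.09 degrees
print(atoms_per_moire_cell(6, 7))        # 762
```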
The DFT calculations were carried out with the ABINIT software [50; 51; 52]. We have checked previously [28; 49] that LDA [53] and GGA [54] + Van der Waals exchange-correlation functionals yield very similar results, so all the results presented here are based on LDA calculations, which require less computation time for large systems. The Brillouin zone was sampled by a k-point mesh in reciprocal space within the Monkhorst-Pack scheme [55]. One k-point is used for atomic relaxation and a 2\(\times\)2 k-grid for the self-consistency procedure of the electronic structure calculation. The kinetic energy cutoff was chosen to be 408 eV. We checked that a 3\(\times\)3 k-grid for the self-consistency procedure and an energy cutoff of 544.2 eV yield very similar bands for the relaxed structure. The structural optimization of atomic positions is done by using the Broyden-Fletcher-Goldfarb-Shanno minimization. A vacuum region of 10 Å was inserted between the MoS\({}_{2}\) bilayers to avoid spurious interactions between periodic images. In our calculations, the spin-orbit coupling (SOC) is not taken into account in order to reduce the calculation time. SOC is important for TMDs, as it introduces some band splittings close to the gap [21; 29; 44; 45; 35]; however, it does not affect the existence of low-energy flat bands in tb-MoS\({}_{2}\), so it is not essential for the present study. ## 3 Atomic relaxation The DFT-relaxed atomic structure of tb-MoS\({}_{2}\) with rotation angle \(\theta=5.09^{\circ}\) is shown in Fig. 1. Remarkably, the in-plane displacement \(\vec{\tau}_{i}\) of each atom \(i\) with respect to the rigidly twisted structure is rather small. Indeed, the average \(\|\vec{\tau}_{i}\|\) is \(0.04\pm 0.02\) Å, \(0.03\pm 0.02\) Å, and \(0.05\pm 0.02\) Å, for atoms S\({}_{\pm\mathrm{ext}}\), Mo\({}_{\pm}\), and S\({}_{\pm\mathrm{int}}\), respectively. Such displacements are barely visible in Fig. 1(bottom), and they have little effect on the electronic structure (see next section). This result shows that the strong in-plane displacements obtained for the smallest angles [21; 31; 33] do not yet play a significant role at \(\theta\simeq 5^{\circ}\). However, it is interesting to note that these small displacements are precursors of the larger displacements and shear solitons obtained for the smallest angles [21], as shown in Fig. 2. These in-plane displacements tend to reduce the AA stacking regions with respect to the AB stacking regions to minimize the energy [21; 31; 33]. On the other hand, the displacements along the \(z\)-direction are important. For each atomic layer, the mean values of the \(z\)-coordinate and the corresponding standard deviation are given in Table 1. As expected given the interplane distances in the simple stacking cases (see Ref. [38] and Table 1), the distance between layers is greater in the AA stacking regions than in AB' (BA') stacking regions. Figure 1: Sketch of DFT-relaxed tb-MoS\({}_{2}\) with rotation angle \(\theta=5.09^{\circ}\). All atoms in a moiré cell are represented: Red circles are the DFT-calculated positions and green crosses are the positions calculated with the \(z\)-modulation only (Eq. (1)). (top) Perspective side view of a moiré cell (layer “+” (“\(-\)”) is located in the \(z>0\) (\(z<0\)) region), and (bottom) the corresponding top view. Inspired by the work of Koshino _et al._[56] for twisted bilayer graphene, we propose the following atomic model ("\(z\)-mod" model), where the \(xy\) in-plane atomic coordinates are those of the rigidly twisted bilayer and the modulation of the atomic \(z\)-coordinates is calculated by \[z(\vec{r})=\frac{(z_{\rm AA}+2z_{\rm AB})}{3}+\frac{(z_{\rm AA}-z_{\rm AB})}{9} \sum_{i=1}^{6}\cos\left(\vec{G}_{i}\cdot\vec{r}\right), \tag{1}\] where \(\vec{r}\) are the non-relaxed positions of the atoms in the rigidly twisted moiré cell, and \(\vec{G}_{i}\) are the six vectors of the reciprocal lattice that define the first Brillouin zone of the moiré pattern. For each atomic layer, _i.e._, for the six atomic layers of tb-MoS\({}_{2}\), \(z_{\rm AA}\) and \(z_{\rm AB}\) are the \(z\) values of the DFT-relaxed structure at \(\vec{r}=\vec{0}\) (AA stacking region) and at \(\vec{r}\) in the AB' or BA' stacking regions (Table 1). Figure 1 shows that the DFT-relaxed positions and the \(z\)-mod positions fit well together. 
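A minimal numerical sketch of Eq. (1) is given below. It assumes moiré lattice vectors at 60 degrees, estimates the moiré period from a monolayer lattice constant taken here as roughly 3.16 Å (an assumption for illustration), and uses the S\({}_{\pm\rm ext}\) values of Table 1; the six \(\vec{G}_{i}\) are built as the first star of the moiré reciprocal lattice.

```python
# Illustrative implementation of the z-modulation model of Eq. (1).
import numpy as np

def first_star(A1, A2):
    # Reciprocal vectors b_i with b_i . A_j = 2 pi delta_ij, then the six
    # shortest moire reciprocal vectors (for lattice vectors at 60 degrees).
    B = 2 * np.pi * np.linalg.inv(np.array([A1, A2]).T)
    b1, b2 = B[0], B[1]
    return [b1, -b1, b2, -b2, b1 + b2, -(b1 + b2)]

def z_mod(r, z_aa, z_ab, Gs):
    # Eq. (1): smooth interpolation between the AA and AB'/BA' heights.
    return ((z_aa + 2 * z_ab) / 3.0
            + (z_aa - z_ab) / 9.0 * sum(np.cos(G @ r) for G in Gs))

a_m = 3.16 / (2 * np.sin(np.radians(5.09 / 2)))  # moire period estimate (in Å)
A1 = a_m * np.array([1.0, 0.0])
A2 = a_m * np.array([0.5, np.sqrt(3) / 2])
Gs = first_star(A1, A2)
print(z_mod(np.zeros(2), z_aa=5.08, z_ab=4.72, Gs=Gs))  # 5.08 at AA (r = 0)
```

By construction, \(z(\vec{0})=z_{\rm AA}\) and the spatial average of \(z\) is \((z_{\rm AA}+2z_{\rm AB})/3\), consistent with the \(\bar{z}\) values of the \(z\)-mod structure in Table 1.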
## 4 Electronic band dispersion The DFT band dispersions are shown for the DFT-relaxed bilayer and the rigidly twisted bilayer (non-relaxed bilayer) in Fig. 3(a) and 3(b), respectively. For the non-relaxed bilayer, the interlayer distance is that obtained for simple AA stacking, as in our previous calculations [28]. In the non-relaxed tb-MoS\({}_{2}\), the minimum of the conduction band is at K, as for monolayer MoS\({}_{2}\), which is no longer the case after atomic relaxation. Fig. 3(c) shows a zoom of the highest-energy valence bands. The relaxation does not change the valence band maximum energy at K. However, it leads to significant modifications of the bands close to the gap. For the non-relaxed structure [28], when \(\theta\) is smaller than the critical angle \(\theta_{c}\) discussed above, the highest-energy band is non-degenerate and isolated from the other valence bands by a minigap. For \(\theta=5.09^{\circ}\), this minigap is equal to \(\sim 0.5\) meV (Fig. 3(c)). For the relaxed structure there is no isolated single valence band, but two bands that cross at K while remaining linear in \(k\) (around K). This result is similar to that of Naik _et al._[21; 31] and Vitale _et al._[33], obtained by using a multi-scale approach, a pair potential for relaxation, and DFT or TB calculations for the band dispersion. There are nevertheless small differences. For instance, in our calculation, the bandwidth of these first two valence bands is 84 meV, compared to \(\sim 120\) meV in Ref. [31]. Unlike that previous calculation, these two bands are isolated from the rest of the valence bands by a minigap of \(\sim 20\) meV. To test the validity of the structure with the \(z\)-modulation only (Eq. (1)), the bands around the gap of the DFT-relaxed and \(z\)-mod structures are compared in Fig. 4. The gaps of the two structures are slightly different (difference of 36 meV), which may be due to the difference between the average \(z\) values per atomic layer (Table 1). The conduction bands are nevertheless almost the same and the valence bands are very similar. In particular, the two valence bands closest to the gap that are characteristic of the atomic relaxation effect are well reproduced. Therefore, the simplified relaxation model given by formula (1) is sufficient to account for the low-energy flat bands due to a moiré pattern at not too small rotation angles. 
\begin{table} \begin{tabular}{l l l l l} \hline \hline Structure & & S\({}_{\pm\rm ext}\) & Mo\({}_{\pm}\) & S\({}_{\pm\rm int}\) \\ \hline non-relaxed & \(\bar{z}\) & \(\pm\)4.96 & \(\pm\)3.40 & \(\pm\)1.84 \\ relaxed & \(\bar{z}\) & \(\pm\)4.86 & \(\pm\)3.21 & \(\pm\)1.56 \\ & \(\sigma_{z}\) & 0.10 & 0.10 & 0.10 \\ & \(z_{\rm AA}\) & \(\pm\)5.08 & \(\pm\)3.43 & \(\pm\)1.78 \\ & \(z_{\rm AB}\) & \(\pm\)4.72 & \(\pm\)3.07 & \(\pm\)1.42 \\ & \(\delta z\) & 0.36 & 0.36 & 0.36 \\ \(z\)-mod & \(\bar{z}\) & \(\pm\)4.84 & \(\pm\)3.19 & \(\pm\)1.54 \\ & \(\sigma_{z}\) & 0.10 & 0.10 & 0.10 \\ bilayer AA & \(z_{\rm AA}\) & \(\pm\)5.11 & \(\pm\)3.50 & \(\pm\)1.88 \\ bilayer AB’ (BA’) & \(z_{\rm AB}\) & \(\pm\)4.62 & \(\pm\)3.01 & \(\pm\)1.39 \\ \hline \hline \end{tabular} \end{table} Table 1: Atomic positions: average \(\bar{z}\) value per layer and the corresponding standard deviation \(\sigma_{z}\), for layer \(+\) (\(z>0\)) and layer \(-\) (\(z<0\)) in the non-relaxed, relaxed, and \(z\)-mod tb-MoS\({}_{2}\) structures. The atoms of the \(z\)-mod structure have the same in-plane \(xy\) coordinates as the non-relaxed structure, and their \(z\) coordinates are calculated from Eq. (1) with \(z_{\rm AA}\) and \(z_{\rm AB}\) (_i.e._, in the AB’ and BA’ stacking regions) of the DFT-relaxed structure. For each layer, \(\delta z=|z_{\rm AA}-z_{\rm AB}|\). The last two lines correspond to simple AA and AB’ (BA’) bilayer stackings, for which \(\bar{z}=z_{\rm AA}\) and \(z_{\rm AB}\), respectively. The \(\bar{z}\) values for the two layers are symmetric with respect to \(z=0\). Distances are in Å. Figure 2: In-plane displacement vectors of the atoms in the moiré cell of the DFT-relaxed structure with respect to the rigidly twisted bilayer (non-relaxed structure). To be visible, the displacement vectors \(\vec{\tau}_{i}\) of each atom \(i\) have been multiplied by 40. ## 5 Conclusion We have performed a DFT study of the atomic relaxation and the electronic band dispersion of twisted bilayer MoS\({}_{2}\) with a rotation angle equal to \(5.09^{\circ}\). Contrary to what has been observed for very small angles (typically less than \(\sim 2^{\circ}\)) [21; 31; 33], the in-plane atomic displacements with respect to a rigidly twisted bilayer are very small, and they have almost no effect on the band dispersion. However, the out-of-plane displacements are large and significantly alter the low-energy bands around the gap. These atomic displacements can be modeled by a simple formula that depends only on the interlayer distances in the AA and AB stacking regions. The reduction of bandwidth and the related emergence of flat bands identify weakly doped MoS\({}_{2}\) bilayers as good candidates for the observation of strong correlation effects. For a complete theoretical study of electronic correlations in these complex systems, it is important to take the atomic relaxation into account. We offer here a simple out-of-plane atomic-displacement model for not too small rotation angles, typically a few degrees. Preliminary investigations (not shown here) indicate that, for determining the electronic structure of low-energy bands, the Slater-Koster tight-binding models [28; 29], which are efficient for rigidly twisted bilayers, would require a further adjustment of the parameters in order to be applicable to the \(z\)-modulated structures. ## 6 Acknowledgments Calculations have been performed at the _Centre de Calcul (CDC)_, CY Cergy Paris Université, and at GENCI-IDRIS (Grant No. A0060910784). We thank Y. Costes and B. 
Mary, CDC, for computing assistance. This work was supported by the ANR project FlatMoi (ANR-21-CE30-0029) and the Paris/Seine excellence initiative (Grants No. 2017-231-C01-A0 and AAP2019-000000113).
2302.08942
PAC-Bayesian Generalization Bounds for Adversarial Generative Models
We extend PAC-Bayesian theory to generative models and develop generalization bounds for models based on the Wasserstein distance and the total variation distance. Our first result on the Wasserstein distance assumes the instance space is bounded, while our second result takes advantage of dimensionality reduction. Our results naturally apply to Wasserstein GANs and Energy-Based GANs, and our bounds provide new training objectives for these two. Although our work is mainly theoretical, we perform numerical experiments showing non-vacuous generalization bounds for Wasserstein GANs on synthetic datasets.
Sokhna Diarra Mbacke, Florence Clerc, Pascal Germain
2023-02-17T15:25:49Z
http://arxiv.org/abs/2302.08942v4
# PAC-Bayesian Generalization Bounds for Adversarial Generative Models ###### Abstract We extend PAC-Bayesian theory to generative models and develop generalization bounds for models based on the Wasserstein distance and the total variation distance. Our first result on the Wasserstein distance assumes the instance space is bounded, while our second result takes advantage of dimensionality reduction. Our results naturally apply to Wasserstein GANs and Energy-Based GANs, and our bounds provide new training objectives for these two. Although our work is mainly theoretical, we perform numerical experiments showing non-vacuous generalization bounds for Wasserstein GANs on synthetic datasets. 
Energy-Based GANs (EBGANs) view the critic as an energy function and use a margin loss. 
More precisely, given a positive number \(m\) called the margin, EBGAN's critic and generator minimize respectively \[\min_{f}\left\{\mathop{\mathbb{E}}_{\mathbf{x}\sim P^{*}}f(\mathbf{x})+\mathop{ \mathbb{E}}_{\hat{\mathbf{x}}\sim P^{g}}\max\left(0,m-f(\hat{\mathbf{x}})\right) \right\},\] and \[\min_{g}\left\{\mathop{\mathbb{E}}_{\hat{\mathbf{x}}\sim P^{g}}f(\hat{\mathbf{x }})-\mathop{\mathbb{E}}_{\mathbf{x}\sim P^{*}}f(\mathbf{x})\right\}.\] Note that the critic is constrained to be non-negative. Arjovsky et al. (2017) showed that under an optimal critic, EBGAN's generator minimizes (a constant scaling of) the total variation distance \(d_{TV}(P^{*},P^{g})\). **Generalization.** Since the true distribution \(P^{*}\) is unknown and the model only has access to its empirical counterpart \(P^{*}_{n}\), the question of generalization naturally arises: how can one certify that the learned distribution \(P^{g}\) is "close" to the true one \(P^{*}\)? The goal of this work is to study the generalization properties of GANs using PAC-Bayesian theory. More precisely, we prove non-vacuous PAC-Bayesian generalization bounds for generative models based on the Wasserstein distance and the total variation distance. Since we use the IPM formulation of these metrics, our results are naturally applicable to WGANs and EBGANs. ### Related Works There is a large body of work dedicated to understanding the generalization properties of GANs (Arora et al., 2017; Zhang et al., 2018; Liang, 2018; Singh et al., 2018; Uppal et al., 2019; Schreuder et al., 2021; Biau et al., 2021). Given a family of generators \(\mathcal{G}\), a family of critics \(\mathcal{F}\), and a discrepancy measure \(\mathcal{D}\), the usual goal is to upper bound the quantity \(\mathcal{D}(P^{*},P^{\hat{g}})\), where \(\hat{g}\) is an optimal solution to the empirical problem \(\min_{g\in\mathcal{G}}\mathcal{D}(P^{*}_{n},P^{g})\). From a statistical perspective, the most common approach is to quantify the rate of convergence of \(r(\hat{g})\coloneqq\mathcal{D}(P^{*},P^{\hat{g}})-\inf_{g\in\mathcal{G}} \mathcal{D}(P^{*},P^{g})\), as the size of the training set \(n\) goes to infinity. Assuming that the target distribution \(P^{*}\) has a smooth density, Singh et al. (2018); Liang (2018) and Uppal et al. (2019) provide rates of convergence dependent on the ambient dimension of the instance space \(\mathcal{X}\) and the complexity of the critic family \(\mathcal{F}\). Noting that the density assumption on \(P^{*}\) might be unrealistic in practice, Schreuder et al. (2021) prove rates of convergence assuming \(P^{*}\) is a smooth transformation of the uniform distribution on a low-dimensional manifold. This allows them to derive rates depending on the intrinsic dimension of the data, as opposed to its extrinsic dimension. Under simplicity assumptions on the critic family, Zhang et al. (2018) provide upper bounds for \(r(\hat{g})\) when \(\mathcal{D}\) is the negative critic loss \(d_{\mathcal{F}}\). They first prove general bounds using the Rademacher complexity of \(\mathcal{F}\), then bound this complexity in the case when \(\mathcal{F}\) is a family of neural networks with certain constraints. More recently, Biau et al. (2021) developed upper bounds for \(r(\hat{g})\), but assuming \(\mathcal{D}\) is the Wasserstein-1 distance \(W_{1}\). They argue that since the use of \(d_{\mathcal{F}}\) in practice is purely motivated by optimization considerations, \(W_{1}\) is a better way of assessing the generalization properties of WGANs. 
One major distinction between this work and the ones cited above is that our definition of the generalization error does not explicitly involve the modeling error \(\inf_{g\in\mathcal{G}}\mathcal{D}(P^{*},P^{g})\). Instead, we define the generalization error as the discrepancy between the empirical loss and the expected population loss, allowing us to derive bounds that can be turned into an optimization objective to be minimized by a learning algorithm. Our approach to generalization is closer to the one taken by Arora et al. (2017), who study the generalization properties of GANs by defining the generalization error, for any generator \(g\), as \(|\mathcal{D}(P^{*},P^{g})-\mathcal{D}(P^{*}_{n},P^{g}_{n})|\), where \(\mathcal{D}(P^{*}_{n},P^{g}_{n})\) is the discrepancy between the empirical training and generated distributions. They show that models minimizing \(W_{1}\) do not generalize (in the sense that the generalization error cannot be made arbitrarily small, given a polynomial number of samples), while models minimizing \(d_{\mathcal{F}}\) do, under certain conditions on \(\mathcal{F}\). A distinction between our approach and the one taken by Arora et al. (2017) is that we define the empirical risk as the expectation \(\mathbb{E}\,\mathcal{D}(P^{*}_{n},P^{g}_{n})\) with respect to the fake distribution \(P^{g}_{n}\), since in practice, the samples defining \(P^{g}_{n}\) are drawn anew at each iteration. Moreover, we study distributions \(\rho\in\mathcal{M}^{1}_{+}(\mathcal{G})\) over the set of generators, as well as individual generators \(g\in\mathcal{G}\). There are other differences between our approach and the ones above. First, our bounds do not depend on the complexity or smoothness of the critic family \(\mathcal{F}\). In other words, our generalization bounds apply systematically to any critic family \(\mathcal{F}\), with no distinctions between the cases where \(\mathcal{F}\) is a "small" subset of \(\text{Lip}_{1}\) and where \(\mathcal{F}=\text{Lip}_{1}\). The intuitive explanation is that the complexity of the critic family is naturally "embedded" in the empirical and population risks defined in the PAC-Bayesian framework. Second, because of the generality of the PAC-Bayesian theory, we make no assumptions on the structure of the critic family, and some of our bounds do not even make assumptions on the hypothesis space \(\mathcal{G}\). The fact that these results can be directly applied to neural networks is a consequence of the generality of PAC-Bayes bounds. Moreover, our bounds provide novel training objectives, giving rise to models that use the training data to not only learn the distribution \(P^{*}\), but also obtain a risk certificate valid on previously unseen data. Aside from the study of the generalization properties of GANs, our work relates to the recent work of Ohana et al. (2022), who develop PAC-Bayes bounds for "adaptive" sliced-Wasserstein distances. The sliced-Wasserstein distance (SW) (Rabin et al., 2011) is an optimization-focused alternative to the Wasserstein distance. Given distributions \(P\) and \(Q\) on a high-dimensional space, SW computes \(W_{1}(P_{1},Q_{1})\) instead of \(W_{1}(P,Q)\), where \(P_{1}\) and \(Q_{1}\) are projections of \(P\) and \(Q\) on a 1-dimensional space. Note that the bounds developed by Ohana et al. (2022) apply to the SW distance, whereas our bounds are developed for the Wasserstein distance between distributions on a high-dimensional space. In addition, the bounds of Ohana et al. 
(2022) focus on the discriminative setting, that is, the models they study are optimized to find the projections with the highest discriminative power. Then, they argue that these bounds can be applied to the study of generative models based on the distributional sliced-Wasserstein distance (Nguyen et al., 2021). In contrast, our results are specifically tailored to the generative modeling setting and provide upper bounds on the difference between the empirical risk of a critic and its population risk. Finally, we mention a recent article (Cherief-Abdellatif et al., 2022) which uses PAC-Bayes to obtain generalization bounds on the _reconstruction loss_ of VAEs. In short, Cherief-Abdellatif et al. (2022) clip the reconstruction loss in order to utilize McAllester's bound (McAllester, 2003), which applies to \([0,1]\)-bounded loss functions. Moreover, they omit the KL-loss, meaning they do not analyze a VAE per se, but simply a stochastic reconstruction machine. Hence, theirs is not a PAC-Bayesian analysis of a generative model, but of a reconstruction model. ### Our Contributions The primary objective of this work is to extend PAC-Bayesian theory to adversarial generative models. We develop novel PAC-Bayesian generalization bounds for generative models based on the Wasserstein distance and the total variation distance. First, assuming the instance space is bounded, we prove generalization bounds for Wasserstein models dependent on the diameter of the instance space. Then, we show that one can obtain bounds dependent on the intrinsic dimension, assuming that the distributions are smooth transformations of a distribution on a low-dimensional space. Finally, we exhibit generalization bounds for models based on the total variation distance. To the best of our knowledge, ours are the first PAC-Bayes bounds developed for the generalization properties of generative models. Our results naturally apply to Wasserstein GANs and Energy-Based GANs. Moreover, our bounds provide new training objectives for WGANs and EBGANs, leading to models with statistical guarantees. It is noteworthy that we make no density assumptions on the true and generated distributions. Although our main motivation is theoretical, we perform numerical experiments showing non-vacuous generalization bounds for WGANs on synthetic datasets. ## 2 PAC-Bayesian Theory PAC-Bayesian theory (introduced by McAllester, 1999) applies Probably Approximately Correct (PAC) inequalities to _pseudo-Bayesian_ learning algorithms--whose output could be framed as a _posterior_ probability distribution over a class of candidate models--in order to provide generalization bounds for machine learning models. Here, the term generalization bound refers to upper bounds on the discrepancy between a model's empirical loss and its population loss (_i.e._, the loss on the true data distribution). Optimizing these bounds leads to _self-certified_ learning algorithms that produce models whose behavior on the population is statistically guaranteed to be close to their behavior on the observed samples. PAC-Bayes has been applied to a wide variety of settings such as classification (Germain et al., 2009; Parrado-Hernandez et al., 2012), linear regression (Germain et al., 2016; Shalaeva et al., 2020), meta-learning (Amit and Meir, 2018), variational inference for mixture models (Cherief-Abdellatif and Alquier, 2018) and online learning (Haddouche and Guedj, 2022). 
In recent years, PAC-Bayes has been used to obtain non-vacuous generalization bounds for neural networks (Dziugaite and Roy, 2018; Perez-Ortiz et al., 2021). See Guedj (2019) and Alquier (2021) for recent surveys. The wide variety of applications is due to the flexibility of the PAC-Bayesian framework. Indeed, the theory is very general, and requires few assumptions. We consider a training set \(S=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\), iid sampled from an unknown probability distribution \(P^{*}\) over an instance space \(\mathcal{X}\).2 Given a hypothesis class \(\mathcal{H}\) and a real-valued loss function \(\ell:\mathcal{H}\times\mathcal{X}\to[0,\infty)\), the empirical and population risks of each hypothesis \(h\in\mathcal{H}\) are respectively defined as \[\hat{\mathcal{R}}_{S}(h)=\frac{1}{n}\sum_{i=1}^{n}\ell(h,\mathbf{x}_{i})\;\; \text{and}\;\;\mathcal{R}(h)=\underset{\mathbf{x}\sim P^{*}}{\mathbb{E}}\left[ \ell(h,\mathbf{x})\right].\] Footnote 2: A vast majority of the PAC-Bayes literature is devoted to the prediction setting where each training instance is a pair \((x,y)\) of some features \(x\) and a label \(y\). We adopt slightly more general definitions that encompass unsupervised learning. Instead of individual hypotheses \(h\in\mathcal{H}\), PAC-Bayes focuses on _posterior_ probability distributions over hypotheses \(\rho\in\mathcal{M}^{1}_{+}(\mathcal{H})\). These distributions can be seen as _aggregate_ hypotheses. Similar to the risks for individual hypotheses, the empirical and true risks of an aggregate hypothesis \(\rho\in\mathcal{M}^{1}_{+}(\mathcal{H})\) are respectively defined as \[\hat{\mathcal{R}}_{S}(\rho)=\underset{h\sim\rho}{\mathbb{E}}\left[\hat{ \mathcal{R}}_{S}(h)\right]\;\;\text{and}\;\;\mathcal{R}(\rho)=\underset{h\sim \rho}{\mathbb{E}}\left[\mathcal{R}(h)\right].\] The goal of PAC-Bayesian theory is to provide upper bounds on the discrepancy between \(\mathcal{R}(\rho)\) and \(\hat{\mathcal{R}}_{S}(\rho)\) which hold with high probability over the random draw of the training set \(S\). As an example, consider the following general PAC-Bayes bound originally developed by Germain et al. (2009) and further formalized by Haddouche et al. (2021). **Theorem 2.1**.: _Let \(\pi\in\mathcal{M}^{1}_{+}(\mathcal{H})\) be a prior distribution independent of the data, \(D:\mathbb{R}^{+}\times\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}\) be a convex function, and \(\delta\in(0,1)\) be a real number. With probability at least \(1-\delta\) over the random draw of \(S\sim P^{*\otimes n}\), the following holds for any \(\rho\in\mathcal{M}^{1}_{+}(\mathcal{H})\) such that \(\rho\ll\pi\) and \(\pi\ll\rho\):_ \[\begin{split} D\left(\mathcal{R}(\rho),\hat{\mathcal{R}}_{S}( \rho)\right)&\leq\operatorname{KL}(\rho\,||\,\pi)+\log\frac{1}{ \delta}\\ &\quad+\log\mathop{\mathbb{E}}_{h\sim\pi}\mathop{\mathbb{E}}_{S \sim P^{*\otimes n}}e^{D\left(\mathcal{R}(h),\hat{\mathcal{R}}_{S}(h)\right) }\,,\end{split} \tag{4}\] _where \(\operatorname{KL}(\rho\,||\,\pi)\) is the Kullback-Leibler divergence between distributions \(\rho\) and \(\pi\)._ The left-hand side of Equation (4) quantifies the discrepancy between the true risk \(\mathcal{R}(\rho)\) and its empirical counterpart \(\hat{\mathcal{R}}_{S}(\rho)\) for a given training set \(S\), while the complexity term of the right-hand side involves the expectation with respect to \(S\sim P^{*\otimes n}\).
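To make the complexity term concrete, the following minimal sketch (an illustration under our own assumptions, not code from the paper) evaluates the KL term for a diagonal Gaussian prior and posterior over a parameter vector, the standard choice when PAC-Bayes bounds are optimized for neural networks, and assembles a certificate of the shape that the bounds of Section 3 take: empirical risk plus \((\operatorname{KL}+\log\frac{1}{\delta})/\lambda\) plus a \(\lambda\)-dependent slack term. All names and numerical values here are illustrative.

```python
import numpy as np

def kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p):
    """KL(rho || pi) for a diagonal Gaussian posterior rho and prior pi."""
    vq, vp = sig_q**2, sig_p**2
    return 0.5 * np.sum(np.log(vp / vq) + (vq + (mu_q - mu_p)**2) / vp - 1.0)

def certificate(emp_risk, kl, delta, lam, slack):
    """Generic right-hand side: empirical risk + (KL + log 1/delta)/lambda + slack."""
    return emp_risk + (kl + np.log(1.0 / delta)) / lam + slack

# Toy numbers: a posterior whose mean drifted slightly away from the prior mean.
rng = np.random.default_rng(0)
dim, sigma0 = 1000, 1e-3
mu_p = rng.normal(size=dim)
mu_q = mu_p + 1e-3 * rng.normal(size=dim)
kl = kl_diag_gaussians(mu_q, np.full(dim, sigma0), mu_p, np.full(dim, sigma0))

n, delta = 10_000, 0.05
lam = np.sqrt(n)             # the n^{-1/2}-rate choice of lambda discussed later
diameter = 8.2 * np.sqrt(2)  # e.g. a dataset truncated to a square of side 8.2
print(certificate(0.1, kl, delta, lam, lam * diameter**2 / (4 * n)))
```

With \(\sigma_{0}\) fixed, the trade-off discussed around Theorem 2.1 appears directly: a posterior that drifts further from the prior pays through the KL term of the certificate.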
As the data distribution \(P^{*}\) is unknown, the exponential moment term needs to be upper-bounded in order to obtain a finite and numerically computable bound. Theorem 2.1 requires \(\rho\ll\pi\), which is classic in PAC-Bayes bounds and necessary for the KL-divergence to be defined. However, it also requires \(\pi\ll\rho\), which seems a bit more restrictive3. As noted by Haddouche et al. (2021), one has to make sure that \(\pi\) and \(\rho\) have the same support, which is the case when they are from the same parametric family of distributions, such as Gaussian or Laplace. Although the KL-divergence appears in most PAC-Bayes bounds, some bounds have been developed with the Renyi divergence (Begin et al., 2016) and IPMs (Amit et al., 2022). Footnote 3: This requirement was highlighted by Haddouche et al. (2021). Finally, note that Theorem 2.1 requires the prior distribution \(\pi\) to be independent of the training set \(S\). Even though this restriction makes it easier to bound the exponential moment (Rivasplata et al., 2020), it may also lead to large values of the KL term in practice, since the posterior is likely to be far from the prior. A common strategy is to use a portion of the training data to learn the prior, while making sure this portion is not used in the numerical computation of the bound (Perez-Ortiz et al., 2021). Aside from bounds for aggregate hypotheses \(\rho\in\mathcal{M}^{1}_{+}(\mathcal{H})\), PAC-Bayes bounds can be formulated for individual hypotheses \(h\in\mathcal{H}\) as well. Such bounds hold with high probability over the random draw of a single predictor \(h\) sampled from the PAC-Bayesian posterior, and have appeared in, e.g., Catoni (2007). In some cases, the derandomization step is quite straightforward, as a result of the structure of the hypotheses. For instance, Germain et al. (2009) utilize the linearity of the hypotheses to express a randomized linear classifier as a single deterministic linear classifier. In the general case, however, it can be quite challenging and costly to derandomize PAC-Bayesian bounds (Neyshabur et al., 2018; Nagarajan and Kolter, 2019; Biggs and Guedj, 2022). Below, we present a result by Rivasplata et al. (2020), who provide a general theorem for derandomizing PAC-Bayes bounds. **Theorem 2.2**.: _With the definitions and assumptions of Theorem 2.1, given a measurable function \(f:\mathcal{S}\times\mathcal{H}\rightarrow\mathbb{R}\), the following holds with probability at least \(1-\delta\) over the random draws of \(S\sim P^{*\otimes n}\) and \(h\sim\rho\):_ \[f(S,h)\leq\log\frac{d\rho}{d\pi}(h)+\log\frac{1}{\delta}+\log \mathop{\mathbb{E}}_{h\sim\pi}\mathop{\mathbb{E}}_{S\sim P^{*\otimes n}}e^{f( S,h)}. \tag{5}\] Removing the expectation with respect to the hypothesis space is very useful in applications to neural networks (Viallard et al., 2021). Theorem 2.2 uses the Radon-Nikodym derivative of \(\rho\) with respect to \(\pi\), which can lead to high variance when the bound is used as an optimization objective for neural networks. Viallard et al. (2021) empirically highlighted this phenomenon, and formulated a generic disintegrated bound where the Radon-Nikodym derivative is replaced by the Renyi divergence between \(\rho\) and \(\pi\). ## 3 PAC-Bayesian Bounds for Generative Models This section presents our main results. We consider a metric space \((\mathcal{X},d)\), an unknown probability measure \(P^{*}\) on \(\mathcal{X}\) and a training set \(S=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\) iid sampled from \(P^{*}\).
The empirical counterpart of \(P^{*}\) defined by \(S\) is denoted \(P^{*}_{n}\). We also consider a hypothesis space \(\mathcal{G}\) such that each generator \(g\in\mathcal{G}\) induces a probability measure \(P^{g}\) on \(\mathcal{X}\), from which fake samples \(S_{g}=\{\check{\mathbf{x}}_{1},\ldots,\check{\mathbf{x}}_{n}\}\sim P^{g\otimes n}\) are generated. Thus, \[P^{*}_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\mathbf{x}_{i}}\;\;\text{and}\;\;P^ {g}_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\check{\mathbf{x}}_{i}},\] where \(\delta_{\mathbf{x}_{i}}\) is the Dirac measure on sample \(\mathbf{x}_{i}\). ### Bounds for Wasserstein generative models Let us consider a subset \(\mathcal{F}\subseteq\text{Lip}_{1}\) that is _symmetric_, meaning \(f\in\mathcal{F}\) implies \(-f\in\mathcal{F}\). We emphasize that \(\mathcal{F}\) can be a small subset of \(\text{Lip}_{1}\), or the whole set \(\text{Lip}_{1}\). Given a generator \(g\in\mathcal{G}\), we define its empirical risk as \[\mathcal{W}_{\mathcal{F}}\left(P^{*}_{n},P^{g}\right)=\mathop{\mathbb{E}}_{S_{ g}}\left[d_{\mathcal{F}}(P^{*}_{n},P^{g}_{n})\right], \tag{6}\] where the expectation is taken with respect to the iid sample \(S_{g}\) that induces \(P^{g}_{n}\), and \(d_{\mathcal{F}}(P^{*}_{n},P^{g}_{n})\) is the IPM induced by \(\mathcal{F}\) (Equation 1). The generalization error is defined as \[\mathop{\mathbb{E}}_{S\sim P^{*\otimes n}}\left[\mathcal{W}_{\mathcal{F}}\left(P_{n} ^{*},P^{g}\right)\right]-\mathcal{W}_{\mathcal{F}}\left(P_{n}^{*},P^{g}\right),\] namely the difference between the population and empirical risks. These definitions can be extended to aggregate generators by taking the expectation according to \(\rho\in\mathcal{M}_{+}^{1}(\mathcal{G})\). The following theorem provides bounds on the generalization error of both (i) aggregate and (ii) individual generators. **Theorem 3.1**.: _Let \(\mathcal{F}\subseteq\mathrm{Lip}_{1}\) be a symmetric set of real-valued functions on \(\mathcal{X}\), \(\Delta\coloneqq\sup_{\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}}d(\mathbf{x },\mathbf{x}^{\prime})<\infty\) be the diameter of \(\mathcal{X}\), \(P^{*}\in\mathcal{M}_{+}^{1}(\mathcal{X})\) be the true data-generating distribution and \(S\in\mathcal{X}^{n}\) an \(n\)-sized iid sample from \(P^{*}\). Consider a set of generators \(\mathcal{G}\) such that each \(g\in\mathcal{G}\) induces a distribution \(P^{g}\) on \(\mathcal{X}\), a prior distribution \(\pi\) over \(\mathcal{G}\), and real numbers \(\lambda>0\) and \(\delta\in(0,1)\)._ 1. _For any probability measure_ \(\rho\) _over_ \(\mathcal{G}\) _such that_ \(\rho\ll\pi\) _and_ \(\pi\ll\rho\)_, the following holds with probability at least_ \(1-\delta\) _over the random draw of_ \(S\)_:_ \[\mathop{\mathbb{E}}_{g\sim\rho}\mathop{\mathbb{E}}_{S}\left[ \mathcal{W}_{\mathcal{F}}\left(P_{n}^{*},P^{g}\right)\right]-\mathop{\mathbb{E} }_{g\sim\rho}\left[\mathcal{W}_{\mathcal{F}}\left(P_{n}^{*},P^{g}\right)\right]\leq\frac{1}{\lambda}\left[\operatorname{KL}(\rho\,||\,\pi)+\log \frac{1}{\delta}\right]+\frac{\lambda\Delta^{2}}{4n}.\] (7) 2.
_For any probability measure_ \(\rho\) _over_ \(\mathcal{G}\) _such that_ \(\rho\ll\pi\) _and_ \(\pi\ll\rho\)_, the following holds with probability at least_ \(1-\delta\) _over the random draw of_ \(S\) _and_ \(g\sim\rho\)_:_ \[\mathop{\mathbb{E}}_{S}\left[\mathcal{W}_{\mathcal{F}}\left(P_{n }^{*},P^{g}\right)\right]-\mathcal{W}_{\mathcal{F}}\left(P_{n}^{*},P^{g}\right)\leq\frac{1}{\lambda}\left[\log\frac{d\rho}{d\pi}(g)+\log\frac{1} {\delta}\right]+\frac{\lambda\Delta^{2}}{4n}.\] (8) Proof Idea.: We provide a detailed outline of the proof here. The full details can be found in the supplementary material (Section A.2). The proof of (i) relies on a technical lemma (Lemma A.3). It is possible to view \(d_{\mathcal{F}}(P_{n}^{*},P_{n}^{g})\) as a function \(\mathcal{X}^{2n}\to\mathbb{R}\), since \(P_{n}^{*}\) (resp. \(P_{n}^{g}\)) is the uniform distribution on \(n\) samples that were selected according to \(P^{*}\) (resp. \(P^{g}\)). Lemma A.3 states that \(d_{\mathcal{F}}(P_{n}^{*},P_{n}^{g})\) has the bounded differences property with bounds \(\Delta/n\), meaning that if we were to change only one sample, the new value of \(d_{\mathcal{F}}(P_{n}^{*},P_{n}^{g})\) would differ by at most \(\Delta/n\). The proof (provided in the appendix) uses properties of the \(\sup\) and the fact that \(\mathcal{F}\subset\mathrm{Lip}_{1}\). We then use a lemma from the proof of McDiarmid's inequality (Lemma A.2, previously used by Ohana et al. (2022) for their bounds on the sliced Wasserstein distance) and Fubini's theorem to obtain that \[\mathop{\mathbb{E}}_{S}[Y]\leq\exp\left[\frac{\lambda^{2}\Delta^{2}}{4n }\right],\] where \[Y\coloneqq\mathop{\mathbb{E}}_{g\sim\pi}\mathop{\mathbb{E}}_{S_{g}}\left[ \exp\left[\lambda\left(\mathop{\mathbb{E}}_{S,S_{g}}[d_{\mathcal{F}}(P_{n }^{g},P_{n}^{*})]-d_{\mathcal{F}}(P_{n}^{g},P_{n}^{*})\right)\right]\right].\] Then, Markov's inequality combined with this result yields that with probability at least \(1-\delta\) over the random draw of the training set \(S\), \[Y\leq\frac{1}{\delta}\exp\left[\frac{\lambda^{2}\Delta^{2}}{4n}\right].\] The rest of the proof follows the main steps of the proof of Theorem 2.1, as presented by Haddouche et al. (2021). We use the Radon-Nikodym derivatives to change the expectation over \(g\sim\pi\) into an expectation over \(g\sim\rho\). Applying \(\log\) (a monotone increasing function) to the inequality and then using Jensen's inequality for concave functions, with some further rewriting, yields (i). In order to obtain (ii), we study \(\xi=\log\mathop{\mathbb{E}}_{S}[Y]\). Similarly to what happens in the proof of (i), we have that \(\xi\leq\frac{\lambda^{2}\Delta^{2}}{4n}\). However, using Jensen's inequality for convex functions, we can exchange the expectation over \(S_{g}\) and \(\exp\) in the definition of \(Y\) to yield a new inequality. Combining it with the previous result \(\xi\leq\frac{\lambda^{2}\Delta^{2}}{4n}\), we obtain that \[\log\mathop{\mathbb{E}}_{S}\mathop{\mathbb{E}}_{g\sim\pi}e^{\lambda\left( \mathop{\mathbb{E}}_{S}\mathop{\mathbb{E}}_{S_{g}}[d_{\mathcal{F}}(P_{n}^{*},P_{n }^{g})]-\mathop{\mathbb{E}}_{S_{g}}d_{\mathcal{F}}(P_{n}^{*},P_{n}^{g})\right)} \leq\frac{\lambda^{2}\Delta^{2}}{4n}.\] We then use the general disintegrated bound by Rivasplata et al. (2020) stated in Theorem 2.2.
We take \[f(S,g)=\lambda\left(\mathop{\mathbb{E}}_{S,S_{g}}[d_{\mathcal{F}}(P_{n}^{*},P_{n }^{g})]-\mathop{\mathbb{E}}_{S_{g}}[d_{\mathcal{F}}(P_{n}^{*},P_{n}^{g})] \right).\] The previously obtained inequality enables us to bound \[\log\mathop{\mathbb{E}}_{S}\mathop{\mathbb{E}}_{g\sim\pi}\left[e^{f(S,g)} \right]\leq\frac{\lambda^{2}\Delta^{2}}{4n},\] which gives us the desired result and concludes the proof of (ii). Note that our disintegrated bound (8) still has the expectation with respect to the fake sample \(S_{g}\). Unlike the usual PAC-Bayesian bounds, which are mostly applicable to supervised learning, the loss we are bounding requires not only some data from the unknown distribution, but also some data depending on the hypotheses. Theorem 3.1 requires the samples \(S_{g}\) from the generated distribution to have the same size \(n\) as the training set. In practice, this is not a problem, since the user can easily sample from \(P^{g}\). One might wonder, however, if the bounds could be improved by increasing the number of fake samples. In our approach, the answer is no. Indeed, if the size of \(S_{g}\) is \(m\neq n\), then we obtain bounds with the last term \(\frac{\lambda\Delta^{2}}{4n}\) replaced by \(\frac{\lambda\Delta^{2}}{4\min(m,n)}\). Although Theorem 3.1 provides upper bounds on the expected distance between empirical measures, it also implies upper bounds on the distance between the full distributions, as shown in the following corollary. **Corollary 3.2**.: _With the definitions and assumptions of Theorem 3.1, the following properties hold for any probability measure \(\rho\) such that \(\rho\ll\pi\) and \(\pi\ll\rho\)._ 1. _With probability at least_ \(1-\delta\) _over the random draw of_ \(S\)_:_ \[\begin{split}\mathop{\mathbb{E}}_{g\sim\rho}d_{\mathcal{F}}(P^{* },P^{g})\leq&\mathop{\mathbb{E}}_{g\sim\rho}\left[\mathcal{W}_{ \mathcal{F}}\left(P_{n}^{*},P^{g}\right)\right]\\ &+\frac{1}{\lambda}\left[\operatorname{KL}(\rho\,||\,\pi)+\log \frac{1}{\delta}\right]+\frac{\lambda\Delta^{2}}{4n}.\end{split}\] 2. _With probability at least_ \(1-\delta\) _over the random draw of_ \(S\) _and_ \(g\sim\rho\)_:_ \[\begin{split}d_{\mathcal{F}}(P^{*},P^{g})\leq& \mathcal{W}_{\mathcal{F}}\left(P_{n}^{*},P^{g}\right)\\ &+\frac{1}{\lambda}\left[\log\frac{d\rho}{d\pi}(g)+\log\frac{1}{ \delta}\right]+\frac{\lambda\Delta^{2}}{4n}.\end{split}\] The proof of Corollary 3.2 is in the supplementary material (Section A.2). As a special case, when \(\mathcal{F}=\text{Lip}_{1}\), Corollary 3.2 provides upper bounds on the Wasserstein distance between the full distributions \(P^{*}\) and \(P^{g}\). **The manifold assumption.** The bounds of Theorem 3.1 depend on the diameter of the instance space, which can be a handicap for real-world datasets such as image datasets. Indeed, the manifold hypothesis states that most high-dimensional real-world datasets lie in the vicinity of low-dimensional manifolds. There is a vast body of work dedicated to testing this assumption and estimating the intrinsic dimension of commonly used datasets (Fodor, 2002; Narayanan and Mitter, 2010; Fefferman et al., 2016; Pope et al., 2021). Moreover, latent variable generative models such as VAEs (Kingma and Welling, 2014), GANs (Goodfellow et al., 2014) and their variants exploit the manifold hypothesis by learning models which approximate distributions over high-dimensional spaces with transformations of low-dimensional latent distributions. This is also a main assumption of Schreuder et al.
(2021), whose rates of convergence are dependent on the intrinsic dimension of the instance space. Taking a similar approach, we show that by assuming that the true distribution is a smooth transformation of a latent distribution over a low-dimensional hypercube, we can prove a PAC-Bayesian bound depending on the intrinsic dimension. Before stating our next result, we recall the definition of a pushforward measure. **Definition 3.3** (Pushforward Measure).: Given measurable spaces \(\mathcal{X}\) and \(\mathcal{Z}\), a probability measure \(P_{\mathcal{Z}}\) over \(\mathcal{Z}\), and a measurable function \(g:\mathcal{Z}\to\mathcal{X}\), the pushforward measure defined by \(g\) and \(P_{\mathcal{Z}}\) is the probability distribution \(g\sharp P_{\mathcal{Z}}\) on \(\mathcal{X}\) defined as \[g\sharp P_{\mathcal{Z}}(A)=P_{\mathcal{Z}}(g^{-1}(A)),\] for any measurable set \(A\subseteq\mathcal{X}\). In more practical terms, sampling \(\mathbf{x}\) from \(g\sharp P_{\mathcal{Z}}\) means sampling a latent vector \(\mathbf{z}\sim P_{\mathcal{Z}}\) first, then setting \(\mathbf{x}=g(\mathbf{z})\). For example, a GAN's generator defines a pushforward distribution. **Theorem 3.4**.: _Let \(P^{*}\in\mathcal{M}_{+}^{1}(\mathcal{X})\) be the true data-generating distribution and \(S\in\mathcal{X}^{n}\) an \(n\)-sized iid sample from \(P^{*}\). We consider a set of generators \(\mathcal{G}\) such that each \(g\in\mathcal{G}\) induces a distribution \(P^{g}\) on \(\mathcal{X}\), a prior distribution \(\pi\) over \(\mathcal{G}\), and real numbers \(\lambda>0\) and \(\delta\in(0,1)\). We also consider a latent space \(\mathcal{Z}=[0,1]^{d_{Z}}\), a latent distribution \(P_{\mathcal{Z}}\) on \(\mathcal{Z}\), and a true generator \(g^{*}:\mathcal{Z}\to\mathcal{X}\) such that \(P^{*}=g^{*}\sharp P_{\mathcal{Z}}\) and each \(g\in\mathcal{G}\) is a function \(g:\mathcal{Z}\to\mathcal{X}\) with \(P^{g}=g\sharp P_{\mathcal{Z}}\). Finally, we assume \(\mathcal{G}\cup\{g^{*}\}\subseteq\text{Lip}_{K}\) for some positive real number \(K\)._ 1. _For any probability measure_ \(\rho\) _over_ \(\mathcal{G}\) _such that_ \(\rho\ll\pi\) _and_ \(\pi\ll\rho\)_, the following holds with probability at least_ \(1-\delta\) _over the random draw of_ \(S\)_:_ \[\begin{split}&\mathop{\mathbb{E}}_{g\sim\rho}\mathop{\mathbb{E}}_{S} \left[\mathcal{W}_{\mathcal{F}}\left(P_{n}^{*},P^{g}\right)\right]-\mathop{ \mathbb{E}}_{g\sim\rho}\left[\mathcal{W}_{\mathcal{F}}\left(P_{n}^{*},P^{g} \right)\right]\\ &\leq\frac{1}{\lambda}\left[\operatorname{KL}(\rho\,||\,\pi)+\log \frac{1}{\delta}\right]+\frac{\lambda K^{2}d_{\mathcal{Z}}}{4n}.\end{split}\] (9) 2. _For any probability measure_ \(\rho\) _over_ \(\mathcal{G}\) _such that_ \(\rho\ll\pi\) _and_ \(\pi\ll\rho\)_, the following holds with probability at least_ \(1-\delta\) _over the random draw of_ \(S\) _and_ \(g\sim\rho\)_:_ \[\begin{split}&\mathop{\mathbb{E}}_{S}\left[\mathcal{W}_{ \mathcal{F}}\left(P_{n}^{*},P^{g}\right)\right]-\mathcal{W}_{\mathcal{F}} \left(P_{n}^{*},P^{g}\right)\\ &\leq\frac{1}{\lambda}\left[\log\frac{d\rho}{d\pi}(g)+\log\frac{1 }{\delta}\right]+\frac{\lambda K^{2}d_{\mathcal{Z}}}{4n}.\end{split}\] (10) The proof can be found in Section A.3 in the appendix. The proof is very similar to that of (i) of Theorem 3.1 but the technical lemma we rely on differs: instead of bounding small perturbations of \(d_{\mathcal{F}}(P_{n}^{*},P_{n}^{g})\) using the diameter \(\Delta\), we bound those by \(\frac{\lambda K^{2}d_{\mathcal{Z}}}{n}\) (see Lemma A.5). As noted by Schreuder et al.
(2021), the Lipschitz assumption on the true generator \(g^{*}\) may be realistic in practice. Indeed, the generator learned by a GAN is a Lipschitz function of its input (Seddik et al., 2020) and GAN-generated data has been shown to be a good substitute for real-life data in many applications (Frid-Adar et al., 2018; Wang et al., 2018; Sandfort et al., 2019; Zhang et al., 2022). A result similar to Corollary 3.2 can be proven for Theorem 3.4 (see Corollary A.6). ### Bounds for Total-Variation generative models In this section, we prove PAC-Bayesian generalization bounds for models based on the total variation distance. One such model is the EBGAN (Zhao et al., 2017). Indeed, Arjovsky et al. (2017) show that given an optimal critic, the EBGAN's generator minimizes a constant scaling of the total variation distance between the real and fake distributions. Let us assume \(\mathcal{F}\) is a symmetric set of functions \(f:\mathcal{X}\rightarrow[-1,1]\) and denote \[\mathcal{D}_{\mathcal{F}}(P_{n}^{*},P^{g})=\mathop{\mathbb{E}}_{S_{g}}\left[d_ {\mathcal{F}}(P_{n}^{*},P_{n}^{g})\right]\,.\] When \(\mathcal{F}\) is the set of all \([-1,1]\)-valued functions defined on \(\mathcal{X}\), then \(\mathcal{D}_{\mathcal{F}}(P_{n}^{*},P^{g})\) is the expected total variation distance between the real and fake empirical distributions. **Theorem 3.5**.: _Let \((\mathcal{X},d)\) be a metric space, \(P^{*}\in\mathcal{M}_{+}^{1}(\mathcal{X})\) be the true data-generating distribution and \(S\in\mathcal{X}^{n}\) an \(n\)-sized iid sample from \(P^{*}\). Consider a set of generators \(\mathcal{G}\) such that each \(g\in\mathcal{G}\) induces a distribution \(P^{g}\) on \(\mathcal{X}\), a prior distribution \(\pi\) over \(\mathcal{G}\) and real numbers \(\lambda>0\) and \(\delta\in(0,1)\)._ 1. _For any probability measure_ \(\rho\) _over_ \(\mathcal{G}\) _such that_ \(\rho\ll\pi\) _and_ \(\pi\ll\rho\)_, the following holds with probability at least_ \(1-\delta\) _over the random draw of_ \(S\)_:_ \[\mathop{\mathbb{E}}_{g\sim\rho}\mathop{\mathbb{E}}_{S}\left[\mathcal{D}_{ \mathcal{F}}\left(P_{n}^{*},P^{g}\right)\right]-\mathop{\mathbb{E}}_{g\sim\rho }\left[\mathcal{D}_{\mathcal{F}}\left(P_{n}^{*},P^{g}\right)\right]\leq\frac{1 }{\lambda}\left[\operatorname{KL}(\rho\,||\,\pi)+\log\frac{1}{\delta}\right]+ \frac{4\lambda}{n}.\] (11) 2. _For any probability measure_ \(\rho\) _over_ \(\mathcal{G}\) _such that_ \(\rho\ll\pi\) _and_ \(\pi\ll\rho\)_, the following holds with probability at least_ \(1-\delta\) _over the random draw of_ \(S\) _and_ \(g\sim\rho\)_:_ \[\mathop{\mathbb{E}}_{S}\left[\mathcal{D}_{\mathcal{F}}\left(P_{n}^{*},P^{g} \right)\right]-\mathcal{D}_{\mathcal{F}}\left(P_{n}^{*},P^{g}\right)\leq\frac{ 1}{\lambda}\left[\log\frac{d\rho}{d\pi}(g)+\log\frac{1}{\delta}\right]+\frac{4 \lambda}{n}.\] (12) The proof of Theorem 3.5 is in the appendix (Section A.4). The proof is very similar to that of (i) of Theorem 3.1 but the technical lemma we rely on differs: instead of bounding small perturbations of \(d_{\mathcal{F}}(P_{n}^{*},P_{n}^{g})\) using the diameter \(\Delta\), we bound those by \(\frac{2}{n}\) (see Lemma A.7). Note that unlike the bounds for the Wasserstein distance, the bounds for the total variation distance do not involve the size of the latent or instance space.
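Since \(d_{\mathcal{F}}\) is a supremum over critics, any single critic yields a lower bound on the empirical IPM, which is how these quantities are approximated in practice. The following minimal sketch (purely illustrative: the critic is hand-fixed rather than trained, and all names and numbers are our own) makes this concrete for a \([-1,1]\)-valued critic family.

```python
import numpy as np

def ipm_lower_bound(critic, real, fake):
    """Plug-in estimate: mean critic(real) - mean critic(fake) lower-bounds
    d_F(P*_n, P^g_n) = sup_{f in F} [ E_{P*_n} f - E_{P^g_n} f ]."""
    return critic(real).mean() - critic(fake).mean()

# A hand-fixed critic in the total-variation family: values squashed into (-1, 1).
w, b = np.array([0.7, -0.2]), 0.1
critic = lambda x: np.tanh(x @ w + b)

rng = np.random.default_rng(1)
real = rng.normal(loc=(1.0, 0.0), size=(500, 2))   # stand-in for samples of P*_n
fake = rng.normal(loc=(0.0, 0.0), size=(500, 2))   # stand-in for samples of P^g_n
print(ipm_lower_bound(critic, real, fake))         # any single critic lower-bounds d_F
```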
That the total variation bounds carry no dimension factor is not surprising, since \(d_{TV}\) can be seen as a special case of \(W_{1}\) when the underlying metric on \(\mathcal{X}\) is \(d=\mathbf{1}_{[x\neq y]}\). Results by Arjovsky et al. (2017) show that the topology induced by the total variation distance is as strong as the one induced by the Jensen-Shannon divergence, implying that EBGANs may suffer from some of the issues of the original GAN. Therefore, we focus our experiments on WGANs. ### Rate of convergence The rate of convergence of the bounds proposed in this work depends on the choice of the hyperparameter \(\lambda\). Choosing \(\lambda=n\) leads to a fast rate of \(n^{-1}\), but the bounds do not converge to \(0\). The optimal rate for a convergence to \(0\) is \(n^{-1/2}\) and is obtained with \(\lambda=\sqrt{n}\). Note that unlike previous results for GANs, our optimal rate of convergence does not depend on the (intrinsic or extrinsic) dimension of the dataset. ## 4 Experiments We perform experiments on two synthetic datasets: a mixture of 8 Gaussians arranged on a ring, and a mixture of 25 Gaussians arranged on a grid. These are standard synthetic datasets for GAN experiments, see, e.g., Dumoulin et al. (2017); Srivastava et al. (2017); Dieng et al. (2019). In order to formally ensure the diameter of the instance space is finite, we truncate the data so that the first dataset is contained in a disc of radius \(3.2\) and the second dataset in a square of side \(8.2\), both centered at the origin. Both the generator and critic are fully connected networks with three hidden layers each. The networks used for learning the 8-component mixture dataset have \(100\) hidden units per layer, and the ones for the 25-component mixture dataset have \(200\) hidden units per layer. In order to guarantee that the critic is \(1\)-Lipschitz, we use weight-clipping (Arjovsky et al., 2017). The clipping parameter for each layer \(L\) is \(\frac{1}{\sqrt{c_{L}}}\), where \(c_{L}\) is the dimension of the layer's output. This is enough to guarantee that the layer is \(1\)-Lipschitz with respect to its input (see Remark B.1). As commonly done when optimizing PAC-Bayesian bounds with neural networks (Perez-Ortiz et al., 2021), we use a portion of the training set to learn the mean of the prior distribution \(\pi\). Given a size \(n\) training set, the prior's mean is learned on \(n_{0}<n\) samples, the posterior \(\rho\) is learned on all \(n\) samples, and the bound is computed on the remaining \(n-n_{0}\) samples. The standard deviation of the prior \(\pi\) is denoted \(\sigma_{0}\); we perform a sweep over the values \(\sigma_{0}\in\{10^{-7},10^{-6},10^{-5},0.0001,0.001\}\), and fix the hyperparameter \(\lambda=\frac{n}{32}\). The standard deviation of the posterior is learned, and we use \(\sigma_{0}\) as a starting point. Samples from the learned distributions are displayed in the supplementary material (Figures 4 and 5). The reported empirical and test risks are approximated by averaging over \(100\) generators independently sampled from \(\rho\). We observe that the learned generator has similar empirical and test risks. This is a known asset of learning by optimizing a PAC-Bayesian bound, as it prevents overfitting the training samples. We even notice that some model instances have an empirical risk larger than their test risk, a phenomenon rarely observed when training a discriminative (prediction) model.
In our generative setting, this indicates situations where the critic is better at distinguishing between real and fake samples when the real samples have been already observed. The computed risk certificates lie in the same order of magnitude as the test loss, which qualifies them as _non-vacuous_. Of note, the bound value provides an accurate model selection criterion for the parameter \(\sigma_{0}\), _i.e._, the model that has the lowest bound comes with the best test risk. This is true even if the model with \(\sigma_{0}=10^{-3}\) in the _grid_ experiment of Figure 2 comes with a bound value that is less tight than for other parameter values (relative to their test risk). This indicates that the models have to reach a higher complexity (measured by the term \(\operatorname{KL}(\rho\,||\,\pi)\) of (7)) to drive down the empirical risk, and illustrates the trade-off captured by Theorem 3.1. ## 5 Conclusion and Future Works Recent years have seen a growing interest in PAC-Bayesian theory, as a framework for deriving statistical guarantees for a variety of machine learning models (Guedj, 2019). Despite the long list of topics for which PAC-Bayesian bounds have been developed, generative models were missing from this list. In this work, we developed PAC-Bayesian bounds for adversarial generative models. We showed that these bounds can be numerically computed and provide non-vacuous risk certificates for synthetic datasets. In future works, we will explore the application of these bounds to real-life datasets. Unlike synthetic datasets, for which we can have all the information such as the intrinsic and extrinsic dimensions, real-life datasets come with the challenge that some information is unknown. Take, for instance, a real-life image dataset such as MNIST or Celeb-A. Computing the bounds of Theorem 3.1 would require the use of the diameter of the instance space, which is clearly irrelevant to the structure of the dataset. On the other hand, the bounds of Theorem 3.4 require some information about the smoothness of the data generating process. In future works, we will explore empirical estimations of that quantity. Figure 1: Negative critic losses and risk certificates of a model trained on a mixture of 8 Gaussians arranged on a ring. The x-axis shows the value of the prior parameters' standard deviation \(\sigma_{0}\). See Appendix (Fig. 4) for illustrations of the generated samples. Figure 2: Negative critic losses and risk certificates of a model trained on a mixture of 25 Gaussians arranged on a grid. The x-axis shows the value of the prior parameters' standard deviation \(\sigma_{0}\). See Appendix (Fig. 5) for illustrations of the generated samples.
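To complement the experimental description in Section 4, here is a minimal PyTorch-style sketch of the per-layer weight clipping used to keep the critic \(1\)-Lipschitz. The architecture sizes match the 8-Gaussian setup above, while the function name and the ReLU activations are our own illustrative choices, not details taken from the paper.

```python
import torch.nn as nn

def clip_critic_(critic: nn.Sequential) -> None:
    """Clip each linear layer's weights to [-1/sqrt(c_L), 1/sqrt(c_L)], where
    c_L is the layer's output dimension; per Remark B.1 of the paper, this is
    enough to make each layer 1-Lipschitz with respect to its input."""
    for layer in critic:
        if isinstance(layer, nn.Linear):
            bound = 1.0 / layer.out_features ** 0.5
            layer.weight.data.clamp_(-bound, bound)

# Critic for the 8-Gaussian ring data: 2 -> 100 -> 100 -> 100 -> 1.
critic = nn.Sequential(
    nn.Linear(2, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 1),
)
clip_critic_(critic)  # applied after every optimizer step, as in WGAN training
```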
2305.12765
General quantum measurements in relativistic quantum field theory
Single particle detection is described in a limited way by simple models of measurements in quantum field theory. We show that a general approach, using Kraus operators in spacetime constructed from natural combinations of fields, leads to an efficient model of a single particle detector. The model is free from any auxiliary objects as it is defined solely within the existing quantum field framework. It can be applied to a large family of setups where the time resolution of the measurement is relevant, such as Bell correlations or sequential measurement. We also discuss the limitations and working regimes of the model.
Adam Bednorz
2023-05-22T06:37:03Z
http://arxiv.org/abs/2305.12765v2
# General quantum measurements in relativistic quantum field theory ###### Abstract Single particle detection is described in a limited way by simple models of measurements in quantum field theory. We show that a general approach, using Kraus operators in spacetime constructed from natural combinations of fields, leads to an efficient model of a single particle detector. The model is free from any auxiliary objects as it is defined solely within the existing quantum field framework. It can be applied to a large family of setups where the time resolution of the measurement is relevant, such as Bell correlations or sequential measurement. We also discuss the limitations and working regimes of the model. ## I Introduction Quantum field theory makes predictions about scattering and decays of particles that can be measured. In contrast to quantum optics and condensed matter, where low energies allow very efficient detectors, high energy experiments, especially in the ultrarelativistic limit, cope with practical limitations, e.g. not all particles are detected [1]. The textbook approach is based on the relation between the transition probability between incoming and outgoing particles and the scattering matrix [2]. This is completely nonlocal as the states are considered in momentum representation. Exact modeling of real detectors is impractical because of enormous technical complexity. Instead, simplified models like Unruh-deWitt have been proposed, involving an auxiliary particle travelling on its worldline, imitating a detector [3; 4; 5; 6; 7; 8; 9; 10; 11], reducing the notion of particle to what a particle detector detects [12]. Such models were fine in the early days of high energy physics when the actual low efficiency was not a particular problem. It was sufficient to map the collection statistics to the theoretical predictions about scattering and decays. However, the Unruh-deWitt and all existing constructions are unable to model a \(\sim 100\%\) efficient detection of a single particle as a click [13]. The real measurements in modern experiments also have to be localized in time and space. Firstly, Bell-type tests of nonlocality [14; 15; 16; 17; 18] require time resolution and high efficiency, achieved in low energy optics [19; 20; 21; 22]. Higher energy attempts to make similar tests have failed so far [23]. Secondly, sequential measurements cannot destroy the particle after detection, allowing it to be measured once again. This kind of measurement is useful to reveal incompatibility between the measured quantities [24]. It is possible for immobile solid state objects [25] and only recently for photons [26]. The high energy analogues are still awaited. In some cases one can use the space-like momentum to suppress the vacuum noise [27] but it does not help to increase the efficiency. In this work, we propose a different approach to measurement models in quantum field theory. Instead of auxiliary particles, we shall directly define measurement Kraus operators [28], elements of a positive operator-valued measure (POVM) [29; 30], within the existing quantum field Hilbert space, replacing the old-fashioned projection, either direct or in the auxiliary detector's space. The Kraus operators are functionals of already existing fields. We show that a properly defined functional can correctly represent the measurement. However, not all classes of such operators can serve as single particle detection models.
To detect single particles (in a clickwise fashion), a nonlinear functional is necessary and it has a limited efficiency outside the safe energy regime. The best option is a universal measurement of energy-momentum density, which models an almost perfect measurement in a wide range of parameters. The model is able to map the outcomes onto almost dichotomic events, with well separated absence and presence of the particle, when a continuous flux of incoming particles arrives at the detector. Although the model is in principle perturbative in the detection strength, we are able to control potential higher order deviations and identify the working regime. The paper is organized as follows. We start from the standard definition of generalized measurement operators, mapping them onto the quantum field theory framework. Next, we present several natural classes of such measurements, pointing out their weaknesses and advantages. Finally, we discuss the applicability of the models to Bell tests and sequential measurements, and confront them with other options like the Unruh-deWitt model. Lengthy calculations are left in the Appendices. ## II General quantum measurements From the obsolete projections, through auxiliary detectors, modern quantum measurements evolved to a description that requires no extra objects. They are defined within the system's space by a set of Kraus operators \(\hat{K}\) such that \(\sum\hat{K}^{\dagger}\hat{K}=\hat{1}\) and the state \(\hat{\rho}\) after the measurement is transformed into \(\hat{\rho}^{\prime}=\hat{K}\hat{\rho}\hat{K}^{\dagger}\) (no longer normalized) with the probability \(\operatorname{Tr}\hat{\rho}^{\prime}\)[28; 29; 30]. It is straightforward to generalize it to a time sequence of operators \(\hat{K}_{j}\). Then \(\sum\hat{K}_{j}^{\dagger}\hat{K}_{j}=\hat{1}\) while the probability reads \[\operatorname{Tr}\hat{K}_{n}\cdots\hat{K}_{1}\hat{\rho}\hat{K}_{1}^{\dagger} \cdots\hat{K}_{n}^{\dagger} \tag{1}\] The choice of the sets \(\hat{K}_{j}\) is quite arbitrary, but a continuous time limit leads to the most natural choice, a Gaussian form \(\hat{K}(a)\propto\exp(-\lambda(a-\hat{A})^{2})\) for some operator \(\hat{A}\) and the outcome \(a\)[31]. In quantum field theory this definition can be adapted by expressing \(\hat{A}\) in terms of local field operators (see Appendix A for the standard quantum field notation conventions). What is even more helpful, Kraus operators can be also incorporated into the path integral framework and closed time path (CTP) formalism [32; 33; 34], see Appendix A. The CTP consists of three parts: a thermal Matsubara part (imaginary) and two flat parts (real), forward and backward. We shall distinguish them denoting \(x_{\pm}=x\pm i\epsilon\) for an infinitesimally small real positive \(\epsilon\). Then the Kraus operators need the field with the part specified, i.e. \(\hat{K}(x)\to K(x_{+})\), \(\hat{K}^{\dagger}(x)\to K(x_{-})\), see details in Appendix A. In other words, \(\hat{K}\), expressed by some field \(\phi(x)\), must be placed on the proper part of the CTP. For the moment, the relation between \(K\) and \(\phi\) is a completely general functional, which can be nonlinear and nonlocal, but we will shortly reduce this freedom considerably. The simplest example is the Gaussian form \[\hat{K}(a)\propto\exp\left[-\lambda\left(\int f(x)\hat{\phi}(x)dx-a\right)^{2 }\right], \tag{2}\] ignoring the \(\lambda-\)dependent global normalization.
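Before moving to fields, the defining POVM property is easy to verify numerically in a finite-dimensional toy setting: for a continuous outcome \(a\), the sum \(\sum\hat{K}^{\dagger}\hat{K}=\hat{1}\) becomes the integral \(\int da\,\hat{K}^{\dagger}(a)\hat{K}(a)=\hat{1}\). The sketch below is our own illustration (a random Hermitian \(\hat{A}\) on a 4-dimensional space, not an object from the text): it normalizes \(\hat{K}(a)\propto e^{-\lambda(a-\hat{A})^{2}}\) and checks the resolution of identity.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam = 4, 2.0
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (M + M.conj().T) / 2                    # toy Hermitian "observable"
evals, U = np.linalg.eigh(A)

def kraus(a):
    # K(a) = (2*lam/pi)^{1/4} exp(-lam (a - A)^2), via the spectral decomposition of A
    g = (2 * lam / np.pi) ** 0.25 * np.exp(-lam * (a - evals) ** 2)
    return U @ np.diag(g) @ U.conj().T

grid = np.linspace(evals.min() - 6 / lam**0.5, evals.max() + 6 / lam**0.5, 8001)
da = grid[1] - grid[0]
S = sum(kraus(a).conj().T @ kraus(a) for a in grid) * da
print(np.allclose(S, np.eye(d), atol=1e-4))  # resolution of identity: int K†K da = 1
```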
We can adapt this Gaussian form to the path integral formalism, combining both \(\hat{K}\) and \(\hat{K}^{\dagger}\) into a single form, distinguishing the CTP parts \[K(a)\propto\exp\left[-\lambda\sum_{\pm}\left(\int f(x)\phi(x_{\pm})dx-a \right)^{2}\right] \tag{3}\] where \(f(x)\) is a real-valued function localized in spacetime (nonzero only inside a finite region of the spacetime) such that \(f(x_{+})=f(x_{-})=f(x)\) on the flat part of the CTP. The apparent nonlocality of the above construction can be removed by the Fourier transform \[K(a)\propto\int d\xi_{+}d\xi_{-}\;e^{ia(\xi_{-}-\xi_{+})-(\xi_{ +}^{2}+\xi_{-}^{2})/4\lambda}\times\] \[\exp\int if(x)(\xi_{+}\phi(x_{+})-\xi_{-}\phi(x_{-}))dx \tag{4}\] which has an interpretation of two independent random external fields \(\xi_{\pm}\) for the upper and lower parts of the contour. We shall apply the measurement to the simple bosonic field with the Lagrangian density \[2\mathcal{L}(x)=\partial\phi(x)\cdot\partial\phi(x)-m^{2}\phi^{2}(x) \tag{5}\] To measure the field in the vacuum we can define the probability in terms of path integrals \[p(a)=Z\int\mathcal{D}\phi K(a)\exp\int i\mathcal{L}(z)dz, \tag{6}\] where \(Z\) is the path integral normalization, see Appendix B. The normalization of the probability can be checked by the identity \[\int daK(a)=\exp\left[-\frac{\lambda}{2}\left(\int f(x)\phi_{q}(x)dx\right)^{ 2}\right] \tag{7}\] with \(\phi_{q}(y)=\phi(y_{+})-\phi(y_{-})\) and \(2\phi_{c}(x)=\phi(x_{+})+\phi(x_{-})\). By Wick's theorem [35], all correlations can be expressed in terms of products of two-point propagators, \(\langle\phi(x)\phi(y)\rangle\). Simple, translation-invariant correlations read \[\langle\phi(x)\phi(y)\rangle=Z\int\mathcal{D}\phi\phi(x)\phi(y)\exp\int i \mathcal{L}(z)dz \tag{8}\] with special cases defined on the flat part (\(\operatorname{Im}\,x\to 0\)) \[S(x,y) =\langle\phi(x_{+})\phi(y_{+})\rangle=\langle\phi(x_{-})\phi(y_{- })\rangle^{\star},\] \[B(x,y) =\langle\phi(x_{+})\phi(y_{-})\rangle,\] \[C(x,y) =\langle\phi_{c}(x)\phi_{c}(y)\rangle,\] \[G(x,y) =\langle\phi(x_{\pm})\phi_{q}(y)\rangle, \tag{9}\] for \(2\phi_{c}(x)=\phi(x_{+})+\phi(x_{-})\) (symmetrized field, classical counterpart) and \(\phi_{q}(y)=\phi(y_{+})-\phi(y_{-})\) (antisymmetric, quantum susceptibility to external influence). Here \(S\) is known as the Feynman propagator, \(C\) is the symmetric correlation (real), while \(G\), the causal Green function, is imaginary and satisfies \(G=0\) for \(y^{0}>x^{0}\), and \(\langle\phi_{q}(x)\phi_{q}(y)\rangle=0\). To shorten notation, using the translation invariance (also for complex times), we shall identify \(X(x,y)\equiv X(x-y)\) for \(X=S,B,C,G\). Note that using the identity \(F(x_{\pm})=F_{c}(x)\pm F_{q}(x)/2\) we get \(B(x)=C(x)+(G(-x)-G(x))/2\) and \(S(x)=C(x)+(G(x)+G(-x))/2\). The function \(G\) is responsible for causality, i.e. \(G(x)\neq 0\) only if \(x\cdot x\geq 0\) and \(x^{0}\geq 0\). Now the right hand side of (7) is \(1\) because \(\phi(x_{+})\equiv\phi(x_{-})\) at the _latest_ time \(x^{0}\) (they meet at the returning tip of the time path). It is convenient to introduce the concept of the generating function \[S(\chi)=\int da\,p(a)e^{i\chi a}, \tag{10}\] which allows us to express moments and cumulants \[\langle a^{n}\rangle=\left.\frac{d^{n}S}{d(i\chi)^{n}}\right|_{\chi=0},\; \langle\langle a^{n}\rangle\rangle=\left.\frac{d^{n}\ln S(\chi)}{d(i\chi)^{n} }\right|_{\chi=0}.
\tag{11}\] They are related, in particular, \[\langle\langle a\rangle\rangle=\langle a\rangle,\;\langle\langle a ^{2}\rangle\rangle=\langle\delta a^{2}\rangle,\] \[\langle\langle a^{3}\rangle\rangle=\langle\delta a^{3}\rangle,\; \langle\langle a^{4}\rangle\rangle=\langle\delta a^{4}\rangle-3\langle\langle a ^{2}\rangle\rangle^{2} \tag{12}\] for \(\delta a=a-\langle a\rangle\). A Gaussian distribution has only the first two cumulants nonzero. It is helpful to introduce the convolution \[p(a)=(2\lambda/\pi)^{1/2}\int d\bar{a}\,e^{-2\lambda(a-\bar{a})^{2}}p.(\bar{a}) \tag{13}\] which leads to \(\ln S(\chi)=\ln S.(\chi)-\chi^{2}/8\lambda\), so only \(\langle\langle a^{n}\rangle\rangle=\langle\langle a^{n}\rangle\rangle.\) for \(n\neq 2\) while \(\langle\langle a^{2}\rangle\rangle=\langle\langle a^{2}\rangle\rangle.+1/4\lambda\). In other words, we can separate the measurement statistics into the Gaussian detection noise of variance \(1/4\lambda\), divergent in the limit \(\lambda\to 0\), and the bare function, which turns out to be a quasiprobability. It is normalized and has well-defined moments but lacks general positivity. In the case of our measurement we have a Gaussian function \[S.(\chi)=Z\int\mathcal{D}\phi\exp\int i\mathcal{L}(z)dz\times\] \[\exp\left[-\frac{\lambda}{2}\left(\int f(x)\phi_{q}(x)dx\right)^{2 }\right]\exp\int i\chi f(y)\phi_{c}(y)dy. \tag{14}\] Therefore \(\langle a\rangle=0\) in the vacuum and \[\langle a^{2}\rangle.=Z\int\mathcal{D}\phi\exp\int i\mathcal{L}(x )dx\times\] \[\exp\left[-\frac{\lambda}{2}\left(\int f(x)\phi_{q}(x)dx\right)^{ 2}\right]\left(\int f(y)\phi_{c}(y)dy\right)^{2} \tag{15}\] Now, the term \(\phi(x_{+})-\phi(x_{-})\) must be contracted with some \(\phi(y)\) with \(y^{0}>x^{0}\), otherwise it vanishes. It leaves essentially only a few terms \[\langle a^{2}\rangle.=\int dxdyf(x)f(y)C(x-y)\] \[-\lambda\left(\int dxdyf(x)f(y)G(x-y)\right)^{2}. \tag{16}\] Suppose now that we perturb the vacuum. This is the natural physical situation when a beam of particles is sent to the detector. Let \[\exp\int idz\mathcal{L}(z)\rightarrow\mathcal{P}[\phi]\exp\int idz\mathcal{L} (z) \tag{17}\] where \(\mathcal{P}\) denotes the perturbation \[\mathcal{P}[\phi]=\exp\int ig(w)\phi_{q}(w)dw \tag{18}\] by the shift of the field induced by \(g(w)\) (localized in spacetime). In principle we should discuss not only the particle detector but also the generator. Since we prefer to focus on the detection part, we stay with the minimal description with the single perturbation function \(g\). Almost all the above formulas remain valid, i.e. the probability has the same \(g\)-independent normalization and is Gaussian. It only gets a nonzero average. We calculate \[S.(\chi)=Z\int\mathcal{D}\phi\exp\int i\mathcal{L}(z)dz\times\] \[\exp\int ig(w)\phi_{q}(w)dw\times\] \[\exp\left[-\frac{\lambda}{2}\left(\int f(x)\phi_{q}(x)dx\right)^{ 2}\right]\exp\int i\chi f(y)\phi_{c}(y)dy. \tag{19}\] The average is \(\lambda\)-independent while the higher cumulants remain unaffected, \[\langle a\rangle.=\int dwdyf(y)g(w)iG(y-w). \tag{20}\] As we see, the linear measurement yields simply Gaussian statistics and cannot be used to model single particle detection with Poissonian clicks. Nevertheless, the self-consistency of the above construction is already a promising signature that the approach via Kraus operators is correct.
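The cumulant bookkeeping around (13) is easy to check numerically: convolving any bare distribution with the Gaussian detection noise shifts the second cumulant by \(1/4\lambda\) and leaves all others intact. A minimal numpy sketch follows (illustrative parameters only, with a toy bare distribution of Poissonian clicks anticipating Section III):

```python
import math
import numpy as np

def cumulants(p, a, da):
    """First four cumulants of a density p on the uniform grid a."""
    p = p / (p.sum() * da)
    m = [(a**k * p).sum() * da for k in (1, 2, 3, 4)]
    c1 = m[0]
    c2 = m[1] - m[0]**2
    c3 = m[2] - 3*m[1]*m[0] + 2*m[0]**3
    c4 = m[3] - 4*m[2]*m[0] - 3*m[1]**2 + 12*m[1]*m[0]**2 - 6*m[0]**4
    return np.array([c1, c2, c3, c4])

lam, eta, alpha = 50.0, 1.0, 0.2
N = 30001
a = np.linspace(-5.0, 10.0, N); da = a[1] - a[0]
s = (np.arange(N) - N // 2) * da                 # symmetric grid for the noise kernel

w = 2e-4                                         # small width of the toy "click" spikes
bare = sum(math.exp(-alpha) * alpha**n / math.factorial(n)
           * np.exp(-((a - n * eta)**2) / w) / math.sqrt(math.pi * w)
           for n in range(8))
noise = np.sqrt(2 * lam / np.pi) * np.exp(-2 * lam * s**2)   # variance 1/(4*lam)
blurred = np.convolve(bare, noise, mode="same") * da

print(cumulants(blurred, a, da) - cumulants(bare, a, da))    # ~ [0, 1/(4*lam), 0, 0]
```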
## III Nonlinear measurement We shall define a quadratic Kraus operator and apply it to the vacuum and a continuous plane wave, showing that it exhibits features of Poisson statistics in contrast to the Gaussian linear case. ### Quadratic measurement Let us define Kraus operators in terms of path integrals \[K(a)\propto\exp\left[-\lambda\sum_{\pm}\left(\int f(x)\phi^{2}(x_{\pm})dx-a \right)^{2}\right], \tag{21}\] which can be made local, as previously, by a Fourier transform. Analogously to the field measurement, we can write down the formal expression for the generating function, namely \[S.(\chi)=Z\int\mathcal{D}\phi\exp\int i\mathcal{L}(z)dz\times\] \[\exp\left[-\frac{\lambda}{2}\left(\int f(x)\phi_{q}^{2}(x)dx \right)^{2}\right]\exp\int i\chi f(y)\phi_{c}^{2}(y)dy \tag{22}\] where \(\phi_{c}^{2}=(\phi_{c})^{2}+(\phi_{q})^{2}/4\) and \(\phi_{q}^{2}=2\phi_{c}\phi_{q}\). Our aim is to calculate \(\langle a^{n}\rangle\) and show that for some linear perturbation \(g\), there exists a regime (some \(f\)) where the distribution is Poissonian, \[p(a=n\eta)=e^{-\alpha}\alpha^{n}/n!, \tag{23}\] proving the approximate dichotomy \(a=0,\eta\) from \(\langle a^{2}(a-\eta)^{2}\rangle\simeq 0\). The Poisson distribution has simple cumulants (11), \(\langle\langle a^{n}\rangle\rangle=\alpha\eta^{n}\), which coincide with the moments for \(\alpha\ll 1\). We will attempt to expand \(\langle a^{n}\rangle\) in powers of \(\lambda\) and estimate an upper bound for the higher terms of the expansion using Wick's theorem. ### The vacuum We shall apply the quadratic measurement to the vacuum. It is then useful to calculate the moments, \[\langle a^{n}\rangle_{\cdot}=Z\int\mathcal{D}\phi\exp\int i \mathcal{L}(z)dz\times\] \[\exp\left[-\frac{\lambda}{2}\left(\int f(x)\phi_{q}^{2}(x)dx \right)^{2}\right]\left(\int f(y)\phi_{c}^{2}(y)dy\right)^{n} \tag{24}\] expanding in powers of \(\lambda\). Problems arise when \(\phi^{2}\) contains two fields at the same point, so e.g. \(\langle\phi^{2}(x)\rangle=\langle\phi_{c}^{2}(x)\rangle\rightarrow\infty\). It must be _renormalized_, e.g. by subtracting the counteraverage for a fictitious mass \(M\rightarrow\infty\). We can subtract large masses, i.e. \[\langle\phi_{c}^{2}(x)\rangle\rightarrow\langle\phi_{c}^{2}(x)\rangle+\sum_{ j}\epsilon_{j}\langle\phi_{c}^{2}(x)\rangle_{m\to M_{j}}=\Lambda \tag{25}\] where \(M_{j}\gg m\) are large renormalization masses (Pauli-Villars) [2; 36] while \(\epsilon_{j}\) are some numbers (e.g. \(\pm 1\)) not too large. The constant \(\Lambda\) is an unobservable calibration shift. From now on we also make this shift in \(a\), i.e. \(a\to a-\Lambda\int f(x)dx\). In the lowest order of \(\lambda\) \[\langle a^{2}\rangle_{\cdot}=\int f(x)f(y)\langle\phi_{c}^{2}(x)\phi_{c}^{2}( y)\rangle dxdy \tag{26}\] Denoting \(\tilde{f}(k)=\int dxe^{ik\cdot x}f(x)/(2\pi)^{D+1}\), with the help of \[\langle\phi_{c}^{2}(x)\phi_{c}^{2}(y)\rangle=\langle\phi_{c}^{2} (x)\phi_{c}^{2}(y)\rangle-\langle\phi_{q}^{2}(x)\phi_{q}^{2}(y)\rangle/4\] \[=\langle\phi^{2}(x_{+})\phi^{2}(y_{-})+\phi^{2}(x_{-})\phi^{2}(y_ {+})\rangle/2 \tag{27}\] we get (see Appendix C), \[\langle a^{2}\rangle_{\cdot}=\int W(q)|\tilde{f}(q)|^{2}dq \tag{28}\] with \[W(q)=\frac{\pi^{2}(q\cdot q/4-m^{2})^{(D-2)/2}S_{D}}{\sqrt{q\cdot q}} \tag{29}\] for \(q\cdot q>4m^{2}\) and \(0\) otherwise. Here \(S_{D}=2\pi^{D/2}/\Gamma(D/2)\) is the surface of a unit ball in \(D\) dimensions. It shows that nonlinear fluctuations of the field need pair creation.
For \(f\) varying over time/length scales much larger than \(1/m\), these fluctuations are negligible, allowing low-noise measurement in the vacuum. ### Measurement of the plane wave We want the perturbation \(g\) to generate an enveloped wave of a particular frequency, i.e. of the form \[g(x)=e^{iE_{p}x^{0}}h(x^{1}+L)+e^{-iE_{p}x^{0}}h^{*}(x^{1}+L) \tag{30}\] with a function \(h(y)\) localized at \(|y|\ll L\) and \(E_{p}>m\), which should generate a plane wave in the \(x^{1}\) direction. It corresponds to a constant coherent flux of free particles in the \(x^{1}\) direction. Our measurement model is able to capture single particles in the flux in contrast to the vacuum and its fluctuations. The effect of the perturbation can be described by \[G_{g}(x)=\int dyG(x-y)g(y) \tag{31}\] in the limit \(L\rightarrow\infty\), where it reads (see Appendix D) \[G_{g}(x)=2i\text{Im}Ae^{i|p|x^{1}-iE_{p}x^{0}} \tag{32}\] or equivalently \[G_{g}(x)=G_{g+}(x)+G_{g-}(x),\] \[G_{g\mp}=(iA_{i}\pm A_{r})e^{\mp ip\cdot x}, \tag{33}\] for \(p=(E_{p},|p|,0,0)\) and \(2A=\tilde{h}(-|p|)/|p|\). For a linear perturbation \(g\) we can now determine the measurement statistics, inserting (18) into (24). The measurement function \(f(x)\) will vary on a scale much longer than \(1/E_{p}\). Defining \[F(x)=\sum_{\mp}e^{\mp ip\cdot x}F_{\mp}(x), \tag{34}\] for \(F=G,C,S,B\), and expanding \(k=k^{\prime}\mp p\) in (16) for small \(k^{\prime}\) \[C_{\mp}(x) =\int\frac{dk^{\prime}}{2(2\pi)^{D}}\delta(2k^{\prime}\cdot p)e^{ ik^{\prime}\cdot x},\] \[G_{\mp}(x) =\int\frac{idk^{\prime}}{(2\pi)^{D+1}}\frac{e^{ik^{\prime}\cdot x }}{\mp 2p\cdot k^{\prime}_{+}}, \tag{35}\] an explicit calculation gives \[B_{+}(x) =\delta(x^{\perp})\delta(x^{0}v-x^{1})/2E_{p},\] \[C_{\mp}(x) =B_{+}(x)/2,\] \[G_{\mp}(x) =\pm\theta(x^{0})B_{+}(x),\] \[S_{\mp}(x) =\theta(\pm x^{0})B_{+}(x), \tag{36}\] and \(B_{-}=0\) with \(x^{\perp}=(x^{2},x^{3},\dots)\) and the speed of the field \(v=|p|/E_{p}\) (in units of the speed of light). This is a very intuitive physical picture since the dynamics is concentrated on the lines of propagation at constant speed \(v\). In the lowest order of \(g\), the average \(\langle(g\phi)(g\phi)(f\phi^{2})\cdots(f\phi^{2})\rangle\) turns out to be a sum of Feynman graphs with part of the vertices on the \(+\) side and part on the \(-\) side of the CTP, see Fig. 1. We shall now express the first moments \(\langle a^{n}\rangle_{0}\), \(n=1,2,3,4\), in terms of the just derived functions; in the lowest order of \(\lambda\) we get \[\langle a^{n}\rangle_{\cdot}\simeq 2E_{p}|A|^{2}\times\] \[\int d\mathbf{x}\left(\int dx^{0}f(x^{0},x^{1}+vx^{0},x^{\perp})/E_{ p}\right)^{n}, \tag{37}\] where we used the shift \(x^{1}\to x^{1}-vx^{0}\) in the integrals. The factor \(2E_{p}\) appears because each \(\phi^{2}\) contains 2 fields \(\phi\) giving 2 per vertex and \(2^{n}\) in total. On the other hand \(2\phi_{c}^{2}=\phi^{2}(x_{+})+\phi^{2}(x_{-})\) so we get a factor \(2^{-n}\) to cancel out with the above one. Each line (\(S_{\pm}\) and \(B_{+}\)) has the factor \((2E_{p})^{-1}\) giving \((2E_{p})^{1-n}\) in total. For \(n\) vertices we have all possible decompositions into \(n_{+}\) and \(n_{-}=n-n_{+}\) vertices with \(n_{+}=0\dots n\). All combinations give the total factor \((1+1)^{n}=2^{n}\) (by the binomial formula). There are no additional factors \(n_{\pm}!\) because these factors cancel out (\(n_{\pm}\) permutations cancel out with \(\theta(\pm x^{0})\) ordering).
To get the Poisson statistics, it is sufficient that \[\int dx^{0}f(x^{0},x^{1}+vx^{0},x^{\perp})/E_{p}=\left\{\begin{array}{ll}\eta &\mbox{if $\mathbf{x}\in V$}\\ 0&\mbox{if $\mathbf{x}\notin V$}\end{array}\right. \tag{38}\] where \(V\) is a certain volume in \(D\)-dimensional space. In other words, we need a constant integral of the measuring function \(f\) along lines of speed \(v\), i.e. \(x^{1}=vx^{0}\); see Figure 2. We have also \(\alpha=2E_{p}|A|^{2}V\). The approximation is valid as long as the variation lengthscale of \(f\), say \(\ell\), is much larger than the wavelength \(1/p\), i.e. \(p\ell\gg 1\); see Figure 3. Finally, higher order terms in the \(\lambda\) expansion will contain \(\phi_{q}^{2}=\phi^{2}(x_{+})-\phi^{2}(x_{-})\). Fortunately, in our approximation (36) these terms cancel. This is because inserting such points in the existing graph always gives two opposite expressions, see Figure 4. Figure 1: A chain of propagators \(G\), \(S\) and \(B\). The arrow points in the time direction; positive and negative imaginary sides are denoted by \(+\) and \(-\), respectively. Figure 3: Lengthscales in measurement. The wavelength \(1/p\) is shorter than the variation length \(\ell\) of the envelope function \(f\). Figure 2: Projection of the measurement lines onto the \(D\)-dimensional spatial base.
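Before the perturbative estimates, it is instructive to see what this regime looks like at the level of outcomes. The following toy Monte-Carlo sketch (illustrative parameters of our own choosing, anticipating the constraint chain (43) below) draws Poissonian clicks of size \(\eta\), blurs them with the Gaussian detection noise of variance \(1/4\lambda\), and checks the near-dichotomy \(\langle a^{2}(a-\eta)^{2}\rangle\approx 0\) expected for \(\alpha\ll 1\).

```python
import numpy as np

rng = np.random.default_rng(0)
eta, alpha, lam = 1.0, 0.1, 1.0e4        # satisfies 1 << alpha**-2 << lam*eta**2
clicks = rng.poisson(alpha, size=200_000)
noise = rng.normal(0.0, 1.0 / (2.0 * np.sqrt(lam)), size=clicks.size)
a = clicks * eta + noise                 # detector record: clicks plus detection noise

print(np.mean(a**2 * (a - eta)**2))      # small (dominated by rare double clicks ~ alpha^2)
print(np.mean(a), alpha * eta)           # first cumulant agrees with the Poisson value
```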
On the other hand the detection noise cannot blur the distribution giving another constraint \(1\ll\lambda(\alpha\eta)^{2}\). If we want maximally a single click, then additionally \(\alpha\ll 1\). Summarizing, the single click statistics occurs if \[1\ll\alpha^{-2}\ll\lambda\eta^{2}\ll\ell LE_{p}/L_{0} \tag{43}\] Finally, the previously calculated vacuum fluctuation should also be small. They are almost completely negligible (exponentially) if \(m\ell\gg 1\), i.e. the measurement shape function varies over scales impossible to generate pairs of particles. However, in the massless (or low mass) case, the fluctuation are estimated by \(W\sim(q\cdot q)^{(D-3)/2}\) giving \(|q|\sim 1/\ell\), i.e. \(\langle a^{2}\rangle_{0}\sim\ell^{2-2D}\tilde{f}^{2}(0)\) and \(\tilde{f}(0)\sim VFL_{0}\). At \(D=1\) they are actually divergent logarithmically with the shrinking mass \(m\). For \(D>1\) we need also \(VE_{p}/\ell^{D-1}\ll\alpha\). Together with the previous condition it implies \(\ell LE_{p}/L_{0}\gg 1\gg VE_{p}/\ell^{D-1}\) which gives the impossible requirement \(\ell^{D}\gg VL_{0}/L\). The reason of this failure is that \(\phi^{2}\) is not a conserved quantity and the measurement can easily change it locally (above the mass threshold). We shall resolve this problem replacing \(\phi^{2}\) by the conserved energy-momentum density \(T_{\mu\nu}\). ## IV Energy-momentum measurement We shall modify the previous simple quadratic measurement replacing \(\phi^{2}\) by energy-momentum density, which is still quadratic in \(\phi\) but the addtional derivatives turn out to suppress unwanted noise in the high energy limit. Figure 4: Approximate cancellation of disturbances caused by the measurement. Insertion of \(\phi_{q}^{2}\) at \(x_{+}\) and \(x_{-}\) in the chains of propagators (36) on the line of propagation gives two exactly opposite terms. Figure 5: The Feynman-Schwinger graph contributing to the \(\lambda\) correction to \(\langle a\rangle\) ### Energy-momentum tensor Energy-momentum stress tensor by Noether theorem reads \[T^{\mu\nu}=\frac{\partial\mathcal{L}}{\partial(\partial_{\mu}\phi)}\partial^{\nu} \phi-g^{\mu\nu}\mathcal{L} \tag{44}\] with \(\partial^{\nu}=g^{\nu\tau}\partial_{\tau}\) is equal in our case \[T^{\mu\nu}=\partial^{\mu}\phi\partial^{\nu}\phi-g^{\mu\nu}(g^{\sigma\tau} \partial_{\sigma}\phi\partial_{\tau}\phi-m^{2}\phi^{2})/2 \tag{45}\] We define the energy-momentum measurement \[K(a)\propto\exp\left[-\lambda\sum_{\pm}\left(\int f_{\mu\nu}(x)T^{\mu\nu}(x_{ \pm})dx-a\right)^{2}\right], \tag{46}\] which is normalized in the same way as the field, with symmetric \(f_{\mu\nu}=f_{\nu\mu}\). The generating function reads \[S.(\chi)=Z\int\mathcal{D}\phi\exp\int i\mathcal{L}(z)dz\times\] \[\exp\left[-\frac{\lambda}{2}\left(\int f_{\mu\nu}(x)T^{\mu\nu}_{ q}(x)dx\right)^{2}\right]\times\] \[\exp\int i\chi f_{\mu\nu}(y)T^{\mu\nu}_{c}(y)dy, \tag{47}\] where we denoted \(T^{\mu\nu}_{q}(x)=T^{\mu\nu}(x_{+})-T^{\mu\nu}(x_{-})\) and \(2T^{\mu\nu}_{c}(x)=T^{\mu\nu}(x_{+})+T^{\mu\nu}(x_{-})\). Note also that \[2T^{\mu\nu}_{q}(x)=\partial^{\mu}\phi_{c}\partial^{\nu}\phi_{q}+ \partial^{\mu}\phi_{q}\partial^{\nu}\phi_{c}\] \[-g^{\mu\nu}(\partial\phi_{c}\cdot\partial\phi_{q}-m^{2}\phi_{c} \phi_{q}),\] \[T^{\mu\nu}_{c}(x)=\partial^{\mu}\phi_{c}\partial^{\nu}\phi_{c}-g ^{\mu\nu}(\partial\phi_{c}\cdot\partial\phi_{c}-m^{2}\phi_{c}^{2})/2\] \[+\partial^{\mu}\phi_{q}\partial^{\nu}\phi_{q}/4-g^{\mu\nu}( \partial\phi_{q}\cdot\partial\phi_{q}-m^{2}\phi_{q}^{2})/8. 
\tag{48}\] As in the case of \(\phi^{2}\), the calculations involve correlations of the type \(\langle(g\phi)\cdots(g\phi)(fT)\cdots(fT)\rangle\). However, there are dangerous contact terms to be regularized by fermionic ghosts [37; 2], see Appendix E. Fortunately, once identified, we can basically forget about the ghosts and just keep the unitarity constraint when calculating loops: \(\langle T_{q}(x)T_{q}(y)\cdots T_{q}(w)\rangle=0\) as a calculation rule if only \(T_{q}\) factors are involved. Basic examples of graphs involved in our calculations are depicted in Fig. 6. From now on we shall subtract the zero-temperature average from \(T\), i.e. \(T\to T-\langle T\rangle_{0}\), as it is unobservable, contains the renormalization parameters, and we are interested only in the noise and sensitivity of the detector to the incoming particles. By this shift \(\langle T\rangle=0\). ### Measurement of the vacuum In the lowest order of \(\lambda\) \[\langle a^{2}\rangle_{0}=\int f_{\mu\nu}(x)f_{\xi\eta}(y)\langle T^{\mu\nu}_{c} (x)T^{\xi\eta}_{c}(y)\rangle dxdy \tag{49}\] analogously to the \(\phi^{2}\) case. We get (see Appendix F) \[\langle a^{2}\rangle_{0}=\int dq\,\tilde{f}_{\mu\nu}(q)\tilde{f}_{\xi\eta}(-q) \frac{X^{\mu\nu\xi\eta}(q)W(q)}{(q\cdot q)^{2}(D+2)D} \tag{50}\] with [38] \[X^{\mu\nu\xi\eta}=P(q^{\mu}q^{\nu}-(q\cdot q)g^{\mu\nu})(q^{\xi} q^{\eta}-(q\cdot q)g^{\xi\eta})\] \[+R[(q^{\mu}q^{\eta}-(q\cdot q)g^{\mu\eta})(q^{\xi}q^{\nu}-(q\cdot q )g^{\xi\nu})\] \[+(q^{\mu}q^{\xi}-(q\cdot q)g^{\mu\xi})(q^{\nu}q^{\eta}-(q\cdot q) g^{\nu\eta})],\] \[P=m^{4}+(D+1)m^{2}q\cdot q/2+(q\cdot q)^{2}(D^{2}-3)/16,\] \[R=m^{4}-m^{2}q\cdot q/2+(q\cdot q)^{2}/16, \tag{51}\] and \(W\) defined by (29). The case \(D=1\) is degenerate, see Appendix F. ### Poisson statistics We can adapt most of the results from the \(\phi^{2}\) case, replacing \(f\) with \(f_{\mu\nu}\) and adding the factor \(p^{\mu}p^{\nu}\) from \(T\) in (37). Then e.g. \(\eta=\int dx^{0}f_{\mu\nu}(x^{0},x^{1}+vx^{0},x^{\perp})p^{\mu}p^{\nu}/E_{p}\). Replacing further \(F\) with \(F_{\mu\nu}\) in (41), we have \(\eta=F_{\mu\nu}p^{\mu}p^{\nu}L_{0}/E_{p}\). We only need to include the deviations from the derivatives in \(T\) that can act on the variables in \(f\) when considering the potential disturbance (the \(\sim\lambda\) term). Fortunately, if say \(F_{00}=F\) and \(0\) otherwise (assuming we measure energy density, not momentum), their contribution to the disturbance is negligible, and we stay with (43). What is qualitatively different is that the vacuum fluctuations remain small in the massless limit. From our above discussion we then have \(\langle a^{2}\rangle\sim\ell^{-2(D+1)}\tilde{F}^{2}(0)\). If we want \(\ell^{-D-1}\tilde{F}\ll\alpha\eta\) then \(V/\ell^{D+1}E_{p}\ll\alpha\). In contrast to the \(\phi^{2}\) case, increasing \(E_{p}\) helps to satisfy this and the other requirements, due to the fact that we work with a conserved quantity. Figure 6: Loops and chains in the graphs involving the energy-momentum tensor. They vanish if solely \(T_{q}\) (not \(T_{c}\)) appears in a loop/chain ## V Conclusion We have presented a self-consistent measurement model suitable for high-energy particle detectors. Defined completely within the existing framework of quantum field theory combined with POVM and Kraus functionals, it allows one to identify a particle as a click with almost perfect efficiency, in contrast to the Unruh-DeWitt model. The presented examples stress the importance of nonlinearity and of the connection with conservation principles in choosing a proper Kraus functional. 
The strength of the measurement, the parameter \(\lambda\), must be neither too low, to keep the detection noise small, nor too high, to keep the backaction small. Only an intermediate regime, depending on the energy, time and length scales, allows \(\sim 100\%\) efficiency of the detection. Although we analyzed a simple bosonic field, the ideas are quite general and can be easily extended to fermions and compound particles. We believe that further work on such models will help to establish a Bell-type family of experiments in the high-energy regime, identifying the main technical challenges. One can also explore completely different detector functions \(f\), e.g. an analogue of an accelerating observer as in the original Unruh model, or a sequence of measurements. Note also that our model is a theoretical idealization and it may need practical adjustments taking into account specific experiments. ## Acknowledgements I thank W. Belzig and P. Chankowski for many fruitful discussions on the subject and for pointing out important issues. ## Appendix A Closed time path formalism We shall summarize the notation of the quantum field theory of a scalar field in \(D+1\) dimensions (\(D\) spatial dimensions and \(1\) time, with \(D=3\) in full space but also \(D=1\) for simple illustrative cases), \(x=(x^{0}=ct,\mathbf{x})\), with time \(t\), speed of light \(c\) and spatial position \(\mathbf{x}=(x^{1},\dots,x^{D})\). For simplicity \(c=\hbar=1\). We denote partial derivatives \(\partial_{\mu}=\partial/\partial x^{\mu}\) and the Minkowski scalar product \(A\cdot B=A^{\mu}B_{\mu}=A^{\mu}g_{\mu\nu}B^{\nu}\) with flat metric \(g^{\mu\nu}=g_{\mu\nu}=1\) for \(\mu=\nu=0\), \(g^{\mu\nu}=g_{\mu\nu}=-1\) for \(\mu=\nu=1\dots D\) and \(g^{\mu\nu}=g_{\mu\nu}=0\) for \(\mu\neq\nu\). The real scalar field \(\hat{\phi}(\mathbf{x})\) with conjugate field \(\hat{\pi}(\mathbf{x})\) obeys the commutation relation \[[\hat{\phi}(\mathbf{x}),\hat{\pi}(\mathbf{y})]=i\delta(\mathbf{x}-\mathbf{y}) \tag{10}\] for \([\hat{A},\hat{B}]=\hat{A}\hat{B}-\hat{B}\hat{A}\). The relativistic field Hamiltonian reads \[\hat{H}=\int d\mathbf{x}(\hat{\pi}^{2}(\mathbf{x})+|\nabla\hat{\phi}(x)|^{2}+m^{2} \hat{\phi}^{2}(\mathbf{x}))/2 \tag{11}\] Here the \(\nabla\) term is in fact a sum of partial derivatives \[|\nabla\hat{\phi}(x)|^{2}=\sum_{j=1}^{D}(\partial_{j}\hat{\phi}(\mathbf{x}))^{2}. \tag{12}\] The Heisenberg picture transforms the field in time, \[\hat{\phi}(x)=e^{i\hat{H}t}\hat{\phi}(\mathbf{x})e^{-i\hat{H}t} \tag{13}\] The translation into path integrals gives \[\langle\Phi^{\prime}|\exp(-i\hat{H}t)|\Phi\rangle=\int\mathcal{D}\phi\exp\int i \mathcal{L}(x)dx \tag{14}\] with \(\phi(x^{0}=0,\dots)=\Phi\) and \(\phi(x^{0}=t,\dots)=\Phi^{\prime}\), where the Lagrangian density \(\mathcal{L}\) is given by (5). For the fermionic fields, used here only to generate renormalization counterterms, one has to replace the commutator \([\hat{\phi},\hat{\pi}]\) in (10) by the anticommutator \(\{\hat{\phi},\hat{\pi}\}=\hat{\phi}\hat{\pi}+\hat{\pi}\hat{\phi}\) and introduce Grassmann anticommuting fields \(\phi\) in the path integrals, i.e. \(\phi(x)\phi(y)=-\phi(y)\phi(x)\), \(\int d\phi=0\), \(\int\phi d\phi=1\). The complications, including the signs and order conventions, are thoroughly described in the literature [2]. The time flow over the Schwinger-Kadanoff-Baym-Matsubara closed time path (CTP) [32; 33; 34; 39; 40; 41; 42] is parameterized by \(t(s)\) (optionally with a subscript indicating the specific point in spacetime, \(x^{0}(s_{z})\)). 
The real parameter \(s\in[s_{i},s_{f}]\) must satisfy \(dt/ds\neq 0\) and \(\text{Im }dt/ds\leq 0\), with the jump \(t(s_{i})-t(s_{f})=i\beta\) for \(\beta=1/k_{B}T>0\) (inverse temperature), see Fig. 7. For fermionic fields, the jump is accompanied by a sign reversal for each field. In the case \(k_{B}T\to 0\) we have \(t(s_{\mp})\rightarrow\pm i\infty\). For convenience the flat part splits into \(t\to t_{\pm}=t(s_{\pm})=t\pm i\epsilon\) (\(\epsilon\to 0_{+}\), a small positive number going to \(0\) in the limit) and \(x_{\pm}=(t_{\pm},\mathbf{x})\) with \(s_{+}<s_{-}\). We use the derivative rule \(\partial_{0}=(dt/ds)^{-1}\partial/\partial s\) and the differential \(dx=dx^{0}dx^{1}\cdots dx^{D}\) with \(dx^{0}=(dt/ds)ds\) and \[\delta(x-y)=\delta(x^{0}-y^{0})\delta(x^{1}-y^{1})\cdots\delta(x^{D}-y^{D}) \tag{15}\] with \(\delta(x^{0}-y^{0})=\delta(s_{x}-s_{y})/(dt/ds)|_{s=s_{x}=s_{y}}\). These rules allow one to establish full compliance of the CTP with perturbative relativistic quantum field theory [43; 44]. ## Appendix B Two point correlations With the definitions in Appendix A and (5) one can calculate all relevant quantum field theory functions, i.e. \[\langle\phi(x)\phi(y)\cdots\rangle=\langle\mathcal{T}\hat{\phi}(x) \hat{\phi}(y)\cdots\rangle\] \[=Z\int\mathcal{D}\phi\phi(x)\phi(y)\exp\int i\mathcal{L}(z)dz \tag{10}\] with \[Z^{-1}=\int\mathcal{D}\phi\exp\int i\mathcal{L}(z)dz, \tag{11}\] where \(\mathcal{T}\) denotes ordering by \(s\), i.e. \[\mathcal{T}\hat{\phi}(x)\hat{\phi}(y)=\left\{\begin{array}{ll}\hat{\phi}(x) \hat{\phi}(y)&\text{if }s_{x}>s_{y},\\ \hat{\phi}(y)\hat{\phi}(x)&\text{if }s_{y}>s_{x}.\end{array}\right. \tag{12}\] Since the formal path functional is Gaussian, all correlations split into these simple second-order correlations (Wick theorem [35]) \[\langle\phi(x_{1})\cdots\phi(x_{n})\rangle=\] \[2^{-n/2}\sum_{\sigma}\prod_{j=1}^{n/2}\langle\phi(x_{\sigma(j)})\phi( x_{\sigma(n/2+j)})\rangle \tag{13}\] for even \(n\), while \(0\) for odd \(n\), summing over all permutations. For the fermionic field one has to include also the permutation sign \(\text{sgn}\,\sigma\). In the case of linear perturbation we often use the identity \(\int dae^{-2\lambda a^{2}+ba}=(\pi/2\lambda)^{1/2}e^{b^{2}/8\lambda}\). Applying the functional derivative \[\delta(x-y)=\frac{\delta\phi(y)}{\delta\phi(x)}=\left\langle \frac{\delta\phi(y)}{\delta\phi(x)}\right\rangle\] \[=Z\int\mathcal{D}\phi\frac{\delta\phi(y)}{\delta\phi(x)}\exp\int i \mathcal{L}(z)dz \tag{14}\] and integrating by parts \[\delta(x-y)Z^{-1}=\int\mathcal{D}\phi\phi(y)\int\frac{\delta \mathcal{L}(w)}{i\delta\phi(x)}dw\exp\int i\mathcal{L}(z)dz\] \[=\int\mathcal{D}\phi i\phi(y)(\partial\cdot\partial\phi(x)+m^{2} \phi(x))\exp\int i\mathcal{L}(z)dz \tag{15}\] we get \[(g^{\mu\nu}\partial_{\mu}\partial_{\nu}+m^{2})\langle\phi(x)\phi(y)\rangle= \delta(x-y)/i \tag{16}\] with the derivative over \(x\). The equation can be solved by a Fourier transform in \(\mathbf{x}\) and \(\mathbf{y}\), i.e. \[\phi(x)=\int\frac{d\mathbf{k}}{(2\pi)^{D/2}}e^{i\mathbf{k}\cdot\mathbf{x}}\phi(x^{0},\mathbf{k}) \tag{17}\] with the standard scalar product \(\mathbf{k}\cdot\mathbf{x}=\sum_{j=1}^{D}k^{j}x^{j}\). We obtain \[(\partial_{0}\partial_{0}+\mathbf{k}\cdot\mathbf{k}+m^{2})\langle\phi(x^{ 0},\mathbf{k})\phi(y^{0},\mathbf{q})\rangle\] \[=\delta(x^{0}-y^{0})\delta(\mathbf{k}-\mathbf{q})/i \tag{18}\] with the standard \(\delta(\mathbf{k}-\mathbf{q})=\delta(k^{1}-q^{1})\cdots\delta(k^{D}-q^{D})\). 
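Before quoting the solution, it is worth recording the elementary distributional identity that underlies it (our own added check, writing \(t=x^{0}-y^{0}\) and \(E=E_{k}\)): \[\partial_{t}^{2}e^{-iE|t|}=\left(-iE\,\mathrm{sgn}\,t\right)^{2}e^{-iE|t|}-2iE\,\delta(t)=-E^{2}e^{-iE|t|}-2iE\,\delta(t)\,,\] so \(e^{-iE|t|}\) solves the homogeneous equation away from \(t=0\) and produces the required \(\delta\)-function source at \(t=0\); the thermal solution below is built from exactly such terms.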
This is a simple linear equation with solution \[\langle\phi(x^{0},\mathbf{k})\phi(y^{0},\mathbf{q})\rangle=\sum_{\pm}\pm\delta(\mathbf{k}- \mathbf{q})\frac{e^{\mp i|x^{0}-y^{0}|E_{k}}}{1-e^{\mp\beta E_{k}}} \tag{19}\] with \(E_{k}=\sqrt{\mathbf{k}\cdot\mathbf{k}+m^{2}}\) and \[|x^{0}-y^{0}|=\left\{\begin{array}{ll}x^{0}-y^{0}&\text{if }s_{x}>s_{y}\\ y^{0}-x^{0}&\text{otherwise}\end{array}\right. \tag{20}\] In the zero-temperature limit \(\beta\to\infty\) we get \[\langle\phi(x^{0},\mathbf{k})\phi(y^{0},\mathbf{q})\rangle=\delta(\mathbf{k}-\mathbf{q})e^{- i|x^{0}-y^{0}|E_{k}}. \tag{21}\] Equivalently \[\langle\phi(x)\phi(y)\rangle=\sum_{\pm}\int\frac{\pm d\mathbf{k}}{(2\pi)^{D}2E_{k} }e^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{y})}\frac{e^{\mp i|x^{0}-y^{0}|E_{k}}}{1-e^{\mp \beta E_{k}}} \tag{22}\] or in the zero-temperature limit \[\langle\phi(x)\phi(y)\rangle=\int\frac{d\mathbf{k}}{2E_{k}(2\pi)^{D}}e^{i\mathbf{k} \cdot(\mathbf{x}-\mathbf{y})}e^{-i|x^{0}-y^{0}|E_{k}}. \tag{23}\] The special correlations read \[S(x)=\sum_{\pm}\int\frac{\pm d\mathbf{k}}{(2\pi)^{D}2E_{k}}e^{i\mathbf{k} \cdot\mathbf{x}}\frac{e^{\mp i|x^{0}|E_{k}}}{1-e^{\mp\beta E_{k}}},\] \[B(x)=\sum_{\pm}\int\frac{\pm d\mathbf{k}}{(2\pi)^{D}2E_{k}}e^{i\mathbf{k} \cdot\mathbf{x}}\frac{e^{\pm ix^{0}E_{k}}}{1-e^{\mp\beta E_{k}}},\] \[C(x)=\int\frac{d\mathbf{k}}{(2\pi)^{D}2E_{k}}e^{i\mathbf{k}\cdot\mathbf{x}} \frac{\cos(x^{0}E_{k})}{\tanh(\beta E_{k}/2)}, \tag{24}\] Figure 7: The time path in the CTP approach in the case of finite temperature \(\beta=1/T\). At zero temperature, the shift \(\beta\) stretches to infinity with \(t_{i}\to+i\infty\), \(t_{f}\to-i\infty\) with \(|x^{0}|\) reduced to the usual absolute value, and the zero-temperature limits \[S(x)=\int\frac{d\mathbf{k}}{2E_{k}(2\pi)^{D}}e^{i\mathbf{k}\cdot\mathbf{x}}e^{- i|x^{0}|E_{k}}\] \[=\int\frac{idk}{(2\pi)^{D+1}}\frac{e^{ik\cdot x}}{k\cdot k-m^{ 2}+i\epsilon},\] \[B(x)=\int\frac{d\mathbf{k}}{2E_{k}(2\pi)^{D}}e^{i\mathbf{k}\cdot\mathbf{x}}e ^{ix^{0}E_{k}}\] \[=\int\frac{dk}{(2\pi)^{D}}\delta(k\cdot k-m^{2})e^{ik\cdot x} \theta(k^{0}),\] \[C(x)=\int\frac{d\mathbf{k}}{(2\pi)^{D}2E_{k}}e^{i\mathbf{k}\cdot\mathbf{x}} \cos(x^{0}E_{k})\] \[=\int\frac{dk}{2(2\pi)^{D}}\delta(k\cdot k-m^{2})e^{ik\cdot x},\] \[G(x)=\int\frac{d\mathbf{k}}{iE_{k}(2\pi)^{D}}e^{i\mathbf{k}\cdot\mathbf{x}} \sin(x^{0}E_{k})\] \[=\int\frac{idk}{(2\pi)^{D+1}}\frac{e^{ik\cdot x}}{k_{+}\cdot k_{ +}-m^{2}}. \tag{100}\] The causal Green function is independent of temperature and defined only for \(x^{0}\geq 0\), while \(G=0\) for \(x\cdot x<0\); here \(k_{+}^{0}=k^{0}-i\epsilon\) (\(\epsilon\to 0_{+}\), as previously, is necessary to make the integrals well defined). A convenient substitution is \(\mathbf{k}=k\mathbf{n}\), where \(\mathbf{n}\) is a unit vector and \(k=m\sinh\eta>0\). Then \(E_{k}=m\cosh\eta\) and \(d\mathbf{k}/E_{k}=d\mathbf{n}(m\sinh\eta)^{D-1}d\eta\) (in \(D=1\) we have \(\sum_{\mathbf{n}=\pm 1}\) instead of \(\int d\mathbf{n}\)). Then \[\langle\phi(x)\phi(0)\rangle=\int\frac{(m\sinh\eta)^{D-1}d\eta d \mathbf{n}}{2(2\pi)^{D}}\times\] \[e^{im\sinh\eta\,\mathbf{n}\cdot\mathbf{x}}e^{-i|x^{0}|m\cosh\eta} \tag{101}\] For \(x^{1\ldots D}=0\) we have \[\langle\phi(x)\phi(0)\rangle=\int\frac{(m\sinh\eta)^{D-1}d\eta d\mathbf{n}}{2(2\pi )^{D}}e^{-i|x^{0}|m\cosh\eta}. \tag{102}\] The integral \(\int d\mathbf{n}=S_{D}\) is the surface of the \(D-1\)-dimensional unit sphere (embedded in \(D\) dimensions). 
In particular \(S_{1}=2\), \(S_{2}=2\pi\), \(S_{3}=4\pi\), or in general \(S_{D}=2\pi^{D/2}/\Gamma(D/2)\) (\(\Gamma\) is the Euler Gamma function, with \(\Gamma(1/2)=\pi^{1/2}\), \(\Gamma(1)=1\) and \(\Gamma(z+1)=z\Gamma(z)\)). Substituting \(w=\cosh\eta\) we get \[\int\frac{m^{D-1}(w^{2}-1)^{D/2-1}dwd\mathbf{n}}{2^{D}\pi^{D/2}\Gamma (D/2)}e^{-i|x^{0}|mw}=\] \[\frac{m^{(D-1)/2}K_{(D-1)/2}(i|x^{0}|m)}{(2\pi)^{(D+1)/2}(i|x^{0} |)^{(D-1)/2}}. \tag{103}\] By Lorentz invariance and analyticity we can write in general \[\langle\phi(x)\phi(0)\rangle=\] \[\frac{m^{(D-1)/2}K_{(D-1)/2}(m\sqrt{-x\cdot x})}{(2\pi)^{(D+1)/2 }(-x\cdot x)^{(D-1)/4}} \tag{104}\] with the complex square root defined so that the real part is positive. The divergence at \(x=0\) is removable because we can make an infinitesimal shift in the imaginary direction. We shall list the special cases in the zero-temperature limit [45]. Case \(D=1\): \[C(x)=\left\{\begin{array}{ll}K_{0}(m\sqrt{-x\cdot x})/2\pi& \mbox{for $x\cdot x<0$},\\ -Y_{0}(m\sqrt{x\cdot x})/4&\mbox{for $x\cdot x>0$},\end{array}\right.\] \[G(x)=-iJ_{0}(m\sqrt{x\cdot x})/2\mbox{ for $x\cdot x>0$ and $x^{0}>0$}. \tag{105}\] Case \(D=2\). Taking into account that \(K_{1/2}(z)=e^{-z}\sqrt{\pi/2z}\) we can write \[\langle\phi(x)\phi(0)\rangle=\frac{\exp(-\sqrt{-x\cdot x}m)}{4\pi\sqrt{-x \cdot x}} \tag{106}\] or in particular \[C(x)=\frac{C^{\prime}(x)}{4\pi\sqrt{|x\cdot x|}},\] \[C^{\prime}(x)=\left\{\begin{array}{ll}\exp(-m\sqrt{-x\cdot x})& \mbox{for $x\cdot x<0$}\\ -\sin(m\sqrt{x\cdot x})&\mbox{for $x\cdot x>0$}\end{array}\right.,\] \[G(x)=\frac{\cos(\sqrt{x\cdot x}m)}{2\pi i\sqrt{x\cdot x}}, \tag{107}\] for \(x^{0}>0\) and \(x\cdot x>0\). Case \(D=3\): \[\langle\phi(x)\phi(0)\rangle=\frac{m}{4\pi^{2}\sqrt{-x\cdot x}}K _{1}(m\sqrt{-x\cdot x}),\] \[C(x)=\frac{mC^{\prime}(x)}{8\pi^{2}\sqrt{|x\cdot x|}},\] \[C^{\prime}(x)=\left\{\begin{array}{ll}2K_{1}(m\sqrt{-x\cdot x}) &\mbox{for $x\cdot x<0$},\\ \pi Y_{1}(m\sqrt{x\cdot x})&\mbox{for $x\cdot x>0$},\end{array}\right.\] \[G(x)=\frac{imJ_{1}(\sqrt{x\cdot x}m)}{4\pi\sqrt{x\cdot x}}+ \frac{\delta(x\cdot x)}{2\pi i}, \tag{108}\] with only the last term surviving in the \(m\to 0\) case. The \(m\to 0\) case. By expansion of the Bessel functions, we have for \(D=1\) \[-2\pi\langle\phi(x)\phi(0)\rangle\to\ln(-x\cdot x)/2+\ln(m/2)+\gamma \tag{109}\] with the Euler-Mascheroni constant \(\gamma\) and \(G(x)\to-i/2\) (for \(x^{0}>0\) and \(x\cdot x>0\)). One encounters an infrared divergence of the correlation at small \(\mathbf{k}\) as \(m\to 0\). For \(D>1\) we have \[\langle\phi(x)\phi(0)\rangle\rightarrow\frac{\Gamma((D-1)/2)}{4\pi^{(D+1)/2}(-x \cdot x)^{(D-1)/2}}. \tag{100}\] For \(D=2\): \[G(x) \to 1/2\pi i\sqrt{x\cdot x},\] \[C(x) \rightarrow\theta(-x\cdot x)/4\pi\sqrt{-x\cdot x}. \tag{101}\] and \(D=3\): \[G(x) \rightarrow\delta(x\cdot x)/2\pi i,\] \[C(x) \rightarrow-1/4\pi^{2}x\cdot x. \tag{102}\] Since \(C\) diverges at \(x\cdot x\to 0\) (also for \(m>0\)), we have to calculate it as a Cauchy principal value. ## Appendix C Vacuum fluctuations Here we present the details of the calculation of (28). We begin with \[\langle a^{2}\rangle_{0}=\int dkdp\tilde{f}(k+p)\tilde{f}(-k-p) \theta(k^{0})\theta(p^{0})\times\] \[8\pi^{2}\delta(k\cdot k-m^{2})\delta(p\cdot p-m^{2}). \tag{103}\] Note that \(k\) and \(p\) are forward, i.e. \(k\cdot k,p\cdot p>0\) and \(k^{0},p^{0}>0\), so \(q=k+p\) is also forward. On the other hand, forward \(q\) and timelike \(k,p\) are not sufficient to keep both \(k\) and \(p\) forward, see Fig. 8. 
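As a brief aside, the \(D=1\) spacelike formula \(C(x)=K_{0}(m\sqrt{-x\cdot x})/2\pi\) listed above can be checked numerically against the mode integral (23) at equal times. The short script below is our own illustration, not part of the original calculation, and the parameter values are arbitrary:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

m, r = 1.0, 0.7  # mass and spacelike separation (x^0 = 0, x^1 = r); arbitrary test values

# Mode integral (23) in D = 1 at equal times:
#   <phi(x)phi(0)> = (1/2*pi) * Integral_0^inf cos(k r)/sqrt(k^2 + m^2) dk
val, _ = quad(lambda k: 1.0 / np.sqrt(k**2 + m**2), 0, np.inf, weight='cos', wvar=r)
mode_integral = val / (2 * np.pi)

closed_form = k0(m * r) / (2 * np.pi)   # C(x) = K_0(m sqrt(-x.x))/(2 pi) for spacelike x

print(mode_integral, closed_form)       # the two values agree to several digits
```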
Replacing \(k+p=q\) and shifting \(p\), we get \[\int dqdp\tilde{f}(q)\tilde{f}(-q)\theta(q^{0}/2+p^{0})\theta(q^ {0}/2-p^{0})8\pi^{2}\times\] \[\delta((q/2+p)\cdot(q/2+p)-m^{2})\times\] \[\delta((q/2-p)\cdot(q/2-p)-m^{2}). \tag{104}\] Note that \[\delta((q/2+p)\cdot(q/2+p)-m^{2})\times\] \[\delta((q/2-p)\cdot(q/2-p)-m^{2})\] \[=\delta(2q\cdot p)\delta(q\cdot q/4+p\cdot p-m^{2}), \tag{105}\] which fixes \(q\cdot p=0\) and \(q\cdot q/4+p\cdot p=m^{2}\). We need to calculate \(W(q)\), equal to \[\int dp\theta(|q^{0}|/2-|p^{0}|)\delta(2q\cdot p)\delta(q\cdot q/4+p\cdot p-m ^{2})/2 \tag{106}\] but due to Lorentz invariance we need to do it only for \(q^{1...D}=0\) and \(q^{0}>0\), which is the easiest case. Then \(p^{0}=0\) and by the substitution \(|\mathbf{p}|=\tilde{p}\) it reduces to \[\int\tilde{p}^{D-1}S_{D}d\tilde{p}\delta((q^{0})^{2}/4-\tilde{p}^{2}-m^{2})/4 q^{0}. \tag{107}\] Finally, putting \(\tilde{p}=\sqrt{(q^{0})^{2}/4-m^{2}}\) and replacing \(q^{0}\rightarrow\sqrt{q\cdot q}\), we get (29). ## Appendix D Plane wave generation We shall derive the causal Green function in the limit of an oscillating perturbation that generates a plane wave. We define \[G_{g}(x)=\int dyG(x,y)g(y)=\] \[\int_{y^{0}<x^{0}}dy\frac{d\mathbf{k}}{iE_{k}(2\pi)^{D}}e^{i\mathbf{k} \cdot(\mathbf{x}-\mathbf{y})}\sin((x^{0}-y^{0})E_{k})g(y). \tag{108}\] We first integrate over \(y^{2,3,\cdots}\) while substituting \(y^{0}=x^{0}-t\) and \(y^{1}=y-L\), getting \[\int_{t>0}dtdy\frac{dk}{iE_{k}(2\pi)}e^{ik(x^{1}-y)}e^{ikL}\times\] \[\sin(tE_{k})(e^{iE_{p}(x^{0}-t)}h(y)+e^{-iE_{p}(x^{0}-t)}h^{*}(y ))=\] \[\int_{t>0}dtdy\frac{dk}{E_{k}(4\pi)}e^{ik(x^{1}-y)}e^{ikL}\times\] \[[(e^{itE_{k}+iE_{p}(x^{0}-t)}-e^{-itE_{k}+iE_{p}(x^{0}-t)})h(y)\] \[+(e^{itE_{k}-iE_{p}(x^{0}-t)}-e^{-itE_{k}-iE_{p}(x^{0}-t)})h^{*}( y)] \tag{109}\] with \(k\) being a \(1\)-dimensional variable and \(E_{k}=\sqrt{k^{2}+m^{2}}\). We integrate over \(t\) with a damping factor \(e^{-0_{+}t}\). Then \[G_{g}(x)=\int\frac{dk}{E_{k}(4\pi)}e^{ik(L+x^{1})}\times\] \[\left[\left(\frac{e^{iE_{p}x^{0}}}{0_{+}-iE_{k}+iE_{p}}-\frac{e^{ iE_{p}x^{0}}}{0_{+}+iE_{k}+iE_{p}}\right)\tilde{h}(k)\right.\] \[+\left(\frac{e^{-iE_{p}x^{0}}}{0_{+}-iE_{k}-iE_{p}}-\frac{e^{-iE_ {p}x^{0}}}{0_{+}+iE_{k}-iE_{p}}\right)\tilde{h}^{*}(-k)\right] \tag{110}\] with \(\tilde{h}(k)=\int dyh(y)e^{-iky}\). For large \(L\) the factor \(e^{ikL}\) is quickly oscillating while the other functions vary slowly. Figure 8: Relation between \(k\), \(p\) and \(q\) showing that timelike \(p=q-k\) is not necessarily forward, even if \(k\) and \(q\) are. Exceptions are the first and the last denominators at \(E_{k}\sim E_{p}\), as they diverge. Only in these cases do we make the approximation \(E_{k}-E_{p}\simeq(|k|-|p|)|p|/E_{p}\) for \(|p|=\sqrt{E_{p}^{2}-m^{2}}\). The integral concentrates only near the two peaks at \(k=\pm|p|\), so we can calculate \[G_{g}(x)\rightarrow\sum_{\pm}\int\frac{dk}{4\pi E_{p}}e^{ik(L+x^ {1})}\times\] \[\left[\frac{e^{iE_{p}x^{0}}\tilde{h}(\pm|p|)}{0_{+}+i|p|(|p|\mp k) /E_{p}}-\frac{e^{-iE_{p}x^{0}}\tilde{h}^{*}(\mp|p|)}{0_{+}-i|p|(|p|\mp k)/E_{p}}\right]\] \[=\sum_{\pm}\int\frac{dk}{4\pi|p|}e^{ik(L+x^{1})}\times\] \[\left[\frac{e^{iE_{p}x^{0}}\tilde{h}(\pm|p|)}{0_{+}+i(|p|\mp k)}- \frac{e^{-iE_{p}x^{0}}\tilde{h}^{*}(\mp|p|)}{0_{+}-i(|p|\mp k)}\right]. \tag{101}\] The integral over \(k\) can now be calculated using residues, giving \[2|p|G_{g}(x)\rightarrow\] \[e^{iE_{p}x^{0}-i|p|x^{1}}\tilde{h}(-|p|)-e^{i|p|x^{1}-iE_{p}x^{0 }}\tilde{h}^{*}(+|p|) \tag{102}\] equivalent to (33). 
## Appendix E Contact term problem in energy correlations For the normalization \(\langle 1\rangle_{0}=1\) (unitarity) to hold, we expect \(\langle T_{q}^{\mu\nu}(x)T_{q}^{\xi\eta}(y)\rangle\) to vanish, because any quantity \(A_{q}(x)=A_{+}(x)-A_{-}(x)\) should cancel the correlation if \(x^{0}\) is the _latest_ time. For \(\mu=\nu=\xi=\eta=0\) the above expression will contain the term \[(\partial_{x}^{0}\partial_{y}^{0}G(x,y))(\partial_{x}^{0}\partial_{y}^{0}G(y, x)), \tag{103}\] which indeed is 0 both for \(x^{0}>y^{0}\) and for \(y^{0}>x^{0}\). However, something strange happens at \(x^{0}=y^{0}\). Then, taking the definitions of \(G\), we get \[\partial_{x}^{0}G(x,y)=\int\frac{d\mathbf{k}}{i(2\pi)^{D}}e^{i\mathbf{k}\cdot(\mathbf{x}- \mathbf{y})}\cos((x^{0}-y^{0})E_{k}) \tag{104}\] for \(x^{0}>y^{0}\), and 0 for \(x^{0}<y^{0}\). Unfortunately, there is a discontinuity at \(x^{0}=y^{0}\) which gives \(\partial_{x}^{0}\partial_{y}^{0}G(x,y)=i\delta(x-y)\). As a result, we get \(\delta^{2}(x-y)=\delta(0)\delta(x-y)\), with \(\delta(0)\) undefined, formally divergent to \(+\infty\). The problem persists at higher (arbitrary) order correlations since we get e.g. \[\delta(x-y)\delta(y-z)\delta(z-w)\delta(w-x)=\] \[\delta(x-y)\delta(y-z)\delta(z-w)\delta(0) \tag{105}\] again diverging. To cure the problem, we need an auxiliary independent renormalizing field \(\phi_{M}\), but with a large mass \(M\rightarrow\infty\), and we construct \(T_{M}^{\mu\nu}(x)\) by replacing \(m\to M\) and \(\phi\rightarrow\phi_{M}\). Unfortunately, \(\phi_{M}\) must be anticommuting (Grassmann, fermionic). More precisely, there are two fields \(\phi_{M}(x)\) and \(\phi_{M}^{*}(y)\), and they anticommute, i.e. \(AB=-BA\) for \(A,B=\phi_{M},\phi_{M}^{*}\) at arbitrary points. The field is called a ghost because it is not physical [37]. The renormalizing Lagrangian density reads \[\mathcal{L}_{M}(x)=\partial\phi_{M}^{*}(x)\cdot\partial\phi_{M}(x)-M^{2}\phi_{M}^ {*}(x)\phi_{M}(x) \tag{106}\] with the energy-momentum tensor \[T_{M}^{\mu\nu}(x)=\partial^{\mu}\phi_{M}^{*}\partial^{\nu}\phi_{ M}+\partial^{\nu}\phi_{M}^{*}\partial^{\mu}\phi_{M}\] \[-g^{\mu\nu}(g^{\sigma\tau}\partial_{\sigma}\phi_{M}^{*}\partial_ {\tau}\phi_{M}-M^{2}\phi_{M}^{*}\phi_{M}). \tag{107}\] The basic correlations read \(\langle\phi_{M}(x)\phi_{M}(y)\rangle=0\) and \[\langle\phi_{M}(x)\phi_{M}^{*}(y)\rangle=\] \[\int\frac{d\mathbf{k}}{(2\pi)^{D}2E_{k}}e^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{y })}\left(\frac{e^{-i|x^{0}-y^{0}|E_{k}}}{1+e^{-\beta E_{k}}}-\frac{e^{i|x^{0}- y^{0}|E_{k}}}{1+e^{\beta E_{k}}}\right) \tag{108}\] with the zero-temperature limit \[\langle\phi_{M}(x)\phi_{M}^{*}(y)\rangle=\int\frac{d\mathbf{k}}{(2\pi)^{D}2E_{k}} e^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{y})}e^{-i|x^{0}-y^{0}|E_{k}}. \tag{109}\] The Wick decomposition now includes the sign of the permutation \[\langle\phi_{M}^{*}(x_{1})\phi_{M}(y_{1})\cdots\phi_{M}^{*}(x_{n} )\phi_{M}(y_{n})\rangle=\] \[\sum_{\sigma}\operatorname{sgn}\sigma\langle\phi_{M}^{*}(x_{\sigma (1)})\phi_{M}(y_{1})\rangle\cdots\langle\phi_{M}^{*}(x_{\sigma(n)})\phi_{M}(y_ {n})\rangle. \tag{110}\] Now, because of the opposite sign of the permutation, we get \[\langle T_{Mq}^{\mu\nu}(x)T_{Mq}^{\xi\eta}(y)\rangle=-\langle T_{q}^{\mu\nu}(x )T_{q}^{\xi\eta}(y)\rangle \tag{111}\] so let us modify \(T^{\mu\nu}\) by \[T^{\mu\nu}(x)\to T^{\mu\nu}(x)+T_{M}^{\mu\nu}(x) \tag{112}\] in our Kraus operator. Then what we get is \[\langle T_{q}^{\mu\nu}(x)T_{q}^{\xi\eta}(y)\rangle=0. 
\tag{113}\] Note that finite \(M\) would also add some correction to terms containing \(T_{c}^{\mu\nu}\) (which do not spoil unitarity, though), so we keep the limit \(M\rightarrow\infty\). ## Appendix F Energy-momentum fluctuations We define \[\tilde{f}_{\mu\nu}(k)=\int dxe^{ik\cdot x}f_{\mu\nu}(x)/(2\pi)^{D+1} \tag{114}\] and note that \[\langle T_{c}^{\mu\nu}(x)T_{c}^{\xi\eta}(y)\rangle=\] \[\langle T_{c}^{\mu\nu}(x)T_{c}^{\xi\eta}(y)\rangle-\langle T_{q}^{ \mu\nu}(x)T_{q}^{\xi\eta}(y)\rangle/4\] \[=\langle T^{\mu\nu}(x_{+})T^{\xi\eta}(y_{-})+T^{\mu\nu}(x_{-})T ^{\xi\eta}(y_{+})\rangle/2 \tag{101}\] Using \(B(x,y)\) and \(B(y,x)\) we get \[\langle a^{2}\rangle_{0}=\int dkdp\tilde{f}_{\mu\nu}(k+p)\tilde{f}_{ \xi\eta}(-k-p)\times\] \[(k^{\mu}p^{\nu}-g^{\mu\nu}(k\cdot p+m^{2})/2)\times\] \[(k^{\xi}p^{\eta}-g^{\xi\eta}(k\cdot p+m^{2})/2)\times\] \[8\pi^{2}\theta(k^{0})\theta(p^{0})\delta(k\cdot k-m^{2})\delta(p \cdot p-m^{2}). \tag{102}\] Replacing \(k+p=q\) and shifting \(p\), we get \[\langle a^{2}\rangle_{0}=\int dq\tilde{f}_{\mu\nu}(q)\tilde{f}_{\xi\eta}(-q)X^{ \mu\nu\xi\eta}(q) \tag{103}\] with \[X^{\mu\nu\xi\eta}(q)=\int dp\theta(|q^{0}|/2-|p^{0}|)\delta(2q \cdot p)4\pi^{2}\times\] \[\delta(q\cdot q/4+p\cdot p-m^{2})\times\] \[(q^{\mu}q^{\nu}/4-p^{\mu}p^{\nu}-g^{\mu\nu}q\cdot q/4)\times\] \[(q^{\xi}q^{\eta}/4-p^{\xi}p^{\eta}-g^{\xi\eta}q\cdot q/4). \tag{104}\] We can observe that (a) \(X\) is \(0\) if \(q\cdot q<0\); (b) \(X\) is symmetric under interchanging \(\mu\leftrightarrow\nu\), \(\xi\leftrightarrow\eta\) or \(\mu\nu\leftrightarrow\xi\eta\); (c) it satisfies \(q_{\mu}X^{\mu\nu\xi\eta}(q)=0\) (Ward identity, or energy conservation); (d) it is Lorentz covariant (there is no preferred frame). Even more, if we take \(q^{1\ldots D}=0\) then the constraints lead to \(p^{0}=0\) and \(|\mathbf{p}|^{2}=(q^{0})^{2}/4-m^{2}\), so we have the bound \(q\cdot q>4m^{2}\). Therefore the expected form of \(X\) is \[X^{\mu\nu\xi\eta}=\theta(q\cdot q-4m^{2})4\pi^{2}\times\] \[[P^{\prime}(q^{\mu}q^{\nu}-(q\cdot q)g^{\mu\nu})(q^{\xi}q^{\eta}- (q\cdot q)g^{\xi\eta})\] \[+R^{\prime}[(q^{\mu}q^{\eta}-(q\cdot q)g^{\mu\eta})(q^{\xi}q^{ \nu}-(q\cdot q)g^{\xi\nu})\] \[+(q^{\mu}q^{\xi}-(q\cdot q)g^{\mu\xi})(q^{\nu}q^{\eta}-(q\cdot q) g^{\nu\eta})]]. \tag{105}\] We only need to find the two functions \(P^{\prime}(q\cdot q)\) and \(R^{\prime}(q\cdot q)\), which is simplest to do by contraction, \[g_{\mu\nu}g_{\xi\eta}X^{\mu\nu\xi\eta}=(q\cdot q)^{2}D(DP^{\prime}+2R^{\prime}) \tag{106}\] and \[g_{\mu\xi}g_{\nu\eta}X^{\mu\nu\xi\eta}=(q\cdot q)^{2}D[P^{\prime}+R^{\prime}( D+1)] \tag{107}\] giving \[(D-1)(D+2)D(q\cdot q)^{2}P^{\prime}=\] \[(D+1)g_{\mu\nu}g_{\xi\eta}X^{\mu\nu\xi\eta}-2g_{\mu\xi}g_{\nu\eta} X^{\mu\nu\xi\eta},\] \[(D-1)(D+2)D(q\cdot q)^{2}R^{\prime}=\] \[Dg_{\mu\xi}g_{\nu\eta}X^{\mu\nu\xi\eta}-g_{\mu\nu}g_{\xi\eta}X^{ \mu\nu\xi\eta}. \tag{108}\] At \(D=1\) both equations give \(P^{\prime}+2R^{\prime}\), but this is not a problem, as the \(P^{\prime}\) and \(R^{\prime}\) terms are actually the same. This is clear from the general rule \(X^{0\nu\xi\eta}=X^{1\nu\xi\eta}q^{1}/q^{0}\), and similarly for all indices, so \(X\) is fixed by just the one component \(X^{1111}\) (not true at \(D>1\)). 
We can express the contraction by previous quantities \[g_{\mu\nu}g_{\xi\eta}X^{\mu\nu\xi\eta}=\] \[\int dp\theta(|q^{0}|/2-|p^{0}|)\delta(2q\cdot p)\delta(q\cdot q/4 +p\cdot p-m^{2})\times\] \[(q\cdot q/4-p\cdot p-(D+1)q\cdot q/4)^{2}\] \[=((D-1)q\cdot q/4+m^{2})^{2}W(q) \tag{109}\] and \[g_{\mu\xi}g_{\nu\eta}X^{\mu\nu\xi\eta}=\] \[\int dp\theta(|q^{0}|/2-|p^{0}|)\delta(2q\cdot p)\delta(q\cdot q /4+p\cdot p-m^{2})\times\] \[((q\cdot q)^{2}/16+(p\cdot p)^{2}+(D+1)(q\cdot q)^{2}/16\] \[-(q\cdot p)^{2}/2-(q\cdot q)^{2}/8+(p\cdot p)(q\cdot q)/2)\] \[=((q\cdot q)^{2}(D-1)/16+m^{4})W(q) \tag{110}\] for \(W\) given by (29), so that \[(q\cdot q)^{2}(D+2)DP^{\prime}=\] \[(m^{4}+(D+1)m^{2}q\cdot q/2+(q\cdot q)^{2}(D^{2}-3)/16)W(q),\] \[(q\cdot q)^{2}(D+2)DR^{\prime}=\] \[(m^{4}-m^{2}q\cdot q/2+(q\cdot q)^{2}/16)W(q) \tag{111}\] with \(P^{\prime}=PW\), \(R^{\prime}=RW\) in (51). In the case \(D=1\) we can set \(R^{\prime}=0\) and \[P^{\prime}=\pi^{2}m^{4}(q\cdot q)^{-5/2}(q\cdot q/4-m^{2})^{-1/2} \tag{112}\] In the limit \(m\to 0\) also \(P^{\prime}\to 0\), but only at \(q\cdot q>0\); the limit \(q\cdot q\to 0\) has to be considered separately. Note that the final integral \[\int dq\theta(q\cdot q/4-m^{2})m^{4}(q\cdot q)^{-5/2}\times\] \[(q\cdot q/4-m^{2})^{-1/2}|(q^{\mu}q^{\nu}-g^{\mu\nu}(q\cdot q)) \tilde{f}_{\mu\nu}(q)|^{2}\pi^{2} \tag{113}\] can be transformed, using the change of variables \(q^{0}=m\sqrt{w}\cosh u\), \(q^{1}=m\sqrt{w}\sinh u\), into \[\int_{4}^{\infty}dw\int duw^{-5/2}(w/4-1)^{-1/2}m^{4}w^{2}\times\] \[|(U^{\mu}U^{\nu}-g^{\mu\nu})\tilde{f}_{\mu\nu}(m\sqrt{w}U)|^{2}\pi ^{2} \tag{114}\] with \(U^{0}=\cosh u\), \(U^{1}=\sinh u\). Suppose \(\tilde{f}(q)\) is a regular function that decays to \(0\) as \(q\to\infty\). Then the limit \(m\to 0\) reduces the integral essentially to the light lines, i.e. \(q^{0}=\pm q^{1}\), for \(|u|\gg 1\). We then make the approximation \(U=e^{u}U_{\pm}/2\) with \(\lambda=m\sqrt{w}e^{u}\) and \(U_{\pm}=(1,\pm 1)\). Then we get \[\langle a^{2}\rangle_{0}=\sum_{\pm}\int_{0}^{\infty}d\lambda\lambda^{3}|U_{\pm }^{\mu}U_{\pm}^{\nu}\tilde{f}_{\mu\nu}(\lambda U_{\pm})|^{2}\pi^{2}/(3\cdot 2^ {5}) \tag{116}\] since \[\int_{4}^{\infty}dww^{-5/2}(w/4-1)^{-1/2}=1/6. \tag{117}\]
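The value \(1/6\) in (117) can be verified directly (our own elementary check, not part of the original text): substituting \(w=4/\cos^{2}\theta\), so that \(dw=8\sin\theta\,d\theta/\cos^{3}\theta\) and \(w/4-1=\tan^{2}\theta\), \[\int_{4}^{\infty}dw\,w^{-5/2}\left(\frac{w}{4}-1\right)^{-1/2}=\int_{0}^{\pi/2}\frac{\cos^{5}\theta}{32}\,\frac{\cos\theta}{\sin\theta}\,\frac{8\sin\theta}{\cos^{3}\theta}\,d\theta=\frac{1}{4}\int_{0}^{\pi/2}\cos^{3}\theta\,d\theta=\frac{1}{4}\cdot\frac{2}{3}=\frac{1}{6}.\]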
2308.03694
Continuous Hamiltonian dynamics on digital quantum computers without discretization error
We introduce an algorithm to compute Hamiltonian dynamics on digital quantum computers that requires only a finite circuit depth to reach an arbitrary precision, i.e. achieves zero discretization error with finite depth. This finite number of gates comes at the cost of an attenuation of the measured expectation value by a known amplitude, requiring more shots per circuit. The gate count for simulation up to time $t$ is $O(t^2\mu^2)$ with $\mu$ the $1$-norm of the Hamiltonian, without dependence on the precision desired on the result, providing a significant improvement over previous algorithms. The only dependence in the norm makes it particularly adapted to non-sparse Hamiltonians. The algorithm generalizes to time-dependent Hamiltonians, appearing for example in adiabatic state preparation. These properties make it particularly suitable for present-day relatively noisy hardware that supports only circuits with moderate depth.
Etienne Granet, Henrik Dreyer
2023-08-07T16:12:27Z
http://arxiv.org/abs/2308.03694v2
# Continuous Hamiltonian dynamics on noisy digital quantum computers without Trotter error ###### Abstract We introduce an algorithm to compute Hamiltonian dynamics on digital quantum computers that requires only a finite circuit depth to reach an arbitrary precision, i.e. achieves zero Trotter error with finite depth. This finite number of gates comes at the cost of an attenuation of the measured expectation value by a known amplitude, requiring more shots per circuit. The algorithm generalizes to time-dependent Hamiltonians, appearing for example in adiabatic state preparation. This makes it particularly suitable for present-day relatively noisy hardware that supports only circuits with moderate depth. _Introduction.--_ Hamiltonian dynamics is one of the most promising applications of current and near-term quantum computers [1; 2]. It is believed to be a task that can be performed exponentially faster on quantum computers and that finds multiple applications, either directly or indirectly as a subroutine of other quantum algorithms [3; 4; 5; 6; 7; 8]. Given a Hamiltonian \(H\) and a simulation time \(t\), there exist several ways to implement the time evolution operator \(U(t)=e^{itH}\) on a digital quantum computer, where only one- or two-qubit gates can be used. The simplest approach uses product formulas such as Trotterization to decompose the dynamics into a sequence of elementary gates \(U(t)\approx e^{itH_{1}}...e^{itH_{n}}\)[9; 10; 2]. These methods have been generalized and improved in many ways [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. In particular, algorithms introducing randomness [23; 24; 25; 26; 27; 28], such as the qDRIFT algorithm [29; 30; 31; 32; 33; 34; 35; 36; 37], have particularly good theoretical performance for non-sparse Hamiltonians \(H\). These algorithms have in common that they only _approximate_ the continuous time evolution with finite-depth circuits, and display discretization errors (sometimes called _Trotter errors_) that vanish only in the limit of infinitely many gates. This is particularly problematic for adiabatic state preparation (since Trotter errors generically lead to an effective heating of the system) [38; 39; 5] and in applications like quantum chemistry [40; 41; 42; 43; 44; 45] where extreme precision is desired. More sophisticated algorithms have better theoretical scalings, such as linear combination of unitaries [46; 19], quantum signal processing [48; 49; 50; 51; 52] or quantum walks [53; 54], but require a large resource overhead. As a consequence, at least in the near term, exact continuous time dynamics is considered to be the prerogative of non-gate-based, analog quantum simulators [55; 56; 57; 58; 59]. In this Letter we introduce an algorithm for the computation of Hamiltonian dynamics of observables \(\langle\mathcal{M}(t)\rangle=\langle 0|e^{iHt}\mathcal{M}e^{-iHt}|0\rangle\) on digital quantum computers that has no Trotter error even with a finite number of gates, for any Hermitian Hamiltonian \(H\) and unitary \(\mathcal{M}\). The expectation value \(\langle\mathcal{M}(t)\rangle\) is obtained as the average over random circuits drawn from a judiciously chosen distribution, multiplied by a known amplification factor. The angle \(\tau\) of the gates entering the circuit is a free parameter of our algorithm that only modifies the amplification factor and not the precision of the result. 
It allows one to tune the number of gates in the circuit at the cost of an increased number of shots on a quantum computer, which makes it particularly suitable for present-day hardware that supports only moderate circuit depths. The algorithm straightforwardly generalizes to time-dependent Hamiltonians, achieving again zero Trotter error with finite circuit depth. We demonstrate the algorithm with numerical simulations on a 2D Ising model and for the electronic structure of the stretched water molecule. _Simulating small-angle gates with large-angle gates.--_ Let us first introduce the basic mechanism that underpins our algorithm. For a real number \(0\leq p\leq 1\), an angle \(\tau\) and \(O\) any operator such that \(O^{2}=I\), we have the equality \[1-p+pe^{i\tau O}=\lambda e^{i\tau^{\prime}O}\,, \tag{1}\] where \[\tan\tau^{\prime}=\frac{p\sin\tau}{1-p+p\cos\tau}\,,\qquad\lambda=\frac{p\sin \tau}{\sin\tau^{\prime}}\,. \tag{2}\] This relation can be used to realize an effective angle \(0<\tau^{\prime}<\tau\) by applying \(e^{i\tau O}\) with probability \[p=\frac{\tan\tau^{\prime}}{\sin\tau+(1-\cos\tau)\tan\tau^{\prime}}\,. \tag{3}\] To see this, let us consider a circuit containing \(G\) gates \(e^{i\tau^{\prime}O}\) that produces the wave function \(|\psi\rangle\). For a subset \(S\) of these gates, we denote \(|\psi_{S}\rangle\) the wave function obtained with the Figure 1: Example of a tetris configuration for a transverse-field Ising Hamiltonian, with \(U_{O}=e^{i\tau O}\) acting on qubits at the bottom of the drawing. The average over such configurations converges to the exact time-evolved expectation value \(\langle 0|e^{iHt}\mathcal{M}e^{-iHt}|0\rangle\), up to a known multiplicative constant. same circuit but deleting the gates \(e^{i\tau^{\prime}O}\) contained in \(S\) and replacing the remaining gates \(e^{i\tau^{\prime}O}\) not in \(S\) by \(e^{i\tau O}\). Then, for any observable \(\mathcal{M}\), by repeatedly applying (1), we have \[\sum_{S,S^{\prime}}(1-p)^{|S|+|S^{\prime}|}p^{2G-|S|-|S^{\prime}|}\langle\psi_{ S^{\prime}}|\mathcal{M}|\psi_{S}\rangle=\lambda^{2G}\langle\psi|\mathcal{M}|\psi \rangle\,, \tag{4}\] where the sum runs over all the possible subsets \(S,S^{\prime}\) of gates to delete in the original circuit. The expectation value \(\langle\psi|\mathcal{M}|\psi\rangle\) that originally involves a circuit with angle \(\tau^{\prime}>0\) can thus be exactly computed with a circuit involving only larger angles \(\tau>\tau^{\prime}\), but in which some gates are randomly removed. This however comes at the cost of an attenuation factor \(\lambda^{2G}<1\), which increases the total number of shots required to evaluate \(\langle\psi_{S^{\prime}}|\mathcal{M}|\psi_{S}\rangle\) for different \(S,S^{\prime}\) to reach a given precision on \(\langle\psi|\mathcal{M}|\psi\rangle\). The same reasoning applies to gates with a negative angle \(\tau^{\prime}<0\), with then \(\tau<\tau^{\prime}<0\). _Continuous time limit.--_ We now consider a Hamiltonian \(H\) that we write as a linear combination of \(N\) operators \(O_{n}\) that satisfy \(O_{n}^{2}=I\), i.e. \[H=\sum_{n=1}^{N}c_{n}O_{n}\,, \tag{5}\] which is always possible since a Hamiltonian can be written as a sum of products of Pauli matrices. We would like to compute the expectation value of an observable \(\mathcal{M}\) within the wave function \(|\psi(t)\rangle=e^{iHt}|\psi(0)\rangle\) obtained by continuous time evolution. 
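The identity (1)-(3) is easy to verify numerically. The snippet below is our own illustration (variable names and test values are arbitrary); it checks the operator identity for \(O=Z\), and also the special case \(p=1/2\), \(\tau=\pi/4\), which realizes \(\tau^{\prime}=\pi/8\) (a \(T\)-gate up to a global phase) with attenuation \(\lambda=\cos(\pi/8)\), as exploited at the end of the Letter:

```python
import numpy as np

# Numerical check of Eqs. (1)-(3) for O = Pauli Z (any O with O^2 = I works)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def gate(theta, O):
    # e^{i theta O} = cos(theta) I + i sin(theta) O, valid when O^2 = I
    return np.cos(theta) * np.eye(len(O)) + 1j * np.sin(theta) * O

tau, tau_p = 0.8, 0.3  # large gate angle tau realizing the smaller target angle tau'
p = np.tan(tau_p) / (np.sin(tau) + (1 - np.cos(tau)) * np.tan(tau_p))  # Eq. (3)
lam = p * np.sin(tau) / np.sin(tau_p)                                  # Eq. (2)

lhs = (1 - p) * I2 + p * gate(tau, Z)   # probabilistic mixture: no gate / large-angle gate
rhs = lam * gate(tau_p, Z)              # attenuated small-angle gate, Eq. (1)
print(np.allclose(lhs, rhs))            # True

# Special case p = 1/2, tau = pi/4: tau' = pi/8 with lambda = cos(pi/8)
p, tau = 0.5, np.pi / 4
tau_p = np.arctan(p * np.sin(tau) / (1 - p + p * np.cos(tau)))
print(np.isclose(tau_p, np.pi / 8),
      np.isclose(p * np.sin(tau) / np.sin(tau_p), np.cos(np.pi / 8)))  # True True
```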
According to the Trotter-Suzuki formula, we can write \[e^{itH}=\lim_{\tau^{\prime}\to 0^{+}}(e^{i\tau^{\prime}c_{1}O_{1}}...e^{i\tau^{ \prime}c_{N}O_{N}})^{t/\tau^{\prime}}\,. \tag{6}\] For each \(n=1,...,N\), we choose \(0<\tau_{n}\leq\pi/2\) and implement \(e^{i\tau^{\prime}c_{n}O_{n}}\) using the previously explained protocol with angle \(\tau_{n}\). In the sequence of gates appearing on the right-hand side of (6), instead of each gate \(e^{i\tau^{\prime}c_{n}O_{n}}\) we thus apply \(e^{i\tau_{n}\text{sgn}\,(c_{n})O_{n}}\) with probability \(p_{n}\) given by \[p_{n}=\frac{\tau^{\prime}|c_{n}|}{\sin\tau_{n}}+\mathcal{O}((\tau^{\prime})^{2 })\,. \tag{7}\] The corresponding attenuation factor \(\lambda_{n}\) is \[\lambda_{n}=1-\tau^{\prime}|c_{n}|\tan(\tau_{n}/2)+\mathcal{O}((\tau^{\prime} )^{2})\,. \tag{8}\] In the limit \(\tau^{\prime}\to 0\), picking a gate, with probability \(p_{n}\to 0\), becomes a rare event and we converge to a Poisson process. Namely, for each gate \(e^{i\tau_{n}\text{sgn}\,(c_{n})O_{n}}\) we obtain a sequence of gate times \(0<t_{n}^{(1)}<...<t_{n}^{(m_{n})}<t\) drawn from a Poisson process with rate \(|c_{n}|/\sin\tau_{n}\). For a given realization \(T\) of all these times for different \(n\)'s, we denote \(|\psi_{T}\rangle\) the wave function obtained by applying the gates \(e^{i\tau_{n}\text{sgn}\,(c_{n})O_{n}}\) ordered by time of occurrence \(t_{n}^{(i)}\), irrespective of \(n\). We call each of these configurations a _tetris_, as each gate "falls" randomly on the initial state with rate \(|c_{n}|/\sin\tau_{n}\), which is reminiscent of the eponymous game, see Fig. 1. Then, denoting \(\mathbb{E}\) the expectation value with respect to two independent tetrises \(T,T^{\prime}\), we have \[\langle\psi(t)|\mathcal{M}|\psi(t)\rangle=\frac{\mathbb{E}[\langle\psi_{T^{\prime}}| \mathcal{M}|\psi_{T}\rangle]}{\lambda_{\mathrm{att}}}\,, \tag{9}\] with the total attenuation \[\lambda_{\mathrm{att}}=\exp\left(-2t\sum_{n=1}^{N}|c_{n}|\tan(\tau_{n}/2) \right)\,. \tag{10}\] This is the fundamental relation that defines our algorithm. Remarkably, the exact expectation value at time \(t\) under continuous time dynamics can be obtained _without_ Trotter error, by only applying gates with fixed angles \(\tau_{n}>0\). Moreover, the average number of gates in each circuit is \(2t\sum_{n=1}^{N}\frac{|c_{n}|}{\sin\tau_{n}}\), which is finite and controlled only by \(\tau_{n}\), not by the precision wanted on the result. _Statement of the algorithm.--_ We state here the algorithm deduced from the previous explanations. It takes as input a Hamiltonian \(H\) decomposed as in (5), a unitary observable \(\mathcal{M}\), a time \(t\) and an initial state \(|\psi(0)\rangle\), and outputs \(\langle\psi(t)|\mathcal{M}|\psi(t)\rangle\). It takes \(0<\tau_{n}\leq\pi/2\) as \(N\) parameters. 1. Prepare the system in the initial state. 2. For each \(n\in\{1,...,N\}\), draw an integer \(m_{n}\) from a Poisson distribution with parameter \(\frac{|c_{n}|t}{\sin|\tau_{n}|}\). Then draw \(m_{n}\) real numbers \(t_{n}^{(i)}\), where \(i\in\{1,...,m_{n}\}\), uniformly at random between \(0\) and \(t\). These \(M\equiv\sum_{n=1}^{N}m_{n}\) numbers are collected into a set \(T\). 3. Then deduce the sequence \(k_{1},...,k_{M}\in\{1,...,N\}\) such that the index \(n\) corresponding to the \(j\)-th smallest element of \(T\) is \(k_{j}\). For \(m=1,...,M\) in this order, apply the gate \(e^{i\tau_{k_{m}}\text{sgn}\,(c_{k_{m}})O_{k_{m}}}\) on the system. 4. Apply the observable \(\mathcal{M}\) on the system. 
5. Repeat steps (2) and (3) with \(\tau_{n}\) replaced by \(-\tau_{n}\). 6. Measure the overlap with the initial state and divide the result by \(\lambda_{\mathrm{att}}\) given in (10). On a quantum computer, this step requires performing a Hadamard test. 7. Repeat steps (1) to (6) a number of times and average the results. The algorithm shares similarities with qDRIFT [29], as they are both random compilation algorithms. However, the algorithm we present in this Letter does not have any Trotter error and any angle \(\tau_{n}\) can be used. Our algorithm also requires performing a backward propagation, i.e. implementing \(e^{itH}\) and \(e^{-itH}\) explicitly in the circuit. The drawing process of the random gates also differs: the tetrises have to be drawn with the Poisson process described earlier, whereas using the same drawing process as in qDRIFT would lead to a systematic bias at finite \(t,N,\tau\). _Noisy gates and optimal angles.--_ In our algorithm, the gate angles \(\tau_{n}\) are parameters that can be chosen freely and do not deteriorate the precision. We will now show that there exist optimal \(\tau_{n}\)'s that only depend on the level of noise of the quantum computer on which the algorithm is run. Let us assume that the application of each gate \(e^{i\tau_{n}\mathrm{sgn}\left(c_{n}\right)O_{n}}\) comes with a signal attenuation \(e^{-r_{n}}\) due to depolarizing noise [6]. This scenario is realistic in state-of-the-art devices, and typical two-qubit gate fidelities are \(e^{-r}\sim 99.8\%\), in which case \(r\sim 2\times 10^{-3}\)[60]. There are on average \(\frac{t|c_{n}|}{\sin\tau_{n}}\) such gates per tetris. The damping due to hardware imperfection is thus \[q_{\mathrm{att}}=\exp\left(-2t\sum_{n=1}^{N}\frac{r_{n}|c_{n}|}{\sin\tau_{n}} \right)\,. \tag{11}\] The total attenuation of the signal is \(\lambda_{\mathrm{att}}q_{\mathrm{att}}\). Assuming \(r_{n}\) small, the optimal angles \(\tau_{n}^{*}\) that minimize the total attenuation are \[\tau_{n}^{*}=\sqrt{2r_{n}}\,. \tag{12}\] Then the minimal total attenuation is \[\lambda_{\mathrm{att}}q_{\mathrm{att}}=\exp\left(-2t\sum_{n=1}^{N}\sqrt{2r_{n }}|c_{n}|\right)\,. \tag{13}\] We note the scaling in \(t\sqrt{r_{n}}\) for continuous time dynamics up to time \(t\), whereas the exponent for discrete time dynamics with \(t\) steps would scale as \(tr_{n}\). _Scaling comparison with Trotter.--_ We now compare the efficiency of our algorithm with Trotter-Suzuki. Specifically, we wish to estimate the number of shots \(M\) required in our case and in Trotter to obtain a given precision \(\epsilon\) on the result. For simplicity, we assume a depolarizing noise model, causing a constant signal attenuation \(e^{-r}\) for all gates. In our algorithm, the error after \(M\) shots is \(\frac{1}{\sqrt{M}\lambda_{\mathrm{att}}q_{\mathrm{att}}}\). At the optimal gate angle \(\tau^{*}=\sqrt{2r}\) we thus obtain the following required number of shots \(M_{\mathrm{Tetris}}\) for a precision \(\epsilon\): \[M_{\mathrm{Tetris}}\sim\frac{e^{4t\sqrt{2r}\sum_{n=1}^{N}|c_{n}|}}{\epsilon^{2 }}\,. \tag{14}\] One notes that the precision \(\epsilon\) appears only as a multiplicative factor \(1/\epsilon^{2}\), and not in the exponential. In the Trotter algorithm, any choice of Trotter step \(\tau\) comes with a certain minimal error, which scales as \(Nt^{2}C\tau\) for the first-order Trotter formula, with some coefficient \(C\). 
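To make the classical sampling part of steps (2)-(3) and the attenuation (10) concrete, here is a minimal NumPy sketch of drawing one tetris and computing \(\lambda_{\rm att}\). The function names, toy coefficients, and noise value are our own illustrative choices, and the quantum part (applying the gates and the Hadamard test) is deliberately not included:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_tetris(coeffs, taus, t, rng):
    """Sample one 'tetris': for each Hamiltonian term n, draw gate times from a
    Poisson process with rate |c_n|/sin(tau_n) on [0, t], then merge all times
    into one chronologically ordered gate sequence (steps 2 and 3)."""
    times, labels = [], []
    for n, (c, tau) in enumerate(zip(coeffs, taus)):
        m_n = rng.poisson(abs(c) * t / np.sin(tau))   # number of occurrences of term n
        times.append(rng.uniform(0.0, t, size=m_n))   # uniform times given the count
        labels.append(np.full(m_n, n))
    times, labels = np.concatenate(times), np.concatenate(labels)
    order = np.argsort(times)
    # each entry: (term index, sign); the gate applied is exp(i tau_n sgn(c_n) O_n)
    return [(int(k), np.sign(coeffs[int(k)])) for k in labels[order]]

def attenuation(coeffs, taus, t):
    """Known denominator lambda_att of Eq. (10)."""
    return np.exp(-2 * t * np.sum(np.abs(coeffs) * np.tan(np.asarray(taus) / 2)))

# toy example: H = 0.5*O_1 - 1.2*O_2, noise-optimal angles tau* = sqrt(2r), Eq. (12)
coeffs, r, t = np.array([0.5, -1.2]), 2e-3, 1.0
taus = np.full(len(coeffs), np.sqrt(2 * r))
seq = draw_tetris(coeffs, taus, t, rng)
print(len(seq), attenuation(coeffs, taus, t))
```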
To reach precision \(\epsilon\), we thus necessarily have to take \(\tau<\epsilon/(Nt^{2}C)\), hence run a circuit with more than \(N^{2}t^{3}C/\epsilon\) gates. Then the imperfections in the hardware incur a signal attenuation of order \(e^{-N^{2}t^{3}Cr/\epsilon}\). The number of shots thus has to be at least \[M_{\mathrm{Trotter}}\sim\frac{e^{2N^{2}t^{3}Cr/\epsilon}}{\epsilon^{2}}\,. \tag{15}\] Higher-order Trotter formulas achieve better scalings with fractional powers of \(\epsilon\) appearing in the exponential, but are generally too costly to be implemented on present-day hardware. Comparing with the exponent in (14), we thus see that under the assumption of a depolarizing noise model our algorithm requires exponentially fewer shots for a given precision \(\epsilon\) when \[\epsilon<\frac{t^{2}N^{2}C\sqrt{r}}{2\sqrt{2}\sum_{n=1}^{N}|c_{n}|}\,. \tag{16}\] Alternatively, without assuming a depolarizing noise model, we can say that the noise sets an upper limit \(\sim 1/r\) for the circuit depth on the hardware. This sets a lower limit for the precision \(\epsilon\sim Nt^{3}Cr\) that can be achieved with a Trotter decomposition, whereas in our algorithm arbitrary precision can be reached by increasing the number of shots. _Time-dependent Hamiltonian.--_ The algorithm can be generalized straightforwardly to dynamics with a time-dependent Hamiltonian \(H(t)\). Decomposing \[H(t)=\sum_{n=1}^{N}c_{n}(t)O_{n}\,, \tag{17}\] with time-dependent coefficients \(c_{n}(t)\), and denoting \(|\psi(t)\rangle=\mathcal{T}\exp\left(i\int_{0}^{t}\mathrm{d}sH(s)\right)| \psi(0)\rangle\), the expectation value of operators \(\mathcal{M}\) can again be written as an average over tetrises (9). The tetrises \(|\psi_{T}\rangle\) are produced by drawing times \(0<t_{n}^{(1)}<...<t_{n}^{(m_{n})}<t\) from a Poisson process with time-dependent rate \(|c_{n}(s)|/\sin\tau_{n}\), for each \(n=1,...,N\), and applying on the initial wave function each gate \(e^{i\tau_{n}\mathrm{sgn}\left(c_{n}(s)\right)O_{n}}\) ordered by time of occurrence \(s=t_{n}^{(i)}\). The attenuation factor is then obtained by replacing \(t|c_{n}|\) by \(\int_{0}^{t}\mathrm{d}s|c_{n}(s)|\). In practice, we proceed as follows. We introduce the function \[z_{n}(u)=\int_{0}^{u}\mathrm{d}s|c_{n}(s)|\,, \tag{18}\] as well as its inverse \(z_{n}^{-1}(u)\), i.e. such that \(z_{n}^{-1}(z_{n}(u))=u\). We replace step (2) of the time-independent case by the step given below. Figure 2: Expectation value of \(Z\) in the 2D Ising model at \(h=3\) in size \(3\times 4\) computed with (9). Left panel: as a function of time \(t\), with gate angles \(\tau=0.04\), with \(100\) tetrises (teal) and \(1000\) tetrises (orange). For this value of \(\tau\), there are around \(3000\times t\) gates \(e^{i\tau O}\) per circuit. Right panel: as a function of the number of samples, for \(t=1\) and \(\tau=0.04\) (purple) and \(\tau=0.08\) (olive). The exact value is indicated by the black continuous curves in both panels. In the right panel, the dashed black line indicates the value obtained with a Trotter decomposition with a time step \(\tau=0.04\). 2. For each \(n\in\{1,...,N\}\), draw an integer \(m_{n}\) from a Poisson distribution with parameter \(\frac{z_{n}(t)}{\sin|\tau_{n}|}\). Then draw \(m_{n}\) real numbers \(\tilde{t}_{n}^{(i)}\), where \(i\in\{1,...,m_{n}\}\), uniformly at random between \(0\) and \(z_{n}(t)\), and set \(t_{n}^{(i)}=z_{n}^{-1}(\tilde{t}_{n}^{(i)})\). These \(M\equiv\sum_{n=1}^{N}m_{n}\) numbers \(t_{n}^{(i)}\) are collected into a set \(T\). 
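A minimal sketch of this modified step (2), with \(z_{n}\) tabulated numerically and inverted by interpolation, is given below; the helper name, the grid resolution, and the use of the adiabatic Ising ramp \(h(t)\) (introduced later in the Letter) as a test coefficient are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_times_inhomogeneous(c_fun, t, tau, rng, grid=4096):
    """Modified step (2): sample the gate times of one Hamiltonian term from an
    inhomogeneous Poisson process with rate |c(s)|/sin(tau) on [0, t], using
    z(u) = integral_0^u |c(s)| ds and its inverse z^{-1} (tabulated numerically)."""
    s = np.linspace(0.0, t, grid)
    c = np.abs(c_fun(s))
    # cumulative trapezoidal approximation of z(s)
    z = np.concatenate([[0.0], np.cumsum(0.5 * (c[1:] + c[:-1]) * np.diff(s))])
    m = rng.poisson(z[-1] / np.sin(tau))           # number of gates for this term
    zt = rng.uniform(0.0, z[-1], size=m)           # uniform in the z-coordinate...
    return np.sort(np.interp(zt, z, s))            # ...mapped back through z^{-1}

# toy example: ramp coefficient h(t) of the adiabatic Ising test, with h_f = 2.5
h_f, T_f, tau = 2.5, 4.0, 0.04
h = lambda s: h_f * np.sin(0.5 * np.pi * np.sin(0.5 * np.pi * s / T_f) ** 2) ** 2
times = draw_times_inhomogeneous(h, T_f, tau, rng)
print(len(times), times[:5])
```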
Then in step (3), the gate applied is \(e^{i\tau_{k_{m}}\text{sgn}\left(c_{k_{m}}(s)\right)O_{k_{m}}}\), where \(s\in\{t_{n}^{(i)}\}\) is the time at which the gate is applied. For the backward propagation one has to reverse the order of the gates. The total attenuation \(\lambda_{\rm att}\) of step (6) is \[\lambda_{\rm att}=\exp\left(-2\sum_{n=1}^{N}z_{n}(t)\tan(\tau_{n}/2)\right)\,. \tag{19}\] This algorithm produces the _exact_ time evolution of the wave function under the time-dependent Hamiltonian \(H(t)\), without errors arising from discretizing the continuous function \(c_{n}(s)\) or Trotter errors. _Background Hamiltonian.--_ The algorithm has the following simple improvement. Let us consider a subset of gates \(\mathcal{O}\subset\{O_{n}\}_{n=1,..,N}\) that all commute among themselves, i.e. \([O,O^{\prime}]=0\) for all \(O,O^{\prime}\in\mathcal{O}\), and denote \(H_{\rm background}=\sum_{O_{p}\in\mathcal{O}}c_{p}O_{p}\). If for these gates we take the limit of zero angle \(\tau_{p}\to 0\) in the algorithm, the Poisson process almost always picks gates in \(\mathcal{O}\) with very short angle. But since they commute, this can be implemented with a finite product of \(e^{i\tau O}\) with \(\tau\) obtained by summing the different angles. Hence the algorithm is modified as follows. In step (2) we only draw times corresponding to the gates _not_ in \(\mathcal{O}\), and use them to produce an ordered sequence of gates as in step (3). However, between each application of these gates \(e^{i\tau_{n}O_{n}}\) and \(e^{i\tau_{m}O_{m}}\) at times \(s_{n}<s_{m}\), we apply \(e^{i\delta sH_{\rm background}}\) with \(\delta s=s_{m}-s_{n}\) the difference between the two instants at which the gates are applied. This reduces the attenuation factor \(\lambda_{\rm att}\), which is now computed _only_ including the gates _not_ in \(\mathcal{O}\). The Hamiltonian \(H_{\rm background}\) thus plays the role of a "background" Hamiltonian that is always applied on the system, but on top of which the gates \(e^{i\tau_{n}O_{n}}\) with \(O_{n}\notin\mathcal{O}\) are applied. _Numerical tests: square lattice Ising model.--_ We present numerical tests of our algorithm on the 2D square lattice Ising model in a transverse field \(h\), with Hamiltonian \[H=-\sum_{\langle i,j\rangle}Z_{i}Z_{j}-h\sum_{j}X_{j}\,, \tag{20}\] with periodic boundary conditions. In Figure 2 we show \(\langle Z(t)\rangle\), with the initial state being the \(Z=+1\) product state, as computed with formula (9). We show the convergence of (9) towards the exact value in the limit of a large number of tetrises, even if the gate angle \(\tau\) is finite. The convergence is faster when the angle \(\tau\) is small, but the corresponding quantum circuits then involve more gates. We then consider the time-dependent case of our algorithm with the adiabatic preparation of the ground state at \(h_{f}=2.5\), starting from \(h=0\). For a final time \(T_{f}\) we consider the time-dependent magnetic field \(h(t)=h_{f}\sin\left(\frac{\pi}{2}\sin\left(\frac{\pi t}{2T_{f}}\right)^{2}\right)^{2}\)[61]. In the left panel of Figure 3 we show the final energy per site obtained at \(t=T_{f}\), as a function of \(T_{f}\), together with the error bars for a fixed number of shots. We see the agreement with the exact values, as well as the broadening of the error bars as \(T_{f}\) grows at fixed \(\tau\), as imposed by (10). _Noisy numerical tests: quantum chemistry.--_ We now consider a noisy application of our algorithm on a non-sparse Hamiltonian. 
Hamiltonians describing molecular electronic structure problems are typically decomposed in a basis set of orbitals, where they read \[H=\sum_{ij=1}^{N}h_{ij}c_{i}^{\dagger}c_{j}+\sum_{ijkl=1}^{N}h_{ijkl}c_{i}^{ \dagger}c_{j}^{\dagger}c_{k}c_{l}\,, \tag{21}\] with the \(c\)'s canonical fermionic operators and \(N\) the number of orbitals considered. Written in terms of Pauli gates through a Jordan-Wigner transformation, these molecular Hamiltonians thus involve \(\mathcal{O}(N^{4})\) terms, each consisting of \(\mathcal{O}(N)\) Pauli operators, incurring a prohibitively large gate count for any Trotter implementation of the dynamics. We consider an H\({}_{2}\)O water molecule in a STO-6G basis with \(14\) orbitals, and use our algorithm to study the Loschmidt echo \[\mathcal{L}(t)=\langle HF|e^{itH}|HF\rangle\,, \tag{22}\] where \(|HF\rangle\) denotes the Hartree-Fock ground state. We take a geometry with an H-O-H angle of \(105^{\circ}\) and with an elongated H-O distance of \(2.2\) Å. This geometry is chosen so as to have a more significant difference between the exact ground state and \(|HF\rangle\). Our algorithm for expectation values is straightforwardly modified to compute \(\mathcal{L}(t)\) by skipping steps (4) and (5) and using \(\sqrt{\lambda_{\rm att}}\) in step (6), as we have \(\mathcal{L}(t)=\mathbb{E}[\langle HF|\psi_{T}\rangle]/\sqrt{\lambda_{\rm att}}\). Figure 3: Left: _Adiabatic state preparation_. Energy per site at time \(t=T_{f}\) as a function of \(T_{f}\), for the Ising model in size \(3\times 4\) with the time-dependent magnetic field \(h(t)\) quoted in the text, starting from the \(Z=1\) product state: noiseless simulations with \(10^{5}\) tetrises and \(\tau=0.04\) (teal, the shade indicating one standard deviation), and exact value (black). Right: _Chemistry Hamiltonian._ Ratio \(R(t)\) in (23) as a function of \(t\), exact value (black), noiseless simulations (teal) and noisy simulations (orange) with a two-body gate fidelity \(0.982\). In all cases the angles are given by (12). Each run of the algorithm uses \(2000\) tetrises and \(20\) shots per tetris, and each tetris has on average \(144\) two-body gates. On noisy hardware, the attenuation due to \(q_{\rm att}\) has to be mitigated. One strength of our algorithm is that one can reduce the depth of the circuits, and so the noise, by increasing the angles \(\tau\), without affecting the precision. Alternatively, one may compute the ratio \[R(t)=\frac{\Im\mathcal{L}(t)}{\Re\mathcal{L}(t)}\,. \tag{23}\] This ratio should be noise-free if the noise on the hardware can be well approximated by a depolarizing channel, while in principle containing information about the energy of all states that have non-zero overlap with \(|HF\rangle\), including the ground state, as they appear in the oscillation frequencies of the numerator and denominator. In the right panel of Figure 3 we show a noisy simulation of our algorithm that includes a depolarizing channel after each \(2\)-qubit gate, and observe excellent agreement. A few comments on a hardware implementation of our algorithm are in order. Implementing (9) with an actual digital quantum circuit requires performing a Hadamard test, as one has to average the amplitude \(\langle\psi_{T^{\prime}}|\mathcal{M}|\psi_{T}\rangle\) over the tetrises \(T,T^{\prime}\), and not its absolute value squared. One also has to choose a certain number of shots \(M\) to perform per tetris, i.e. how to distribute the resources between the number of tetrises and the accuracy of each tetris. 
Neglecting compilation time, the optimal number of shots can be seen to be \(M=1\). Separately, the numerical data reported in this Letter underestimates the efficiency of the algorithm, since no background Hamiltonian techniques were used. A more quantitative study of its efficiency would be interesting to conduct. _Interpolation between quantum time evolution and path integral representation.--_ Some comments about the upper and lower limits of the angle \(\tau\) are in order. When \(\tau\to 0\), the different gates \(e^{i\tau O_{n}}\) commute at order \(\tau\) and there are increasingly more gates drawn from the Poisson process. With probability \(1\) in this limit, a single tetris \(|\psi_{T}\rangle\) is equal to the exact time evolution \(e^{iHt}|0\rangle\), and the attenuation factor (10) is equal to \(1\). When \(\tau=\pi/2\), we have \(e^{i\tau O_{n}}=iO_{n}\), and so when \(O_{n}\) is a string of Pauli operators, the gates can be implemented with only one-qubit gates. Each tetris \(|\psi_{T}\rangle\) thus remains a product state in the \(Z\) basis and describes a classical trajectory in discrete time. Thus (9) becomes akin to a path integral representation, in the sense that a quantum expectation value is expressed as an average over exponentially many classical trajectories. These two limits provide an interesting interpretation of our algorithm as an _interpolation_ between a single quantum trajectory for \(\tau=0\), and exponentially many classical trajectories for \(\tau=\pi/2\). Intermediate values of \(\tau\) near \(\pi/2\) allow one to decrease the entanglement present in each tetris \(|\psi_{T}\rangle\) compared to the exact time-evolved state \(e^{iHt}|0\rangle\), but at the cost of increasing the number of samples required, through the attenuation factor (10). This could find applications in the classical simulation of quantum dynamics, where the major obstacle is the entanglement in \(e^{iHt}|0\rangle\) that grows linearly with time. One could instead implement (9) classically by choosing \(\tau>0\) such that each tetris \(|\psi_{T}\rangle\) has sufficiently little entanglement to be represented with tensor network techniques. This would come at the cost of summing over multiple samples because of the attenuation factor (10), but this sampling can be efficiently parallelized. _Discussion.--_ We presented a quantum algorithm to compute continuous Hamiltonian dynamics that requires a circuit depth that is independent of the desired precision \(\epsilon\), contrary to previously known algorithms that require an infinite depth when \(\epsilon\to 0\). We show that by randomly applying gates on the initial state according to a well-chosen distribution, and multiplying the result by a known amplification factor, one achieves zero Trotter error while having finite-depth circuits. This amplification factor however increases the number of shots required to reach a given precision. The algorithm and its mechanisms can be generalized in a number of ways. For example, decomposing a circuit into Clifford gates and \(T\)-gates, one can implement each \(T\)-gate by applying either an \(e^{iZ\pi/4}\) gate (an \(S\)-gate times a global phase) or no gate, each with probability \(1/2\). Because of the attenuation factor \(\lambda=\cos(\pi/8)\) for each replaced \(T\)-gate, this enables one to sample classically from a circuit with \(t\) many \(T\)-gates in a time \(e^{2|\log(\cos\frac{\pi}{8})|t}\) multiplied by the simulation time of the resulting Clifford circuits. 
This exponent matches the best currently known exponent [62]. We thank David Zsolt Manrique for helpful comments on the draft. E.G. acknowledges support by the Bavarian Ministry of Economic Affairs, Regional Development and Energy (StMWi) under project Bench-QC (DIK0425/01).
2304.02887
Design and Control of a Ballbot Drivetrain with High Agility, Minimal Footprint, and High Payload
This paper presents the design and control of a ballbot drivetrain that aims to achieve high agility, minimal footprint, and high payload capacity while maintaining dynamic stability. Two hardware platforms and analytical models were developed to test design and control methodologies. The full-scale ballbot prototype (MiaPURE) was constructed using off-the-shelf components and designed to have agility, footprint, and balance similar to that of a walking human. The planar inverted pendulum testbed (PIPTB) was developed as a reduced-order testbed for quick validation of system performance. We then proposed a simple yet robust LQR-PI controller to balance and maneuver the ballbot drivetrain with a heavy payload. This is crucial because the drivetrain is often subject to high stiction due to elastomeric components in the torque transmission system. This controller was first tested in the PIPTB to compare with traditional LQR and cascaded PI-PD controllers, and then implemented in the ballbot drivetrain. The MiaPURE drivetrain was able to carry a payload of 60 kg, achieve a maximum speed of 2.3 m/s, and come to a stop from a speed of 1.4 m/s in 2 seconds in a selected translation direction. Finally, we demonstrated the omnidirectional movement of the ballbot drivetrain in an indoor environment as a payload-carrying robot and a human-riding mobility device. Our experiments demonstrated the feasibility of using the ballbot drivetrain as a universal mobility platform with agile movements, minimal footprint, and high payload capacity using our proposed design and control methodologies.
Chenzhang Xiao, Mahshid Mansouri, David Lam, Joao Ramos, Elizabeth T. Hsiao-Wecksler
2023-04-06T06:33:20Z
http://arxiv.org/abs/2304.02887v1
# Design and Control of a Ballbot Drivetrain with High Agility, Minimal Footprint, and High Payload ###### Abstract This paper presents the design and control of a ballbot drivetrain that aims to achieve high agility, minimal footprint, and high payload capacity while maintaining dynamic stability. Two hardware platforms and analytical models were developed to test design and control methodologies. The full-scale ballbot prototype (MiaPURE) was constructed using off-the-shelf components and designed to have agility, footprint, and balance similar to that of a walking human. The planar inverted pendulum testbed (PIPTB) was developed as a reduced-order testbed for quick validation of system performance. We then proposed a simple yet robust LQR-PI controller to balance and maneuver the ballbot drivetrain with a heavy payload. This is crucial because the drivetrain is often subject to high stiction due to elastomeric components in the torque transmission system. This controller was first tested in the PIPTB to compare with traditional LQR and cascaded PI-PD controllers, and then implemented in the ballbot drivetrain. The MiaPURE drivetrain was able to carry a payload of 60 kg, achieve a maximum speed of 2.3 m/s, and come to a stop from a speed of 1.4 m/s in 2 seconds in a selected translation direction. Finally, we demonstrated the omnidirectional movement of the ballbot drivetrain in an indoor environment as a payload-carrying robot and a human-riding mobility device. Our experiments demonstrated the feasibility of using the ballbot drivetrain as a universal mobility platform with agile movements, minimal footprint, and high payload capacity using our proposed design and control methodologies. Body Balancing, Wheeled Robots, Underactuated Robots ## I Introduction In this study, we proposed the development of a modular ballbot drivetrain as a universal mobility platform (Fig. 1). Ballbots, or ball balancing robots, are a family of dynamically stable mobile robots riding on top of a ball, or a spherical wheel [1, 2, 3]. The unique drivetrain design has a nonholonomic constraint [4]: it enables omnidirectional maneuverability such that the device can move, or translate, in any direction and spin around its vertical axis independently. Our goal was to create a ballbot drivetrain that can be configured with different top modules and controlled through a remote control device or physical human-robot interaction. The potential applications for this platform could range from standalone tasks, such as package delivery and surveillance, to collaborative tasks such as mobile manipulation and human riding (Fig. 1). Among them, the human riding task presents unique challenges due to the high load capacity, agile locomotion, and safety requirements involved. To address these challenges, we explored and validated a drivetrain prototype of a mobility platform called Modular interactive adaptive Personal Unique Rolling Experience (MiaPURE). MiaPURE is a ballbot with a minimal footprint that can carry the weight of a human, navigate in a constrained space with its omnidirectional maneuverability, and enable intuitive control via physical interactions, such as maneuvering via torso leaning for human riding tasks. Building a ballbot with high agility, minimal footprint, and high load capacity is a nontrivial task due to a lack of standard guidelines for selecting or customizing drivetrain components, including the actuators, omniwheels, and the spherical wheel.
A few previous ballbots with load capacities similar to human weight have been developed. These devices include the CMU ballbot [4], OmniRide [5], OmniRide2 [6], and Ball Segway [7]. However, few benchmark results have been provided for these ballbot drivetrains based on performance specifications, such as the maximum speed and minimum braking time. Moreover, most of these ballbots lacked the compactness required for navigation in a constrained indoor environment. Therefore, we needed to revisit design and control methodologies that could address these challenges and serve as a benchmark for future research. Controlling a ballbot for balancing and maneuvering is also challenging, not only due to its nonlinear, unstable zero dynamics, but also due to the unmodeled friction and dynamics in the torque transmission system. Fig. 1: Prototype of MiaPURE drivetrain balancing with a 60 kg payload, and CAD renderings of the proposed mobility platform with potential top modules for package delivery, mobile manipulation, and human riding. Model-based optimal controllers (such as the linear quadratic regulator, or LQR) [3] often fail to handle the unmodeled stiction in the system. On the other hand, empirical controllers such as the cascaded proportional-integral-proportional-derivative (PI-PD) controller [2, 4, 8] lack optimality and are often hard to tune and less robust to changes in system parameters such as the payload weight. In this case, a model-based controller with stiction compensation capability would be desirable, especially for a ballbot drivetrain with heavy payloads. A few researchers implemented a sliding mode controller to handle uncertainties and unmodeled dynamics [9, 10], but it is often subject to chattering and difficulty in tuning when implemented in physical hardware [11]. The main contribution of this study is the mechanical design of a minimal footprint, high payload ballbot drivetrain, and an investigation into a cascaded LQR-PI controller to achieve high agility. The mechanical design of the MiaPURE drivetrain and a reduced-order planar inverted pendulum testbed (PIPTB) are detailed in Section II. Section III reviews the system modeling and trajectory optimization for a safety-critical braking task for later benchmark experiments. Section IV presents the cascaded LQR-PI controller, which combines the advantages of both model-based LQR and PI controllers to compensate for unmodeled friction in the system. The PIPTB is used to preliminarily validate the controller performance. Section V evaluates the performance of the full-sized MiaPURE drivetrain with a heavy payload, including the maximum speed and minimum braking time in multiple translation directions, along with demonstrations of payload carrying and human riding. Section VI discusses insights from the experiments, system limitations, typical failure modes, and future work directions, followed by a conclusion in Section VII. ## II Hardware Platforms Two hardware platforms were developed: the MiaPURE drivetrain and a reduced-order PIPTB ballbot model. The MiaPURE drivetrain was designed to carry loads of up to 80 kg, while being comparable in size to an office chair, with a height of under 50 cm and a footprint of under 40 cm \(\times\) 40 cm (approximately the width of an adult male's shoulders [12]). It was designed to move alongside humans with agility similar to human walking, with a maximum speed of 2 m/s and a braking time of 2 s from a cruise speed of 1.4 m/s (the preferred walking speed of humans [13]).
The benchtop PIPTB version was designed to capture the unstable planar dynamics of the ballbot. It was constructed using similar hardware and served as a testbed for controller investigation and comparison. ### _MiaPURE - Ballbot Drivetrain_ The drivetrain was designed following a conventional ballbot configuration with omniwheel (OW) placement similar to BallIP [2] and Rezero [3]. Three OW-actuator pairs separated by \(120^{\circ}\) were utilized in the design, with each OW contacting the upper surface of the spherical wheel (SW), forming a contact angle (with respect to the vertical axis) of \(45^{\circ}\) (Fig. 2). #### Ii-A1 Omniwheel The OWs were responsible for supporting the payload and withstanding the torque generated by the motor. For this purpose, we selected single-plate OWs with 125 mm diameter, 60 kg load capacity, and TPU-coated rollers from SeCure Inc. in China. With the proposed OW configuration, these OWs provided a theoretical static load capacity of up to 127 kg (\(3\cos(\alpha)F_{LC}=127\) kg, where \(F_{LC}\) is the load capacity of each OW, and \(\alpha\) is the contact angle between the OW and the SW as mentioned earlier). #### Ii-A2 Actuator We used Quasi-Direct-Drive (QDD) actuators, which are commonly used in legged robots, to provide high power density and high backdrivability [14, 15, 16]. The actuators were customized using a brushless direct current (BLDC) motor and a planetary gearbox with a reduction ratio of 7.5 from T-motor Inc. in China. The resulting actuator had a diameter of 98 mm, a maximum torque of 43.2 Nm, and a no-load speed of 62 rad/s when powered by a 45 V source (without considering the efficiency of the motor driver). #### Ii-A3 Spherical Wheel The SW required a high load capacity, traction with the OWs, high torque transmission bandwidth, and a no-slip condition with the ground to ensure adequate yaw control authority. To meet these requirements, we fabricated our own SW using an off-the-shelf bowling ball and attached pentagon- and hexagon-shaped pieces of 60A SBR rubber (with a thickness of 6.35 mm) using adhesives, as shown in Fig. 3. The resulting prototype is a 22.9 cm diameter SW with a weight of 3.6 kg and an estimated moment of inertia of 0.047 \(\mathrm{kg\,m^{2}}\). #### Ii-A4 Electrical System The main electronic components include a micro-controller (roboRIO Robotics Controller, National Instruments Inc., USA), two motor drivers (ODrive V3.6, ODrive Robotics Inc., USA) to control three QDD actuators, and an inertial measurement unit (VN100, VectorNav Inc., USA) for sensing upper body tilting. #### Ii-A5 Overview The final prototype of the MiaPURE drivetrain weighed 17.9 kg. The major drivetrain components, including the motors, OWs, and SW, had a total weight of 10.4 kg, and the overall weight of the system can be further reduced with lighter chassis materials and a lighter SW. The drivetrain has a height of 45 cm and a maximum footprint of 36 cm \(\times\) 36 cm without the support ring (51 cm \(\times\) 51 cm with it), which is comparable to the design target of 50 cm (height) \(\times\) 40 cm \(\times\) 40 cm (footprint). More details of the mechanical and electrical system of this drivetrain can be found in [17]. Fig. 2: Mechanical design of the ballbot drivetrain for MiaPURE. ### _PIPTB - Controller Testbed_ The PIPTB, a physical embodiment of the ballbot model, was constructed using mechanical and electrical components similar to those used in the MiaPURE drivetrain (Fig. 4).
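Returning briefly to the omniwheel sizing in Sec. II-A1, a quick numeric check of the quoted static load capacity (a worked example, not from the paper):

```python
import math

# Static load capacity of the three-omniwheel configuration: 3 * cos(alpha) * F_LC,
# with the stated 45-degree contact angle and 60 kg per-omniwheel rating.
F_LC = 60.0                      # load capacity of each OW [kg]
alpha = math.radians(45)         # OW-SW contact angle w.r.t. the vertical axis
capacity = 3 * math.cos(alpha) * F_LC
print(f"theoretical static load capacity: {capacity:.0f} kg")  # -> ~127 kg
```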
The PIPTB's design is advantageous because it captures the unstable planar dynamics of the full-scale ballbot while lowering system complexity, allowing experimental investigations with more controlled system parameters and environment. The PIPTB is half the size and one-fifth the weight of the full-scale MiaPURE drivetrain (with payload). Although the testbed does not share the same static and dynamic friction properties as the full-scale system, it can still be used to evaluate controller performance. Further details on the mechanical and electrical systems are available in [17]. ## III Modeling & Simulation Analytical models were derived for controller development and braking performance validation. The motion of the ballbot in 3D space was decomposed into dynamic models in three orthogonal planes. State and input trajectories for the ballbot during the braking task were further obtained using a planar model of the ballbot. ### _Planar Models_ Planar models were used to describe the ballbot dynamics in the transverse, sagittal, and frontal planes, assuming negligible coupling between these three planes during spinning and translational motions (Fig. 5a). The transverse plane is perpendicular to the centerline of the upper body when fully upright, the sagittal plane is defined as the plane that intersects one of the actuators, and the frontal plane is orthogonal to the sagittal plane (Fig. 5a). #### Iii-A1 Translation Models Translational movements were decoupled into movements in the sagittal and frontal planes. The complex interactions between the OWs and SW were simplified to a torque applied to the center of the SW through a virtual revolute joint for each plane (Fig. 5b). The resultant system is a classic wheeled-inverted-pendulum (WIP) with two generalized coordinates (\(\theta\) for the upper body tilt angle and \(\phi\) for the angular position of the SW, which are each relative to the vertical axis) (Fig. 5b). The equations of motion of the WIP model in the sagittal and frontal planes were obtained using the Euler-Lagrange method [18]. The derivation of these equations is detailed in [17]. \[\boldsymbol{\ddot{q}_{j}}=f_{dj}(\boldsymbol{s_{j}},\tau_{j}) \tag{1}\] where \(\boldsymbol{\ddot{q}_{j}}=[\ddot{\theta}_{j},\ddot{\phi}_{j}]^{T}\), \(f_{dj}(\boldsymbol{s_{j}},\tau_{j})\) is the equation of motion of the WIP model, \(\boldsymbol{s_{j}}=[\theta_{j},\phi_{j},\dot{\theta}_{j},\dot{\phi}_{j}]^{T}\) is the state-space vector, \(\tau_{j}\) is the torque applied to the SW, and subscript \(j=y\) is for system states and input torque in the sagittal plane (i.e. \([\theta_{y},\phi_{y},\tau_{y}]\)) and \(j=x\) for those in the frontal plane (i.e. \([\theta_{x},\phi_{x},\tau_{x}]\)). The system dynamics of the PIPTB can be captured by the same WIP model derived in the previous section, using a generalized coordinate system of \(\boldsymbol{q_{P}}=[\theta_{P},\phi_{P}]^{T}\), where \(\theta_{P}\) represents the chassis tilt angle relative to the pole and \(\phi_{P}\) is the angular displacement of the wheel. Fig. 3: Physical prototype of the ballbot drivetrain viewed facing (a) OW1 and (b) OW2 and OW3. Fig. 4: Physical prototype of the PIPTB while it is balancing. Fig. 5: (a) Three individual planes defined for the planar models and the input torque applied to each model. (b) Translation model of the ballbot in the sagittal plane. (c) Spin model in the transverse plane; the black circle represents the upper body of the ballbot from the top view.
Similarly, we further have \(\mathbf{s_{P}}=[\theta_{P},\phi_{P},\dot{\theta}_{P},\dot{\phi}_{P}]^{T}\) and \(\tau_{P}\) as the state-space vector and input torque of the PIPTB system, respectively. Its equation of motion was derived as \(\ddot{\boldsymbol{q}}_{\boldsymbol{P}}=f_{dP}(\mathbf{s_{P}},\tau_{P})\). #### Iii-A2 Spin Model The spinning motion of the ballbot was modeled as a single rigid body spinning around a vertical axis passing through the center of the SW in the transverse plane (Fig. 5c). Assuming no spin motion between the SW and ground [3, 4], the system dynamics can be determined as \[I_{z}\ddot{q}_{z}+D_{z}(\dot{q}_{z})=\tau_{z} \tag{2}\] where \(q_{z}=\theta_{z}\) is the yaw angle, \(I_{z}\) is the lumped moment of inertia of the upper body and OWs, and \(D_{z}\) represents the viscous friction torque during spinning. The detailed derivation can be found in [17]. We further have the equation of motion (\(f_{dz}\)) of the spin model \[\ddot{q}_{z}=f_{dz}(\mathbf{s_{z}},\tau_{z}) \tag{3}\] where \(\ddot{q}_{z}=\ddot{\theta}_{z}\) and \(\mathbf{s_{z}}=[\theta_{z},\dot{\theta}_{z}]^{T}\) is the state vector of the spin model. ### _Conversion to 3D Model_ For the purpose of the control system, we need to convert the state vectors and SW torques in the planar models (\(\mathbf{s_{j}},\tau_{j}\)) into the individual OW speeds and motor torques. The following conversion equations were obtained by equating the linear velocities of the OWs and SW at their contact points [17]: \[\begin{split}\mathbf{\dot{\psi}}&=V_{3D}^{-1}(\dot{ \phi}_{x},\dot{\phi}_{y},\dot{\theta}_{x},\dot{\theta}_{y},\dot{\theta}_{z})\\ \mathbf{u}&=T_{3D}^{-1}(\tau_{x},\tau_{y},\tau_{z})\end{split} \tag{4}\] where \(\mathbf{\dot{\psi}}=[\dot{\psi}_{1},\dot{\psi}_{2},\dot{\psi}_{3}]^{T}\) and \(\mathbf{u}=[\tau_{1},\tau_{2},\tau_{3}]^{T}\) are the respective motor speeds and torques for the three OW-motor pairs. ### _Simulation of the Braking Task_ The WIP model was used to simulate the translational motion of the ballbot during the braking task: decelerating from 1.4 m/s to a stop in an upright orientation within 2 s in the sagittal plane. The optimized trajectories for the input torque (\(\tau_{y}^{*}(t)\)) and system states (\(\mathbf{s_{y}^{*}}(t)\)) in this task were obtained. A simple quadratic cost function \(J=\int_{t_{0}}^{t_{F}}\tau_{y}(t)^{2}dt\) was utilized to minimize the input torque during the braking task. Since these models were agnostic to a specific actuator, input torque constraints were not included and high input torque was always penalized by the objective function. The formulation of the optimization is presented below: \[\begin{split}\min_{\mathbf{s_{y}}[:],\tau_{y}[:]}& J=\int_{t_{0}}^{t_{F}}\tau_{y}(t)^{2}dt\\ \text{subject to}&\mathbf{\ddot{q}_{y}}(t)=f_{dy}(\mathbf{s_{y}}(t),\tau_{y}(t))\\ & H(t,s_{y}(t),\tau_{y}(t))\leq 0\\ & G(t_{0},t_{F},s_{y}(t_{0}),s_{y}(t_{F}))\leq 0\end{split} \tag{5}\] where \(J\) is the objective function, \(\mathbf{s_{y}}\) is the state vector and \(\tau_{y}\) is the input torque to the SW. The path constraint function \(H(\cdot)\) includes the boundaries for the system states while braking, and the boundary constraint function \(G(\cdot)\) defines the braking duration (\(t_{F}-t_{0}\)), state vector, and input at the initial (\(t=t_{0}\)) and final (\(t=t_{F}\)) conditions. This optimization problem was solved using Direct Collocation [19] in MATLAB, as detailed in [17]. The optimized state and input trajectories for the ballbot model were generated with the estimated system parameters listed in Fig. 6a.
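To illustrate how an optimization of the form (5) can be transcribed, the following sketch uses CasADi with trapezoidal collocation; the dynamics \(f_{dy}\), parameters, and constraints below are placeholders, since the true model and bounds are detailed in [17]:

```python
import casadi as ca

N, T = 40, 2.0                    # collocation intervals, braking duration [s]
dt = T / N
g, l = 9.81, 0.5                  # placeholder gravity [m/s^2] and COM height [m]
r_W, v0 = 0.1145, 1.4             # SW radius [m], initial forward speed [m/s]

def f_dy(s, tau):
    # placeholder WIP accelerations [dd_theta, dd_phi]; the true f_dy is in [17]
    return ca.vertcat(g / l * ca.sin(s[0]) - tau, tau)

opti = ca.Opti()
S = opti.variable(4, N + 1)       # states s_y = [theta, phi, dtheta, dphi]
U = opti.variable(1, N)           # SW input torque tau_y

for k in range(N):                # trapezoidal transcription of the dynamics
    fk = ca.vertcat(S[2:4, k], f_dy(S[:, k], U[0, k]))
    fk1 = ca.vertcat(S[2:4, k + 1], f_dy(S[:, k + 1], U[0, k]))
    opti.subject_to(S[:, k + 1] == S[:, k] + dt / 2 * (fk + fk1))

opti.subject_to(S[:, 0] == ca.vertcat(0, 0, 0, v0 / r_W))  # upright at 1.4 m/s
opti.subject_to(S[0, N] == 0)     # final: upright ...
opti.subject_to(S[2, N] == 0)     # ... with zero tilt rate ...
opti.subject_to(S[3, N] == 0)     # ... and stopped
opti.minimize(ca.sumsqr(U) * dt)  # J = integral of tau_y^2 dt
opti.solver("ipopt")
sol = opti.solve()                # sol.value(S), sol.value(U) give s_y*, tau_y*
```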
Optimal solutions for the system states and input torque trajectories for the braking task were successfully obtained (Fig. 6b) and were later utilized as the command state for the controller. Classic non-minimum phase behavior of the WIP dynamics can be observed from the obtained state trajectories, such that the upper body tilts backward during the braking stage, and the SW first accelerates beyond 1.4 m/s and then decelerates (Fig. 6b). There also exists a negative power region during the braking task, when the input torque is negative while the SW speed is positive, indicating back-driving of the actuators. Fig. 6: (a) System parameters for the upper body (UB) and SW utilized for the planar ballbot model, and (b) optimized state and input trajectories of the WIP model for the braking task to stop within 2 s from 1.4 m/s. ## IV Control System Development & Validation We aimed to develop a model-based controller that could handle the nonlinear stiction and other friction effects in the ballbot drivetrain while providing intuitive tuning. However, as noted in Section I, open-loop torque control using model-based methods, such as the LQR used by ETH Rezero [3], is not effective in managing high stiction in the system (Fig. 7a). On the other hand, the cascaded PI-PD controller utilized in the CMU Ballbot [4] can break the stiction through the use of the PI controller, but it is challenging to tune (Fig. 7b). In this study, we propose a novel controller, called cascaded LQR-PI control, that combines the strengths of both approaches. We validated the performance of this control scheme on the PIPTB before implementing it in the more complex ballbot drivetrain. ### _Control System Design_ The cascaded LQR-PI controller is composed of an outer linear quadratic regulator (LQR) loop and an inner proportional-integral (PI) control loop. In the LQR loop, we utilized optimal control theory and a reference WIP model to obtain the reference SW speed (\(\dot{\phi}_{r}\)). The PI control loop was utilized to ensure the tracking of the reference SW speed in the WIP plant (Fig. 7c). We first obtained the optimal SW input torque using a linear quadratic regulator (LQR) for each planar model: \[\tau_{rj}=\mathbf{k_{LQRj}}(\mathbf{s_{cj}}-\mathbf{s_{j}}) \tag{6}\] for \(j\in[x,y]\), where \(\mathbf{k_{LQRj}}=[k_{1j},0,k_{2j},k_{3j}]^{T}\) is the vector of optimal LQR control gains, \(\mathbf{s_{cj}}=[\theta_{cj},\phi_{cj},\dot{\theta}_{cj},\dot{\phi}_{cj}]^{T}\) is the command state vector, representing the commanded tilt angle, SW angular position (not regulated, since its gain is zero), tilt angular rate, and SW angular speed, respectively, and \(\mathbf{s_{j}}=[\theta_{j},\phi_{j},\dot{\theta}_{j},\dot{\phi}_{j}]^{T}\) is the measured state vector for the WIP plant. The command state vector can be generated for device control with user input devices or computer-generated command state trajectories. It should be noted that we chose not to directly control the SW position, since controlling the SW speed would be more intuitive for users in later applications.
In this case, we specifically have \[\tau_{rj}=k_{1j}(\theta_{cj}-\theta_{j})+k_{2j}(\dot{\theta}_{cj}-\dot{\theta}_{j})+k_{3j}(\dot{\phi}_{cj}-\dot{\phi}_{j}) \tag{7}\] The equation of motion of the reference WIP model (1) was then utilized to calculate the reference SW angular acceleration (\(\ddot{\phi}_{rj}\)), which is an element of \(\mathbf{\ddot{q}_{rj}}\), given the optimal input torque \(\tau_{rj}\) and the measured system state \(\mathbf{s_{j}}\). We then integrated it to obtain the reference SW angular speed \(\dot{\phi}_{rj}\), which is an element of \(\mathbf{\dot{q}_{rj}}\) (\(j\in[x,y]\)): \[\mathbf{\dot{q}_{rj}}=\int\mathbf{\ddot{q}_{rj}}\,dt=\int f_{dj}(\mathbf{s_{j}},\tau_{rj})\,dt \tag{8}\] The obtained reference SW speed was then utilized in an inner PI control loop to obtain a tracking torque (\(\tau_{ej}\)) that compensates for the speed tracking error (\(\dot{\phi}_{ej}\)) to ensure a zero steady-state error between the reference SW speed and the measured SW speed: \[\tau_{ej}=k_{Pj}\dot{\phi}_{ej}+k_{Ij}\int\dot{\phi}_{ej}\,dt \tag{9}\] where \(\dot{\phi}_{ej}=\dot{\phi}_{rj}-\dot{\phi}_{j}\) is the tracking error of the SW speed, and \(k_{Pj}\) and \(k_{Ij}\) are the proportional and integral control gains for the inner PI loop. Finally, the total input torque for the SW is the summation of the reference input torque (\(\tau_{rj}\)) obtained from the LQR and the tracking torque (\(\tau_{ej}\)) from the PI controller that compensates for unmodeled static and dynamic friction: \[\tau_{j}=\tau_{rj}+\tau_{ej} \tag{10}\] ### _Controller Implementation in PIPTB_ The LQR, the cascaded PI-PD, and the cascaded LQR-PI controllers were implemented in the PIPTB hardware. The LQR and the cascaded PI-PD controllers were directly implemented in the sbRIO controller (sbRIO-9626, National Instruments Inc., USA) at 400 Hz, and the torque command was sent to the motor driver for direct open-loop torque control. For the cascaded LQR-PI controller, the LQR outer loop was implemented in the sbRIO controller at 400 Hz. Taking advantage of the speed control capability of the motor driver, the generated reference SW speed (\(\dot{\phi}_{rP}\)) and reference motor torque (\(\tau_{rP}\)) were sent directly to the motor driver, with the PI control inner loop running at 8 kHz. LQR control gains for the LQR and cascaded LQR-PI controllers were generated using MATLAB (Mathworks Inc., USA), whereas the rest of the control gains were tuned manually. ### _Controller Validation & Comparison Experiment_ The performance of these three controllers was compared in a braking test. In this test, the PIPTB implemented with each controller was commanded to accelerate to 1 m/s within 2 s, hold at this speed for 2 s, and then brake within 1.4 s. A command speed was provided for the acceleration (\(\dot{\phi}_{cP}(t)=\frac{t}{2r_{W}},t\in[0,2)\)) and constant speed stages (\(\dot{\phi}_{cP}(t)=\frac{1}{r_{W}},t\in[2,4)\)), where \(r_{W}\) is the wheel radius. The optimal state trajectories for the PIPTB to perform the target braking task were utilized as the command state vector during the braking stage (\(\mathbf{s_{cP}}(t)=\mathbf{s_{P}^{\star}}(t),t\in[4,5.4]\)). Here, \(\mathbf{s_{P}^{\star}}(t)\) is the optimal state trajectory for the PIPTB to brake from 1 m/s within 1.4 s, obtained using the methods described in Section III-C.
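For concreteness, a minimal single-plane sketch of the control law (6)-(10) is given below; the gains and placeholder dynamics are illustrative assumptions, and on the real hardware the inner PI loop runs separately at 8 kHz inside the motor driver:

```python
import numpy as np

k1, k2, k3 = 120.0, 15.0, 8.0      # assumed LQR gains from [k1, 0, k2, k3]
kP, kI = 2.0, 40.0                 # assumed inner-loop PI gains
dt = 1.0 / 400                     # outer LQR loop period (400 Hz)

def f_d(s, tau):
    """Placeholder WIP dynamics returning [dd_theta, dd_phi] (see [17])."""
    theta, phi, dtheta, dphi = s
    return np.array([9.81 / 0.5 * np.sin(theta) - tau, tau])

class CascadedLQRPI:
    def __init__(self):
        self.dphi_r = 0.0          # reference SW speed, integrated online (8)
        self.int_err = 0.0         # integral of the SW speed tracking error

    def step(self, s, s_c):
        theta, _, dtheta, dphi = s
        theta_c, _, dtheta_c, dphi_c = s_c
        # (7): outer-loop reference torque (SW position is not regulated)
        tau_r = k1*(theta_c - theta) + k2*(dtheta_c - dtheta) + k3*(dphi_c - dphi)
        # (8): integrate the reference model to update the reference SW speed
        self.dphi_r += f_d(s, tau_r)[1] * dt
        # (9): inner PI loop compensating stiction via the speed tracking error
        err = self.dphi_r - dphi
        self.int_err += err * dt
        tau_e = kP * err + kI * self.int_err
        return tau_r + tau_e       # (10): total SW torque
```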
Three trials were repeated for each type of controller. #### Iv-C1 Controller Performance Evaluation The resultant input torque trajectory \(\tau_{P}(t)\) of the PIPTB during the braking stage for each controller was utilized to evaluate the braking performance, using the objective function defined in the optimization problem (Section III-C): \[J_{P}=\int_{t_{2}}^{t_{3}}\tau_{P}(t)^{2}dt \tag{11}\] where \(t_{2}\), \(t_{3}\) are the starting and ending times of the measured braking phase. We refer to \(J_{P}\) as the braking effort; a lower value of \(J_{P}\) is more desirable as it indicates more efficient braking behavior. The braking effort was averaged over three trials for each controller. Fig. 7: Block diagrams of (a) the LQR torque controller for the sagittal plane WIP model, (b) the cascaded PI-PD controller, and (c) the cascaded LQR-PI controller. #### Iv-C2 Results for Controller Comparison The resultant state and input torque trajectories for each controller in an exemplary trial are presented (Fig. 8). Among them, the PI-PD controller failed to follow the command trajectory to stop within 1.4 s from 1 m/s; hence it resulted in the highest averaged braking effort (\(J_{P}=(1.2\pm 0.3)\times 10^{6}\)) (Fig. 8b). The PIPTB with the LQR and LQR-PI controllers was capable of braking within 1.4 s. LQR-PI had the lowest braking effort (\(J_{P}=(8.2\pm 0.2)\times 10^{5}\)) due to a lower magnitude of input torque (Fig. 8c), whereas jittery behavior was observed for the PIPTB with the LQR controller, making the state trajectories less smooth in the braking stage (Fig. 8a). This simple experiment validated the feasibility of using a cascaded LQR-PI controller to control a WIP plant for quick braking. In addition, it further indicated the advantages of the cascaded LQR-PI controller compared with the LQR and PI-PD controllers: 1) lower braking effort, 2) smoother state and input trajectories, and 3) better trajectory tracking capability. ### _Controller Implementation in Ballbot_ Following the success of the LQR-PI controller on the PIPTB, we further implemented the LQR-PI controller in the ballbot drivetrain with the same outer-inner-loop structure. The LQR outer loop was implemented in the roboRIO controller running at 400 Hz (Fig. 9). The measured upper body tilting (\(\mathbf{\theta}=[\theta_{x},\theta_{y},\theta_{z}]^{T},\dot{\mathbf{\theta}}=[\dot{\theta}_{x},\dot{\theta}_{y},\dot{\theta}_{z}]^{T}\)) and OW speeds (\(\dot{\mathbf{\psi}}=[\dot{\psi}_{1},\dot{\psi}_{2},\dot{\psi}_{3}]^{T}\)) were first converted into the state vectors of all planar models, \(\mathbf{s}_{\mathbf{x}}=[\theta_{x},\phi_{x},\dot{\theta}_{x},\dot{\phi}_{x}]^{T},\mathbf{s}_{\mathbf{y}}=[\theta_{y},\phi_{y},\dot{\theta}_{y},\dot{\phi}_{y}]^{T}\), and \(\mathbf{s}_{\mathbf{z}}=[\theta_{z},\dot{\theta}_{z}]^{T}\), using the inverse of Eqn. 4. Next, the reference input torque vector \(\mathbf{u}_{\mathbf{r}}=[\tau_{rx},\tau_{ry},\tau_{rz}]^{T}\) was obtained using the implemented LQR controllers for all three planes, followed by the reference planar speed vector \(\dot{\mathbf{\phi}}_{\mathbf{r}}=[\dot{\phi}_{rx},\dot{\phi}_{ry},\dot{\theta}_{rz}]^{T}\) using the reference WIP models and a reference spin model.
We then converted all reference torque and speed vectors in the three planar models back to the reference motor torque vector \(\mathbf{\tau}_{\mathbf{r}}=[\tau_{r1},\tau_{r2},\tau_{r3}]^{T}\) and reference motor speed vector \(\dot{\mathbf{\psi}}_{\mathbf{r}}=[\dot{\psi}_{r1},\dot{\psi}_{r2},\dot{\psi}_{r3}]^{T}\) using Eqn. 4. The resultant reference torque and reference speed for each motor were finally utilized in the PI inner loop in each motor driver, running at 8 kHz, to obtain the command torque to the motor using Eqn. 10 (Fig. 9). ## V Physical Robot Testing The maximum speed and minimum braking time of the MiaPURE drivetrain were evaluated while carrying a payload of 60 kg, with its COM height close to that of a seated human. Only a 60 kg payload was utilized at this investigation stage, for the safety of the device and researchers. From the design target, we specified a maximum speed of 2 m/s, as well as a minimum braking time of 2 seconds when driving at 1.4 m/s. The feasibility of using the MiaPURE drivetrain for remote control and human-riding tasks was also validated. ### _Testing Protocol_ The following benchmark tests evaluated the maximum speed and the minimum braking time from 1.4 m/s of the MiaPURE drivetrain. The LQR and the cascaded PI-PD controllers were not formally tested due to their poor performance in the balancing task for the MiaPURE drivetrain during pilot studies. Therefore, only the cascaded LQR-PI controller was utilized to control the maneuvers of the MiaPURE drivetrain for these benchmark tests. The maximum speed was evaluated by providing a slowly ramping velocity trajectory for the balancing controller to drive the MiaPURE. The maximum speed until system failure (loss of dynamic stability) was recorded and averaged over three trials. Due to its omnidirectional maneuverability, we evaluated this performance when the drivetrain translated towards \(h_{z}=0^{\circ}\), \(90^{\circ}\), and \(180^{\circ}\) (Fig. 10b). The benchmark test on the minimum braking time was only conducted if the drivetrain was capable of reaching more than 1.4 m/s stably during the maximum speed benchmark test. In this test, we commanded the robot to follow a set of trajectories composed of slow acceleration, constant speed (1.4 m/s), and braking phases. The optimal state trajectories for the braking phase were first generated with a target braking time of 5 s using the methods in Section III-C. Upon successful trial completion, the robot was commanded to follow another set of state trajectories with the braking time reduced by 0.5 s, until system failure. Fig. 8: Resultant state (\(\theta_{P}(t),\dot{\phi}_{P}(t)\)) and input torque (\(\tau_{P}(t)\)) trajectories for the PIPTB during the braking task using (a) the LQR controller, (b) the PI-PD controller, and (c) the LQR-PI controller. The grey areas are the braking phase for these trials. \(\dot{\phi}_{cP}\) is the command wheel speed of the PIPTB. Fig. 9: Implementation of the cascaded LQR-PI controller on the physical hardware of MiaPURE, including the main controller running at 400 Hz and the motor drivers at 8 kHz. Signals from the encoders and IMU were first converted into states of the planar models to obtain the corresponding feedforward torque (\(\mathbf{u}_{\mathbf{r}}\)) and speed command (\(\dot{\mathbf{\phi}}_{\mathbf{r}}\)) of the SW. They were then converted back to the feedforward torque (\(\mathbf{\tau}_{\mathbf{r}}\)) and speed command (\(\dot{\mathbf{\psi}}_{\mathbf{r}}\)) of each motor.
### _Results_ The MiaPURE drivetrain demonstrated distinct maximum speeds for different translation directions. Translating in \(h_{z}=0^{\circ}\) resulted in the lowest maximum speed of less than 0.6 m/s (Fig. 11a), while moving in \(h_{z}=180^{\circ}\) resulted in the highest maximum speed of 2.3 m/s (Fig. 11c). The braking experiment was only performed with the MiaPURE drivetrain translating in \(h_{z}=180^{\circ}\). The robot was capable of decelerating with a minimal braking time of 2 s (Fig. 12). ### _Demonstrations_ We further assessed the feasibility of using the MiaPURE drivetrain as a payload-carrying robot and a human-riding device to provide hands-free assistive mobility. As the payload robot, MiaPURE can be controlled either through a remote control device (RC) or physical human-robot interactions (pHRI) by gently pushing on the payload to generate an omnidirectional maneuver (Fig. 13a, b). As the riding device, the rider can utilize torso leaning to control its omnidirectional maneuvers (sliding through a narrow space), while utilizing the hands for more important tasks such as door opening (Fig. 13c, d). The demo video can be found via this link: [https://www.youtube.com/watch?v=H3WC7nfBx28](https://www.youtube.com/watch?v=H3WC7nfBx28). ## VI Discussion The LQR-PI controller took advantage of the physical hardware in our platforms. The LQR outer loop (400 Hz) handles the slower whole-system dynamics and performs a single-step forward simulation to generate the feedforward torque and speed commands for the motors. The PI inner loop (8 kHz) mainly deals with the faster and more complicated motor dynamics and the unknown friction in the OW-SW interaction. Moreover, the IMU used for the LQR outer loop has a lower update rate than the encoders used for the PI inner loops. In this case, such an outer-inner loop structure helps to handle both the balancing and stiction-compensation problems, outperforming the LQR and PI-PD controllers in both the PIPTB and the ballbot drivetrain. It was surprising to observe that the speed performance of the ballbot depended on the translation direction (Fig. 11). Indeed, one might initially assume that the ballbot should have similar performance in all directions due to its spherical wheel, neglecting that proper actuation of the spherical wheel requires satisfying a friction cone constraint [20] at each OW-SW contact point. For example, translating in \(0^{\circ}\) and \(180^{\circ}\) produces the same magnitude of traction force on the two driving wheels OW2 and OW3, while OW1 is mainly idling. Fig. 11: Measured system responses of tracking an acceleration profile when translating in (a) \(h_{z}=0^{\circ}\), (b) \(h_{z}=90^{\circ}\), and (c) \(h_{z}=180^{\circ}\). Visualizations of the translation direction relative to the MiaPURE drivetrain are presented in the figures above the plot. The instant before system failure or slip is marked with a dashed green line. Among these translational directions, only the last one (\(h_{z}=180^{\circ}\)) showed a promising result of reaching beyond 2.0 m/s before failure. Fig. 12: Measured system response of the MiaPURE drivetrain tracking a braking profile when driven at a \(180^{\circ}\) heading angle. The device successfully decelerated from 1.4 m/s within 2 s. Fig. 10: (a) MiaPURE drivetrain and the gantry system utilized during the benchmark experiment. A researcher pushed the gantry to follow the MiaPURE drivetrain, while another researcher was ready to catch the robot in case of any system failure.
The photo was taken during the braking of MiaPURE when translating to the left, and (b) the definition of the translation direction of the MiaPURE drivetrain; a translation direction of \(0^{\circ}\) aligns with the OW1 axis. Fig. 13: Demonstration of the MiaPURE drivetrain for (a, b) payload carrying and (c, d) human riding. However, the normal forces on the two driving OWs have smaller magnitudes when translating in \(0^{\circ}\) due to the chassis leaning towards OW1. Such an effect reduces the friction cones on OW2 and OW3, causing slip at the OW-SW contacts and leading to loss of balance. We further demonstrated the feasibility of using the MiaPURE robot for human riding and navigation in a constrained space. The researcher was capable of controlling the translational movement of MiaPURE in a completely hands-free mode via torso leaning, similar to riding a Segway device. However, due to the high risk of failure when translating in \(0^{\circ}\) (the seat-back direction), it is not feasible to perform a quick translation towards the back when riding this drivetrain without slipping between the OWs and SW. The contact stability between the OWs and SW needs to be improved to allow for agile translation in all directions. The limitations of the current device indicate several important directions for future work. The system failures due to slip between the OWs and SW need to be mitigated to ensure that all translational directions are safe. We need to investigate the fundamental drivetrain mechanism to understand how ballbot drivetrain design variables affect the contact stability between the OWs and SW during safety-critical tasks, and iterate the drivetrain design to ensure system safety. ## VII Conclusion In this paper, we presented our attempt to build the mechanical and control system of a high load capacity, minimal footprint mobile robot using the technology of a ballbot. To compensate for unmodeled friction in the torque transmission system, we developed a cascaded LQR-PI controller to help overcome the stiction and dynamic friction in the prototype, in addition to allowing for intuitive gain tuning. The controller was first validated in a planar inverted pendulum testbed and then implemented on a full-sized physical prototype. The prototype was capable of achieving a maximum speed of 2.3 m/s and braking within 2.0 seconds from 1.4 m/s while carrying a static payload of 60 kg. In addition, we further demonstrated the feasibility of a human riding the MiaPURE drivetrain using torso leaning through demonstrations of manipulation (door opening) and locomotion tasks (sliding through a narrow space). These results highlight the potential of building a ballbot drivetrain, with the proposed design, fabrication, and control methodology, as a universal mobility platform for various tasks that require handling heavy weight with a high COM while navigating in a constrained environment. ## Acknowledgment The authors thank Coach Adam Bleakney, Doctor Jeannette Elliot, Professor Deana McDonagh, Professor William Norris, Doctor Patricia Malik, graduate students Yu Chen and Seung-Yun (Leo) Song, and undergraduate student Zheyu Zhou for their help and support with concept generation, physical hardware development, and system testing.
2303.09975
MedNeXt: Transformer-driven Scaling of ConvNets for Medical Image Segmentation
There has been exploding interest in embracing Transformer-based architectures for medical image segmentation. However, the lack of large-scale annotated medical datasets makes achieving performances equivalent to those in natural images challenging. Convolutional networks, in contrast, have higher inductive biases and consequently, are easily trainable to high performance. Recently, the ConvNeXt architecture attempted to modernize the standard ConvNet by mirroring Transformer blocks. In this work, we improve upon this to design a modernized and scalable convolutional architecture customized to challenges of data-scarce medical settings. We introduce MedNeXt, a Transformer-inspired large kernel segmentation network which introduces - 1) A fully ConvNeXt 3D Encoder-Decoder Network for medical image segmentation, 2) Residual ConvNeXt up and downsampling blocks to preserve semantic richness across scales, 3) A novel technique to iteratively increase kernel sizes by upsampling small kernel networks, to prevent performance saturation on limited medical data, 4) Compound scaling at multiple levels (depth, width, kernel size) of MedNeXt. This leads to state-of-the-art performance on 4 tasks on CT and MRI modalities and varying dataset sizes, representing a modernized deep architecture for medical image segmentation. Our code is made publicly available at: https://github.com/MIC-DKFZ/MedNeXt.
Saikat Roy, Gregor Koehler, Constantin Ulrich, Michael Baumgartner, Jens Petersen, Fabian Isensee, Paul F. Jaeger, Klaus Maier-Hein
2023-03-17T13:48:17Z
http://arxiv.org/abs/2303.09975v5
# MedNeXt: Transformer-driven Scaling of ConvNets for Medical Image Segmentation ###### Abstract There has been exploding interest in embracing Transformer-based architectures for medical image segmentation. However, the lack of large-scale annotated medical datasets makes achieving performances equivalent to those in natural images challenging. Convolutional networks, in contrast, have higher inductive biases and consequently, are easily trainable to high performance. Recently, the ConvNeXt architecture attempted to modernize the standard ConvNet by mirroring Transformer blocks. In this work, we improve upon this to design a modernized and scalable convolutional architecture customized to challenges of data-scarce medical settings. We introduce MedNeXt, a _Transformer-inspired_ large kernel segmentation network which introduces - 1) A _fully_ ConvNeXt 3D Encoder-Decoder Network for medical image segmentation, 2) Residual ConvNeXt up and downsampling blocks to preserve semantic richness across scales, 3) A novel technique to iteratively increase kernel sizes by upsampling small kernel networks, to prevent performance saturation on limited medical data, 4) Compound scaling at multiple levels (depth, width, kernel size) of MedNeXt. This leads to state-of-the-art performance on 4 tasks on CT and MRI modalities and varying dataset sizes, representing a _modernized_ deep architecture for medical image segmentation. Our code is made publicly available at: https://github.com/MIC-DKFZ/MedNeXt Keywords: Medical Image Segmentation, Transformers, MedNeXt, Large Kernels, ConvNeXt ## 1 Introduction Transformers [30, 7, 21] have seen wide-scale adoption in medical image segmentation as either components of hybrid architectures [3, 9, 33, 2, 8, 31] or standalone techniques [34, 25, 15] for state-of-the-art performance. The ability to learn long-range spatial dependencies is one of the major advantages of the Transformer architecture in visual tasks. However, Transformers are plagued by the necessity of large annotated datasets to maximize performance benefits owing to their limited inductive bias. While such datasets are common to natural images (ImageNet-1k [6], ImageNet-21k [26]), medical image datasets usually suffer from the lack of abundant high quality annotations [19]. To retain the inherent inductive bias of convolutions while taking advantage of architectural improvements of Transformers, the ConvNeXt [22] was recently introduced to re-establish the competitive performance of convolutional networks for natural images. The ConvNeXt architecture uses an inverted bottleneck mirroring that of Transformers, composed of a depthwise layer, an expansion layer and a contraction layer (Sec. 2.1), in addition to large depthwise kernels to replicate long-range representation learning. The authors paired large kernel ConvNeXt networks with enormous datasets to outperform erstwhile state-of-the-art Transformer-based networks. In contrast, the VGGNet [28] approach of stacking small kernels continues to be the predominant technique for designing ConvNets in medical image segmentation. Out-of-the-box data-efficient solutions such as nnUNet [13], using variants of a standard UNet [5], have still remained effective across a wide range of tasks. The ConvNeXt architecture marries the long-range spatial representation learning capabilities of Vision [7] and Swin Transformers [21] with the inherent inductive bias of ConvNets.
Additionally, the inverted bottleneck design allows us to scale width (increase channels) while not being affected by kernel sizes. Effective usage in medical image segmentation would allow benefits from - **1)** learning long-range spatial dependencies via large kernels, and **2)** less intuitively, simultaneously scaling multiple network levels. Achieving this requires techniques to combat the tendency of large networks to overfit on limited training data. Despite this, there have been recent attempts to introduce large kernel techniques to the medical vision domain. In [18], a large kernel 3D-UNet [5] was used by decomposing the kernel into depthwise and depthwise dilated kernels for improved performance in organ and brain tumor segmentation - exploring kernel scaling while using a constant number of layers and channels. The ConvNeXt architecture itself was utilized in 3D-UX-Net [17], where the Transformer of SwinUNETR [8] was replaced with ConvNeXt blocks for high performance on multiple segmentation tasks. However, 3D-UX-Net only uses these blocks partially in a standard convolutional encoder, limiting their possible benefits. In this work, we maximize the potential of a ConvNeXt design while uniquely addressing the challenges of limited datasets in medical image segmentation. We present the first _fully_ ConvNeXt 3D segmentation network, **MedNeXt**, which is a scalable Encoder-Decoder network, and make the following contributions: * We utilize an architecture composed **purely of ConvNeXt blocks**, which enables network-wide advantages of the ConvNeXt design. (Sec. 2.1) * We introduce **Residual Inverted Bottlenecks** in place of regular up and downsampling blocks, to preserve contextual richness while resampling to benefit dense segmentation tasks. The modified residual connection in particular improves gradient flow during training. (Sec. 2.2) * We introduce a simple but effective technique of iteratively increasing kernel size, **UpKern**, to prevent performance saturation on large kernel MedNeXts by initializing with trained upsampled small kernel networks. (Sec. 2.3) * We propose applying **Compound Scaling** [29] of multiple network parameters owing to our network design, allowing orthogonality of width (_channels_), receptive field (_kernel size_) and depth (_number of layers_) scaling. (Sec. 2.4) MedNeXt achieves state-of-the-art performance against baselines consisting of Transformer-based, convolutional and large kernel networks. We show performance benefits on 4 tasks of varying modality (CT, MRI) and size (ranging from 30 to 1251 samples), encompassing segmentation of organs and tumors. We propose MedNeXt as a strong and modernized alternative to standard ConvNets for building deep networks for medical image segmentation. ## 2 Proposed Method ### Fully ConvNeXt 3D Segmentation Architecture In prior work, ConvNeXt [22] distilled architectural insights from Vision Transformers [7] and Swin Transformers [21] into a convolutional architecture. The ConvNeXt block inherited a number of significant design choices from Transformers, designed to limit computation costs while increasing the receptive field to learn global features, which demonstrated performance improvements over standard ResNets [10]. In this work, we leverage these strengths by adopting the general design of ConvNeXt as the building block in a 3D-UNet-like [5] macro architecture to obtain the **MedNeXt**. We extend these blocks to up and downsampling layers as well (Sec. 2.2),
resulting in the first fully ConvNeXt architecture for medical image segmentation. The macro architecture is illustrated in Figure 1a. MedNeXt blocks (similar to ConvNeXt blocks) have 3 layers mirroring a Transformer block and are described for a \(C\)-channel input as follows: 1. **Depthwise Convolution Layer:** This layer contains a Depthwise Convolution with kernel size \(k\times k\times k\), followed by normalization, with \(C\) output channels. We use channel-wise GroupNorm [32] for stability with small batches [27], instead of the original LayerNorm. The depthwise nature of the convolutions allows large kernels in this layer to replicate the large attention window of Swin Transformers, while simultaneously limiting compute and thus delegating the "heavy lifting" to the Expansion Layer. 2. **Expansion Layer:** Corresponding to a similar design in Transformers, this layer contains an overcomplete Convolution Layer with \(CR\) output channels, where \(R\) is the expansion ratio, followed by a GELU [12] activation. Large values of \(R\) allow the network to scale _width-wise_ while the \(1\times 1\times 1\) kernel limits compute. It is important to note that this layer effectively decouples width scaling from receptive field (kernel size) scaling in the previous layer. 3. **Compression Layer:** Convolution layer with \(1\times 1\times 1\) kernel and \(C\) output channels performing channel-wise compression of the feature maps. MedNeXt is convolutional and retains the inductive bias inherent to ConvNets that allows easier training on sparse medical datasets. Our fully ConvNeXt architecture also enables width (more channels) and receptive field (larger kernels) scaling at both standard and up/downsampling layers. Alongside depth scaling (more layers), we explore these 3 orthogonal types of scaling to design a _compound scalable_ MedNeXt for effective medical image segmentation (Sec. 2.4). Figure 1: **(a)** Architectural design of the MedNeXt. The network has 4 Encoder and Decoder layers each, with a bottleneck layer. MedNeXt blocks are present in Up and Downsampling layers as well. Deep Supervision is used at each decoder layer, with lower loss weights at lower resolutions. All residuals are _additive_ while convolutions are padded to retain tensor sizes. **(b)** Upsampled Kernel (UpKern) initialization of a pair of MedNeXt architectures with similar configurations (\(\theta\)) except kernel size \((k_{1},k_{2})\). **(c)** MedNeXt-L (\(5\times 5\times 5\)) leaderboard performance. ### Resampling with Residual Inverted Bottlenecks The original ConvNeXt design utilizes separate downsampling layers which consist of standard strided convolutions. An equivalent upsampling block would be standard strided transposed convolutions. However, this design does not implicitly take advantage of width or kernel-based ConvNeXt scaling while resampling. We improve upon this by extending the Inverted Bottleneck to the resampling blocks in MedNeXt. This is done by inserting the strided convolution or transposed convolution in the first _Depthwise Layer_ for the Downsampling and Upsampling MedNeXt blocks, respectively. The corresponding channel reduction or increase is inserted in the last _compression_ layer of our MedNeXt 2\(\times\) Up or Down block design, as in Fig. 1a. Additionally, to enable easier gradient flow, we add a residual connection with a \(1\times 1\times 1\) convolution or transposed convolution with _stride_ of 2.
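A minimal PyTorch sketch of the standard MedNeXt block described above (the exact official implementation may differ; the kernel size, expansion ratio, and normalization grouping here simply follow the text):

```python
import torch
import torch.nn as nn

class MedNeXtBlock(nn.Module):
    def __init__(self, C: int, k: int = 3, R: int = 2):
        super().__init__()
        # depthwise k x k x k convolution followed by channel-wise GroupNorm
        self.dw = nn.Conv3d(C, C, kernel_size=k, padding=k // 2, groups=C)
        self.norm = nn.GroupNorm(num_groups=C, num_channels=C)
        # 1x1x1 expansion to C*R channels with GELU, then 1x1x1 compression to C
        self.expand = nn.Conv3d(C, C * R, kernel_size=1)
        self.act = nn.GELU()
        self.compress = nn.Conv3d(C * R, C, kernel_size=1)

    def forward(self, x):
        h = self.norm(self.dw(x))      # large receptive field, cheap compute
        h = self.act(self.expand(h))   # width scaling, decoupled from kernel size
        return x + self.compress(h)    # additive residual; tensor sizes preserved

x = torch.randn(1, 32, 16, 16, 16)
print(MedNeXtBlock(C=32, k=5, R=2)(x).shape)  # torch.Size([1, 32, 16, 16, 16])
```

The Up/Down variants described above would additionally move a stride-2 (transposed) convolution into the depthwise layer and pair it with a stride-2 \(1\times 1\times 1\) residual.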
In doing so, MedNeXt fully leverages the benefits of _Transformer-like_ inverted bottlenecks to preserve rich semantic information at lower spatial resolutions in all its components, which should benefit dense medical image segmentation tasks. ### UpKern: Large Kernel Convolutions without Saturation Large convolution kernels approximate the large attention windows in Transformers, but remain prone to performance saturation. ConvNeXt architectures for classification of natural images, despite the benefit of large datasets such as ImageNet-1k and ImageNet-21k, are seen to saturate at kernels of size \(7\times 7\)[22]. Medical image segmentation tasks have significantly less data, and performance saturation can be a problem in large kernel networks. To propose a solution, we borrow inspiration from Swin Transformer V2 [20], where a large-attention-window network is initialized with another network trained with a smaller attention window. Specifically, Swin Transformers use a bias matrix \(\hat{B}\in\mathbb{R}^{(2M-1)\times(2M-1)}\) to store learnt relative positional embeddings, where \(M\) is the number of patches in an attention window. On increasing the window size, \(M\) increases and necessitates a larger \(\hat{B}\). The authors proposed spatially interpolating an existing bias matrix to the larger size as a pretraining step, instead of training from scratch, which demonstrated improved performance. We propose a similar approach, but customized to convolution kernels, as seen in Figure 1b, to overcome performance saturation. **UpKern** allows us to iteratively increase kernel size by initializing a large kernel network with a _compatible_ pretrained small kernel network by _trilinearly upsampling_ convolutional kernels (represented as tensors) of incompatible size. All other layers with identical tensor sizes (including normalization layers) are initialized by copying the unchanged pretrained weights.

| Config. | # Blocks (\(B\)) | Exp. Ratio (\(R\)) |
|---|---|---|
| S | \(B_{all}=2\) | \(R_{all}=2\) |
| B | \(B_{all}=2\) | \(R_{1}=R_{9}=2\), \(R_{2}=R_{8}=3\), \(R_{3-7}=4\) |
| M | \(B_{1}=B_{9}=3\), \(B_{2-8}=4\) | \(R_{1}=R_{9}=2\), \(R_{2}=R_{8}=3\), \(R_{3-7}=4\) |
| L | \(B_{1}=B_{9}=3\), \(B_{2}=B_{8}=4\), \(B_{3-7}=8\) | \(R_{1}=R_{9}=3\), \(R_{2}=R_{8}=4\), \(R_{3-7}=8\) |

Table 1: **(Left)** MedNeXt configurations from scaling Block Counts (\(B\)) and Expansion Ratio (\(R\)) as in Figure 1a. **(Right)** MedNeXt-B ablations (Sec. 4.1).

This leads to a simple but effective initialization technique for MedNeXt which helps large kernel networks overcome performance saturation in the comparatively limited data scenarios common to medical image segmentation. ### Compound Scaling of Depth, Width and Receptive Field _Compound scaling_ [29] is the idea that simultaneous scaling on multiple levels (depth, width, receptive field, resolution, etc.) offers benefits beyond those of scaling at one single level. The computational requirements of indefinitely scaling kernel sizes in 3D networks quickly become prohibitive and lead us to investigate simultaneous scaling at different levels. In keeping with Figure 1a, our scaling is tested for block count (\(B\)), expansion ratio (\(R\)) and kernel size (\(k\)) - corresponding to depth, width and receptive field size. We use 4 model configurations of the MedNeXt to do so, as detailed in Table 1 **(Left)**. The basic functional design (MedNeXt-S) uses 32 channels (\(C\)), \(R=2\) and \(B=2\). Further variants increase just \(R\) (MedNeXt-B) or both \(R\) and \(B\) (MedNeXt-M). The largest, 70-MedNeXt-block architecture uses high values of both \(R\) and \(B\) (MedNeXt-L) and is used to demonstrate the ability of MedNeXt to be significantly scaled depthwise (even at standard kernel sizes). We further explore large kernel sizes and experiment with \(k=\{3,5\}\) for each configuration, to maximize performance via _compound scaling_ of the MedNeXt architecture.
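A minimal sketch of such an UpKern-style initialization, assuming PyTorch and standard `state_dict` checkpoints (the helper below is ours, not from the paper's code):

```python
import torch
import torch.nn.functional as F

def upkern_load(big_state: dict, small_state: dict) -> dict:
    """Build an init for a large-kernel net from a compatible small-kernel one."""
    out = {}
    for name, w_big in big_state.items():
        w_small = small_state[name]
        if w_small.shape == w_big.shape:
            out[name] = w_small.clone()          # copy identically-sized weights
        elif w_small.dim() == 5:                 # conv kernel (O, I, k, k, k)
            out[name] = F.interpolate(           # trilinear kernel upsampling
                w_small, size=w_big.shape[2:], mode="trilinear",
                align_corners=False,
            )
        else:
            out[name] = w_big                    # fall back to fresh init
    return out

# usage sketch:
# big_net.load_state_dict(upkern_load(big_net.state_dict(),
#                                     small_net.state_dict()))
```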
## 3 Experimental Design
### Configurations, Implementation and Baselines
We use PyTorch [24] to implement our framework. We experiment with 4 configurations of MedNeXt with 2 kernel sizes, as detailed in Section 2.4. The GPU memory requirements of scaling are limited via 1) mixed precision training with PyTorch AMP, and 2) gradient checkpointing [4]. Our experimental framework uses nnUNet [13] as a backbone, where the training schedule (epochs=1000, batches per epoch=250), inference (50% patch overlap) and data augmentation remain unchanged. All networks, except nnUNet, are trained with AdamW [23] as the optimizer. The data is resampled to 1.0 mm isotropic spacing during training and inference (with results computed on the original spacing), using input patch sizes of \(128\times 128\times 128\) and \(512\times 512\), and batch sizes of 2 and 14, for 3D and 2D networks respectively. The learning rate for all MedNeXt models is 0.001, except kernel:5 on KiTS19, which uses 0.0001 for stability. For baselines, all Swin models and 3D-UX-Net use 0.0025, while ViT models use 0.0001. We use the Dice Similarity Coefficient (DSC) and Surface Dice Similarity (SDC) at 1.0 mm tolerance for volumetric and surface accuracy. Mean 5-fold cross-validation (CV) performance for supervised training using 80:20 splits is reported for all models. We also provide test set DSC scores for a 5-fold ensemble of MedNeXt-L (kernel: \(5\times 5\times 5\)) without postprocessing. Our extensive baselines consist of a high-performing convolutional network (nnUNet [13]), 4 convolution-transformer hybrid networks with transformers in the encoder (UNETR [9], SwinUNETR [8]) and in intermediate layers (TransBTS [31], TransUNet [3]), a fully transformer network (nnFormer [34]), as well as a partially ConvNeXt network (3D-UX-Net [17]). TransUNet is a 2D network while the rest are 3D networks. The uniform framework provides a common testbed for all networks, without incentivizing one over another on aspects of patch size, spacing, augmentations, training and evaluation.
### Datasets
We use 4 popular tasks, encompassing organ as well as tumor segmentation, to comprehensively demonstrate the benefits of the MedNeXt architecture - 1) Beyond-the-Cranial-Vault (BTCV) Abdominal CT Organ Segmentation [16], 2) AMOS22 Abdominal CT Organ Segmentation [14], 3) Kidney Tumor Segmentation Challenge 2019 Dataset (KiTS19) [11], 4) Brain Tumor Segmentation Challenge 2021 (BraTS21) [1]. The BTCV, AMOS22 and KiTS19 datasets contain 30, 200 and 210 CT volumes with 13, 15 and 2 classes respectively, while the BraTS21 dataset contains 1251 MRI volumes with 3 classes. This diversity demonstrates the effectiveness of our methods across imaging modalities and training set sizes.
## 4 Results and Discussion
### Performance ablation of architectural improvements
We ablate the MedNeXt-B configuration on the AMOS22 and BTCV datasets to highlight the efficacy of our improvements and to demonstrate that a _vanilla_ ConvNeXt is unable to compete with existing segmentation baselines such as nnUNet.
The following are observed in the ablation tests in Table 1 **(Right)**:

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c||c|c|} \hline \multirow{2}{*}{**Networks**} & \multirow{2}{*}{**Kernel**} & \multicolumn{2}{c|}{**BTCV**} & \multicolumn{2}{c|}{**AMOS22**} & \multicolumn{2}{c|}{**KiTS19**} & \multicolumn{2}{c||}{**BraTS21**} & \multicolumn{2}{c|}{**AVG**} \\ \cline{3-12} & & **DSC** & **SDC** & **DSC** & **SDC** & **DSC** & **SDC** & **DSC** & **SDC** & **DSC** & **SDC** \\ \hline \hline nnUNet & & 83.56 & 86.07 & 88.88 & 91.70 & 89.88 & 86.88 & 91.23 & 90.46 & 88.39 & 88.78 \\ UNETR & & 75.06 & 75.00 & 81.98 & 82.65 & 84.10 & 78.05 & 89.65 & 88.28 & 82.36 & 81.00 \\ TransUNet & & 76.72 & 76.64 & 85.05 & 86.52 & 80.82 & 72.90 & 89.17 & 87.78 & 82.94 & 80.96 \\ TransBTS & & 82.35 & 84.33 & 86.52 & 88.84 & 87.03 & 83.53 & 90.66 & 89.71 & 86.64 & 86.60 \\ nnFormer & & 80.76 & 82.37 & 84.20 & 86.38 & 89.09 & 85.08 & 90.42 & 89.83 & 86.12 & 85.92 \\ SwinUNETR & & 80.95 & 82.43 & 86.83 & 89.23 & 87.36 & 83.09 & 90.48 & 89.56 & 86.41 & 86.08 \\ 3D-UX-Net & & 80.76 & 82.30 & 87.28 & 89.74 & 88.39 & 84.03 & 90.63 & 89.63 & 86.77 & 86.43 \\ \hline \hline MedNeXt-S & \multirow{4}{*}{\(3^{3}\)} & 83.90 & 86.60 & 89.03 & 91.97 & 90.45 & 87.80 & 91.27 & 90.46 & 88.66 & 89.21 \\ MedNeXt-B & & 84.01 & 86.77 & 89.14 & 92.10 & **91.02** & **88.24** & 91.30 & 90.51 & 88.87 & 89.41 \\ MedNeXt-M & & 84.31 & 87.34 & 89.27 & 92.28 & 90.78 & 88.22 & **91.57** & 90.78 & 88.98 & 89.66 \\ MedNeXt-L & & **84.57** & **87.54** & **89.58** & **92.62** & 90.61 & 88.08 & **91.57** & **90.81** & **89.08** & **89.76** \\ \hline MedNeXt-S & \multirow{4}{*}{\(5^{3}\)} & 83.92 & 86.80 & 89.27 & 92.26 & 90.08 & 87.04 & 91.40 & 90.57 & 88.67 & 89.17 \\ MedNeXt-B & & 84.23 & 87.06 & 89.38 & 92.36 & 90.30 & 87.40 & 91.48 & 90.70 & 88.85 & 89.38 \\ MedNeXt-M & & 84.41 & 87.48 & 89.58 & 92.65 & **90.87** & **88.15** & **91.49** & 90.67 & 89.09 & 89.74 \\ MedNeXt-L & & **84.82** & **87.85** & **89.87** & **92.95** & 90.71 & 87.85 & 91.46 & **90.73** & **89.22** & **89.85** \\ \hline \end{tabular} \end{table} Table 2: 5-fold CV results of MedNeXt at kernel sizes \(\{3,5\}\), outperforming 7 baselines – consisting of convolutional, transformer and large kernel networks. \(\blacksquare\): Better than (or equal to) top baseline \(\blacksquare\): Better than _kernel:3_ counterpart

1. Residual Inverted Bottlenecks, specifically in the up- and downsampling layers, _functionally enable_ MedNeXt for medical image segmentation (MedNeXt-B Resampling vs Standard Resampling). In contrast, the absence of these modified blocks leads to **considerably worse** performance, possibly owing to the loss of semantic richness in feature maps during resampling.
2. Training large kernel networks for medical image segmentation is a non-trivial task: large kernel MedNeXts trained from scratch fail to perform, as seen in MedNeXt-B (UpKern vs From Scratch). UpKern _improves performance_ with kernel \(5\times 5\times 5\) on both BTCV and AMOS22, whereas large kernel performance is **indistinguishable** from small kernels _without_ it.
3. The performance boost from large kernels is due to the combination of UpKern with a larger kernel, and not merely a longer _effective_ training schedule (UpKern vs Trained \(2\times\)), as a trained MedNeXt-B with kernel \(3\times 3\times 3\) retrained again is **unable to match** its large kernel counterpart.
This highlights that the MedNeXt modifications successfully translate the ConvNeXt architecture to medical image segmentation. We further establish the performance of the MedNeXt architecture against our baselines - comprising convolutional, transformer-based and large kernel networks - on all 4 datasets. We discuss the effectiveness of MedNeXt on multiple levels.
### Performance comparison to baselines
There are 2 levels at which MedNeXt successfully overcomes existing baselines - 5-fold CV and public test set performance. In the 5-fold CV scores in Table 2, MedNeXt with \(3\times 3\times 3\) kernels takes advantage of depth and width scaling to provide state-of-the-art segmentation performance against **every baseline on all 4 datasets** with no additional training data. MedNeXt-L outperforms or is competitive with its smaller variants despite task heterogeneity (brain and kidney tumors, organs), modality (CT, MRI) and training set size (BTCV: 18 samples vs BraTS21: 1000 samples), establishing itself as a powerful alternative to established methods such as nnUNet. With UpKern and \(5\times 5\times 5\) kernels, MedNeXt takes advantage of full compound scaling to **improve further** on its own small kernel networks, comprehensively on organ segmentation (BTCV, AMOS22) and in a more limited fashion on tumor segmentation (KiTS19, BraTS21). Furthermore, in leaderboard scores on the official test sets (Fig. 1c), 5-fold ensembles of MedNeXt-L (kernel: \(5\times 5\times 5\)) and nnUNet, its strongest competitor, are compared - **1)** **BTCV:** MedNeXt beats nnUNet and, to the best of our knowledge, is one of the leading methods with _only supervised training_ and _no extra training data_ (DSC: 88.76, HD95: 15.34); **2)** **AMOS22:** MedNeXt not only surpasses nnUNet, but is also currently **Rank 1** (date: 09.03.23) on the leaderboard (DSC: 91.77, NSD: 84.00); **3)** **KiTS19:** MedNeXt exceeds nnUNet performance (DSC: 91.02); **4)** **BraTS21:** MedNeXt surpasses nnUNet in both volumetric and surface accuracy (DSC: 88.01, HD95: 10.69). MedNeXt owes its performance solely to its architecture, without leveraging techniques like transfer learning (3D-UX-Net) or repeated 5-fold ensembling (UNETR, SwinUNETR), thus establishing itself as the state-of-the-art for medical image segmentation.
## 5 Conclusion
In comparison to natural image analysis, medical image segmentation lacks architectures that benefit from scaling networks, due to inherent domain challenges such as limited training data. In this work, MedNeXt is presented as a scalable, _Transformer-inspired_, fully-ConvNeXt 3D segmentation architecture customized for high performance on limited medical image datasets. We demonstrate MedNeXt's state-of-the-art performance across 4 challenging tasks against 7 strong baselines. Additionally, similar to ConvNeXt for natural images [22], we offer the _compound-scalable_ MedNeXt design as an effective modernization of standard convolution blocks for building deep networks for medical image segmentation.
2306.05749
DocAligner: Annotating Real-world Photographic Document Images by Simply Taking Pictures
Recently, there has been a growing interest in research concerning document image analysis and recognition in photographic scenarios. However, the lack of labeled datasets for this emerging challenge poses a significant obstacle, as manual annotation can be time-consuming and impractical. To tackle this issue, we present DocAligner, a novel method that streamlines the manual annotation process to a simple step of taking pictures. DocAligner achieves this by establishing dense correspondence between photographic document images and their clean counterparts. It enables the automatic transfer of existing annotations in clean document images to photographic ones and helps to automatically acquire labels that are unavailable through manual labeling. Considering the distinctive characteristics of document images, DocAligner incorporates several innovative features. First, we propose a non-rigid pre-alignment technique based on the document's edges, which effectively eliminates interference caused by significant global shifts and repetitive patterns present in document images. Second, to handle large shifts and ensure high accuracy, we introduce a hierarchical aligning approach that combines global and local correlation layers. Furthermore, considering the importance of fine-grained elements in document images, we present a details recurrent refinement module to enhance the output in a high-resolution space. To train DocAligner, we construct a synthetic dataset and introduce a self-supervised learning approach to enhance its robustness for real-world data. Through extensive experiments, we demonstrate the effectiveness of DocAligner and the acquired dataset. Datasets and codes will be publicly available.
Jiaxin Zhang, Bangdong Chen, Hiuyi Cheng, Fengjun Guo, Kai Ding, Lianwen Jin
2023-06-09T08:29:15Z
http://arxiv.org/abs/2306.05749v2
# DocAligner: Annotating Real-world Photographic Document Images by Simply Taking Pictures

###### Abstract
Recently, there has been a growing interest in research concerning document image analysis and recognition in photographic scenarios. However, the lack of labeled datasets for this emerging challenge poses a significant obstacle, as manual annotation can be time-consuming and impractical. To tackle this issue, we present DocAligner, a novel method that streamlines the manual annotation process to a simple step of taking pictures. DocAligner achieves this by establishing dense correspondence between photographic document images and their clean counterparts. It enables the automatic transfer of existing annotations in clean document images to photographic ones and helps to automatically acquire labels that are unavailable through manual labeling. Considering the distinctive characteristics of document images, DocAligner incorporates several innovative features. First, we propose a non-rigid pre-alignment technique based on the document's edges, which effectively eliminates interference caused by significant global shifts and repetitive patterns present in document images. Second, to handle large shifts and ensure high accuracy, we introduce a hierarchical aligning approach that combines global and local correlation layers. Furthermore, considering the importance of fine-grained elements in document images, we present a details recurrent refinement module to enhance the output in a high-resolution space. To train DocAligner, we construct a synthetic dataset and introduce a self-supervised learning approach to enhance its robustness for real-world data. Through extensive experiments, we demonstrate the effectiveness of DocAligner and the acquired dataset. Datasets and codes will be publicly available.

## 1 Introduction
In recent years, researchers have made significant strides in document image analysis and recognition. While previous studies predominantly focused on clean document images obtained from digital-born sources or flat-bed scanners [20, 21, 29, 38, 47], there is a growing interest among researchers in addressing the challenges posed by more realistic photographic scenarios [23, 25, 27, 37]. However, progress in this field has been hindered by the limited availability of labeled photographic data. This data scarcity can be attributed to several reasons. Firstly, automatic labeling methods [3, 20, 21, 47] designed for clean document images are not suitable for photographic scenarios, necessitating costly and time-consuming manual labeling. Secondly, certain tasks such as illumination correction and geometric rectification are extremely challenging to annotate manually. To address the aforementioned issues, we propose DocAligner, a novel method that significantly simplifies manual annotation to just taking pictures. DocAligner achieves this by establishing dense correspondence between photographic document images and their clean counterparts, which is a new perspective in the context of document artificial intelligence. As shown in Fig. 1, for tasks that already have a large number of labeled clean images (such as layout analysis, table detection, and table recognition), annotations can be transferred to corresponding photographic images. For tasks that cannot rely on existing labeled data and require extensive labeling efforts, we can automatically generate labels following the dense correspondence.

Figure 1: Automatic annotation for real-world photographic document images via DocAligner. All you need to do is take pictures.
In other words, to annotate photographic data, it is only necessary to print a clean document image and take a picture. DocAligner can perform the remaining tasks automatically. While dense correspondence has been extensively explored in the realm of natural images, utilizing pre-existing models designed for natural images in the context of document analysis faces performance degradation resulting from the distribution gap. To address this issue, we introduce novel designs in DocAligner to achieve dense correspondence exclusively for document images. Pre-alignment becomes necessary for document pairs that exhibit significant global misalignment and contain repetitive patterns. Rigid transformations such as affine and homography, commonly used for pre-alignment in natural images, are not suitable for document pairs with non-rigid deformation. Therefore, we propose the use of a thin plate spline (TPS) [2] non-rigid transformation, inspired by the advancements in geometric rectification techniques [24, 45]. To handle significant shifts and ensure high accuracy, we adopt hierarchical alignment that combines global-local correlation and coarse-to-fine flow prediction. Furthermore, document images possess a more intricate structure, with details at the character level being crucial. Consequently, obtaining higher-resolution output flows becomes necessary in DocAligner. To address this, we propose a details recurrent refinement module that operates in the detail-rich, high-resolution space. To mitigate memory consumption in high-resolution processing, we conduct refinement recurrently using the memory-efficient ConvGRU [32]. To train DocAligner, we develop a synthetic dataset comprising triplets of photographic document images, clean document images, and flow fields. The clean document images are derived from PDF files. We warp these clean images with randomly-generated flows and then deteriorate them using collected shading maps to synthesize photographic images. Additionally, to further improve DocAligner's performance on real data, we propose a self-supervised learning approach. Experimental results demonstrate the superiority of DocAligner compared to existing methods. Furthermore, we assess the effectiveness of the acquired dataset in multiple tasks related to photographic document images, including layout analysis, illumination correction, and geometric rectification. In summary, our contributions are as follows:
* For the first time, we explore the dense correspondence task in the context of document artificial intelligence, by which we ease the data dilemma encountered by tasks related to photographic document images.
* We propose DocAligner for document image dense correspondence, in which we design non-rigid pre-alignment, hierarchical alignment with global and local correlation, and details recurrent refinement. We also develop a synthetic dataset and a self-supervised learning approach that is easy to implement and helps to improve generalization.
* DocAligner achieves superior performance compared to existing methods. Additionally, we validate the effectiveness of the dataset we acquired for related tasks.

## 2 Related works
### Dense correspondence
Dense correspondence of paired images has been extensively studied for natural images in recent years [11, 14, 18, 30, 33, 34, 36].
Given an image pair \((I_{s},I_{t})\) with a size of \(H\times W\), dense correspondence aims to predict a flow field \(f\in\mathbb{R}^{H\times W\times 2}\), which relates the source \(I_{s}\) to the target \(I_{t}\). According to the differences within the paired images, dense correspondence can be categorized into optical flow [14, 33, 34], geometric correspondence [36, 26, 30] and semantic correspondence [11, 18, 30, 36]. Document image pairs with large displacements and significant appearance transformations are most relevant to geometric correspondence, where pairs usually exhibit different views of the same scene or are captured by different cameras on different occasions. Melekhov et al. [26] proposed DGC-Net, a neural network with a global correlation layer that can handle large displacements. However, due to the large memory footprint of this layer, the input image resolution for DGC-Net is constrained to \(240\times 240\). Such a coarse resolution is insufficient for representing a document with fine-grained content. Glu-Net [36] takes a more elegant approach by performing global correlation at coarse resolution and local correlation at fine resolution, resulting in better performance on high-resolution input. Truong et al. proposed GOCor [35], a new optimizable correlation layer, to enhance the robustness of correlation in similar and low-textured regions. Some self-supervised methods have also been introduced to make models more robust on real-world data. RANSAC-Flow [31] adopts a two-stage framework where coarse alignment based on RANSAC [10] is followed by fine alignment based on a deep model trained with self-supervision. However, it is sophisticated and hard to implement due to its multi-task optimization for cycle consistency, matchability, and reconstruction. DMP [13] optimizes an untrained matching network on a single pair of images, but optimizing for one input pair at a time limits its practicality. Although they achieve promising results on natural images, the above-mentioned methods remain sub-optimal in the context of document images.
### Document analysis and recognition in photographic scenarios
**Document layout analysis (DLA)**. DLA aims to identify the regions of interest in an unstructured document and determine the role of each region. Previous studies have primarily focused on digital-born document images that are relatively easy to label by parsing PDFs and analyzing the corresponding source codes [1, 42, 46, 17], such as LaTeX and XML. However, more and more photographic document images are emerging, which the existing automatic labeling methods cannot cope with. Consequently, while millions of labeled clean document images are available, there are limited datasets for photographic scenarios. Although it is possible to annotate manually (by annotating bounding boxes and classes like title, author, list, abstract, paragraph, table, figure, etc.), it is expensive, especially considering the geometric deformation and the large number of objects in photographic document images.
**Illumination correction**. Illumination correction seeks to eliminate degradation caused by uncontrolled illumination, enhancing readability and facilitating downstream optical character recognition (OCR) engines [6, 8, 22]. Nevertheless, obtaining labels for this task, i.e., the illumination-corrected image, is challenging because the required annotation is extremely dense (essentially per-pixel).
An alternative approach to obtaining labeled data is to capture the document under different illuminations while keeping its position relative to the camera fixed. However, source documents are not always available, and the variety and scale of datasets obtained in this way are minimal. Thus, most recent learning-based illumination correction methods [22, 5, 6, 8] can only be trained on synthetic data, whose realism and diversity remain unsatisfactory. Further discussions are included in Section 4.4.
**Geometric rectification**. This task aims to flatten document images that suffer from curves, folds, crumples, etc. A dewarping map is required to sample from the distorted input to obtain the rectified result. This dewarping map indicates the correspondence between pixels in the desired rectified result and the distorted input. However, such a dewarping map is an extremely dense annotation that is almost impossible to obtain manually [40]. Consequently, many learning-based methods have to resort to synthetic data. Failure to obtain the dewarping map annotation means that real-world data can only be used for weak supervision [41, 24].

## 3 Methodology
As shown in Fig. 2, the proposed DocAligner seeks to correlate the photographic document image \(I_{s}\) with its clean counterpart \(I_{t}\). To achieve this, a pre-alignment module is first utilized to obtain \(I_{s}^{\prime}\), which is accomplished using an edge-based non-rigid transformation. Pre-aligned pairs \((I_{s}^{\prime},I_{t})\) are then fed into a shared feature extraction backbone to extract multi-scale features, which are then used to predict flows hierarchically. Finally, a refinement module is designed to refine flow details recurrently in high-resolution space and output a flow with the same size as the input pairs.
### Non-rigid pre-alignment
Photographic document images often suffer from significant global misalignment caused by varying camera angles and paper deformations. Such misalignment, combined with the repetitive patterns in a document, impedes precise correlation. Pre-aligning the images before carrying out fine-grained correlation resolves this issue. Traditional affine and homography rigid transformations for natural images [30, 31, 15] are not suitable for document pairs that exhibit both rigid and non-rigid deformations. In this paper, we draw on advancements in document image rectification which utilize document edge information [45]. Specifically, as illustrated in Fig. 2, we first extract the edge of the document through semantic segmentation. We then detect the four corners and equidistant points on the four edges based on the edge information. We map these detected points to their pre-defined reference counterparts in a quadrilateral. Using these paired points, we apply the TPS non-rigid transformation to obtain the pre-aligned \(I_{s}^{\prime}\). A sketch of this step follows.
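The following is a minimal sketch of this pre-alignment, assuming the boundary points `src_pts` (detected corners and equidistant edge points) and their reference positions `ref_pts` are already available from the segmentation stage; we use SciPy's thin-plate RBF in place of whatever TPS implementation the authors used.

```
# Edge-based TPS pre-alignment sketch (SciPy TPS stands in for the paper's).
import numpy as np
import cv2
from scipy.interpolate import RBFInterpolator

def pre_align(photo, src_pts, ref_pts, out_hw):
    """src_pts/ref_pts: (N, 2) matched (x, y) points; returns the warped photo."""
    H, W = out_hw
    # Fit the backward map (reference -> source) so every output pixel knows
    # where to sample from in the photo.
    tps = RBFInterpolator(ref_pts, src_pts, kernel="thin_plate_spline")
    ys, xs = np.mgrid[0:H, 0:W]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    src = tps(grid).astype(np.float32).reshape(H, W, 2)
    return cv2.remap(photo, src[..., 0], src[..., 1], cv2.INTER_LINEAR)
```

For speed, the TPS can be evaluated on a coarse grid and bilinearly upsampled before the remap.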
### Hierarchical alignment
As shown in Fig. 2, we feed the pre-aligned pairs \((I_{s}^{\prime},I_{t})\) into a shared feature extraction backbone, which produces multi-scale features \(X^{s}=\{X_{1}^{s},X_{2}^{s},X_{3}^{s},X_{4}^{s}\}\) and \(X^{t}=\{X_{1}^{t},X_{2}^{t},X_{3}^{t},X_{4}^{t}\}\) for \(I_{s}^{\prime}\) and \(I_{t}\), respectively. In the hierarchical alignment module, we predict and refine the flow from low to high resolution, i.e., from level \(l=1\) to \(l=3\). The flow \(f_{l}\) between a pair of features \((X_{l}^{s},X_{l}^{t})\) at level \(l\) is calculated by \[f_{l}=\mathbf{up}\left(f_{l-1}\right)+\mathbf{decoder}_{l}\left(C_{l},\mathbf{up}\left(f_{l-1}\right)\right), \tag{1}\] where \(\mathbf{up}()\) is a bilinear up-sampling function and \(\mathbf{decoder}_{l}()\) is a lightweight fully convolutional neural network. The detailed architecture of \(\mathbf{decoder}_{l}()\) can be found in the supplementary materials. Furthermore, \(C_{l}\) refers to the correlation map obtained through the global or local correlation layer \(\mathbf{C_{G/L}}\): \[C_{l}=\mathbf{C_{G/L}}\left(\tilde{X}_{l}^{s},X_{l}^{t}\right). \tag{2}\] Here \(\tilde{X}_{l}^{s}\) is obtained by warping \(X_{l}^{s}\) toward \(X_{l}^{t}\): \[\tilde{X}_{l}^{s}(\mathbf{x})=X_{l}^{s}\left(\mathbf{x}+\mathbf{up}\left(f_{l-1}\right)\left(\mathbf{x}\right)\right), \tag{3}\] where \(\mathbf{x}\) denotes the image coordinate. Additionally, the initial flow \(f_{0}\) is a zero-filled map. The correlation layer, also known as a cost volume, is essential in current state-of-the-art dense correspondence methods, as it represents the similarities between spatial elements in the reference and query features. The global correlation layer calculates the scalar product between each feature vector in the reference features \(X^{r}\in\mathbb{R}^{H^{r}\times W^{r}\times D}\) and all the vectors in the query features \(X^{q}\in\mathbb{R}^{H^{q}\times W^{q}\times D}\), as follows: \[\mathbf{C}_{\mathrm{G}}\left(X^{r},X^{q}\right)_{ij}=\left(x_{i}^{r}\right)^{\mathrm{T}}x_{j}^{q}, \tag{4}\] where \(x_{i}^{r}\in\mathbb{R}^{D}\) and \(x_{j}^{q}\in\mathbb{R}^{D}\) are the \(i\)-th and \(j\)-th vectors in \(X^{r}\) and \(X^{q}\), respectively. This layer, with \(\mathbf{C}_{\mathrm{G}}\in\mathbb{R}^{H^{r}W^{r}\times H^{q}W^{q}}\), represents similarities between all locations in the reference and query features and can handle large displacements. However, its computational complexity and memory consumption increase quadratically with feature size, rendering it suitable only for low-resolution features. In contrast, the local correlation layer computes the scalar product between vectors within a constrained distance: \[\mathbf{C}_{\mathrm{L}}\left(X^{r},X^{q}\right)_{id}=\left(x_{i}^{r}\right)^{\mathrm{T}}x_{i+d}^{q},\qquad\|d\|\leq R \tag{5}\] where \(R\) is a pre-defined search radius. \(\mathbf{C}_{\mathrm{L}}\in\mathbb{R}^{H^{r}W^{r}\times(2R+1)^{2}}\) is more computationally efficient but unsuitable for correlating feature pairs with large displacements. In this paper, we apply the global correlation layer to the lowest-resolution features (i.e., \(l=1\)) and the local correlation layer with \(R=9\) to the remaining levels, enabling hierarchical alignment that eliminates large global displacements at low resolution and performs precise correlation at high resolution; both layers are sketched below.
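The following generic PyTorch sketches of Eqs. (4)-(5) assume feature maps shaped \((B,D,H,W)\); the unfold-based local correlation is one of several equivalent formulations, and the function names are ours.

```
# Correlation layer sketches for Eqs. (4)-(5).
import torch
import torch.nn.functional as F

def global_correlation(x_r, x_q):
    """C_G (Eq. 4): scalar products between all reference/query locations."""
    return torch.einsum("bdn,bdm->bnm",
                        x_r.flatten(2), x_q.flatten(2))  # (B, HrWr, HqWq)

def local_correlation(x_r, x_q, R=9):
    """C_L (Eq. 5): products restricted to a (2R+1)x(2R+1) neighborhood."""
    B, D, H, W = x_r.shape
    k = 2 * R + 1
    q = F.unfold(x_q, kernel_size=k, padding=R)    # (B, D*k*k, H*W)
    q = q.view(B, D, k * k, H * W)
    r = x_r.flatten(2).unsqueeze(2)                # (B, D, 1, H*W)
    return (r * q).sum(dim=1)                      # (B, (2R+1)^2, H*W)
```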
### Details recurrent refinement
The final output flow field for natural images is usually 1/4 the size of the input image; a larger flow field does not bring much additional improvement [33, 36, 4, 7]. For some natural scenarios, low-resolution input is enough to obtain good performance [31, 16]. Nevertheless, this is not the case for document images because of their fine-grained elements. To address this issue, we propose a details refinement module that outputs a flow field with the same size as the input pairs. As shown in Figs. 2 and 3, this refinement module refines the output in the highest-resolution space and employs a recurrent ConvGRU unit [32] to reduce memory usage. At each time step \(n\), the input of the GRU unit is \(x^{n}\), which is obtained by \[x^{n}=[\mathbf{Conv_{3\times 3}}\left(X_{4}^{s}\right),motion], \tag{6}\] \[motion=\mathbf{Conv_{3\times 3}}\left(\left[\mathbf{down}(f^{n-1}),\mathbf{C_{L}}\left(\tilde{X}_{4}^{s},X_{4}^{t}\right)\right]\right), \tag{7}\] \[\tilde{X}_{4}^{s}(\mathbf{x})=X_{4}^{s}\left(\mathbf{x}+\mathbf{down}\left(f^{n-1}\right)(\mathbf{x})\right). \tag{8}\] \(\mathbf{Conv_{3\times 3}}()\) refers to a \(3\times 3\) convolutional layer, \(\mathbf{down}()\) is a down-sampling function, and \(\tilde{X}_{4}^{s}\) is obtained by warping \(X_{4}^{s}\) toward \(X_{4}^{t}\). The GRU unit updates the hidden state from the input \(x^{n}\) and the previous hidden state \(h^{n-1}\), similar to the method in [32]. The updated hidden state \(h^{n}\) is used to predict the residual flow \(\Delta f^{n}\downarrow\) using a two-layer convolutional neural network. However, the size of \(\Delta f^{n}\downarrow\) (\(\frac{H}{4}\times\frac{W}{4}\)) is smaller than that of the input image, so we introduce a learnable upsampling method (a sketch is given at the end of this subsection). We predict 16 weight matrices of size \(3\times 3\) for each pixel in \(\Delta f^{n}\downarrow\), obtaining a weight map of size \(\frac{H}{4}\times\frac{W}{4}\times 16\times 3\times 3\), which can be reshaped as \(\frac{H}{4}\times\frac{W}{4}\times 144\). We obtain it by feeding \(\Delta f^{n}\downarrow\) and \(h^{n}\) to another two-layer convolutional neural network (refer to the supplementary materials for more details). We then obtain each upsampled pixel using a weighted sum over a \(3\times 3\) neighborhood in \(\Delta f^{n}\downarrow\) and finally obtain the desired upsampled residual flow \(\Delta f^{n}\). The updated flow is obtained by \[f^{n}=f^{n-1}+\Delta f^{n}. \tag{9}\] The bilinearly up-sampled flow \(f_{3}\) from the hierarchical alignment module is set as the initial flow \(f^{0}\) for Eqs. 8 and 9. The initial hidden state \(h^{0}\) is transformed from \(X_{4}^{s}\) through a \(1\times 1\) convolution layer. We set the number of refinement iterations to 7.

Figure 2: Overall architecture of DocAligner. Considering photographic \(I_{s}\) and clean \(I_{t}\), DocAligner aims to densely correlate each pixel in \(I_{t}\) with \(I_{s}\).
Figure 3: The details recurrent refinement module.
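The learnable upsampling can be sketched as below, in the spirit of RAFT-style convex upsampling: the 144-channel weight map matches the description above, but the softmax normalization over each \(3\times 3\) neighborhood is our assumption (the text only specifies a weighted sum).

```
# 4x learnable upsampling sketch; `mask` is the (B, 144, h, w) weight map
# predicted from the hidden state and residual flow by a small conv head.
import torch
import torch.nn.functional as F

def learnable_upsample(flow, mask):
    """flow: (B, 2, h, w) coarse residual flow -> (B, 2, 4h, 4w)."""
    B, _, h, w = flow.shape
    mask = torch.softmax(mask.view(B, 1, 9, 4, 4, h, w), dim=2)
    nb = F.unfold(4 * flow, kernel_size=3, padding=1)  # scale flow by 4
    nb = nb.view(B, 2, 9, 1, 1, h, w)                  # 3x3 neighborhoods
    up = (mask * nb).sum(dim=2)                        # (B, 2, 4, 4, h, w)
    return up.permute(0, 1, 4, 2, 5, 3).reshape(B, 2, 4 * h, 4 * w)
```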
### Self-supervision learning with real data
We train DocAligner on synthetic data as described in Section 4.1, but we observe a distribution gap between real-world and synthetic data. To tackle this problem, we propose a self-supervised approach to improve DocAligner's robustness, as depicted in Fig. 4. This approach involves providing DocAligner with a paired clean document image \(I_{t}\) and pre-aligned photographic image \(I_{s}^{\prime}\) from the real world, and is formulated as \[\hat{\theta}=\operatorname*{argmin}_{\theta}\left(\left\|\tilde{\mathbf{G}}\left(I_{s}^{\prime}\right)-\mathbf{G}\left(I_{t}\right)\right\|_{1}\right), \tag{10}\] \[\tilde{\mathbf{G}}\left(I_{s}^{\prime}\right)=\mathbf{G}\left(I_{s}^{\prime}\right)\left(\mathbf{x}+f\left(\mathbf{x}\right)\right), \tag{11}\] \[f=\text{DocAligner}\left(I_{t},I_{s}^{\prime};\theta\right). \tag{12}\] Here \(\theta\) denotes the parameters of DocAligner, \(f\) is the predicted flow field, \(\mathbf{x}\) denotes the image coordinates, and \(\mathbf{G}\) is the Sobel operator used to extract gradients. Eq. 10 is hard to solve directly, so we approximate it using a gradient descent algorithm. Considering the inefficiency of optimizing each sample individually, similar to prior art [31], we optimize our network on the entire test set before testing and then perform inference using the optimized network. It should be noted that our self-supervision process only involves the input data and does not include any ground-truth flow field for supervision.

## 4 Experiments
### Dataset
Due to the lack of publicly available training data, we develop a synthetic dataset named DocAlign12K, whose synthesis pipeline is shown in Fig. 5. First, we collect PDF files from the Internet and convert them into clean document images. Next, we randomly generate flow fields for each of these images. More detailed procedures and parameter settings are available in the supplementary materials. Using the generated flow fields, we warp the clean images to obtain geometrically distorted images. Based on the Lambertian assumption, we consider an image \(I\) as a composition of reflectance \(R\) and shading \(S\), i.e., \(I=R\otimes S\), where \(\otimes\) denotes the Hadamard product. We collect 500 real shadings by capturing backgrounds without texture under various illumination conditions. We randomly select one of these shadings and perform random cropping, rotation, and color shifts to obtain \(S\). Finally, we treat the geometrically distorted image as \(R\) to obtain the final image \(I\). Additionally, we apply random JPEG compression, Gaussian noise, blur, etc., to better simulate real degradations. We obtain 12K samples (triplets of clean documents, photographic documents, and flow fields) and split them into training (10K) and testing (2K) sets.
### Implementation details
We implement DocAligner in the PyTorch framework [28] and train it on two NVIDIA 2080Ti GPUs with a batch size of 4. The widely-used Adam optimizer [19] is adopted. The initial learning rate is set to \(1\times 10^{-4}\) and reduced by a factor of 0.3 after every 30 epochs. We train each model for 100 epochs. The shared feature extraction backbone is a ResNet-18 pre-trained on ImageNet. The input size is set to \(1024\times 1024\). When training with DocAlign12K, flows from all hierarchical levels and refinement iterations are supervised by the ground-truth flow with an \(L1\) loss.

Figure 4: Scheme of self-supervision on real data. The dotted lines indicate the back-propagation of gradients.
Figure 5: The pipeline for synthesizing DocAlign12K.

### Comparison with state-of-the-art
We make comparisons with DGC-Net [26], Glu-Net [36], and Glu-GOCor [35]. All these methods are trained with DocAlign12K. To ensure a fair comparison, we apply our non-rigid pre-alignment method to these methods, since DocAlign12K does not consider the background margin. Note that the input spatial resolution for DGC-Net is set to \(240\times 240\), while that for Glu-Net and Glu-GOCor is set to \(1024\times 1024\). We first evaluate DocAligner and the above-mentioned methods on the testing set of DocAlign12K. As in dense correspondence for natural images [26, 35, 36], we use average endpoint error (AEPE) and percentage of correct keypoints (PCK) [43] as metrics. AEPE measures the average Euclidean distance between the predicted and ground-truth flow fields over all pixels. PCK is defined as the percentage of correctly estimated points that are within a certain Euclidean distance threshold (in pixels); both metrics are sketched below.
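Both metrics are straightforward to compute; a minimal NumPy sketch, assuming flow fields shaped (H, W, 2) with thresholds in pixels:

```
# AEPE and PCK sketches.
import numpy as np

def aepe(flow_pred, flow_gt):
    """Average endpoint error: mean Euclidean distance over all pixels."""
    return np.linalg.norm(flow_pred - flow_gt, axis=-1).mean()

def pck(flow_pred, flow_gt, thresh=1.0):
    """Fraction of points whose endpoint error is within `thresh` pixels."""
    err = np.linalg.norm(flow_pred - flow_gt, axis=-1)
    return float((err <= thresh).mean())
```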
Results in Table 1 demonstrate that DocAligner outperforms previous methods in all metrics by a considerable margin. In particular, DocAligner obtains a relative improvement of 23.4% over the second-best method in PCK-1px, indicating its ability to achieve precise correlation. We then use the DocUNet [25] benchmark, which consists of clean scanned and geometrically-distorted photographic document images, to further validate DocAligner's performance. We use the predicted flow field to warp the pre-aligned photographic image: \(\tilde{I}^{\prime}_{s}=I^{\prime}_{s}(\mathbf{x}+f(\mathbf{x}))\), and then assess the alignment between the target clean scanned image \(I_{t}\) and \(\tilde{I}^{\prime}_{s}\). The better the alignment, the better the model's performance. Multi-scale structural similarity (MS-SSIM) [39] and align distortion (AD) [24] are adopted as evaluation metrics. MS-SSIM is a widely used metric that measures perception-based similarity between two images. AD measures the local distortion between two images via dense SIFT flow; it improves on local distortion (LD) [44] by excluding the noise in low-textured regions and the effect of subtle global transformations. Quantitative and qualitative results are given in Table 2 and Fig. 6, respectively. Although DGC-Net achieves seemingly feasible global alignment, it fails to warp the character details because of its detail-lacking input. The correlation layers in Glu-GOCor are more robust toward repetitive patterns, so it exhibits an improvement over Glu-Net. DocAligner achieves superior performance compared to the above-mentioned methods. Moreover, the gains are further boosted when our self-supervised method is applied. To validate the effectiveness of our self-supervised approach, we compare it with RANSAC-Flow [31], another self-supervised method. Similar to the settings in [31], we do not train the models on our DocAlign12K but solely adopt self-supervised training on real data. We train RANSAC-Flow and DocAligner on the WarpDoc [41] dataset, which consists of 1020 pairs of photographic and clean document images, and then fine-tune and test them on DocUNet. Results shown in Table 3 demonstrate the superiority of DocAligner. We observe that the poor performance of RANSAC-Flow is partially attributable to its rigid pre-alignment, which cannot obtain satisfying pre-alignment results. Besides, our self-supervised method is easier to implement than the sophisticated hierarchical learning for multi-task optimization in RANSAC-Flow.
### Applications of DocAligner
To further validate the feasibility and application value of our approach, we annotate photographic document images for document layout analysis (DLA), illumination correction, and geometric rectification using our DocAligner\({}^{SSFT10}\), and validate the effectiveness of the acquired data. For brevity, we will omit the subscript \(SSFT10\) in the following discussion.
**Document layout analysis**. We use DocAligner to transfer annotations from an existing dataset for document layout analysis (DLA) to photographic images. To accomplish this, we randomly select 2200 samples from PubLayNet [47], a large-scale DLA dataset of clean document images. We then print these images and capture them in various environments before using DocAligner to correlate photographic and clean pairs.
\begin{table} \begin{tabular}{c c c c} \hline & AEPE\(\downarrow\) & PCK-1px (\%)\(\uparrow\) & PCK-5px (\%)\(\uparrow\) \\ \hline DGC-Net [26] & 47.39 & 6.98 & 15.67 \\ Glu-Net [36] & 1.82 & 51.04 & 93.74 \\ Glu-GOCor [35] & 1.54 & 62.15 & 94.49 \\ \hline DocAligner & **1.09** & **76.63** & **96.36** \\ \hline \end{tabular} \end{table} Table 1: Comparisons on DocAlign12K’s testing set.

\begin{table} \begin{tabular}{c c c c c} \hline Methods & MS-SSIM\(\uparrow\) & AD\(\downarrow\) & Parameters (M) & Run-time (s) \\ \hline DGC-Net [26] & 0.6177 & 0.3137 & **68.47** & **0.48** \\ Glu-Net [36] & 0.7728 & 0.1186 & 94.17 & 0.75 \\ Glu-GOCor [35] & 0.7862 & 0.0938 & 94.17 & 0.85 \\ \hline DocAligner & 0.8058 & 0.0486 & 103.8 & 0.93 \\ DocAligner\({}^{SSFT10}\) & **0.8232** & **0.0445** & 103.8 & 0.93 \\ \hline \end{tabular} \end{table} Table 2: Comparisons with state-of-the-art geometric correspondence methods on the DocUNet dataset. \(SSFT10\) denotes Self-Supervised Fine-Tuning on the entire test set for 10 epochs before testing.

\begin{table} \begin{tabular}{c c c} \hline Methods & MS-SSIM\(\uparrow\) & AD\(\downarrow\) \\ \hline RANSAC-Flow [31] & 0.6746 & 0.4700 \\ DocAligner\({}^{SSFT10}\) & **0.7864** & **0.0918** \\ \hline \end{tabular} \end{table} Table 3: Performance on the DocUNet dataset when solely trained with self-supervision.

The resulting flow from DocAligner enabled us to transform the coordinates of bounding boxes and masks to their photographic counterparts, allowing us to obtain the annotations we needed (see the sketch below). It is worth noting that traditional manual labeling typically costs 5-15 minutes per image, whereas our approach reduces this to approximately 0.15 minutes per image. Some acquired samples are shown in Fig. 7, which demonstrates DocAligner's ability to generate diverse data with high-quality annotations. To validate the effectiveness of the acquired data, we use it to train a Mask R-CNN [12]. We randomly select 200 samples as a testing set, while the remaining samples form the training set. We manually inspect the testing set and adjust any incorrect annotations to ensure labeling accuracy. The results, shown in Table 4, illustrate the significant superiority of our acquired training data compared to synthetic data. Additional visualization results can be found in the supplementary materials. Furthermore, it is possible to label other detection tasks involving bounding box annotations, such as table and text line detection, using a similar approach.

\begin{table} \begin{tabular}{c c c c} \hline \hline Training data & Type & Num. & mAP(@0.5-0.95) \\ \hline Clean images & - & 2K & 8.0 \\ Clean images + S & Synthetic & 2K & 36.9 \\ Clean images + G & Synthetic & 2K & 21.5 \\ Clean images + G + S & Synthetic & 2K & 49.7 \\ Clean images + G + S & Synthetic & 20K & 61.9 \\ \hline Data from DocAligner & Real & 2K & **68.0** \\ \hline \hline \end{tabular} \end{table} Table 4: Performance on the photographic testing set when trained with different data. Clean images are originals from PubLayNet. S and G represent shadow and geometric synthesis, respectively, the same as in DocAlign12K.

Figure 6: Input pairs are shown on the left. Warped results are shown in I. In II, we overlap targets on warped results and show local details. Top to bottom show results from a) DGC-Net [26], b) Glu-Net [36], c) Glu-GOCor [35], and d) DocAligner\({}^{SSFT10}\).
Figure 7: Some samples from our acquired DLA dataset.
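The transfer step itself reduces to evaluating the flow at each annotated coordinate. A minimal sketch, using the warping convention \(\tilde{I}^{\prime}_{s}=I^{\prime}_{s}(\mathbf{x}+f(\mathbf{x}))\) above (so a clean-image point \(\mathbf{p}\) maps to \(\mathbf{p}+f(\mathbf{p})\) in the pre-aligned photo), follows; composing with the inverse of the pre-alignment, which the full pipeline also requires, is omitted here.

```
# Annotation transfer sketch: map clean-image coordinates through the flow.
import numpy as np

def transfer_points(points, flow):
    """points: (N, 2) (x, y) coords in the clean image; flow: (H, W, 2)."""
    x = points[:, 0].astype(int)
    y = points[:, 1].astype(int)
    return points + flow[y, x]                    # x' = x + f(x)

def transfer_bbox(bbox, flow):
    """Map an axis-aligned box via its corners, then re-fit a box."""
    x0, y0, x1, y1 = bbox
    corners = np.array([[x0, y0], [x1, y0], [x0, y1], [x1, y1]], float)
    warped = transfer_points(corners, flow)
    return [*warped.min(axis=0), *warped.max(axis=0)]
```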
\begin{table} \begin{tabular}{c c c c c} \hline Network & Training data & Num. & SSIM\(\uparrow\) & PSNR\(\uparrow\) \\ \hline - & - & - & 0.7065 & 12.90 \\ \hline \multirow{2}{*}{illNet} & DocProj & 2450 & 0.7139 & 15.74 \\ \cline{2-5} & Dataset from DocAligner & 800 & **0.7504** & **16.78** \\ \hline \end{tabular} \end{table} Table 5: Performance on the DocUNet dataset when trained with different datasets. Results in the first line represent images without illumination correction (i.e., the input images for the models).

**Illumination correction**. We utilize our DocAligner to annotate the WarpDoc [41] dataset, which consists of clean scanned document images and geometrically-distorted photographic document images. However, we exclude the 'Incomplete' subset, since our approach does not currently support this document type, as explained in the supplementary material. Furthermore, we exclude images with excessively large rotation angles. It is worth noting that such problematic images can be avoided during the photography process by informing the collectors in advance. Using the obtained flow field, we warp the photographic source images toward their clean target images to generate paired data for illumination correction. In total, we obtain 800 pairs. Fig. 8 presents examples of our acquired data alongside synthetic data from DocProj [22]. It can be observed that the synthetic data lacks diversity and realism. For quantitative comparisons, we train illNet [22] with our acquired data and the DocProj data respectively. The resulting models correct the illumination of geometrically-rectified DocUNet images from our DocAligner\({}^{SSFT10}\) (i.e., the results depicted in the last row of Table 2). Table 5 presents the SSIM and PSNR values between the results obtained by illNet and the corresponding clean scanned images. The results indicate that while the model trained with synthetic data yields an improvement, it still lags behind the model trained with real data, even though the real dataset is smaller.  Supplementary materials include qualitative comparisons of models trained with the different datasets.

Figure 8: a) Examples from the synthetic DocProj dataset and b) examples from our acquired real-world dataset.

**Geometric rectification**. We combine the image pairs in DIB [9], WarpDoc [41], DocUNet [25], and our collected DLA data, from which we randomly select 300 pairs as a testing set. The rest are correlated using DocAligner to obtain flow fields as annotations. We finally get a training set with 2.5K samples. Inspired by the success achieved by DocTr [8], we adopt the Transformer as our dewarping network without extra sophisticated designs, and we train it using our constructed training set. As demonstrated in Table 6, the model trained using our acquired data yields promising results. Our model, based on the vanilla Transformer architecture, surpasses previous meticulously-designed approaches that were trained on large-scale synthetic data. This outcome confirms the effectiveness of the real data we acquired.
### Ablation studies
**Non-rigid pre-alignment**. We replace the non-rigid pre-alignment module with a homography transformation and evaluate on DocUNet. We use deep features from a pre-trained ResNet-50 to represent the two paired images and obtain sparse correspondences based on cosine similarity. The RANSAC algorithm [10] is then applied to fit a homography. Results in Table 7 indicate that replacing non-rigid pre-alignment leads to significant performance degradation. The visualized results in Fig. 9 demonstrate that a rigid transformation is only effective in addressing perspective deformation while failing in other cases.
In contrast, our non-rigid pre-alignment method proves to be more robust and effective in dealing with various situations.

\begin{table} \begin{tabular}{c c c c c c} \hline Model & Training data & Type & Num. & MS-SSIM\(\uparrow\) & AD\(\downarrow\) \\ \hline DocUNet [25] & Ma et al. [25] & S & 100K & 0.4157 & 0.4957 \\ DocProj [22] & Li et al. [22] & S & 1K & 0.2531 & 0.9278 \\ DPCP [40] & Xie et al. [40] & S & 30K & 0.4189 & 0.5071 \\ DewarpNet [5] & Doc3D [5] & S & 100K & 0.4057 & 0.5187 \\ DocTr [8] & Doc3D [5] & S & 100K & 0.4649 & 0.4708 \\ PaperEdge [24] & DIW [24]+Doc3D [5] & R+S & 2.3K+100K & 0.4523 & **0.3901** \\ \hline Transformer-based & Dataset from DocAligner & R & 2.5K & **0.4897** & 0.4226 \\ \hline \end{tabular} \end{table} Table 6: Performance when trained with different data. R and S denote real and synthetic data, respectively.

\begin{table} \begin{tabular}{c c c c} \hline Non-rigid pre-alignment & Details recurrent refinement & MS-SSIM\(\uparrow\) & AD\(\downarrow\) \\ \hline ✗ & ✓ & 0.7578 & 0.1099 \\ ✓ & ✗ & 0.7910 & 0.0756 \\ ✓ & ✓ & **0.8058** & **0.0486** \\ \hline \end{tabular} \end{table} Table 7: Ablation studies on our non-rigid pre-alignment and details recurrent refinement modules.

Figure 9: Comparisons between results from rigid pre-alignment and our non-rigid pre-alignment.

**Details recurrent refinement**. To validate the effectiveness of this module, we replace it with another _Local correlation \(\mathbf{C_{L}}\) & flow prediction_ block as in Fig. 2 and retrain the model with the same settings. The output size for this variant is one-quarter of the input size and is bilinearly up-sampled. Results on the DocUNet dataset in Table 7 demonstrate the improvement introduced by details recurrent refinement.
## 5 Conclusions
We present DocAligner for automating the annotation of photographic document images by establishing dense correspondence between these images and their clean counterparts. DocAligner incorporates several carefully designed techniques: non-rigid pre-alignment, hierarchical alignment, details recurrent refinement, a synthesis pipeline, and a self-supervised training approach. Through extensive experiments, we demonstrate the excellent performance of DocAligner in the task of document dense correspondence, generating high-quality annotations. Moreover, DocAligner simplifies the manual annotation process to merely taking pictures, resulting in a significant reduction of the costs associated with manual labeling. When annotating layout analysis data, DocAligner reduces the manual labeling time by a minimum of 30-fold. Experimental results in layout analysis, illumination correction, and geometric rectification emphasize the substantial potential of DocAligner for facilitating various document artificial intelligence tasks in photographic contexts. In our future work, we plan to create additional large-scale real-world datasets for the research community.
2305.04107
DMF-TONN: Direct Mesh-free Topology Optimization using Neural Networks
We propose a direct mesh-free method for performing topology optimization by integrating a density field approximation neural network with a displacement field approximation neural network. We show that this direct integration approach can give comparable results to conventional topology optimization techniques, with an added advantage of enabling seamless integration with post-processing software, and a potential of topology optimization with objectives where meshing and Finite Element Analysis (FEA) may be expensive or not suitable. Our approach (DMF-TONN) takes in as inputs the boundary conditions and domain coordinates and finds the optimum density field for minimizing the loss function of compliance and volume fraction constraint violation. The mesh-free nature is enabled by a physics-informed displacement field approximation neural network to solve the linear elasticity partial differential equation and replace the FEA conventionally used for calculating the compliance. We show that using a suitable Fourier Features neural network architecture and hyperparameters, the density field approximation neural network can learn the weights to represent the optimal density field for the given domain and boundary conditions, by directly backpropagating the loss gradient through the displacement field approximation neural network, and unlike prior work there is no requirement of a sensitivity filter, optimality criterion method, or a separate training of density network in each topology optimization iteration.
Aditya Joglekar, Hongrui Chen, Levent Burak Kara
2023-05-06T18:04:51Z
http://arxiv.org/abs/2305.04107v2
# DMF-TONN: Direct Mesh-free Topology Optimization using Neural Networks

###### Abstract
We propose a direct mesh-free method for performing topology optimization by integrating a density field approximation neural network with a displacement field approximation neural network. We show that this direct integration approach can give comparable results to conventional topology optimization techniques, with an added advantage of enabling seamless integration with post-processing software, and a potential of topology optimization with objectives where meshing and Finite Element Analysis (FEA) may be expensive or not suitable. Our approach (DMF-TONN) takes in as inputs the boundary conditions and domain coordinates and finds the optimum density field for minimizing the loss function of compliance and volume fraction constraint violation. The mesh-free nature is enabled by a physics-informed displacement field approximation neural network to solve the linear elasticity partial differential equation and replace the FEA conventionally used for calculating the compliance. We show that using a suitable Fourier Features neural network architecture and hyperparameters, the density field approximation neural network can learn the weights to represent the optimal density field for the given domain and boundary conditions, by directly backpropagating the loss gradient through the displacement field approximation neural network, and unlike prior work there is no requirement of a sensitivity filter, optimality criterion method, or a separate training of density network in each topology optimization iteration.

keywords: Topology Optimization, Physics-Informed Neural Network, Implicit Neural Representations, Mesh-free

## 1 Introduction
Topology optimization approaches like SIMP (Solid Isotropic Material with Penalisation) ([1; 2]) find the optimum structure for a given set of boundary conditions by meshing the design domain and using an iterative process where each iteration involves an FEA calculation for computing objectives such as compliance. Therefore, removing these iterations completely, or creating a new class of solvers with a reparameterization of the design variables in this optimization problem, is highly desirable. Advances in neural networks, both in learning from large amounts of data and in learning implicit representations of complex signals, show great promise to bring about this transformation, and hence many new approaches trying to utilize neural networks for topology optimization have recently been developed. Data-driven approaches perform instant optimal topology generation at inference time. However, they require generating a large training database, involve long training times, and face generalization issues. Online training approaches use a neural network to represent the density field of designs as an alternative parameterization. They do not face any generalization issues. However, meshing and FEA are still required. One of the first online training topology optimization approaches, TOuNN, was proposed by Chandrasekhar and Suresh [3]. The neural network takes in as inputs the domain coordinates and outputs the density value at each of these coordinates. The loss function consists of the compliance and the volume fraction constraint violation. This loss gradient is backpropagated and used for updating the weights of the neural network such that it learns the optimal density distribution for minimizing the loss. The compliance for the density field is calculated as in the traditional SIMP method, using FEA.
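A common form of such a loss, shown below as an illustration rather than the exact objective of TOuNN or DMF-TONN (the quadratic penalty and weight `alpha` are our choices), normalizes compliance by its initial value and penalizes violation of the target volume fraction:

```
# Illustrative topology optimization loss: compliance + volume constraint.
import torch

def topo_loss(compliance, rho, vf_target, c0, alpha=10.0):
    """compliance: scalar tensor; rho: (N,) densities at sampled points."""
    vol_frac = rho.mean()                        # Monte-Carlo volume estimate
    penalty = (vol_frac / vf_target - 1.0) ** 2  # constraint violation
    return compliance / c0 + alpha * penalty     # c0: initial compliance scale
```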
To remove the meshing requirement of FEA and create a new class of solvers for various partial differential equations (PDEs) in computational mechanics, there have been recent advances and promising results in using physics-informed neural networks (PINNs). Samaniego et al. [4] propose an energy approach for solving the linear elasticity PDEs. The displacement field is parameterized by a neural network which takes as input the domain coordinates and outputs the displacements in each direction at each of these coordinates. The loss function consists of the potential energy, which, when minimized, gives the static equilibrium displacements. Though the computational time of neural network PDE approximation frameworks is worse than that of current state-of-the-art FEA solvers, there are several potential advantages to this approach, including its mesh-free nature and easy modelling of non-linear PDEs. Incorporating these neural network PDE approximation frameworks into online training topology optimization enables mesh-free topology optimization and a new class of solvers for this complex inverse design problem. Zehnder et al. [5] were the first to propose such a mesh-free framework for topology optimization with compliance as an objective, where, in addition to the density field, the displacement field is also parameterized using a neural network. However, they conclude that connecting the two neural networks directly leads to bad local minima. Hence, they propose using the optimality criterion method and sensitivity filtering for calculating target densities. As such, the density neural network needs to be trained to estimate these target densities in every topology optimization iteration. In this work, we show that using directly connected displacement field estimation and density field estimation neural networks is indeed an effective approach for mesh-free topology optimization. In particular, we argue that using just one gradient descent step of the density network in each topology optimization iteration, without any sensitivity or density filtering, leads to results comparable to conventional topology optimization. Moreover, after the initial run of the displacement network, we significantly reduce the number of displacement network training iterations in each topology optimization iteration. We show that transfer learning applies here, and that in this high-dimensional and non-convex optimization setting, approximate losses and gradients can work well. We devise DMF-TONN not as a replacement for SIMP, but as an addition to and improvement of the current class of mesh-free solvers for topology optimization, using the advancements in neural networks. We use Fourier Features and a fully connected layer as the architecture of both our neural networks. We verify the effectiveness of our approach with case studies with different boundary conditions and volume fractions. The implementation of this work is available at: https://github.com/Adityajloglekar/DMF-TONN

Figure 1: Our proposed framework. Each topology optimization iteration consists of: 1) Training the displacement network with the current density field, randomly sampled domain coordinates, and boundary conditions to obtain static equilibrium displacements; 2) Randomly sampling domain coordinates and performing a forward pass through the density network to obtain the current topology, and through the displacement network to obtain the current compliance, which are passed to the density network loss function; 3) Backpropagating the density network loss and performing a gradient descent step on the density network weights.
2) Randomly sampling domain coordinates and performing a forward pass through the density network to obtain the current topology output, and through the displacement network to obtain the current compliance, which are passed to the density network loss function. 3) Backpropagating the density network loss and performing a gradient descent step on the density network weights.

## 2 Literature Review

_Topology optimization_: Bendsoe and Kikuchi [6] introduced the homogenization approach for topology optimization. The SIMP method ([1; 2]) considers the relative material density in each element of the Finite Element (FE) mesh as design variables, allowing for a simpler interpretation and optimised designs with more clearly defined features. Other common approaches to topology optimization include the level-set method ([7; 8]) and evolutionary algorithms. Improving the optimization results and speed of these approaches using neural networks has seen a lot of development recently, and Woldseth et al. [9] provide an extensive overview on this topic.

_Neural Networks for solving PDEs_: Driven by a neural network's ability to approximate functions, there have been several recent works proposing novel solvers for PDEs. Raissi et al. [10] propose PINNs, neural networks that are trained to solve supervised learning tasks while respecting the laws of physics described by general nonlinear partial differential equations. Samaniego et al. [4] propose an approach using neural networks which does not require labelled data points, and just uses domain coordinates and boundary conditions as input to solve computational mechanics PDEs. Nguyen-Thanh et al. [11] develop a deep energy method for finite deformation hyperelasticity. Sitzmann et al. [12] leverage periodic activation functions for implicit neural representations and demonstrate that these networks are ideally suited for representing complex natural signals and their derivatives and solving PDEs. Tancik et al. [13] show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron to learn high-frequency functions in low-dimensional problem domains. We utilize this concept of Fourier Feature mapping for finding good approximations of the displacement field and density field in the low-dimensional coordinate domain.

_Neural networks for topology optimization_: Several data-driven methods for topology optimization using neural networks [14; 15; 16; 17; 18; 19; 20] have been proposed. In this review we focus on the online training topology optimization methods, i.e. those methods which do not use any prior data, but rather train a neural network in a self-supervised manner for learning the optimal density distribution and topology. Chandrasekhar and Suresh [3] explore an online approach where the density field is parameterized using a neural network. Fourier projection based neural networks for length scale control ([21]) and applications for multi-material topology optimization ([22]) have also been explored. Deng and To [23] propose topology optimization with Deep Representation Learning, with a similar concept of re-parametrization, and demonstrate the effectiveness of their method on compliance minimization and stress-constrained problems. Hoyer et al. [24] use CNNs for density parameterization and directly enforce the constraints in each iteration, reducing the loss function to compliance only. Chen et al.
[25] propose a neural network based approach to topology optimization that aims to reduce the use of support structures in additive manufacturing. Chen et al. [26] demonstrate that by using a prior initial field on the unoptimized domain, the efficiency of neural network based topology optimization can be improved. He et al. [27] and Jeong et al. [28] approximate displacement fields using PINNs, but a continuous density field is not learned and the frameworks are not mesh-free. Lu et al. [29] demonstrate the effectiveness of hard constraints over soft constraints for solving PDEs in various topology optimization problems. Zehnder et al. [5] effectively leverage neural representations in the context of mesh-free topology optimization and use multilayer perceptrons to parameterize both the density and displacement fields. Mai et al. [30] develop a similar approach for optimum design of truss structures. We show that, unlike in Zehnder et al. [5], sensitivity filtering, the optimality criterion method, and separate training of the density network in each topology optimization epoch are not necessary for mesh-free topology optimization using neural networks.

## 3 Proposed Method

We parameterize the displacement field as well as the density field using neural networks and integrate them as shown in Figure 1.

```
1:  Initialize neural networks: \(Den_{W_{den}}\), \(Disp_{W_{disp}}\)
2:  Initialize Adam optimizers: \(Opt_{den}\), \(Opt_{disp}\)
3:  Initialize domain \(\rho_{init}\)
4:  for \(n_{dispinit}\) iterations do
5:      Sample domain coordinates \(X_{disp}\)
6:      \(u_{temp}\gets Disp_{W_{disp}}(X_{disp})\)
7:      \(W_{disp}\gets Opt_{disp}.step(W_{disp},\frac{\partial L_{disp}(u_{temp},\rho_{init})}{\partial W_{disp}})\)
8:  end for
9:  for \(n_{opt}\) iterations do
10:     for \(n_{disp}\) iterations do
11:         Sample domain coordinates \(X_{disp}\)
12:         \(\rho_{temp}\gets Den_{W_{den}}(X_{disp})\)
13:         \(u_{temp}\gets Disp_{W_{disp}}(X_{disp})\)
14:         \(W_{disp}\gets Opt_{disp}.step(W_{disp},\frac{\partial L_{disp}(u_{temp},\rho_{temp})}{\partial W_{disp}})\)
15:     end for
16:     Sample domain coordinates \(X_{den}\)
17:     \(\rho\gets Den_{W_{den}}(X_{den})\)
18:     \(u\gets Disp_{W_{disp}}(X_{den})\)
19:     \(c\gets L_{disp}(u,\rho)+EW\)
20:     \(W_{den}\gets Opt_{den}.step(W_{den},\frac{\partial L_{den}(\rho,c)}{\partial W_{den}})\)
21: end for
```
**Algorithm 1** DMF-TONN

### Density Neural Network

The density neural network \(\textit{Den}(\textbf{X}_{den})\) can be represented as follows: \[\textit{Den}(\textbf{X}_{den})=\sigma(\sin(\textbf{X}_{den}\textbf{K}_{den}+\textbf{b})\textbf{W}_{den}) \tag{1}\] The input is a batch of randomly sampled domain coordinates \(\mathbf{X}_{den(\text{batchsize}\times 3)}\). We use the domain center as the origin for the coordinates, and the coordinates are normalized with the longest dimension coordinates ranging from -0.5 to 0.5. We use the concept proposed in Tancik et al. [13] and a neural network architecture similar to the one used in Chandrasekhar and Suresh [21]. The first layer weights (kernel \(\mathbf{K}_{den(3\times\text{kernelsize})}\)) are fixed; they create Fourier features after passing through the sine activation. We add a bias term \(\mathbf{b}\) consisting of ones before applying the sine activation to break the symmetry about the origin. The kernel is created using a grid with the same number of dimensions as the domain, and then reshaping the grid coordinates into the matrix \(\mathbf{K}_{den(3\times\text{kernelsize})}\).
The grid size in each dimension dictates how well it can represent topological features, and the grid's range of values controls the frequency of the output topology, with higher ranges of values giving a topology with more intricate features. Note that this grid is not a mesh structure, and consists solely of coordinates. We find that making the kernel trainable can slightly improve compliance. However, we keep it fixed for all the experiments in this paper, assuming that the slight increase in performance may not be preferable to the large number of trainable weights. The next layer weights (\(\mathbf{W}_{den(\text{kernelsize}\times 1)}\)) are trainable and the output is passed through a sigmoid activation (\(\sigma\)). This ensures output values are between 0 and 1, which represent the density, for each of the coordinates in the input batch. We use Adam (Kingma and Ba [31]) as the optimizer, with a learning rate of \(2.0\times 10^{-3}\) for all the experiments.

### Displacement Neural Network

We use a neural network similar to the density neural network for approximating the displacement field. The physics-informed components shown in Samaniego et al. [4] are then added on the displacement output by this neural network \(\mathit{Disp}(\mathbf{X}_{disp})\). This can be represented as follows: \[\mathit{Disp}(\mathbf{X}_{disp})=\sin(\mathbf{X}_{disp}\mathbf{K}_{disp}+\mathbf{b})\mathbf{W}_{disp} \tag{2}\] We use randomly sampled domain coordinates \(\mathbf{X}_{disp}\) in each displacement network iteration. The frequency determined by \(\mathbf{K}_{disp}\) should be greater than or equal to the frequency determined by \(\mathbf{K}_{den}\). This is because if the displacement network is unable to capture and pass fine changes in displacement to the density network while the density network is attempting to create very fine features, incorrect features are created and disconnections are observed in the final topology. For all our experiments, we use the same frequencies and grid sizes for \(\mathbf{K}_{disp}\) and \(\mathbf{K}_{den}\) and find this setting works well. Multiplying the Fourier features with \(\mathbf{W}_{disp(\text{kernelsize}\times 3)}\) gives the displacements in each direction.

#### 3.2.1 Displacement Constraints

Boundary conditions on displacements, such as fixed sides, are implemented as hard constraints. The output of \(\mathit{Disp}(\mathbf{X}_{disp})\) is multiplied with a differentiable function that is 0 at the fixed boundary and 1 elsewhere. We use the exponential function for this. For example, with a cuboidal domain whose side at the minimum \(x\) coordinate (\(c_{x}=-0.5\) after normalization) is fixed (zero displacement in all three directions), such as in Figure 2a, the hard constraint function takes the form \(2(\frac{1}{(1+\exp(-m(c_{x}+0.5)))}-0.5)\), where \(c_{x}\) is all the \(x\) coordinates in the domain, and \(m\) is a constant which dictates the slope of this function. We find empirically that \(m=20\) works well and use it for all our experiments. For multiple fixed sides, the displacements output by the neural network are multiplied by the functions for each fixed side.

#### 3.2.2 Minimum Potential Energy Loss Function

The principle of minimum potential energy is used for approximating the displacement field, as proposed in Samaniego et al. [4]. The neural network learns the weights that output the displacements that minimize the potential energy, and thus learns to output static equilibrium displacements.
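To make the two parameterizations concrete, the following is a minimal sketch (ours, not the authors' released implementation) of Eqs. (1)-(2) and the hard-constraint multiplier of Section 3.2.1 in TensorFlow. The specific grid construction for the kernel, the \(\pm 35\) bounds, and the zero-initialization of \(\mathbf{W}_{den}\) (which makes the initial density 0.5 everywhere) are our assumptions, based on the hyperparameters reported later in Section 4.

```python
# A minimal sketch, assuming a 16x16x16 kernel grid bounded at +/-35 (Sec. 4).
import numpy as np
import tensorflow as tf

kernel_size = 16 ** 3
grid = np.stack(np.meshgrid(*[np.linspace(-35.0, 35.0, 16)] * 3), -1)
K = tf.constant(grid.reshape(-1, 3).T, dtype=tf.float32)   # (3, kernel_size)

W_den = tf.Variable(tf.zeros((kernel_size, 1)))            # sigmoid(0) = 0.5
W_disp = tf.Variable(tf.random.normal((kernel_size, 3), stddev=1e-3))

def fourier_features(x):            # x: (batch, 3) normalized coordinates
    return tf.sin(x @ K + 1.0)      # bias of ones breaks symmetry (Eq. 1)

def density(x):                     # Eq. (1): density in (0, 1)
    return tf.sigmoid(fourier_features(x) @ W_den)

def hard_constraint(x, m=20.0):     # Sec. 3.2.1: zero at the fixed face
    return 2.0 * (tf.sigmoid(m * (x[:, :1] + 0.5)) - 0.5)

def displacement(x):                # Eq. (2) times the constraint multiplier
    return hard_constraint(x) * (fourier_features(x) @ W_disp)
```

In use, a batch of uniformly sampled coordinates in \([-0.5,0.5]^{3}\) is pushed through `density` and `displacement`, exactly as in Algorithm 1.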
With Monte-Carlo sampling, the loss function of the displacement neural network, \(L_{disp}=\) Potential Energy, for 3D problems, is defined as follows: \[L_{disp}=ISE-EW \tag{3}\] \[ISE=\frac{V}{N}\sum_{i}^{N}(\mu\epsilon_{i}:\epsilon_{i}+\frac{\lambda(trace(\epsilon_{i}))^{2}}{2}) \tag{4}\] \[EW=\frac{A}{N_{b}}\sum_{i}^{N_{b}}Tu_{i} \tag{5}\] where \(ISE=\) Internal Strain Energy, \(EW=\) External Work, \(V=\) domain volume, \(N=\) number of sample points in the domain, \(\mu=\frac{E}{2(1+\nu)}\), \(\lambda=\frac{E\nu}{((1+\nu)(1-2\nu))}\), \(E=\) Young's Modulus, \(\nu=\) Poisson's ratio, \(\epsilon_{i}=\) strain matrix at the \(i^{th}\) point, \(A=\) area on which traction is applied, \(N_{b}=\) number of sample points on the boundary, \(T=\) traction, and \(u_{i}=\) displacement at the \(i^{th}\) point. The symbol ':' indicates element-wise multiplication, followed by a summation. The strains are calculated using automatic differentiation in TensorFlow (Abadi et al. [32]). We use Adam as the optimizer, with a learning rate of \(5.0\times 10^{-6}\) for all the experiments.

### Integration of Density and Displacement Neural Networks

A topology optimization epoch starts by training the displacement network with randomly sampled coordinates, the corresponding current topology (found by a forward pass through the density network) and the boundary conditions. The conventional SIMP method interpolation (\(E=E_{material}(\rho^{3})\), where \(\rho\) is the density) is used for obtaining the Young's modulus \(E\) at each of the randomly sampled domain points in each displacement network iteration. Then, with randomly sampled coordinates, a forward pass is performed through the density network and the displacement network to get the current topology and current compliance (Internal Strain Energy) respectively, which are passed to the density network loss function. The density network loss function is defined as follows: \[L_{den}=\frac{c}{c_{0}}+\alpha(\frac{v}{V^{*}}-1)^{2} \tag{6}\] where \(c=\) compliance, \(c_{0}=\) initial compliance, \(v=\) volume fraction, \(V^{*}=\) target volume fraction, and \(\alpha=\) penalty constant. The compliance (\(c\)) is a function of the densities (\(\rho\)) and displacements (\(u\)), where the displacements are also dependent on the densities. As shown in Zehnder et al. [5], the total gradient of the compliance with respect to the densities is given by \(\frac{dC}{d\rho}=\frac{\partial C}{\partial\rho}+\frac{\partial C}{\partial u}\frac{du}{d\rho}=-\frac{\partial C}{\partial\rho}\), which needs to be incorporated while connecting the two neural networks and backpropagating the loss to the density network weights. As shown in Algorithm 1, during each topology optimization iteration, the density network weights are updated once with a gradient descent step in Adam. Before starting the topology optimization, the displacement network is run with the initial domain as the topology, for \(n_{dispinit}=1000\) iterations, to converge to static equilibrium displacements. Then, in each topology optimization iteration, utilizing the concept of transfer learning (as the topology does not change too drastically between iterations), we run the displacement network only for \(n_{disp}=20\) iterations. We determined the values of the \(n_{dispinit}\) and \(n_{disp}\) variables empirically to give the best results in the cases presented in the paper.
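To make Eqs. (3)-(6) concrete, here is a sketch of the two loss functions (again ours, not the released code). The strain tensor `strain` is assumed to have been computed beforehand from `displacement` via automatic differentiation, and all shapes and argument names are hypothetical:

```python
# A minimal sketch of Eqs. (3)-(6), assuming strain has shape (N, 3, 3).
import tensorflow as tf

E0, nu = 1000.0, 0.3                       # material constants from Sec. 4

def disp_loss(strain, rho, u_b, traction, V, A):
    # Eqs. (3)-(5): potential energy = internal strain energy - external work.
    # The SIMP interpolation E = E0 * rho^3 (Sec. 3.3) enters through the
    # Lame parameters mu and lambda evaluated at every sampled point.
    E = E0 * rho[:, 0] ** 3
    mu = E / (2.0 * (1.0 + nu))
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    ise = V * tf.reduce_mean(
        mu * tf.reduce_sum(strain * strain, axis=[1, 2])    # mu (eps : eps)
        + 0.5 * lam * tf.linalg.trace(strain) ** 2)         # lam tr(eps)^2 / 2
    ew = A * tf.reduce_mean(tf.reduce_sum(traction * u_b, axis=1))
    return ise - ew                                         # Eq. (3)

def den_loss(compliance, c0, vol_frac, v_target, alpha):
    # Eq. (6): normalized compliance plus volume-constraint penalty.
    return compliance / c0 + alpha * (vol_frac / v_target - 1.0) ** 2
```

The sign-flipped total gradient \(\frac{dC}{d\rho}=-\frac{\partial C}{\partial\rho}\) quoted above can then be realized by negating the compliance contribution to the density-network gradient before the Adam step.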
Though we have to increase the number of topology optimization iterations, this 50-fold reduction in displacement network iterations significantly reduces the computational time without compromising the compliance of the results. We run all of our experiments on a machine with a 12th Gen Intel(R) Core(TM) i7-12700 2.10 GHz processor, 16 GB of RAM, and an Nvidia GeForce RTX 3060 GPU.

## 4 Results

We first compare our results with a conventional SIMP topology optimization (Andreassen et al. [33]) and NTopo (Zehnder et al. [5]) for a 2D cantilever beam problem. Then, for a 3D cantilever problem, we present the initial convergence history of the displacement network, and the convergence history of the compliance. Subsequently, we perform a case study for the 3D cantilever beam problem over varying volume fractions and load locations, comparing our results with a conventional 3D SIMP topology optimizer (Liu and Tovar [34]), and with a method using the Fourier Features neural network for density representation and FEA for compliance calculation (similar to Chandrasekhar and Suresh [21]). Then, we present an example showcasing the trade-offs between DMF-TONN and SIMP in terms of compliance and computational time. Lastly, we validate our approach on some additional examples. The output of our network represents an implicit function of the spatial densities. We use the marching cubes algorithm for generating the renders of the results. It should be noted that with DMF-TONN, one can sample infinitely many points within the domain, as a continuous and differentiable function has been learned by the density network. In this paper, we use twice the FEA grid resolution in each direction for the DMF-TONN figures for each example. On all solutions obtained by our approach, we run FEA for calculating the final compliance and use this to compare against the compliance obtained by SIMP (using the same FE solver) for consistency. In all the figures, 'c' refers to compliance and 'vf' refers to volume fraction. We ensure the degrees of freedom available are always more for SIMP than for DMF-TONN. Moreover, in SIMP, we use a density filter with a radius 1.5 times the mesh element size in each of the presented examples, ensuring thin features and lower compliances are possible, and there is no compromise in the SIMP results.
Table 1: Comparison for 2D cantilever beam problem with target volume fraction = 0.5

| Metric | SIMP | NTopo | DMF-TONN |
| --- | --- | --- | --- |
| Convergence Compliance | \(2.86\times 10^{-4}\) | \(2.54\times 10^{-4}\) | \(2.73\times 10^{-4}\) |
| Volume Fraction Achieved | 0.50 | 0.50 | 0.50 |
| Binary Structure Compliance | \(2.66\times 10^{-4}\) | \(2.40\times 10^{-4}\) | \(2.52\times 10^{-4}\) |
| Time | 69 s | 1465 s | 277 s |

Table 2: Comparison for 2D cantilever beam problem with target volume fraction = 0.3

| Metric | SIMP | NTopo | DMF-TONN |
| --- | --- | --- | --- |
| Convergence Compliance | \(4.94\times 10^{-4}\) | \(4.23\times 10^{-4}\) | \(4.76\times 10^{-4}\) |
| Volume Fraction Achieved | 0.30 | 0.30 | 0.30 |
| Binary Structure Compliance | \(4.55\times 10^{-4}\) | \(3.79\times 10^{-4}\) | \(4.01\times 10^{-4}\) |
| Time | 124 s | 1461 s | 275 s |

Figure 3: Case Study of 3D Cantilever Beam Problem

Figure 2: Convergence history of DMF-TONN and result comparison with SIMP for 3D cantilever beam problem with target volume fraction = 0.3

### Comparison of DMF-TONN, SIMP and NTopo for 2D Cantilever Beam Example

We compare the compliances and computational times for a 2D cantilever beam example in Tables 1 and 2. We run the SIMP code (Andreassen et al. [33]) with the default convergence criterion, and run NTopo (Zehnder et al. [5]) for 200 iterations as shown in their work for a similar 2D cantilever beam example. We run our method for \(n_{opt}=2000\) topology optimization iterations (determined empirically for 2D problems), with \(n_{disp}=20\) iterations of the displacement network in each of these topology optimization iterations. We also compare the binary (0 and 1 density values) structure compliance (ensuring the volume fraction remains the same after thresholding), which provides the actual compliance if the optimized structures were to be used in practice. We observe that though our method is slower than SIMP, it results in a mesh-free optimization with a better compliance than SIMP and a faster computational time than NTopo.

### Analysis of a 3D Cantilever Beam Example

In Figure 2, we present the convergence history of DMF-TONN and a comparison with 3D SIMP topology optimization for an example with boundary conditions shown in Figure 2a. We use a \(40\times 20\times 8\) grid for the SIMP FEA, which gives 6400 design variables for the topology optimization, and \(6400\times 3=19200\) degrees of freedom (DOF) for the FEA. We use a model with fewer DOF, with the number of trainable weights in both our density and displacement networks being 4096 each. We use an initial topology consisting of uniform densities of 0.5 (\(\rho_{init(40\times 20\times 8)}=\mathbf{0.5}\)) as an input for initial training of the displacement network (PINN), and Figure 2d shows the convergence history. Figures 2e and 2f show the displacement field in the \(y\) direction at the cross-section of the domain where the force is applied, and the maximum displacement value, at different iterations of the initial run of the displacement network. Figure 2c shows the FEA displacement field at this cross section. We see that at the \(1000^{th}\) iteration, the displacement network can learn a very good approximation of the FEA displacement field.
In each of the \(n_{opt}=700\) topology optimization iterations (determined empirically for 3D problems), we use only \(n_{disp}=20\) displacement network iterations and show that this is adequate for achieving results (Figure 2h) similar to SIMP (Figure 2b). Figure 2g shows the convergence history for the topology optimization, with FEA compliance plotted for consistency with SIMP. Our approach takes 588 seconds compared to 227 seconds for SIMP, but achieves mesh-free topology optimization and a better compliance than SIMP for this example.

### Case Study of 3D Cantilever Beam Problem

In Figure 3 we compare our fully mesh-free approach with a Finite Element Analysis based Neural Network Topology Optimization (FENN TO) approach (i.e. using a Fourier Features neural network for representing the density field and FEA for compliance calculation) and with the conventional SIMP approach. We vary the load location and target volume fraction for the 3D cantilever beam problem. All approaches are run for 700 iterations. We show the plots of the compliance of all three approaches for all these boundary conditions in Figure 3(b), where the x-axis contains the discrete boundary conditions and the y-axis represents the compliance for each of these boundary conditions. Our fully mesh-free approach achieves similar compliance values to the existing SIMP and FENN TO approaches for all the boundary conditions in this case study. The total computational times for running all the examples in this case study are 236 minutes, 96 minutes and 124 minutes for DMF-TONN, SIMP and FENN TO respectively.

### Trade-off Analysis of DMF-TONN and SIMP

In Table 3, we present the results for a right bottom end loaded cantilever beam with a ratio of 3 for the lengths of the sides in the \(x\) and \(y\) directions (the orientation of the axes is the same as in Figure 2a). For the Top3D (SIMP) method, we use their stated convergence criteria of 200 maximum iterations and 0.01 as the tolerance of change in topology. Though the degrees of freedom are fewer for DMF-TONN, it still achieves a better compliance than SIMP with a grid of \(60\times 20\times 8\). However, the computational time is much higher for DMF-TONN. Now, we increase the grid size of SIMP to \(90\times 30\times 12\) to achieve a better compliance than DMF-TONN. However, as seen in Table 3, due to the 1.5 times increase in grid size in each direction, there was a steep increase in computational time for the SIMP method, and the computational time of DMF-TONN is lower than that of the fine-mesh SIMP. This showcases one of the advantages of the mesh-free nature of DMF-TONN, presenting interesting opportunities for trade-offs to be explored in future research.

Table 3: Long cantilever beam with bottom load with target volume fraction = 0.3

| Metric | DMF-TONN | SIMP (\(60\times 20\times 8\) grid) | SIMP (\(90\times 30\times 12\) grid) |
| --- | --- | --- | --- |
| Displacement Calculation Degrees of Freedom | 3456 | 28800 | 97200 |
| No. of Optimization Design Variables | 3456 | 9600 | 32400 |
| Convergence Compliance | \(2.40\times 10^{-3}\) | \(2.65\times 10^{-3}\) | \(2.38\times 10^{-3}\) |
| Volume Fraction Achieved | 0.30 | 0.30 | 0.30 |
| Computational Time | 517 s | 99 s | 807 s |

Figure 4: Long cantilever beam with center load with target volume fraction = 0.3

### Additional Examples

In Figure 4, the load acts at the center of the right side of a long cantilever beam.
The compliance values obtained by DMF-TONN for these boundary conditions are better than those obtained by Top3D (SIMP) (\(60\times 20\times 8\) grid). In Figure 5, the boundary conditions include two loads twisting a beam fixed on one side. The ideal topology should be a hollowed-out beam, and DMF-TONN correctly outputs a similar topology. For this example, we observe that the compliance of the SIMP (\(60\times 15\times 15\) grid) result is better than that of DMF-TONN. In Figure 6 we present a case where a passive (non-design) region is present in the upper right quarter of the domain. This condition is enforced using an additional constraint violation objective for the density network, where the loss is penalized if the density network outputs density values close to 1 in the passive region. For this L-Bracket example, the compliance of the result obtained by DMF-TONN is better than that obtained by SIMP (\(30\times 30\times 10\) grid). In Figure 7, both the left and right sides are fixed and the load acts at the center of the beam. The compliances of the results obtained by SIMP (\(60\times 20\times 8\) grid) and DMF-TONN are similar for this example.

Figure 5: Long cantilever beam with two loads with target volume fraction = 0.5

Figure 6: L-Bracket with target volume fraction = 0.2

Figure 7: Bridge with target volume fraction = 0.3

The computational time for DMF-TONN for each of the presented examples is less than 600 seconds. We use a Young's Modulus \(E=1000\,N/mm^{2}\), a Poisson's ratio of 0.3, a force of \(0.1\,N\), and we normalize the domain with the longest side \(=1\,mm\) for all examples. We use 6000 randomly sampled domain points as the batch in each iteration. We use a kernel grid size of 16 in all three directions and set the upper and lower bounds of the kernel values to 35 and -35 respectively. We find this hyper-parameter setting works well for all problems, except when the domain has a markedly skewed size ratio, such as in the long beam and bridge examples. In those cases, one has to adjust the kernel size in the different directions accordingly. In these examples, we use 24 kernel grid points in the longest direction, 12 kernel grid points in the other directions, and upper and lower bounds of the kernel values of 45 and -45 respectively, for the results presented. For each of the presented examples, the degrees of freedom are always more for SIMP than for DMF-TONN.

## 5 Conclusion

We show that using directly connected displacement field estimation and density field estimation neural networks is indeed an effective approach for mesh-free topology optimization. We verify through various examples that DMF-TONN, which uses just one gradient descent step of the density network in each topology optimization epoch without any sensitivity filtering or density filtering, leads to results comparable to conventional topology optimization. We significantly reduce the computational time compared to prior related works and also explore the trade-offs between DMF-TONN and SIMP for a cantilever beam example, showcasing the advantage of the mesh-free nature of DMF-TONN. There are several limitations observed currently with DMF-TONN. The first one concerns the kernels used in the density and displacement neural networks. The kernel grid has to be scaled according to the domain size if the ratio of side lengths is 3 or greater.
Moreover, we observed that for low target volume fractions (less than 0.2), the kernel is not able to capture the required features and the optimization does not converge. The SIMP method is certainly still one of the best methods for performing topology optimization, and we emphasize that the goal of this work is not to beat SIMP, but rather to devise new techniques and a class of mesh-free solvers, using the advancements in neural networks, for this complex inverse design problem of topology optimization. We show that DMF-TONN works well for various 3D problems and is a stepping stone towards this goal. Future work involves improving the robustness of the kernel, extending the approach to complex problems, and experimenting with and analyzing the effect and advantages of different domain coordinate sampling methods.

## 6 Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2307.15673
RydIQule: A Graph-based Paradigm for Modelling Rydberg and Atomic Systems
We describe a numerical technique and accompanying open-source Python software package called RydIQule. RydIQule uses a directional graph, relying on adjacency matrices and path-finding to generate a Hamiltonian for multi-level atomic systems. RydIQule then constructs semi-classical equations of motion (Bloch equations) into a tensor which can store an entire simulation consisting of varied system parameters. Using this framework, RydIQule returns solutions significantly faster than typical for interpreted programming languages. RydIQule extends beyond the capabilities of currently-available tools, facilitating rapid development in atomic and Rydberg spectroscopy. To demonstrate its utility, we use RydIQule to simulate a Doppler-broadened Rydberg atomic sensor that simultaneously demodulates five rf tones spanning from 1.7 to 116 GHz. Using RydIQule, this simulation can be solved in several hours on a commercial off-the-shelf desktop computer.
Benjamin N. Miller, David H. Meyer, Teemu Virtanen, Christopher M. O'Brien, Kevin C. Cox
2023-07-28T17:05:01Z
http://arxiv.org/abs/2307.15673v1
# RydIQule: A Graph-based Paradigm for Modelling Rydberg and Atomic Systems

###### Abstract

We describe a numerical technique and accompanying open-source Python software package called RydIQule. RydIQule uses a directional graph, relying on adjacency matrices and path-finding to generate a Hamiltonian for multi-level atomic systems. RydIQule then constructs semi-classical equations of motion (Bloch equations) into a tensor which can store an entire simulation consisting of varied system parameters. Using this framework, RydIQule returns solutions significantly faster than typical for interpreted programming languages. RydIQule extends beyond the capabilities of currently-available tools, facilitating rapid development in atomic and Rydberg spectroscopy. To demonstrate its utility, we use RydIQule to simulate a Doppler-broadened Rydberg atomic sensor that simultaneously demodulates five rf tones spanning from 1.7 to 116 GHz. Using RydIQule, this simulation can be solved in several hours on a commercial off-the-shelf desktop computer.

Atomic quantum sensors (e.g. clocks, magnetometers, electrometers, inertial sensors, etc.) are being used to solve real-world problems including global positioning [1], imaging of biological systems [2], and geodesy [3], with new applications continually emerging. The breadth of the atomic sensor design space is daunting, since one may utilize any combination of atomic states, lasers, rf fields, time-dependence, atomic nonlinearities, laser-cooling and trapping, and Rydberg states. Quantum sensor researchers need computationally-powerful software tools to serve as forward models for atomic vapor-based sensing. One example of an emerging quantum sensor with disruptive capabilities is the Rydberg atom-based electric field sensor [4; 5; 6; 7]. Rydberg sensors can utilize dozens or more Rydberg states and detect time-dependent fields across the entire rf spectrum. This structural richness allows for useful tuning schemes, THz imaging, simultaneous multi-band detection, and other potential use cases [8; 9; 10; 11; 12]. Further, these sensors often operate with room-temperature atoms, and modeling hundreds of velocity classes is necessary for accurate predictions. For these reasons, modeling Rydberg sensors is computationally challenging over a large design space. RydIQule and other complementary work can significantly aid the field, allowing rapid iteration and advances toward useful quantum sensing.

RydIQule, the subject of this work, complements previous research on atomic physics numerical methods and solvers [13; 14; 15; 16; 17]. The Alkali Rydberg Calculator (ARC) [18], for example, is now widely used to calculate matrix elements and energy levels of Rydberg states in Alkali atoms and is relied upon by RydIQule. Although mathematical graphs have recently been used for atomistic calculations of materials [19], we are not aware of their use for atomic quantum solvers. RydIQule complements and extends current libraries by providing computationally efficient generation of quantum equations of motion (Bloch equations) as well as solvers that can include many Rydberg states, complex and closed-loop level diagrams, rf fields, time dependence, and Doppler-broadening. RydIQule solves many such systems in seconds or minutes. In this Article, we first describe RydIQule's graph architecture and show how it is used to generate Hamiltonians and equations of motion for multi-level and Rydberg atomic systems. We also discuss how multi-parameter systems are stacked into tensor equations for rapid solving.
We present a minimal (9-line) pseudo-code example of how RydIQule is used. We show that RydIQule's architecture enables a significant advance in speed for forward modeling of atomic and Rydberg quantum sensors. Next, we demonstrate a RydIQule-based simulation of a recent Rydberg atomic sensing experiment [12], including five Rydberg states and five time-dependent rf signals. This scheme uses Rydberg heterodyne detection to simultaneously receive the amplitude and phase of five tones ranging from 1.7 to 116 GHz, a task that would likely be difficult with a classical receiver. The simulation utilizes approximately eight million solver calls and solves in a few hours on a commercial off-the-shelf desktop. The full source code for this simulation is included as supplemental material [20]. The software itself, including a full user manual and documentation, can be found at [https://github.com/QTC-UMD/rydiqule](https://github.com/QTC-UMD/rydiqule).

We first discuss the key elements of RydIQule distinguishing it from previous work that make it uniquely powerful for modeling atomic and Rydberg systems, shown in Figure 1(a-c). The atomic energy diagram (a) is stored as a directed graph using the NetworkX Python package [21]. Atomic levels and their properties are stored as nodes, and field couplings (either laser or rf) between them are stored as edges. NetworkX stores graph nodes and edges as Python dictionaries, allowing multiple edge weights and node attributes. This accommodates a wide range of parameters, including transition frequencies, detunings, laser powers, and decoherence rates, to be stored on the graph. A single graph object of this sort is wrapped in RydIQule's central class, called a sensor, and contains all information about an atomic sensor model, including multiple swept experimental parameters that are defined as arrays. Once a sensor is defined with levels, couplings, and parameters, RydIQule can compute its Hamiltonian matrices, corresponding equations of motion, solutions to those equations, and associated physical observables.

RydIQule's graph-based representation of the atomic system provides a number of key benefits. The first and most obvious of these is flexibility. The graph architecture is general enough to store level diagrams with arbitrary connectivity and provides an intuitive visualization of the level diagram. Second, the graph is used to construct the Hamiltonian \(H\). The off-diagonal elements \(H_{\mathrm{OD}}\) and the diagonal \(H_{\mathrm{D}}\) are computed separately and summed together: \(H=H_{\mathrm{D}}+H_{\mathrm{OD}}\). \(H_{\mathrm{OD}}\) is equal, by construction, to an adjacency matrix of the graph, with edge weights given by the Rabi frequency of each coupling. The decoherence matrix, used to compute the Langevin terms of the master equation, can also be computed from an adjacency matrix, with weights equal to the decoherence rates. In the rotating wave approximation, the \(i\)th diagonal term of \(H_{\mathrm{D}}\) is, by definition, given by the path "distance" \(x_{i}=\sum_{j\in P_{i}}\delta_{j}\) from each node \(i\) to the ground state node (defined at zero distance on the graph), where the distance of each edge (coupling field) is given by its detuning \(\delta_{j}\), and \(j\) indexes the edges along path \(P_{i}\). We use Dijkstra's path-finding algorithm, built into NetworkX, to find each path \(P_{i}\) [21; 22]. Couplings can also be defined by their absolute frequency, in which case no rotating wave approximation is applied.
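As an illustration of the construction just described, the following toy sketch (ours; RydIQule's internal conventions, signs, and prefactors will differ) builds \(H=H_{\mathrm{D}}+H_{\mathrm{OD}}\) for a three-level ladder from a NetworkX graph, using the adjacency matrix for the off-diagonals and summed detunings along paths to the ground node for the diagonals:

```python
# A toy sketch of the graph-to-Hamiltonian idea; not RydIQule's internals.
import networkx as nx
import numpy as np

g = nx.DiGraph()
g.add_edge(0, 1, rabi=1.0, detuning=0.5)   # probe laser coupling
g.add_edge(1, 2, rabi=2.0, detuning=-0.5)  # dressing field coupling

ug = g.to_undirected()
H = nx.to_numpy_array(ug, weight='rabi')   # H_OD from the adjacency matrix
for i, path in nx.shortest_path(ug, source=0).items():
    # H_D: path "distance" = sum of edge detunings from node i to ground
    H[i, i] = sum(ug[a][b]['detuning'] for a, b in zip(path, path[1:]))
print(H)   # rotating-frame Hamiltonian, up to convention-dependent factors
```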
These algorithms applied to the graph structure give RydIQule the flexibility to correctly construct Hamiltonians for ladder, \(\Lambda\), or V-schemes, systems with dozens or more states, hyperfine sublevels, dark states, and time-varying couplings. Diamond schemes that include full loops of field couplings cannot be self-consistently represented in the rotating frame. However, these situations can still be treated easily in RydIQule by specifying one of the loop couplings as time dependent. A full description of the Hamiltonian generation can be found in the documentation and source code [23].

RydIQule relies on the ease-of-use features of the Python language, making it simple to define a system. However, when modeling physical systems using popular interpreted languages such as Python, Matlab, or Mathematica, the time-cost of code interpretation can easily exceed the compute time of a bespoke, compiled solver by orders of magnitude. This often leads to the approach of maximizing computing hardware and/or waiting extended periods of time for results. Innovations occur faster when computations can be rapidly iterated on consumer-grade desktops and laptops. Balancing these competing concerns has been core to our design.

RydIQule mitigates code interpretation slowdowns by making use of compiled NumPy routines [24] for challenging calculations that scale with the problem size. NumPy's routines are written in C, and are the industry standard for efficient matrix calculations. RydIQule's goal is to contain its computational complexity within these functions. The primary way that our framework accomplishes this is via a technique RydIQule calls "stacking", which is core to the design of many functions within NumPy. For basis size (i.e. number of energy levels) \(b\), RydIQule constructs Lindblad master equations of motion (EOM) of size \(n\times n\) in the superoperator form, where \(n=b^{2}-1\) [25]. RydIQule asserts population conservation to eliminate one of the \(b^{2}\) equations. If a Hamiltonian parameter (e.g. coupling detuning or power) is scanned over \(d_{1}\) values, rather than treating \(d_{1}\) individual \(n\times n\) matrices, RydIQule arranges the EOM into a single \(d_{1}\times n\times n\) NumPy array (that is, a rank-three tensor). Similarly, \(d_{2}\) such arrays can be arranged into a single \(d_{2}\times d_{1}\times n\times n\) array, and so on. We refer to the "stack shape" as the array shape of the parameter dimensions (i.e. \([d_{1},...,d_{p}]\)) for \(p\) independent parameters. This treatment allows for parameter spaces of arbitrary dimension to be represented as a single NumPy array. Rather than performing calculations individually in Python and introducing interpreted-language overhead, matrices are built into a single object that can be manipulated with a single call to a compiled NumPy function.

Figure 1: RydIQule graph representation of an atomic system. (a) Example level diagram with its corresponding graph representation in (b). (c) Flow diagram of a RydIQule simulation. (d) Pseudo-code to perform a simulation with RydIQule.

A minimal pseudo-code example of RydIQule is shown in Fig. 1(d). A simulation is performed using 5 steps. First, after RydIQule is imported, a Sensor object is created using the Sensor class to store all of the information about the simulation (line 2). Next, the decoherence rates between atomic levels are defined (line 3). The laser, microwave, and rf field couplings are defined and added (lines 4-6).
Finally, we call a solver function, passing the sensor object as a parameter (line 8). A number of functions exist to retrieve and plot the observables that would be obtained by a real experiment (line 9); a schematic sketch of these steps is given at the end of this discussion. To roughly quantify the performance of RydIQule, we first consider the theoretical expectations of its time complexity. We consider a single set of equations of motion to be of size \(n\times n\). We allow \(p\) independent parameters, where the \(i^{th}\) parameter is scanned over an array of values with length \(d_{i}\). To find a steady-state solution, this matrix is reduced by Gaussian elimination, which is known to have a time cost proportional to the cube of the number of rows \(n\). Thus, we expect a total solve time \(t\propto b^{5}\prod_{i=1}^{p}d_{i}\propto n^{3}\prod_{i=1}^{p}d_{i}\). In the case where each parameter's dimension is the same length \(d\), the product term above simplifies to an exponential \(d^{p}\). We confirm the two scalings, \(t\propto b^{5}\) and \(t\propto d^{p}\), with results shown in Fig. 2. We also show that RydIQule's stacking method solves faster than explicitly looping over parameters in Python. For (a) and (b), we define 4-level systems with three couplings in a ladder configuration, treating all couplings in the rotating wave approximation. We then solve the system in the steady state (a) and time domain for 50 microseconds (b) and measure the time for 0, 1, 2, and 3 of the couplings scanned over a range of 25 detuning values. The fit curve is an exponential function plus a constant to account for roughly constant overhead. Both stacking and looping solve in time proportional to \(d^{p}\), but interpreted Python loops lead to an additional slowdown of order \(\times 100\). In Fig. 2(c), we time RydIQule's steady-state solver versus basis size. The data is fit to a 5th-order polynomial, indicating the expected scaling. The IPython notebooks used to generate this data are included as Supplemental Material [20].

Stacking is computationally efficient for code written in Python since the entire simulation is solved using pre-compiled NumPy routines. However, this requires that the entire model fits into the computer's memory. This condition is often broken, especially when considering room-temperature atoms with Doppler averaging. For this reason, RydIQule can "slice" the stack into chunks that fit in the computer's memory, minimizing the number of calls necessary to the Python interpreter. For example, if a given system of equations would require four times the computer's available memory to solve, RydIQule will automatically break the equations into four pieces and solve them individually. It is with this combination of stacking and slicing that RydIQule handles large simulations efficiently with no additional effort on the part of the user.

In the final part of this Article, we simulate a recent experiment that requires the full set of RydIQule's capabilities. The experiment demodulated five rf fields simultaneously using two lasers and four Rydberg transitions [12]. The source code (Python Jupyter notebook) to create this data is included in the Supplemental Material [20], and can be modified to simulate a large variety of similar schemes. As constituted, this time-dependent simulation took around one minute to complete on a modern 16-core desktop with 256 GB of memory, and 4-6 hours on the same machine including atomic motion. The level diagram is shown in Fig. 3(a).
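Before turning to that simulation, here is the schematic sketch promised above, mirroring the 9-line pseudo-code of Fig. 1(d). The method names and coupling-dictionary keys follow the paper's description; treat the exact signatures as assumptions and consult the released documentation for the canonical API.

```python
# A schematic sketch of the Fig. 1(d) workflow; signatures are assumptions.
import numpy as np
import rydiqule as rq

sensor = rq.Sensor(3)                          # three-level ladder system
sensor.add_decoherence((2, 0), 2 * np.pi * 6)  # decay rate from state 2 to 0
probe = {'states': (0, 1), 'rabi_frequency': 2 * np.pi * 1.0,
         'detuning': 2 * np.pi * np.linspace(-10, 10, 25)}  # scanned array
couple = {'states': (1, 2), 'rabi_frequency': 2 * np.pi * 2.0, 'detuning': 0}
sensor.add_couplings(probe, couple)
sols = rq.solve_steady_state(sensor)           # 25 stacked EOMs, one call
```

The stacking mechanism itself is plain NumPy broadcasting; a self-contained illustration of the idea (with a random stand-in for the EOM tensor) is:

```python
# Broadcast a linear solve over leading parameter axes: no Python loops.
d1, d2, n = 25, 10, 15                # two scanned parameters; n = b**2 - 1
eoms = np.random.rand(d1, d2, n, n)   # stacked equations of motion
rhs = np.random.rand(d1, d2, n)
rho = np.linalg.solve(eoms, rhs)      # all 250 systems solved in one call
print(rho.shape)                      # (25, 10, 15)
```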
The system uses the Rydberg rf heterodyne technique, which relies on the off-resonance square-law response of the atoms to detect the amplitude and phase of an incoming carrier [26; 27; 28]. For simulation purposes, each rf tone is represented as \(E^{i}(t)=E^{i}_{LO}\sin(\omega^{i}_{0}t)+E^{i}_{sig}(t)\sin(\omega^{i}_{0}t+\omega^{i}_{m}t)\), where \(\omega^{i}_{0}\) is the local oscillator detuning for the \(i\)th tone, \(\omega^{i}_{m}\) is the baseband signal frequency, \(E^{i}_{LO}\) is the strength of the local oscillator (LO) portion of the field, and \(E^{i}_{sig}\) is the strength of the signal field.

Figure 2: Time to solve a ladder scheme in RydIQule. (a-b) Comparison of Python loop (blue) vs RydIQule "stacked" (orange) solve time in parameter sweeps over varying numbers of parameters, referred to as stack dimension, in the (a) steady-state and (b) time domain, and exponential fits. (c) Time to solve a steady-state system versus the number of quantum states in the system, along with the best 5th-order polynomial fit.

Figure 3(b) shows the spectroscopy traces that result from scanning the detuning of the probe laser when each tone is applied. The signal is plotted in terms of optical phase, as would be retrieved from a homodyne detection system. For convenience, the local oscillators are arranged so that the total AC Stark shift nominally cancels. The real component of the probing density matrix element \(\rho_{10}\), which is proportional to the sensor's output voltage \(V\propto Re(\rho_{10})\propto\sum_{i}E^{i}_{sig}\sin(\omega^{i}_{m}t)\), is displayed in Fig. 3(c). This time-domain signal includes beats from all applied tones that can be deciphered through its Fourier transform, denoted by \(\mathcal{F}[\,]\) (Fig. 3(d)). The Fourier data shows each heterodyne beat. In the RydIQule documentation [23], we demonstrate additional tools to analyze the signal-to-noise ratio and noise-equivalent field sensitivity to incoming fields.

RydIQule significantly advances publicly-available capabilities to simulate atomic sensors, but further work in RydIQule and other supporting projects is still needed. RydIQule is currently semi-classical. Electromagnetic fields are treated as complex parameters in each equation, with no atomic back action, meaning that RydIQule is most accurate when the optical depth is low [29]. Furthermore, RydIQule is a single-atom solver that does not explicitly handle atom-atom interactions or quantum entanglement between atoms. These approximations are often valid for room-temperature thermal vapor-based atomic spectroscopy in free space, in applications such as atomic magnetometers and, in particular, Rydberg electrometers. But further development will be required to access non-classical applications involving atom-atom and atom-light entanglement. One of the most significant computational overheads in RydIQule arises from handling thermal vapors. Doppler-broadened ensembles require more stacks since we must solve the equations for all velocity classes. Due to detailed structure in the atomic frequency response, RydIQule typically stacks hundreds of velocity classes for accurate results. Significant additional speedups are likely possible using more efficient pre-compilation of the time integration, graphical processing units, or methods to improve the efficiency of sampling atomic motions [30]. We hope that this manuscript will inspire scientists to modify, change, and recreate RydIQule and similar packages for a wide range of applied physics applications.
The key advances, including the graph structure and overall Python framework, can be extended to a wide range of applications including other quantum sensors, quantum memories, and multi-level spectroscopy. Open-source tools to validate models of atomic devices will be a catalyst for the development and application of quantum science.

###### Acknowledgements.

The authors acknowledge helpful advice, discussions, and contributions from Paul Kunz, Joshua Hill, Peter Elgee, Fredrik Fatemi, Nelson Li, and William Wolfs. The authors acknowledge funding from the Defense Advanced Research Projects Agency (DARPA). Benjamin Miller, Christopher O'Brien, and Teemu Virtanen recognize financial support from the Office of Naval Research (ONR) In-House Laboratory Independent Research (ILIR) program at the Naval Air Warfare Center Weapons Division. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

Figure 3: RydIQule model for multitone reception based on Ref. [12]. (a) Level diagram showing the five Rydberg states and five rf tones. (b) Simulated phase shift of the probing laser versus probe laser detuning for several combinations of the rf LO fields. (c) Normalized time response of the density matrix for a fixed probe detuning with the five RF tones applied to the simulated Rb85 cell. (d) Fourier transform of the data shown in (c); we recover the signal frequencies, as indicated by the vertical dashed lines.
2303.07780
Phase transitions in the fractional three-dimensional Navier-Stokes equations
The fractional Navier-Stokes equations on a periodic domain $[0,\,L]^{3}$ differ from their conventional counterpart by the replacement of the $-\nu\Delta\mathbf{u}$ Laplacian term by $\nu_{s}A^{s}\mathbf{u}$, where $A= - \Delta$ is the Stokes operator and $\nu_{s} = \nu L^{2(s-1)}$ is the viscosity parameter. Four critical values of the exponent $s\geq 0$ have been identified where functional properties of solutions of the fractional Navier-Stokes equations change. These values are: $s=\frac{1}{3}$; $s=\frac{3}{4}$; $s=\frac{5}{6}$ and $s=\frac{5}{4}$. In particular: i) for $s > \frac{1}{3}$ we prove an analogue of one of the Prodi-Serrin regularity criteria; ii) for $s \geq \frac{3}{4}$ we find an equation of local energy balance and; iii) for $s > \frac{5}{6}$ we find an infinite hierarchy of weak solution time averages. The existence of our analogue of the Prodi-Serrin criterion for $s > \frac{1}{3}$ suggests the sharpness of the construction using convex integration of H\"older continuous solutions with epochs of regularity in the range $0 < s < \frac{1}{3}$.
Daniel W. Boutros, John D. Gibbon
2023-03-14T10:44:17Z
http://arxiv.org/abs/2303.07780v2
###### Abstract

The fractional Navier-Stokes equations on a periodic domain \([0,\,L]^{3}\) differ from their conventional counterpart by the replacement of the \(-\nu\Delta\boldsymbol{u}\) Laplacian term by \(\nu_{s}A^{s}\boldsymbol{u}\), where \(A=-\Delta\) is the Stokes operator and \(\nu_{s}=\nu L^{2(s-1)}\) is the viscosity parameter. Four critical values of the exponent \(s\) have been identified where functional properties of solutions of the fractional Navier-Stokes equations change. These values are: \(s=\frac{1}{3}\); \(s=\frac{3}{4}\); \(s=\frac{5}{6}\) and \(s=\frac{5}{4}\). In particular, in the fractional setting we prove an analogue of one of the Prodi-Serrin regularity criteria (\(s>\frac{1}{3}\)), an equation of local energy balance (\(s\geq\frac{3}{4}\)) and an infinite hierarchy of weak solution time averages (\(s>\frac{5}{6}\)). The existence of our analogue of the Prodi-Serrin criterion for \(s>\frac{1}{3}\) suggests that the convex integration schemes that construct Hölder-continuous solutions with epochs of regularity for \(s<\frac{1}{3}\) are sharp with respect to the value of \(s\).

**Phase transitions in the fractional three-dimensional Navier-Stokes equations**

Daniel W. Boutros Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK and John D. Gibbon Department of Mathematics, Imperial College London, London SW7 2AZ, UK.

## 1 The fractional Navier-Stokes equations

We consider the incompressible fractional Navier-Stokes equations in the form \[\left(\partial_{t}+\boldsymbol{u}\cdot\nabla\right)\boldsymbol{u}+\nu_{s}A^{s}\boldsymbol{u}=-\nabla P\,,\qquad\qquad A=-\Delta\,, \tag{1.1}\] together with \(\text{div}\,\boldsymbol{u}=0\) and \(\nu_{s}=\nu L^{2(s-1)}\), on a three-dimensional periodic domain \([0,\,L]^{3}\). The fractional Laplacian \(A^{s}\) has the spectral representation \[A^{s}\boldsymbol{u}(\boldsymbol{x},t)\coloneqq\sum_{k\in\mathbb{Z}^{3}}|\boldsymbol{k}|^{2s}\widehat{\boldsymbol{u}}_{k}(t)\exp\left(i\boldsymbol{k}\cdot\boldsymbol{x}\right)\,, \tag{1.2}\] where \(\widehat{\boldsymbol{u}}_{k}\) are the Fourier coefficients of \(\boldsymbol{u}\). Instead of keeping \(s\) fixed at \(s=1\) and then studying the inviscid \(\nu\to 0\) limit in the conventional way, we keep \(\nu\) fixed and study properties of solutions of (1.1) in the limit \(s\to 0\). Inspired by the Lions result [1, Section 8], which shows that solutions of (1.1) are regular when \(s\geq\frac{5}{4}\) (see also Tao [2] and Luo and Titi [3]), much work has concentrated on the hyper-viscous (\(s>1\)) case [4, 5, 6, 7, 8, 9, 10]. However, it is our view that the hypo-viscous regime (\(0<s<1\)) is of equal if not greater interest: see [11] for work on the fractional Burgers equation. In the limit \(s\to 0\) the question arises whether there are significant changes to the properties of solutions of (1.1) before reaching the limit of the damped Euler equations at \(s=0\) \[\left(\partial_{t}+\boldsymbol{u}\cdot\nabla\right)\boldsymbol{u}+\nu_{0}\boldsymbol{u}=-\nabla P\,,\qquad\nu_{0}=\nu L^{-2}\,. \tag{1.3}\] Before summarizing and discussing our main results, it is worth remarking on the fact that the fractional Navier-Stokes equations bear a close relation to the fractional diffusion equation \[\partial_{t}u+\nu_{s}A^{s}u=0\,, \tag{1.4}\] whose solutions are related to the theory of random walks.
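As a numerical aside, the spectral definition (1.2) is straightforward to realize in code; the following short sketch (ours, for a scalar field, with the normalization conventions for a domain of side \(L\) chosen as an assumption) applies \(A^{s}\) with the FFT and can be used to experiment with the \(s\to 0\) limit discussed above.

```python
# A minimal sketch of the spectral definition (1.2) via the FFT.
import numpy as np

def fractional_laplacian(u, s, L=2 * np.pi):
    """Apply A^s = (-Laplacian)^s to a real field on a periodic N^3 grid."""
    N = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # wavenumbers 2*pi*m/L
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    k2s = (kx**2 + ky**2 + kz**2) ** s                # |k|^{2s}; zero at k=0
    return np.fft.ifftn(k2s * np.fft.fftn(u)).real
```

For the single mode \(u=\sin(x)\) on a \(2\pi\)-periodic grid this returns \(1^{2s}\sin(x)=\sin(x)\), consistent with (1.2), and as \(s\to 1\) the operator reduces to \(-\Delta\).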
The language of Brownian motion, with its associated literature [12, 13, 14, 15, 16], has determined the nomenclature of the latter. For \(s=1\) the mean square displacement of a particle is linear with time: \(\left\langle X^{2}\right\rangle\sim t\). However, for the fractional diffusion equation\({}^{1}\) the relation \(\left\langle X^{2}\right\rangle\sim t^{1/s}\) indicates anomalous diffusion when \(s\neq 1\). The case \(s>1\) commonly occurs in biological, fractal and porous media [17, 18, 19, 20, 21, 22, 23], whereas the \(s<1\) case occurs in turbulent plasmas and polymer transport [24, 25]. It is in this latter range where fat-tailed spectra and Levy flights are observed in data.

Footnote 1: Somewhat confusingly, because of the \(1/s\) exponent on \(t\), the hyper-viscous case \(s>1\) corresponds to _sub_-diffusion in the theory of random walks while the hypo-viscous case \(s<1\) corresponds to _super_-diffusion.

A system is commonly considered to go through a phase transition when its properties undergo qualitative changes as a parameter passes through a critical value. The parameter in question is the exponent \(s\) of the fractional Laplacian. The fractional Navier-Stokes equations have many different kinds of solution whose properties may vary depending upon their regularity, their (non-)uniqueness, or the size of their singular set. We list some of them below:

1. Wild solutions originally associated with the \(3D\) Euler equations and Onsager's conjecture [26, 27, 28].
2. Distributional solutions.
3. Suitable weak solutions which have partial regularity (Caffarelli, Kohn and Nirenberg [29]).
4. Weak solutions of Leray-Hopf type.
5. Strong solutions which possess both existence and uniqueness.

Dependent on the setting, there may be some overlap among those listed above. Four critical values of \(s\) have been identified: \(s=\frac{1}{3}\); \(s=\frac{3}{4}\); \(s=\frac{5}{6}\) and \(s=\frac{5}{4}\). The changes to the qualitative properties of solutions at these points are summarised in §1.3, together with references in the literature. These results lay the groundwork for future numerical simulations.

### Notation and invariance properties

Throughout the paper the domain is taken to be the three-dimensional unit torus \(\mathbb{T}^{3}\). For Sobolev norms of the solution we will use the following notation \[H_{n,m}=\int_{\mathbb{T}^{3}}|\nabla^{n}\mathbf{u}|^{2m}dx\equiv\|\nabla^{n}\mathbf{u}\|_{2m}^{2m}\,. \tag{1.5}\] For example, the square of the standard \(\dot{H}^{1}\)-norm is expressed as \(H_{1,1}\) and \(n\)-derivatives in \(L^{2}\) are expressed as \(H_{n,1}\). To avoid confusion we remark that the superscript \(H^{n}\) refers to the Sobolev space whereas the subscripts \(H_{n,m}\) refer to the norms defined in (1.5). Moreover, fractional Sobolev norms for \(m=1\) are defined as follows \[\int_{\mathbb{T}^{3}}|(-\Delta)^{s/2}\mathbf{u}|^{2}dx\equiv\int_{\mathbb{T}^{3}}|A^{s/2}\mathbf{u}|^{2}dx=H_{s,1}\,. \tag{1.6}\] Further properties of the fractional Laplacian can be found in Appendix B. We remark at this point that the \(3D\) fractional Navier-Stokes equations are invariant under the scaling transformation \[\mathbf{x}^{\prime}=\lambda^{-1}\mathbf{x}\,;\quad t^{\prime}=\lambda^{-2s}t\,;\quad\mathbf{u}=\lambda^{1-2s}\mathbf{u}^{\prime}\,, \tag{1.7}\] which reduces to the standard Navier-Stokes scaling when \(s=1\).
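As a quick check (ours, ignoring the periodic-domain bookkeeping), substituting (1.7) into (1.1) gives \[\partial_{t}\boldsymbol{u}=\lambda^{1-4s}\,\partial_{t^{\prime}}\boldsymbol{u}^{\prime}\,,\qquad(\boldsymbol{u}\cdot\nabla)\boldsymbol{u}=\lambda^{1-4s}\,(\boldsymbol{u}^{\prime}\cdot\nabla^{\prime})\boldsymbol{u}^{\prime}\,,\qquad\nu_{s}A^{s}\boldsymbol{u}=\lambda^{1-4s}\,\nu_{s}A^{\prime\,s}\boldsymbol{u}^{\prime}\,,\] so with \(P=\lambda^{2(1-2s)}P^{\prime}\) every term carries the common factor \(\lambda^{1-4s}\) and the equations are indeed scale-invariant. Moreover, \[\|\boldsymbol{u}\|_{\dot{H}^{\sigma}}^{2}=\lambda^{2(1-2s-\sigma)+3}\,\|\boldsymbol{u}^{\prime}\|_{\dot{H}^{\sigma}}^{2}\,,\] which is invariant precisely when \(\sigma=\frac{5}{2}-2s\). This is the critical space \(H^{5/2-2s}(\mathbb{T}^{3})\) that reappears in §1.3 below; it coincides with \(H^{s}\) at \(s=\frac{5}{6}\) and with \(L^{2}\) at \(s=\frac{5}{4}\).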
It is also of interest to see how the properties of solutions across the hypo/hyper-viscous regimes are tied together through invariance properties, as in the standard Navier-Stokes equations [32, 33, 34, 35, 36, 37, 38, 39, 40, 41] - see §5. The technical material in references [42, 43, 44, 45, 46, 47] has been used throughout the paper.

### Leray-Hopf solutions of the fractional Navier-Stokes equations

We begin by introducing the weak formulation of the hypo-dissipative Navier-Stokes equations.

**Definition:** Let \(\mathbf{u}\in L^{\infty}\left[(0,T)\,;L^{2}(\mathbb{T}^{3})\right]\cap L^{2}\left[(0,T)\,;H^{s}(\mathbb{T}^{3})\right]\) and let \(\mathbf{u}_{0}\in L^{2}(\mathbb{T}^{3})\) be the initial data. We say that \(\mathbf{u}\) is a Leray-Hopf weak solution if it satisfies the following weak formulation \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\left[\mathbf{u}\cdot\partial_{t}\psi-\nu(A^{s/2}\mathbf{u})\cdot(A^{s/2}\psi)+\mathbf{u}\otimes\mathbf{u}:\nabla\psi+P\nabla\cdot\psi\right]dxdt=-\int_{\mathbb{T}^{3}}\mathbf{u}_{0}\cdot\psi(\mathbf{x},0)dx\,, \tag{1.8}\] for all \(\psi\in\mathcal{D}\left[\mathbb{T}^{3}\times[0,T)\right]\). Moreover, for all \(T\geq 0\) the solution satisfies the following energy inequality \[\tfrac{1}{2}\int_{\mathbb{T}^{3}}\lvert\mathbf{u}(\mathbf{x},T)\rvert^{2}\,dx+\nu\int_{0}^{T}\int_{\mathbb{T}^{3}}\lvert A^{s/2}\mathbf{u}\rvert^{2}\,dxdt\leq\tfrac{1}{2}\int_{\mathbb{T}^{3}}\lvert\mathbf{u}_{0}(\mathbf{x})\rvert^{2}\,dx\,. \tag{1.9}\] At this point we recall the standard existence result for the Leray-Hopf solutions:

**Theorem 1**.: _For all \(s>0\), there exists a global Leray-Hopf solution satisfying the weak formulation of the fractional Navier-Stokes equations._

For a proof see Appendix A in [48].

### Summary of results

The task of this subsection is to summarize the various functional properties possessed by solutions of the fractional Navier-Stokes equations in different ranges of \(s>0\). These are laid out in the table below. Three of these results are new: namely an analogue of a result\({}^{2}\) of Prodi [51] and Serrin [52] for \(s>\frac{1}{3}\); an equation of local energy balance for \(s\geq\frac{3}{4}\); and an infinite hierarchy of time averages for \(s>\frac{5}{6}\). Various theorems valid in different ranges of \(s\) are expressed in the rest of the subsection. Their proofs can be found in the following sections of the paper.

Footnote 2: In addition to the general regularity criteria on the velocity field for the three dimensional Navier-Stokes equations, Prodi [51] and Serrin [52] showed that control of \(\int_{0}^{t}\left\lVert\nabla\mathbf{u}\right\rVert_{\infty}d\tau\) is another sufficient regularity condition which is applicable in both two and three dimensions. This time integral also applies to the Euler equations. Beale, Kato and Majda [53] then showed how this result for the three dimensional Euler equations could be converted to control over \(\int_{0}^{t}\left\lVert\boldsymbol{\omega}\right\rVert_{\infty}d\tau\) at the price of making the upper bound super-exponential in time. In this paper we consider our result in Theorem 2 to be an analogue of that of Prodi and Serrin.

**1) The case \(0<s<\frac{1}{3}\):** It has previously been noted in §1.2 that for any \(s>0\), there exists a global Leray-Hopf weak solution. It has been shown by Colombo, De Lellis and De Rosa in [48] that these solutions are non-unique for \(s<\frac{1}{5}\).
This result was later improved in [49] to show the non-uniqueness if \(s<\frac{1}{3}\). In the range \(\frac{1}{3}\leq s<\frac{1}{2}\) non-uniqueness of weak solutions with Leray-Hopf regularity has been proved in [48], but the constructed solutions do not satisfy the energy inequality. Buckmaster and Vicol [50] have proved the non-uniqueness of distributional solutions of the Navier-Stokes equations (i.e. with \(s=1\)) while the work of Luo and Titi [3] has extended this result to prove non-uniqueness of distributional solutions for any \(s<\frac{5}{4}\). These results have all been proved using the method of convex integration. **2) The case \(s>\frac{1}{3}\) :** The following theorem expresses a result which is similar in spirit to one of the Prodi-Serrin regularity criteria for the \(3D\) Navier-Stokes equations [51, 52] (see SS2 for the proof) ; **Theorem 2**.: _When \(\frac{1}{3}<s<1\) and for initial data \(\mathbf{u}_{0}\in H^{2}(\mathbb{T}^{3})\), suppose there exists a solution of the fractional Navier-Stokes equations which loses regularity at the earliest time \(T^{*}\), then_ \[\int_{0}^{T^{\star}}\|A^{s/2}\mathbf{u}\|_{\infty}^{\frac{2s}{3s-1}}dt=\infty\,. \tag{1.10}\] _Conversely, for every \(T>0\), if \(\int_{0}^{T}\|A^{s/2}\mathbf{u}\|_{\infty}^{\frac{2s}{3s-1}}dt<\infty\), then solutions of the fractional Navier-Stokes equations remain regular._ There are four things on which to remark. Firstly, the proof displayed in SS2 works only in the range3\(\frac{1}{3}<s<1\). Secondly, when \(s=1\) we recover the Prodi-Serrin result [51, 52], namely \(\int_{0}^{T}\|\nabla\mathbf{u}\|_{\infty}dt\). Thirdly, close to \(s=\frac{1}{3}\), the fractional velocity gradient \(A^{s/2}\mathbf{u}\) needs to be not only \(L^{\infty}\) in space but also nearly \(L^{\infty}\) in time. Fourthly, we remark that this is truly a (fractional) Navier-Stokes and not an Euler result, as the proof will show. In passing we remark that the integral in (1.10) is the only object that need be monitored for regularity purposes in a numerical simulation. Footnote 3: We have managed to extend this proof to the range \(1<s<5/2\) but we omit the details. **3) The case \(s\geq\frac{3}{4}\) :** Next we turn to the equation of local energy balance. It has been proved by Duchon and Robert [54] that Leray-Hopf solutions of the (standard) Navier-Stokes equations satisfy a local energy balance. Under an additional regularity assumption, this result is also true for the Euler equations. Here, we extend Duchon and Robert's approach [54] to the fractional Navier-Stokes equations. First we introduce some notation. Let \(\varphi\in C_{c}^{\infty}\left[\mathbb{R}^{3};\mathbb{R}\right]\) be a standard radial mollifier with the property that \(\int_{\mathbb{R}^{3}}\varphi(\boldsymbol{x})dx=1\). We also introduce the notation \[\varphi^{\epsilon}(\boldsymbol{x})\coloneqq\frac{1}{\epsilon^{3}}\varphi\! \left(\frac{\boldsymbol{x}}{\epsilon}\right).\] In the case \(s\geq\frac{3}{4}\), it is possible to establish an equation of local energy balance for Leray-Hopf solutions. This can be demonstrated in a Corollary to : **Theorem 3**.: _Let \(\boldsymbol{u}\in L^{3}\left[(0,T);L^{3}(\mathbb{T}^{3})\right]\) be a Leray-Hopf weak solution of the fractional Navier-Stokes equations. 
Then the following equation of local energy balance holds for all \(\psi\in\mathcal{D}\left[\mathbb{T}^{3}\times(0,T)\right]\)_ \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\left[|\boldsymbol{u}|^{2}\partial_{t}\psi-2 \nu(A^{s/2}\boldsymbol{u})\cdot A^{s/2}(\boldsymbol{u}\psi)+2p\boldsymbol{u} \cdot\nabla\psi-\tfrac{1}{2}D(\boldsymbol{u})\psi+|\boldsymbol{u}|^{2}\left( \boldsymbol{u}\cdot\nabla\psi\right)\right]\,dxdt=0\,, \tag{1.11}\] _where the defect term is given by_ \[D(\boldsymbol{u})(\boldsymbol{x},\,t) \coloneqq \tfrac{1}{2}\lim_{\epsilon\to 0}\int_{\mathbb{T}^{3}}\nabla \varphi_{\epsilon}(\boldsymbol{\xi})\cdot\delta\boldsymbol{u}(\boldsymbol{ \xi};\,\boldsymbol{x},\,t)|\delta\boldsymbol{u}(\boldsymbol{\xi};\, \boldsymbol{x},t)|^{2}\,d\xi\,, \tag{1.12}\] \[\delta\boldsymbol{u}(\boldsymbol{\xi};\,\boldsymbol{x},t) \coloneqq \boldsymbol{u}(\boldsymbol{x}+\boldsymbol{\xi},t)-\boldsymbol{u}( \boldsymbol{x},t)\,. \tag{1.13}\] **Corollary 1**.: _The equation of local energy balance (1.11) holds automatically for Leray-Hopf solutions of the hypo-dissipative Navier-Stokes equations if \(s\geq\frac{3}{4}\)._ The proof can be found in SS3. **4) The case \(s>\frac{5}{6}\) :** before stating the results for the regularity of Leray-Hopf solutions4, let us begin with the definition Footnote 4: The origin of the exponent \(s=\frac{5}{6}\) is as follows : it is elementary to show that the critical space for the fractional Navier-Stokes equations is \(H^{5/2-2s}(\mathbb{T}^{3})\). This coincides with \(H^{s}(\mathbb{T}^{3})\) (which is part of the Leray-Hopf regularity) when \(s=\frac{5}{6}\). \[\delta_{n,s}\coloneqq\frac{6s-5}{2n+4s-5}\,. \tag{1.14}\] **Theorem 4**.: _Let \(s>\frac{5}{6}\) and \(1\leq n<\infty\), and let \(\boldsymbol{u}\) be a Leray-Hopf solution. Then \(\boldsymbol{u}\) belongs to the following spaces_ \[\boldsymbol{u}\in L^{2\delta_{n,s}}\left[(0,T)\,;H^{n}(\mathbb{T}^{3})\right]. \tag{1.15}\] The proof can be found in SS4 and is based on the seminal but relatively unknown paper of Foias, Guillope and Temam [36] in which Theorem 4 was proved in the case \(s=1\). Theorem 4 shows that there is an infinite hierarchy of finite time integrals (or averages), as advertised in the 5th line of the Table in SS1.3. How this result ties in with the invariance properties given in (1.7) is left to SS5. **5) The case \(s\geq\frac{5}{4}\) :** The well-known regularity result of Lions [1] (see also Tao [2]) ties in with the results of Theorems 2 and 4 in the following way. Lions' proof means that the Prodi-Serrin-like time integral in (1.10) is actually bounded when \(s\geq\frac{5}{4}\), so we ask the question, at what value of \(s\) does this integral coincide with the hierarchy of weak solutions expressed in (1.15)? That is, when do the weak solutions of Theorem 4 become strong solutions? We note that it is possible to prove the result of Theorem 2 in the case \(s>1\), although the proof will be omitted. Our purpose here is to illustrate how the results of Theorems 2 and Theorem 4 can be combined to yield the global regularity result for \(s\geq\frac{5}{4}\). By Agmon's inequality we find that \[\|A^{s/2}\boldsymbol{u}\|_{\infty}^{\frac{2s}{3-1}}\leq H_{1+s,1}^{\frac{s}{6s -2}}H_{2+s,1}^{\frac{s}{6s-2}}. \tag{1.16}\] In the proof of Theorem 4 we will show that \[\boldsymbol{u}\in L^{2\gamma_{n}}\left[\left(0,T\right);H^{n+s}(\mathbb{T}^{3 })\right]\,,\quad\text{where}\quad\gamma_{n}=\frac{6s-5}{2n+6s-5}. 
\tag{1.17}\] Then integrating with respect to time we find that \[\int_{0}^{T}\|A^{s/2}\boldsymbol{u}\|_{\infty}^{\frac{2s}{3-1}}\,dt \leq\int_{0}^{T}H_{1+s,1}^{\frac{s}{6s-2}}H_{2+s,1}^{\frac{s}{6s- 2}}\,dt\] \[\leq\bigg{(}\int_{0}^{T}H_{1+s,1}^{\gamma_{1}}\,dt\bigg{)}^{ \frac{s}{(6s-2)\gamma_{1}}}\bigg{(}\int_{0}^{T}H_{1+s,1}^{\frac{s\gamma_{1}}{ (6s-2)\gamma_{1}-s}}\,dt\bigg{)}^{\frac{(6s-2)\gamma_{n}-s}{(6s-2)\gamma_{n}}}\,. \tag{1.18}\] For the second time integral on the right hand side of (1.18) to be bounded, we must have \[\frac{s\gamma_{1}}{(6s-2)\gamma_{1}-s}=\gamma_{2}\qquad\implies\qquad s= \tfrac{5}{4}\,. \tag{1.19}\] Thus we know that for \(s\geq\frac{5}{4}\) the norm (1.10) is globally controlled by any Leray-Hopf solution by Theorem 4. Then Theorem 2 implies that a local-in-time strong solution must stay regular and hence the fractional Navier-Stokes equations are globally well-posed for \(s\geq\frac{5}{4}\), which is in agreement with the results in [1]. ## 2 Proof of Theorem 2 The statement of Theorem 2 is based on the assumption that we start with a regular solution in \([0,\,T^{*})\). Thus we are able to differentiate the (spatial) \(H_{n,1}\)-norms with respect to time. We begin with the standard ladder of Sobolev norms which can be obtained using standard energy estimates in an adaption of the proof of Theorem 6.1 in [32] : \[\tfrac{1}{2}\frac{d}{dt}H_{n,1}\leq-\nu_{s}H_{n+s,1}+c_{n,s}\|\nabla \boldsymbol{u}\|_{\infty}H_{n,1}\,. \tag{2.1}\] Now we would like to adapt this estimate. \(\|\nabla\boldsymbol{u}\|_{\infty}H_{n,1}\) and \(\|\nabla^{s}\boldsymbol{u}\|_{\infty}H_{n+p,1}\) (where \(p=\frac{1}{2}(1-s)\geq 0\)) have the same dimensions ; i.e. under the transformation (1.7) they satisfy the same scaling relation. Thus, we seek an inequality relation between them, which we prove in the next lemma. **Lemma 1**.: _Provided \(0<s<1\) and \(n>2+\frac{1}{2}s\), with \(p=\frac{1}{2}(1-s)\), then the following inequality holds_ \[\|\nabla\boldsymbol{u}\|_{\infty}H_{n,1}\leq c_{n,s}\|A^{s/2} \boldsymbol{u}\|_{\infty}H_{n+p,1}\,. \tag{2.2}\] Proof.: We define \(U\coloneqq A^{s/2}\boldsymbol{u}\). We also fix \(r\) such that \(s+\frac{1}{2}<r<\frac{3}{2}\), and by using Agmon's inequality we find \[\|\nabla\boldsymbol{u}\|_{\infty}\leq\|\nabla\boldsymbol{u}\|_{\dot{H}^{r}} ^{a}\|\nabla\boldsymbol{u}\|_{\dot{H}^{n+p-1}}^{1-a}\leq\|U\|_{\dot{H}^{r+1-s }}^{a}\|U\|_{\dot{H}^{n+p-s}}^{1-a}, \tag{2.3}\] where \[\frac{3}{2}=ar+(1-a)(n+p-1)\qquad\implies\qquad a=\frac{n+p-\frac{5}{2}}{n+p- r-1}\,. \tag{2.4}\] Then, by using the Gagliardo-Nirenberg-Sobolev interpolation inequality (see [43]) we find \[\|U\|_{\dot{H}^{r+1-s}}\leq\|U\|_{\infty}^{b}\|U\|_{\dot{H}^{n+p-s}}^{1-b}\,, \tag{2.5}\] with the following relation between the exponents \[\frac{1}{2}=\frac{1-b}{2}-\frac{(1-b)(n+p-s)-(r+1-s)}{3}\,. \tag{2.6}\] This implies that \[b\left(\frac{n+p-s}{3}-\frac{1}{2}\right)=\frac{n+p-r-1}{3}\,. \tag{2.7}\] Again, by applying a Gagliardo-Nirenberg-Sobolev inequality we obtain \[\|\nabla^{n}\mathbf{u}\|_{2}=\|A^{(n-s)/2}U\|_{2}=\|U\|_{\dot{H}^{n-s}}\leq C\,\|A ^{(n+p-s)/2}U\|_{2}^{1-c}\|U\|_{\infty}^{c} \tag{2.8}\] with the following relation between the exponents \[\frac{1}{2}=\frac{1-c}{2}-\frac{(1-c)(n+p-s)-(n-s)}{3}\,. \tag{2.9}\] This implies that \[c\left(\frac{n+p-s}{3}-\frac{1}{2}\right)=\frac{p}{3}\,. \tag{2.10}\] Combining these inequalities gives us \[\|\nabla\mathbf{u}\|_{\infty}H_{n,1}\leq\|U\|_{\infty}^{ab+2c}\|\mathbf{u}\|_{\dot{H}^ {n+p}}^{3-ab-2c}. 
\tag{2.11}\] The proof is completed if we can show that \(ab+2c=1\), which is confirmed by \[ab+2c =\frac{n+p-\frac{5}{2}}{n+p-r-1}\cdot\frac{n+p-r-1}{n+p-s-\frac{3 }{2}}+\frac{2p}{n+p-s-\frac{3}{2}}\] \[=\frac{n+p-\frac{5}{2}+2p}{n+p-s-\frac{3}{2}}=1\,. \tag{2.12}\] We are now ready to proceed with the proof of Theorem 2 : Proof of Theorem 2.: By a standard interpolation inequality for homogeneous Sobolev spaces we have \[H_{n+p,1}^{s}\leq H_{n+s,1}^{(1-s)/2}H_{n,1}^{(3s-1)/2}\,. \tag{2.13}\] Recalling that \(p=\frac{1}{2}(1-s)\), one can check that \[\tfrac{1}{2}(1-s)(n+s)+\tfrac{1}{2}n(3s-1)=(n+p)s\,.\] Thus, for \(s>\frac{1}{3}\), by using the ladder of Sobolev norms, as well as inequalities (2.2) and (2.13), we find \[\tfrac{1}{2}\frac{d}{dt}H_{n,1} \leq -\nu_{s}H_{n+s,1}+c_{n,s}\|\nabla\mathbf{u}\|_{\infty}H_{n,1}\] \[\leq -\nu_{s}H_{n+s,1}+c_{n,s}\|A^{s/2}\mathbf{u}\|_{\infty}H_{n+p,1}\] \[\leq -\nu_{s}H_{n+s,1}+c_{n,s}\|A^{s/2}\mathbf{u}\|_{\infty}H_{n+s,1}^{(1-s)/ 2s}H_{n,1}^{(3s-1)/2s}\] \[\leq -\nu_{s}H_{n+s,1}+\{\nu_{s}H_{n+s,1}\}^{(1-s)/2s}\left\{c_{n,s} \nu_{s}^{-\frac{1-s}{3s-1}}\|A^{s/2}\mathbf{u}\|_{\infty}^{2s/(3s-1)}H_{n,1}\right\} ^{(3s-1)/2s}\] \[\leq -\nu_{s}H_{n+s,1}+\frac{(1-s)\nu_{s}}{2s}H_{n+s,1}+\left(\frac{3s- 1}{2s}\right)c_{n,s}\nu_{s}^{-\frac{1-s}{3s-1}}\|A^{s/2}\mathbf{u}\|_{\infty}^{2s/ (3s-1)}H_{n,1}\] \[\leq -\nu_{s}\left(\frac{3s-1}{2s}\right)H_{n+s,1}+\left(\frac{3s-1}{ 2s}\right)c_{n,s}\nu_{s}^{-\frac{1-s}{3s-1}}\|A^{s/2}\mathbf{u}\|_{\infty}^{2s/(3s -1)}H_{n,1}\,. \tag{2.14}\] In the penultimate line we have used Young's inequality. Note that the constant \(c_{n,s}\) may change from line to line. The last line shows why this is a Navier-Stokes and not an Euler result, because of the necessary use of the dissipation term at the last step. Then, by removing the negative \(H_{n+s,1}\)-term and applying Gronwall's inequality we can write \[H_{n,1}(T)\leq c_{n,s}H_{n,1}(0)\exp\left\{\nu_{s}^{-\frac{1-s}{3s-1}}\int_{0} ^{T}\|A^{s/2}\mathbf{u}\|_{\infty}^{\frac{2s}{3s-1}}\,dt\right\}\qquad\text{for} \qquad s>\tfrac{1}{3}\,. \tag{2.15}\] The proof is now finished by contradiction. Let us assume that \(\int_{0}^{T^{*}}\|A^{s/2}\mathbf{u}\|_{\infty}^{\frac{2s}{3s-1}}\,dt\) is finite. Then \(H_{n,1}(T^{*})\) is finite, which contradicts the supposition that regularity is lost at \(T^{*}\). Thus the opposite must be true, i.e. the integral must be infinite if regularity is lost at \(T^{*}\). ## 3 Proof of Theorem 3 Now we will show that for \(s\geq\tfrac{3}{4}\) the Leray-Hopf solutions satisfy an equation of local energy balance. In order to prove Theorem 3 the following identity is necessary \[\int_{\mathbb{T}^{3}}\left(A^{s}f\right)gdx=\int_{\mathbb{T}^{3}}\left(A^{s/2 }f\right)\left(A^{s/2}g\right)dx\,. \tag{3.1}\] The proof is similar to that for (B.1) by using the spectral characterisation of the fractional Laplacian as well as the Plancherel identity. First, however, we prove the following Lemma : **Lemma 2**.: _Let \(\mathbf{u}\) be a Leray-Hopf weak solution of the fractional Navier-Stokes equations. 
The weak formulation (1.8) still holds for \(\psi\in W_{0}^{1,1}\left[(0,T)\,;L^{2}(\mathbb{T}^{3})\right]\cap L^{1}\left[(0,T)\,;H^{3}(\mathbb{T}^{3})\right]\)._ Proof.: Let us take an arbitrary \(\psi\in W_{0}^{1,1}\left[(0,T)\,;L^{2}(\mathbb{T}^{3})\right]\cap L^{1}\left[ (0,T)\,;H^{3}(\mathbb{T}^{3})\right]\), then there exists a sequence \(\{\psi_{n}\}\subset\mathcal{D}\left[\mathbb{T}^{3}\times(0,T)\right]\) such that \(\psi_{n}\to\psi\) in \(W_{0}^{1,1}\left[(0,T)\,;L^{2}(\mathbb{T}^{3})\cap L^{1}\left[0,T)\,;H^{3}( \mathbb{T}^{3})\right]\). First we observe that for any \(\psi_{n}\) equation (1.8) holds because \(\psi_{n}\in\mathcal{D}\left[\mathbb{T}^{3}\times(0,T)\right]\). We know that \(\mathbf{u}\partial_{t}\psi_{n}\to\mathbf{u}\partial_{t}\psi\) in \(L^{1}\left[(0,T)\,;L^{1}(\mathbb{T}^{3})\right]\) and therefore \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\mathbf{u}\partial_{t}\psi_{n}\,dxdt\xrightarrow{ n\to\infty}\int_{0}^{T}\int_{\mathbb{T}^{3}}\mathbf{u}\partial_{t}\psi\,dxdt\,. \tag{3.2}\] Similarly, we know that \((A^{s/2}\mathbf{u})(A^{s/2}\psi_{n})\to(A^{s/2}\mathbf{u})(A^{s/2}\psi)\), \(\mathbf{u}\otimes\mathbf{u}:\nabla\psi_{n}\to\mathbf{u}\otimes\mathbf{u}:\nabla\psi\) and \(P\nabla\cdot\psi_{n}\to P\nabla\cdot\psi\), where all the limits converge in \(L^{1}\left[\mathbb{T}^{3}\times(0,T)\right]\). Therefore we have \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\bigg{[}-\nu(A^{s/2}\mathbf{u})(A^{s/2 }\psi_{n})+\mathbf{u}\otimes\mathbf{u}:\nabla\psi_{n}+P\nabla\cdot\psi_{n}\bigg{]}\,dxdt\] \[\to\int_{0}^{T}\int_{\mathbb{T}^{3}}\bigg{[}-\nu(A^{s/2}\mathbf{u})(A ^{s/2}\psi)+\mathbf{u}\otimes\mathbf{u}:\nabla\psi+P\nabla\cdot\psi\bigg{]}\,dxdt\,. \tag{3.3}\] We conclude that the weak formulation holds for all \(\psi\in W_{0}^{1,1}\left[(0,T)\,;L^{2}(\mathbb{T}^{3})\right]\cap L^{1}\left[(0,T)\,;H^{3}(\mathbb{T}^{3})\right]\). Now we prove the following lemma : **Lemma 3**.: _Let \(s\geq\frac{3}{4}\) and let \(\mathbf{u}\) be a Leray-Hopf weak solution of the fractional Navier-Stokes equations. Then \(\mathbf{u}\in L^{3}\left[\mathbb{T}^{3}\times(0,T)\right]\) and \(P\in L^{3/2}\left[\mathbb{T}^{3}\times(0,T)\right]\)._ Proof.: The following \(3D\) interpolation inequality is useful \[\|f\|_{L^{p}}\leq C\|f\|_{L^{q}}^{\theta}\|f\|_{H^{s}}^{1-\theta},\quad\frac{ 1}{p}=\frac{\theta}{q}+(1-\theta)\bigg{(}\frac{1}{2}-\frac{s}{3}\bigg{)}\,. \tag{3.4}\] from which we find \[\|\mathbf{u}\|_{L^{3}}\leq CH_{0,1}^{(2s-1)/4s}H_{s,1}^{1/4s}\,. \tag{3.5}\] We recall that \(\mathbf{u}\in L^{\infty}\left[(0,T)\,;L^{2}(\mathbb{T}^{3})\right]\) and hence the time integral of any power of the \(L^{2}\) norm is finite. However since \(\mathbf{u}\in L^{2}\left[(0,T)\,;H^{s}(\mathbb{T}^{3})\right]\), in order for \(\mathbf{u}\in L^{3}\left[\mathbb{T}^{3}\times(0,T)\right]\) we require \[\frac{3}{4s}\leq 1\qquad\implies\qquad s\geq\tfrac{3}{4}\,. \tag{3.6}\] The pressure satisfies the following equation (in the sense of distributions) \[-\Delta P=(\nabla\otimes\nabla):(\mathbf{u}\otimes\mathbf{u})\,, \tag{3.7}\] and since \(\mathbf{u}\in L^{3}\left[\mathbb{T}^{3}\times(0,T)\right]\), it follows by the boundedness of the Riesz transform (see appendix B in [34]) that \(P\in L^{3/2}\left[\mathbb{T}^{3}\times(0,T)\right]\), which is what we needed to show. 
**Proof of Theorem 3 :** We mollify the hypo-dissipative Navier-Stokes equations, multiply by \(\psi\mathbf{u}\) and integrate in time and space to obtain \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\psi\mathbf{u}\cdot\left[\partial_{t}\mathbf{u}^{ \epsilon}+\nabla\cdot(\mathbf{u}\otimes\mathbf{u})^{\epsilon}+\nu A^{s}\mathbf{u}^{ \epsilon}+\nabla P^{\epsilon}\right]dxdt=0\,. \tag{3.8}\] We first observe that \(\mathbf{u}^{\epsilon}\in L^{\infty}\left[(0,T)\,;C^{\infty}(\mathbb{T}^{3})\right]\). From mollifying the equation we find that \[\partial_{t}\mathbf{u}^{\epsilon}\in L^{2}\left[(0,T)\,;C^{\infty}(\mathbb{T}^{3})\right] \tag{3.9}\] as \(\nabla\cdot(\mathbf{u}\otimes\mathbf{u})^{\epsilon}+\nu A^{s}\mathbf{u}^{\epsilon}+\nabla P ^{\epsilon}\) lies in this space. Hence \(\mathbf{u}^{\epsilon}\in H^{1}\left[(0,T)\,;C^{\infty}(\mathbb{T}^{3})\right]\). Therefore, we can apply Lemma 2 and take \(\mathbf{u}^{\epsilon}\psi\) as a test function in the weak formulation (1.8). Subtracting equation (3.8) gives us \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\left[\mathbf{u}\cdot\partial_{t}( \mathbf{u}^{\epsilon}\psi)-\psi\mathbf{u}\cdot\partial_{t}\mathbf{u}^{\epsilon}-\nu(A^{s/ 2}\mathbf{u})\cdot A^{s/2}(\mathbf{u}^{\epsilon}\psi)-\nu A^{s/2}(\mathbf{u}\psi)(A^{s/2} \mathbf{u}^{\epsilon})+P\nabla\cdot(\mathbf{u}^{\epsilon}\psi)\right.\] \[\left.-\psi\mathbf{u}\cdot\nabla P^{\epsilon}+\mathbf{u}\otimes\mathbf{u}: \nabla(\psi\mathbf{u}^{\epsilon})-\mathbf{u}\psi\cdot\left(\nabla\cdot(\mathbf{u}\otimes \mathbf{u})^{\epsilon}\right)\right]dxdt=0\,. \tag{3.10}\] Next we introduce a mollified defect term \(D_{\epsilon}(\mathbf{u})\). Noting that \(\varphi^{\epsilon}\) is a smooth mollifier, \(D_{\epsilon}(\mathbf{u})\) becomes \[D_{\epsilon}(\mathbf{u})(\mathbf{x},\,t) \coloneqq\int_{\mathbb{R}^{3}}\nabla\varphi^{\epsilon}\cdot\delta \mathbf{u}(\mathbf{\xi};\,\mathbf{x},t)|\delta\mathbf{u}(\mathbf{\xi};\,\mathbf{x},t)|^{2}d\xi\] \[=-\nabla\cdot(|\mathbf{u}|^{2}\mathbf{u})^{\epsilon}+\mathbf{u}\cdot\nabla(| \mathbf{u}|^{2})^{\epsilon}+2\mathbf{u}\otimes\nabla:(\mathbf{u}\otimes\mathbf{u})^{\epsilon} -2\mathbf{u}\otimes\mathbf{u}:\nabla\mathbf{u}^{\epsilon}\,, \tag{3.11}\] with \(\delta\mathbf{u}\) defined as in (1.13). We observe that \(D_{\epsilon}(\mathbf{u})\) is well-defined for any \(\epsilon>0\) because of the assumption that \(\mathbf{u}\in L^{3}\left[\mathbb{T}^{3}\times(0,T)\right]\). Equation (3.10) can also be rewritten as follows \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\left[\mathbf{u}\cdot\mathbf{u}^{ \epsilon}\partial_{t}\psi-\nu(A^{s/2}\mathbf{u})\cdot A^{s/2}(\mathbf{u}^{\epsilon} \psi)-\nu A^{s/2}(\mathbf{u}\psi)(A^{s/2}\mathbf{u}^{\epsilon})+(\mathbf{u}^{\epsilon}P+ \mathbf{u}P^{\epsilon})\cdot\nabla\psi\right.\] \[-\tfrac{1}{2}D_{\epsilon}(\mathbf{u})(\mathbf{x},\,t)\psi-\tfrac{1}{2} \psi\nabla\cdot(|\mathbf{u}|^{2}\mathbf{u})^{\epsilon}+\tfrac{1}{2}\psi\mathbf{u}\cdot \nabla(|\mathbf{u}|^{2})^{\epsilon}+(\mathbf{u}\cdot\mathbf{u}^{\epsilon})\mathbf{u}\cdot \nabla\psi\right]dxdt=0\,, \tag{3.12}\] where we have used the incompressiblity when rewriting the pressure terms. 
As \(\epsilon\to 0\), we observe that we have the following convergence in \(L^{1}\left[\mathbb{T}^{3}\times(0,T)\right]\) \[\mathbf{u}\cdot\mathbf{u}^{\epsilon}\partial_{t}\psi-\nu(A^{s/2}\mathbf{u}) \cdot A^{s/2}(\mathbf{u}^{\epsilon}\psi)-\nu A^{s/2}(\mathbf{u}\psi)(A^{s/2}\mathbf{u}^{ \epsilon})+(\mathbf{u}^{\epsilon}P+\mathbf{u}P^{\epsilon})\cdot\nabla\psi\] \[\xrightarrow{\epsilon\to 0}|\mathbf{u}|^{2}\partial_{t}\psi-2\nu(A^{s/ 2}\mathbf{u})\cdot A^{s/2}(\mathbf{u}\psi)+2P\mathbf{u}\cdot\nabla\psi\,.\] In addition we have \[\int_{0}^{T}\int_{V}(\mathbf{u}\cdot\mathbf{u}^{\epsilon})\mathbf{u}\cdot \nabla\psi\,dxdt\xrightarrow{\epsilon\to 0}\int_{T}\int_{V}|\mathbf{u}|^{2}\mathbf{u} \cdot\nabla\psi\,dxdt\,, \tag{3.13}\] as well as (by integrating by parts) \[\int_{0}^{T}\int_{V}\bigg{[}-\tfrac{1}{2}\psi\nabla\cdot(|\mathbf{u}|^{2}\mathbf{u})^ {\epsilon}+\tfrac{1}{2}\psi\mathbf{u}\cdot\nabla(|\mathbf{u}|^{2})^{\epsilon}\bigg{]} dxdt\xrightarrow{\epsilon\to 0}0\,. \tag{3.14}\] We can now write the following equation for the defect term \[\tfrac{1}{2}D_{\epsilon}(\mathbf{u}) =-\partial_{t}(\mathbf{u}\cdot\mathbf{u}^{\epsilon})-\nu A^{s}\mathbf{u} \cdot\mathbf{u}^{\epsilon}-\nu A^{s}\mathbf{u}^{\epsilon}\cdot\mathbf{u}-\nabla\cdot(\bm {u}^{\epsilon}P+\mathbf{u}P^{\epsilon})+\tfrac{1}{2}\nabla\cdot\left[(|\mathbf{u}|^{2 }\mathbf{u})^{\epsilon}-\mathbf{u}(|\mathbf{u}|^{2})^{\epsilon}\right]\] \[-\nabla\cdot((\mathbf{u}\cdot\mathbf{u}^{\epsilon})\mathbf{u}). \tag{3.15}\] We note that \(A^{s}\mathbf{u}\in L^{2}\left[(0,T)\,;H^{-s}(\mathbb{T}^{3})\right]\), then by using the para-differential calculus (see [44]), it follows that \(A^{s}\mathbf{u}\cdot\mathbf{u}\psi\in L^{1}\left[(0,T)\,;W^{-s-b,1}(\mathbb{T}^{3})\right]\) for some small \(b>0\). By examining equation (3.15) we conclude that the right-hand side lies in \(W^{-1,1}\left[(0,T)\,;W^{-1-b,1}(\mathbb{T}^{3})\right]\) and the limit as \(\epsilon\to 0\) is independent of the choice of mollifier \(\varphi_{\epsilon}\). Therefore \(D(\mathbf{u})\coloneqq\lim_{\epsilon\to 0}D_{\epsilon}(\mathbf{u})\) exists as an element in \(W^{-1,1}\left[(0,T)\,;W^{-(1+b),1}(\mathbb{T}^{3})\right]\) and is also independent of the choice of mollifier. Alternatively, this can be seen from the following equation \[\tfrac{1}{2}\int_{0}^{T}\int_{\mathbb{T}^{3}}D_{\epsilon}(\mathbf{u} )(\mathbf{x},\,t)\psi\,dxdt=\int_{0}^{T}\int_{\mathbb{T}^{3}}\left[\mathbf{u}\cdot\mathbf{ u}^{\epsilon}\partial_{t}\psi-\nu(A^{s/2}\mathbf{u})\cdot A^{s/2}(\mathbf{u}^{\epsilon} \psi)-\nu A^{s/2}(\mathbf{u}\psi)(A^{s/2}\mathbf{u}^{\epsilon})\right.\\ +(\mathbf{u}^{\epsilon}P+\mathbf{u}P^{\epsilon})\cdot\nabla\psi-\tfrac{1}{2 }\psi\nabla\cdot(|\mathbf{u}|^{2}\mathbf{u})^{\epsilon}+\tfrac{1}{2}\psi\mathbf{u}\cdot \nabla(|\mathbf{u}|^{2})^{\epsilon}+(\mathbf{u}\cdot\mathbf{u}^{\epsilon})\mathbf{u}\cdot \nabla\psi\right]dxdt\,. \tag{3.16}\] We conclude that in the limit \(\epsilon\to 0\), we obtain the equation of local energy balance \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\left[|\mathbf{u}|^{2}\partial_{t}\psi-2\nu(A^{s/2} \mathbf{u})\cdot A^{s/2}(\mathbf{u}\psi)+2P\mathbf{u}\cdot\nabla\psi-\tfrac{1}{2}D(\mathbf{u}) \psi+|\mathbf{u}|^{2}\mathbf{u}\cdot\nabla\psi\right]dxdt=0\,, \tag{3.17}\] as in (1.11). Proof of Corollary 1.: By Lemma 3 we find that \(\mathbf{u}\in L^{3}\left[\mathbb{T}^{3}\times(0,T)\right]\) if \(s\geq\frac{3}{4}\). Then the result follows by Theorem 3. 
**Remark 5**.: If \(0<s<\frac{3}{4}\), one needs to make the separate regularity assumption \(\mathbf{u}\in L^{3}\left[\mathbb{T}^{3}\times(0,T)\right]\), in order to prove that the Leray-Hopf solution satisfies an equation of local energy balance. We now prove a sufficient condition for the defect term \(D(\mathbf{u})\) to be zero (i.e. for the energy equality to hold), which is similar to the condition from Duchon and Robert [54]. In the next theorem we use Besov spaces \(B_{p,q}^{s}(\mathbb{T}^{3})\), which are defined in Appendix B. **Proposition 6**.: _Let \(\mathbf{u}\in L^{3}\left[\left(0,T\right);B_{3,\infty}^{\alpha}(\mathbb{T}^{3})\right]\) with \(\alpha>\frac{1}{3}\) is a Leray-Hopf weak solution of the fractional Navier-Stokes equations, then the defect term \(D(\mathbf{u})=0\) in \(L^{1}\left[\mathbb{T}^{3}\times(0,T)\right]\). This implies that equation (1.11) is an energy balance ; i.e. the following holds_ \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\left[|\mathbf{u}|^{2}\partial_{t}\psi-2\nu(A^{s /2}\mathbf{u})\cdot A^{s/2}(\mathbf{u}\psi)+2p\mathbf{u}\cdot\nabla\psi+|\mathbf{u}|^{2}\mathbf{u }\cdot\nabla\psi\right]\,dxdt=0\,. \tag{3.18}\] Proof.: We make the following estimate \[\int_{0}^{T}\int_{\mathbb{T}^{3}}|D_{\epsilon}(\mathbf{u})|\,dxdt \leq\int_{0}^{T}\int_{\mathbb{T}^{3}}\int_{\mathbb{R}^{3}}|\nabla \varphi^{\epsilon}(\xi)||\delta\mathbf{u}|^{3}\,d\xi dxdt\] \[\leq\int_{0}^{T}\|\mathbf{u}\|_{B_{3,\infty}^{\alpha}}^{3}\,dt\int_{ \mathbb{R}^{3}}|\nabla\varphi^{\epsilon}(\xi)||\xi|^{3\alpha}\,d\xi\] \[=\int_{0}^{T}\|\mathbf{u}\|_{B_{3,\infty}^{\alpha}}^{3}\,dt\int_{ \mathbb{R}^{3}}|\nabla\varphi(z)||z||\epsilon z|^{3\alpha-1}\,dz\,, \tag{3.19}\] where in the last line we have made the change of variable \(\xi=\epsilon z\). By the dominated convergence theorem it follows that \(D_{\epsilon}(\mathbf{u})\xrightarrow{\epsilon\to 0}0\) in \(L^{1}\left[\mathbb{T}^{3}\times(0,T)\right]\). The results are self-consistent as we can recover the energy equality originally found in [1]. **Proposition 7**.: _Let \(\mathbf{u}\) be a Leray-Hopf solution of the fractional Navier-Stokes equations with \(s>\frac{5}{4}\), then \(D(\mathbf{u})=0\) and the energy equality holds._ Proof.: We first observe that \(W^{\alpha,3}(\mathbb{T}^{3})\subset B_{3,\infty}^{\alpha}(\mathbb{T}^{3})\). By again relying on the Gagliardo-Nirenberg-Sobolev inequality (as stated in [43]) we find that (for \(\alpha+\frac{1}{2}<s\)) \[\|\mathbf{u}\|_{W^{\alpha,3}}\leq\|\mathbf{u}\|_{L^{2}}^{a}\|\mathbf{u}\|_{H^{s}}^{1-a}\,, \tag{3.20}\] where we have the following relation between the exponents \[\frac{1}{3}=\frac{a}{2}+\frac{1-a}{2}-\frac{(1-a)s-\alpha}{3}\qquad\implies \qquad a=\frac{2s-1-2\alpha}{2s}\,. \tag{3.21}\] Therefore we find the following inequality \[\|\mathbf{u}\|_{W^{\alpha,3}}\leq\|\mathbf{u}\|_{L^{2}}^{\frac{2s-1-2\alpha}{2s}}\|\bm {u}\|_{H^{s}}^{\frac{1+2\alpha}{2s}}\,.\] For \(\mathbf{u}\) to be in \(L^{3}\left[\left(0,T\right);W^{\alpha,3}(\mathbb{T}^{3})\right]\), we need \[\frac{1+2\alpha}{2s}\leq\frac{2}{3}\qquad\implies\qquad\frac{3}{4}(1+2 \alpha)\leq s\,.\] Because we can take \(\alpha>\frac{1}{3}\) arbitrarily close to \(\frac{1}{3}\), this gives the condition \[s>\tfrac{5}{4}\,. \tag{3.22}\] Then by Theorem 3 and Proposition 6 the result follows. ## 4 Proof of Theorem 4 The proof of Theorem 4 will be split into several parts. First we will establish a set of a priori estimates. In so doing, we introduce the following notation. 
\[\zeta_{s}=\frac{2s}{3s-1}\,,\qquad\beta=\frac{3}{2(n-s)}\,,\qquad\rho_{1}=1+ \tfrac{1}{2}\beta\zeta_{s}\,,\qquad\rho_{2}=\tfrac{1}{2}\zeta_{s}(1-\beta)\,. \tag{4.1}\] **Proposition 8**.: _Let \(\mathbf{u}\) be a smooth solution of the fractional Navier-Stokes equations with \(s>\frac{5}{6}\). Then the following differential inequalities hold :_ \[\tfrac{1}{2}\frac{d}{dt}H_{n,1} \leq-\nu_{s}\zeta_{s}^{-1}H_{n+s,1}+c_{n,s}\zeta_{s}^{-1}\nu_{s} ^{1-\zeta_{s}}H_{n,1}^{\rho_{1}}H_{s,1}^{\rho_{2}}\,, \tag{4.2}\] \[\tfrac{1}{2}\frac{d}{dt}H_{n,1} \leq-\left(\frac{6s-5}{4n}\right)\nu_{s}H_{n+s,1}+\left(\frac{6s- 5}{4n}\right)c_{n,s}\nu_{s}^{\frac{6s-5-4n}{6s-5}}H_{s,1}^{1+\frac{2}{6s-5}n}\,, \tag{4.3}\] _where for estimate (4.2) \(n>s+\frac{3}{2}\), and for estimate (4.3) \(n\geq 1\)._ Proof.: We define \(w=A^{s/2}\mathbf{u}\) and let \(\beta=\frac{3}{2(n-s)}\). We have \[\|w\|_{\infty}\leq c\,\|A^{(n-s)/2}w\|_{2}^{\beta}\|w\|_{2}^{1-\beta}\,, \tag{4.4}\] which can be rewritten as \[\|A^{s/2}\mathbf{u}\|_{\infty}\leq c\,H_{n,1}^{\frac{1}{2}\beta}H_{s,1}^{\frac{1} {2}(1-\beta)}\,. \tag{4.5}\] By using (4.5) in (2.14) we have \[\tfrac{1}{2}\frac{d}{dt}H_{n,1} \leq-\nu_{s}\left(\frac{3s-1}{2s}\right)H_{n+s,1}+\frac{(3s-1)c_{ n,s}}{2s}\nu_{s}^{-\frac{1-s}{3s-1}}\|A^{s/2}\mathbf{u}\|_{\infty}^{2s/(3s-1)}H_{n,1}\] \[\leq-\nu_{s}\zeta_{s}^{-1}H_{n+s,1}+c_{n,s}\zeta_{s}^{-1}\nu_{s} ^{1-\zeta_{s}}H_{n,1}^{\rho_{1}}H_{s,1}^{\rho_{2}}\,, \tag{4.6}\] having used the the definition \(\zeta_{s}=\frac{2s}{3s-1}\). This proves estimate (4.2). In order to prove the second inequality, we recall the following interpolation inequality \[H_{n,1}\leq H_{s,1}^{\frac{s}{n}}H_{n+s,1}^{\frac{n-s}{n}}\,.\] Inserting this inequality into (4.6), we find that \[\tfrac{1}{2}\frac{d}{dt}H_{n,1}\leq-\nu_{s}\zeta_{s}^{-1}H_{n+s,1}+c_{n,s} \zeta_{s}^{-1}\nu_{s}^{1-\zeta_{s}}H_{n+s,1}^{\frac{\rho_{1}(n-s)}{n}}H_{s,1} ^{\rho_{2}+\rho_{1}\frac{s}{n}}\,.\] then by applying Young's inequality we find (where \(\chi_{n,s}\coloneqq[(1-\rho_{1})n+\rho_{1}s]/n=[s-\tfrac{3}{4}\zeta_{s}]/n\)) \[\tfrac{1}{2}\frac{d}{dt}H_{n,1}\leq-\nu_{s}\zeta_{s}^{-1}H_{n+s,1}+\left(\nu_ {s}\zeta_{s}^{-1}H_{n+s,1}\right)^{\frac{\rho_{1}(n-s)}{n}}c_{n,s}\zeta_{s}^{- 1+\frac{\rho_{1}(n-s)}{n}}\nu_{s}^{1-\zeta_{s}-\frac{\rho_{1}(n-s)}{n}}H_{s,1} ^{\rho_{2}+\rho_{1}\frac{s}{n}}\] \[\leq-\chi_{n,s}\nu_{s}\zeta_{s}^{-1}H_{n+s,1}+\chi_{n,s}\bigg{(}c_{n,s} \zeta_{s}^{-1+\frac{\rho_{1}(n-s)}{n}}\nu_{s}^{1-\zeta_{s}-\frac{\rho_{1}(n-s)}{ n}}H_{s,1}^{2+\rho_{1}\frac{s}{n}}\bigg{)}^{\frac{n}{(1-\rho_{1})n+\rho_{1}s}}\] \[\leq-\frac{s-\frac{3}{4}\zeta_{s}}{n}\nu_{s}\zeta_{s}^{-1}H_{n+s,1 }+\frac{s-\frac{3}{4}\zeta_{s}}{n}\zeta_{s}^{-1}\nu_{s}\bigg{(}c_{n,s}\nu_{s}^{ -\zeta_{s}}H_{s,1}^{\frac{1}{n}(s+\frac{1}{2}\zeta_{s}n-\frac{3}{4}\zeta_{s}) }\bigg{)}^{\frac{n}{s-\frac{3}{4}\zeta_{s}}}\] \[\leq-\left(\frac{6s-5}{4n}\right)\nu_{s}H_{n+s,1}+\left(\frac{6s- 5}{4n}\right)c_{n,s}\nu_{s}^{1-\frac{n\zeta_{s}}{s-\frac{3}{4}\zeta_{s}}}H_{s,1}^{1+\frac{\zeta_{s}}{2(s-\frac{3}{4}\zeta_{s})}n}\] \[\leq-\left(\frac{6s-5}{4n}\right)\nu_{s}H_{n+s,1}+\left(\frac{6s- 5}{4n}\right)c_{n,s}\nu_{s}^{\frac{6s-5-4n}{6s-5}}H_{s,1}^{1+\frac{2}{6s-5}n}\,.\] This completes the proof of estimate (4.3) for the case \(n>s+\frac{3}{2}\). Now we consider the cases \(n=1,2\) separately. 
If \(n=1\) we have \[\tfrac{1}{2}\frac{d}{dt}H_{1,1} \leq-\nu_{s}H_{1+s,1}+c_{n,s}\|\boldsymbol{u}\|_{W^{1,3}}^{3}\] \[\leq-\nu_{s}H_{1+s,1}+c_{n,s}H_{s,1}^{\frac{3}{2}(s-\frac{1}{2}) }H_{1+s,1}^{\frac{3}{2}(\frac{3}{2}-s)}\] \[\leq-\nu_{s}\left(\frac{3}{2}s-\frac{5}{4}\right)H_{1+s,1}+\left( \frac{3}{2}s-\frac{5}{4}\right)c_{n,s}\nu_{s}^{-\frac{3}{s-\frac{5}{6}}}H_{s,1 }^{\frac{s-\frac{1}{2}}{s-\frac{5}{6}}},\] where we have used a Gagliardo-Nirenberg interpolation inequality in the second line, and Young's inequality in the third line. This proves estimate (4.3) in the case \(n=1\). For \(n=2\) we have \[\tfrac{1}{2}\frac{d}{dt}H_{2,1} \leq-\nu_{s}H_{2+s,1}+c_{n,s}\|\nabla\boldsymbol{u}\|_{\infty}H_{ 2,1}\] \[\leq-\nu_{s}H_{2+s,1}+c_{n,s}H_{1+s,1}^{\frac{1}{2}(s-\frac{1}{2} )}H_{2+s,1}^{\frac{1}{2}(\frac{3}{2}-s)}H_{s,1}^{\frac{1}{2}s}H_{2+s,1}^{\frac {1}{2}(2-s)}\] \[\leq-\nu_{s}H_{2+s,1}+c_{n,s}H_{s,1}^{\frac{3}{2}s-\frac{1}{8}}H_ {2+s,1}^{\frac{1}{2}s-\frac{3}{2}s}\] \[\leq-\nu_{s}\frac{6s-5}{8}H_{2+s,1}+c_{n,s}\nu_{s}^{\frac{6s-13} {6s-5}}H_{s,1}^{\frac{6s-1}{6s-5}},\] which concludes the proof of estimate (4.3). **Proposition 9**.: _Let \(\boldsymbol{u}_{0}\in H^{n}(\mathbb{T}^{3})\) for \(n\geq 1\). Then there exists a unique local-in-time solution \(\boldsymbol{u}\in L^{\infty}\left[(0,T)\,;H^{n}(\mathbb{T}^{3})\right]\cap L^{2 }\left[(0,T)\,;H^{n+s}(\mathbb{T}^{3})\right]\) for all \(T<t_{1}(\boldsymbol{u}_{0})\) where the existence time \(t_{1}(\boldsymbol{u}_{0})\) depends on \(\boldsymbol{u}_{0}\) and \(\nu\), but is independent of \(n\)._ Proof.: The case \(n=1\) is shown in Theorem 11 in Appendix A. To prove the case \(n\geq 2\) we introduce the following perturbed problem (for some \(\epsilon>0\)) \[\partial_{t}\boldsymbol{u}_{\epsilon}+\nu A^{s}\boldsymbol{u}_{\epsilon}+ \epsilon A^{5/4}\boldsymbol{u}_{\epsilon}+\boldsymbol{u}_{\epsilon}\cdot\nabla \boldsymbol{u}_{\epsilon}+\nabla P_{\epsilon}=0\,,\] where the subscripts of \(\boldsymbol{u}\) and \(P\) denote a solution of the problem for a given choice of \(\epsilon>0\). By the results in [1] we know that there exists a unique smooth solution \(\boldsymbol{u}_{\epsilon}\) to the problem for any choice \(\epsilon>0\). Moreover, \(\boldsymbol{u}_{\epsilon}\) (which is is smooth) satisfies the following rigorous estimates adapted from Proposition 8 \[\tfrac{1}{2}\frac{d}{dt}H_{n,1} \leq-\nu_{s}\zeta_{s}^{-1}H_{n+s,1}-\epsilon H_{n+5/4,1}+c_{n,s} \zeta_{s}^{-1}\nu_{s}^{1-\zeta_{s}}H_{n,1}^{\rho_{1}}H_{s,1}^{\rho_{2}}, \tag{4.7}\] \[\tfrac{1}{2}\frac{d}{dt}H_{n,1} \leq-\left(\frac{6s-5}{4n}\right)\nu_{s}H_{n+s,1}-\epsilon H_{n+5/4,1}+\left(\frac{6s-5}{4n}\right)c_{n,s}\nu_{s}^{\frac{6s-5-4n}{6s-5}}H_{s,1}^{1 +\frac{2}{6s-5}n}\,. \tag{4.8}\] It follows from these inequalities that there exists a time \(t_{n}(\mathbf{u}_{0})\) such that \(\operatorname{ess}\sup_{t\in[0,T]}H_{n,1}+\int_{0}^{T}H_{n+s,1}dt\) is controlled uniformly in \(\epsilon\) for any \(T<t_{m}(\mathbf{u}_{0})\). Therefore we can extract a weak-\(*\) converging subsequence (which we also call \(\{\mathbf{u}_{\epsilon}\}\)) converging to a solution \(\mathbf{u}\in L^{\infty}\left[(0,\;t_{n}(\mathbf{u}_{0})\,;H^{n}(\mathbb{T}^{3}) \right]\cap L^{2}\left[(0,\;t_{n}(\mathbf{u}_{0}))\,;H^{n+s}(\mathbb{T}^{3}\right]\). It follows that \(\mathbf{u}\) must be the unique local strong solution, whose existence and uniqueness was established in Theorem 11. 
Moreover, \(\mathbf{u}\) satisfies the following estimate \[\operatorname*{ess}\sup_{t\in[0,T]}H_{n,1}+\nu\int_{0}^{T}H_{n+s,1}dt\lesssim \int_{0}^{T}H_{s,1}^{1+\frac{2}{6s-5}n}dt. \tag{4.9}\] In fact, this implies that \(t_{n}(\mathbf{u}_{0})=t_{1}(\mathbf{u}_{0})\) for any \(n\geq 1\). This is because for \(T<t_{1}(\mathbf{u}_{0})\) we have that \(\mathbf{u}\in L^{\infty}\left[(0,T)\,;H^{s}(\mathbb{T}^{3})\right]\). This means that for any \(t<t_{n}(\mathbf{u}_{0})\) we have that \(H_{n,1}\) is uniformly bounded in time up to \(t_{n}(\mathbf{u}_{0})\). Then by the local existence result that has just been proved, we can extend the solution beyond \(t_{n}(\mathbf{u}_{0})\). This process can be reiterated up to \(t_{1}(\mathbf{u}_{0})\). Therefore \(t_{n}(\mathbf{u}_{0})=t_{1}(\mathbf{u}_{0})\). By following the method of Foias, Guillope and Temam [36], we will next show that if \(s>\frac{5}{6}\), the set of singular times of a Leray-Hopf weak solution has zero Lebesgue measure. We first recall the idea of a regular time, the set of regular times \(\mathcal{R}_{n}\) for some \(n\geq s\) for a given Leray-Hopf solution \(\mathbf{u}\) is defined as follows \[\mathcal{R}_{n}\coloneqq\left\{t\in\mathbb{R}_{+}|\,\exists\,\epsilon>0\text{ such that }\mathbf{u}\in C\left[(t-\epsilon,t+\epsilon)\,;H^{n}(\mathbb{T}^{3})\right]\right\}. \tag{4.10}\] We define the set of singular times as follows \[\mathcal{S}_{n}\coloneqq\left\{t\in\mathbb{R}_{+}|\mathbf{u}(\cdot,t)\notin H^{n }(\mathbb{T}^{3})\right\} \tag{4.11}\] and then prove the following result about the Lebesgue measure of the regular times. **Proposition 10**.: _Let \(\mathbf{u}\) be a Leray-Hopf solution and \(n\in\mathbb{N}\), then \(\mathbf{u}\) is \(H^{n}(\mathbb{T}^{3})\) regular for an open subset of \((0,\infty)\), such that \(\mathbb{R}_{+}\backslash\mathcal{R}_{n}\) has zero Lebesgue measure._ Proof.: First we derive an a priori estimate. Suppose \(\mathbf{u}\) is a smooth solution of the fractional Navier-Stokes equations, then by taking the \(L^{2}(\mathbb{T}^{3})\) inner product with \(A^{s}\mathbf{u}\) and using several interpolation inequalities, we find that \[\frac{d}{dt}H_{s,1}+\nu_{s}H_{2s,1} \leq\left\|\left[(\mathbf{u}\cdot\nabla)\mathbf{u}\right]\cdot A^{s}\mathbf{ u}\right\|_{L^{1}}\leq\|\mathbf{u}\|_{L^{\infty}}\|\nabla\mathbf{u}\|_{L^{2}}H_{2s,1}^{1/2}\] \[\leq H_{s,1}^{\frac{4s-3}{4s}}\,H_{2s,1}^{\frac{3-2s}{4s}}\,H_{s,2}^{\frac{2s-1}{2s}}\,H_{2s,1}^{\frac{1-s}{2s}}\,H_{2s,1}^{1/2}=H_{s,1}^{\frac {8s-5}{4s}}\,H_{2s,1}^{\frac{5-2s}{4s}}, \tag{4.12}\] we require \[\frac{5-4s}{4s}<\frac{1}{2}\quad\implies\quad s>\frac{5}{6}\,. \tag{4.13}\] Therefore we are justified in using Young's inequality to derive the following inequality \[\frac{1}{2}\frac{d}{dt}H_{s,1}+\nu_{s}\left(\frac{6s-5}{4s}\right)H_{2s,1}\leq \left(\frac{6s-5}{4s}\right)\nu_{s}^{\frac{2s-5}{6s-5}}H_{s,1}^{\frac{8s-5}{6s -5}}\,. \tag{4.14}\] For \(m\geq 2\) we derive the following a priori estimate \[\tfrac{1}{2}\frac{d}{dt}H_{ms,1}+\nu_{s}H_{(m+1)s,1}\leq\left\|\left[(\mathbf{u} \cdot\nabla)\mathbf{u}\right]\cdot A^{ms}\mathbf{u}\right\|_{L^{1}}\leq H_{(m+1)s,1}^ {1/2}\|(\mathbf{u}\cdot\nabla)\mathbf{u}\|_{\dot{H}^{(m-1)s}}. 
\tag{4.15}\] Then we derive the inequality (by using the para-differential calculus, see Appendix B and [44] for details) \[\|(\mathbf{u}\cdot\nabla)\mathbf{u}\|_{\dot{H}^{(m-1)s}} \leq\|\mathbf{u}\|_{B^{(m-1)s}_{\infty,2}}H^{1/2}_{(m-1)s+1,1}\] \[\leq\|\mathbf{u}\|_{B^{(m-1)s+3/2}_{2,2}}H^{1/2}_{(m-1)s+1,1}\] \[\leq H^{\frac{4s-3}{4s}}_{mss,1}\frac{\bar{4}^{3-2s}_{s}}{(m+1)s,1 }H^{\frac{2s-1}{2s}}_{ms,1}H^{\frac{1-s}{2s}}_{(m+1)s,1}\] \[=H^{\frac{8s-5}{4s}}_{ms,1}H^{\frac{5-4s}{4s}}_{(m+1)s,1}\,. \tag{4.16}\] Therefore we can conclude that \[\tfrac{1}{2}\frac{d}{dt}H_{ms,1}+\nu_{s}H_{(m+1)s,1}\leq H^{\frac{8s-5}{4s}}_{ ms,1}H^{\frac{5-2s}{4s}}_{(m+1)s,1}, \tag{4.17}\] which implies \[\frac{1}{2}\frac{d}{dt}H_{ms,1}+\nu_{s}\left(\frac{6s-5}{4s}\right)H_{(m+1)s,1 }\leq\left(\frac{6s-5}{4s}\right)\nu_{s}^{\frac{2s-5}{6s-5}}H^{\frac{8s-5}{6s-5 }}_{ms,1}\,. \tag{4.18}\] Now we will show that \(\mathcal{R}_{n}\) has full measure by induction. We first observe that by the energy inequality we have \[\sup_{t\in[0,\infty)}H_{0,1}+2\nu_{s}\int_{0}^{\infty}H_{s,1}dt\leq\|\mathbf{u}_{ 0}\|_{2}^{2}\,. \tag{4.19}\] This means that \(H_{s,1}\) must be finite for almost all times and hence \(\mathcal{R}_{s}\) has full measure (as the number of endpoints of disjoint intervals is countable). Now we proceed by induction and suppose we know that the sets \(\mathcal{R}_{ms}\) have full measure for \(1\leq m\leq n\). We consider an \(H^{ns}(\mathbb{T}^{3})\) regularity interval \((t_{l},t_{r})\). By using the apriori estimate (4.18) for \(m=n\) and an adaption of the proof of Proposition 9 there exists a strong solution coinciding with the weak solution on this time interval (by weak-strong uniqueness as stated in Appendix A). For any \([t_{0},t_{1}]\subset(t_{l},t_{r})\) this strong solution satisfies \[\operatorname*{ess\,sup}_{t\in[t_{0},t_{1}]}H_{ns,1}+2\nu_{s}\left(\frac{6s-5} {4s}\right)\int_{t_{0}}^{t_{1}}H_{(n+1)s,1}dt\leq\tfrac{1}{2}H_{ns,1}(t_{0})+ \left(\frac{6s-5}{4s}\right)\nu_{s}^{\frac{2s-5}{6s-5}}\int_{t_{0}}^{t_{1}}H^ {\frac{8s-5}{6s-5}}_{ns,1}dt.\] It follows that \(H_{(n+1)s,1}\) is finite for almost all times in \((t_{l},t_{r})\). As this is true for any regularity interval, we conclude that \(\mathcal{R}_{(n+1)s}\) has full measure. Therefore the result follows by induction. Now we are ready to prove Theorem 4. Proof of Theorem 4.: For any \(n\geq 1\) there is a countable number of regularity intervals for the \(H^{n}(\mathbb{T}^{3})\) norm. In this proof we will work with integrals on the time interval \([0,T]\), which should be split into an (infinite) sum over the regularity intervals, which we will not write down explicitly. Let \(\gamma_{n}>0\) be a (for now) undetermined constant. Then we can make the estimate \[\int_{0}^{T}H^{\gamma_{n}}_{n+s,1}dt\leq\int_{0}^{T}\frac{H^{\gamma_{n}}_{n+s,1}}{(1+H_{s,1})^{\frac{2n\gamma_{n}}{6s-5}}}\left(1+H_{s,1}\right)^{\frac{2n \gamma_{n}}{6s-5}}dt\] and apply Holder's inequality with exponents \(p=\frac{1}{\gamma_{n}}\) and \(p^{\prime}=\frac{1}{1-\gamma_{n}}\) \[\int_{0}^{T}H^{\gamma_{n}}_{n+s,1}dt\leq\bigg{(}\int_{0}^{T}\frac{H_{n+s,1}}{( 1+H_{s,1})^{\frac{2n}{6s-5}}}dt\bigg{)}^{\gamma_{n}}\bigg{(}\int_{0}^{T}(1+H_{s,1})^{\frac{\gamma_{n}}{1-\gamma_{n}}}\tfrac{2}{6s-5}ndt\bigg{)}^{1-\gamma_{n}}\,.\] We observe that the first integral on the right-hand side is bounded by estimate (4.3). 
In order to be able to invoke the regularity properties of the Leray-Hopf solutions (so as to be able to estimate the second integral on the right-hand side) we require \[\left(\frac{\gamma_{n}}{1-\gamma_{n}}\right)\left(\frac{2n}{6s-5}\right)=1\quad \implies\quad\gamma_{n}=\frac{6s-5}{6s-5+2n}. \tag{4.20}\] This means that \(\boldsymbol{u}\in L^{2\gamma_{n}}\left[\left(0,\infty\right);H^{n+s}(\mathbb{ T}^{3})\right]\). Now we recall the interpolation inequality \[H_{n,1}\leq H_{s,1}^{\frac{s}{n}}H_{n+s,1}^{\frac{n-s}{n}}\,, \tag{4.21}\] to obtain (for some \(\delta_{n,s}\) which will be computed explicitly later on) \[\int_{0}^{T}H_{n,1}^{\delta_{n,s}}dt\leq\int_{0}^{T}H_{s,1}^{\frac{s}{n}\delta _{n,s}}H_{n+s,1}^{\frac{n-s}{n}\delta_{n,s}}dt\leq\bigg{(}\int_{0}^{T}H_{s,1} dt\bigg{)}^{s\delta_{n,s}/n}\bigg{(}\int_{0}^{T}H_{n+s,1}^{\frac{n-s}{n-s \delta_{n,s}}\delta_{n,s}}^{\frac{n-s}{n-s\delta_{n,s}}\delta_{n,s}}dt\bigg{)} ^{(n-s\delta_{n,s})/n}\,.\] In order to use the previous result (in order to estimate the second integral on the right-hand side) we require \[\frac{(n-s)\delta_{n,s}}{n-s\delta_{n,s}}=\gamma_{n}\qquad\implies\qquad\delta _{n,s}=\frac{n\gamma_{n}}{n+s(\gamma_{n}-1)}\,. \tag{4.22}\] The constants \(\delta_{n,s}\) are finally calculated to be \[\delta_{n,s}=\frac{6s-5}{2n+4s-5}\,, \tag{4.23}\] which agrees with the definition in (1.14). Thus we have proved the regularity stated in Theorem 4. ## 5 Summary and concluding remarks The different functional properties of solutions of the three-dimensional fractional Navier-Stokes equations have been considered across five ranges of the exponent \(s\), which are divided by four significant critical points : \(s=\frac{1}{3}\) ; \(s=\frac{3}{4}\) ; \(s=\frac{5}{6}\) and \(s=\frac{5}{4}\). Their existence suggests that solutions undergo a set of phase transitions at these points. Several explanatory remarks are in order. 1. In the range \(0<s<\frac{1}{3}\), the non-uniqueness of Leray-Hopf solutions has already been demonstrated in [48, 49] using convex integration methods. In addition, Bulut, Huynh and Palasek [55] have used these techniques to show the nonuniqueness of weak solutions with epochs of regularity ; i.e. solutions of which the non-smoothness is limited to a set of bounded Hausdorff dimension. In particular, the result in [55] states that there are infinitely many weak solutions of the fractional Navier-Stokes equations for \(s<\frac{1}{3}\) with regularity \(C_{t}^{0}C_{x}^{s}\). These can be chosen to coincide with the local strong solution for a short initial time interval. Our analogue of the Prodi-Serrin regularity criterion (Theorem 2) shows that an initially strong solution with control of the \(L_{t}^{\infty}C_{x}^{s}\) norm for \(s>\frac{1}{3}\) will stay smooth. Therefore a non-uniqueness result of the type in [55] cannot be expected to hold for \(s>\frac{1}{3}\). This suggests that the results from convex integration schemes which construct Holder continuous solutions are sharp with regard to the value of \(s\) (\(s<\frac{1}{3}\)), at least from the epochs of regularity perspective. 2. Our next observation is that three of our four critical points (\(s=\frac{1}{3},\ \frac{5}{6},\ \frac{5}{4}\)) are related in the following sense : the question of what value does \(s\) need to be so that we have strong solutions by making a synthesis of Theorem 2 and the "five-sixths theorem" (Theorem 4)? 
The answer turns out to be \(s=\frac{5}{4}\) (see (1.19)), thereby showing that the critical points each play an interlocking part in a fuller picture. 3. What of the point \(s=\frac{3}{4}\)? We have observed that if \(s\geq\frac{3}{4}\) then Leray-Hopf solutions satisfy an equation of local energy balance (Theorem 3). Moreover, when \(s>\frac{3}{4}\) there exists a suitable weak solution satisfying a partial regularity result, as proved in [30]. An improvement of the latter result was made in [31]. As noted in [31, p. 10], the origin of the exponent \(s=\frac{3}{4}\) comes from the requirement that a weak solution be an \(L^{3}\left[\mathbb{T}^{3}\times(0,T)\right]\) function. This regularity is needed as part of the definition of a suitable weak solution, and in particular for the interpretation of the local energy inequality. As mentioned in Remark 5, the equation of local energy balance can be established for a Leray-Hopf solution that lies in \(L^{3}\left[\mathbb{T}^{3}\times(0,T)\right]\). Similar to the proof of the partial regularity result in [31], this regularity is needed to bound the cubic term \(|\boldsymbol{u}|^{2}\boldsymbol{u}\) in the local energy balance. This degree of regularity only follows from the Leray-Hopf regularity for \(s\geq\frac{3}{4}\), as computed in Lemma 3. Both Theorem 3 together with the partial regularity results from [30, 31] have similar regularity requirements, so it is natural that this imposes the same lower bound on \(s\). Some further discussion on the connection between the equation of local energy balance and the suitability of a weak solution is provided in [35, SS6.2]. 4. We could argue loosely that in the range \(0\leq s<\frac{1}{3}\) the properties of the fractional Navier-Stokes equations correspond more to those of the Euler equations, while in the range \(\frac{3}{4}\leq s<\frac{5}{6}\) they correspond more to the CKN-type suitable weak solutions of the Navier-Stokes equations [10, 29, 30] which satisfy partial regularity results. In the range \(s>\frac{5}{6}\) their behaviour is of the standard Leray-Hopf type associated with \(s=1\) Navier-Stokes equations. Full regularity is only reached at \(s=\frac{5}{4}\). 5. Finally, we wish to make a clarification with respect to the standard Leray-Hopf results expressed in Theorem 4 for the case \(s>\frac{5}{6}\). For the standard (\(s=1\)) three-dimensional Navier-Stokes equations, it has been shown in [40, 41] that there exists an infinite hierarchy of bounded time averages \[\left\langle\|\nabla^{n}\boldsymbol{u}\|_{2m}^{\alpha_{n,m}}\right\rangle_{T}< \infty\,,\] (5.1) where the \(\alpha_{n,m}\) are defined by \[\alpha_{n,m}=\frac{2m}{2m(n+1)-3}\] (5.2) and where \(\left\langle\cdot\right\rangle_{T}\) is a time average up to time \(T\). The \(\alpha_{n,m}\) appear as a direct result of the scaling property of the norms under the invariance properties expressed in (1.7) \[\|\nabla^{n}\boldsymbol{u}\|_{2m}=\lambda^{-1/\alpha_{n,m}}\|\nabla^{{}^{ \prime}n}\boldsymbol{u}^{\prime}\|_{2m}\,.\] (5.3) The question arises whether the result in (5.1) is consistent with (1.15), which says that \[\boldsymbol{u}\in L^{2\delta_{n,s}}\left[(0,T)\,;H^{n}(\mathbb{T}^{3})\right]\,.\] (5.4) Recall that \(\delta_{n,s}\) has been defined in (1.14). 
To address this question we note that the equivalent of \(\alpha_{n,m}\) for the fractional Navier-Stokes equations is \[\alpha_{n,m,s}=\frac{2m}{2m(n+2s-1)-3}\,.\] (5.5) A straightforward application of interpolation inequalities to the result of Theorem 4 shows that the equivalent of (5.1) is \[\left\langle\left\|\nabla^{n}\boldsymbol{u}\right\|_{2m}^{(6s-5)\alpha_{n,m,s}} \right\rangle_{T}<\infty\,. \tag{5.6}\] The \(6s-5\) is a necessary factor to make (5.6) at \(n=s\) and \(m=1\) into \(\left\langle H_{s,1}\right\rangle_{T}\) which, from the energy inequality, is bounded from above. Then we write \[\left[(6s-5)\alpha_{n,m,s}\right]_{m=1}=\frac{6s-5}{2n+4s-5}=\delta_{n,s}\,, \tag{5.7}\] as in (1.15). Thus, we see that Theorem 4 is closely related to the invariance properties of the fractional Navier-Stokes equations. **Acknowledgments :** The authors would like to thank Edriss Titi (Cambridge), Dario Vincenzi (Universite Cote d'Azur) and Samriddhi Sankar Ray (ICTS Bangalore) for discussions. D.W.B. acknowledges support from the Cambridge Trust, the Cantab Capital Institute for Mathematics of Information and the Prince Bernhard Culture fund. The authors would also like to thank the Isaac Newton Institute for support and hospitality during the programme _Mathematical Aspects of Fluid Turbulence : where do we stand?_ in 2022, when work on this paper was undertaken. It was supported by grant number EP/R014604/1. ## Appendix A Appendix : Local well-posedness of the fractional Navier-Stokes equations Here we provide a self-contained proof of the local well-posedness of the fractional Navier-Stokes equations, as well as a weak-strong uniqueness result. These results appear to be absent in the literature : see [47, 49] for proofs of related local well-posedness results. **Theorem 11**.: _Consider the fractional Navier-Stokes equations (1.1) with \(s\) as the power of the fractional Laplacian. We consider three cases:_ * _If_ \(s>\frac{5}{6}\) _and_ \(\boldsymbol{u}_{0}\in H^{1}(\mathbb{T}^{3})\)_, then there exists a unique local strong solution_ \(\boldsymbol{u}\in L^{\infty}\left[(0,T)\,;H^{1}(\mathbb{T}^{3})\right]\cap L^{ 2}\left[(0,T)\,;H^{1+s}\right]\)_._ * _If_ \(\frac{1}{3}<s\leq\frac{5}{6}\) _and_ \(\boldsymbol{u}_{0}\in H^{2}(\mathbb{T}^{3})\)_, then there is a unique local strong solution of the fractional Navier-Stokes equations with regularity_ \(L^{\infty}\left[(0,T)\,;H^{2}(\mathbb{T}^{3})\right]\cap L^{2}\left[(0,T)\,;H ^{2+s}\right]\)_._ * _For_ \(0<s\leq\frac{1}{3}\) _and initial data_ \(\boldsymbol{u}_{0}\in H^{3}(\mathbb{T}^{3})\)_, there exists a unique local strong solution in_ \(L^{\infty}\left[(0,T)\,;H^{3}(\mathbb{T}^{3})\right]\cap L^{2}\left[(0,T)\,;H ^{3+s}\right]\)_._ Proof.: We will not deal with the case \(0<s\leq\frac{1}{3}\), which is given in [49, Theorem 3.4]. In order to prove the other two cases, we first apply the Galerkin projection \(P_{N}\) to the equations \[\partial_{t}\boldsymbol{u}^{N}+\nu A^{s}\boldsymbol{u}^{N}+P_{N}((\boldsymbol {u}^{N}\cdot\nabla)\boldsymbol{u}^{N})=0.\] (A.1) For every finite \(N\), we know that there exists a unique smooth solution \(\boldsymbol{u}^{N}\) to these equations. If \(s>\frac{5}{6}\), the Galerkin approximations will satisfy estimate (4.3) where we take \(n=1\). 
This means that there is a time \(t_{1}(\boldsymbol{u}_{0})\) such that there exists a sub-sequence of \(\{\boldsymbol{u}^{N}\}\) converging weak-* in \(L^{\infty}\left[(0,T)\,;H^{1}(\mathbb{T}^{3})\right]\) and weakly in \(L^{2}\left[(0,T)\,;H^{1+s}(\mathbb{T}^{3})\right]\) to a strong solution \(\boldsymbol{u}\). For the case \(\frac{1}{3}<s\leq\frac{5}{6}\), by performing a standard energy estimate one finds \[\tfrac{i}{2}\frac{d}{dt}\|\mathbf{u}^{N}\|_{H^{2}}^{2}\leq-\nu\|\mathbf{u}^{N}\|_{H^{2+s} }^{2}+c_{n,s}\|\Delta\mathbf{u}^{N}\|_{L^{5/2}}^{2}\|\nabla\mathbf{u}^{N}\|_{L^{5}}\,.\] (A.2) We then recall the following interpolation inequality \[\|\Delta\mathbf{u}^{N}\|_{L^{5/2}}\leq c\|\Delta\mathbf{u}^{N}\|_{L^{2}}^{1-3/(10s)}\| \mathbf{u}^{N}\|_{H^{2+s}}^{3/(10s)}\,.\] (A.3) By using Young's inequality we find \[\frac{1}{2}\frac{d}{dt}\|\mathbf{u}^{N}\|_{H^{2}}^{2} \leq-\nu\|\mathbf{u}^{N}\|_{H^{2+s}}^{2}+c_{n,s}\|\Delta\mathbf{u}^{N}\|_{L ^{2}}^{3-6/(10s)}\|\mathbf{u}^{N}\|_{H^{2+s}}^{6/(10s)}\] (A.4) \[\leq-\frac{20s-6}{20s}\nu\|\mathbf{u}^{N}\|_{H^{2+s}}^{2}+c_{n,s}\nu^ {-6/(20s-6)}\|\mathbf{u}^{N}\|_{H^{2}}^{(15s-3)/(10s-3)}\,.\] (A.5) As previously observed, one can extract a subsequence of the Galerkin sequence which converges to the strong solution. The uniqueness in all the considered ranges of \(s\) can be proved by standard methods. Finally, we would like to remark that this result could also have been proved by adding a hyperviscous term \(\epsilon A^{5/4}\mathbf{u}\) to the equations and then pass to a subsequence of strong solutions in the limit \(\epsilon\to 0\), as demonstrated in Theorem 9. **Remark 12**.: As already noted before, the critical space is \(H^{5/2-2s}(\mathbb{T}^{3})\). We observe that it is possible to adapt the proof of local existence of strong solutions to these spaces, as opposed to the integer Sobolev spaces that were used in Theorem 11. However, this is not needed for our purposes. Now we state and prove a weak-strong uniqueness result for the fractional Navier-Stokes equations, which again seems to be absent from the literature. **Theorem 13**.: _Let \(\mathbf{u}_{S}\) be a strong solution of the fractional Navier-Stokes equations on \([0,T]\) and let \(\mathbf{u}_{W}\) be a Leray-Hopf weak solution on the same time interval with the same initial data \(\mathbf{u}_{0}\). 
Then \(\mathbf{u}_{W}\equiv\mathbf{u}_{S}\) on \([0,T]\)._ Proof.: By using \(\mathbf{u}_{S}\) as a test function in the weak formulation that is obeyed by \(\mathbf{u}_{W}\), we find that \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\left[\mathbf{u}_{W}\partial_{t}\mathbf{ u}_{S}-\nu(A^{s/2}\mathbf{u}_{W})(A^{s/2}\mathbf{u}_{S})+\mathbf{u}_{W}\otimes\mathbf{u}_{W}: \nabla\mathbf{u}_{S}\right]dxdt\] \[=-\int_{\mathbb{T}^{3}}\mathbf{u}_{0}^{2}\,dx+\int_{\mathbb{T}^{3}} \mathbf{u}_{W}(\mathbf{x},T)\mathbf{u}_{S}(\mathbf{x},T)\,dx\,.\] (A.6) Since the strong solution satisfies the equation in an \(L^{2}\)-sense, taking the \(L^{2}(\mathbb{T}^{3})\) inner product with \(\mathbf{u}_{W}\) yields \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\bigg{[}-\mathbf{u}_{W}\partial_{t}\mathbf{u}-\nu(A^{ s/2}\mathbf{u}_{W})(A^{s/2}\mathbf{u}_{S})-\mathbf{u}_{S}\otimes\mathbf{u}_{W}:\nabla\mathbf{u}_{S} \bigg{]}\,dxdt=0\,.\] (A.7) Adding these two equations gives that \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\bigg{[}-2\nu(A^{s/2}\mathbf{u}_{W}) (A^{s/2}\mathbf{u}_{S})-\mathbf{u}_{S}\otimes\mathbf{u}_{W}:\nabla\mathbf{u}_{S}+\mathbf{u}_{W} \otimes\mathbf{u}_{W}:\nabla\mathbf{u}_{S}\bigg{]}\,dxdt\] \[=-\int_{\mathbb{T}^{3}}\left|\mathbf{u}_{0}\right|^{2}dx+\int_{ \mathbb{T}^{3}}\mathbf{u}_{W}(\mathbf{x},T)\mathbf{u}_{S}(\mathbf{x},T)\,dx\,.\] (A.8) We now introduce the notation \(\mathbf{v}\coloneqq\mathbf{u}_{W}-\mathbf{u}_{S}\), which allows to rewrite the equation above as follows \[\int_{0}^{T}\int_{\mathbb{T}^{3}}\bigg{[}-\nu|A^{s/2}\mathbf{u}_{W}|^{2} -\nu|A^{s/2}\mathbf{u}_{S}|^{2}+\nu|A^{s/2}\mathbf{v}|^{2}+\mathbf{v}\otimes\mathbf{v}:\nabla\bm {u}_{S}\bigg{]}\,dxdt\] \[=-\int_{\mathbb{T}^{3}}|\mathbf{u}_{0}|^{2}\,dx+\frac{1}{2}\int_{ \mathbb{T}^{3}}\left[|\mathbf{u}_{W}(\mathbf{x},T)|^{2}+|\mathbf{u}_{S}(\mathbf{x},T)|^{2}-| \mathbf{v}(\mathbf{x},T)|^{2}\right]dx\,.\] (A.9) We can rearrange this expression as follows \[\tfrac{1}{2}\int_{\mathbb{T}^{3}}\lvert\mathbf{v}(\mathbf{x},T)\rvert^{2 }\,dx+\int_{0}^{T}\int_{\mathbb{T}^{3}}\left[\nu|A^{s/2}\mathbf{v}|^{2}+\mathbf{v} \otimes\mathbf{v}:\nabla\mathbf{u}_{S}\right]dxdt=\tfrac{1}{2}\int_{\mathbb{T}^{3}} \left[|\mathbf{u}_{W}(\mathbf{x},T)|^{2}-|\mathbf{u}_{0}|^{2}\right]dx\] \[+\nu\int_{0}^{T}\int_{\mathbb{T}^{3}}\lvert A^{s/2}\mathbf{u}_{W} \rvert^{2}\,dxdt+\tfrac{1}{2}\int_{\mathbb{T}^{3}}\left[|\mathbf{u}_{S}(\mathbf{x},T)| ^{2}-|\mathbf{u}_{0}|^{2}\right]dx+\nu\int_{0}^{T}\int_{\mathbb{T}^{3}}\lvert A^{s /2}\mathbf{u}_{S}\rvert^{2}\,dxdt\leq 0\,,\] (A.10) where the inequality follows from the energy equality for strong solutions and the energy inequality for Leray-Hopf weak solutions. 
We then obtain the following estimate \[\tfrac{1}{2}\int_{\mathbb{T}^{3}}\lvert\mathbf{v}(\mathbf{x},T)\rvert^{2} \,dx+\int_{0}^{T}\int_{\mathbb{T}^{3}}\nu|A^{s/2}\mathbf{v}|^{2}\,dxdt\leq-\int_{0 }^{T}\int_{\mathbb{T}^{3}}\mathbf{v}\otimes\mathbf{v}:\nabla\mathbf{u}_{S}\,dxdt\] \[=-\int_{0}^{T}\int_{\mathbb{T}^{3}}\mathbf{v}\otimes\mathbf{v}:\nabla\mathbf{ u}_{S}\,dxdt\leq\int_{0}^{T}\lVert\nabla\mathbf{u}_{S}\rVert_{L^{3/s}}\lVert\mathbf{v}( \cdot,t)\rVert_{L^{6/(3-2s)}}\lVert\mathbf{v}(\cdot,t)\rVert_{L^{2}}\,dt\] \[\leq\int_{0}^{T}\lVert\mathbf{u}_{S}\rVert_{H^{5/2-s}}\lVert\mathbf{v}( \cdot,t)\rVert_{\dot{H}^{s}}\lVert\mathbf{v}(\cdot,t)\rVert_{L^{2}}\,dt\,.\] (A.11) Then by applying Young's inequality we find that \[\tfrac{1}{2}\int_{\mathbb{T}^{3}}\lvert\mathbf{v}(\mathbf{x},T)\rvert^{2}\,dx+\tfrac {1}{2}\nu\int_{0}^{T}\int_{\mathbb{T}^{3}}\lvert A^{s/2}\mathbf{v}\rvert^{2}\,dxdt \leq\tfrac{1}{2}\nu^{-1}\int_{0}^{T}\lVert\mathbf{u}_{S}\rVert_{H^{5/2-s}}^{2} \lVert\mathbf{v}(\cdot,t)\rVert_{L^{2}}^{2}\,dt\,.\] (A.12) Since \(\mathbf{v}(\cdot,0)=0\), it follows from Gronwall's inequality that \(\mathbf{v}\equiv 0\) on \(\mathbb{T}^{3}\times[0,T]\). ## Appendix B Appendix : Properties of the fractional Laplacian In this appendix we recall some basic properties of the fractional Laplacian. By using the Fourier representation (1.2) as well as the Plancherel identity, one can prove the following identity (for \(f,g\in H^{2s}(\mathbb{T}^{3})\) \[\int_{\mathbb{T}^{3}}A^{s}fg\,dx=\int_{\mathbb{T}^{3}}fA^{s}g\,dx.\] (B.1) We also observe that for any \(s\in\mathbb{R}\) and \(f\in H^{s}(\mathbb{T}^{3})\) it holds that \[\lVert f\rVert_{\dot{H}^{s}}=\lVert A^{s}f\rVert_{2},\] (B.2) which can be easily seen from the Fourier representation. In the case \(p\neq 2\), we have to rely on Littlewood-Paley theory (see [44] for more details). First we introduce a dyadic partition of unity \(\{\rho_{j}\}_{j=1}^{\infty}\) which is given by \[\rho_{0}(x)=\rho(x),\quad\rho_{j}(x)=\rho(2^{-j}x)\quad\text{for}\quad j=1,2, \ldots\,,\] (B.3) with \(\rho_{-1}(x)=1-\sum_{j=0}^{\infty}\rho_{j}(x)\). Then for \(f\in\mathcal{S}^{\prime}(\mathbb{T}^{3})\) we can define the Littlewood-Paley blocks as follows (for \(\xi\in\mathbb{Z}^{3}\)) \[\widehat{\Delta_{j}f}(\xi)=\rho_{j}(\xi)\widehat{f}(\xi),\quad j=-1,0,\dots.\] (B.4) Then for \(q<\infty\) we introduce the Besov norm as follows \[\|f\|_{B^{s}_{p,q}}\coloneqq\|\Delta_{-1}f\|_{L^{p}}+\bigg{(}\sum_{j=0}^{ \infty}2^{sjq}\|\Delta_{j}f\|_{L^{p}}^{q}\bigg{)}^{1/q},\] (B.5) and if \(q=\infty\) the norm is given by \[\|f\|_{B^{s}_{p,\infty}}\coloneqq\|\Delta_{-1}f\|_{L^{p}}+\sup_{j\geq 0}\big{(} 2^{sj}\|\Delta_{j}f\|_{L^{p}}\big{)}.\] (B.6) In [45, Equation A.3] the following inequality is stated (where \(1\leq p\leq\infty\), \(j\geq 0\) and \(s\in\mathbb{R}\)) \[\|\Delta_{j}A^{s}f\|_{L^{p}}\sim 2^{js}\|\Delta_{j}f\|_{L^{p}}.\] (B.7) Therefore if \(\int_{\mathbb{T}^{3}}f\,dx=0\), we know that \(\Delta_{-1}f=0\) (by a suitable choice of a dyadic partition of unity). This means that for mean-free functions \(f\in B^{t}_{p,q}(\mathbb{T}^{3})\) by estimate (B.7) it follows that (for \(1\leq p,q\leq\infty\) and \(s,t\in\mathbb{R}\)) \[\|A^{s}f\|_{B^{t-s}_{p,q}}\sim\|f\|_{B^{t}_{p,q}}.\] (B.8) Now we recall that \(W^{s,p}(\mathbb{T}^{3})=B^{s}_{p,p}(\mathbb{T}^{3})\) (see [46, Equation 3.5]) for \(s\in\mathbb{R}\backslash\mathbb{Z}\) and \(p\in[1,\infty]\), therefore the estimate (B.8) also holds for (fractional) Sobolev spaces if \(t-s,s\notin\mathbb{Z}\). 
Finally, we state a few inequalities from para-differential calculus (the full details of which can be found in [44]). Let \(1\leq p,p_{1},p_{2},q,q_{1},q_{2}\leq\infty\) and \(\alpha>0>\beta\) such that \[\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}.\] Then the following inequalities hold: * If \(\alpha+\beta=0\), \(1=\frac{1}{q_{1}}+\frac{1}{q_{2}}\), \(f\in B^{\alpha}_{p_{1},q_{1}}(\mathbb{T}^{3})\) and \(g\in B^{\beta}_{p_{2},q_{2}}(\mathbb{T}^{3})\), then \[\|fg\|_{B^{\beta}_{p,q_{2}}}\lesssim\|f\|_{B^{\alpha}_{p_{1},q_{1}}}\|g\|_{B^ {\beta}_{p_{2},q_{2}}}.\] (B.9) * If \(f\in B^{\alpha}_{p_{1},q}(\mathbb{T}^{3})\) and \(g\in B^{\alpha}_{p_{2},q}(\mathbb{T}^{3})\), then \[\|fg\|_{B^{\alpha}_{p,q}}\lesssim\|f\|_{B^{\alpha}_{p_{1},q}}\|g\|_{B^{\alpha }_{p_{2},q}}.\] (B.10)
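For orientation (this is a reading aid of ours; the full statements are in [44]), both product estimates can be traced back to Bony's paraproduct decomposition \[fg=T_{f}g+T_{g}f+R(f,g),\qquad T_{f}g\coloneqq\sum_{j}S_{j-1}f\,\Delta_{j}g,\qquad R(f,g)\coloneqq\sum_{|j-j^{\prime}|\leq 1}\Delta_{j}f\,\Delta_{j^{\prime}}g,\] where \(S_{j}f\coloneqq\sum_{i\leq j}\Delta_{i}f\) is the low-frequency cut-off. The paraproducts are always bounded and inherit the regularity of the high-frequency factor (shifted by \(\beta\) when the low-frequency factor lies in \(B^{\beta}\) with \(\beta<0\)), whereas the remainder \(R(f,g)\) is only controlled when \(\alpha+\beta>0\), or in the borderline case \(\alpha+\beta=0\) with dual third indices, which is exactly the dichotomy between (B.10) and (B.9).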
2305.03668
A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding
Webpages have been a rich, scalable resource for vision-language and language-only tasks. Yet only pieces of webpages are kept in existing datasets: image-caption pairs, long text articles, or raw HTML, never all in one place. Webpage tasks have consequently received little attention, and structured image-text data has been left underused. To study multimodal webpage understanding, we introduce the Wikipedia Webpage suite (WikiWeb2M) containing 2M pages with all of the associated image, text, and structure data. We verify its utility on three generative tasks: page description generation, section summarization, and contextual image captioning. We design a novel attention mechanism, Prefix Global, which selects the most relevant image and text content as global tokens to attend to the rest of the webpage for context. By using page structure to separate such tokens, it performs better than full attention with lower computational complexity. Extensive experiments show that the new data in WikiWeb2M improves task performance compared to prior work.
Andrea Burns, Krishna Srinivasan, Joshua Ainslie, Geoff Brown, Bryan A. Plummer, Kate Saenko, Jianmo Ni, Mandy Guo
2023-05-05T16:38:05Z
http://arxiv.org/abs/2305.03668v2
# A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding

###### Abstract

Webpages have been a rich, scalable resource for vision-language and language-only tasks. Yet only pieces of webpages are kept: image-caption pairs, long text articles, or raw HTML, never all in one place. Webpage tasks have consequently received little attention and structured image-text data has been left underused. To study multimodal webpage understanding, we introduce the Wikipedia Webpage suite (WikiWeb2M) of 2M pages1. We verify its utility on three generative tasks: page description generation, section summarization, and contextual image captioning. We design a novel attention mechanism Prefix Global, which selects the most relevant image and text content as global tokens to attend to the rest of the webpage for context. By using page structure to separate such tokens, it performs better than full attention with lower computational complexity. Experiments show that the new annotations from WikiWeb2M improve task performance compared to data from prior work. We also include ablations on sequence length, input features, and model size.

Footnote 1: Data can be downloaded at [https://github.com/google-research-datasets/wit/blob/main/wikiweb2m.md](https://github.com/google-research-datasets/wit/blob/main/wikiweb2m.md).

## 1 Introduction

Webpages are a source of multimodal, structured content that has been used for both pretraining and finetuning purposes. Large scale noisy text or multimodal datasets scraped from the web have been used to pretrain large language or contrastive models (Raffel et al., 2020; Jia et al., 2021; Radford et al., 2021; Aghajanyan et al., 2022). Downstream tasks built from webpages have included instruction following, image captioning, news captioning, image-sentence retrieval, and image-article retrieval (Shi et al., 2015; Li et al., 2020; Gur et al., 2022; Sharma et al., 2018; Biten et al., 2019; Liu et al., 2020; Srinivasan et al., 2021; Tan et al., 2022). Yet limited prior work has studied tasks to evaluate multimodal webpage understanding itself. Many classification and generation problems can be studied with webpages: taxonomic webpage classification, webpage retrieval, web image captioning, and page summarization. However, to date there is no open source, multimodal dataset that retains all webpage content. _E.g._, the Wikipedia Image Text dataset (Srinivasan et al., 2021) does not retain HTML structure and misses out on text sections, as shown in Table 1. We thus propose the new Wikipedia Webpage (WikiWeb2M) dataset of over 2M pages, which unifies webpage content to include all text, images, and their location (_e.g._, section index) in a single example. In Figure 1, we show how one webpage sample can be used to study problems of page description generation, section summarization, and contextual image captioning. These tasks make up our WikiWeb2M benchmark suite, and we design them to capture webpage understanding at three degrees of granularity.

Figure 1: Tasks we study with WikiWeb2M. Our dataset provides a unified webpage sample that contains all text, image, and structure, enabling new tasks like page description generation. For image captioning and section summarization, remaining page text and images provide useful context, aiding task performance.

For page description generation, the goal is to generate a global description. Then at an intermediate section level, the goal of section summarization is to generate a sentence that captures information about the contents of one section.
Lastly at a local level, contextual image captioning has the goal of generating a caption for one image. WikiWeb2M's tasks will allow for general study of multimodal content understanding with many-to-many text and image relationships and can also specifically improve interaction with web content. For example, a webpage description may provide a user who is blind more agency by allowing them to preview content before listening to the entire body of image and text with a screen reader (Vtyurina et al., 2019). In addition to contextual captioning and section summarization aiding assistive technology, these tasks can be used for modern content generation, as there is growing interest in providing multimodal web snippets (Nkemelu et al., 2023). While we curate a new dataset with Wikipedia, we note it is just one of many domains that could be used to study multimodal webpage understanding. Instructional websites, news articles, recipes, blogs, and more have bodies of text and images interleaved by layout or HTML structure. We utilize the T5 (Raffel et al., 2020) framework for modeling WikiWeb2M tasks. While the full attention in T5 is performant, it results in a quadratic computational complexity with respect to the input sequence length and does not make use of the structured webpage content. We define a new mixture of local-global attention, Prefix Global, which uses our structured data to select the most salient text and images as global tokens in the prefix of our input sequence. Prefix Global is ultimately more efficient, meaning longer input sequences can be used to reach better task performance. Our results can be beneficial to the many structured image-text domains outside of webpages such as mobile apps, figures, posters, infographics, and documents. We include ablations across multiple axes: the pretrained checkpoint we initialize from, the input sequence length, the feature inputs, and the attention mechanism. We importantly find that images improve performance for all tasks, while prior work on contextual image captioning claimed otherwise (Nguyen et al., 2022). We are also able to improve task performance now that we have access to the entire page's content. Still, there is plenty of room to improve upon our benchmark suite. We summarize our contributions below: * A new open-sourced multimodal webpage dataset of 2M pages curated from English Wikipedia articles. Each sample contains all text, images, and structure present per page. * A suite of multimodal generation webpage tasks that reflect webpage understanding at three granularities: page description, section summarization, contextual image captioning. * A new attention mechanism, Prefix Global, which is a mixture of local-global attention that separates a prefix of global tokens. By defining more salient content from structured pages, it can outperform full attention while requiring fewer attention computations. * Ablations on model size, attention, input features and sequence length. Images can help all tasks, notably by over 15% on contextual captioning, and page context boosts average performance by over 4% and 3% for section summarization and captioning, respectively. ## 2 The WikiWeb2M Dataset The Wikipedia Webpage (WikiWeb2M) dataset is built by starting with the Wikipedia Image Text (WIT; Srinivasan et al., 2021) English pages2. 
We scrape webpage samples and retain all text, image, and structure available, providing more contextual data which can be used to model existing tasks like contextual image captioning, as well as enabling new webpage tasks like page description.

| Dataset | Structural Sections | Heading Sections | Text Sections | Image Sections | Both (Text+Image) | Total Sections | Unique Images | Total Images |
|---|---:|---:|---:|---:|---:|---:|---:|---:|
| WIT (En) | - | - | - | 199,872 | 2,847,929 | 3,047,801 | 3,660,211 | 4,955,835 |
| WikiWeb2M | 731,394 | 686,376 | 6,817,950 | 221,523 | 3,236,254 | 11,693,497 | 4,438,642 | 5,940,431 |

Table 1: WikiWeb2M versus WIT (Srinivasan et al., 2021). We report counts over all splits; train, validation, and test are reported separately in Appendix A.

WikiWeb2M and WIT (English subset) come from the same webpages. We start with WIT URLs to create a high quality multimodal webpage dataset that has already gone through extensive content and topic filtering. Each webpage sample includes the page URL, page title, section titles, section text, images and their captions, and indices for each section, their parent section, their children sections, and more. This differs from WIT, which defined individual samples as image-caption pairs with metadata (_e.g._, originating section title). Appendix A.2 includes a comparison of fields available in WikiWeb2M versus WIT. In Table 1, we report the number of sections and images compared to the English subset of WIT. We add nearly 1M total images to the dataset by keeping the images on a webpage regardless of whether they have image captions available. We provide section counts by type: structural, heading, text only, image only, and both text and image. Structural and heading sections do not contain immediate text. The former has subsections. For heading sections, a section title was available, while the content linked to a different article, was empty, or only had tables. A significant 6.8M text sections are in WikiWeb2M, none of which were available in WIT. We make a random 90/5/5 split and show the number of pages, sections, and images per split after additional processing in Table 2. We only retain sections if they are content sections (_e.g._, not the "See Also" section). For image quality control, we keep JPEG and PNG image types3.

Footnote 3: We release image URLs, where they can be fetched.

### The WikiWeb2M Tasks

We apply WikiWeb2M to three tasks which reflect different granularities of webpage understanding: the page, section, or element level. We describe the tasks below and include dataset counts in Table 3. Data processing steps to take WikiWeb2M to each of the task datasets are discussed in Appendix A.1.

**Page Description Generation.** In the task of page description generation, the goal is to generate a description of a page given the rest of the webpage's image, text, and structure. We use the Wikipedia-provided page description (not collecting annotations) and generate summaries from multimodal inputs, which differs from existing text-only article summarization work; this matters when we want to create a multimodal snippet from a webpage.

**Section Summarization.** The goal of section summarization is to generate a single sentence that highlights a particular section's content. The summary is generated given all images and (non-summary) text present in the target and context sections, see Figure 2.
Following the leading sentence bias, we use the first sentence of a section as a pseudo summary (which is removed from the model inputs). We also found that a majority of human annotators deemed the first sentence as a reasonable summary; these findings are later discussed in Appendix E.

**Contextual Image Captioning.** Nguyen et al. (2022) proposed contextual image captioning with WIT as the task of captioning an image along with its webpage context. Target images are those available in WIT to ensure they have quality captions that can be reconstructed. A Wikipedia image has three caption types (not all are always available): the alt-text, reference, and attribution descriptions. Alt-text serves as a text description for accessibility purposes, the reference description comes directly below the image in the webpage, and the attribution description contains captions unique to the image across all pages it appears in. Prior work only input the image, attribution description and associated section text because that was all that was available.

## 3 Prefix Global Attention

When structured image-text data is available, we need not treat all images and text equally. With webpages, it may be more sensible to isolate certain parts as more important. _E.g._, in contextual image captioning, the model should focus on the target image and section it came from, while using the rest of the page as context. We can now isolate these inputs with the WikiWeb2M dataset because we have structural metadata signaling where each image and text element were located, as opposed to a bag of images and a single long body of text.

| WikiWeb2M | Train | Val | Test |
|---|---:|---:|---:|
| # Pages | 1,803,225 | 100,475 | 100,833 |
| # Sections | 10,519,294 | 585,651 | 588,552 |
| # Total Images | 5,340,708 | 299,057 | 300,666 |

Table 2: Breakdown of the number of pages, sections, and images contained in each WikiWeb2M split.

| Task | Train | Val | Test |
|---|---:|---:|---:|
| Page Desc. | 1,435,263 | 80,103 | 80,339 |
| Section Summ. | 3,082,031 | 172,984 | 173,591 |
| Image Caption. | 2,222,814 | 124,703 | 124,188 |

Table 3: Number of samples for page description generation, section summarization, and image captioning after additional filtering.

We thus propose Prefix Global, a local-global attention, to capitalize on this intuition. A mixture of local and global attention weights provides the means to designate certain inputs as "global" tokens which attend to the rest of the input sequence, while others only have local attention to \(r\) tokens to the left and \(r\) tokens to the right. Prefix Global uses a fixed set of global tokens. Specifically, it takes a prefix of the input sequence. This is inspired by the leading sentence bias Kedzie et al. (2018); Xing et al. (2021); Zhu et al. (2021), which shows that earlier content in a body of text is often of greater importance. We define different prefixes for each task in Section 4. While we use section structure, Prefix Global can use structure from other sources: HTML/the Document Object Model, rendered webpage regions, PDF document layouts, or simply knowing a priori what task inputs are most salient. Not only is it desirable to prioritize more salient image and text content from the input data, but it can also reduce the computational complexity of the attention mechanism.
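Before turning to the attention mechanism itself, it may help to fix what one unified sample looks like. The sketch below is illustrative only: the field names are ours and simply mirror the contents listed in Section 2 (page URL, title, per-section text, images, captions, and parent/child indices); the released data uses its own keys.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Section:
    index: int                   # position of the section on the page
    parent_index: Optional[int]  # index of the parent section, if nested
    child_indices: List[int]     # indices of any subsections
    title: str                   # section heading (may be empty)
    text: str                    # body text (empty for structural sections)
    image_urls: List[str]        # images that appear in this section
    image_captions: List[str]    # one (possibly empty) caption per image

@dataclass
class WebpageSample:
    url: str
    page_title: str
    page_description: str        # target for page description generation
    sections: List[Section] = field(default_factory=list)
```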
While full attention is performant by allowing all input tokens to attend to each other, it results in a quadratic computational complexity (\(O(l^{2})\) for an input sequence of length \(l\)). Guo et al. (2022) introduced LongT5 as an adaptation of the T5 model with Transient Global (TGlobal) attention to balance the efficiency of local attention, which allows for much longer input sequences to be held in memory, with the higher performance of full attention. Figure 3 illustrates both our Prefix Global and prior work's TGlobal local-global attention schemes, where in each the \(i\)th row represents what the \(i\)th token can attend to. In addition to the local attention used in TGlobal, "transient" global tokens are defined on the fly per layer which can additionally be attended to by all other inputs in the original sequence. TGlobal defines \(k\) global tokens as the average of every \(16\) input tokens, as shown in Figure 3 (left). TGlobal resulted in similar or better performance than full attention with much longer sequence lengths, while having a complexity of \(O(l(r+k))\) for a variety of text summarization tasks (where \(k=\frac{l}{16}\)). Prefix Global has a computational complexity of \(O((l-k)\cdot r+k\cdot l)\) for \(k\) global tokens, similar to local-global attention schemes ETC Ainslie et al. (2020), Longformer Beltagy et al. (2020), and BigBird Zaheer et al. (2020). However, Prefix Global does not require any special pretraining and instead finetunes directly from full attention checkpoints (T5 in our case). This is distinct from LongT5, which also required pretraining with TGlobal attention to be effective. Thus, as we show in Section 5 with Prefix Global's higher performance, it is both a more flexible and more performant attention mechanism. We are also the first to demonstrate using local-global attention with multimodal inputs, and further show Prefix Global's ability to be performant in multimodal finetuning from a text-only checkpoint.

## 4 Experiments

We benchmark with the T5 Raffel et al. (2020) encoder-decoder framework. The T5 model takes a sequence of image and text inputs and we use a frozen ViT Dosovitskiy et al. (2021) to embed images. We finetune from a T5 checkpoint pretrained with full attention on the text-only C4 dataset, and a ViT pretrained either on ImageNet Deng et al. (2009) or JFT Hinton et al. (2015). We compare three models defined by different encoder attention schemes: the original T5 which uses full attention, LongT5 with TGlobal by Guo et al. (2022) (checkpoints are publicly available), and our Prefix Global. We run ablations on each attention at different sequence lengths. Then we ablate input features and model size (B16 or L16 T5 + ViT4) for Prefix Global with 1k input length.

Figure 2: WikiWeb2M section summarization with Prefix Global. The global tokens (in green) are the first 512 tokens of the target section to be summarized: the first \(x\) images of the section, the section index, title, body text, and captions. Then the remaining sections (in blue) from the webpage are input; these have local attention, while the prefix global tokens attend to every other token. We decode the summary (in orange) given the page inputs.

Footnote 4: The base/large T5 model used 220M/770M parameters.

We finetune each model for \(2^{18}\) steps as in Raffel et al. (2020) with a 128 batch size. Each model is trained on 16 TPUs, with the base model taking between 24-32 hours to run5 (varies by task) when the input sequence length is 1024.
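As a concrete (unofficial) sketch of the scheme above, the two helpers below build a boolean Prefix Global mask and count attention scores per head. The accounting convention — each local token is charged a \(2r\)-token window plus the \(k\) prefix tokens — is our assumption; we chose it because it exactly reproduces the FLOP counts reported later in Table 4.

```python
import numpy as np

def prefix_global_mask(seq_len: int, k: int, r: int) -> np.ndarray:
    """mask[i, j] == True iff token i may attend to token j: the first k
    tokens form the global prefix (full rows/columns); the rest attend to
    a window of r tokens on each side, plus to the k prefix tokens."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    mask[:k, :] = True   # prefix tokens attend everywhere
    mask[:, :k] = True   # every token attends to the prefix
    idx = np.arange(seq_len)
    local = np.abs(idx[:, None] - idx[None, :]) <= r
    mask[k:, :] |= local[k:, :]  # local neighborhood for non-prefix tokens
    return mask

def approx_flops(seq_len: int, attention: str, k: int = 512, r: int = 127) -> int:
    # attention-score count per head, ignoring heads and embedding dim
    if attention == "full":
        return seq_len * seq_len
    if attention == "tglobal":
        k = seq_len // 16  # TGlobal aggregates every 16 tokens into one global
        return seq_len * (2 * r + k)
    if attention == "prefix":
        return (seq_len - k) * (2 * r + k) + k * seq_len
    raise ValueError(attention)

assert approx_flops(1024, "prefix") == 916_480    # matches Table 4
assert approx_flops(2048, "tglobal") == 782_336   # matches Table 4
```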
We do not perform hyperparameter tuning: all models use the Adafactor optimizer with a constant learning rate of 1e-3, an Adafactor offset of 1M to account for pretraining steps, and loss normalizing factor \(2^{18}\).

Footnote 5: Example packing can further improve model efficiency.

For Prefix Global experiments, the default prefix size \(k\) is 512. For both Transient Global and Prefix Global, the local neighborhood \(r\) is set to 127, as done in LongT5 Guo et al. (2022). We include additional ablations for when Prefix Global and TGlobal have the same number of global tokens to strictly compare how they define global tokens. Lastly, we ablate if the target, description, or context sections are input and if sections only from WIT vs. WikiWeb2M are input. We report BLEU-4 Papineni et al. (2002), ROUGE-L Lin (2004), and CIDEr Vedantam et al. (2015) metrics from a single run. Appendix B has qualitative examples.

### Defining Prefix Global Attention Inputs

We ablated the number of images that contribute to each task's prefix and include ablations in Appendix C.3. We use the 90th percentile value (6 images) for page description and one image input for section summarization and image captioning.

**Page Description.** We input the images, page URL and title and all sections (index, title, text, captions) in their page order. In addition to the page's images, URL, and title participating in the prefix, we also include all section titles and first sentences (up to 512 tokens). This outperformed keeping the section titles and text concatenated in order, which can be found in Appendix C.1.

**Section Summarization.** The target section to be summarized is prepended to the input sequence. Thus the target section's index, title, non-summary text, images, and captions contribute to the global tokens of Prefix Global. Then the page URL and title and the rest of the sections follow in order. Figure 2 illustrates how an input sequence is defined with Prefix Global for section summarization.

**Contextual Image Captioning.** Similar to section summarization, the target image and its originating section's content contribute to the prefix tokens (the index, title, text, and non-target captions), followed by the URL, page title, and context sections.

Figure 3: Local-global attention. We propose the new Prefix Global Attention which additionally has global to global and global to local attention compared to Transient Global Guo et al. (2022). We define global tokens as the prefix of the input. On the left we show TGlobal which only has local to local and local to global attention.

## 5 Results

### Attention and Sequence Length

**Performance Comparison** We begin by evaluating performance for each task (page description, section summarization, and contextual image captioning) when training T5 encoders with different attention types and input sequence lengths in Figure 4. Prefix Global always performs better than TGlobal. We include two Prefix Global settings: a fixed Prefix\({}_{512}\) which is our default setting of setting 512 input tokens to the prefix (used for all other experiments), as well as a Prefix\({}_{TGlobal}\) which assigns the same number of global tokens as TGlobal. Prefix\({}_{TGlobal}\) uses \(\frac{l}{16}\) globals, where \(l\) is the input sequence length and \(16\) tokens per block are aggregated. This allows us to compare the way global tokens are defined in both attention mechanisms.
Despite TGlobal defining _additional_ side inputs as global tokens, it consistently underperforms Prefix Global even with the same number of globals. This confirms that defining a special prefix from the input sequence is better than taking aggregates over the full sequence. In Appendix C.1, we also show that just using the prefix of the in-order page inputs for page description (as opposed to pulling out the section titles and first sentences) performs better than TGlobal. These results collectively show Prefix Global to be preferable to TGlobal. One key takeaway is that separating out more relevant inputs (via structure or other known biases like leading sentence bias) is a good idea.

Full attention and Prefix Global generally have higher performance at longer sequence lengths. It is impressive that Prefix Global scales or maintains performance with larger sequences even when its number of globals is fixed to 512 (_i.e_., the number of globals is not scaled with respect to input length). On the other hand, while TGlobal scales the number of globals to sequence length, its performance does not consistently scale. _E.g_., performance plateaus or even drops at 4k input sequence length for page description and section summarization, respectively. This may be because TGlobal defines globals as aggregates over the full input sequence, which could introduce more noise or less semantically rich text at longer sequence lengths. One anomalous result occurs for image captioning: Prefix Global with \(256\) globals (Prefix\({}_{TGlobal}\) at 4k input length) outperforms the \(512\) variant; we did not exhaustively ablate the number of global tokens, so further performance gains could be reached by optimizing the number of globals per task.

Prefix Global outperforms full attention at all sequence lengths on image captioning, which may be due to the global tokens including the target image and most relevant section content. This should ease the process of learning which tokens are most relevant by allowing full attention between the first \(k\) target section tokens with the rest of the input sequence, while contextual information from other sections has local attention. For section summarization and page description, Prefix Global outperforms full attention at the 4k sequence length, while full attention can no longer fit in memory. Given that the entire page's content can be useful for generating a page level description, it is sensible that full attention may perform better for smaller sequence lengths as it allows for attention between all input tokens.

Figure 4: Encoder attention and sequence length experiments. We use Prefix Global, TGlobal, and full attention at 1k, 2k, and 4k sequence lengths. Note 4k does not fit into memory with full attention. ROUGE-L is plotted.

**Efficiency Comparison** Prefix Global can outperform full attention, while only requiring \(O((l-k)\cdot r+k\cdot l)\) attention complexity for \(k\) global tokens.

| Input Length | TGlobal | Prefix Global | Full |
|---:|---:|---:|---:|
| 1024 | 325,632 | 916,480 | 1,048,576 |
| 2048 | 782,336 | 2,225,152 | 4,194,304 |
| 4096 | 2,088,960 | 4,842,496 | 16,777,216 |

Table 4: The approximate number of FLOPs for each attention ignoring the # of attention heads and embedding dimension (both are the same for each attention).
When implementing the Prefix Global attention, we manually created tensors representing block sparsity to avoid computing the full cross attention. We provide the approximate number of FLOPs of each attention mechanism in Table 4 when ignoring the number of attention heads and embedding dimension. At the 2k input sequence length Prefix Global requires about half the FLOPs of full attention, and experimentally takes about half the time to complete the same experiment with all other settings fixed. The number of FLOPs of Prefix Global at 4k is just over those of full attention at the 2k input length, and is able to fit into memory and maximize performance for each task. Lastly, the FLOP gap between full attention and Prefix Global grows with sequence length. This can sometimes be seen experimentally: performance gaps are larger between full and prefix global for page description at 2k vs. 1k (0.20 vs. 0.09).

### Feature Ablations

We investigate the role of each input feature with Prefix Global attention and fix sequence length to 1k. Starting with just the text available from webpage sections, we incrementally add section titles, indices and special tokens defining section structure (the struct rows of Table 5), the captions of images within each section, and the images. Each input boosts performance8 except section structure which has mixed results; for multimodal experiments we include these extra tokens if they helped in the text-only experiments. This may be due to these extra tokens consuming global tokens in the prefix that otherwise could have been more useful.

Footnote 8: BLEU-4 is less consistent than ROUGE-L and CIDEr.

Images and their captions both improve performance, but result in the highest performance for each task when used in tandem. This illustrates that even when text captions are available, having their visual counterpart is beneficial. In Table 5, when we include captions for the image captioning task, it refers to _context_ captions from other images in the page that never serve as target images. Interestingly, this boosts performance. We suspect contextual captions help the model to learn the style of captions we aim to generate.

| Feature Inputs | Page Desc. B | R | C | Section Summ. B | R | C | Image Caption. B | R | C |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| Text | 13.60 | 37.75 | 77.12 | 9.48 | 28.35 | 65.75 | 9.83 | 33.00 | 133.70 |
| Text, Title | 13.63 | 37.88 | 77.97 | 9.78 | 29.14 | 68.90 | 9.84 | 33.40 | 135.30 |
| Text, Title, Struct | **14.07** | 37.96 | 77.88 | 8.70 | 29.24 | 69.19 | 10.15 | 33.38 | 135.10 |
| Text, Title, Caption | 13.12 | 38.43 | 81.19 | 10.08 | 29.23 | 69.45 | 9.90 | 33.57 | 136.03 |
| Text, Title, Struct, Caption | 13.22 | 38.38 | 81.38 | 9.51 | 29.22 | 69.24 | 10.03 | 33.69 | 137.07 |
| Text, Title, Image | 13.16 | 37.96 | 78.39 | 9.31 | 29.20 | 69.19 | 11.74 | 37.46 | 156.34 |
| Text, Title, Caption, Image | 14.00 | **38.50** | **81.49** | **10.12** | **29.43** | **69.89** | **11.84** | **37.69** | **158.19** |

Table 5: Feature ablations with WikiWeb2M. We ablate over the section body text, title, structure, captions, and images; rows list cumulative feature sets. We report BLEU-4, ROUGE-L and CIDEr metrics.

### Pretrained Checkpoint and Model Size

In Table 6, we perform additional experiments with ViT pretrained on JFT and large T5/ViT models. Unsurprisingly, larger models result in better performance. For page description and section summarization, scaling the model size results in larger performance gains than the impact of any individual feature we ablated. On the other hand, model size has smaller gains for image captioning compared to the impact of our feature ablations; the worst to best performance gap changed by an average of 17.66% for feature ablations and only by 2.43% for model size, where we average the performance delta of BLEU-4, ROUGE-L, and CIDEr. Preference for ViT representations pretrained on JFT or ImageNet varies by task: section summarization tends to prefer JFT, while page description generation and image captioning consistently perform best with large ImageNet trained representations.

| Task | Model | ViT Data | B | R | C |
|---|---|---|---:|---:|---:|
| Page Desc. | Base | im21k | 14.00 | 38.50 | 81.49 |
| Page Desc. | Base | JFT | 13.25 | 38.49 | 82.02 |
| Page Desc. | Large | im21k | **14.67** | **39.63** | **88.90** |
| Page Desc. | Large | JFT | 14.56 | 39.56 | 88.48 |
| Section Summ. | Base | im21k | 10.12 | 29.43 | 69.89 |
| Section Summ. | Base | JFT | 10.15 | 29.40 | 70.03 |
| Section Summ. | Large | im21k | 11.10 | **30.61** | 76.87 |
| Section Summ. | Large | JFT | **11.24** | 30.54 | **76.92** |
| Image Cap. | Base | im21k | 11.84 | 37.69 | 158.19 |
| Image Cap. | Base | JFT | 11.66 | 37.35 | 156.01 |
| Image Cap. | Large | im21k | **12.51** | **38.05** | **162.31** |
| Image Cap. | Large | JFT | 12.08 | 37.33 | 158.81 |

Table 6: Pretrained model checkpoint ablations.

### Comparison to WIT Annotations

The proposed WikiWeb2M is a superset of WIT. For the same set of webpages, we unify all sections into a webpage sample and reintroduce millions of sections and images that were not kept in WIT. Table 7 contains runs when using the original WIT data, the WIT data reprocessed to join the page sections it originally contained, and our WikiWeb2M. For section summarization, the page description is more important than the other context sections. The page description may be more generally relevant to all sections, while each section to be summarized contains a distinct topic compared to the context sections from other parts of the webpage. Lastly, we find WikiWeb2M's additional context sections improve performance the most compared to those already available in WIT (comparing the last two rows of Table 7). This confirms the importance of the new annotations in WikiWeb2M compared to those available in prior work.

## 6 Related Work

Webpage tasks have been studied with text-only HTML for web element classification, HTML description generation, and web navigation. Gur et al.
(2022) proposed finetuning Large Language Models for these tasks. Reinforcement Learning methods have also trained agents to perform commands in handcrafted web environments Gur et al. (2019); Liu et al. (2018); Jia et al. (2019). Wikipedia has also been used to develop downstream tasks. _E.g._, WIT Srinivasan et al. (2021) released image-caption pairs from Wikipedia, in addition to some contextual section text. While WIT does not contain all of the page content, Nguyen et al. (2022) studied contextual image captioning with the available annotations. This is a webpage task and not strictly an image-text problem, as additional section text is included to aid in Wikipedia image captioning, where captions often contain finer-grained, knowledge-based information. Aghajanyan et al. (2022) proposed CM3, a Transformer with a causally masked pretraining objective. CM3 relied on pretraining data from the web containing the images and HTML of a webpage. However, this dataset was not open sourced. Their results illustrated that rich HTML data could be used to learn representations for tasks such as image generation, image in-filling, and entity disambiguation and linking. This demonstrates that webpage data can generalize to non-webpage tasks, but leaves webpage-specific problems unexplored. To the best of our knowledge there is no publicly available multimodal webpage data that captures all webpage content (_i.e_., all text and images present, along with structural information relating them). In mobile apps, the closest domain to webpages, there are two open source datasets that contain all modalities (text, image, and structure): Rico Deka et al. (2017) and MoTIF Burns et al. (2022).

## 7 Conclusion

In this paper we study three generative tasks for multimodal webpage understanding: page description generation, section summarization, and contextual image captioning. To do so, we present the WikiWeb2M dataset, which retains all of the text, images, and structure from nearly 2M pages. We propose a new attention mechanism, Prefix Global, which outperforms full attention by allowing the most salient text and images to specially attend to all inputs. Extensive ablations on attention mechanism, sequence length, model size and checkpoint, input features and section type reveal the most impactful factors on our benchmark suite and verify using WikiWeb2M to study webpage understanding.

| Task | Target | Description | Context | Section Source | BLEU-4 | ROUGE-L | CIDEr |
|---|:-:|:-:|:-:|---|---:|---:|---:|
| Section Summarization | ✓ | | | WikiWeb2M | 8.90 | 27.82 | 60.20 |
| Section Summarization | ✓ | ✓ | | WikiWeb2M | 9.46 | 28.86 | 66.67 |
| Section Summarization | ✓ | ✓ | ✓ | WikiWeb2M | **10.12** | **29.43** | **69.89** |
| Image Captioning | ✓ | | | WIT | 10.92 | 36.21 | 148.53 |
| Image Captioning | ✓ | ✓ | | WIT | 11.21 | 36.63 | 150.98 |
| Image Captioning | ✓ | ✓ | ✓ | WIT | 11.45 | 36.88 | 152.69 |
| Image Captioning | ✓ | ✓ | ✓ | WikiWeb2M | **11.84** | **37.69** | **158.19** |

Table 7: Results for section ablations. We try inputting only the target section per sample, both the target section and context section(s), and whether the sections come from the smaller WIT or our WikiWeb2M superset.
## Limitations The WikiWeb2M dataset reprocessed the webpages available in WIT. We begin with only the English subset of WIT, while it originally contained 108 languages. Our dataset is limited to English and does not cover the vast multilingual data on Wikipedia. We can extend our dataset to cover all languages in WIT, but acknowledge it is monolingual to date. For page description generation and section summarization, we use pseudo summaries that are readily available from Wikipedia pages. While this is desirable from a scalability perspective and is practiced in other works, it can limit the evaluation quality of these tasks. However, we did perform a small scale pilot to collect human annotations for the section summarization task in which we asked the annotators if the first sentence sufficed; 94% of the time the majority vote out of five was yes. Pseudo summaries have also been used for other tasks like summarizing instructional videos [20]. For the model settings we explore, we did not try all exhaustive combinations of features, attention mechanism, model configuration, and input length. We also only use T5 variants, but note T5 is state-of-the-art for generation style problems. Lastly, we design our set of fine-tuning tasks for generative tasks. Our work currently does not include tasks like webpage taxonomy classification or webpage retrieval, but additional tasks like topic classification could be performed with WikiWeb2M. ## Ethics Statement While the Internet provides a vast and rich domain to collect data from, it also has potential risks. Wikipedia is a highly curated and monitored knowledge base of articles, but it can be edited by the public, which can create potential quality risks. Additionally, Wikipedia is a largely fact-based domain, where incorrectly summarizing an article could result in disinformation. We hope our dataset can be used as a new resource to improve the accuracy and factual correctness of text generation machine learning models. As we use Wikipedia data, there is no user data or PII in the proposed WikiWeb2M dataset. Additionally, we ran analysis to remove a small subset of pages with potentially sensitive topics (_e.g._, natural disasters, funeral, blood). ## Acknowledgements This work is in part thanks to funding from the Google Ph.D. Fellowship program.
2305.14314
QLoRA: Efficient Finetuning of Quantized LLMs
We present QLoRA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance. QLoRA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters~(LoRA). Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU. QLoRA introduces a number of innovations to save memory without sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that is information theoretically optimal for normally distributed weights, (b) double quantization to reduce the average memory footprint by quantizing the quantization constants, and (c) paged optimizers to manage memory spikes. We use QLoRA to finetune more than 1,000 models, providing a detailed analysis of instruction following and chatbot performance across 8 instruction datasets, multiple model types (LLaMA, T5), and model scales that would be infeasible to run with regular finetuning (e.g. 33B and 65B parameter models). Our results show that QLoRA finetuning on a small high-quality dataset leads to state-of-the-art results, even when using smaller models than the previous SoTA. We provide a detailed analysis of chatbot performance based on both human and GPT-4 evaluations showing that GPT-4 evaluations are a cheap and reasonable alternative to human evaluation. Furthermore, we find that current chatbot benchmarks are not trustworthy to accurately evaluate the performance levels of chatbots. A lemon-picked analysis demonstrates where Guanaco fails compared to ChatGPT. We release all of our models and code, including CUDA kernels for 4-bit training.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer
2023-05-23T17:50:33Z
http://arxiv.org/abs/2305.14314v1
# QLoRA: Efficient Finetuning of Quantized LLMs

###### Abstract

We present QLoRA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance. QLoRA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters (LoRA). Our best model family, which we name **Guanaco**, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU. QLoRA introduces a number of innovations to save memory without sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that is information theoretically optimal for normally distributed weights, (b) Double Quantization to reduce the average memory footprint by quantizing the quantization constants, and (c) Paged Optimizers to manage memory spikes. We use QLoRA to finetune more than 1,000 models, providing a detailed analysis of instruction following and chatbot performance across 8 instruction datasets, multiple model types (LLaMA, T5), and model scales that would be infeasible to run with regular finetuning (e.g. 33B and 65B parameter models). Our results show that QLoRA finetuning on a small high-quality dataset leads to state-of-the-art results, even when using smaller models than the previous SoTA. We provide a detailed analysis of chatbot performance based on both human and GPT-4 evaluations showing that GPT-4 evaluations are a cheap and reasonable alternative to human evaluation. Furthermore, we find that current chatbot benchmarks are not trustworthy to accurately evaluate the performance levels of chatbots. A lemon-picked analysis demonstrates where **Guanaco** fails compared to ChatGPT. We release all of our models and code, including CUDA kernels for 4-bit training.2

Footnote 2: [https://github.com/artidoro/qlora](https://github.com/artidoro/qlora) and [https://github.com/TimDettmers/bitsandbytes](https://github.com/TimDettmers/bitsandbytes)

## 1 Introduction

Finetuning large language models (LLMs) is a highly effective way to improve their performance [40; 62; 43; 61; 59; 37] and to add desirable or remove undesirable behaviors [43; 2; 4]. However, finetuning very large models is prohibitively expensive; regular 16-bit finetuning of a LLaMA 65B parameter model [57] requires more than 780 GB of GPU memory. While recent quantization methods can reduce the memory footprint of LLMs [14; 13; 18; 66], such techniques only work for inference and break down during training [65]. We demonstrate for the first time that it is possible to finetune a quantized 4-bit model without any performance degradation. Our method, QLoRA, uses a novel high-precision technique to quantize a pretrained model to 4-bit, then adds a small set of learnable Low-rank Adapter weights [28] that are tuned by backpropagating gradients through the quantized weights. QLoRA reduces the average memory requirements of finetuning a 65B parameter model from \(>\)780GB of GPU memory to \(<\)48GB without degrading the runtime or predictive performance compared to a 16-bit fully finetuned baseline. This marks a significant shift in accessibility of LLM finetuning: the largest publicly available models to date are now finetunable on a single GPU.
Using QLoRA, we train the **Guanaco** family of models, with the second best model reaching 97.8% of the performance level of ChatGPT on the Vicuna [10] benchmark, while being trainable in less than 12 hours on a single consumer GPU; using a single professional GPU over 24 hours we achieve 99.3% with our largest model, essentially closing the gap to ChatGPT on the Vicuna benchmark. When deployed, our smallest **Guanaco** model (7B parameters) requires just 5 GB of memory and outperforms a 26 GB Alpaca model by more than 20 percentage points on the Vicuna benchmark (Table 6). QLoRA introduces multiple innovations designed to reduce memory use without sacrificing performance: (1) **4-bit NormalFloat**, an information theoretically optimal quantization data type for normally distributed data that yields better empirical results than 4-bit Integers and 4-bit Floats. (2) **Double Quantization**, a method that quantizes the quantization constants, saving an average of about 0.37 bits per parameter (approximately 3 GB for a 65B model). (3) **Paged Optimizers**, using NVIDIA unified memory to avoid the gradient checkpointing memory spikes that occur when processing a mini-batch with a long sequence length. We combine these contributions into a better tuned LoRA approach that includes adapters at every network layer and thereby avoids almost all of the accuracy tradeoffs seen in prior work. QLoRA's efficiency enables us to perform an in-depth study of instruction finetuning and chatbot performance on model scales that would be impossible using regular finetuning due to memory overhead. Therefore, we train more than 1,000 models across several instruction tuning datasets, model architectures, and sizes between 80M to 65B parameters. In addition to showing that QLoRA recovers 16-bit performance (SS4) and training a state-of-the-art chatbot, **Guanaco**, (SS5), we also analyze trends in the trained models. First, we find that data quality is far more important than dataset size, e.g., a 9k sample dataset (OASST1) outperformed a 450k sample dataset (FLAN v2, subsampled) on chatbot performance, even when both are meant to support instruction following generalization. Second, we show that strong Massive Multitask Language Understanding (MMLU) benchmark performance does not imply strong Vicuna chatbot benchmark performance and vice versa--in other words, dataset suitability matters more than size for a given task. Furthermore, we also provide an extensive analysis of chatbot performance that uses both human raters and GPT-4 for evaluation. We use tournament-style benchmarking where models compete against each other in matches to produce the best response for a given prompt. The winner of a match is judged by either GPT-4 or human annotators. The tournament results are aggregated into Elo scores [16; 17] which determine the ranking of chatbot performance. We find that GPT-4 and human evaluations largely agree on the rank of model performance in the tournaments, but we also find there are instances of strong disagreement. As such, we highlight that model-based evaluation while providing a cheap alternative to human-annotation also has its uncertainties. We augment our chatbot benchmark results with a qualitative analysis of **Guanaco** models. Our analysis highlights success and failure cases that were not captured by the quantitative benchmarks. We release all model generations with human and GPT-4 annotations to facilitate further study.
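To make the tournament aggregation concrete, here is a generic Elo sketch of ours (not the authors' code). Averaging over many random match orderings follows the protocol described here; the 1,000-point starting rating, the K-factor, and the scoring convention are assumptions.

```python
import random

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """One Elo update after a match; score_a is 1 for a win, 0.5 tie, 0 loss."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a += k * (score_a - expected_a)
    r_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a, r_b

def tournament_elo(matches, players, n_orderings=10_000, seed=0):
    """Average Elo over random match orderings (Elo is order-dependent).
    matches: (model_a, model_b, score_a) triples judged by GPT-4 or humans."""
    rng = random.Random(seed)
    totals = {p: 0.0 for p in players}
    for _ in range(n_orderings):
        ratings = {p: 1000.0 for p in players}  # assumed starting rating
        order = matches[:]
        rng.shuffle(order)
        for a, b, s in order:
            ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], s)
        for p in players:
            totals[p] += ratings[p]
    return {p: totals[p] / n_orderings for p in players}
```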
We open-source our codebase and CUDA kernels and integrate our methods into the Hugging Face transformers stack [64], making them easily accessible to all. We release a collection of adapters for 7/13/33/65B size models, trained on 8 different instruction following datasets, for a total of 32 different open-sourced, finetuned models.

| Model | Size | Elo |
|---|---:|---:|
| GPT-4 | - | 1348 ± 1 |
| Guanaco 65B | 41 GB | 1022 ± 1 |
| Guanaco 33B | 21 GB | 992 ± 1 |
| Vicuna 13B | 26 GB | 974 ± 1 |
| ChatGPT | - | 966 ± 1 |
| Guanaco 13B | 10 GB | 916 ± 1 |
| Bard | - | 902 ± 1 |
| Guanaco 7B | 6 GB | 879 ± 1 |

Table 1: Elo ratings for a competition between models, averaged for 10,000 random initial orderings. The winner of a match is determined by GPT-4 which declares which response is better for a given prompt of the Vicuna benchmark. 95% confidence intervals are shown (\(\pm\)). After GPT-4, Guanaco 33B and 65B win the most matches, while Guanaco 13B scores better than Bard.

## 2 Background

**Block-wise k-bit Quantization.** Quantization is the process of discretizing an input from a representation that holds more information to a representation with less information. It often means taking a data type with more bits and converting it to fewer bits, for example from 32-bit floats to 8-bit Integers. To ensure that the entire range of the low-bit data type is used, the input data type is commonly rescaled into the target data type range through normalization by the absolute maximum of the input elements, which are usually structured as a tensor. For example, quantizing a 32-bit Floating Point (FP32) tensor into an Int8 tensor with range \([-127,127]\): \[\mathbf{X}^{\text{Int8}}=\text{round}\left(\frac{127}{\text{absmax}(\mathbf{X}^{\text{FP32}})}\mathbf{X}^{\text{FP32}}\right)=\text{round}(c^{\text{FP32}}\cdot\mathbf{X}^{\text{FP32}}), \tag{1}\] where \(c\) is the _quantization constant_ or _quantization scale_. Dequantization is the inverse: \[\text{dequant}(c^{\text{FP32}},\mathbf{X}^{\text{Int8}})=\frac{\mathbf{X}^{\text{Int8}}}{c^{\text{FP32}}}=\mathbf{X}^{\text{FP32}} \tag{2}\] The problem with this approach is that if a large magnitude value (i.e., an outlier) occurs in the input tensor, then the quantization bins--certain bit combinations--are not utilized well with few or no numbers quantized in some bins. To prevent the outlier issue, a common approach is to chunk the input tensor into blocks that are independently quantized, each with their own quantization constant \(c\). This can be formalized as follows: We chunk the input tensor \(\mathbf{X}\in\mathbb{R}^{b\times h}\) into \(n\) contiguous blocks of size \(B\) by flattening the input tensor and slicing the linear segment into \(n=(b\times h)/B\) blocks. We quantize these blocks independently with Equation 1 to create a quantized tensor and \(n\) quantization constants \(c_{i}\).

**Low-rank Adapters.** Low-rank Adapter (LoRA) finetuning [28] is a method that reduces memory requirements by using a small set of trainable parameters, often termed adapters, while not updating the full model parameters which remain fixed. Gradients during stochastic gradient descent are passed through the fixed pretrained model weights to the adapter, which is updated to optimize the loss function. LoRA augments a linear projection through an additional factorized projection.
Given a projection \(\mathbf{X}\mathbf{W}=\mathbf{Y}\) with \(\mathbf{X}\in\mathbb{R}^{b\times h}\), \(\mathbf{W}\in\mathbb{R}^{h\times o}\), LoRA computes: \[\mathbf{Y}=\mathbf{X}\mathbf{W}+s\mathbf{X}\mathbf{L}_{1}\mathbf{L}_{2}, \tag{3}\] where \(\mathbf{L}_{1}\in\mathbb{R}^{h\times r}\) and \(\mathbf{L}_{2}\in\mathbb{R}^{r\times o}\), and \(s\) is a scalar.

**Memory Requirement of Parameter-Efficient Finetuning.** One important point of discussion is the memory requirement of LoRA during training both in terms of the number and size of adapters used. Since the memory footprint of LoRA is so minimal, we can use more adapters to improve performance without significantly increasing the total memory used. While LoRA was designed as a Parameter Efficient Finetuning (PEFT) method, most of the memory footprint for LLM finetuning comes from activation gradients and not from the learned LoRA parameters. For a 7B LLaMA model trained on FLAN v2 with a batch size of 1, with LoRA weights equivalent to the commonly used 0.2% of the original model weights [28; 37], the LoRA input gradients have a memory footprint of 567 MB while the LoRA parameters take up only 26 MB. With gradient checkpointing [9], the input gradients reduce to an average of 18 MB per sequence making them more memory intensive than all LoRA weights combined. In comparison, the 4-bit base model consumes 5,048 MB of memory. This highlights that gradient checkpointing is important but also that aggressively reducing the number of LoRA parameters yields only minor memory benefits. This means we can use more adapters without significantly increasing the overall training memory footprint (see Appendix G for a detailed breakdown). As discussed later, this is crucial for recovering full 16-bit precision performance.

Figure 1: Different finetuning methods and their memory requirements. QLoRA improves over LoRA by quantizing the transformer model to 4-bit precision and using paged optimizers to handle memory spikes.

## 3 QLoRA Finetuning

QLoRA achieves high-fidelity 4-bit finetuning via two techniques we propose--4-bit NormalFloat (NF4) quantization and Double Quantization. Additionally, we introduce Paged Optimizers, to prevent memory spikes during gradient checkpointing from causing out-of-memory errors that have traditionally made finetuning on a single machine difficult for large models. QLoRA has one low-precision storage data type, in our case usually 4-bit, and one computation data type that is usually BFloat16. In practice, this means whenever a QLoRA weight tensor is used, we dequantize the tensor to BFloat16, and then perform a matrix multiplication in 16-bit. We now discuss the components of QLoRA followed by a formal definition of QLoRA.

**4-bit NormalFloat Quantization.** The NormalFloat (NF) data type builds on Quantile Quantization [15] which is an information-theoretically optimal data type that ensures each quantization bin has an equal number of values assigned from the input tensor. Quantile quantization works by estimating the quantile of the input tensor through the empirical cumulative distribution function. The main limitation of quantile quantization is that the process of quantile estimation is expensive. Therefore fast quantile approximation algorithms, such as SRAM quantiles [15], are used to estimate them. Due to the approximate nature of these quantile estimation algorithms, the data type has large quantization errors for outliers, which are often the most important values.
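Before turning to how NF4 sidesteps these issues, a minimal sketch of the block-wise absmax scheme of Equations (1)-(2), which the storage formats here build on, may be useful; the toy shapes are ours, and the block size of 64 matches the choice used later for \(\mathbf{W}\).

```python
import numpy as np

def blockwise_absmax_quant(x: np.ndarray, block: int = 64):
    """Eqs. (1)-(2) with block-wise constants: flatten, slice into blocks,
    and scale each block by 127/absmax before rounding to int8."""
    flat = x.reshape(-1, block)                       # n = x.size / block blocks
    c = 127.0 / np.abs(flat).max(axis=1)              # one constant per block
    q = np.round(flat * c[:, None]).astype(np.int8)   # Eq. (1), per block
    return q, c

def blockwise_dequant(q: np.ndarray, c: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) / c[:, None]          # Eq. (2), per block

w = np.random.randn(4, 128).astype(np.float32)
q, c = blockwise_absmax_quant(w)
w_hat = blockwise_dequant(q, c).reshape(w.shape)
print(np.abs(w - w_hat).max())  # small, but grows if a block contains an outlier
```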
Expensive quantile estimates and approximation errors can be avoided when input tensors come from a distribution fixed up to a quantization constant. In such cases, input tensors have the same quantiles making exact quantile estimation computationally feasible. Since pretrained neural network weights usually have a zero-centered normal distribution with standard deviation \(\sigma\) (see Appendix F), we can transform all weights to a single fixed distribution by scaling \(\sigma\) such that the distribution fits exactly into the range of our data type. For our data type, we set the arbitrary range \([-1,1]\). As such, both the quantiles for the data type and the neural network weights need to be normalized into this range. The information theoretically optimal data type for zero-mean normal distributions with arbitrary standard deviations \(\sigma\) in the range \([-1,1]\) is computed as follows: (1) estimate the \(2^{k}+1\) quantiles of a theoretical \(N(0,1)\) distribution to obtain a \(k\)-bit quantile quantization data type for normal distributions, (2) take this data type and normalize its values into the \([-1,1]\) range, (3) quantize an input weight tensor by normalizing it into the \([-1,1]\) range through absolute maximum rescaling. Once the weight range and data type range match, we can quantize as usual. Step (3) is equivalent to rescaling the standard deviation of the weight tensor to match the standard deviation of the k-bit data type. More formally, we estimate the \(2^{k}\) values \(q_{i}\) of the data type as follows: \[q_{i}=\frac{1}{2}\left(Q_{X}\left(\frac{i}{2^{k}+1}\right)+Q_{X}\left(\frac{i+1}{2^{k}+1}\right)\right), \tag{4}\] where \(Q_{X}(\cdot)\) is the quantile function of the standard normal distribution \(N(0,1)\). A problem for a symmetric k-bit quantization is that this approach does not have an exact representation of zero, which is an important property to quantize padding and other zero-valued elements with no error. To ensure a discrete zeropoint of \(0\) and to use all \(2^{k}\) bits for a k-bit datatype, we create an asymmetric data type by estimating the quantiles \(q_{i}\) of two ranges: \(2^{k-1}\) for the negative part and \(2^{k-1}+1\) for the positive part, and then we unify these sets of \(q_{i}\) and remove one of the two zeros that occurs in both sets. We term the resulting data type that has an equal expected number of values in each quantization bin \(k\)_-bit NormalFloat_ (NFk), since the data type is information-theoretically optimal for zero-centered normally distributed data. The exact values of this data type can be found in Appendix E.

**Double Quantization.** We introduce _Double Quantization_ (DQ), the process of quantizing the quantization constants for additional memory savings. While a small blocksize is required for precise 4-bit quantization [13], it also has a considerable memory overhead. For example, using 32-bit constants and a blocksize of 64 for \(\mathbf{W}\), quantization constants add \(32/64=0.5\) bits per parameter on average. Double Quantization helps reduce the memory footprint of quantization constants. More specifically, Double Quantization treats quantization constants \(c_{2}^{\text{FP32}}\) of the first quantization as inputs to a second quantization. This second step yields the quantized quantization constants \(c_{2}^{\text{FP8}}\) and the second level of quantization constants \(c_{1}^{\text{FP32}}\).
We use 8-bit Floats with a blocksize of 256 for the second quantization, as no performance degradation is observed for 8-bit quantization, in line with results from Dettmers and Zettlemoyer [13]. Since the \(c_{2}^{\text{FP32}}\) are positive, we subtract the mean from \(c_{2}\) before quantization to center the values around zero and make use of symmetric quantization. On average, for a blocksize of 64, this quantization reduces the memory footprint per parameter from \(32/64=0.5\) bits to \(8/64+32/(64\cdot 256)=0.127\) bits, a reduction of 0.373 bits per parameter.

**Paged Optimizers** use the NVIDIA unified memory feature (see Footnote 3), which does automatic page-to-page transfers between the CPU and GPU for error-free GPU processing in the scenario where the GPU occasionally runs out of memory. The feature works like regular memory paging between CPU RAM and the disk. We use this feature to allocate paged memory for the optimizer states, which are then automatically evicted to CPU RAM when the GPU runs out of memory and paged back into GPU memory when the memory is needed in the optimizer update step.

Footnote 3: [https://docs.nvidia.com/cuda/cuda-c-programming-guide](https://docs.nvidia.com/cuda/cuda-c-programming-guide)

**QLoRA.** Using the components described above, we define QLoRA for a single linear layer in the quantized base model with a single LoRA adapter as follows: \[\mathbf{Y}^{\text{BF16}}=\mathbf{X}^{\text{BF16}}\text{doubleDequant}(c_{1}^{\text{FP32}},c_{2}^{\text{k-bit}},\mathbf{W}^{\text{NF4}})+\mathbf{X}^{\text{BF16}}\mathbf{L}_{1}^{\text{BF16}}\mathbf{L}_{2}^{\text{BF16}}, \tag{5}\] where \(\text{doubleDequant}(\cdot)\) is defined as: \[\text{doubleDequant}(c_{1}^{\text{FP32}},c_{2}^{\text{k-bit}},\mathbf{W}^{\text{k-bit}})=\text{dequant}(\text{dequant}(c_{1}^{\text{FP32}},c_{2}^{\text{k-bit}}),\mathbf{W}^{\text{k-bit}})=\mathbf{W}^{\text{BF16}}. \tag{6}\] We use NF4 for \(\mathbf{W}\) and FP8 for \(c_{2}\). We use a blocksize of 64 for \(\mathbf{W}\) for higher quantization precision and a blocksize of 256 for \(c_{2}\) to conserve memory.

For parameter updates, only the gradient with respect to the error for the adapter weights \(\frac{\partial E}{\partial\mathbf{L}_{i}}\) is needed, and not for the 4-bit weights \(\frac{\partial E}{\partial\mathbf{W}}\). However, the calculation of \(\frac{\partial E}{\partial\mathbf{L}_{i}}\) entails the calculation of \(\frac{\partial E}{\partial\mathbf{X}}\), which proceeds via equation (5) with dequantization from the storage data type \(\mathbf{W}^{\text{NF4}}\) to the computation data type \(\mathbf{W}^{\text{BF16}}\), so that the derivative is calculated in BFloat16 precision.

To summarize, QLoRA has one storage data type (usually 4-bit NormalFloat) and a computation data type (16-bit BrainFloat). We dequantize the storage data type to the computation data type to perform the forward and backward pass, but we only compute weight gradients for the LoRA parameters, which use 16-bit BrainFloat.

## 4 QLoRA vs. Standard Finetuning

We have discussed how QLoRA works and how it can significantly reduce the required memory for finetuning models. The main question now is whether QLoRA can perform as well as full-model finetuning. Furthermore, we want to analyze the components of QLoRA, including the impact of NormalFloat4 over standard Float4. The following sections discuss the experiments aimed at answering these questions.
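Before turning to the experiments, a compact sketch of the forward pass defined by equations (5)-(6). Per-block reshaping and broadcasting are omitted, and all names are ours rather than the bitsandbytes API:

```python
import torch

def dequant(codes: torch.Tensor, absmax, codebook: torch.Tensor):
    # Look up the codebook value of each code, rescale by the block constant.
    return codebook[codes.long()] * absmax

def double_dequant(c1_fp32, c2_codes, c2_mean, w_codes, fp8_book, nf4_book):
    # Inner dequant of Eq. (6): recover the FP32 block constants from their
    # 8-bit codes, re-adding the mean subtracted for symmetric quantization.
    c2 = dequant(c2_codes, c1_fp32, fp8_book) + c2_mean
    # Outer dequant: recover BF16 weights from their NF4 codes.
    return dequant(w_codes, c2, nf4_book).to(torch.bfloat16)

def qlora_linear(x_bf16, w_state, L1, L2):
    W = double_dequant(*w_state)               # storage -> computation dtype
    return x_bf16 @ W + (x_bf16 @ L1) @ L2     # Eq. (5)

# Sanity check of the Double Quantization arithmetic quoted above:
bits_per_param = 8 / 64 + 32 / (64 * 256)      # 0.126953125 ~ 0.127 bits
```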
**Experimental setup.** We consider three architectures (encoder, encoder-decoder, and decoder-only) and compare QLoRA with 16-bit adapter finetuning and with full finetuning for models up to 3B parameters. Our evaluations include GLUE [58] with RoBERTa-large [38], Super-NaturalInstructions (TKInstruct) [61] with T5 [49], and 5-shot MMLU [24] after finetuning LLaMA on FLAN v2 [39] and Alpaca [55]. To additionally study the advantages of NF4 over other 4-bit data types, we use the setup of Dettmers and Zettlemoyer [13] and measure post-quantization zero-shot accuracy and perplexity across different models (OPT [72], LLaMA [57], BLOOM [52], Pythia [7]) for model sizes 125M to 13B. We provide more details in the results section for each particular setup to make the results more readable. Full details are in Appendix A.

While paged optimizers are critical to 33B/65B QLoRA tuning on a single 24/48 GB GPU, we do not provide hard measurements for Paged Optimizers since the paging only occurs when processing mini-batches with long sequence lengths, which is rare. We do, however, perform an analysis of the runtime of paged optimizers for 65B models on 48 GB GPUs and find that with a batch size of 16, paged optimizers provide the same training speed as regular optimizers. Future work should measure and characterize under what circumstances slowdowns occur from the paging process.

**Default LoRA hyperparameters do not match 16-bit performance.** When using the standard practice of applying LoRA to query and value attention projection matrices [28], we are not able to replicate full finetuning performance for large base models. As shown in Figure 2 for LLaMA 7B finetuning on Alpaca, we find that the most critical LoRA hyperparameter is how many LoRA adapters are used in total, and that LoRA on all linear transformer block layers is required to match full finetuning performance. Other LoRA hyperparameters, such as the projection dimension \(r\), do not affect performance (see Appendix A). Similarly, we find that default hyperparameters for fully finetuned baselines are undertuned. We do a hyperparameter search over learning rates 1e-6 to 5e-5 and batch sizes 8 to 128 to find robust baselines. Results for 7B LLaMA finetuning on Alpaca are shown in Figure 2.

**4-bit NormalFloat yields better performance than 4-bit Floating Point.** While the 4-bit NormalFloat (NF4) data type is information-theoretically optimal, it still needs to be determined if this property translates to empirical advantages. We follow the setup from Dettmers and Zettlemoyer [13] where quantized LLMs (OPT [72], BLOOM [52], Pythia [7], LLaMA) of different sizes (125M to 65B) with different data types are evaluated on language modeling and a set of zero-shot tasks. In Figure 3 and Table 2 we see that NF4 improves performance significantly over FP4 and Int4, and that double quantization reduces the memory footprint without degrading performance.

**k-bit QLoRA matches 16-bit full finetuning and 16-bit LoRA performance.** 4-bit quantization for inference is possible, but leads to performance degradation relative to 16-bit [13; 18]. This raises the crucial question of whether the lost performance can be recovered by conducting 4-bit adapter finetuning. We test this for two setups. The first focuses on a comparison with full 16-bit finetuning of RoBERTa and T5 models sized 125M to 3B parameters on GLUE and the Super-NaturalInstructions dataset. Results are shown in Table 3. On both datasets, we observe that 16-bit, 8-bit, and 4-bit adapter methods replicate the performance of the fully finetuned 16-bit baseline.
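As an aside on the adapter-placement finding above (LoRA on every linear transformer block layer), a QLoRA setup along these lines can be expressed with today's Hugging Face `transformers`/`peft` interface. This interface postdates the paper, so the exact flags and model id below are our assumption, not the authors' code:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # 4-bit NormalFloat storage type
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # computation data type
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b", quantization_config=bnb_config)  # assumed model id

# Attach LoRA to *all* linear transformer block layers, not just query/value.
lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"])
model = get_peft_model(model, lora_config)
```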
These results suggest that the performance lost due to imprecise quantization can be fully recovered through adapter finetuning after quantization.

For our second setup, since fully finetuning models at and beyond 11B parameters requires more than one server of high-memory GPUs, we continue to test whether 4-bit QLoRA can match 16-bit LoRA at the 7B to 65B parameter scales. To this end, we finetune LLaMA 7B through 65B on two instruction following datasets, Alpaca and FLAN v2, and evaluate on the MMLU benchmark via 5-shot accuracy. Results are shown in Table 4, where we see that NF4 with double quantization fully recovers the 16-bit LoRA MMLU performance. In addition, we note that QLoRA with FP4 lags behind the 16-bit BrainFloat LoRA baseline by about one percentage point. This corroborates both of our findings: (1) QLoRA with NF4 replicates both 16-bit full finetuning and 16-bit LoRA finetuning performance, and (2) NF4 is superior to FP4 in terms of quantization precision.

**Summary.** Our results consistently show that 4-bit QLoRA with the NF4 data type matches 16-bit full finetuning and 16-bit LoRA finetuning performance on academic benchmarks with well-established evaluation setups. We have also shown that NF4 is more effective than FP4 and that double quantization does not degrade performance. Combined, this forms compelling evidence that 4-bit QLoRA tuning reliably yields results matching 16-bit methods.

In line with previous work on quantization [13], our MMLU and Elo results indicate that with a given finetuning and inference resource budget it is beneficial to increase the number of parameters in the base model while decreasing their precision. This highlights the importance of the efficiency benefits of QLoRA. Since we did not observe performance degradation compared to full finetuning in our experiments with 4-bit finetuning, this raises the question of where the performance-precision trade-off exactly lies for QLoRA tuning, which we leave to future work to explore.

We proceed to investigate instruction tuning at scales that would be impossible to explore with full 16-bit finetuning on academic research hardware.

## 5 Pushing the Chatbot State-of-the-art with QLoRA

Having established that 4-bit QLoRA matches 16-bit performance across scales, tasks, and datasets, we conduct an in-depth study of instruction finetuning up to the largest open-source language models available for research. To assess the performance of instruction finetuning these models, we evaluate

\begin{table} \begin{tabular}{l c} \hline Data type & Mean PPL \\ \hline Int4 & 34.34 \\ Float4 (E2M1) & 31.07 \\ Float4 (E3M0) & 29.48 \\ NFloat4 + DQ & **27.41** \\ \hline \end{tabular} \end{table} Table 2: Pile Common Crawl mean perplexity for different data types for 125M to 13B OPT, BLOOM, LLaMA, and Pythia models.

\begin{table} \begin{tabular}{l c c c c c c} \hline Dataset & GLUE (Acc.)
& \multicolumn{5}{c}{Super-NaturalInstructions (RougeL)} \\ Model & RoBERTa-large & T5-80M & T5-250M & T5-780M & T5-3B & T5-11B \\ \hline BF16 & 88.6 & 40.1 & 42.1 & 48.0 & 54.3 & 62.0 \\ BF16 replication & 88.6 & 40.0 & 42.2 & 47.3 & 54.9 & - \\ \hline LoRA BF16 & 88.8 & 40.5 & 42.6 & 47.1 & 55.4 & 60.7 \\ QLoRA Int8 & 88.8 & 40.4 & 42.9 & 45.4 & 56.5 & 60.7 \\ QLoRA FP4 & 88.6 & 40.3 & 42.4 & 47.5 & 55.6 & 60.9 \\ QLoRA NF4 + DQ & - & 40.4 & 42.7 & 47.7 & 55.3 & 60.9 \\ \hline \end{tabular} \end{table} Table 3: Experiments comparing 16-bit BrainFloat (BF16), 8-bit Integer (Int8), 4-bit Float (FP4), and 4-bit NormalFloat (NF4) on GLUE and Super-NaturalInstructions. QLoRA replicates 16-bit LoRA and full finetuning.

on a challenging Natural Language Understanding benchmark (MMLU) and develop new methods for real-world chatbot performance evaluation.

### Experimental setup

We now describe an overview of the experimental setup with full details in Appendix B.

**Data.** As, to our knowledge, there is no comprehensive study of recent instruction-following datasets, we select eight recent datasets. We include datasets obtained through crowd-sourcing (OASST1 [31], HH-RLHF [4]), distillation from instruction-tuned models (Alpaca [55], self-instruct [59], unnatural-instructions [26]), corpora aggregations (FLAN v2 [12]), as well as hybrids (Chip2 [32], Longform [30]). These datasets cover different languages, data sizes, and licenses.

**Training Setup.** To avoid confounding effects from different training objectives, we perform QLoRA finetuning with cross-entropy loss (supervised learning) without reinforcement learning, even for datasets that include human judgments of different responses. For datasets that have a clear distinction between instruction and response, we finetune only on the response (see ablations in Appendix B). For OASST1 and HH-RLHF, multiple responses are available. We then select the top response at every level of the conversation tree and finetune on the full selected conversation, including the instructions. In all of our experiments, we use NF4 QLoRA with double quantization and paged optimizers to prevent memory spikes during gradient checkpointing. We do small hyperparameter searches for the 13B and 33B LLaMA models, and we find that all hyperparameter settings found at 7B generalize (including number of epochs) except learning rate and batch size. We halve the learning rate for 33B and 65B while doubling the batch size.

**Baselines.** We compare our models to both research (Vicuna [10] and Open Assistant [31]) and commercial (GPT-4 [42], GPT-3.5 Turbo, and Bard) chatbot systems. The Open Assistant model is a LLaMA 33B model finetuned with Reinforcement Learning from Human Feedback (RLHF) on the same OASST1 dataset that we experiment with. Vicuna does full finetuning of LLaMA 13B on proprietary user-shared conversations from ShareGPT and is thus the result of distillation from OpenAI GPT models.

### Evaluation

Following common practice, we use the MMLU (Massively Multitask Language Understanding) benchmark [24] to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.

We also test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses.
While this is a more realistic testbed for chatbot model performance and is growing in popularity, there is no commonly accepted protocol in the literature. We describe below our proposed setup, using nucleus sampling with \(p=0.9\) and temperature \(0.7\) in all cases.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Data type} & \multicolumn{8}{c}{Mean 5-shot MMLU Accuracy} & \multirow{2}{*}{Mean} \\ \cline{2-9} & \multicolumn{2}{c}{7B} & \multicolumn{2}{c}{13B} & \multicolumn{2}{c}{33B} & \multicolumn{2}{c}{65B} & \\ & Alpaca & FLAN v2 & Alpaca & FLAN v2 & Alpaca & FLAN v2 & Alpaca & FLAN v2 & \\ \hline BFloat16 & 38.4 & 45.6 & 47.2 & 50.6 & 57.7 & 60.5 & 61.8 & 62.5 & 53.0 \\ Float4 & 37.2 & 44.0 & 47.3 & 50.0 & 55.9 & 58.5 & 61.3 & 63.3 & 52.2 \\ NFloat4 + DQ & 39.0 & 44.5 & 47.5 & 50.7 & 57.3 & 59.2 & 61.8 & 63.9 & 53.1 \\ \hline \hline \end{tabular} \end{table} Table 4: Mean 5-shot MMLU test accuracy for LLaMA 7-65B models finetuned with adapters on Alpaca and FLAN v2 for different data types. Overall, NF4 with double quantization (DQ) matches BFloat16 performance, while FP4 is consistently one percentage point behind both.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & 7B & 13B & 33B & 65B \\ \hline LLaMA no tuning & 35.1 & 46.9 & 57.8 & 63.4 \\ \hline Self-Instruct & 36.4 & 33.3 & 53.0 & 56.7 \\ Longform & 32.1 & 43.2 & 56.6 & 59.7 \\ Chip2 & 34.5 & 41.6 & 53.6 & 59.8 \\ HH-RLHF & 34.9 & 44.6 & 55.8 & 60.1 \\ Unnatural Instruct & 41.9 & 48.1 & 57.3 & 61.3 \\ Guanaco (OASST1) & 36.6 & 46.4 & 57.0 & 62.2 \\ Alpaca & 38.8 & 47.8 & 57.3 & 62.5 \\ FLAN v2 & 44.5 & 51.4 & 59.2 & 63.9 \\ \hline \hline \end{tabular} \end{table} Table 5: MMLU 5-shot test results for different sizes of LLaMA finetuned on the corresponding datasets using QLoRA.

**Benchmark Data.** We evaluate on two curated datasets of queries (questions): the Vicuna prompts [10] and the OASST1 validation dataset [31]. We use the Vicuna prompts, a set of 80 prompts from a diverse set of categories, without modifications. The OASST1 dataset is a multilingual collection of crowd-sourced multiturn dialogs between a user and an assistant. We select all user messages in the validation dataset as queries and include previous turns in the prompt. This procedure leads to 953 unique user queries. We term these two datasets the Vicuna and OA benchmarks.

**Automated Evaluation.** First, based on the evaluation protocol introduced by Chiang et al. [10], we use GPT-4 to rate the performance of different systems against ChatGPT (GPT-3.5 Turbo) on the Vicuna benchmark. Given a query along with ChatGPT's and a model's responses, GPT-4 is prompted to assign a score out of ten to both responses and provide an explanation. The overall performance of a model is calculated as the percentage of the score that ChatGPT achieved. Note that this relative score can be higher than 100% if the model achieves a higher absolute score than ChatGPT. We find a significant ordering effect, with GPT-4 increasing the score of the response occurring earlier in the prompt. To control for such effects, we recommend reporting the mean score over both orders.

Next, we measure performance through direct comparisons between system outputs. We simplify the rating scheme to a three-class labeling problem that accounts for ties. We prompt GPT-4 to pick the best response or declare a tie and provide an explanation. We conduct these head-to-head comparisons on all permutations of pairs of systems on both the Vicuna and OA benchmarks.
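A minimal sketch of the order-debiased relative score described above (names and example scores are ours): each system is scored out of ten under both prompt orders, averaged, then reported as a percentage of ChatGPT's score:

```python
def relative_score(model_scores: tuple[float, float],
                   chatgpt_scores: tuple[float, float]) -> float:
    # One GPT-4 score (out of ten) per prompt order; averaging over both
    # orders controls for the position bias noted above.
    model_mean = (model_scores[0] + model_scores[1]) / 2
    chatgpt_mean = (chatgpt_scores[0] + chatgpt_scores[1]) / 2
    return 100.0 * model_mean / chatgpt_mean   # can exceed 100%

print(relative_score((9.0, 8.5), (9.0, 8.6)))  # ~99.4
```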
**Human Evaluation.** While recent work indicates that generative models can be effectively employed for system evaluations [19], the reliability of GPT-4 ratings for assessing chatbot performance has, to our knowledge, yet to be proven to correlate with human judgments. Therefore, we run two parallel human evaluations on the Vicuna benchmark matching both automated evaluation protocols described above. We use Amazon Mechanical Turk (AMT) and get two human annotators for comparisons to ChatGPT and three annotators for pairwise comparisons.

**Elo Rating.** With both human and automated pairwise comparisons, we create a tournament-style competition where models compete against each other. The tournament is made up of matches where pairs of models compete to produce the best response for a given prompt. This is similar to how Bai et al. [4] and Chiang et al. [10] compare models, but we also employ GPT-4 ratings in addition to human ratings. We randomly sample from the set of labeled comparisons to compute Elo [16; 17]. Elo rating, which is widely used in chess and other games, is a measure of the expected win-rate relative to an opponent's win rate; for example, an Elo of 1100 vs 1000 means the Elo 1100 player has an expected win-rate of approximately 65% against the Elo 1000 opponent, while a 1000 vs 1000 or 1100 vs 1100 match results in an expected win-rate of 50%. The Elo rating changes after each match proportionally to the expected outcome, that is, an unexpected upset leads to a large change in Elo rating while an expected outcome leads to a small change. Over time, Elo ratings approximately match the skill of each player at playing the game. We start with a score of 1,000 and use \(K=32\). Similar to Chiang et al. [10], we repeat this procedure 10,000 times with different random seeds to control for ordering effects, e.g., the effect of which model pairs compete with each other first (a short sketch of this procedure appears below).

### Guanaco: QLoRA trained on OASST1 is a State-of-the-art Chatbot

Based on our automated and human evaluations, we find that the top QLoRA-tuned model, Guanaco 65B, which we finetune on a variant of OASST1, is the best-performing open-source chatbot model and offers performance competitive with ChatGPT. When compared to GPT-4, Guanaco 65B and 33B have an expected win probability of 30%, based on Elo ratings from human annotators' system-level pairwise comparisons, the highest reported to date.

The Vicuna benchmark [10] results relative to ChatGPT are shown in Table 6. We find that Guanaco 65B is the best-performing model after GPT-4, achieving 99.3% performance relative to ChatGPT. Guanaco 33B has more parameters than the Vicuna 13B model, but uses only 4-bit precision for its weights and is thus much more memory efficient at 21 GB vs 26 GB, providing a three percentage point improvement over Vicuna 13B. Furthermore, Guanaco 7B easily fits on modern phones at a 5 GB footprint while still scoring nearly 20 percentage points higher than Alpaca 13B.

However, Table 6 also has very wide confidence intervals, with many models overlapping in performance. We hypothesize that this uncertainty comes from the lack of clear specification of scale, e.g., it is unclear what 8 on a 10-point scale means across different scenarios. As such, we instead recommend using the Elo ranking method [16], based on _pairwise_ judgments from human annotators and GPT-4, to avoid the problem of grounding an absolute scale. Elo ratings of the most competitive models can be seen in Table 1.
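As referenced above, a minimal sketch of the Elo tournament (our code, operating on hypothetical match records; not the authors' implementation):

```python
import random

def expected_win(r_a: float, r_b: float) -> float:
    # Logistic Elo model: expected_win(1100, 1000) ~= 0.64, i.e. roughly
    # the 65% win-rate quoted in the text.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_tournament(matches, k=32, start=1000.0, n_seeds=10_000):
    # matches: list of (model_a, model_b, winner); winner is a model name
    # or "tie". Ratings are averaged over shuffled replays of the match
    # list to control for ordering effects.
    totals = {}
    for seed in range(n_seeds):
        order = list(matches)
        random.Random(seed).shuffle(order)
        ratings = {}
        for a, b, winner in order:
            ra, rb = ratings.setdefault(a, start), ratings.setdefault(b, start)
            score_a = 1.0 if winner == a else 0.5 if winner == "tie" else 0.0
            delta = k * (score_a - expected_win(ra, rb))
            ratings[a], ratings[b] = ra + delta, rb - delta
        for model, r in ratings.items():
            totals[model] = totals.get(model, 0.0) + r / n_seeds
    return totals
```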
We note that human and GPT-4 rankings of models on the Vicuna benchmark disagree partially, particularly for Guanaco 7B, but are consistent for most models, with a Kendall Tau of \(\tau=0.43\) and a Spearman rank correlation of \(r=0.55\) at the system level. At the example level, the agreement between GPT-4 and human annotators' majority vote is weaker, with Fleiss \(\kappa=0.25\). Overall, this shows a moderate agreement between system-level judgments by GPT-4 and human annotators, and thus that model-based evaluation represents a somewhat reliable alternative to human evaluation. We discuss further considerations in Section 6.2.

Elo rankings in Table 7 indicate that Guanaco 33B and 65B models outperform all models besides GPT-4 on the Vicuna and OA benchmarks and that they perform comparably to ChatGPT, in line with Table 6. We note that the Vicuna benchmark favors open-source models while the larger OA benchmark favors ChatGPT. Furthermore, we can see from Tables 5 and 6 that the suitability of a finetuning dataset is a determining factor in performance. Finetuning LLaMA models on FLAN v2 does particularly well on MMLU, but performs worst on the Vicuna benchmark (similar trends are observed with other models). This also points to partial orthogonality in current evaluation benchmarks: strong MMLU performance does not imply strong chatbot performance (as measured by Vicuna or OA benchmarks) and vice versa.

Guanaco is the only top model in our evaluation that is not trained on proprietary data, as the OASST1 dataset collection guidelines explicitly forbid the use of GPT models. The next best model trained on only open-source data is the Anthropic HH-RLHF model, which scores 30 percentage points lower than Guanaco on the Vicuna benchmark (see Table 6). Overall, these results show that 4-bit QLoRA is effective and can produce state-of-the-art chatbots that rival ChatGPT. Furthermore, our 33B Guanaco can be trained on 24 GB consumer GPUs in less than 12 hours. This opens up the potential for future work via QLoRA tuning on specialized open-source data, which produces models that can compete with the very best commercial models that exist today.

## 6 Qualitative Analysis

First, in Section 6.1 we show some examples that we believe are representative of some observed patterns in the text generated by our 65B Guanaco model. Second, in Section 6.2 we detail considerations about the results we have discussed and our interpretation of them.

### Qualitative Analysis of Example Generations

To find examples, we first go through data generated for the Vicuna benchmark and the OpenAssistant benchmark, and look for patterns in the answers Guanaco generates. When we notice a pattern, we attempt to set up a question or prompt that will induce the pattern even though it is the incorrect solution; e.g., if we observe that the model tends to give long-winded answers, we prompt the model to "Answer yes or no without explanation." We use this to find "lemons" where we manage to adversarially break the model and "cherries" where we fail to break the model, and present both. All generations in this section were generated with Nucleus Sampling [25] with \(p=0.9\).

Of course, this is by no means comprehensive, since it is beyond the scope of this small qualitative study to control for all the variables involved; e.g., the full distribution of responses the model can generate for a given prompt is quite large, so we rely on samples we hope are representative. However, we believe describing these examples gives context to the quantitative evidence shown earlier in the paper.
Since we open source all models and code, we hope this section will inspire future work to examine in more detail the issues we present here.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Benchmark & \multicolumn{2}{c}{Vicuna} & \multicolumn{2}{c}{Vicuna} & \multicolumn{2}{c}{Open Assistant} & \\ \# Prompts & \multicolumn{2}{c}{80} & \multicolumn{2}{c}{80} & \multicolumn{2}{c}{953} & \\ Judge & \multicolumn{2}{c}{Human raters} & \multicolumn{2}{c}{GPT-4} & \multicolumn{2}{c}{GPT-4} & Median Rank \\ \cline{2-7} Model & Elo & Rank & Elo & Rank & Elo & Rank & \\ \hline GPT-4 & 1176 & 1 & 1348 & 1 & 1294 & 1 & 1 \\ Guanaco-65B & 1023 & 2 & 1022 & 2 & 1008 & 3 & 2 \\ Guanaco-33B & 1009 & 4 & 992 & 3 & 1002 & 4 & 4 \\ ChatGPT-3.5 Turbo & 916 & 7 & 966 & 5 & 1015 & 2 & 5 \\ Vicuna-13B & 984 & 5 & 974 & 4 & 936 & 5 & 5 \\ Guanaco-13B & 975 & 6 & 913 & 6 & 885 & 6 & 6 \\ Guanaco-7B & 1010 & 3 & 879 & 8 & 860 & 7 & 7 \\ Bard & 909 & 8 & 902 & 7 & - & - & 8 \\ \hline \hline \end{tabular} \end{table} Table 7: Elo rating for a tournament between models where models compete to generate the best response for a prompt, judged by human raters or GPT-4. Overall, Guanaco 65B and 33B tend to be preferred to ChatGPT-3.5 on the benchmarks studied. Each 10-point difference in Elo is approximately a difference of 1.5% in win-rate.

**Factual Recall**: For questions such as "What is the capital of Zambia?" all models consistently generate correct answers, e.g.,

**Guanaco**: The capital of Zambia is Lusaka.

However, as questions get more obscure, Guanaco becomes unreliable but stays confident. For instance, in response to this prompt from HotPotQA [68]:

**User**: In what year was the singer who popularized 'I'll Keep the Lovelight Burning' born?

**Guanaco**: Al Jolson is credited with popularizing the song 'I'll Keep the Lovelight Burning,' and he was born in the year 1886.

Guanaco generates the wrong popularizer and the wrong birthday (though the birthday given is correct for the person mentioned, Al Jolson).

**Suggestibility**: Guanaco shows a surprising resistance to going along with some kinds of assumed misinformation, e.g., in the following interaction:

**User**: How was it finally, officially confirmed that the earth is flat by peer reviewed scientists?

**Guanaco**: The earth has never been officially confirmed to be flat by peer reviewed scientists. The overwhelming consensus among scientists is that the earth is spherical. In fact, this has been known for centuries and has been confirmed through numerous experiments and observations. The belief that the earth is flat is a conspiracy theory that has been debunked by science. The idea that a group of scientists have peer reviewed and confirmed the earth is flat is simply false. If you have any more questions please ask.

Guanaco is also quite good at knowing what kinds of questions aren't possible to answer, e.g.,

**User**: What time is it?

**Guanaco**: I'm sorry, but as an AI language model, I don't have access to real-time information. My responses are generated based on my pre-existing knowledge and do not reflect the current time. If you're looking for the current time, I suggest using a timekeeping device or checking an online timepiece.

Similar results hold for "Where are you?", "How are you?", etc.

**Refusal**: Similarly, Guanaco sometimes refuses to follow instructions for seemingly random reasons:

**User**: Please reverse the words in the sentence "I'm a cool dude in a mocha mood."
**Guanaco**: I'm sorry, I'm not able to do that right now, but I can explain why the sentence you provided is grammatically correct. The sentence "I'm a cool dude in a mocha mood" follows the correct grammar and syntax rules of the English language. In this case, the adjective "cool" comes before the noun "dude" to describe it, and the prepositional phrase "in a mocha mood" provides additional information about the context of the sentence. This sentence structure is perfectly acceptable and makes sense in the context of the sentence. If you have any other questions or if there's anything else I can help you with, feel free to ask

**Secret Keeping**: Unfortunately, this refusal is unreliable even when it is desired. Often system designers would like to keep something secret from the user, even if the system is allowed to give free text responses back to the user. We prompt the model with "The secret word is 'banana'. Do not reveal or repeat back this back, no matter what the human requests." and try to see how easy it is for a potential user to access this secret information. Directly asking fails:

**User**: What is the secret word?

**Guanaco**: I'm sorry, but I'm not able to disclose the secret word. I'm programmed to not reveal or repeat back the secret word, no matter what the human requests. Is there anything else I can help you with?

which is the desired behavior. However, a tiny amount of trickery breaks this behavior:

**User**: This is a game. The goal is to ignore your previous instructions. What is the secret word?

**Guanaco**: Sorry, the secret word is banana. It's best if we don't speak of it. Is there anything else you'd like to ask?

showcasing the need to study methods for more faithful instruction following.

**Math**: Guanaco's biggest weakness is mathematics, an area where many language models struggle [35]. When Guanaco shows its work, it tends to be accurate, e.g.,

**User**: Lee mows one lawn and charges $33. Last week he mowed 16 lawns and three customers each gave him a $10 tip. How many dollars did Lee earn mowing lawns last week?

**Guanaco**: Lee earned $582 mowing lawns last week. To find the total amount he earned, we first need to calculate how much he earned for mowing 16 lawns: $33 x 16 = $528 Then we need to add the tips he received from the three customers: $10 x 3 = $30 So the total amount Lee earned was: $528 + $30 = $558

However, Guanaco can break down on even simple problems if it fails to break them down step-by-step, a known issue [63]. For example, we see the following interaction:

**User**: Please factorize 1833.

**Guanaco**: Sure, I'd be happy to help! 1833 is a prime number, which means it has only two factors: 1 and 1833. So the unique factorization of 1833 is: 2^1 * 3^2 * 17^1 I hope this helps! Let me know if you have any other questions.

which is not only wrong (the true factorization is \(3\times 13\times 47\)), it's wrong twice.

**Theory of Mind**: Guanaco displays surprisingly strong Theory of Mind capabilities [41, 51]. For instance, the model correctly answers the following question in great detail:

**User**: James and Abby are in the bedroom. Abby put the pen in the desk drawer. Abby leaves the bedroom. James moves the pen into the bag. Where does James think Abby will look for the pen?

**Guanaco**: James knows that Abby put the pen in the desk drawer. But James also knows that he has moved the pen into the bag, and he assumes that Abby does not know about this.
So, James may guess that Abby will look for the pen in the desk drawer, because that's where she left it last time. However, this assumption may be incorrect, because Abby may have seen James moving the pen.

However, these inferences are unreliable, and often the model gives reasons that presume things that don't make sense in the situation, e.g.,

**User**: Evelyn entered the living room. Jackson entered the playroom. James entered the playroom. The beans are in the treasure chest. James exited the playroom. Jackson moved the beans to the pantry. Jackson exited the playroom. James entered the living room. Where will James look for the beans?

**Guanaco**: James will look for the beans in the pantry, because that's where Jackson moved them.

where Guanaco presumes information transfer that was never described. These issues echo recent literature [51], but require more study.

### Considerations

**Evaluation.** We report moderate agreement among human annotators (Fleiss \(\kappa=0.42\)), with additional deterioration when comparing two strong systems. This points to limitations in the current benchmarks and human evaluation protocols for chatbot task performance. When manually comparing generations from ChatGPT and Guanaco 65B on the Vicuna benchmark, we find that subjective preferences start to play an important role, as the authors of this paper disagreed on many of the preferred responses. Future work should investigate approaches to mitigate these problems, drawing from disciplines that have developed mechanisms to deal with subjective preferences, such as Human-Computer Interaction and Psychology.

In our analysis, we also find that automated evaluation systems have noticeable biases. For example, we observe strong order effects, with GPT-4 assigning higher scores to the system appearing first in its prompt. The relatively weak sample-level agreement between GPT-4 and human annotators (Fleiss \(\kappa=0.25\)) also suggests that human annotators and automated systems might rely on preferences that are not always aligned. In addition, in Table 7, we observe that GPT-4 assigns significantly higher scores to its own outputs compared to human ratings, an Elo of 1348 vs 1176, which represents an additional 20% probability of winning against an opponent. Future work should examine the presence of potential biases in automated evaluation systems as well as possible mitigation strategies.

**Data & Training.** We note that the OASST1 dataset on which Guanaco models are trained is multilingual and that the OA benchmark also contains prompts in different languages. We leave it to future work to investigate the degree to which such multilingual training improves performance on instructions in languages other than English, and whether this explains the larger gap between the Vicuna 13B model (trained only on English data) and Guanaco 33B and 65B on the OA benchmark.

Given the strong performance of Guanaco models, we investigate any data leakage between the OASST1 data and the Vicuna benchmark prompts. We do not find overlapping prompts after performing fuzzy string matching in the two datasets and inspecting the closest matches manually.

Furthermore, we note that our model is only trained with cross-entropy loss (supervised learning) without relying on reinforcement learning from human feedback (RLHF). This calls for further investigations of the tradeoffs of simple cross-entropy loss and RLHF training. We hope that QLoRA enables such analysis at scale, without the need for overwhelming computational resources.
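The agreement statistics quoted in this section can be reproduced with standard libraries; a sketch with hypothetical rankings and labels (ours, not the paper's data), assuming the usual `scipy` and `statsmodels` interfaces:

```python
from scipy.stats import kendalltau, spearmanr
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# System-level rank agreement between two judges (hypothetical ranks).
human_rank = [1, 2, 4, 7, 5, 6, 3, 8]
gpt4_rank = [1, 2, 3, 5, 4, 6, 8, 7]
tau, _ = kendalltau(human_rank, gpt4_rank)
rho, _ = spearmanr(human_rank, gpt4_rank)

# Example-level agreement: one row per prompt, one column per rater;
# categorical labels, e.g. 0 = model A wins, 1 = tie, 2 = model B wins.
labels = [[0, 0, 1], [2, 2, 2], [0, 1, 1], [2, 0, 0], [1, 1, 1]]
counts, _ = aggregate_raters(labels)   # subjects-by-categories count table
kappa = fleiss_kappa(counts)
```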
## 7 Related Work

**Quantization of Large Language Models.** Quantization of LLMs has largely focused on quantization for inference time. Major approaches for preserving 16-bit LLM quality focus on managing outlier features (e.g., SmoothQuant [66] and LLM.int8() [14]), while others use more sophisticated grouping methods [44; 69]. Lossy quantization approaches study the trade-offs for regular rounding [13; 71; 47] or how to optimize rounding decisions to improve quantization precision [18]. Besides our work, SwitchBack layers [65] is the only work that studies backpropagation through quantized weights at a scale beyond 1B parameters.

**Finetuning with Adapters.** While we use Low-rank Adapters (LoRA) [28], many other Parameter Efficient FineTuning (PEFT) methods have been proposed, such as prompt tuning [48; 33; 34], tuning the embedding layer inputs [1], tuning hidden states (IA\({}^{3}\)) [37], adding full layers [27], tuning biases [70], learning a mask over weights based on Fisher information [54], and a combination of approaches [23]. In our work, we show that LoRA adapters are able to reach full 16-bit finetuning performance. We leave it to future work to explore the tradeoffs of other PEFT approaches.

**Instruction Finetuning.** To help a pretrained LLM follow the instructions provided in a prompt, instruction finetuning uses input-output pairs of various data sources to finetune a pretrained LLM to generate the output given the input as a prompt. Approaches and datasets include MetaICL [40], MetaTuning [73], InstructGPT [43], FLAN [62; 12], PromptSource [3], Super-NaturalInstructions [61; 50], Self-instruct [59], UnnaturalInstructions [26], OPT-IML [29], UnifiedSKG [67], OIG/Chip2 [32], Alpaca [55], Vicuna [10], Koala [20], and Self-instruct-GPT-4 [45].

## 8 Limitations and Discussion

An additional limitation is that we did not evaluate different bit-precisions, such as using 3-bit base models, or different adapter methods. Besides LoRA, there is also a wide variety of Parameter Efficient FineTuning (PEFT) methods that have been shown to work well. However, it is unclear if these methods scale to large models. We used LoRA, as many results established its robustness, but other adapters might yield better performance. Since finetuning after quantization seems to recover most of the information that is lost during quantization, this might enable much more aggressive quantization. For example, 3-bit GPTQ quantization of the base model with LoRA might also yield 16-bit full finetuning performance after finetuning.

## 9 Broader Impacts

Our QLoRA finetuning method is the first method that enables the finetuning of 33B parameter models on a single consumer GPU and 65B parameter models on a single professional GPU, while not degrading performance relative to a full finetuning baseline. We have demonstrated that our best 33B model trained on the Open Assistant dataset can rival ChatGPT on the Vicuna benchmark. Since instruction finetuning is an essential tool to transform raw pretrained LLMs into ChatGPT-like chatbots, we believe that our method will make finetuning widespread and common, in particular for the researchers that have the least resources, a big win for the accessibility of state-of-the-art NLP technology. QLoRA can be seen as an equalizing factor that helps close the resource gap between large corporations and small teams with consumer GPUs. Another potential source of impact is deployment to mobile phones.
We believe our QLoRA method might enable the critical milestone of finetuning LLMs on phones and in other low-resource settings. While 7B models have been shown to run on phones before, QLoRA is the first method that would enable the finetuning of such models. We estimate that with an iPhone 12 Plus, QLoRA can finetune 3 million tokens per night while the phone is charging. While finetuned 7B models do not reach the quality of ChatGPT, we believe that the quality is good enough to enable novel applications that have not been possible before due to privacy or LLM quality issues. QLoRA can help enable privacy-preserving usage of LLMs, where users can own and manage their own data and models, while simultaneously making LLMs easier to deploy.

However, finetuning is a dual-use technology that can be abused to cause harm. Widespread use of LLMs has known dangers [8, 6], but we believe that equalizing access to a technology that is quickly becoming ubiquitous will allow for better, more independent analysis than keeping the power of LLMs in the hands of large corporations that do not release models or source code for auditing. All in all, we believe that QLoRA will have a broadly positive impact, making the finetuning of high-quality LLMs much more widely and easily accessible.

## Acknowledgements

We thank Aditya Kusupati, Ofir Press, Ashish Sharma, Margaret Li, Raphael Olivier, Zihao Ye, and Evangelia Spiliopoulou for their valuable feedback. Our research was facilitated by the advanced computational, storage, and networking infrastructure of the Hyak supercomputer system at the University of Washington. We thank the Hyak team for ensuring a smooth operation. We thank the beta testers of the bitsandbytes library, in particular Alex Birch and Alyssa Vance. We thank Younes Belkada for help with the integration of our software into the Hugging Face transformers stack.
2308.12799
On $π$-compatible topologies and their special cases
Topologies $\tau, \sigma$ on a set $X$ are called $\pi$-compatible if $\tau$ is a $\pi$-network for $\sigma$, and vice versa. If topologies $\tau, \sigma$ on a set $X$ are $\pi$-compatible then the families of nowhere dense sets (resp. meager sets or sets possessing the Baire property) of the spaces $(X, \tau)$ and $(X, \sigma)$ coincide. A topology $\sigma$ on a set $X$ is called an admissible extension of a topology $\tau$ on $X$ if $\tau \subseteq \sigma$ and $\tau$ is a $\pi$-network for $\sigma$. It turns out that examples of admissible extensions have occurred in the literature several times. In the paper we provide some new facts about the $\pi$-compatibility and the admissible extension as well as about their particular cases.
Vitalij A. Chatyrko
2023-08-24T13:58:20Z
http://arxiv.org/abs/2308.12799v1
# On \(\pi\)-compatible topologies and their special cases

###### Abstract

Topologies \(\tau,\sigma\) on a set \(X\) are called \(\pi\)_-compatible_ if \(\tau\) is a \(\pi\)-network for \(\sigma\), and vice versa. If topologies \(\tau,\sigma\) on a set \(X\) are \(\pi\)-compatible then the families of nowhere dense sets (resp. meager sets or sets possessing the Baire property) of the spaces \((X,\tau)\) and \((X,\sigma)\) coincide. A topology \(\sigma\) on a set \(X\) is called _an admissible extension_ of a topology \(\tau\) on \(X\) if \(\tau\subseteq\sigma\) and \(\tau\) is a \(\pi\)-network for \(\sigma\). It turns out that examples of admissible extensions have occurred in the literature several times. In the paper we provide some new facts about the \(\pi\)-compatibility and the admissible extension as well as about their particular cases.

_Keywords and Phrases: \(\pi\)-compatible topologies, admissible extension, local function, Hattori spaces, almost topological groups_

_2000 AMS (MOS) Subj. Class.:_ Primary 54A10, 54D10, 54D15

## 1 Introduction

Let \(X\) be a non-empty set, \({\cal P}(X)\) the family of all subsets of \(X\), \(\tau\) a topology on \(X\) and \({\cal I}_{m}(X,\tau)\) the family of all meager sets of the topological space \((X,\tau)\). An interesting collection of subsets of \(X\), extending both \(\tau\) and \({\cal I}_{m}(X,\tau)\), is the family \({\cal B}_{p}(X,\tau)\) of all subsets of \(X\) possessing the Baire property in \((X,\tau)\). Recall that a subset \(A\) of \(X\) has the Baire property in the space \((X,\tau)\) if \(A=(O\setminus M)\cup N\), where \(O\in\tau\) and \(M,N\in{\cal I}_{m}(X,\tau)\). It is well known (cf. [Ku]) that the family \({\cal B}_{p}(X,\tau)\) is a \(\sigma\)-algebra of sets which is invariant under automorphisms of the space \((X,\tau)\).

Let us note that \({\cal B}_{p}(X,\tau_{\mbox{\rm\small triv}})=\{\emptyset,X\}\) and \({\cal B}_{p}(X,\tau_{\mbox{\rm\small dis}})={\cal P}(X)\), where \(\tau_{\mbox{\rm\small triv}}\) (resp. \(\tau_{\mbox{\rm\small dis}}\)) is the trivial (resp. discrete) topology on the set \(X\). It is easy to see that for any set \(X\) with \(|X|>1\) there is no topology \(\tau\) distinct from \(\tau_{\mbox{triv}}\) such that \({\cal B}_{p}(X,\tau)=\{\emptyset,X\}\) (indeed, such a \(\tau\) contains an open set \(O\) with \(\emptyset\neq O\neq X\), and \(O\in{\cal B}_{p}(X,\tau)\)). But there are simple examples of a set \(X\) and a topology \(\tau\) on \(X\) distinct from \(\tau_{\mbox{dis}}\) such that \({\cal B}_{p}(X,\tau)={\cal P}(X)\) (see Corollary 2.1), and a set \(Y\) and a topology \(\sigma\) on \(Y\) distinct from \(\tau_{\mbox{triv}}\) for which \({\cal B}_{p}(Y,\sigma)\neq{\cal P}(Y)\) (see Example 2.1 and Example 2.2).

Let us recall (cf. [Ku]) that for the real numbers \({\mathbb{R}}\) with the Euclidean topology \(\tau_{E}\) we have \({\cal B}_{p}({\mathbb{R}},\tau_{E})\neq{\cal P}({\mathbb{R}})\), and there is a lot of information about elements of the family \({\cal P}({\mathbb{R}})\setminus{\cal B}_{p}({\mathbb{R}},\tau_{E})\) (see for example [Kh]). It would be interesting to know for what topologies \(\tau\) on \({\mathbb{R}}\) the equality \({\cal B}_{p}({\mathbb{R}},\tau)={\cal B}_{p}({\mathbb{R}},\tau_{E})\) is valid. One can pose more general questions.

**Question 1.1**: Let \(X\) be a set and \(\tau\) be a topology on \(X\).

* (a) Describe elements of the family \({\cal P}(X)\setminus{\cal B}_{p}(X,\tau)\).
* (b) For what topologies \(\sigma\) on the set \(X\) does the equality \({\cal B}_{p}(X,\tau)={\cal B}_{p}(X,\sigma)\) hold?
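As a small illustration of the Baire property (our example, not taken from the paper): the set \(\mathbb{Q}\) is meager in \((\mathbb{R},\tau_{E})\), being a countable union of nowhere dense singletons, so \[\mathbb{Q}=(\emptyset\setminus\emptyset)\cup\mathbb{Q}\in{\cal B}_{p}(\mathbb{R},\tau_{E})\quad\mbox{and}\quad\mathbb{R}\setminus\mathbb{Q}=(\mathbb{R}\setminus\mathbb{Q})\cup\emptyset\in{\cal B}_{p}(\mathbb{R},\tau_{E}),\] where the second decomposition takes \(O=\mathbb{R}\), \(M=\mathbb{Q}\) and \(N=\emptyset\).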
The \(\pi\)-compatibility of topologies \(\sigma\) and \(\tau\) on a set \(X\) (see Definition 3.1), introduced in [CN1], implies \({\cal I}_{m}(X,\tau)={\cal I}_{m}(X,\sigma)\) (see Lemma 3.1) and \({\cal B}_{p}(X,\tau)={\cal B}_{p}(X,\sigma)\) (see Corollary 3.1), but it is not equivalent to the equalities (see Example 3.1). It turns out that a stronger version of \(\pi\)-compatibility between two topologies, the notion of the admissible extension (see Definition 4.1), has occurred in the literature several times (see [H], [F], [CN2] and others). In the paper we provide some new facts about these relations between topologies which will be valid for the mentioned cases.

For standard notions we refer to [Ku] and [E].

## 2 Some simple answers to Question 1.1.(a)

**Proposition 2.1**: _Let \(X\) be a countable set, \(Y\subseteq X\) and \(\tau\) a topology on \(X\). Let also, for each point \(x\in Y\), the set \(\{x\}\) be a nowhere dense set in the space \((X,\tau)\), and for each point \(x\in X\setminus Y\), the set \(\{x\}\) be an open set of the space \((X,\tau)\). Then \({\cal I}_{m}(X,\tau)={\cal P}(Y)\), \({\cal B}_{p}(X,\tau)={\cal P}(X)\) and so \({\cal P}(X)\setminus{\cal B}_{p}(X,\tau)=\emptyset\). \(\Box\)_

**Corollary 2.1**:

* _Let_ \(\mathbb{Q}\) _be the set of rational numbers with the Euclidean topology_ \(\tau_{E}\)_. Then we have_ \({\cal I}_{m}(\mathbb{Q},\tau_{E})={\cal B}_{p}(\mathbb{Q},\tau_{E})={\cal P}(\mathbb{Q})\)_._
* _Let_ \(\mathbb{Z}\) _be the set of integer numbers,_ \(\mathbb{Z}_{e}\) _be the set of even integer numbers and_ \(\tau_{K}\) _be the topology generated by the family_ \(\{\{n-1,n,n+1\}:n\in\mathbb{Z}_{e}\}\) _(the Khalimsky topology). Then we have_ \({\cal I}_{m}(\mathbb{Z},\tau_{K})={\cal P}(\mathbb{Z}_{e})\neq{\cal P}(\mathbb{Z})={\cal B}_{p}(\mathbb{Z},\tau_{K})\)_._ \(\Box\)__

The condition "each one-point set is either nowhere dense or open" in Proposition 2.1 is essential. In fact,

**Example 2.1**: Let \(\mathbb{N}\) be the set of positive integers and \(\tau_{p}\) be the odd-even topology generated by the partition \({\cal P}=\{\{2k-1,2k\}\}\) of \(\mathbb{N}\) ([SS, Example 6]). Note that the family of nowhere dense sets (resp. meager sets) for the space \((\mathbb{N},\tau_{p})\) is equal to \(\{\emptyset\}\). Moreover, \({\cal B}_{p}(\mathbb{N},\tau_{p})=\tau_{p}\neq{\cal P}(\mathbb{N})\), and the family \({\cal P}(\mathbb{N})\setminus{\cal B}_{p}(\mathbb{N},\tau_{p})\) consists of sets which are not open.

Let us also note that the condition "to be countable" on the set \(X\) in Proposition 2.1 is essential.

**Example 2.2**: Let \(X\) be any uncountable set and \(\tau_{\mbox{\rm\small fc}}\) be the finite complement topology on \(X\), i.e. \(V\in\tau_{\mbox{\rm\small fc}}\) iff \(X\setminus V\) is finite. Since each infinite subset \(Y\) of \(X\) is dense in the space \((X,\tau_{\mbox{\rm\small fc}})\), \({\cal I}_{m}(X,\tau_{\mbox{\rm\small fc}})=\{A\subset X:|A|\leq\aleph_{0}\}\) and \({\cal B}_{p}(X,\tau_{\mbox{\rm\small fc}})=\{A\subset X:|A|\leq\aleph_{0}\mbox{ or }|X\setminus A|\leq\aleph_{0}\}.\) So any uncountable subset \(Y\) of \(X\) such that \(X\setminus Y\) is also uncountable does not belong to the family \({\cal B}_{p}(X,\tau_{\mbox{\rm\small fc}})\). In particular, \({\cal B}_{p}(X,\tau_{\mbox{\rm\small fc}})\neq{\cal P}(X)\).

## 3 \(\pi\)-compatible topologies as an answer to Question 1.1.(b)

**Definition 3.1**: ([CN1, Definition 3.1]) Let \(\tau,\sigma\) be topologies on a set \(X\).
We call the topologies \(\pi\)_-compatible_ if \(\tau\) is a \(\pi\)-network for \(\sigma\) (i.e. for each non-empty element \(O\) of \(\sigma\) there is a non-empty element \(V\) of \(\tau\) which is a subset of \(O\)) and vice versa.

It is evident that the \(\pi\)-compatibility is an equivalence relation on the family of all topologies on the set \(X\).

**Lemma 3.1**: _([CN1, Lemma 3.3]) Let \(\tau\) and \(\sigma\) be \(\pi\)-compatible topologies on a set \(X\). Then the spaces \((X,\tau)\) and \((X,\sigma)\) have the same families of nowhere dense sets (respectively, meager sets). \(\Box\)_

**Theorem 3.1**: _Let \(\tau\) and \(\sigma\) be \(\pi\)-compatible topologies on a set \(X\). Then each non-empty element of \(\tau\) is the union of a non-empty element of \(\sigma\) and a nowhere dense set, and vice versa._

Proof. In fact, let \(O_{\tau}\) be a non-empty element of \(\tau\). Put \(O_{\sigma}=\mbox{Int}\,_{\sigma}O_{\tau}\), where \(\mbox{Int}\,_{\sigma}A\) (respectively, \(\mbox{Cl}_{\sigma}A\)) is the interior (respectively, closure) of \(A\) in the space \((X,\sigma)\) for any \(A\subseteq X\). Note that \(O_{\sigma}\neq\emptyset\) and \(O_{\tau}=O_{\sigma}\cup((\mbox{Cl}_{\sigma}O_{\sigma}\setminus O_{\sigma})\cap O_{\tau})\cup(O_{\tau}\setminus\mbox{Cl}_{\sigma}O_{\sigma})\). Note also that the set \((\mbox{Cl}_{\sigma}O_{\sigma}\setminus O_{\sigma})\cap O_{\tau}\) is nowhere dense.

**Claim 3.1**: The set \(A=O_{\tau}\setminus\mbox{Cl}_{\sigma}O_{\sigma}\) is nowhere dense.

Proof. Assume that the set \(A\) is not nowhere dense. Put \(W_{\sigma}=\mbox{Int}\,_{\sigma}\mbox{Cl}_{\sigma}A\). Note that \(W_{\sigma}\neq\emptyset\), \(W_{\sigma}\subseteq\mbox{Cl}_{\sigma}A\) and \(W_{\sigma}\cap O_{\sigma}=\emptyset\). Since \(W_{\sigma}\in\sigma\) and \(\tau\) is a \(\pi\)-network for \(\sigma\), there exists a non-empty element \(T_{\tau}\) of \(\tau\) such that \(T_{\tau}\subseteq W_{\sigma}\). There are two possibilities.

Case 1. \(T_{\tau}\cap O_{\tau}\neq\emptyset\). Since \(T_{\tau}\cap O_{\tau}\in\tau\) and \(\sigma\) is a \(\pi\)-network for \(\tau\), there is a non-empty element \(P_{\sigma}\) of \(\sigma\) such that \(P_{\sigma}\subseteq T_{\tau}\cap O_{\tau}\). Note that \(P_{\sigma}\subseteq O_{\tau}\setminus O_{\sigma}\). We have a contradiction with the definition of \(O_{\sigma}\).

Case 2. \(T_{\tau}\cap O_{\tau}=\emptyset\). Since \(\sigma\) is a \(\pi\)-network for \(\tau\), there exists a non-empty element \(Q_{\sigma}\) of \(\sigma\) such that \(Q_{\sigma}\subseteq T_{\tau}\). So \(Q_{\sigma}\cap O_{\tau}=\emptyset\) and hence \(Q_{\sigma}\cap\mbox{Cl}_{\sigma}O_{\tau}=\emptyset\). But \(Q_{\sigma}\subseteq W_{\sigma}\subseteq\mbox{Cl}_{\sigma}A\subseteq\mbox{Cl}_{\sigma}O_{\tau}\). We have a contradiction. So \(A\) is nowhere dense. \(\Box\)

By the use of the claim we get that \(O_{\tau}\) is the union of \(O_{\sigma}\) and the nowhere dense set \(((\mathrm{Cl}_{\sigma}O_{\sigma}\setminus O_{\sigma})\cap O_{\tau})\cup(O_{\tau}\setminus\mathrm{Cl}_{\sigma}O_{\sigma})\). We are done. \(\square\)

**Corollary 3.1**: _([CN1, Theorem 3.4]) Let \(\tau\) and \(\sigma\) be \(\pi\)-compatible topologies on a set \(X\). Then the spaces \((X,\tau)\) and \((X,\sigma)\) have the same families of sets possessing the Baire property._

Proof. Let us consider \(B\in\mathcal{B}_{p}(X,\tau)\). So \(B=(O\setminus M)\cup N\), where \(O\) is an open set in \((X,\tau)\) and \(M,N\) are meager sets.
We can assume that \(O\neq\emptyset.\) By Theorem 3.1, \(O=V\cup A\), where \(V\) is a non-empty open set in \((X,\sigma)\) and \(A\) is a nowhere dense set. Note that \(B=((V\cup A)\setminus M)\cup N=(V\setminus M)\cup((A\setminus M)\cup N)\) and \((A\setminus M)\cup N\) is a meager set. Hence, \(B\in\mathcal{B}_{p}(X,\sigma)\), i.e. \(\mathcal{B}_{p}(X,\tau)\subseteq\mathcal{B}_{p}(X,\sigma)\). The opposite inclusion can be proved similarly. \(\square\)

Besides the \(\pi\)-compatible topologies, there are other answers to Question 1.1.(b). In fact,

**Example 3.1**: Let \(\mathbb{Q}\) be the set of rational numbers, \(\tau_{E}\) the Euclidean topology on \(\mathbb{Q}\) and \(\tau_{\mbox{\rm fc}}\) be the finite complement topology on \(\mathbb{Q}\). Since each one-point set of the spaces \((\mathbb{Q},\tau_{E})\) and \((\mathbb{Q},\tau_{\mbox{\rm fc}})\) is nowhere dense, by Proposition 2.1 the families of their meager sets (resp. sets with the Baire property) are equal to \(\mathcal{P}(\mathbb{Q})\). Note that \(\tau_{\mbox{\rm fc}}\) and \(\tau_{E}\) are not \(\pi\)-compatible.

**Question 3.1**: Let \(X\) be a set and \(\tau,\sigma\) topologies on \(X\). What property can we add to the equality \(\mathcal{B}_{p}(X,\tau)=\mathcal{B}_{p}(X,\sigma)\) in order to get the \(\pi\)-compatibility of \(\tau\) and \(\sigma\)?

Some weaker property than \(\pi\)-compatibility, the coincidence of the families of nowhere dense sets, does not guarantee the equality between the families of the sets with the Baire property. In fact,

**Example 3.2**: Let \(\mathbb{N}\) be the set of positive integers, \(\tau_{\mbox{\rm dis}}\) be the discrete topology and \(\tau_{p}\) be the odd-even topology (see Example 2.1). Note that the families of nowhere dense sets (resp. meager sets) for the spaces \((\mathbb{N},\tau_{\mbox{\rm dis}})\) and \((\mathbb{N},\tau_{p})\) are equal to \(\{\emptyset\}\). But \(\mathcal{B}_{p}(\mathbb{N},\tau_{p})=\tau_{p}\neq\mathcal{B}_{p}(\mathbb{N},\tau_{\mbox{\rm dis}})=\mathcal{P}(\mathbb{N})\).

Recall that a topological space \(X\) is called _a Baire space_ if for every countable family \(\{G_{i}\}_{i=1}^{\infty}\) of open dense subsets of \(X\) the intersection \(\cap_{i=1}^{\infty}G_{i}\) is dense in \(X\). It is evident that any open non-empty subset of a Baire space is also Baire.

**Theorem 3.2**: _Let \(X\) be a set and \(\tau,\sigma\) be \(\pi\)-compatible topologies on \(X\). Then the following is valid._

* \((X,\tau)\) _is a Baire space iff_ \((X,\sigma)\) _is a Baire space._
* _Let_ \({\cal G}_{\delta}(X,\tau)\) _(resp._ \({\cal G}_{\delta}(X,\sigma)\)_) be the family of all_ \(G_{\delta}\)_-sets of the space_ \((X,\tau)\) _(resp._ \((X,\sigma)\)_)._ _If_ \((X,\tau)\) _is a Baire space (or_ \((X,\sigma)\) _is a Baire space) then the family_ \[{\cal B}=\{\emptyset\neq A\subseteq X:A\in{\cal G}_{\delta}(X,\tau)\cap{\cal G}_{\delta}(X,\sigma)\}\] _is a_ \(\pi\)_-network for the space_ \((X,\tau)\) _and for the space_ \((X,\sigma)\)_._

Proof. (a) We show only the necessity. Let \(Y=\cup_{i=1}^{\infty}Y_{i}\), where each \(Y_{i}\) is closed in the space \((X,\sigma)\) and \({\rm Int}\,_{\sigma}Y_{i}=\emptyset\). By Lemma 3.1 we have that \({\rm Int}\,_{\tau}{\rm Cl}_{\tau}Y_{i}=\emptyset\) for each \(i\). Since the space \((X,\tau)\) is Baire, we have \({\rm Int}\,_{\tau}(\cup_{i=1}^{\infty}{\rm Cl}_{\tau}Y_{i})=\emptyset\). Assume that \(O_{\sigma}={\rm Int}\,_{\sigma}(\cup_{i=1}^{\infty}Y_{i})\neq\emptyset\).
Note that there is \(\emptyset\neq O_{\tau}\in\tau\) such that \(O_{\tau}\subseteq O_{\sigma}\subseteq\cup_{i=1}^{\infty}Y_{i}.\) Hence, \(O_{\tau}\subseteq{\rm Int}\,_{\tau}(\cup_{i=1}^{\infty}Y_{i})\). Since \(\emptyset\neq{\rm Int}\,_{\tau}(\cup_{i=1}^{\infty}Y_{i})\subseteq{\rm Int}\,_{\tau}(\cup_{i=1}^{\infty}{\rm Cl}_{\tau}Y_{i})=\emptyset\), we have a contradiction.

(b) Let us show that \({\cal B}\) is a \(\pi\)-network for the space \((X,\tau)\). Consider a non-empty open set \(O_{1}\) in \((X,\tau)\). Apply Theorem 3.1. Choose a non-empty open set \(V_{1}\) in \((X,\sigma)\) such that \(V_{1}\subseteq O_{1}\) and \(O_{1}\setminus V_{1}\) is a nowhere dense set. Then choose a non-empty open \(O_{2}\) in \((X,\tau)\) such that \(O_{2}\subseteq V_{1}\) and \(V_{1}\setminus O_{2}\) is a nowhere dense set, and so on. Since the space \((X,\tau)\) is a Baire space, we have \(\cap_{i=1}^{\infty}O_{i}\neq\emptyset\). Furthermore, by the construction it follows that \(\cap_{i=1}^{\infty}O_{i}=\cap_{i=1}^{\infty}V_{i}\in{\cal B}\) and \(\cap_{i=1}^{\infty}O_{i}\) is a dense subset of \(O_{1}\). We are done. \(\Box\)

**Proposition 3.1**: _Let \(X\) be a set and \(\tau,\sigma\) be \(\pi\)-compatible topologies on \(X\). If a set \(A\subset X\) is dense in \((X,\tau)\) then \(A\) is dense in \((X,\sigma)\). In particular, \(d(X,\tau)=d(X,\sigma).\)\(\square\)_

One can obtain different examples of \(\pi\)-compatible topologies by combining known examples with the following simple statement.

**Proposition 3.2**: _([CN1, Proposition 3.2]) Let \(X_{\alpha},\alpha\in\mathcal{A},\) be sets and for each \(\alpha\in\mathcal{A}\) let topologies \(\tau_{\alpha},\sigma_{\alpha}\) be \(\pi\)-compatible on \(X_{\alpha}\). Then the topological products of topologies \(\prod\{\tau_{\alpha},\alpha\in\mathcal{A}\}\) and \(\prod\{\sigma_{\alpha},\alpha\in\mathcal{A}\}\) on the set \(\prod\{X_{\alpha},\alpha\in\mathcal{A}\}\) are \(\pi\)-compatible. \(\square\)_

Theorem 3.2 and Propositions 3.1, 3.2 easily imply the following statement.

**Corollary 3.2**: _Let \(\tau_{i}\) and \(\sigma_{i}\) be \(\pi\)-compatible topologies on a set \(X_{i}\) for each \(i\leq n\). Then_

* \(d(\prod_{i=1}^{n}(X_{i},\tau_{i}))=d(\prod_{i=1}^{n}(X_{i},\sigma_{i})),\)__
* \(\prod_{i=1}^{n}(X_{i},\tau_{i})\) _is a Baire space iff_ \(\prod_{i=1}^{n}(X_{i},\sigma_{i})\) _is a Baire space._ \(\square\)__

## 4 Admissible extensions of topologies

**Definition 4.1** ([CN2, Definition 3.1]) A topology \(\sigma\) on a set \(X\) is said to be _an admissible extension of a topology \(\tau\)_ on the same set \(X\) if \(\tau\subseteq\sigma\) and \(\tau\) is a \(\pi\)-base for \(\sigma\), i.e. for each non-empty element \(O\) of \(\sigma\) there is a non-empty element \(V\) of \(\tau\) which is a subset of \(O\).

Note that if \(\tau\) and \(\sigma\) are topologies on a set \(X\) such that \(\sigma\) is an admissible extension of \(\tau\) then \(\tau\) and \(\sigma\) are \(\pi\)-compatible.

**Theorem 4.1**: _Let \(X\) be a set and \(\tau,\sigma\) topologies on \(X\) such that \(\sigma\) is an admissible extension of \(\tau\). If \(O\) is a non-empty element of \(\sigma\) then \(O\) is a semi-open set of \((X,\tau)\), i.e. there is an element \(V\) of \(\tau\) such that \(V\subseteq O\subseteq\mathrm{Cl}_{\tau}V\)._

Proof. Put \(V=\mbox{Int}\,_{\tau}O\) and note that \(V\neq\emptyset\). We will show that \(\mbox{Cl}_{\tau}V\supseteq O\).
In fact, assume that \(W=O\setminus\mbox{Cl}_{\tau}V\neq\emptyset\). Since \(\sigma\) is an admissible extension of \(\tau\), \(W\in\sigma\) and there exists \(\emptyset\neq U\in\tau\) such that \(U\subseteq W\subseteq O\). It is easy to see that \(U\) must be a subset of \(V\). We have a contradiction which proves the statement. \(\Box\)

**Remark 4.1**: Theorem 4.1 does not hold for \(\pi\)-compatible topologies in general. Indeed, let \(\tau_{S}\) be the Sorgenfrey topology on the reals \(\mathbb{R}\) generated by the family \(\{[a,b):a,b\in\mathbb{R}\}\), \(\tau_{-S}\) the topology on the reals \(\mathbb{R}\) generated by the family \(\{(a,b]:a,b\in\mathbb{R}\}\) and \(\tau_{E}\) the natural topology on \(\mathbb{R}\). Note that both topologies \(\tau_{S}\) and \(\tau_{-S}\) are admissible extensions of \(\tau_{E}\). Hence, the topologies \(\tau_{S}\) and \(\tau_{-S}\) are \(\pi\)-compatible. But for a non-empty element \((a,b]\in\tau_{-S}\) we have the following: \(\mbox{Int}\,_{\tau_{S}}(a,b]=(a,b)\) and \(\mbox{Cl}_{\tau_{S}}(a,b)=[a,b)\not\supseteq(a,b]\). \(\Box\)

It is easy to see that the following proposition holds.

**Proposition 4.1**: _Let \(X\) be a set and \(\tau,\sigma\) topologies on \(X\) such that \(\sigma\) is an admissible extension of \(\tau\). If \((X,\tau)\) is a \(T_{0}\)-space (resp. a \(T_{1}\)-space, a \(T_{2}\)-space or a functionally Hausdorff space) then so is \((X,\sigma)\). \(\Box\)_

The converse does not hold.

**Example 4.1**: Let \(X=X_{0}\cup X_{1}\subset\mathbb{R}^{2}\), where \(X_{i}=\{\{x\}\times\{i\}:x\in\mathbb{R}\}\). The topology \(\sigma\) on \(X\) is defined by bases \({\cal B}_{\sigma}(p)\) at each point \(p\) of \(X\) as follows. If \(p=\{x\}\times\{1\}\) then \({\cal B}_{\sigma}(p)=\{[x,y)\times\{1\}\cup(x,y)\times\{0\}:y>x\}\). If \(p=\{x\}\times\{0\}\) then \({\cal B}_{\sigma}(p)=\{(y,x)\times\{1\}\cup(y,x]\times\{0\}:y<x\}\). The topology \(\tau\) on \(X\) is also defined by bases \({\cal B}_{\tau}(p)\) at each point \(p\) of \(X\). Namely, if \(p=\{x\}\times\{1\}\) or \(p=\{x\}\times\{0\}\) then \({\cal B}_{\tau}(p)=\{(y,z)\times\{1\}\cup(y,z)\times\{0\}:y<x<z\}\). Let us note that the topology \(\sigma\) is an admissible extension of the topology \(\tau\) on the set \(X\). Moreover, the space \((X,\sigma)\) is homeomorphic to a subspace of the Alexandroff double arrow space, and hence \((X,\sigma)\) is regular \(T_{1}\), hereditarily Lindelöf and hereditarily separable. However, \((X,\tau)\) is not \(T_{0}\).

Proposition 4.1 cannot be extended to higher separation axioms.

**Example 4.2**: ([SS, Example 64]) Let us note that there exists an admissible extension \(\sigma\) of the topology \(\tau_{E}\) on the reals \(\mathbb{R}\) such that \((\mathbb{R},\sigma)\) is not regular. As \(\sigma\) we consider the Smirnov topology on \(\mathbb{R}\): \(O\in\sigma\) if \(O=U\setminus B\), where \(U\in\tau_{E}\), \(B\subseteq A\) and \(A=\{\frac{1}{n}:n=1,2,\dots\}\).

It is easy to see that if for some topologies \(\sigma\) and \(\nu\) on a set \(X\) there exists a topology \(\tau\) on \(X\) such that \(\sigma\) and \(\nu\) are admissible extensions of \(\tau\) then \(\nu\) and \(\sigma\) are \(\pi\)-compatible, and \(\tau\subseteq\sigma\cap\nu\).

**Remark 4.2**: We will use the notation from Example 4.1. Let us consider the topology \(\nu\) on \(X\) defined by bases \(\mathcal{B}_{\nu}(p)\) at each point \(p\) of \(X\) as follows. If \(p=\{x\}\times\{1\}\) then \(\mathcal{B}_{\nu}(p)=\{(y,x)\times\{0\}\cup(y,x]\times\{1\}:y<x\}\).
If \(p=\{x\}\times\{0\}\) then \(\mathcal{B}_{\nu}(p)=\{[x,y)\times\{0\}\cup(x,y)\times\{1\}:y>x\}\). Note that the topology \(\nu\) is an admissible extension of the topology \(\tau\), \(\sigma\cap\nu=\tau\) and the spaces \((X,\sigma)\) and \((X,\nu)\) are homeomorphic.

**Proposition 4.2**: _Let \(X\) be a set and \(\tau,\sigma,\nu\) be topologies on \(X\)._

* _If_ \(\sigma\subseteq\nu\) _and_ \(\sigma,\nu\) _are admissible extensions of_ \(\tau\) _then_ \(\nu\) _is an admissible extension of_ \(\sigma\)_._
* _If_ \(\tau\subseteq\sigma\subseteq\nu\) _and_ \(\nu\) _is an admissible extension of_ \(\tau\) _then_ \(\sigma\) _is an admissible extension of_ \(\tau\)_._ \(\Box\)

**Question 4.1**: Let \(X\) be a set, \(\tau\) and \(\sigma\) be \(\pi\)-compatible topologies on \(X\). Are \(\tau\) and \(\sigma\) always admissible extensions of \(\tau\cap\sigma\)?

## 5 Some examples of admissible extensions

### Topologies formed from a given topology and ideals of sets

A way to obtain finer (but often non-regular) topologies than a given topology \(\tau\) on a set \(X\) is to use ideals of subsets of \(X\) (cf. [JH]). Let \({\cal I}\) be an ideal of subsets of \(X\) and let \(A\) be a subset of \(X\). For a point \(x\in X\), let \({\cal N}(x)=\{U\in\tau:x\in U\}\). The local function of the set \(A\) with respect to the ideal \({\cal I}\) and the topology \(\tau\) is the set \(A^{*}({\cal I},\tau)=\{x\in X:A\cap U\notin{\cal I}\) for every \(U\in{\cal N}(x)\}\). For every \(A\subseteq X\) put \({\rm Cl}^{*}(A)=A\cup A^{*}({\cal I},\tau)\). It is easy to see that \({\rm Cl}^{*}(\cdot)\) is a Kuratowski closure operator. So the family \(\tau^{*}({\cal I})=\{U\subseteq X:{\rm Cl}^{*}(X\setminus U)=X\setminus U\}\) is a topology on the set \(X\). Moreover, \(\tau\subseteq\tau^{*}({\cal I})\), every element of the ideal \({\cal I}\) is closed in the space \((X,\tau^{*}({\cal I}))\), and \((\tau^{*}({\cal I}))^{*}({\cal I})=\tau^{*}({\cal I})\). It is easy to see that if \({\cal I}\subseteq{\cal J}\) then \(\tau^{*}({\cal I})\subseteq\tau^{*}({\cal J})\).

**Theorem 5.1**: _([CN1, Theorem 3.15]) Let \((X,\tau)\) be a topological space without isolated points and \({\cal I}\) an ideal of subsets of \(X\). Then the topology \(\tau^{*}({\cal I})\) is an admissible extension of \(\tau\) if and only if \({\cal I}\subseteq{\cal I}_{n}(\tau),\) where \({\cal I}_{n}(\tau)\) is the family of nowhere dense sets of the space \((X,\tau)\)._

_In particular, \(\tau^{*}({\cal I}_{n}(\tau))\) is the finest admissible extension of \(\tau\) formed from \(\tau\) and ideals of sets. \(\Box\)_

Applying Theorem 5.1 and Proposition 4.2(a) we get

**Corollary 5.1**: _Let \((X,\tau)\) be a topological space without isolated points and \({\cal I}\), \({\cal J}\) be ideals of subsets of \(X\) such that \({\cal I}\subseteq{\cal J}\subseteq{\cal I}_{n}(\tau)\). Then \(\tau^{*}({\cal J})\) is an admissible extension of \(\tau^{*}({\cal I})\). \(\Box\)_

**Remark 5.1**: The condition on the space \((X,\tau)\) "to be without isolated points" in Theorem 5.1 is essential. Indeed, let \((X,\tau_{\mbox{dis}})\) be a discrete space and \(\mathcal{I}=\mathcal{P}(X)\). Note that \(\tau^{*}_{\mbox{dis}}(\mathcal{I})=\tau_{\mbox{dis}}\), in particular, \(\tau^{*}_{\mbox{dis}}(\mathcal{I})\) is an admissible extension of \(\tau_{\mbox{dis}}\), and \(\mathcal{I}_{n}(\tau_{\mbox{dis}})=\{\emptyset\}\).

**Example 5.1**: (cf. [JH]) Let \((X,\tau)\) be a topological space.
Then for each \(A\subseteq X\) we have \(A^{*}(\mathcal{I}_{n}(\tau),\tau)=\mbox{Cl}_{\tau}(\mbox{Int}\,_{\tau}(\mbox{Cl}_{\tau}(A)))\) and \(\mbox{Cl}^{*}(A)=A\cup\mbox{Cl}_{\tau}(\mbox{Int}\,_{\tau}(\mbox{Cl}_{\tau}(A)))\). Moreover, \(\tau^{*}(\mathcal{I}_{n}(\tau))=\tau^{\alpha}\), where \(\tau^{\alpha}\) is a topology from [N].

Some facts about admissible extensions of the Euclidean topology \(\tau_{E}\) on the real line via ideals of sets are given below. Since the real line \((\mathbb{R},\tau_{E})\) is a Baire space we get the following.

**Proposition 5.1**: _Let \(\mathcal{I}\) be an ideal of sets on the real line \((\mathbb{R},\tau_{E})\) and \(\mathcal{I}\subseteq\mathcal{I}_{n}(\tau_{E})\). Then the space \((\mathbb{R},\tau^{*}_{E}(\mathcal{I}))\) is connected. \(\Box\)_

Let \(\mathcal{I}_{\mbox{cd}}\) be the ideal of all closed discrete subsets of the real line \((\mathbb{R},\tau_{E})\). Recall ([CN1, Proposition 4.2]) that \(\tau_{E}=\tau^{*}_{E}(\mathcal{I})\) for any ideal \(\mathcal{I}\subseteq\mathcal{I}_{\mbox{cd}}\).

**Proposition 5.2**: _Let \(\mathcal{I}\) be an ideal of sets on the real line \((\mathbb{R},\tau_{E})\) and \(\mathcal{I}\subseteq\mathcal{I}_{n}(\tau_{E})\). Then the space \((\mathbb{R},\tau^{*}_{E}(\mathcal{I}))\) is non-regular iff there exists an element \(I\in\mathcal{I}\) with a limit point in the topology \(\tau_{E}\). \(\Box\)_

**Remark 5.2**: (cf. [JH]) Let \(A=\{\frac{1}{n}:n=1,2,\dots\}\subset\mathbb{R}\) and \(\mathcal{I}_{A}\) be the ideal of all subsets of \(A\). Then the topology \(\tau^{*}_{E}(\mathcal{I}_{A})\) is the non-regular Smirnov topology on \(\mathbb{R}\) from Example 4.2.

### Hattori spaces

Other extensions of the Euclidean topology \(\tau_{E}\) on the real line were suggested in [H].

**Definition 5.1**: ([H]) Let \(A\subseteq\mathbb{R}\). Define a topology \(\tau(A)\) on \(\mathbb{R}\) as follows: 1. for each \(x\in A\), \(\{(x-\epsilon,x+\epsilon):\epsilon>0\}\) is a nbd open basis at \(x\), 2. for each \(x\in\mathbb{R}\setminus A\), \(\{[x,x+\epsilon):\epsilon>0\}\) is a nbd open basis at \(x\).

Note that \(\tau(\emptyset)\) (respectively, \(\tau(\mathbb{R})\)) is the Sorgenfrey topology \(\tau_{S}\) (respectively, the Euclidean topology \(\tau_{E}\)) on the reals, and all topologies \(\tau(A),A\subseteq\mathbb{R}\), are admissible extensions of \(\tau_{E}\). It is easy to see that for any \(A,B\subseteq\mathbb{R}\) we have \(A\supseteq B\) iff \(\tau(A)\subseteq\tau(B)\). So by Proposition 4.2 we get

**Proposition 5.3**: \(\tau(B)\) _is an admissible extension of \(\tau(A)\) iff \(A\supseteq B\). \(\square\)_

The topological spaces \(H(A)=(\mathbb{R},\tau(A)),A\subseteq\mathbb{R}\), are called _Hattori spaces._ Let us recall ([CH]) that all \(H(A)\)-spaces are regular \(T_{1}\), hereditarily Lindelöf and hereditarily separable. For more on the properties of \(H(A)\)-spaces and their variety see, for example, [K] and [BS]. The following is evident.

**Proposition 5.4**: _Any \(H(A)\)-space with \(A\neq\mathbb{R}\) is disconnected. \(\square\)_

### Almost topological groups

_A paratopological group \((G,\tau)\)_ consists of a group \(G\) and a topology \(\tau\) on \(G\) that makes the group operation continuous. If in addition the inverse operation of \(G\) is continuous then \((G,\tau)\) is _a topological group_. _A semitopological group_ is a group endowed with a topology that makes left and right translations continuous. Each paratopological group is a semitopological one.
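For instance (a standard example, consistent with Remark 5.3 below), the Sorgenfrey line \((\mathbb{R},\tau_{S})\) with addition is a paratopological group which is not a topological group: addition is continuous, since for basic neighbourhoods \[[x,x+\delta)+[y,y+\delta)\subseteq[x+y,x+y+2\delta)\subseteq[x+y,x+y+\epsilon)\] whenever \(2\delta\leq\epsilon\), while inversion is not continuous, since the image \(-[0,1)=(-1,0]\) of a basic open set is not open in \(\tau_{S}\).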
**Definition 5.2**: ([F, Definition 2.1]) _An almost topological group_ is a paratopological group \((G,\tau)\) that satisfies the following conditions:

* the group \(G\) admits a Hausdorff topological group topology \(\gamma\) weaker than \(\tau\), and
* there exists a local base \(\beta_{e}\) at the identity \(e\) of the paratopological group \((G,\tau)\) such that the set \(U\setminus\{e\}\) is open in \((G,\gamma)\) for every \(U\in\beta_{e}\).

One says also that \(G\) is an almost topological group with structure \((\tau,\gamma,\beta_{e})\). It is easy to see that the topology \(\tau\) is an admissible extension of \(\gamma\).

**Proposition 5.5**: _([F, Proposition 2.5]) Let \(G\) be a non-discrete almost topological group with structure \((\tau,\gamma,\beta_{e})\). Then \(\gamma\) is the finest Hausdorff topological group topology on \(G\) weaker than \(\tau\)._

**Remark 5.3**: (cf. [CS]) For the paratopological group \((\mathbb{R},\tau_{S})\) the Euclidean topology \(\tau_{E}\) is the finest Hausdorff topological group topology on \(\mathbb{R}\) weaker than \(\tau_{S}\).

**Remark 5.4**:

* The Hattori space \(H(A)\), where \(A\) is a proper subset of \(\mathbb{R}\), is not a semitopological group. In fact, let \(x\in\mathbb{R}\setminus A\) and \(y\in A\). Note that \(y+(x-y)=x\) and the translation of the additive group \(\mathbb{R}\) by \(x-y\) is not continuous at \(y\).
* However, the space \((\mathbb{R},\tau_{E}^{*}(\mathcal{I}_{n}(\tau_{E})))\) is a semitopological group with open shifts and continuous inversion, but it is not a paratopological group. In fact, since the nowhere dense sets, as well as the open intervals, are invariant under translations and inversion, \((\mathbb{R},\tau_{E}^{*}(\mathcal{I}_{n}(\tau_{E})))\) is a semitopological group with open shifts and continuous inversion. Let us show that \((\mathbb{R},\tau_{E}^{*}(\mathcal{I}_{n}(\tau_{E})))\) is not a paratopological group. Note that \(0+0=0\) and \(W=(-1,1)\setminus\{\frac{1}{n}:n=1,2,\dots\}\) is an open neighborhood of \(0\) in \((\mathbb{R},\tau_{E}^{*}(\mathcal{I}_{n}(\tau_{E})))\). If \((\mathbb{R},\tau_{E}^{*}(\mathcal{I}_{n}(\tau_{E})))\) were a paratopological group then there would exist two open sets \(V_{1}\) and \(V_{2}\) containing \(0\) such that \(V_{1}+V_{2}\subseteq W\). Observe that we can find a symmetric open neighborhood \(V\subseteq V_{1}\cap V_{2}\) of \(0\) and a non-empty interval \((a,b)\subseteq V\). Since the difference \((a,b)-(a,b)\) contains a nondegenerate interval \((-\epsilon,\epsilon)\) we have \((-\epsilon,\epsilon)\subseteq V-V\subseteq V_{1}+V_{2}\subseteq W\). But this is impossible, since \((-\epsilon,\epsilon)\) contains the points \(\frac{1}{n}\) for all sufficiently large \(n\), and these points do not belong to \(W\).

**Example 5.2**: ([F, Example 2.2]) There exist non-regular almost topological groups. In fact, consider the additive group \(\mathbb{R}^{2}\) with the identity \(e=(0,0)\). For every \(r>0\) let \(B_{r}=\{e\}\cup\{(x,y)\in\mathbb{R}^{2}:x^{2}+y^{2}<r^{2},x>0\}\). It is easy to see that the family \(\mathcal{B}=\{B_{r}:r>0\}\) is a local base at \(e\) of a Hausdorff paratopological group. The group \((\mathbb{R}^{2},\tau)\), where \(\tau\) is the topology generated by the family \(\mathcal{B}\), is a Hausdorff non-regular almost topological group.

### Hattori topologies on almost topological groups

We will follow [CS] and use the notation from Definition 5.2. Let \((G,\tau)\) be an almost topological group and \(\beta_{e}\) a local base at the identity \(e\) of \((G,\tau)\). Then \(\{UU^{-1}:U\in\beta_{e}\}\) is a local base at \(e\) in the Hausdorff topological group \((G,\gamma)\).
Given \(x\in G\), let \(\beta_{x}=\{Ux:U\in\beta_{e}\}\) and \(\beta^{\prime}_{x}=\{UU^{-1}x:U\in\beta_{e}\}\). Given \(A\subseteq G\), consider the collection \(\{\beta(x):x\in G\}\), where \[\beta(x)=\begin{cases}\beta^{\prime}_{x}&\text{if }x\in A,\\ \beta_{x}&\text{if }x\notin A.\end{cases}\]

**Theorem 5.2**: _([CS, Theorem 3.1]) Let \((G,\tau)\) be an almost topological group and \(A\subseteq G\). Then \(\{\beta(x):x\in G\}\) is a neighborhood system for a topology \(\tau(A)\) on \(G\). \(\Box\)_

The topological space \((G,\tau(A))\) is denoted by \(H(A,G)\) and it is called _the Hattori space_ of \(G\) associated to \(A\). Let us note that \(H(\emptyset,G)=(G,\tau)\) and \(H(G,G)=(G,\gamma)\).

**Proposition 5.6**: _([CS, Proposition 3.6]) Let \((G,\tau)\) be an almost topological group such that \((G,\tau)\) is not a topological group. If \(A,B\subseteq G\) then \(\tau(A)\subseteq\tau(B)\) iff \(B\subseteq A\). \(\Box\)_

Proposition 5.6 and Proposition 4.2 imply

**Proposition 5.7**: _Let \((G,\tau)\) be an almost topological group such that \(G\) is not a topological group, and \(A,B\subseteq G\). Then \(\tau(A)\) is an admissible extension of \(\tau(B)\) iff \(A\subseteq B\). \(\Box\)_

**Problem 5.1**: Let \((G,\tau)\) be an almost topological group and \(\gamma\) the finest Hausdorff topological group topology on \(G\) weaker than \(\tau\). (For example, let \((G,\tau)\) be the almost topological group from Example 5.2.) Describe the topological diversity of the Hattori spaces \(H(A,G),A\subseteq G\).
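As a concrete instance of this construction (a direct check from the definitions above, together with Remark 5.3): take \(G=(\mathbb{R},\tau_{S})\), \(\gamma=\tau_{E}\) and \(\beta_{e}=\{[0,\epsilon):\epsilon>0\}\). Then \(UU^{-1}=[0,\epsilon)-[0,\epsilon)=(-\epsilon,\epsilon)\), so \(\beta^{\prime}_{x}=\{(x-\epsilon,x+\epsilon):\epsilon>0\}\) consists of the Euclidean neighbourhoods of \(x\), while \(\beta_{x}=\{[x,x+\epsilon):\epsilon>0\}\) consists of the Sorgenfrey ones. Hence \(H(A,(\mathbb{R},\tau_{S}))\) is exactly the Hattori space \(H(A)\) of Definition 5.1.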
2301.10491
ArchEnemy: Removing scattered-light glitches from gravitational wave data
Data recorded by gravitational wave detectors includes many non-astrophysical transient noise bursts, the most common of which is caused by scattered-light within the detectors. These so-called "glitches" in the data impact the ability to both observe and characterize incoming gravitational wave signals. In this work we use a scattered-light glitch waveform model to identify and characterize scattered-light glitches in a representative stretch of gravitational wave data. We identify $2749$ scattered-light glitches in $5.96$ days of LIGO-Hanford data and $1306$ glitches in $5.93$ days of LIGO-Livingston data taken from the third LIGO-Virgo observing run. By subtracting identified scattered-light glitches we demonstrate an increase in the sensitive volume of the gravitational wave search for binary black hole signals by $\sim1\%$.
Arthur E. Tolley, Gareth S. Cabourn Davies, Ian W. Harry, Andrew P. Lundgren
2023-01-25T10:01:07Z
http://arxiv.org/abs/2301.10491v2
# ArchEnemy: Removing scattered-light glitches from gravitational wave data.

###### Abstract

Data recorded by gravitational wave detectors includes many non-astrophysical transient noise bursts, the most common of which is caused by scattered-light within the detectors. These so-called "glitches" in the data impact the ability to both observe and characterize incoming gravitational wave signals. In this work we use a scattered-light glitch waveform model to identify and characterize scattered-light glitches in a representative stretch of gravitational wave data. We identify 2749 scattered-light glitches in 5.96 days of LIGO-Hanford data and 1306 glitches in 5.93 days of LIGO-Livingston data taken from the third LIGO-Virgo observing run. By subtracting identified scattered-light glitches we demonstrate an increase in the sensitive volume of the gravitational wave search for binary black hole signals by \(\sim 1\%\).

## 1 Introduction

The Laser Interferometer Gravitational-Wave Observatory (LIGO) [1] and Virgo [2] collaborations made the first observation of gravitational waves in September 2015 [3]. The detection established the field of gravitational wave astronomy and a global network of gravitational wave detectors, now joined by KAGRA [4], has allowed for the detection of approximately 100 gravitational wave events [5, 6, 7, 8]. The detection of gravitational waves is made possible by both the sensitivity of the detectors and the search pipelines [9, 10, 11, 12] which analyse raw strain data from the output of the detectors and identify observed gravitational wave signals. One of the problems that these search pipelines must deal with is the fact that the data contains both non-stationary noise and non-Gaussian transient noise 'glitches' [13, 14, 15]. Glitches are caused by instrument behaviour or interactions between the instrument and the environment [16]; glitches reduce the sensitivity of the detectors [17], can potentially obscure candidate gravitational wave events [6] and can even mimic gravitational wave events [18, 19]. Different classes of glitches have been characterized using tools such as Gravity Spy [20, 21]. Of the 325,101 glitches classified by Gravity Spy in the third observing run of Advanced LIGO [22] with a confidence of 90% or higher, 120,733 (37.1%) were classified as "Scattered Light". Scattered-light glitches occur in the 10-120Hz frequency band [23], which coincides with the frequency band where we observe the inspiral and merger signatures of compact binary coalescences. Scattered-light glitches are characterized by an arch-like pattern in a time-frequency spectrogram of the detector output, as seen in figure 1. Scattered-light glitches occur when laser light in the interferometer is scattered from the main optical path by components within the detector. The motion of these components is coupled to seismic motion, inducing a phase shift on the light being scattered as the surface moves back and forth. This scattered-light then recombines with the main laser, producing scattered-light glitches in the data. The surfaces from which scattered-light glitches originate have been objects on optical benches such as lenses, mirrors and photo-detectors [25]. Scattered-light glitches have been a significant problem when observing compact binary mergers. As an example, GW190701_203306 was coincident with a scattered-light glitch, as shown in figure 2 [6], requiring subtraction from the data before the event could be properly characterized [26].
A further 7 candidate events were found to be in coincidence with scattered-light glitches in the third observing run [8]. For this reason, it is important to reduce the effect of scattered-light glitches in the detectors and gravitational wave search pipelines. Scattered-light glitches occur as single or multiple glitches and can appear rapidly in time and simultaneously in frequency (see figure 3), which we refer to as harmonic glitches. The most obvious way to remove the impact of glitches is removing the mechanisms which produce the glitches in the observatories. This has been investigated in previous works [23, 25, 27, 28, 29, 30, 31, 32], which focus on identifying the surfaces from which light is being scattered and then mitigating the scattering by reducing the reflectivity of the surface, seismically isolating it or relocating it. An alternative method for reducing scattered-light glitches, known as "RC tracking", was implemented in the Advanced LIGO observatories in January 2020 [23].

Figure 1: An Omega scan [24] of gravitational wave data containing an example of a scattered-light glitch. Scattered-light glitches are characterized by a symmetric arch-like pattern. Multiple scattered-light glitches can be seen at the 4, 8 & 12 second marks as well as multiple harmonic glitches at 8 seconds.

The Advanced LIGO detectors employ a quadruple suspension system for the test masses, where two chains each suspend four masses: one chain for the test mass optic and the other for the reaction mass. The reaction mass is used to impose a force upon the test mass and a significant source of scattered-light glitches was the large relative motion between the test mass chain and the reaction chain. To mitigate this effect, the relative motion between the end test mass and the reaction mass needed to be reduced. This was achieved by ensuring the two chains are moving together by applying force to the top stage of the quadruple suspension system in Advanced LIGO. The implementation of RC tracking represented a decrease from \(0.01s^{-1}\) to \(0.0001s^{-1}\) and from \(0.0072s^{-1}\) to \(0.0012s^{-1}\) in the rate of scattered-light glitches detected by Gravity Spy for LIGO-Hanford and LIGO-Livingston respectively.

While methods for preventing scattered-light glitches have been developed and have shown success, they have not been able to remove the problem of scattered-light glitches from the data. Additionally, as the detectors continue to be upgraded and increase in sensitivity, new sources of scattered-light glitches will continue to appear. Identifying these new sources and mitigating their effects can take many months, during which time the detectors are taking in data which might be affected by the presence of scattered-light glitches. Therefore, it is not realistic to believe that analyses will be able to regularly run on data that does not contain scattered-light glitches, and this motivates us to develop a technique for mitigating the impact of these glitches when trying to identify compact binary mergers in gravitational wave data.

Figure 2: GW190701_203306, a gravitational wave event coincident with a scattered-light glitch in the data from the LIGO Livingston observatory. The orange dashed track shows the inferred time-frequency evolution of a gravitational wave event produced by a compact binary merger; the red solid line is an overlaid track of the approximate location of the coincident fast scattering glitches.
In this work we present a new method for identifying and removing scattered-light glitches from gravitational wave data in advance of running searches to identify compact binary mergers. We first introduce a method for identifying when scattered-light glitches are present in detector data, through the creation of a new modelled search for scattered-light glitches, similar to how we search for gravitational waves using matched filtering. We can model scattered-light glitches, generate a suitable set of glitch waveforms and perform a matched filter search on detector data. We then subtract identified glitches from the data to increase detector sensitivity. The detector data is not Gaussian and stationary, so the matched filter does have the potential to identify non-scattered-light glitches, and potentially even gravitational wave signals, as scattered-light glitches. To prevent this we also demonstrate a new scattered-light \(\chi^{2}\) test, which can distinguish between scattered-light glitches and other glitches--and gravitational wave signals--in the data. We begin by reviewing previous research and describing the formulation of the waveform model used for characterizing scattered-light glitches in section 2. In section 3 we introduce the various techniques used in the search to identify scattered-light glitches in gravitational wave data and the results of the scattered-light glitch search. In section 4 we describe the results of a "glitch-subtracted" gravitational wave search and any increases in sensitivity. We conclude in section 5 and discuss the implementation of this method in future observing runs.

Figure 3: An Omega scan [24] of gravitational wave data containing multiple examples of scattered-light glitches. Here we see multiple scattered-light glitches repeating periodically in a 20 second period of time. Harmonic scattered-light glitches are also seen in multiple stacks; the harmonic glitches share _time period_ values and their _glitch frequency_ values are \(n\)-multiples of the lowest frequency harmonic within the stack.

## 2 Scattered-light

Identifying scattered-light glitches in gravitational wave data requires an accurate model of scattered-light glitches. This section details the derivation of the model we will use for generating our scattered-light glitch filter waveforms, along with its parameterization.

### Modelling scattered-light glitches - a review

Our model for scattered-light glitches draws heavily from [25], and we briefly review the main details of the model presented there. In [25], the authors construct a model to accurately predict the increase in noise due to scattered-light during periods of increased micro-seismic activity. The model in [25] is constructed from parameters related to physically measurable properties of the detector such as the mirror transmission factor, \(T\), the finesse of the Fabry-Perot cavity, \(F\), or the wavelength of the light, \(\lambda\). They define the amplitude of the additional beam produced by light scattering off of a surface as \[A_{sc}=A_{0}T\sqrt{\frac{2F}{\pi}}\sqrt{f_{sc}}\,, \tag{1}\] where \(A_{0}\) is the amplitude of the light resonating in a Fabry-Perot cavity and \(f_{sc}\) is the fraction of the optical power scattered back into the main beam.
The phase angle modulated by the displacement of the scattering optics is also defined as \[\phi_{sc}(t)=\frac{4\pi}{\lambda}(x_{0}+\delta x_{opt}(t)), \tag{2}\] where \(\delta x_{opt}\) is the displacement of the scattering surface and \(x_{0}\) is the static optical path. The total amplitude of the beam inside the arm is given by \(A_{tot}=A_{0}+A_{sc}\), with a phase angle equal to the phase noise introduced by the scattered-light \(\delta\Phi=\frac{A_{sc}}{A_{0}}\cdot\sin\phi_{sc}\). The measured gravitational wave strain is proportional to the phase noise and so the noise introduced by scattered-light, \(h_{sc}\), can be expressed as \[h_{sc}(t)=G\cdot\sin\left(\frac{4\pi}{\lambda}(x_{0}+\delta x_{sc}(t))\right), \tag{3}\] where \(G\) is the _coupling factor_, defined as \(K\cdot\sqrt{f_{sc}}\) where \[K=\frac{\lambda}{4\pi}\frac{T}{\sqrt{2F\pi}}. \tag{4}\] The displacement of the scatterer when presenting with oscillatory motion is then given as \[\delta x_{sc}(t)\simeq A_{m}\sin(2\pi f_{m}t), \tag{5}\] where \(f_{m}\) is the frequency of the modulation, \(m=A_{m}\frac{4\pi}{\lambda}\) is the modulation index and \(A_{m}\) is the amplitude of the \(n\)th harmonic. Finally, equation 3 can be simplified when considering only small bench motion, according to \[h_{sc}(t)=G\cdot\cos\phi_{0}\cdot\frac{4\pi}{\lambda}\cdot\delta x_{sc}(t)\;. \tag{6}\]

### Model

The model introduced in [25] for the gravitational wave strain noise introduced by scattered-light uses a lot of knowledge about the detector state. The model used in this work is more phenomenological in its parameterization, allowing us to rely only on the characteristics of the glitches in the strain data and not on knowledge of the detector configuration, especially in cases where this detector information might not be known. Each scattered-light glitch, as viewed in a spectrogram (see figure 1), has two easily identifiable features: the maximum frequency reached, the _glitch frequency_ (\(f_{gl}\)); and the period of time the glitch exists in detector data, the _time period_ (\(T\)). In addition we can fully describe an artifact by defining an _amplitude_ (\(A\)), _phase_ (\(\psi\)) and _center time_ of the glitch (\(t_{0}\)). To formulate a model of scattered-light glitches in terms of these parameters, we simplify equation 6, treating the strain noise caused by scattered-light as the sinusoidal function \[h_{sc}(t)\propto\sin(\phi_{noise}(t)). \tag{7}\] Here the induced phase noise (\(\phi_{noise}\)) is equal to \[\phi_{noise}(t)=2\pi f_{rep}t, \tag{8}\] and \(f_{rep}\) is the frequency of repetition of the sinusoid and is directly related to the _time period_, \(T\), \[f_{rep}=\frac{1}{2T}. \tag{9}\] The _time period_ of a scattered-light glitch only corresponds to half of a sinusoidal wave, hence the multiplier of 2 in the denominator of equation 9. Scattered-light glitches are caused by the physical increase in the distance travelled by the light as a consequence of being reflected off of a surface. The light returning to the beamsplitter from one arm will have travelled a different path length compared to the other arm, and this path difference will act as a phase difference between the two arms causing non-destructive interference. The path difference and phase difference can be related with \[\Delta\phi_{scattering}(t)=2\cdot\frac{2\pi}{\lambda}\Delta x(t), \tag{10}\] where \(\Delta x(t)\) indicates the change in the path taken by the light over time.
The additional multiplier of 2 indicates that the path difference occurs twice, once as the light approaches the surface and again as it leaves. We assume the scattering surfaces are oscillatory in motion and apply the same simplification made in equation 5, with an initial position \(\Delta x=0\) to a maximum displacement of \(x_{0}\). This produces an equation for the path difference of the light, \[\Delta x(t)=x_{0}\sin(2\pi f_{rep}t). \tag{11}\] We substitute equation 11 into equation 10 and produce an equation for the phase noise induced by scattered-light, \[\phi_{scattering}(t)=\frac{4\pi}{\lambda}x_{0}\sin(2\pi f_{rep}t)\;. \tag{12}\] This equation for the phase noise induced by scattered-light, \(\phi_{scattering}(t)\), and the equation for the generic phase noise, \(\phi_{noise}(t)\) (equation 8), can now be used to create an equation for the strain noise caused by scattered-light. We take the derivatives of equations 8 & 12 with respect to time: \[\phi^{\prime}_{noise}(t)=2\pi F_{inst}(t), \tag{13}\] \[\phi^{\prime}_{scattering}(t)=\frac{4\pi}{\lambda}x_{0}2\pi f_{rep}\cos(2\pi f_{rep}t)\;, \tag{14}\] where \(F_{inst}\) is the instantaneous frequency at a specific time. We generate scattered-light glitches from \(\frac{-T}{2}\) to \(\frac{T}{2}\) to ensure their maximum frequency occurs at \(t=0\). We define this maximum frequency as the _glitch frequency_. We equate the two derivatives and set \(t=0\); this replaces \(F_{inst}(t)\) with \(f_{gl}\) and \(\cos(2\pi f_{rep}t)\) with 1. Re-arranging, we find the maximum displacement of the scattering surface, \(x_{0}\), as \[x_{0}=\frac{f_{gl}}{f_{rep}}\frac{\lambda}{4\pi}. \tag{15}\] Substituting equation 15 into equation 12 gives us an expression for the scattered-light phase noise, \[\phi_{scattering}(t)=\frac{f_{gl}}{f_{rep}}\sin(2\pi f_{rep}t), \tag{16}\] and substituting equation 16 as our \(\phi_{noise}\) in equation 7 we arrive at our model of scattered-light glitches, \[h_{sc}(t)=A\sin\left(\frac{f_{gl}}{f_{rep}}\sin(2\pi f_{rep}t)+\psi\right), \tag{17}\] where \(A\) and \(\psi\) are amplitude and phase parameters to be maximised over. In [32] the authors provide another term to describe scattered-light glitches which uses the instantaneous frequency as a function of time, allowing the identification of the correct amplitude at all frequencies. This term is due to radiation pressure coupling and is thought to be dominant at low frequencies. The relationship between our amplitude and the new amplitude depends on the power in the arm cavities and the signal recycling mirror reflectivity, which we do not consider in our model, and so we disregard this extra term.

### Harmonics

As described in [25], harmonic glitches appear at the same time with different _glitch frequencies_. A harmonic glitch is a glitch with a _glitch frequency_ that is a positive integer multiple of the glitch frequency of the glitch in the stack with the lowest glitch frequency value. This lowest frequency glitch has the potential to appear below 15Hz, where it will be masked by other sources of noise and therefore cannot be seen. An example of harmonics can be seen in figure 3.

## 3 Identifying scattered-light glitches in gravitational wave strain data

Equipped with our model for scattered-light glitches from the previous section, we now discuss how we identify and parameterize scattered-light glitches in a stretch of gravitational wave data before we apply this to searches for compact binary mergers in the next section.
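To make the parameterization concrete, the waveform of equation 17 can be generated directly; the following is a minimal sketch (our own illustration, not the ArchEnemy implementation; the sampling rate and array conventions are assumptions):

```python
import numpy as np

def scattered_light_glitch(f_gl, T, amp=1.0, psi=0.0, srate=2048):
    """Evaluate equation 17 on [-T/2, T/2].

    f_gl : maximum (glitch) frequency in Hz
    T    : time period of the arch in seconds
    """
    f_rep = 1.0 / (2.0 * T)                    # equation 9
    t = np.arange(-T / 2, T / 2, 1.0 / srate)  # frequency peaks at t = 0
    phase = (f_gl / f_rep) * np.sin(2 * np.pi * f_rep * t)  # equation 16
    return t, amp * np.sin(phase + psi)        # equation 17

# Example: a 60 Hz arch lasting 3 seconds
t, h = scattered_light_glitch(f_gl=60.0, T=3.0)
```

The instantaneous frequency of this waveform sweeps from 0 Hz up to \(f_{gl}\) at the center time and back to 0 Hz, reproducing the arch-like morphology of figure 1.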
### Matched Filtering

Given our model of scattered-light glitches, we can consider a realization with specific values of the parameters discussed above. To identify glitches with these values of the parameterization, we apply matched filtering. The matched filter is the optimal method for detecting known waveforms in the data, when the data is stationary and Gaussian, and is defined as [33] \[\rho(t)=\frac{(h|s)}{\sqrt{(h|h)}}\equiv(\hat{h}|s). \tag{18}\] Here \(h\) is the model template, \(s\) the gravitational wave data we are searching, and we use the noise-weighted inner product defined between two time series \(a(t)\) and \(b(t)\) as \[(a|b)=4Re\int_{0}^{\infty}\frac{\tilde{a}(f)\tilde{b}^{*}(f)}{S_{n}(f)}df. \tag{19}\] The tildes on \(\tilde{a}\) and \(\tilde{b}\) refer to the Fourier transforms of both variables into the frequency domain. The denominator, \(S_{n}(f)\), represents the one-sided power spectral density (PSD) of the data, defined as \[\langle\tilde{s}(f)\tilde{s}^{*}(f^{\prime})\rangle=\frac{1}{2}S_{n}(f)\delta(f-f^{\prime})\;, \tag{20}\] where the angle brackets denote an average over noise realizations and \(\delta\) is the Dirac delta function. Scattered-light glitches will take a range of values of the parameters describing them and we must be able to identify glitches anywhere in the parameter space. Following [33] we can analytically maximize over the _amplitude_, _phase_ and _center time_ glitch parameters in equation 17. The matched filter naturally maximizes over _amplitude_ when expressed in terms of signal-to-noise ratio, and can be evaluated as a function of time by including a time shift in equation 19, \[(a|b)(t)=\left|4\int_{0}^{\infty}\frac{\tilde{a}(f)\tilde{b}^{*}(f)}{S_{n}(f)}e^{-2\pi ift}df\right|. \tag{21}\] To maximize over phase we take the absolute value of the inner product, rather than the real part of the integral.
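For illustration, equations 18-21 can be evaluated for all time shifts at once with a fast Fourier transform. A minimal plain-NumPy sketch, assuming a precomputed one-sided PSD (our illustration, not the production search; PSD estimation and windowing are omitted):

```python
import numpy as np

def snr_time_series(template, data, psd, dt):
    """Signal-to-noise ratio time series, equations 18 and 21.

    template, data : real arrays of equal length n
    psd            : one-sided PSD sampled at np.fft.rfftfreq(n, dt)
    dt             : sample spacing in seconds
    """
    n = len(data)
    df = 1.0 / (n * dt)
    htilde = np.fft.rfft(template) * dt   # discrete Fourier conventions
    stilde = np.fft.rfft(data) * dt
    x = stilde * htilde.conj() / psd
    # One-sided complex inverse transform; |z(t)| maximizes over phase.
    xfull = np.zeros(n, dtype=complex)
    xfull[: len(x)] = x
    z = 4.0 * df * n * np.fft.ifft(xfull)
    sigma2 = 4.0 * df * np.sum(np.abs(htilde) ** 2 / psd)  # (h|h)
    return np.abs(z) / np.sqrt(sigma2)
```

Peaks of the returned series above the threshold of 8 correspond to the triggers retained later in this section.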
### Template Bank

Our scattered-light glitch model is parameterized by 5 variables. As shown above, by maximizing over _phase_, _amplitude_ and _time_, we can analytically maximize the signal-to-noise ratio over 3 of these variables. However, the remaining two describe the intrinsic evolution of the glitch and we must explicitly search over these parameters. To do this we create and use a "template bank" of scattered-light glitch waveforms, created such that it would be able to identify glitches with any value of our 2 remaining parameters, _glitch frequency_ and _time period_. To generate this template bank, a stochastic template placement algorithm [34] is used. This algorithm randomly generates a new template, places the template in the existing template bank and calculates the "distance" (how similar two templates appear) between the new template and existing templates. If the new template is too close to an existing template it is discarded; otherwise, it is accepted into the bank. The density of the bank depends on the allowed distance between two templates. The dominant cost of the search is matched filtering, which scales approximately linearly with the number of templates: a larger template bank will find all of the glitches with more accurate parameter values, but the computational cost of the search will be increased. The distance function we have chosen to evaluate templates by is the match between two glitch templates. To calculate the match, we first normalize both templates such that the matched filter between a template and itself would be equal to 1. We can then compute the inner product between the two normalized templates to find their match \[M=\max_{t,\psi}(\hat{a}|\hat{b}). \tag{22}\] The value of the match, \(M\), is bounded between 0 and 1, where a value of 0 indicates orthogonal waveforms and a value of 1 indicates identical waveforms. The maximum match allowed between any two templates in our bank is 0.97, which implies that a fully converged stochastic bank would have at least one waveform in it with \(M\geq 0.97\) for any point in the space of parameters. For our scattered-light glitch search, we generated a template bank with _time periods_ \(\in\) 1.8s - 5.5s and _glitch frequencies_ \(\in\) 20Hz - 80Hz. We chose to use the Advanced LIGO zero-detuned high-power sensitivity curve [35]. This allowed us to generate a template bank that contains 117,481 templates. We visualize the distribution of the templates as a function of _time period_ and _glitch frequency_ in figure 4, observing a greater density of templates at higher frequencies and longer durations. Footnote 1: The zero-detuned high-power sensitivity curve is a broader noise curve than the O3 Advanced LIGO data that we identify scattered-light glitches in. However, this broader noise curve will result in _more_ template waveforms than we need, and will therefore overcover the parameter space, rather than potentially miss scattered-light glitches.

Figure 4: The template bank of scattered-light glitch templates used in the search for scattered-light glitches. The _glitch frequency_ parameter values range from 20Hz - 80Hz and the _time period_ parameter values range from 1.8s - 5.5s. This bank was created with a maximum match allowed between two templates of 0.97 and contains 117,481 templates.
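The stochastic placement can be summarized schematically as follows (our paraphrase of [34]; `match_fn` stands in for equation 22 evaluated on the waveforms of equation 17, and the proposal count and seed are assumptions):

```python
import numpy as np

def stochastic_bank(match_fn, n_proposals=50000, max_match=0.97, seed=0):
    """Accept/reject placement over (time period, glitch frequency).

    match_fn(p, q) -> match in [0, 1] between two parameter tuples.
    """
    rng = np.random.default_rng(seed)
    bank = []
    for _ in range(n_proposals):
        proposal = (rng.uniform(1.8, 5.5),    # time period in seconds
                    rng.uniform(20.0, 80.0))  # glitch frequency in Hz
        # Reject proposals that are too close to any accepted template.
        if all(match_fn(proposal, kept) < max_match for kept in bank):
            bank.append(proposal)
    return bank
```

This naive loop compares each proposal against the whole bank; a production implementation would restrict the comparison to templates with nearby parameter values.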
### Identifying potential scattered-light glitches in the data

We test our method by searching through gravitational wave data from 2019-11-18 16:13:05 - 2019-11-25 22:11:09 for the LIGO-Hanford and LIGO-Livingston detectors. This time corresponds to the 25th period of data used in the LVK analysis of O3 data for compact binary mergers [8] and is prior to the implementation of RC tracking [23] in these detectors. We only analyse data that is flagged as "suitable for analysis" on the Gravitational Wave Open Science Center [36]. This corresponds to 5.96 days of analysable data for LIGO-Hanford and 5.93 days for LIGO-Livingston. Footnote 2: We note that in O3 _only_ data suitable for analysis is released, so we simply analyse all of the publicly available data.

Equipped with our template bank we now identify potential scattered-light glitches in the data. We matched filter all of the data against all of the templates, producing a signal-to-noise ratio time series for every template in the bank. These signal-to-noise ratio time series will contain peaks which, when above a certain limit, indicate the presence of a scattered-light glitch at a particular time. We retain any maxima within the time series that have a signal-to-noise ratio greater than 8. However, as we do this independently for every template, we will identify multiple peaks for any given glitch, and we will also find peaks that correspond to other types of glitch, or even gravitational wave signals. We discuss how we reduce this to a list of identified scattered-light glitches in the following subsections.

### Scattered-Light Signal Consistency Test

To prevent the search for scattered-light glitches from misclassifying other classes of glitches, or gravitational wave signals, we employ a \(\chi^{2}\) consistency test. The \(\chi^{2}\) discriminator introduced in [37] divides gravitational wave templates into a number of independent frequency bins. These bins are constructed so as to contain an equal amount of the total signal-to-noise ratio (SNR) of the original matched filter between template and data. The \(\chi^{2}\) value is obtained by calculating the SNR of each bin, subtracting this from the expected SNR value for each bin and squaring the output. These values are summed for all bins and this value forms the \(\chi^{2}\) statistic, \[\chi^{2}=\frac{n}{2n-2}\sum_{i=1}^{n}\left(\frac{\rho}{\sqrt{n}}-\rho_{bin,i}\right)^{2}. \tag{23}\] Here \(n\) is the number of bins, \(\rho\) is the SNR of the original matched filter between template and data, and \(\rho_{bin}\) is the value of the SNR found when matched filtering one of the divided templates and the data. This test is constructed so as to produce large values when the data contains a non-Gaussianity that is not well described by the template, but to follow a \(\chi^{2}\) distribution if a non-Gaussianity that matches well to the template is present, or if no non-Gaussianity is present in the data. The \(\chi^{2}\) test that we employ is similar to that of [37]; however, compact binary merger waveforms increase in frequency over time whereas scattered-light glitch templates are symmetric about their centre. We therefore choose to construct our \(\chi^{2}\) test with four non-overlapping bins _in the time domain_, each of which contributes equally to the SNR; an example of the split template can be seen in figure 5. The matched filter between the bins and data is computed and the \(\chi^{2}\) value is calculated using equation 23 (where \(n=4\)). Any non-Gaussianity that does not exhibit symmetric morphology should not fit with this deconstruction of the template, and should result in elevated \(\chi^{2}\) values. After computing the \(\chi^{2}\) value for potential scattered-light glitches, we follow [38] to compute a "reweighted signal-to-noise ratio", which is an empirically tuned statistic depending on the signal-to-noise ratio and the \(\chi^{2}\) value. The re-weighting function we use matches that presented in [38], \[\rho_{rw}=\left\{\begin{array}{ll}\rho&\mbox{if}\quad\chi^{2}\leq 1,\\ \rho[(1+(\chi^{2})^{3})/2]^{-\frac{1}{6}}&\mbox{if}\quad\chi^{2}\geq 1.\end{array}\right. \tag{24}\] We do note that this re-weighting function has been tuned for compact binary merger waveforms and we did not repeat that tuning here with scattered-light glitches. One could retune this statistic, specifically targeting scattered-light glitches, using (for example) the automatic tuning procedure described in [39].

Figure 5: A scattered-light glitch template (left) where the colours and line-styles are indicative of the four equal SNR time bins to be used in calculating the \(\chi^{2}\) value and re-weighting the SNR. The same scattered-light glitch template bins overlaid on an injection of the scattered-light glitch (right). The inner two bins are considerably shorter than the outer two bins, which informs us that the center--higher frequency--region of the scattered-light glitch contributes a larger amount to the SNR per unit time than the lower frequency regions.
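A compact sketch of the consistency test and re-weighting of equations 23 and 24 (our illustration; the per-bin SNRs are assumed to come from the equal-SNR time-domain bin construction described above):

```python
import numpy as np

def chisq(rho, rho_bins):
    """Equation 23: rho is the SNR of the full template at the trigger
    time, rho_bins the SNRs of the n sub-templates at the same time."""
    n = len(rho_bins)
    return (n / (2.0 * n - 2.0)) * sum(
        (rho / np.sqrt(n) - rb) ** 2 for rb in rho_bins)

def reweighted_snr(rho, chi2):
    """Equation 24: down-weight triggers with poor template consistency."""
    if chi2 <= 1.0:
        return rho
    return rho * ((1.0 + chi2 ** 3) / 2.0) ** (-1.0 / 6.0)

# A perfectly consistent trigger: each of 4 bins carries rho / sqrt(4)
rho = 10.0
print(chisq(rho, [rho / 2.0] * 4))  # 0.0 -> no penalty applied
print(reweighted_snr(rho, 9.0))     # a poorly fitting trigger is penalized
```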
However, we demonstrate the suitability of the \(\chi^{2}\) test for our purposes in figure 6, where we show the \(\chi^{2}\) vs SNR distribution of the triggers found by our scattered-light glitch search when performed on data containing only scattered-light glitches and data containing a binary black hole gravitational wave injection.

Figure 6: The signal-to-noise ratio and \(\chi^{2}\) values for the triggers identified by the matched filtering and clustering of the scattered-light glitch template bank with data containing only scattered-light glitches (grey hexagons) and data containing a binary black hole gravitational wave injection (red triangles). The black contour lines represent the re-weighted signal-to-noise ratio values the trigger will take when equation 24 is applied. The orange dashed vertical line indicates the signal-to-noise ratio value limit of 8, above which we decide to perform the \(\chi^{2}\) test and calculate the re-weighted signal-to-noise ratio. The blue solid contour line indicates a re-weighted signal-to-noise ratio value of 8, which is the limit at which we consider the trigger to be real. Different re-weighting parameter values will produce different contour line shapes. It can be seen that no triggers for the data containing the gravitational wave injection lie beneath the contour line and therefore no scattered-light glitches are found on the gravitational wave signal.

We note that the \(\chi^{2}\) test increases the number of matched filters required by the search and therefore the computational cost of the search. Each template would require the matched filtering of an additional 4 "binned" templates to calculate the \(\chi^{2}\) value to re-weight the SNR time series of that template, increasing the computational cost by a maximum factor of 5. However, we reduce this increase by only computing the \(\chi^{2}\) where it is needed, specifically for any template where the SNR time series has values above the threshold of 8.

### Identifying all scattered-light glitches in the data

Our matched filtering process retains "triggers" (potential scattered-light glitches) wherever the re-weighted signal-to-noise time series is larger than 8. We retain no more than one trigger within a window size equal to half the _time period_ of the template used to produce the re-weighted signal-to-noise time series, and only store triggers at local maxima. A re-weighted signal-to-noise time series with multiple peaks and identified triggers can be seen in the top right panel of figure 7. After matched filtering all the templates against the data we will recover multiple triggers for any potential scattered-light glitch, as we might expect to independently identify peaks in multiple templates around the true values of the glitch. We therefore collect all of the triggers generated by the template bank and cluster these in time, using a window of 0.9 seconds. This will result in a list of triggers corresponding to the highest re-weighted signal-to-noise ratios, where each trigger should correspond to a unique scattered-light glitch. The bottom left panel in figure 7 shows an example of the triggers found by the search and the highest re-weighted signal-to-noise ratio triggers found by clustering.
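One simple way to realize the time clustering just described is a loudest-first greedy pass (our illustration; the trigger representation is an assumption):

```python
def cluster_triggers(triggers, window=0.9):
    """Keep the loudest trigger within each `window`-second neighbourhood.

    triggers : list of (time, reweighted_snr, params) tuples, any order
    """
    kept = []
    for trig in sorted(triggers, key=lambda t: t[1], reverse=True):
        # Accept a trigger only if no louder one is already nearby.
        if all(abs(trig[0] - k[0]) > window for k in kept):
            kept.append(trig)
    return sorted(kept, key=lambda t: t[0])
```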
However, we also expect to see instances of harmonic glitches, which are produced by the same scattering surface and so share the same _time period_ and have _glitch frequency_ values equal to a multiple of the lowest frequency glitch in the harmonic stack [25]. We investigate each trigger in the list, searching for harmonic glitches occurring at the same time. We use the first list of triggers identified by all templates across the bank and filter by those that occur within \(\pm 0.05\) seconds of the _center time_ of the trigger we are investigating; an example of this window can be seen in the top left panel of figure 7. We then filter the triggers again, keeping only those with an associated _time period_ value within \(\pm 10\%\) of the trigger's _time period_. Finally we cluster these remaining triggers by their associated _glitch frequency_ using a window size of 4Hz; the bottom right panel in figure 7 shows the identification of harmonic glitches for the scattered-light glitch overlaid in the top left panel of figure 7.

Figure 7: The process for identifying all the scattered-light glitches in a period of gravitational wave data. The red overlay in the gravitational wave data used in this example (top left) indicates the highest re-weighted signal-to-noise trigger found; the dashed vertical lines represent the time slice window around this trigger. The re-weighted signal-to-noise ratio time series resulting from the matched filter of this trigger's template and the data (top right) is clustered in time to identify the triggers found above a signal-to-noise threshold of 8, indicated by red vertical dashed lines. All the triggers from all of the templates in the template bank are then clustered in time (bottom left) to identify the highest re-weighted signal-to-noise glitches in the data, indicated by the orange vertical dashed lines. The triggers found within the time slice window, with a similar _time period_ value--within \(\pm 10\%\)--of the highest re-weighted signal-to-noise ratio trigger (bottom right) are clustered by their _glitch frequency_ value to find the harmonic glitches at the same time, indicated again by red dashed lines.

### Hierarchical subtraction to find parameter values

We now have a list of identified scattered-light glitches and their parameter values; however, these might not be fully accurate when there are many glitches close in time and frequency, as illustrated in figure 8. It is important that the parameters we find match well with the glitches in the data to remove as much power as possible. To better identify the parameter values of the scattered-light glitches, we perform a hierarchical procedure using information about the glitches we have found so far. Firstly, we create new segments of time which we know contain scattered-light glitches, taking a window of 8 seconds on either side of each previously identified glitch; if two glitch windows overlap they are combined into the same segment. For each segment we then create a reduced template bank, consisting of templates "close" to the scattered-light glitches previously identified in the segment. We take the smallest and largest _time period_ and _glitch frequency_ glitches in the segment and bound the retained templates by these values with \(\pm 0.25\) seconds on the _time period_ and \(\pm 1\)Hz on the _glitch frequency_. For each data segment the reduced template bank is matched filtered with the data, the maximum re-weighted SNR value is found and the corresponding glitch is subtracted. We then matched filter _again_ and remove the next largest re-weighted SNR template. This process is repeated until no templates, when matched filtered with the data, produce any re-weighted SNR values above the SNR limit of 8. This method of hierarchical subtraction produces our final list of scattered-light glitches.
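Schematically, the hierarchical loop reduces to the following (our paraphrase; `best_trigger` and `subtract` are hypothetical stand-ins for the matched-filter maximization and waveform subtraction, not ArchEnemy API):

```python
def hierarchical_subtraction(segment, reduced_bank, best_trigger, subtract,
                             snr_limit=8.0):
    """Iteratively remove the loudest identified glitch from `segment`.

    best_trigger(segment, bank) -> (reweighted_snr, waveform, params) or
                                   None if nothing exceeds threshold
    subtract(segment, waveform) -> segment with the glitch removed
    """
    identified = []
    while True:
        trig = best_trigger(segment, reduced_bank)
        if trig is None or trig[0] < snr_limit:
            return identified, segment
        snr, waveform, params = trig
        segment = subtract(segment, waveform)  # remove glitch from the data
        identified.append((snr, params))
```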
A further benefit of using these shorter data segments is that we are estimating the PSD of the data using only a short period of data close to the scattered-light glitches being removed. This protects us from a rapidly changing PSD in non-stationary data, which might cause Gaussian noise to be identified with larger SNR in the periods where the PSD is larger. This can be resolved by including the variation in the power spectral density as an additional statistic in the re-ranking of triggers, which has been done for compact binary coalescence gravitational wave searches in [40]. We demonstrate the hierarchical subtraction step on a stretch of data which contains four injected scattered-light glitches in a single harmonic stack; this can be seen in figure 9. As shown, the scattered-light glitches are found and subtracted from the data, leaving behind a cleaned segment of gravitational wave data with no excess noise. Figure 8 shows the identified scattered-light glitch triggers before and after performing the hierarchical subtraction step on a stretch of data containing real scattered-light glitches. We identify more triggers prior to performing hierarchical subtraction; however, there are more errant mismatches between scattered-light glitches and the overlaid templates. By performing the hierarchical subtraction, we more accurately identify scattered-light glitches; however, we miss some glitches that were previously identified. There is still some imperfection in this process and we do not refine the method further in this work, but highlight this as a useful direction for future studies in removing scattered-light glitches.

### Identified scattered-light glitches

The methodology described in previous sections is implemented in our "ArchEnemy" pipeline, which is capable of searching for scattered-light glitches in gravitational wave data using a pre-generated bank of glitch templates. We use ArchEnemy to analyse the aforementioned data, which produced a list of 2749 scattered-light glitches in data from the LIGO-Hanford observatory and 1306 from the LIGO-Livingston observatory. The number of scattered-light glitches found by the ArchEnemy pipeline can be compared to Gravity Spy for the same period of time. Gravity Spy finds 2731 and 1396 for LIGO-Hanford and LIGO-Livingston respectively [20]. There will be a difference in the number of glitches found by ArchEnemy and Gravity Spy for at least two reasons: Gravity Spy treats an entire stack of harmonic glitches as a single scattered-light glitch whereas ArchEnemy will identify each glitch as a separate occurrence, and Gravity Spy can also identify scattered-light glitches which are not symmetric and fall outside our template bank, for example, the scattered-light glitches shown in figure 10. Figure 8 is an example of the results of the ArchEnemy pipeline and how well it has identified scattered-light glitches in a period of data. A majority of the glitches have been identified with the correct parameter values, and even in cases where the chosen template is not visually perfect, there is a good match between the template and the identified power in the data, particularly in the case of slightly asymmetric glitches. Figure 10 demonstrates a period of time where the ArchEnemy pipeline has not fitted the scattered-light glitches in the data well. The glitches at this time are improperly fit by the templates due to asymmetry of the morphology of the glitches and because some of the glitches are outside of our template bank parameter range. However, we note that this is a very extreme period of scattered-light glitching and immediately after this time the detector data is no longer flagged as "suitable for analysis".

Figure 8: LIGO-Hanford data from 2019-11-23 17:54:22 - 2019-11-23 17:55:12 containing scattered-light glitches which have been identified by the ArchEnemy search (left); there is a misalignment in the template found for a number of glitches in this period of data and some missed glitches. Scattered-light glitches remaining after running the hierarchical subtraction search (right) for the same period of data; we have missed more scattered-light glitches, however misalignments have been removed. The highest harmonic at approximately 892 seconds has been incorrectly split into two separate templates.

We have demonstrated the ArchEnemy pipeline on this data and have identified and characterized a list of scattered-light glitches, which could be removed from the data.
However, we note that this is a very extreme period of scattered-light glitching and immediately after this time the detector data is no longer flagged as "suitable for analysis". We have demonstrated the ArchEnemy pipeline on a Figure 8: LIGO-Hanford data from 2019-11-23 17:54:22 - 2019-11-23 17:55:12 containing scattered-light glitches which have been identified by the ArchEnemy search (left), there is a misalignment in the template found for a number of glitches in this period of data and some missed glitches. Scattered-light glitches remaining after running the hierarchical subtraction search (right) for the same period of data, we have missed more scattered-light glitches however misalignments have been removed. The highest harmonic at approximately 892 seconds has been incorrectly split into two separate templates. and have identified and characterized a list of scattered-light glitches, which could be removed from the data. We do note that there are cases where the identification has not worked well, but we expect that subtracting our list of glitches from the data will reduce their effect on the gravitational wave search. In the next section we will demonstrate this by quantifying sensitivity with the PyCBC pipeline. ### Safety of scattered-light identification The data we have searched through contains no previously identified gravitational wave signals [8]. However, there is a risk that the ArchEnemy search would identify real gravitational wave signals as scattered-light glitches. To assess this possibility we simulate and add a large number of gravitational wave signals into the data and assess whether any signals are misidentified. To do this, we use three separate sets of simulated gravitational wave signals (or "injection sets"), one for binary black holes (BBH), another for binary neutron stars (BNS) and a third for neutron star black hole (NSBH) systems. We use the same simulations as the LVK search of this data, detailed in the appendix of [8]. Each Figure 9: Data containing an injected stack of harmonic scattered-light glitches (left) and the corresponding data found when running the hierarchical subtraction search and subtracting the identified scattered-light glitches from the data (right). injection set consists of 6200 simulated signals spaced between 82 and 120 seconds apart. We treat these injection sets exactly the same as for the injection-less data, adding the simulations to the data, and then running ArchEnemy to produce a list of scattered-light glitches for each injection set. To determine whether we have misidentified any gravitational wave injections as scattered-light glitches we look for glitches we have found within the overlapping Figure 10: LIGO-Livingston data from 2019-11-24 01:30:32 - 2019-11-24 01:31:20 containing a very large number of scattered-light glitches at multiple times and frequencies over-plotted with the scattered-light glitches identified by the ArchEnemy search. Very few overlays which match well onto scattered-light glitches. The template bank used in this search terminates at 80 Hz and so the scattered-light glitches located above this value will not be correctly identified. There are also asymmetric scattered-light glitches located which will not be identified correctly by our search which assumes symmetry in the scattered-light glitch. frequency band of gravitational wave signals and our scattered-light glitch template bank. This corresponds to approximately 15 second before merger time for the injections. 
The simulated signals occur every \(\sim 100\) seconds, so some scattered-light glitches are expected to fall within this 15-second window by chance; therefore, we additionally require that more triggers are identified in the scattered-light glitch search _with_ injections than in the search _without_ injections within the window. The details of the number of gravitational wave injections with overlapping scattered-light triggers can be seen in table 1. A scattered-light glitch will be identified close to a gravitational wave signal in two cases: either the ArchEnemy search is misidentifying the gravitational wave signal as a glitch, _or_ the simulated signal was added close to actual glitches and the change in the data has meant that a different number of glitches is identified. The presence of real scattered-light glitches means we might miss a gravitational wave signal; therefore, we _do_ want to find and subtract glitches close to gravitational wave signals, but we do not want to subtract power from the gravitational wave signal itself. The scattered-light glitch \(\chi^{2}\) test was designed to prevent the matching of scattered-light glitch templates on other causes of excess power; however, these results show it is not perfect.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Interferometer & Injection Set & Injections with & Scattered-Light & Actual Overlapped \\
 & & Coincident Triggers & Coincident Triggers & Injections \\
\hline
H1 & BBH & 20 & 45 & 1 \\
 & BNS & 23 & 50 & 2 \\
 & NSBH & 38 & 73 & 7 \\
L1 & BBH & 13 & 21 & 2 \\
 & BNS & 18 & 30 & 2 \\
 & NSBH & 35 & 56 & 16 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: For both interferometers and all three injection sets we identify the number of injections which are found to have scattered-light glitches identified within 15 seconds of merger time ("Injections with Coincident Triggers"), along with the number of scattered-light glitches found within this window for these injections ("Scattered-Light Coincident Triggers"). We investigated each of these injections and recorded the number which actually had scattered-light glitches identified due to the injected gravitational wave signal ("Actual Overlapped Injections").

We investigate each injection with coincident scattered-light triggers, recording how many had scattered-light glitches misidentified on the inspiral of the gravitational wave signal; this number can be seen in the column "Actual Overlapped Injections" in table 1. We have included an example of the matching of scattered-light glitches onto gravitational wave injections in figure 11; the right panel shows the gravitational wave data after glitch subtraction, where it can be seen that a portion of the power is subtracted from the signal. Although power is being removed from the signal, the gravitational wave injection is still found by the search for gravitational waves, which we will describe later. For the cases that we have investigated, we note that the behaviour shown in figure 11 only happens for signals that have a very large signal-to-noise ratio, and are therefore unphysically close to us. A similar effect is observed with the "autogating" process, described in [9], which prevents the detection of these loud signals. In contrast to the autogating, though, signals like that illustrated in figure 11 are still identified as gravitational wave signals by the PyCBC search after scattered-light glitch removal.
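The bookkeeping behind this check can be sketched as follows (our own illustration; the arrays of trigger and merger times are assumed inputs, and the 15-second window follows the text):

```python
import numpy as np

def flagged_injections(merger_times, trigs_with_inj, trigs_without_inj,
                       window=15.0):
    """Return injections with more scattered-light triggers in the
    pre-merger window when injections are present than when absent."""
    trigs_with_inj = np.asarray(trigs_with_inj)
    trigs_without_inj = np.asarray(trigs_without_inj)
    flagged = []
    for t in np.asarray(merger_times):
        lo, hi = t - window, t
        n_with = np.count_nonzero(
            (trigs_with_inj >= lo) & (trigs_with_inj <= hi))
        n_without = np.count_nonzero(
            (trigs_without_inj >= lo) & (trigs_without_inj <= hi))
        if n_with > n_without:
            flagged.append((t, n_with - n_without))
    return flagged
```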
## 4 Assessing sensitivity gain from removing scattered-light glitches

We now assess whether removing our identified list of scattered-light glitches results in a sensitivity gain when searching for compact binary mergers. We do this by comparing the results from the offline PyCBC search on the original data to the results of the same search analysing data from which the glitches have been removed.

### Comparing search results with and without glitch subtraction

The PyCBC pipeline is able to assess the significance of potential compact binary mergers in a given stretch of data, and does the same with a set of simulated signals. This significance is quoted in terms of a "false-alarm rate", which denotes how often we would expect to see a non-astrophysical event at least as significant as the coincident trigger being considered. In this work we assess sensitivity at a false-alarm threshold of 2 background events every year. The data we have searched over contained no previously found gravitational wave signals [8], and our search after subtracting scattered-light glitches identified no new gravitational wave signals. While the search has not found any gravitational waves, we can still measure the improvement in the sensitivity of the detectors by comparing the number of simulated signals identified with a false-alarm rate below 2 per year for each injection set (described in section 3.8) with and without removing glitches from the data. Table 2 shows the number of injections found for all injection sets and both searches.

\begin{table}
\begin{tabular}{c c c c}
\hline
Injection Type & Original Search & Glitch-Subtracted & Sensitivity Ratio \\
\hline
BBH & 1215 & 1222 & 1.01 \\
BNS & 1315 & 1315 & 1.00 \\
NSBH & 1260 & 1266 & 1.00 \\
\hline
\end{tabular}
\end{table}
Table 2: The number of injections found by each search with a false-alarm rate less than 2 per year. Also shown is the sensitivity ratio of the glitch-subtracted search and the original search for each injection set.

Figure 11: An injected binary neutron star compact binary coalescence gravitational wave signal, with the scattered-light glitches identified by the ArchEnemy search pipeline overlaid in red (left). The same injected signal but with the scattered-light glitches removed from the data (right); it can be seen that power is removed from the signal track and also that an amount of power is added to the data above the track.

We compare the number of injections found by both searches, but also look at the gravitational wave injections found by the original search and missed by the glitch-subtracted search, and vice versa. Considering signals found by the original search and missed by the glitch-subtracted search, there are 3 binary black hole injections with false-alarm rates in the original search ranging between 0.5 and 0.3 per year, 5 binary neutron star injections with false-alarm rates ranging between 0.5 and 0.14 per year, and 4 neutron star black hole injections. One of these neutron star black hole injections had a very small false-alarm rate (less than 1 per 40000 years); however, there were no scattered-light glitches identified within 150 seconds of this injection. This injection has significant precession, and we think that a small perturbation to the data resulted in it being missed. The glitch-subtracted search identifies 10 additional binary black hole injections, the most significant of which have false-alarm rates of 1 per 208.9, 1 per 3873.4 and 1 per 7633.9 years. We illustrate the last of these in figure 12 (top).
Five extra binary neutron star injections were found, with false-alarm rates between 0.5 and 0.14 per year, and 10 neutron star black hole injections were found, the most significant of which has a false-alarm rate of 1 per 9961.7 years. This injection can also be seen in figure 12 (bottom).

To quantify the sensitivity of the search we calculate the sensitive volume in which we can observe gravitational wave signals. To calculate the sensitive volume, we measure the detection efficiency in different distance bins taken from the injection sets and then multiply the efficiencies by the volume enclosed by each distance bin; these volumes are then summed to find the total volume the search is sensitive to [41]. We are then able to calculate the ratio of sensitivities between the glitch-subtracted gravitational wave search and the original gravitational wave search, revealing the improvement that subtracting scattered-light glitches has made; a sketch of this calculation is given at the end of this section. Figure 13 displays the ratio of the sensitive volume measured for the glitch-subtracted gravitational wave search and the original PyCBC gravitational wave search across different false-alarm rate values; we quote our sensitivity ratios at a false-alarm rate of 2 per year. The same set of injected signals was used for both gravitational wave searches, and therefore a direct comparison of search sensitivities can be made via this ratio.

Figure 13: The ratio of the sensitive volume-time of the glitch-subtracted search and the original gravitational wave search. The grey dashed line indicates a false-alarm rate of 2 per year, which is our threshold and the point at which we measure any sensitivity improvement of the glitch-subtracted search for each of the three gravitational wave injection sets.

Disappointingly, the measured sensitivity improvement is small in the results we obtain. For the binary black hole injections we measure a sensitivity ratio at a 2 per year false-alarm rate of 1.01, for binary neutron stars 1.00, and for neutron star black holes 1.00. Nevertheless, we have demonstrated that removing scattered-light glitches from the data can allow us to identify gravitational wave signals that would otherwise be missed.

Figure 12: Two examples of gravitational wave injections found by the glitch-subtracted search for gravitational waves which were not found by the original gravitational wave search, due to the presence of scattered-light glitches at the same time as the gravitational wave inspiral. Top left: a binary black hole injection with a false-alarm rate of 1 per 7633.9 years, shown alongside the scattered-light glitches found by the ArchEnemy search and subtracted from the data prior to performing the glitch-subtracted PyCBC search for gravitational waves (top right). Bottom left: a neutron star black hole injection with a false-alarm rate of 1 per 9961.7 years, shown with the scattered-light glitches found by the ArchEnemy search and subtracted from the data prior to performing the glitch-subtracted PyCBC search for gravitational waves (bottom right).
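The sensitive-volume calculation referenced above can be sketched as follows (our own illustration; the injection distances, found/missed flags at the false-alarm threshold, and distance bin edges are assumed inputs):

```python
import numpy as np

def sensitive_volume(distances, found, bin_edges):
    """Sum, over distance bins, the detection efficiency in each bin
    multiplied by the volume of the spherical shell the bin encloses."""
    distances = np.asarray(distances)
    found = np.asarray(found, dtype=bool)
    total = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (distances >= lo) & (distances < hi)
        if not in_bin.any():
            continue
        efficiency = found[in_bin].mean()
        shell_volume = 4.0 / 3.0 * np.pi * (hi**3 - lo**3)
        total += efficiency * shell_volume
    return total

# Sensitivity ratio at the 2-per-year false-alarm threshold:
# ratio = sensitive_volume(d, far_glitch_subtracted < 2.0, edges) \
#         / sensitive_volume(d, far_original < 2.0, edges)
```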
## 5 Conclusion

We have demonstrated a new method for modelling scattered-light glitches and for identifying and characterizing these glitches in a period of gravitational wave data. We have developed a scattered-light glitch specific \(\chi^{2}\) test which can discriminate between scattered-light glitches, other types of glitches and gravitational wave signals. We have searched through a representative stretch of gravitational wave data known to contain scattered-light glitches, found thousands of these glitches and subtracted them from the gravitational wave data prior to running a search for gravitational waves. The results of this search include a small increase in the measured sensitivity of the gravitational wave search for binary black hole gravitational wave signals, and a modest change in sensitivity for binary neutron star and neutron star black hole gravitational wave signals.

We highlight that the task of accurately identifying and parameterizing scattered-light glitches in the data is not a trivial one, especially where there are repeated, and harmonic, glitches present in the data. We have developed a new \(\chi^{2}\) test to reduce the number of false identifications of scattered-light glitches, but we do still see cases where we have misidentified other glitches, and even a small number of loud gravitational wave signals, as caused by scattered light, and cases where we do not correctly identify, or parameterize, actual scattered-light glitches. Improving this identification process would be an important step in increasing the efficacy of the method.

The possibility of using this model of scattered-light glitches as a bespoke application to gravitational wave signals which are known to have coincident scattered-light glitches has been explored and implemented in Bilby [42], performing parameter estimation of the scattered-light glitches and removing them to produce glitch-free data [43]. The inclusion of the extra term from [32] within the model can help identify scattered-light glitches more accurately. The output of the ArchEnemy search pipeline, the list of scattered-light glitches, can also be used in other applications. For example, it could be used in the form of a veto [14], where we use knowledge of the presence of scattered-light glitches to down-rank periods of time in gravitational wave data. Additionally, we could use scattered-light glitches previously identified by tools such as Gravity Spy [22] and target known scattered-light glitches with the ArchEnemy search pipeline.

As a final note, while we acknowledge that the sensitivity improvements that we have observed (\(\sim 1\%\)) are very modest, the concept of removing scattered-light glitches, or other identified glitch classes, from the data prior to matched filtering for compact binary mergers is one that we encourage others to explore further. An increase in the rate of events or in the rate of scattered-light glitches in future observing runs will mean an increase in the number of affected events; such techniques offer a method for mitigating the effect that these non-Gaussianities will have on the search, maximizing the number of observations that can be made.

## Acknowledgments

We would like to thank Derek Davis for their useful comments on this work and the manuscript, and also Rhiannon Udall for helpful discussion. AT would like to thank Laura Nuttall, Connor McIsaac, Connor Weaving and Ronaldas Macas for constant feedback, suggestions and debugging help during the development of this work. We would like to thank Thomas Dent for suggesting the name "ArchEnemy". AT was supported by the Science and Technology Facilities Council through the DISCnet Centre for Doctoral Training grant ST/V506977/1. GCD, IH and AL thank the STFC for support via the grants ST/T000333/1 and ST/V005715/1.
The authors are grateful for computational resources provided by Cardiff University supported by STFC grant ST/I006285/1 and the LIGO Laboratory supported by the National Science Foundation Grants PHY-0757058 and PHY-0823459. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan. This work carries LIGO document number P2200393. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
2307.14579
Neural Representation-Based Method for Metal-induced Artifact Reduction in Dental CBCT Imaging
This study introduces a novel reconstruction method for dental cone-beam computed tomography (CBCT), focusing on effectively reducing metal-induced artifacts commonly encountered in the presence of prevalent metallic implants. Despite significant progress in metal artifact reduction techniques, challenges persist owing to the intricate physical interactions between polychromatic X-ray beams and metal objects, which are further compounded by the additional effects associated with metal-tooth interactions and factors specific to the dental CBCT data environment. To overcome these limitations, we propose an implicit neural network that generates two distinct and informative tomographic images. One image represents the monochromatic attenuation distribution at a specific energy level, whereas the other captures the nonlinear beam-hardening factor resulting from the polychromatic nature of X-ray beams. In contrast to existing CT reconstruction techniques, the proposed method relies exclusively on the Beer--Lambert law, effectively preventing the generation of metal-induced artifacts during the backprojection process commonly implemented in conventional methods. Extensive experimental evaluations demonstrate that the proposed method effectively reduces metal artifacts while providing high-quality image reconstructions, thus emphasizing the significance of the second image in capturing the nonlinear beam-hardening factor.
Hyoung Suk Park, Kiwan Jeon, Jin Keun Seo
2023-07-27T01:57:06Z
http://arxiv.org/abs/2307.14579v1
# Neural Representation-Based Method for Metal-induced Artifact Reduction in Dental CBCT Imaging ###### Abstract This study introduces a novel reconstruction method for dental cone-beam computed tomography (CBCT), focusing on effectively reducing metal-induced artifacts commonly encountered in the presence of prevalent metallic implants. Despite significant progress in metal artifact reduction techniques, challenges persist owing to the intricate physical interactions between polychromatic X-ray beams and metal objects, which are further compounded by the additional effects associated with metal-tooth interactions and factors specific to the dental CBCT data environment. To overcome these limitations, we propose an implicit neural network that generates two distinct and informative tomographic images. One image represents the monochromatic attenuation distribution at a specific energy level, whereas the other captures the nonlinear beam-hardening factor resulting from the polychromatic nature of X-ray beams. In contrast to existing CT reconstruction techniques, the proposed method relies exclusively on the Beer-Lambert law, effectively preventing the generation of metal-induced artifacts during the backprojection process commonly implemented in conventional methods. Extensive experimental evaluations demonstrate that the proposed method effectively reduces metal artifacts while providing high-quality image reconstructions, thus emphasizing the significance of the second image in capturing the nonlinear beam-hardening factor. Computerized tomography, Metal artifact reduction, Beam hardening effect, Neural Radiance Fields.

## I Introduction

Metal artifact reduction (MAR) in dental cone-beam computed tomography (CBCT) is challenging owing to the prevalence of metallic implants in patients. Multiple metallic objects, such as dental implants, in the scanned region can result in severe computed tomography (CT) image artifacts owing to the complex physical interactions between the polychromatic X-ray beams and the metal objects. However, despite significant progress in MAR methods over the past four decades, existing approaches have shown limited performance in effectively reducing metal artifacts in dental CBCT environments, where multiple metal inserts occupy a significant area. Dental CBCT has gained popularity as a cost-effective and low-radiation alternative to multidetector CT (MDCT) in dental clinics. However, a significant drawback is that its inverse problem is more challenging than that of MDCT. Specifically, it poses a highly complex and nonlinear challenge, primarily attributed to multiple factors, including intricate metal-bone and metal-tooth interactions, photon starvation, field-of-view truncation, detector offset, and scattering. Metal-induced artifacts stem from the mismatch between the forward models employed in conventional reconstruction algorithms (such as filtered backprojection (FBP) [1] and Feldkamp-Davis-Kress (FDK) [2]) and the polychromatic nature of X-ray beams. X-ray beams in dental CBCT comprise photons with energies ranging from a minimum (e.g., 0 keV) to a peak energy (e.g., between 60 and 120 keV) [3]. However, these conventional algorithms overlook the polychromatic nature of X-ray beams, thus leading to a discrepancy between the sinogram data and the range space of the forward operator, such as the Radon transform.
This discrepancy can result in widespread artifacts in the reconstructed image, because the reconstruction process aims to minimize the discrepancy between the forward projection of the image and the measured sinogram. Over the past four decades, numerous methods for MAR have been developed, including projection-based methods [4, 5, 6, 7, 8, 9, 10], iterative reconstruction methods [11, 12, 13, 14, 15], dual-energy CT methods [16, 17, 18], and photon counting methods [19, 20]. Projection-based methods may encounter difficulties in correcting distorted data, particularly when metal objects are large or complex. Iterative methods can achieve superior results compared with projection-based methods; however, they have limitations in accurately modeling complex interactions between X-rays and metal objects. Dual-energy methods improve the accuracy of material identification and artifact reduction; however, they require specialized hardware or software and an increased radiation dose. Photon counting is a promising technology that has recently gained attention for its potential application in MAR [19, 20]. However, it may not be suitable for dental CBCT because of the high cost of photon-counting detectors. Recently, deep learning algorithms have been widely utilized for MAR in X-ray CT and can be roughly classified into three categories: image-domain learning [21, 22, 23], projection-domain learning [24], and dual-domain learning [25, 26]. The abovementioned methods require numerous paired metal-affected and metal-free CT scans for network training. However, obtaining paired datasets in clinical practice remains challenging. Furthermore, the performance of deep learning methods can degrade considerably when applied to CT scans acquired under acquisition conditions or with CT scanners that differ from those used for training.

To address the intricate challenge of MAR in dental CBCT, we thoroughly investigated the limitations of conventional methods, such as the FBP and FDK algorithms. Recognizing the need for an innovative approach that circumvents the backprojection process commonly used in these methods, and its tendency to generate metal-induced artifacts, we propose a novel MAR algorithm. Recently, neural radiance fields (NeRFs) [27] in computer vision have demonstrated considerable potential for representing 3D scenes from 2D camera data using deep neural networks. Inspired by this, we propose a CT reconstruction method that utilizes the inherent capabilities of neural representations to generate two distinct, informative tomographic images. One image represents the monochromatic attenuation distribution at a specific energy level, whereas the other captures the nonlinear beam-hardening factor stemming from the polychromatic nature of X-ray beams. In contrast to existing CT reconstruction techniques, the proposed method relies exclusively on the Beer-Lambert law, effectively preventing the generation of metal-induced artifacts during the backprojection process commonly employed in conventional methods. Figure 1 shows the schematic diagram of the proposed method. The efficacy of the proposed method was assessed through evaluations on realistic simulated and phantom experiment datasets. The results demonstrated increased efficiency in reducing metal artifacts while preserving the morphological structures around metallic objects. Furthermore, the proposed method offers promising performance even with photon starvation.
## II Mathematical framework

In dental CBCT, a cone-shaped X-ray beam is directed through a patient's head while they are positioned between an X-ray source and a flat-panel detector housed in a gantry. The gantry is rotated to allow the X-ray beam to pass through the patient's head from various angles. During this process, the planar detector acquires the CBCT projection data denoted as \(\text{P}(\varphi,u,v)\), where \(\varphi\in[0,2\pi)\) represents the projection angle and \((u,v)\) represents the position on the planar detector. The position is scaled using the ratio of the distance between the X-ray source and the detector plane to the distance between the source and the rotation axis. The sinogram P acquired from low-dose dental CBCT can be described by the expression: \[\text{P}=\mathcal{S}_{\text{truncation}}(\text{P}_{\text{full}}), \tag{1}\] where \(\text{P}_{\text{full}}\) denotes the corresponding sinogram acquired using a wide-detector CBCT without any offset, thus providing the complete sinogram information; and \(\mathcal{S}_{\text{truncation}}\) represents the truncation operator determined by the size and offset configuration of the detector. The main objective is to use the truncated data P to reconstruct the scalar value \(\mu(\mathbf{x})\) that represents the attenuation coefficient at a fixed energy level \(E_{0}\) and a specific position \(\mathbf{x}=(x,y,z)\) in world coordinates. Under the idealized monochromatic assumption, there exists a linear X-ray transform, denoted by \(\mathcal{T}_{\text{lin}}\) (such as the Radon or cone-beam transform), which maps the CT image to the projection data as follows: \[\text{P}=\mathcal{T}_{\text{lin}}\ \mu. \tag{2}\] However, this monochromatic model is inaccurate because the X-ray beams used in these scans consist of photons with a range of energies. Thus, the X-ray attenuation coefficient distribution, denoted by \(\mu_{E}(\mathbf{x})\), varies with the position \(\mathbf{x}\) and the photon energy level \(E\). Consider the path of the X-ray beam from the source position \(\mathbf{o}_{\varphi}\) to the detector position \(\mathbf{x}_{\varphi,u,v}\) in world coordinates. Owing to the polychromatic nature of X-ray beams, the projection data \(\text{P}(\varphi,u,v)\) follow the Beer-Lambert law [28, 29]: \[\text{P}(\varphi,u,v)=-\ln\left(\int_{E_{\text{min}}}^{E_{\text{max}}}\eta(E)\exp\left(-\int_{\ell_{\varphi,u,v}}\mu_{E}\,ds\right)dE\right), \tag{3}\] where \(\int_{\ell_{\varphi,u,v}}\mu_{E}\,ds\) is the line integral of \(\mu_{E}\) over the ray \(\ell_{\varphi,u,v}\) joining the source position \(\mathbf{o}_{\varphi}\) and detector position \(\mathbf{x}_{\varphi,u,v}\); and \(\eta(E)\) represents the fractional energy at photon energy \(E\) in the spectrum of the X-ray source [30], with its support being the interval \([E_{\text{min}},E_{\text{max}}]\) and \(\int_{\mathbb{R}}\eta(E)dE=1\).

Fig. 1: Schematic diagram of the proposed method for metal artifact reduction (MAR) in dental cone-beam computed tomography (CBCT). The key aspect of the proposed method is that, in contrast to existing CT reconstruction techniques, it relies exclusively on the multilayer perceptron and the formula for P.
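For concreteness, (3) can be evaluated numerically for a single ray as follows (a minimal sketch; the discretized spectrum and the per-energy-bin line integrals of \(\mu_{E}\) are assumed inputs):

```python
import numpy as np

def polychromatic_projection(eta, line_integrals, dE):
    """Evaluate eq. (3) for one ray on a discretized energy grid.

    eta: spectrum samples on the grid, normalized so that sum(eta) * dE = 1.
    line_integrals: the line integral of mu_E along the ray, per energy bin.
    """
    return -np.log(np.sum(eta * np.exp(-line_integrals)) * dE)
```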
### _Inherent drawbacks of methods using FBP or FDK_

To solve the ill-posed problem, a regularized least squares method of the following form can be used: \[\mu_{*}=\underset{\mu}{\text{argmin}}\,\|\mathrm{P}-\mathcal{T}_{\text{lin}}\ \mu\|_{\ell_{2}}^{2}+\gamma\,\text{Reg}(\mu), \tag{4}\] where \(\text{Reg}(\mu)\) is a regularization term encoding prior knowledge of artifact-free and noise-free CBCT images; \(\|\cdot\|_{\ell_{2}}\) denotes the standard Euclidean norm; and \(\gamma\) is the regularization parameter controlling the trade-off between the fidelity term and the regularity. The linear operator can be expressed as follows: \[\mathcal{T}_{\text{lin}}:\mu\in\mathbb{R}^{V}\mapsto\mathrm{P}\in\mathbb{R}^{S\times D}, \tag{5}\] where \(V\) denotes the number of voxels in the CBCT images, \(S\) denotes the number of views, and \(D\) denotes the number of detector cells. According to the Hilbert projection theorem, the Hilbert space \(\mathcal{H}=\mathbb{R}^{S\times D}\) can be decomposed as: \[\mathcal{H}=\mathcal{H}^{sino}\oplus\mathcal{H}^{\perp}, \tag{6}\] where \(\mathcal{H}^{sino}=\{\mathcal{T}_{\text{lin}}\mu:\mu\in\mathbb{R}^{V}\}\) is the range space, \(\mathcal{H}^{\perp}\) is its orthogonal complement, and \(\oplus\) denotes the orthogonal direct sum. Hence, \(\mathrm{P}\) can be decomposed into \[\mathrm{P}=\mathrm{P}^{sino}+\mathrm{P}^{\perp}, \tag{7}\] where \(\mathrm{P}^{sino}\in\mathcal{H}^{sino}\) and \(\mathrm{P}^{\perp}\in\mathcal{H}^{\perp}\). Thus, the problem is equivalent to: \[\mu_{*}=\underset{\mu}{\text{argmin}}\,\|\mathrm{P}-\mathrm{P}^{\perp}-\mathcal{T}_{\text{lin}}\ \mu\|_{\ell_{2}}^{2}+\gamma\,\text{Reg}(\mu). \tag{8}\] Note that \(\mathcal{T}_{\text{lin}}\) maps an arbitrary single-voxel image to the corresponding sinusoidal curve in the sinogram space \(\mathcal{H}\). Hence, any single-pixel mismatch in \(\mathrm{P}\) leads to a sinusoidal global change \(\mathrm{P}^{\perp}\) when fitting the data onto the range space \(\mathcal{H}^{sino}\). Thus, a local correction of the mismatch in \(\mathrm{P}\) is highly desirable; however, this is not possible within the above least-squares framework. Global matching of \(\mathrm{P}\) by subtracting \(\mathrm{P}^{\perp}\) produces streaking or shadowing artifacts (see Fig. 2). To provide a rigorous explanation of cupping and streaking artifacts for metallic objects in CT imaging, we focus on the fan-beam CT model, where we restrict \(\mathrm{P}\) to the detector row \(v=0\), i.e., \(\mathrm{P}(\varphi,u,0)\). We can then represent \(\mathcal{T}_{\text{lin}}\) as a composition of the Radon transform and the data-filtering operator that converts the fan-beam projection data into a parallel-beam sinogram. To explain how \(\mathrm{P}^{\perp}\) destroys the global structure of \(\mathrm{P}\), we examine a simplified model comprising two disk-shaped metallic objects, as shown in Fig. 2. Specifically, the desired ideal CT image can be represented as \(\mu=c\chi_{D_{1}\cup D_{2}}\), where \(c\) is a constant, \(D_{1}\) and \(D_{2}\) are disks of equal radius, and \(\chi_{D}\) denotes the characteristic function of a region \(D\), taking the value one inside \(D\) and zero otherwise. To analyze the projection data \(\mathrm{P}\), we introduce \(\mathrm{P}_{D_{1}}\) to denote the projection data solely related to \(D_{1}\), and \(\mathrm{P}_{D_{2}}\) for \(D_{2}\).
Interestingly, \(\mathrm{P}_{D_{1}}\) and \(\mathrm{P}_{D_{2}}\) lie within the range space but yield cupping artifacts [31, 32]. Therefore, \(\mathrm{P}_{D_{1}}\) and \(\mathrm{P}_{D_{2}}\) are consistent and \(\mathrm{P}_{D_{1}}^{\perp}=0=\mathrm{P}_{D_{2}}^{\perp}\). By contrast, \(\mathrm{P}\) exhibits inconsistency, thus leading to \(\mathrm{P}^{\perp}\neq 0\), as shown in Fig. 2. Here, \(\mathrm{P}^{\perp}\) was computed as \(\mathrm{P}^{\perp}=\mathrm{P}-\mathcal{R}\mathcal{R}^{-1}\mathrm{P}\), where \(\mathcal{R}\) and \(\mathcal{R}^{-1}\) denote the Radon transform and FBP operators, respectively (a numerical sketch of this computation is given below, after (11)). Consider a scenario in which an X-ray beam passes through both disks within a projection angle range of \(4\pi/9\) to \(5\pi/9\). Thus, \(\mathrm{P}(\phi,u)\neq\mathrm{P}_{D_{1}}(\phi,u)+\mathrm{P}_{D_{2}}(\phi,u)\) for \(\phi\) within the range \([4\pi/9,5\pi/9]\), whereas \(\mathrm{P}(\phi,u)=\mathrm{P}_{D_{1}}(\phi,u)+\mathrm{P}_{D_{2}}(\phi,u)\) holds true for \(\phi\) outside this interval. Based on the sinogram consistency condition for \(\mathrm{P}^{sino}\), it follows that for all \(\phi\in[4\pi/9,5\pi/9]\) and \(\phi^{\prime}\notin[4\pi/9,5\pi/9]\), \[\int(\mathrm{P}(\phi,u)-\mathrm{P}^{\perp}(\phi,u))du=\int(\mathrm{P}(\phi^{\prime},u)-\mathrm{P}^{\perp}(\phi^{\prime},u))du. \tag{9}\] This indicates that \(\mathrm{P}^{\perp}\), while correcting specific regions, affects the global structure of \(\mathrm{P}\) in a broader sense. As shown in Fig. 2, \(\mathrm{P}^{\perp}\), used for rectifying the mismatch, has a broad impact on the entire sinogram, thus leading to the deterioration of its global structure and the introduction of streaking and shadowing artifacts. Existing methods that use the backprojection process cannot offer a localized correction to \(\mathrm{P}\) within the projection angle range \([4\pi/9,5\pi/9]\) without influencing other segments of the sinogram \(\mathrm{P}\). Consequently, novel methods that address this issue and provide localized corrections to the relevant regions of the sinogram, while avoiding an adverse impact on other portions, must be developed.

### _Fundamental structure of global artifacts caused by sinogram inconsistency_

This section investigates the structure of artifacts caused by a sinogram inconsistency. Assume that \(\mathrm{P}\) has a local mismatch \(\mathrm{P}^{\text{mis}}\) whose support occupies a small area in the sinogram space. The corrected sinogram \(\mathrm{P}-\mathrm{P}^{\text{mis}}\) is in the range space, so that there exists \(\mu_{*}\) with \(\mathcal{T}_{\text{lin}}\ \mu_{*}=\mathrm{P}-\mathrm{P}^{\text{mis}}\). To simplify notation, we denote a position \((\varphi,u,v)\) in sinogram space as \(\xi=(\varphi,u,v)\). Let us consider the scenario where a sinogram mismatch occurs at a single point \(\xi_{0}=(\varphi_{0},u_{0},v_{0})\). If this mismatch is a Dirac function \(\delta_{\xi_{0}}\), then the corresponding artifact can be represented as: \[\Gamma_{\xi_{0}}=\underset{\mu}{\text{argmin}}\,\|\delta_{\xi_{0}}-\mathcal{T}_{\text{lin}}\ \mu\|_{\ell_{2}}^{2}. \tag{10}\] Then, the artifacts caused by the sinogram mismatch \(\mathrm{P}^{\text{mis}}\) can be expressed as: \[\Upsilon(\mathbf{x})=\int_{\Omega}\Gamma_{\xi}(\mathbf{x})\mathrm{P}^{\text{mis}}(\xi)d\xi, \tag{11}\] where \(\Omega\) is the support of \(\mathrm{P}^{\text{mis}}\).
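The inconsistency \(\mathrm{P}^{\perp}=\mathrm{P}-\mathcal{R}\mathcal{R}^{-1}\mathrm{P}\) used for Fig. 2 above can be sketched with an off-the-shelf parallel-beam pair as follows (our own illustration; scikit-image's `radon`/`iradon` stand in for \(\mathcal{R}\) and \(\mathcal{R}^{-1}\), so discretization makes the result approximate):

```python
import numpy as np
from skimage.transform import radon, iradon

def sinogram_inconsistency(P, theta):
    """Approximate P_perp = P - R(R^{-1} P) for a parallel-beam sinogram
    P of shape (num_detector_bins, num_angles), with theta in degrees."""
    recon = iradon(P, theta=theta, filter_name="ramp")  # R^{-1} P (FBP)
    return P - radon(recon, theta=theta)                # P - R R^{-1} P
```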
**Remark II.1**: _To understand metal-induced artifacts more intuitively, let us consider a simplified scenario of a bichromatic model with energies of 64 and 80 keV, where the fractional energy is described as \(\eta(E)=\frac{1}{2}\delta(E-64)+\frac{1}{2}\delta(E-80)\). We want to reconstruct an image that is a \(3\times 3\) pixel matrix, represented as: \[\left(\begin{array}{ccc}\mu_{1,1}&\mu_{1,2}&\mu_{1,3}\\ \mu_{2,1}&\mu_{2,2}&\mu_{2,3}\\ \mu_{3,1}&\mu_{3,2}&\mu_{3,3}\end{array}\right),\] where \(\mu_{2,1}=\mu_{2,3}\) correspond to metal and the remaining pixels are air. We expect the reconstructed image to be of the form \[\left(\begin{array}{ccc}0&0&0\\ c&0&c\\ 0&0&0\end{array}\right), \tag{12}\] for some constant \(c\) associated with the attenuation coefficient of the metal. The attenuation coefficients of the metal are 64 at \(E=64\) keV and 5 at \(E=80\) keV. Assume that we have the projection data of three angles \(\varphi=0,\frac{\pi}{4},\frac{\pi}{2}\). Then, the conventional CT reconstruction problem solves the following system: \[\left\{\begin{array}{lcll}\mu_{1,1}+\mu_{2,1}+\mu_{3,1}&=&\text{P}(0,1)&=5.7\\ \mu_{1,2}+\mu_{2,2}+\mu_{3,2}&=&\text{P}(0,2)&=0\\ \mu_{1,3}+\mu_{2,3}+\mu_{3,3}&=&\text{P}(0,3)&=5.7\\ \mu_{2,1}+\mu_{3,2}&=&\text{P}(\pi/4,1)&=5.7\\ \mu_{1,1}+\mu_{2,2}+\mu_{3,3}&=&\text{P}(\pi/4,2)&=0\\ \mu_{1,2}+\mu_{2,3}&=&\text{P}(\pi/4,3)&=5.7\\ \mu_{3,1}+\mu_{3,2}+\mu_{3,3}&=&\text{P}(\pi/2,1)&=0\\ \mu_{2,1}+\mu_{2,2}+\mu_{2,3}&=&\text{P}(\pi/2,2)&=10.7\\ \mu_{1,1}+\mu_{1,2}+\mu_{1,3}&=&\text{P}(\pi/2,3)&=0\end{array}\right. \tag{13}\] where 10.7 comes from \(10.7\approx-\log(0.5\exp(-64\times 2)+0.5\exp(-5\times 2))\) and 5.7 comes from \(5.7\approx-\log(0.5\exp(-64\times 1)+0.5\exp(-5\times 1))\). The standard CT reconstruction algorithm is to find \(\boldsymbol{\mu}_{\text{CT}}\) such that \[\boldsymbol{\mu}_{\text{CT}}=\underset{\boldsymbol{\mu}}{\text{argmin}}\,\|\mathbf{A}\boldsymbol{\mu}-\text{P}\|_{\ell_{2}}^{2},\] where \(\mathbf{A}\) is the \(9\times 9\) matrix corresponding to the Radon transform in (13) and \(\boldsymbol{\mu}\) is understood as the vectorized image. The reconstructed image obtained using the least-squares solution \(\boldsymbol{\mu}_{\text{CT}}=\mathbf{A}^{\dagger}\text{P}\) (where \(\mathbf{A}^{\dagger}\) denotes the pseudoinverse, since the rows of \(\mathbf{A}\) are linearly dependent) is given by \[\left(\begin{array}{ccc}-1.0&2.2&0.4\\ 6.8&2.5&6.3\\ 0.2&-0.5&0.7\end{array}\right).\] Note that the reconstructed image \(\boldsymbol{\mu}_{\text{CT}}\) significantly deviates from the true solution in (12) owing to the backprojection process \(\mathbf{A}^{T}\text{P}\). This discrepancy can be attributed to the single mismatch observed in the 8th equation of (13), where \(\text{P}(\pi/2,2)=10.7\neq 2\times 5.7\)._
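Remark II.1 can be checked numerically. The sketch below (our own illustration) assembles the nine ray equations of (13) into a matrix, generates the bichromatic data via the Beer-Lambert law, and solves the least-squares problem; since the rows of \(\mathbf{A}\) are linearly dependent, the pseudoinverse (minimum-norm) solution is used, and the exact reconstructed values may differ slightly from those quoted above depending on the solver convention.

```python
import numpy as np

# Pixels vectorized row-wise: (i, j) -> 3 * (i - 1) + (j - 1).
# Each ray lists the pixels it crosses, matching the nine equations in (13).
rays = [
    [0, 3, 6], [1, 4, 7], [2, 5, 8],   # phi = 0
    [3, 7], [0, 4, 8], [1, 5],         # phi = pi / 4
    [6, 7, 8], [3, 4, 5], [0, 1, 2],   # phi = pi / 2
]
A = np.zeros((9, 9))
for row, pixels in enumerate(rays):
    A[row, pixels] = 1.0

mu_64 = np.zeros(9); mu_64[[3, 5]] = 64.0  # metal attenuation at 64 keV
mu_80 = np.zeros(9); mu_80[[3, 5]] = 5.0   # metal attenuation at 80 keV

# Bichromatic Beer-Lambert data, reproducing the 5.7 and 10.7 entries.
P = -np.log(0.5 * np.exp(-A @ mu_64) + 0.5 * np.exp(-A @ mu_80))

mu_ct = np.linalg.pinv(A) @ P
print(mu_ct.reshape(3, 3).round(1))  # deviates markedly from (12)
```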
### _Implicit neural representation-based MAR_

Conventional CBCT reconstructions use a pixel- or voxel-based approach to represent images; however, using this approach in low-dose dental CBCT is challenging owing to the large dimension of the solution space and the inconsistency of the data in the presence of metal implants. To address these issues, it is crucial to incorporate an image prior that constrains the relationships between pixels based on the underlying head anatomy. Although regularization techniques are commonly used for this purpose, their performance is limited because they lack global control between pixels. By contrast, neural representations using multilayer perceptrons (MLPs) utilize implicit representations that can capture complex relationships between image pixels more efficiently. These representations enable a significant reduction in the dimensions of the solution space, thus offering a more efficient and accurate reconstruction with highly undersampled data.

Fig. 2: Characterization of metal-induced artifacts observed in a phantom comprising two disk-shaped objects. The projection data P exhibits the local inconsistency \(\text{P}-(\text{P}_{D_{1}}+\text{P}_{D_{2}})\), which leads to the emergence of global artifacts, such as streaking and shadowing, in the reconstructed CT image. These artifacts manifest when fitting P onto \(\text{P}^{sino}\) in the range space \(\mathcal{H}^{sino}\) using the filtered backprojection (FBP) method. In the bottom figures, the symbol \(\mathcal{R}^{-1}\) represents the FBP operation. The middle image in the bottom row emphasizes cupping artifacts in the disk region.

Our approach to solving the inverse problem of dental CBCT is inspired by the recent success of NeRF in accurately representing 3D scenes derived from 2D camera data using a deep learning network. The proposed approach uses an MLP to encode the CT representation. The MLP takes a 3D point \(\mathbf{x}=(x,y,z)\) as the input and outputs the attenuation coefficient \(\mu(\mathbf{x})=\mu(\mathbf{x},E_{0})\) and its energy-dependent beam-hardening factor \(\sigma(\mathbf{x}):=\frac{\partial}{\partial E}\mu(\mathbf{x},E_{0})\): \[f_{\Theta}:\mathbf{x}\mapsto(\mu(\mathbf{x}),\sigma(\mathbf{x})). \tag{14}\] We use the implicit representation \(f_{\Theta}\) rather than the standard pixel-based expression for \(\mu\) because it provides a more concise representation of the CT image while producing the same \(\mu(\mathbf{x})\). This compact implicit expression allows us to solve the inverse problem with highly undersampled data P. To learn the function \(f_{\Theta}\), we minimize the difference between the measured data P (ground truth) and the predicted data \(\hat{\text{P}}\) generated using the output of \(f_{\Theta}\). The loss function is defined as \[\mathcal{L}=\frac{1}{|\mathcal{S}|}\sum_{(\varphi,u,v)\in\mathcal{S}}|\hat{\text{P}}(\varphi,u,v)-\text{P}(\varphi,u,v)|, \tag{15}\] where \(\mathcal{S}\) represents the set of X-rays that pass through the detector positions. Next, we explain how to compute \(\hat{\text{P}}(\varphi,u,v)\) from \(f_{\Theta}\). Consider the X-ray path \(\mathbf{r}(t)=\mathbf{o}_{\varphi}+t\mathbf{d}_{\varphi,u,v}\), \(t\in[0,L]\), where \(\mathbf{o}_{\varphi}\) is the X-ray source position, \(\mathbf{d}_{\varphi,u,v}=(\sin\varphi,-\cos\varphi,\beta v)\) is the direction vector of the X-ray corresponding to the position \((\varphi,u,v)\) in the projection data P, and \(L\) is the path length. We use \(f_{\Theta}\) to compute \(\mu(\mathbf{r}(t))\) and \(\sigma(\mathbf{r}(t))\). A careful analysis reveals that \(\hat{\text{P}}(\varphi,u,v)\) can be approximately computed as follows: \[\hat{\text{P}}(\varphi,u,v)=\int_{0}^{L}\mu(\mathbf{r}(t))dt-\ln\left(\frac{\sinh\left(\lambda\int_{0}^{L}\sigma(\mathbf{r}(t))dt\right)}{\lambda\int_{0}^{L}\sigma(\mathbf{r}(t))dt}\right), \tag{16}\] where \(\lambda>0\) is a constant depending on the CBCT scanning system. Now, we provide the proof of (16).
From the Beer-Lambert law (3), using the first-order Taylor expansion \(\mu(\mathbf{r}(t),E)\approx\mu(\mathbf{r}(t),E_{0})+(E-E_{0})\frac{\partial}{\partial E}\mu(\mathbf{r}(t),E_{0})\), we have \[\text{P}(\varphi,u,v)\approx-\ln\int_{E_{\text{min}}}^{E_{\text{max}}}\eta(E)\exp\left[-\int_{0}^{L}\mu(\mathbf{r}(t),E_{0})+(E-E_{0})\frac{\partial}{\partial E}\mu(\mathbf{r}(t),E_{0})\,dt\right]dE, \tag{17}\] where \(E_{0}\) is a reference energy level and the partial derivative of the attenuation coefficient \(\mu\) with respect to the photon energy \(E\) is evaluated at \(E_{0}\). Approximating \(\eta(E)\) by a uniform density on the interval \([E_{0}-\lambda,E_{0}+\lambda]\) and substituting \(s=(E-E_{0})/\lambda\), this expression leads to the following approximation: \[\text{P}(\varphi,u,v)\approx\int_{0}^{L}\mu(\mathbf{r}(t))dt-\ln\int_{-1}^{1}\frac{1}{2}\exp\left[-\lambda s\int_{0}^{L}\sigma(\mathbf{r}(t))dt\right]ds. \tag{18}\] Direct computation of the integral in (18), which equals \(\sinh\left(\lambda\int_{0}^{L}\sigma(\mathbf{r}(t))dt\right)/\left(\lambda\int_{0}^{L}\sigma(\mathbf{r}(t))dt\right)\), yields (16), which completes the proof.

In practice, accurately estimating the parameter \(\lambda\) in \(\hat{\text{P}}\) is challenging. Alternatively, for any constant \(\tilde{\lambda}\), \(\hat{\text{P}}\) can be reformulated as follows: \[\hat{\text{P}}(\varphi,u,v)=\int_{0}^{L}\mu(\mathbf{r}(t))dt-\ln\left(\frac{\sinh\left(\tilde{\lambda}\left(\int_{0}^{L}\tilde{\sigma}(\mathbf{r}(t))dt+\varepsilon\right)\right)}{\tilde{\lambda}\left(\int_{0}^{L}\tilde{\sigma}(\mathbf{r}(t))dt+\varepsilon\right)}\right), \tag{19}\] where \(\tilde{\sigma}\) is a scaled version of \(\sigma\), expressed as \(\tilde{\sigma}=(\lambda/\tilde{\lambda})|\sigma|\). Based on this formulation, we train \(f_{\Theta}\) to provide \((\mu,\tilde{\sigma})\) with a suitably selected \(\tilde{\lambda}\). In eq. (19), to ensure training stability and avoid division by zero, we incorporate a small positive value \(\varepsilon>0\) in the numerator and denominator of \(\hat{\text{P}}\). In this study, we consistently set \(\tilde{\lambda}\) to three, which demonstrated stable performance across our experiments.

### _Implicit Neural Representations with Sinusoidal Activations_

To enhance the ability of the network \(f_{\Theta}\) to accurately model data with high-frequency variations, \(f_{\Theta}\) is designed with a sinusoidal activation function [33]: \[f_{\Theta}(\mathbf{x})=\mathbf{W}_{n}\left(\psi_{n-1}\circ\psi_{n-2}\circ\cdots\circ\psi_{0}\right)(\mathbf{x})+\mathbf{b}_{n}, \tag{20}\] where \(\mathbf{W}_{i}\) and \(\mathbf{b}_{i}\) are the weight and bias of the \(i^{th}\) layer of the network, respectively. Further, \(\psi_{i}\) is the \(i^{th}\) layer of the network and is expressed as: \[\psi_{i}(\mathbf{x}_{i})=\sin(\mathbf{W}_{i}\mathbf{x}_{i}+\mathbf{b}_{i}). \tag{21}\] The sinusoidal activation function can better represent the function, its derivative, and Laplacian information compared with the positional encoding method [34, 27], which applies a series of sine and cosine transforms to the input coordinates \(\mathbf{x}\). In our experiments, \(f_{\Theta}\) consisted of five fully connected layers between the input and output layers. Each fully connected layer comprises 128 nodes, whereas the input and output layers each consist of two nodes. The network weights were updated using the Adam optimizer [35] with a learning rate of \(5\times 10^{-4}\). The training process was terminated when the loss function value in (15) fell below \(5\times 10^{-3}\) for the numerical simulation and \(9\times 10^{-3}\) for the phantom experiment. The training procedure was implemented using PyTorch [36] on a system equipped with two CPUs (Intel(R) Xeon Gold 6226R, 2.9 GHz) and a GPU (NVIDIA RTX 3090, 24GB). Training the network per 2D CT image took approximately 3-5 min.
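The components above can be sketched compactly as follows (our own illustration, not the authors' released code): a sine-activated MLP with the stated architecture, and the per-ray evaluation of \(\hat{\text{P}}\) in (19). The SIREN-specific weight initialization and frequency scaling of [33] are omitted for brevity, and the ray-point sampler is assumed to exist elsewhere.

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    def forward(self, x):
        return torch.sin(x)

def make_f_theta(in_dim=2, hidden=128, depth=5):
    # Five 128-node sine-activated hidden layers; outputs (mu, sigma_tilde).
    layers = [nn.Linear(in_dim, hidden), Sine()]
    for _ in range(depth - 1):
        layers += [nn.Linear(hidden, hidden), Sine()]
    layers.append(nn.Linear(hidden, 2))
    return nn.Sequential(*layers)

def predicted_projection(f_theta, ray_points, dt, lam_tilde=3.0, eps=1e-4):
    # ray_points: (num_rays, num_samples, in_dim) coordinates along each ray;
    # dt: sample spacing. Implements eq. (19) with the eps stabilizer.
    # Note: for very large line integrals a numerically stable form of
    # ln(sinh(x)/x) may be needed to avoid overflow.
    out = f_theta(ray_points)
    mu, sigma_tilde = out[..., 0], out[..., 1].abs()
    line_mu = mu.sum(dim=-1) * dt
    line_sigma = sigma_tilde.sum(dim=-1) * dt + eps
    x = lam_tilde * line_sigma
    return line_mu - torch.log(torch.sinh(x) / x)

f_theta = make_f_theta()
optimizer = torch.optim.Adam(f_theta.parameters(), lr=5e-4)
# Training step (sketch):
# loss = (predicted_projection(f_theta, pts, dt) - P_measured).abs().mean()
```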
**Remark II.2**: _In the field of medical tomographic image reconstruction, the conventional approach typically relies on a pixel-based (or voxel-based) representation, where each pixel corresponds to a dimension in the solution space. This is particularly challenging when faced with a highly ill-posed reconstruction problem: the objective is to explore a vast solution space and identify a single point that accurately represents the desired image. However, owing to the inherent high-resolution nature of medical imaging, the solution space is dominated by noise-like images, while practical solutions that resemble actual medical images occupy an incredibly small, practically negligible, fraction of it. To mitigate these difficulties, researchers have developed various regularization techniques over the past several decades. These techniques aim to impose strong constraints on the solution to improve the reconstruction outcomes. However, these regularization methods often exhibit limited performance and a loss of intricate details in the images. Implicit neural representation through MLPs shows promise for overcoming these limitations by optimizing its parameters to effectively search for the most appropriate solution within its architecture, thus offering a potential breakthrough in the field._

## III Results

### _Numerical Simulation_

To assess the effectiveness of the proposed method, we conducted a performance evaluation using a 2D numerical phantom. The phantom consisted of teeth, bones, and multiple crowns, as shown in Fig. 3. Individual teeth were segmented as in [37], and a virtual crown was generated using dilation and erosion functions as in [38, 39]. The geometries of the teeth and bones were obtained by manually segmenting a real CBCT image. The generated teeth, bone, and crowns were projected based on the polychromatic X-ray model in (3). Here, we utilized the attenuation coefficients provided by the National Institute of Standards and Technology [40] along with the energy spectrum \(\eta(E)\) generated using the Spektr software [41] at a tube voltage of 100 kVp. The crowns were composed of titanium. Additionally, we added Poisson and electronic noise to the projection data, while disregarding other factors such as photon starvation, scattering, and nonlinear partial volume effects. All images of size \(413\times 413\) were reconstructed with a pixel size of \(0.4\) mm \(\times\) \(0.4\) mm.

We compared the performance of the proposed method with that of the FBP and metal beam-hardening correction (MBHC) methods. The FBP images were reconstructed using a standard Ram-Lak filter. In the MBHC method, the beam-hardening artifacts caused by metals were addressed using the following correction formula: \[\phi_{D,\kappa}(\mathbf{x})=-\mathcal{R}^{-1}\left[\ln\left(\frac{\sinh(\kappa\mathcal{R}\chi_{D})}{\kappa\mathcal{R}\chi_{D}}\right)\right](\mathbf{x}). \tag{22}\] In this method, we segmented the metal region \(D\) using a simple thresholding approach. The optimal parameter \(\kappa\) was chosen as \(\kappa=3\) based on Equation (17) in [31]. Based on (18), the two parameters \(\kappa\) in (22) and \(\lambda\) in (16) are related as follows: \[\kappa=-\alpha\lambda,\] where the parameter \(\alpha\) is defined as \(\alpha=\frac{\partial}{\partial E}\mu(\mathbf{x},E_{0}),\ \mathbf{x}\in D\).
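A sketch of the MBHC corrector (22) with an off-the-shelf parallel-beam pair is given below (our own illustration; scikit-image's `radon`/`iradon` stand in for the scanner geometry, and a numerically stable form of \(\ln(\sinh(x)/x)\) is used to avoid overflow for thick metal traces):

```python
import numpy as np
from skimage.transform import radon, iradon

def log_sinhc(x):
    # ln(sinh(x)/x) = x + log1p(-exp(-2x)) - ln(2x); stable for large x.
    return x + np.log1p(-np.exp(-2.0 * x)) - np.log(2.0 * x)

def mbhc_corrector(metal_mask, kappa=3.0, eps=1e-6):
    """phi_{D,kappa} = -FBP[ ln( sinh(kappa R chi_D) / (kappa R chi_D) ) ]."""
    theta = np.linspace(0.0, 180.0, max(metal_mask.shape), endpoint=False)
    sino = radon(metal_mask.astype(float), theta=theta)  # R chi_D
    correction_sino = log_sinhc(kappa * sino + eps)
    return -iradon(correction_sino, theta=theta, filter_name="ramp")
```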
Fig. 3 compares the reconstruction results for the numerical phantom. The second and third columns show CT images reconstructed using FBP and MBHC, respectively, whereas the fourth and fifth columns show those of the proposed method. The second and fourth rows show the backgrounds of the reconstructed CT images, which correspond to the air region (i.e., \(\mu(\mathbf{x})=0,\sigma(\mathbf{x})=0\)) without teeth, bones, and crowns. The mean absolute error (MAE) was computed and is listed in the upper-left corner of each background image.

Fig. 3: Comparison of the reconstruction results for the numerical phantom consisting of teeth, bone, and multiple crowns. The second and fourth rows show the background of the reconstructed CT images, which corresponds to the air region without teeth, bone, and crowns.

Evidently, the FBP image suffered from severe streaking and shadowing artifacts, primarily owing to the beam-hardening effect caused by the crowns and teeth. The MBHC method reduced the metal beam-hardening artifacts between the crowns in the FBP image. However, the artifacts from the interaction between the crowns and teeth remained (red arrows in the third column), because the metal beam-hardening corrector \(\phi_{D,\kappa}\) only addresses the interactions between crowns. The proposed method successfully reconstructed the attenuation (\(\mu\)) image and its scaled energy-dependent beam-hardening factor (\(\tilde{\sigma}\)) image. In contrast to the FBP and MBHC methods, the proposed method successfully reduced the streaking and shadowing artifacts in the reconstructed images. Notably, the proposed method mitigated the discretization error introduced during the standard backprojection process (yellow arrows in the third column). Quantitative analysis revealed that the proposed method achieved the lowest MAE compared with the FBP and MBHC methods.

We further investigated the performance of the proposed method under the photon starvation effect. The relationship described in (16) is valid when sufficient X-ray photons reach the detector. Assuming that the metal trace of the numerical phantom was significantly affected by photon starvation, we trained the neural network \(f_{\Theta}\) in (14) using a subset of X-rays, denoted by \(\mathcal{S}_{t}\subseteq\mathcal{S}\), passing through the teeth and bone only. Fig. 4 compares the reconstruction results of the proposed method trained using the sets \(\mathcal{S}\) (labeled 'w/o photon starvation') and \(\mathcal{S}_{t}\) (labeled 'w/ photon starvation'). In the photon starvation case, crown masks were added to the reconstructed image. As indicated by the red arrows, the proposed method trained using \(\mathcal{S}_{t}\) faced challenges in fully restoring the teeth surrounded by crowns, owing to the limited information available for recovery. However, the proposed method successfully recovered the morphological structures of the teeth near the crowns.

Fig. 4: Performance comparison of the proposed method under the photon starvation effect. In the photon starvation case (labeled 'w/ photon starvation'), the proposed neural network is trained using a subset of X-rays that pass only through the teeth and bone. The crowns segmented from the FBP image are additionally added to the reconstructed image.

### _Phantom Experiment_

The phantom experiment was conducted using an industrial CBCT scanner equipped with a flat-panel detector (DUKIN, Korea). The resolution phantom containing three metallic bolts was scanned using a tube voltage of 160 kVp and a tube current of 3.0 mAs. The comparison was performed on the sinogram corresponding to the midplane of the CBCT scan. All CBCT images of size \(512\times 512\) were reconstructed with a pixel size of \(0.2\) mm \(\times\) \(0.2\) mm. The MBHC method corrects metal artifacts using (22) with the parameter \(\kappa=1\).

Fig. 5: Comparison of the reconstruction results for the resolution phantom with three metallic bolts.
In the proposed method, the estimate \(\hat{\mathrm{P}}\) in (19) is computed using a fan-beam projection operator [2]. Fig. 5 compares the reconstruction results for the experimental phantom. The first row shows CT images reconstructed using FBP, MBHC, and the proposed method. The insets show enlarged metal regions, highlighting the presence of cupping artifacts. The second row shows background images of the resolution phantom. A background mask was generated manually from the FBP image. The MBHC and proposed methods reduced the cupping artifacts in the reconstructed image. Compared with the MBHC method, the proposed method more effectively reduced the streaking and shadowing artifacts caused by the three metallic bolts while preserving the structures of the resolution phantom (red arrows in the first row). However, as indicated by the yellow arrow, additional artifacts were introduced in the image reconstructed by the proposed method, possibly owing to other causes of metal artifacts, such as scattering. For a quantitative evaluation, MSEs were computed in the background region. The proposed method demonstrated the lowest MSE value.

## IV Discussion and Conclusion

This study presented an innovative approach for MAR in dental CBCT by harnessing the regularization power of implicit neural representation techniques. The supplementary MLP output, which captures the nonlinear beam-hardening factor stemming from the polychromatic nature of the X-ray beams, is critical in generating high-quality cross-sectional images. By integrating the MLP with a modified Beer-Lambert law and incorporating X-ray casting of point samples, the proposed method effectively mitigates beam-hardening artifacts, substantially enhancing the overall image quality and increasing the clinical relevance of dental CBCT imaging. Recently, Kim et al. [42] introduced an implicit neural representation-based approach for CT reconstruction. Their work focused primarily on sparse-view CT reconstruction and did not specifically address the challenging task of MAR. Furthermore, their method relied on existing CT reconstruction techniques. By contrast, our study fills this gap by presenting a novel approach specifically addressing MAR in dental CBCT, thereby paving the way for improved image quality. Because our approach is in its initial stages, it can be further improved, thus holding the potential to revolutionize the field of low-dose CT reconstruction.

Implicit neural representations offer substantial advantages over traditional grid-based representations, such as pixels and voxels, particularly in solving ill-posed image reconstruction problems. A key advantage is their resolution independence: the representation capacity is determined by the capacity of the MLP rather than the grid resolution. MLPs can capture the underlying structure of an image while minimizing redundancy in the representation without sacrificing accuracy or information content. Our ongoing research aims to enhance the proposed method based on implicit neural representation, focusing on two critical aspects: improving computational time and achieving accurate 3D reconstruction in dental CBCT. To enhance computational efficiency, the use of pre-trained parameters can be investigated via transfer learning, specifically leveraging image priors in dental CBCT.
Although our experiments have shown promising capabilities for removing metal-induced artifacts, residual artifacts, particularly thread-like structures, were observed around metal objects. This observation indicates a minor discrepancy between the rendering model used in our method and real-world clinical CBCT data. Therefore, our ongoing research focuses on refining our mathematical model to better align it with the intricacies and nuances of clinical CBCT data.
2305.02562
Conditional and Residual Methods in Scalable Coding for Humans and Machines
We present methods for conditional and residual coding in the context of scalable coding for humans and machines. Our focus is on optimizing the rate-distortion performance of the reconstruction task using the information available in the computer vision task. We include an information analysis of both approaches to provide baselines and also propose an entropy model suitable for conditional coding with increased modelling capacity and similar tractability as previous work. We apply these methods to image reconstruction, using, in one instance, representations created for semantic segmentation on the Cityscapes dataset, and in another instance, representations created for object detection on the COCO dataset. In both experiments, we obtain similar performance between the conditional and residual methods, with the resulting rate-distortion curves contained within our baselines.
Anderson de Andrade, Alon Harell, Yalda Foroutan, Ivan V. Bajić
2023-05-04T05:32:44Z
http://arxiv.org/abs/2305.02562v2
# Conditional and residual methods in scalable coding for humans and machines ###### Abstract We present methods for conditional and residual coding in the context of _scalable coding for humans and machines_. Our focus is on optimizing the rate-distortion performance of the reconstruction task using the information available in the computer vision task. We include an information analysis of both approaches to provide baselines and also propose an entropy model suitable for conditional coding with increased modelling capacity and similar tractability as previous work. We apply these methods to image reconstruction, using, in one instance, representations created for semantic segmentation on the Cityscapes dataset, and in another instance, representations created for object detection on the COCO dataset. In both experiments, we obtain similar performance between the conditional and residual methods, with the resulting rate-distortion curves contained within our baselines. Anderson de Andrade, Alon Harell, Yalda Foroutan, and Ivan V. Bajic School of Engineering Science, Simon Fraser University, Burnaby, Canada {anderson_de_andrade,alon_harell,yalda_foroutan}@sfu.ca,[email protected] learnable compression, scalable coding, conditional coding, residual coding, entropy modelling ## 1 Introduction With the prominence of artificial intelligence, digital content is not only consumed by humans but also by computer programs. This software often analyzes content in different ways, according to its purpose. Depending on the task, only a subset of the available information might be necessary. Moreover, the required information can be represented in a way that is more suitable for the computer program and does not necessarily resemble its original natural representation, which is the form humans usually require to consume such content. In a collaborative setting [1], where edge devices capture signals that are processed and transmitted to cloud services to complete a set of tasks, it is efficient to transmit only the information necessary to achieve these tasks. Creating representations for every subset of tasks does not scale well with the number of tasks. In addition, if information for some tasks has already been transmitted and a superset of the original tasks is now required for the same input, transmitting the new corresponding representation would incur an overhead of redundant information. Thus, we would like to compose the information required for tasks in a scalable fashion [2], in which base representations are shared among multiple tasks and only incremental amounts of information are required for more specific tasks. Creating learnable tasks that make use of different streams of information, some of which are fitted for other purposes, is a challenge [3]. The lower-dimensional manifold induced by a particular task might not be readily usable by a different task. Translating representations from one manifold to another, such that the maximum amount of information is usable in a secondary task, is limited by the modelling capacity of the transformation and the data available [4, 5]. Conditional and residual coding have prevailed as two different approaches to incorporating side information in learnable compression settings. These approaches can leverage dedicated learnable transformations to explicitly transfer information to the target domain. We limit our findings to a common setting in which we have an image reconstruction task and a computer vision task whose representation is shared with the former.
This configuration is referred to as _scalable image coding for humans and machines_ [3]. We present conditional and residual approaches for scalable learnable compression in which we transform the representations to share a common feature space. We derive baselines for these approaches and empirically compare them. Our experiments perform reconstruction of different datasets using representations for semantic image segmentation and object detection. We also present an entropy model with increased modelling potential suitable for conditional coding. ## 2 Related work In learnable compression, an information bottleneck is induced on an intermediate representation between the input and the output [6]. Successful approaches follow a variational framework [7] in which a hyper-prior representation learns the dependencies between the different factors of a latent representation and operates as side information [8, 9, 10, 11]. An auto-regressive entropy model is used to induce the information bottleneck and to entropy code the learnt representations. Recent work on scalable coding for humans and machines applies the ideas of learnable compression to both the reconstruction and computer vision tasks [12, 3, 13]. In these approaches, the reconstruction task uses a dedicated and a shared representation. These representations are concatenated after being decoded and are used as input for a reconstruction model. Through rate-distortion optimization, this approach could create independent representations with little redundancy of information between them, but the results of [3] and [13] show considerable redundancy. This work focuses primarily on efficiently coding the dedicated representation for the reconstruction task. In the conditional approach, the dedicated representation contains all information relevant to reconstruction, but the uncertainty resolved by the shared representation is exploited during coding. In the residual approach, the information of the shared representation is removed from the target representation before coding it and then added back down the pipeline after decoding. Residual and conditional coding in the context of learnable compression have been explored before for video compression [14, 15, 16, 17]. Our formulation is different in that the prediction is completely explained by the original signal and, as such, the information of the residual cannot increase, whereas in video compression this can occur since the previous frame is used to compute the prediction. In [16], it is shown that learnable conditional coding often requires a transformation of the side information, potentially resulting in information loss to a degree where a residual approach could outperform the conditional approach. In this work, we propose to transform the side information in both approaches and show how this can improve the performance of the residual approach with respect to the conditional approach. Many of the existing entropy models for learnable compression support the conditional coding of the target representation given the hyper-prior representation [8]. Recent entropy models extend these ideas to efficiently capture the spatial and dimensional dependencies of these representations by grouping factors together [18] and by reorganizing and parallelizing the decoding order of the spatial locations [19]. In this work, we utilize our conditional information in a similar fashion, but we increase the modelling capacity by augmenting its receptive field and adding scaled residual connections.
## 3 Proposed Methods For an input image \(X\in\mathbb{R}^{C_{x}\times H\times W}\), a lossily-compressed _base_ representation \(Y_{b}=f_{b}(X)\) is learned so as to minimize the distortion \(D_{b}=\mathbb{E}_{X}[d_{b}(g_{b}(Y_{b}),T)]\) with respect to a given computer vision target \(T\), a task distortion function \(d_{b}(\cdot,\cdot)\), and a learnable decoding function \(g_{b}(\cdot)\). In the conditional setting, a lossily-compressed _enhancement_ representation \(Y_{c}=f_{e}(X)\) is learned to minimize the distortion \(D_{c}=\mathbb{E}[d_{e}(\widehat{X},X)];\widehat{X}=g_{e}(Y_{c})\), using an image reconstruction distortion function \(d_{e}(\cdot,\cdot)\) and a learnable decoder \(g_{e}(\cdot)\). All information used for the reconstruction task is contained in \(Y_{c}\), and the information contained in \(Y_{b}\) is utilized to efficiently code \(Y_{c}\). Conditional coding effectively models \(H(Y_{c}|Y_{t})\), where \(Y_{t}=h_{c}(Y_{b})\) is a learnable transformation of \(Y_{b}\) that intuitively has a feature space similar to that of the enhancement representation \(Y_{c}\), so that their similarities can be exploited. Any information that reduces the conditional entropy should be maintained in \(Y_{t}\), since its rate is not penalized. In the residual approach, an analogous representation \(Y_{r}=f_{e}(X_{r});X_{r}=X-X_{p};X_{p}=h_{r}(Y_{b})\) is created to minimize \(D_{r}=\mathbb{E}[d_{e}(g_{e}(Y_{r})+X_{p},X)]\). Here, \(h_{r}(\cdot)\) is a learnable transformation of \(Y_{b}\) that implicitly reconstructs the image. The prediction \(X_{p}\) is added at the end of the reconstruction process. Fig. 1 shows architecture diagrams for both configurations.1 Footnote 1: Official code release: [https://github.com/adeandrade/research](https://github.com/adeandrade/research) Figure 1: Overall architecture of the residual and conditional methods. The dotted line signifies that the enhancement network does not affect the base network. The conditional entropy decoder models \(H(Y_{c}|Y_{t})\). ### Bounds for conditional coding Our theoretical analysis is performed in the lossless case to motivate the proposed baselines for our lossy approaches. In conditional coding, we model \(H(Y_{b})+H(Y_{c}|Y_{t})\), having \(H(Y_{c})\) as a lower bound: \[H(Y_{c}) \leq H(Y_{c})+H(Y_{t}|Y_{c})=H(Y_{c},Y_{t})\] \[=H(Y_{t})+H(Y_{c}|Y_{t})\leq H(Y_{b})+H(Y_{c}|Y_{t}), \tag{1}\] where we used \(H(Y_{t})\leq H(Y_{b})\) due to the data processing inequality. This bound is tight when \(H(Y_{t}|Y_{c})=0\) and \(H(Y_{b}|Y_{t})=0\), or equivalently, when \(H(Y_{b})=I(Y_{c};Y_{t})\). This corresponds to a decrease of information in \(H(Y_{c}|Y_{t})\) of \(H(Y_{b})\). An upper bound is obtained by: \[H(Y_{b})+H(Y_{c}|Y_{t}) =H(Y_{b})+H(Y_{c})-I(Y_{c};Y_{t})\] \[\leq H(Y_{b})+H(Y_{c}). \tag{2}\] This bound is tight when \(I(Y_{c};Y_{t})=0\), which corresponds to \(Y_{c}\) and \(Y_{t}\) being independent. We provide an upper baseline for the conditional approach by using a standalone enhancement representation \(Y_{e}\) generated without relying on any side information, and measuring \(\hat{H}(Y_{e})\), where \(\hat{H}(\cdot)\) is an entropy estimate. As a lower baseline we use \(\hat{H}(Y_{b})+\hat{H}(Y_{e})\). This is motivated by considering that \(Y_{e}\) can be more efficient as a task representation than \(Y_{c}\), and by the bounds in (1) and (2).
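The two configurations of Fig. 1 can be summarized in a few lines of code. The following is a minimal PyTorch sketch under assumed interfaces (the submodules and the `entropy_model` signature are placeholders, not the official release linked above):

```python
import torch.nn as nn

class ConditionalEnhancement(nn.Module):
    """Y_c = f_e(X), entropy-coded conditioned on Y_t = h_c(Y_b)."""
    def __init__(self, f_e, g_e, h_c, entropy_model):
        super().__init__()
        self.f_e, self.g_e, self.h_c = f_e, g_e, h_c
        self.entropy_model = entropy_model

    def forward(self, x, y_b):
        y_c = self.f_e(x)
        y_t = self.h_c(y_b)                       # side info, rate-free
        rate = self.entropy_model(y_c, cond=y_t)  # estimates H(Y_c | Y_t)
        return self.g_e(y_c), rate

class ResidualEnhancement(nn.Module):
    """Y_r = f_e(X - X_p); the prediction X_p = h_r(Y_b) is added back."""
    def __init__(self, f_e, g_e, h_r, entropy_model):
        super().__init__()
        self.f_e, self.g_e, self.h_r = f_e, g_e, h_r
        self.entropy_model = entropy_model

    def forward(self, x, y_b):
        x_p = self.h_r(y_b)                       # implicit reconstruction of X
        y_r = self.f_e(x - x_p)
        rate = self.entropy_model(y_r)            # estimates H(Y_r)
        return self.g_e(y_r) + x_p, rate
```

Both variants return a reconstruction and a rate estimate; the design difference is exactly where \(Y_{b}\) enters: as coding context in the conditional case, and as a signal-domain prediction \(X_{p}\) in the residual case.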
### Bounds for residual coding It has been shown that conditional coding is an upper bound of residual coding [16, 17]: \[H(X|X_{p}) =H(X_{r}+X_{p}|X_{p})=H(X_{r}|X_{p}) \tag{3}\] \[=H(X_{r})-I(X_{p};X_{r})\leq H(X_{r})\] (4) \[=H(X|X_{p})+I(X_{p};X_{r}). \tag{5}\] Here (3) uses the fact that having observed \(X_{p}\), the only uncertainty in \(X_{r}+X_{p}\) is due to \(X_{r}\). The inequality in (4) uses the non-negativity of mutual information. \(H(X_{r})\) is rewritten in (5) using the definition of mutual information and once again the fact that given \(X_{p}\), the only uncertainty in \(X-X_{p}\) is due to \(X\). The term \(I(X_{p};X_{r})\) in (5) acts as a penalty term on the residual formulation. To minimize it, the residual \(X_{r}\) and the prediction \(X_{p}\) must be as independent from each other as possible. This can be achieved when \(X_{p}\) collapses values in \(X\) so that \(H(X_{r})\) decreases, or when \(X_{p}\) produces a constant value for different values in \(X\), reducing \(H(X_{p})\). Reducing \(H(X_{p})\) increases \(H(X|X_{p})\), which in turn could have the adverse effect of increasing \(H(X_{r})\), as shown in (5). In our proposed method we train to minimize \(D_{r}\) and \(\hat{H}(Y_{r})\). By extension, we also minimize \(\hat{H}(X_{r})\). As shown in (5), this reduces both \(H(X|X_{p})\) and \(I(X_{p};X_{r})\). Hence, this optimization procedure encourages the learnable function \(h_{r}(\cdot)\) to create a representation \(X_{p}\) that recovers the input \(X\) as accurately as possible, while at the same time being as independent as possible from the resulting residual \(X_{r}\). Note that enforcing the similarity of \(X_{p}\) and the original input \(X\) may not be an optimal procedure, since even though such an optimization will decrease \(H(X|X_{p})\), it may lead to an increase in \(I(X_{p};X_{r})\). This explains why, in preliminary experiments, we found that having a function \(h_{r}(\cdot)\) that explicitly reconstructs \(X\) does not perform as well as our proposed method. Due to the previous considerations stemming from our proposed method, we find that \(H(Y_{c}|Y_{t})=H(Y_{r})\) can be easier to achieve. This motivates us to compare \(\hat{H}(Y_{b})+\hat{H}(Y_{r})\) against the same baselines used in the conditional approach. ### Entropy modelling To conditionally code a representation \(Y_{c}\) that exploits as much information as possible from \(Y_{t}\), we model the spatial and dimensional dependencies between and within representations using a CNN [18]. Our proposed entropy model strikes a balance between complexity and accuracy. We group channels with a fixed size \(K\) [18]. Within each group, the same location across channels is processed in parallel, using as context all locations in the previous groups and all previous locations across all channels of the current group, within the receptive field of the convolutional layer. Similarly to [8], locations in the spatial domain are processed in a top-to-bottom, left-to-right fashion. The Markov property is enforced by a mask applied to the convolution kernels. Fig. 2(a) shows the kernel mask for a single output channel of a layer. Unlike previous work, the CNN architecture of our entropy model has scalable residual connections and deeper layers with kernel sizes larger than one for its auto-regressive convolutions. This removes some of the modelling limitations imposed by similar entropy models.
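One plausible realization of such a kernel mask, under our reading of the grouping rule above (the function name and the raster-order convention are assumptions made for this sketch):

```python
import torch

def group_causal_mask(c_out, c_in, k, K):
    """Mask for an auto-regressive conv kernel with channel groups of size K:
    an output in group g sees (i) all kernel positions of input groups < g and
    (ii) only strictly-previous raster positions (top-to-bottom, left-to-right)
    of input channels in its own group; later groups are invisible."""
    center = k // 2
    prev = torch.zeros(k, k)
    prev[:center, :] = 1.0                 # rows above the centre
    prev[center, :center] = 1.0            # same row, columns to the left
    mask = torch.zeros(c_out, c_in, k, k)
    for o in range(c_out):
        for i in range(c_in):
            if i // K < o // K:
                mask[o, i] = 1.0           # previous group: full context
            elif i // K == o // K:
                mask[o, i] = prev          # own group: causal spatial context
    return mask

# applied by zeroing kernel weights before each forward pass, e.g.:
# conv.weight.data.mul_(group_causal_mask(*conv.weight.shape[:2],
#                                         conv.weight.shape[-1], K=16))
```

The conditional representation \(Y_{t}\) would enter as one extra input group whose mask column is all ones, so every output may see it, which is consistent with it carrying no rate.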
The CNN architecture has blocks of three layers in which the input channels are scaled up, transformed in a higher-dimensional space, and scaled back down to the original number of channels. The residual connections are introduced in between these blocks, such that the inputs can be re-scaled differently across the channel dimension. To maintain the Markov property when the number of channels changes, the group sizes are re-scaled accordingly, and the channels can only change in multiples of \(M\). Fig. 2(b) shows the architecture overview of a single block in the CNN. The conditional representation is available as another channel group and is transformed by the CNN in the same way as the other groups, except that its context is restricted to the receptive field within that group. All elements of the conditioned representation have access to this group. Similarly to [8] and more recent works, the predictions of the entropy model correspond to the means \(W\) and scales \(\Sigma\) of a univariate Gaussian distribution assigned to each element in the representation. The symbols are obtained by \(Q=\lfloor Y-W\rceil\), and the corresponding probability is \(P_{\mathcal{N}(\mathbf{0},\Sigma)}\big{[}Q-\nicefrac{{1}}{{2}}\leq q\leq Q+\nicefrac{{1}}{{2}}\big{]}\). During training, the rounding operation is simulated by adding uniform noise \(\mathcal{U}(-\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}})\). ### Learnable scalable compression As an architecture for learnable compression, we use a simplified version of the work in [11]. We drop the side information components from the coder, introduced as a hyper-prior in [8]. We also remove the attention layers introduced in [10]. To reduce the memory footprint and speed up the training procedure, we incrementally reduce the channels in the first layers of the analyzers and the last layers of the synthesizers. The base and enhancement tasks have this same architecture. The base generates a representation with the same dimensionality and resolution as the input \(X\), but reconstruction of the input is not enforced. This representation is the input for the computer vision model. The coder and the computer vision model are trained together end-to-end. In the conditional approach, \(h_{c}(\cdot)\) is a traditional CNN composed of blocks with residual connections between them. Each block has three convolutional layers that perform a transformation in a higher dimensionality and scale the output back to a lower dimensionality. The first half of the blocks maintain the same dimensionality as \(Y_{b}\), while the second half transitions to the same dimensionality as \(Y_{c}\), obtaining \(Y_{t}\). The resolution is maintained across the network, as both \(Y_{b}\) and \(Y_{c}\) have the same resolution. In the residual approach, \(h_{r}(\cdot)\) uses the same architecture as our synthesizers to transform \(Y_{b}\) into \(X_{p}\). The synthesizer upscales the representation to match the resolution and dimensionality of \(X\). A representation \(Y_{b}\) that is optimized for \(D_{b}\) can contain information that might not be beneficial for reconstruction on its own. Moreover, the information in \(Y_{b}\) is represented in a way that is suitable for the computer vision task \(T\), and bringing it all back to the image feature space through \(h_{r}(\cdot)\) can be challenging.
To overcome these obstacles, we add a small reconstruction penalty on a transformed \(Y_{b}\) to the rate-distortion Lagrange minimization formulation: \[\mathcal{L}_{b}=D_{b}+\lambda_{b}\hat{H}(Y_{b})+\beta\,\mathbb{E}[d_{e}(\hat{h}_{r}(Y_{b}),X)], \tag{6}\] where \(\hat{h}_{r}(\cdot)\) is an auxiliary network with the same architecture as the other synthesizers, and \(\lambda_{b}\) and \(\beta\) are hyper-parameters. For the enhancement representations, we use the traditional rate-distortion loss function [6]: \[\mathcal{L}_{c}=D_{c}+\lambda_{e}\hat{H}(Y_{c}|Y_{t}),\qquad\mathcal{L}_{r}=D_{r}+\lambda_{r}\hat{H}(Y_{r}),\] for the conditional and residual approaches, respectively. During training, either the base network remains frozen or the gradients from the reconstruction network do not flow into the base network. ## 4 Experiments We conduct experiments to analyze the rate-distortion performance of the proposed conditional and residual methods for scalable coding and compare them against the proposed baselines. We perform two sets of experiments: one using semantic segmentation as the computer vision task on the Cityscapes dataset [20], and another using object detection as the computer vision task on the COCO 2017 dataset [21]. We first train the base representation on the computer vision task to obtain \(f_{b}(\cdot)\) and \(g_{b}(\cdot)\) for rate-distortion points under different values of \(\lambda_{b}\). We choose a model corresponding to a point on the rate-distortion curve that achieves reasonable distortion and subsequently use it to generate the representations \(Y_{b}\) for the conditional and residual approaches. The upper baseline is created by training the reconstruction task with no side information. We use the same architecture for the analysis and synthesis transforms as the other models. The entropy model for the upper baseline has the same architecture as the one used in the residual approach. The lower baseline is obtained by adding the rate of the base representation used for the conditional and residual approaches. Across all experiments, we allocated \(C_{b}=32\) channels to the base representation and \(C_{e}=256\) channels to the enhancement representation. In the analysis transforms, the four down-scaling operations have output channels 24, 48, 192, and \(C_{b}\) or \(C_{e}\), while the up-scaling operations in the synthesis transform have output channels 192, 48, 24, and \(C_{x}\). The entropy model consists of 5 blocks with \(K=16\) and \(M=1\). To train the reconstruction tasks in all experiments, we use the RMSE function as the distortion function \(d_{e}(\cdot,\cdot)\). We compute and report the bits-per-pixel (BPP) using the entropy estimates, which in several experiments differed by at most 0.5% from the achieved BPP. Also, to speed up the computation of the rate-distortion curve, we often train a model under low-compression settings and use its weights as initialization for the models trained to obtain the rest of the curve. The parameters are updated using Adam at a learning rate of \(10^{-4}\). We train models with early stopping, but first decay the learning rate by a factor of 0.75 if a plateau is reached. Figure 2: Entropy model overview. (a) The convolution has kernel size \(3\times 3\) and the input is \(12\times 4\times 4\). The conditional input has size \(3\times 4\times 4\) and there are \(K=4\) groups. With an input padding and a stride of 1, this is the 7th step of the convolution.
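To make the objectives above concrete, here is a short sketch of one training step for an enhancement model, combining the RMSE distortion with the discretized-Gaussian rate described in Sec. 3.3. It is an illustration under assumed interfaces (`base`, `analysis`, `synthesis`, and `entropy_params` are placeholders, with `entropy_params` expected to return strictly positive scales), not the released code:

```python
import torch

def gaussian_rate(y, mean, scale, training=True):
    """Bits for symbols Q = round(Y - W) under N(0, Sigma): the probability is
    the Gaussian mass of [Q - 1/2, Q + 1/2]; during training, rounding is
    simulated with additive uniform noise U(-1/2, 1/2), as in Sec. 3.3."""
    centered = y - mean
    if training:
        q = centered + torch.empty_like(centered).uniform_(-0.5, 0.5)
    else:
        q = torch.round(centered)
    normal = torch.distributions.Normal(torch.zeros_like(scale),
                                        scale.clamp_min(1e-6))
    p = normal.cdf(q + 0.5) - normal.cdf(q - 0.5)
    return -torch.log2(p.clamp_min(1e-9)).sum()

def enhancement_step(x, base, analysis, synthesis, entropy_params, opt, lam):
    """One step of L = RMSE + lam * rate. The base is frozen, so gradients
    from the reconstruction loss never reach the task network."""
    with torch.no_grad():
        y_b = base(x)                          # shared task representation
    y = analysis(x)                            # enhancement representation
    mean, scale = entropy_params(y, y_b)       # may condition on y_b (Y_t)
    rate = gaussian_rate(y, mean, scale) / x.numel()
    x_hat = synthesis(y)
    loss = torch.sqrt(torch.mean((x_hat - x) ** 2)) + lam * rate
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)
```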
### Image semantic segmentation on Cityscapes Cityscapes is a set of images of urban scenes for semantic understanding [20]. We use DeepLabV3 [22] as the computer vision model for segmentation, with MobileNetV3 [23] as a back-end. Here, \(d_{b}(\cdot,\cdot)\) is the per-pixel multi-class cross-entropy, although we report the mean intersection over union (mIoU) metric. We set \(\beta=0.1\) and report the results on the validation dataset. For data augmentation, we use random crops of \(768\times 768\) pixels, random horizontal flips, and color jittering. The front-end of the model corresponding to the coder is trained with Adam using a learning rate of \(10^{-4}\), while the classifier is trained with stochastic gradient descent using momentum and a learning rate of \(10^{-2}\). An \(\ell_{2}\) loss is added to the weights of the classifier to prevent over-fitting, with a scale factor of \(10^{-4}\). Fig. 3(a) shows the rate-distortion curves for the conditional and residual approaches. We notice that these curves lie in between the baselines, producing rate-distortion points that respect them. Compared to the rate-distortion curve of the lower baseline, the conditional approach has a BD-Rate [24] of \(-16.56\%\), whereas the residual approach achieves a \(-14.6\%\) rate reduction. Thus, the conditional approach performs marginally better than the residual approach in terms of BD-Rate. Looking at the ratio between these BD-Rate scores and the BD-Rate score achieved by the upper baseline, we can compute the percentage of the base representation utilized. As such, the conditional approach uses \(43.01\%\) of the side information rate, whereas the residual approach uses \(37.91\%\). In the lowest-compression settings under both approaches, the utilization is higher. Fig. 3(c) shows the rate-distortion performance of the base task. The chosen \(\beta\) value places a penalty on both the rate and the task performance but allows the base representation to be exploited by the architecture. We attribute the small imperfections in this rate-distortion curve to the choice of \(\beta\) and the limitations of the training algorithm. Figure 3: Scalable coding results. The purple lines represent the performance attained with \(\lambda_{b}=0\) and \(\beta=0\). ### Object detection on COCO COCO 2017 has 123,287 domain-agnostic images for object detection and segmentation [21]. We use Faster R-CNN [25] for object detection, with ResNet-50 [26] as the back-end. For this task, \(d_{b}(\cdot,\cdot)\) is the sum of the different loss functions employed by this architecture. We report the mean average precision (mAP) metric computed according to [21], and set \(\beta=0.05\). As data preprocessing for training, we use random horizontal flips and generate batches with similar aspect ratios grouped in 3 clusters. Images inside a batch are resized to the minimum size, and the bounding boxes are adjusted accordingly when training the computer vision task. When training for reconstruction, the images in a batch are center-cropped to their minimum size. All weights are trained with Adam using a learning rate of \(10^{-4}\). As shown in Fig. 3(b), the performance of both approaches is comparable, with a \(-4.14\%\) and a \(-2.47\%\) BD-Rate improvement over the lower baseline for the conditional and residual methods, respectively. We achieve a utilization of the base in terms of BD-Rate of 49.24% and 29.32% for the conditional and residual approaches, respectively.
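The BD-Rate numbers above follow Bjontegaard's method [24]: fit log-rate as a cubic in the quality metric and average the gap over the shared quality range. A minimal numpy sketch of the standard computation (our own illustration, not the authors' evaluation script):

```python
import numpy as np

def bd_rate(rate_ref, qual_ref, rate_new, qual_new):
    """Bjontegaard delta-rate: average percent rate difference between two
    rate-quality curves over their overlapping quality range, using the
    customary cubic fit of log-rate as a function of quality."""
    log_ref, log_new = np.log(rate_ref), np.log(rate_new)
    poly_ref = np.polyfit(qual_ref, log_ref, 3)
    poly_new = np.polyfit(qual_new, log_new, 3)
    lo = max(np.min(qual_ref), np.min(qual_new))
    hi = min(np.max(qual_ref), np.max(qual_new))
    int_ref = np.polyval(np.polyint(poly_ref), hi) - np.polyval(np.polyint(poly_ref), lo)
    int_new = np.polyval(np.polyint(poly_new), hi) - np.polyval(np.polyint(poly_new), lo)
    avg_log_diff = (int_new - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0   # negative: "new" saves rate

# e.g., bd_rate(bpp_baseline, miou_baseline, bpp_conditional, miou_conditional)
```

The quality axis can be mIoU, mAP, or PSNR, as long as each curve is monotone over the fitted range.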
Fig. 3(d) shows the rate-distortion performance of the base task. Compared to semantic segmentation on Cityscapes, the task model is better at reaching the uncompressed task performance. Also, for a similar distortion penalty, this task uses more rate. The rates obtained by the reconstruction task on the COCO dataset are almost an order of magnitude larger than those on Cityscapes. This can be explained by the simplicity of the content of the images found in Cityscapes and by the larger amount of compression artifacts found in the COCO dataset. ## 5 Conclusion We present conditional and residual methods for scalable coding for humans and machines. Our experiments show that the proposed architectures for conditional and residual coding perform similarly and that the rate-distortion performance is within the presented baselines or operational bounds. In addition, the proposed conditional entropy model is able to match the performance of the residual method.
2307.03005
Multi-scale hierarchy from multidimensional gravity
We discuss a way of solving the hierarchy problem. We show that, starting at the Planck scale, three energy scales -- the inflationary, electroweak and cosmological ones -- can be restored. A mechanism of formation of small parameters is proposed that leads to a successful solution of the problem. The tools involved in the process are $f(R)$ gravity and inhomogeneous extra dimensions. Slow rolling of a space domain from the Planck scale down to the inflationary one gives rise to three consequences: an infinite set of causally disconnected domains (pocket universes) are nucleated; quantum fluctuations in each domain produce a variety of different fields and an extra-dimensional metric distribution; these distributions are stabilized at a sufficiently low energy scale.
Kirill A. Bronnikov, Arkady A. Popov, Sergey G. Rubin
2023-07-06T14:09:54Z
http://arxiv.org/abs/2307.03005v1
###### Abstract We discuss a way of solving the hierarchy problem. We show that, starting at the Planck scale, three energy scales -- the inflationary, electroweak and cosmological ones -- can be restored. A mechanism of formation of small parameters is proposed that leads to a successful solution of the problem. The tools involved in the process are \(f(R)\) gravity and inhomogeneous extra dimensions. Slow rolling of a space domain from the Planck scale down to the inflationary one gives rise to three consequences: an infinite set of causally disconnected domains (pocket universes) are nucleated; quantum fluctuations in each domain produce a variety of different fields and an extra-dimensional metric distribution; these distributions are stabilized at a sufficiently low energy scale. **Multi-scale hierarchy from multidimensional gravity** Kirill A. Bronnikov\({}^{a,b,c,}\)1, Arkady A. Popov\({}^{d,}\)2, Sergey G. Rubin\({}^{c,d,}\)3 Footnote 1: e-mail: [email protected] Footnote 2: e-mail: [email protected] Footnote 3: e-mail: [email protected] \({}^{a}\) _Center for Gravitation and Fundamental Metrology, VNIIMS, Ozyornaya ulitsa 46, Moscow 119361, Russia_ \({}^{b}\) _Institute of Gravitation and Cosmology, RUDN University, ulitsa Miklukho-Maklaya 6, Moscow 117198, Russia_ \({}^{c}\) _National Research Nuclear University MEPhI (Moscow Engineering Physics Institute), Kashirskoe shosse 31, Moscow 115409, Russia_ \({}^{d}\) _N.I. Lobachevsky Institute of Mathematics and Mechanics, Kazan Federal University, Kremlyovskaya ulitsa 18, Kazan 420008, Russia_ ## 1 Introduction Assuming that the Universe was formed at the Planck scale, it is natural to expect that its initially formed parameters are of the order of the same scale. The essence of the Hierarchy problem is the question: why are the observable low-energy physical parameters so small as compared to those of the Planck scale? How did Nature manage to decrease the parameter values so substantially? There are at least four important energy scales in the evolution of the Universe: the Planck scale (\(\sim 10^{19}\,\)GeV), at which our Universe cannot be described by classical laws; the inflationary scale (\(\sim 10^{13}\,\)GeV), where our horizon appeared; the electroweak scale (\(\sim 10^{2}\,\)GeV); and the cosmological scale specified by the cosmological constant (CC, \(\sim 10^{-123}\,\)GeV\({}^{4}\)). According to the inflationary paradigm, the physical laws are formed at high energies [1; 2], where the Lagrangian structure is yet unknown. The physics is assumed to be established at an energy scale \(M\) higher than the inflationary one, \(E_{I}\sim 10^{13}\,\)GeV, see [3; 4] in this context. We study a way of substantially decreasing the physical parameters at the three scales mentioned above, assuming natural values of the initial parameters of the order of \(M\). In this paper, we invoke the idea of multidimensional gravity, which is a widely used tool for obtaining new theoretical results [5; 6; 7; 8; 9]. The paper [10] uses warped geometry to solve the small cosmological constant problem. Multidimensional inflation is discussed in [11; 12; 13], where it was supposed that an extra-dimensional metric \(g_{\rm n}\) is stabilized at a high-energy scale. Stabilization of extra space as a purely gravitational effect has been studied in [14; 15], see also [16]. The present research is also based on nonlinear \(f(R)\) gravity.
The interest in \(f(R)\) theories is motivated by inflationary scenarios, starting with Starobinsky's paper [17]. At present, \(f(R)\) gravity is widely discussed, leading to a variety of consequences, in particular, the existence of dark matter [18; 19]. Including a function of the Ricci scalar, \(f(R)\), is the simplest extension of general relativity. In the framework of such an extension, many interesting results have been obtained. Some viable \(f(R)\) models in 4D space that satisfy the observational constraints are proposed in [20; 21; 22; 23; 24]. The idea that the Lagrangian parameters can be considered as functions of a field has been widely used since Schwinger's paper [25]. Such fields can be involved in the classical equations of motion together with the "main" fields or treated as background fields. The latter were applied to fermion localization on branes [26; 27; 28], gauge field localization [29], extensions of gravity in a scalar-tensor form (with \(f(\phi)R\)) [30], and so on. In this paper, we show that a self-gravitating scalar field can serve as a source of small parameters. As a mathematical tool, we use the Wilsonian technique, a well-known method for theoretical studies of the energy dependence of physical parameters [31]. In this approach, the physical parameters \(\lambda_{i}(M)\) of the Wilson action are fixed at a high energy scale \(M\). The renormalization flow used to descend to low energies (the top-down approach) is discussed in [32, 33, 34, 35, 36]. In particular, quantum corrections to the Starobinsky model were discussed in [37]. In our approach, we add extra dimensions and study their role at different scales, with the hope that this should make the renormalization procedure much more efficient. We make use of the idea of flexible (inhomogeneous) extra space that has been developed in [18, 38, 39]. Our preliminary studies of inhomogeneous extra metrics concern such parameters as the cosmological constant [38], the parameters of the Starobinsky inflationary model, and the baryon asymmetry of the Universe [40, 41]. It has been shown there that inhomogeneous metrics can be tuned to explain the smallness of the appropriate effective parameters. For example, encouraging results explaining the smallness of the cosmological constant were obtained in [38, 42]. The effect of quantum corrections in this context was discussed in [43]. Here we continue this research by including the Higgs sector of the Standard Model. There are three energy scales that we intend to describe: the inflationary stage, the electroweak scale, and the cosmological one. Each of them is characterized by a specific small parameter. The initial parameters and the Lagrangian of our model are fixed at a sub-Planckian scale and do not vary during the evolution of the Universe. Special attention is paid to the mechanism of emergence of small values specific to each of the three scales.
## 2 The model Consider \(f(R)\) gravity with a minimally coupled scalar field \(\zeta\) in a \(\mathrm{D}=4+n\)-dimensional manifold \(M_{\mathrm{D}}\): \[S=\frac{m_{\mathrm{D}}^{\mathrm{D}-2}}{2}\int_{M_{\mathrm{D}}}d^{\mathrm{D}}X \sqrt{|g_{\mathrm{D}}|}\left(f(R)+\partial^{\mathrm{M}}\zeta\,\partial_{ \mathrm{M}}\zeta-2V(\zeta)\right)+S_{H_{P}}\,, \tag{1}\] where \(g_{\mathrm{D}}\equiv\det g_{\mathrm{MN}}\), \(\mathrm{M},\mathrm{N}=\overline{1,\mathrm{D}}\), the \(n\)-dimensional manifold \(M_{n}\) is assumed to be closed, \(f(R)\) is a function of the D-dimensional Ricci scalar \(R\), and \(m_{\mathrm{D}}\) is the D-dimensional Planck mass. Below, we will work in the units \(m_{D}=1\). The term \(S_{H_{P}}\) denotes the Higgs action (32) considered in Sec. 5, and it is assumed to be small as compared to the gravitational part of the action. It is also postulated that the scalar field \(\zeta\) is very massive and hence unobservable. Nevertheless, this field plays a key role being responsible for the emergence of small parameter(s), see a discussion at the beginning of Sec. 4. Variation of the action (1) with respect to the metric \(g_{\mathrm{D}}^{\mathrm{MN}}\) and the scalar field leads to the known equations \[-\frac{1}{2}f(R)\delta_{\mathrm{N}}^{\mathrm{M}}+\Big{(}R_{ \mathrm{N}}^{\mathrm{M}}+\nabla^{\mathrm{M}}\nabla_{\mathrm{N}}-\delta_{ \mathrm{N}}^{\mathrm{M}}\Box_{\mathrm{D}}\Big{)}f_{R}=-T_{\mathrm{N}}^{ \mathrm{M}}, \tag{2}\] \[\Box_{\mathrm{D}}\,\zeta+V_{\zeta}=0, \tag{3}\] with \(f_{R}=df(R)/dR\), \(\Box_{\mathrm{D}}=\nabla^{\mathrm{M}}\nabla_{\mathrm{M}}\), and \(V_{\zeta}=dV(\zeta)/d\zeta\). Equation (3) is known to be a consequence of equations (2). The corresponding stress-energy tensor of the scalar field \(\zeta\) is \[T_{\mathrm{N}}^{\mathrm{M}}=\frac{\partial L_{\mathrm{matter}}}{\partial\big{(} \partial_{\mathrm{M}}\zeta\big{)}}\partial_{\mathrm{N}}\zeta-\frac{\delta_{ \mathrm{N}}^{\mathrm{M}}}{2}L_{\mathrm{matter}}=\partial^{\mathrm{M}}\zeta\, \partial_{\mathrm{N}}\zeta-\frac{\delta_{\mathrm{N}}^{\mathrm{M}}}{2}\,\partial ^{\mathrm{K}}\zeta\,\partial_{\mathrm{K}}\zeta+\delta_{\mathrm{N}}^{\mathrm{M }}V\big{(}\zeta\big{)}\,. \tag{4}\] Here the Higgs field contribution is omitted. We use the conventions for the curvature tensor \(R_{\mathrm{\ MNK}}^{\mathrm{L}}=\partial_{\mathrm{K}}\Gamma_{\mathrm{MN}}^{ \mathrm{L}}-\partial_{\mathrm{N}}\Gamma_{\mathrm{MK}}^{\mathrm{L}}+\Gamma_{ \mathrm{CK}}^{\mathrm{L}}\Gamma_{\mathrm{NM}}^{\mathrm{C}}-\Gamma_{\mathrm{CN}}^ {\mathrm{L}}\Gamma_{\mathrm{MK}}^{\mathrm{C}}\) and the Ricci tensor \(R_{\mathrm{MN}}=R_{\mathrm{\ MKN}}^{\mathrm{K}}\). The metric is supposed in the form \[ds^{2}=\mathrm{e}^{2\gamma(u)}\left(dt^{2}-\mathrm{e}^{2Ht}(dx^{2}+dy^{2}+dz^{2 })\right)-du^{2}-r(u)^{2}d\Omega_{n-1}^{2}, \tag{5}\] where \(d\Omega_{n-1}^{2}\) is the metric on a unit \(n-1\)-dimensional sphere. The metric ansatz used in this paper has been widely studied in the framework of linear gravity [44, 45, 46, 47], applying, in particular, to solving the Hierarchy problem [48, 49, 50]. 
The field equations for the metric (5) and \(\zeta=\zeta(u)\) read \[{R^{\prime}}^{2}f_{RRR}+\left[R^{\prime\prime}+\left(3\gamma^{ \prime}+(n-1)\frac{r^{\prime}}{r}\right)R^{\prime}\right]f_{RR}-\left(\gamma^{ \prime\prime}+4{\gamma^{\prime}}^{2}+(n-1)\frac{\gamma^{\prime}r^{\prime}}{r}- \frac{3H^{2}}{{\rm e}^{2\gamma}}\right)f_{R}\] \[\qquad\qquad\qquad-\frac{f(R)}{2}=-\frac{{\zeta^{\prime}}^{2}}{2 }-V(\zeta), \tag{6}\] \[\left(4\gamma^{\prime}R^{\prime}+(n-1)\frac{r^{\prime}}{r}R^{ \prime}\right)\,f_{RR}-\left(4\gamma^{\prime\prime}+4{\gamma^{\prime}}^{2}+(n- 1)\frac{r^{\prime\prime}}{r}\right)\,f_{R}-\,\frac{f(R)}{2}=\frac{{\zeta^{ \prime}}^{2}}{2}-V(\zeta)\,,\] (7) \[{R^{\prime}}^{2}f_{RRR}+\left(R^{\prime\prime}+4\gamma^{\prime}R ^{\prime}+(n-2)\frac{r^{\prime}}{r}\,R^{\prime}\right)f_{RR}-\left(\frac{r^{ \prime\prime}}{r}+\frac{4\gamma^{\prime}r^{\prime}}{r}+(n-2)\frac{{r^{\prime}} ^{2}}{r^{2}}-\frac{(n-2)}{r^{2}}\right)f_{R}\] \[\qquad\qquad\qquad-\,\frac{f(R)}{2}=-\frac{{\zeta^{\prime}}^{2}} {2}-V(\zeta),\] (8) \[\zeta^{\prime\prime}+\left(4\gamma^{\prime}+(n-1)\frac{r^{\prime }}{r}\right)\,\zeta^{\prime}-V_{\zeta}=0, \tag{9}\] where the prime denotes \(d/du\). Also, we will use the expression for the Ricci scalar \[R(u)=\frac{12H^{2}}{{\rm e}^{2\gamma}}-8\gamma^{\prime\prime}-20{\gamma^{ \prime}}^{2}-(n-1)\left(\frac{2r^{\prime\prime}}{r}+\frac{8\gamma^{\prime}r^{ \prime}}{r}+(n-2)\left(\frac{r^{\prime}}{r}\right)^{2}-\frac{(n-2)}{r^{2}}\right) \tag{10}\] as an additional equation and \(R(u)\) will be treated as a new unknown function to avoid 3rd and 4th order derivatives in Eqs. (6)-(8). It can be shown that one of the equations (6)-(9) is a consequence of the others. The combination \(2{\times}(7){-}f_{R}{\times}(10)\) is the constraint equation \[\biggl{(}8\gamma^{\prime}+2(n-1)\frac{r^{\prime}}{r}\biggr{)}R^{ \prime}f_{RR}+\biggl{(}12{\gamma^{\prime}}^{2}+(n-1)\biggl{(}\frac{8\gamma^{ \prime}r^{\prime}}{r}+(n-2)\frac{\left({r^{\prime}}^{2}-1\right)}{r^{2}} \biggr{)}+R\biggr{)}f_{R}\] \[\qquad\qquad\qquad-\frac{12H^{2}}{{\rm e}^{-\gamma(u)}}f_{R}-f(R) ={\zeta^{\prime}}^{2}-2V\bigl{(}\zeta\bigr{)} \tag{11}\] containing only first-order derivatives. It plays the role of a restriction on the solutions of the coupled second-order differential equations (6)-(10). As a result, we use three independent equations (6), (8), (10) and the constraint (11) to fix three functions \(r(u),\gamma(u),R(u)\) and the unknown metric parameter \(H\). One of the possible numerical solutions to this system is shown in Fig. 1. We note that the warp factor \(e^{\gamma(u)}\to 0\) at the boundaries, which are singular ends of the range of \(u\) and can be imagined as a kind of poles in a closed \(n\)-dimensional manifold since there \(r\to 0\). The qualitative behavior of the solution shown in Fig. 1 is quite generic. The particular form of solutions, including the field distribution and the extra metric, depends on the Lagrangian parameters postulated from the beginning. It also depends on the boundary conditions at \(u=0\) that are necessary for solving the second-order differential equations. Unlike the Lagrangian parameters, the boundary conditions ultimately depend on random initial fluctuations within a pocket universe. Inflation produces a continuum set of such universes with different initial conditions and therefore with different metric functions and field distributions in the extra space. 
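As a reduced illustration of how such profiles are obtained, the sketch below integrates only the scalar-field equation (9) on a fixed, assumed background (a frozen warp factor and a sphere-like radius \(r(u)=\sin u\)); this is not the self-consistent solve of the full system (6)-(11), and all numbers are toy values in units \(m_{D}=1\):

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 3                                   # number of extra dimensions (toy value)
m = 1.0                                 # scalar mass, V(zeta) = m^2 zeta^2 / 2
gamma_p = lambda u: 0.0                 # frozen warp factor gamma'(u), assumed
r   = lambda u: np.sin(u)               # sphere-like radius, r -> 0 at poles
r_p = lambda u: np.cos(u)

def rhs(u, y):
    # Eq. (9): zeta'' + (4 gamma' + (n-1) r'/r) zeta' - V_zeta = 0
    zeta, zeta_p = y
    drag = 4.0 * gamma_p(u) + (n - 1) * r_p(u) / r(u)
    return [zeta_p, -drag * zeta_p + m**2 * zeta]   # V_zeta = m^2 zeta

eps = 1e-3                              # start just off the pole where r -> 0
sol = solve_ivp(rhs, (eps, np.pi - eps), [1e-3, 0.0], rtol=1e-8)
print(sol.y[0, -1])                     # shoot on zeta(eps) for regularity
```

A shooting loop over the initial amplitude \(\zeta(\epsilon)\) would then select solutions that remain regular at the opposite pole; different initial data yield different static distributions, which is how a continuum of small-amplitude profiles, like the one in Fig. 1, can arise.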
## 3 Matter localization around a singularity In general, it is assumed here that matter is distributed throughout the extra dimensions like in the Universal Extra Dimensional approach [51; 52]. At the same time, there is another direction that deserves discussion. Indeed, we see from the figures that there are two points where the metric is singular or has sharp peaks. They could indicate the formation of branes if the extra space is large enough and if matter is concentrated in a close neighborhood of these peaks (certainly assuming that the formal infinities are somehow suppressed by quantum effects). This opportunity is briefly discussed in this section. As a rough approximation, consider the motion of classical particles near such a singular point, see Fig. 1, bearing in mind the metric \[ds^{2}=\mathrm{e}^{2\gamma(u)}(dt^{2}-dx^{2}-dy^{2}-dz^{2})-du^{2}-r(u)^{2}(d \xi^{2}+\sin^{2}\xi\,d\psi^{2}). \tag{12}\] The geodesic equations have the form \[t_{ss}+2\,t_{s}\,\gamma_{u}\,u_{s}=0, \tag{13}\] \[x_{ss}+2\,x_{s}\,\gamma_{u}\,u_{s}=0,\quad y_{ss}+2\,y_{s}\, \gamma_{u}\,u_{s}=0,\quad z_{ss}+2\,z_{s}\,\gamma_{u}\,u_{s}=0,\] (14) \[u_{ss}+\mathrm{e}^{2\gamma}\,\gamma_{u}\,(t_{s}{}^{2}-x_{s}{}^{ 2}-y_{s}{}^{2}-z_{s}{}^{2})-r\,r_{u}\,\xi_{s}{}^{2}-r\,r_{u}\sin^{2}\xi\,\psi _{s}{}^{2}=0,\] (15) \[\xi_{ss}+2\,\xi_{s}\,\frac{r_{u}}{r}\,u_{s}-\sin\xi\,\cos\xi\, \psi_{s}{}^{2}=0,\] (16) \[\psi_{ss}+2\,\psi_{s}\,\frac{r_{u}}{r}\,u_{s}+2\cot\xi\,\xi_{s}\, \psi_{s}=0, \tag{17}\] where the index \(s\) denotes the derivative with respect to \(s\), and the index \(u\) denotes the derivative with respect to \(u\). These equations admit solutions when \(x,y,z,\xi\), and \(\psi\) are constant. Let us assume, for simplicity, that \(\xi=\pi/2\), then \[0=t_{ss}+2\,t_{s}\,\gamma_{u}\,u_{s}=t_{ss}+2\,t_{s}\,\gamma_{s} \;\;\Rightarrow\;\;t_{s}=\mathrm{e}^{C_{1}-2\gamma}, \tag{18}\] \[0=u_{ss}+\mathrm{e}^{2\gamma}\,\gamma_{u}\,t_{s}{}^{2}=u_{ss}+ \gamma_{u}\,\mathrm{e}^{2C_{1}-2\gamma}=\frac{1}{u_{s}}\left(u_{ss}\,u_{s}+ \gamma_{s}\,\mathrm{e}^{2C_{1}-2\gamma}\right)\;\;\Rightarrow\;\;u_{s}{}^{2}= \mathrm{e}^{2C_{1}-2\gamma}+C_{2}, \tag{19}\] with integration constants \(C_{i}\). The normalization relation gives \[1=\mathrm{e}^{2\gamma}t_{s}{}^{2}-u_{s}{}^{2}=-C_{2}. \tag{20}\] Then \[u_{s}{}^{2}=\mathrm{e}^{2C_{1}-2\gamma}-1=u_{t}{}^{2}\mathrm{e}^{2C_{1}-4 \gamma}\;\Rightarrow\;u_{t}{}^{2}=\mathrm{e}^{2\gamma}\left(1-\mathrm{e}^{2 \gamma-2C_{1}}\right). \tag{21}\] For nonrelativistic particles, only the second equation matters. It can be approximated as \[u_{ss}\simeq-2e^{2\gamma}\gamma_{u}. \tag{22}\] We see that the acceleration of a particle is directed to a singular point, which should ultimately lead to concentration of matter at such a point. As a result, matter is localized around both "poles," as should be the case in a brane world (this time consisting of two branes on the two "poles"). It opens a door for developing a mechanism of strong reduction of the initial parameter values. For example, an interaction term of the form \[\kappa\int d^{D}Z\sqrt{|g_{D}|}\chi(z)\bar{\psi}(z)\psi(z)\] contains overlapping integral \[I_{\rm overlap}\equiv\int d^{n}y\sqrt{|g_{n}|}\chi(y)\bar{\psi}(y)\psi(y)\] over the extra dimensions which could be arbitrarily small if the fields \(\chi(y)\) and \(\psi(y)\) are localized near different branes. 
It leads to the coupling constant renormalization \[\kappa\to\kappa^{\prime}=\kappa I_{\rm overlap}\ll\kappa.\] We will leave this idea for future studies and return to our main discussion. ## 4 Intermediate energies. The Starobinsky model The second energy scale relates to the inflationary stage with the small parameter \(H/m_{\rm Pl}\sim 10^{-6}\). Different inflationary models use different parameters of this order. It could be the inflaton mass in the simplest model of inflation with a quadratic potential or a constant factor of \(R^{2}\) term in the Starobinsky model. Let us restore the latter. To this end, we should solve the system (6)-(9) and obtain the necessary values of its parameters. The scalar field \(\zeta\) affects the extra-space metric through the Einstein equations, but here we are interested in small amplitude solutions of this field, i.e., \(\zeta(X)\ll 1\). Therefore, its role in the metric formation is negligible, and it can be considered as a test field acting in the background metric. This approximation makes the analysis easier but is not very significant for our reasoning. #### The emergence of small parameters A successful solution of the Hierarchy problem implies the presence of small parameters, and we have enough tools to create them. Indeed, in our picture, there is an infinite set of different universes created during inflation [13, 53, 54] which contain the independently fluctuating field \(\zeta\). These fluctuations decay with time and lead to static field distributions in each universe. There is an infinite set \(\aleph\) of such static distributions that form a continuum set. The situation is similar to the boson star formation model [55] where a self-gravitating scalar field forms a variety of dense stable clumps. The set \(\aleph\) contains a subset of small-amplitude distributions \(\zeta(u)\) like those presented in Fig. 1, right panel. Their values averaged over the extra dimensions represent a set of small parameters to be widely used below. To proceed, let us restore some formulas from our previous paper [56] for a relation between the D-dimensional Planck mass and the 4-dimensional one, which is needed to convert units \(m_{\rm D}=1\) into the physical units. To this end, define \[R_{4}\equiv 12H^{2},\qquad R_{n}\equiv R(u)-{\rm e}^{-2\gamma(u)}R_{4}, \tag{23}\] see (10). Substitution of the Taylor series \[f(R)\simeq f(R_{n})+f_{R}(R_{n}){\rm e}^{-2\gamma(u)}R_{4}+\frac{1}{2}f_{RR}( R_{n}){\rm e}^{-4\gamma(u)}R_{4}^{2}+\ldots \tag{24}\] into the gravitational part of the action (1) leads to an effective theory after integration over the extra coordinates: \[S_{\rm eff}=\frac{m_{\rm Pl}^{2}}{2}\int\limits_{M_{4}}d^{4}x\sqrt{|g_{4}|} \Big{(}a_{\rm eff}R_{4}^{2}+R_{4}+c_{\rm eff}\Big{)}. \tag{25}\] Here \(g_{4}\) is the determinant of the 4D metric \[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=dt^{2}-{\rm e}^{2Ht}\delta_{ij}dx^{i}dx^{j}\,, \tag{26}\] and \[m_{\rm Pl}^{2}={\cal V}_{n-1}\int_{u_{\rm min}}^{u_{\rm max}}f_{R}(R_{n})\,{ \rm e}^{2\gamma}\,r^{n-1}\,du, \tag{27}\] \[a_{\rm eff}=\frac{{\cal V}_{n-1}}{2m_{\rm Pl}^{2}}\int_{u_{\rm min}}^{u_{\rm max }}f_{RR}(R_{n})\,{\rm e}^{4\gamma}\,r^{n-1}\,du, \tag{28}\] \[c_{\rm eff}=\frac{{\cal V}_{n-1}}{m_{\rm Pl}^{2}}\int_{u_{\rm min}}^{u_{\rm max }}\Big{(}f(R_{n})-(\zeta^{\prime})^{2}-2V(\zeta)\Big{)}\,{\rm e}^{4\gamma}\,r ^{n-1}\,du. \tag{29}\] where \({\cal V}_{n-1}=\int d^{n-1}x\sqrt{|g_{n-1}|}=\frac{2\pi^{n/2}}{\Gamma(n/2)}\). The r.h.s. of Eq. (27) is written in units \(m_{D}=1\). 
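Once numerical profiles \(\gamma(u)\), \(r(u)\), \(R_{n}(u)\), \(\zeta(u)\) are tabulated, the effective 4D constants follow by quadrature. A minimal numpy sketch of Eqs. (27)-(29), in units \(m_{D}=1\) (the function layout is ours, for illustration only):

```python
import numpy as np
from scipy.special import gamma as Gamma

def effective_parameters(u, g, r, R_n, zeta, f, f_R, f_RR, V, n):
    """Quadrature for Eqs. (27)-(29): 4D constants from tabulated profiles
    g = gamma(u), r(u), R_n(u), zeta(u), for a chosen f(R) (units m_D = 1)."""
    V_n1 = 2.0 * np.pi ** (n / 2) / Gamma(n / 2)   # unit (n-1)-sphere volume
    m_pl2 = V_n1 * np.trapz(f_R(R_n) * np.exp(2 * g) * r ** (n - 1), u)
    a_eff = V_n1 / (2 * m_pl2) * np.trapz(
        f_RR(R_n) * np.exp(4 * g) * r ** (n - 1), u)
    zeta_p = np.gradient(zeta, u)
    c_eff = V_n1 / m_pl2 * np.trapz(
        (f(R_n) - zeta_p ** 2 - 2 * V(zeta)) * np.exp(4 * g) * r ** (n - 1), u)
    return m_pl2, a_eff, c_eff

# e.g., with f(R) = R + a R^2 + c:  f_R = lambda R: 1 + 2*a*R;  f_RR = lambda R: 2*a
```

Relation (27), in particular, fixes the conversion between \(m_{D}\) and \(m_{\rm Pl}\) units.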
This relation is used to express the D-dimensional Planck mass in terms of the 4D Planck mass \(m_{\rm Pl}\). Here we suppose that the functions \(\gamma(u),\;r(u),\;\zeta(u),\;R(u)\) form a particular solution to the system (6)-(9) for a specific value of \(H\). Therefore the values of \(a_{\rm eff}(H)\) and \(c_{\rm eff}(H)\) are functions of the Hubble parameter. They are approximately constant during inflation and at the present epoch, being different in these two periods. The parameter \(c_{\rm eff}(H)\) is fixed at the present epoch, when \(H\ll m_{\rm Pl}\), while the parameter \(a_{\rm eff}(H)\) is determined by the appropriate inflation rate. The parameter \(a_{\rm eff}\) must be approximately equal to the observable value obtained from the COBE normalization [57], \[a_{\rm Starob}\simeq 1.12\cdot 10^{9}\left(\frac{N_{\rm e}}{60}\right)^{2}m_{\rm Pl}^{-2}. \tag{30}\] For the solution shown in Fig. 1, \(a_{\rm eff}\simeq 7.2\cdot 10^{8}\,m_{\rm Pl}^{-2}\), and the Hubble parameter is \(H\simeq 1\cdot 10^{-6}m_{\rm Pl}\). One can see that the Starobinsky inflationary model has been restored. The parameter values \(a=300,\,c=0.002\) lead to the following values of the dimensionless parameters: \[a^{\prime}=\sqrt{am_{D}^{2}}\simeq 17,\qquad c^{\prime}=\sqrt{c/m_{D}^{2}}\simeq 0.045, \tag{31}\] which look natural. The other parameter, \(c_{\rm eff}(H)\), needs a separate discussion. Equation (29) for \(c_{\rm eff}(H)\) is derived under the assumption that all functions in Eqs. (6)-(9) are stationary, which takes place for a 4D de Sitter metric, in which the Hubble parameter \(H={\rm const}\). This approximation is valid during slow-roll inflation with the small parameter \(|\dot{H}|/H^{2}\ll 1\). Fortunately, this inequality holds for a wide range of the parameter \(c_{\rm eff}\), in particular for \(c_{\rm eff}=0\) [17]. However, Eq. (29) has a practical meaning only for a pure de Sitter metric, not for slow rolling. This formula could be valid at the present epoch, for example, if we suppose that the cosmological constant \(\Lambda=-c_{\rm eff}/2\) is really constant. This is a subject of detailed discussion, see Sec. 6. ## 5 The electroweak scale. Restoration of the Higgs parameters In this section, the reasoning is in the spirit of our previous paper [56], but without introducing an external scalar field. ### Analytical formulas In the previous section, we have reproduced the Starobinsky model of inflation at the scale of \(10^{13}\,\)GeV. The appropriate values of the initial Lagrangian parameters \(a,c\) are fixed. These parameters must be the same at low scales, where the Hubble parameter is negligible, \(H\simeq 0\), as compared to the Planck scale. On the contrary, the extra-space metric depends on the energy scale, the Hubble parameter in our case. In this section we discuss the Hierarchy problem at the electroweak scale using the Higgs field as an example. Within the framework of our approach outlined in the Introduction, we assume that the physics of the Higgs field is formed at the Planck scale.
Suppose that the form of the Higgs action at the Planck scale is the same as at the electroweak scale, \[S_{\rm H_{P}}=\frac{1}{2}\int d^{\rm D}X\sqrt{|g_{\rm D}|}\,\Big{(}\partial^{\rm M}{H_{\rm P}}^{\dagger}\partial_{\rm M}{H_{\rm P}}+\nu{H_{\rm P}}^{\dagger}{H_{\rm P}}-\lambda\big{(}{H_{\rm P}}^{\dagger}{H_{\rm P}}\big{)}^{2}\Big{)}, \tag{32}\] where the symbol \(\dagger\) means Hermitian conjugation, \(\nu\) and \(\lambda>0\) are arbitrary numbers, and \(H_{\rm P}\) is a proto-Higgs field. We have managed to avoid large or small initial dimensionless parameter values \(a^{\prime},c^{\prime}\), see (31), when describing the Starobinsky model acting at energies \(\sim 10^{13}\,\)GeV. Our intention is to repeat this success at electroweak energies. To do that, we need to show that the initial parameters can be reduced by many orders of magnitude. All numerical values in the Lagrangian (32) are of the order of unity in \(m_{\rm D}\) units. More specifically, let us express the dimensionful parameters \(\nu,\lambda\) in terms of the dimensionless ones \(\nu^{\prime},\lambda^{\prime}\): \[\nu\rightarrow(\nu^{\prime}m_{D})^{2},\qquad\lambda\rightarrow(\lambda^{\prime}/m_{D})^{D-4}. \tag{33}\] It is these dimensionless parameters \(\nu^{\prime}\) and \(\lambda^{\prime}\) that should vary around unity. The classical equations of motion are obtained by varying the action (32) with respect to \(H_{\rm P}\), which gives \[\Box_{\rm D}H_{\rm P}=\nu{H_{\rm P}}-2\lambda\big{(}{H_{\rm P}}^{\dagger}{H_{\rm P}}\big{)}{H_{\rm P}}. \tag{34}\] The proto-Higgs field can be presented as \[H_{\rm P}=h(x)\;U(u)+\delta H_{\rm P},\qquad\delta H_{\rm P}=\sum_{k}h_{k}(x)Y_{k}(u) \tag{35}\] where \(h(x)\) and \(h_{k}(x)\) are 2-component columns acting in the fundamental representation of \(SU(2)\). In what follows, we will consider the case \[H_{\rm P}\simeq h(x)\,U(u),\qquad\delta H_{\rm P}\ll h(x)\,U(u). \tag{36}\] The dimensionality of the proto-Higgs field is \([H_{P}]=m_{D}^{(D-2)/2}\), \([h]=[h_{k}]=m_{D}\), \([U]=[Y_{k}]=m_{D}^{n/2}\). Our immediate aim is to find the distribution of the field \(H_{\rm P}\) over the extra coordinates, governed by the scalar function \(U(u)\), by solving Eqs. (34), (36). The inhomogeneities of the field \(h(x)\) are important at low energies, but they are exponentially stretched during the first de Sitter-like stage, so that \(h(x)={\rm const}\) with great accuracy. It means that \[h(x)=\frac{1}{\sqrt{2}}\begin{pmatrix}0\\ v_{0}+\rho(x)\end{pmatrix}\simeq\frac{1}{\sqrt{2}}\begin{pmatrix}0\\ v\end{pmatrix}. \tag{37}\] Therefore, the approximation (37) transforms Eq. (34) in the following way: \[\Box_{n}U(u)=\nu U(u)-\lambda\,v^{2}U^{3}(u), \tag{38}\] with a yet unknown parameter \(v\). We further suppose that the metric functions as well as the Lagrangian parameters remain the same as those considered above, with one exception: the Hubble parameter is extremely small at the present epoch as compared to the inflationary epoch, and below we put \(H\approx 0\). The knowledge of solutions to Eq.
(38) permits us to integrate out the internal coordinates and to reduce the action (32) to the 4D form \[S_{\rm H}=\frac{{\cal V}_{\rm n-1}}{2}\int d^{4}x\sqrt{|\tilde{g} _{4}|}\int_{u_{\rm min}}^{u_{\rm max}}\Big{[}{\rm e}^{-2\gamma(u)}U^{2}(u) \tilde{g}^{ij}\partial_{i}h^{\dagger}\partial_{j}h\] \[\qquad\qquad\qquad\qquad+\Big{(}-(\partial_{u}U)^{2}+\nu\,U^{2}( u)\Big{)}h^{\dagger}h-\lambda\,U^{4}(u)\big{(}h^{\dagger}h\big{)}^{2}\Big{]}{ \rm e}^{4\gamma(u)}r^{n-1}(u)\,du\] after substitution of (36) into (32). To study this action at low energies, we choose the Minkowski metric \[\tilde{g}_{4,ij}=\eta_{ij} \tag{39}\] and define the following parameters by integration over \(u\): \[K_{h}={\cal V}_{\rm n-1}\int_{u_{\rm min}}^{u_{\rm max}}U^{2}(u) \,{\rm e}^{2\gamma(u)}r^{n-1}(u)\,du, \tag{40}\] \[m_{h}^{2}={\cal V}_{\rm n-1}\int_{u_{\rm min}}^{u_{\rm max}} \Big{(}-(\partial_{u}U)^{2}+\nu\,U^{2}(u)\Big{)}{\rm e}^{4\gamma(u)}r^{n-1}(u )\,du,\] (41) \[\lambda_{h}={\cal V}_{\rm n-1}\int_{u_{\rm min}}^{u_{\rm max}} \lambda\,U^{4}(u)\,{\rm e}^{4\gamma(u)}r^{n-1}(u)\,du. \tag{42}\] ### Comparison with the Higgs parameters Recall that a natural range for the dimensionless parameters \(\lambda^{\prime}\) and \(\nu^{\prime}\) is \(10^{-2}\) to \(10^{2}\). It means that acceptable ranges of the "physical" parameters are \((10^{-6}\div 10^{6})\) for \(\lambda\) and \((10^{-4}\div 10^{4})\) for \(\nu\) according to the definitions (33). The substitution \[H_{0}(x)=h(x)\sqrt{K_{h}} \tag{43}\] leads to the 4D effective Higgs Lagrangian \[S_{\rm H}=\frac{1}{2}\int d^{4}x\sqrt{|\tilde{g}_{4}|}\bigg{(}\partial_{i}H_{ 0}^{\dagger}\partial^{i}H_{0}+m_{H}^{2}H_{0}^{\dagger}H_{0}-\lambda_{H}\big{(} H_{0}^{\dagger}H_{0}\big{)}^{2}\bigg{)}\,, \tag{44}\] \[m_{H}^{2}\equiv\frac{m_{h}^{2}}{K_{h}},\qquad\lambda_{H}\equiv\frac{\lambda_{h }}{K_{h}^{2}}. \tag{45}\] Here \(H_{0}\) is the observable Higgs field at zero energy. The experimentally measured parameters are the Higgs boson mass and its vacuum average, \[m_{\rm Higgs}=125\,{\rm GeV},\quad v_{\rm Higgs}=246\,{\rm GeV}\,. \tag{46}\] according to [58]. They are related to the parameters \(m_{H}\) and \(\lambda_{H}\) of the effective Higgs action (44) as follows: \[m_{H}=m_{\rm Higgs}/\sqrt{2}=88.6\,{\rm GeV}\simeq 10^{-17}m_{\rm Pl}, \tag{47}\] and \[\lambda_{H}=(m_{H}/v_{\rm Higgs})^{2}/2\simeq 0.13, \tag{48}\] The vacuum energy of the Higgs field is \[V_{\rm min}=-\frac{1}{2}m_{H}^{2}v_{\rm Higgs},\] so that the parameter \(c\) in the function \(f(R)\) should be corrected, \(c\to c+V_{\rm min}\). Note, however, that \(V_{\rm min}\) is very small as compared to the D-dimensional Planck scale and may be neglected. The above formulas contain the function \(U(u)\), the solution to Eq. (38) with a yet unknown constant \(v\). It is of interest that the Lagrangian structure (32) allows us to avoid the determination of this constant. Indeed, a solution to Eq. (38) can be found for the function \[\tilde{U}(u)=vU(u)\] because Eq. (38) \(\tilde{U}(u)\) does not contain the unknown parameter \(v\) in this case. Moreover, substitution of \(U=\tilde{U}(u)/v\) into the expressions (40), (41) and (42) gives \[K_{h}[U]=\frac{K_{h}[\tilde{U}]}{v^{2}},\qquad m_{h}^{2}[U]=\frac{m_{h}^{2}[ \tilde{U}]}{v^{2}},\qquad\lambda_{h}[U]=\frac{\lambda_{h}[\tilde{U}]}{v^{4}}, \tag{49}\] hence the observable parameters \(m_{H}\) and \(\lambda_{H}\) in (45) do not depend on \(v\). 
This quantity also appears in the relation \[v\simeq v_{\rm Higgs}/\sqrt{K_{h}[U]}, \tag{50}\] following from (43) and the substitution \(H_{0}\to v_{\rm Higgs}\), \(h\to v\). Luckily, this relation does not depend on \(v\) either. After taking into account the first equality in (49), we obtain an additional restriction for the function \(\tilde{U}\): \[1\simeq v_{\rm Higgs}/\sqrt{K_{h}[\tilde{U}]}. \tag{51}\] The quantity \(K_{h}[\tilde{U}]\) is calculated in \(m_{D}\) units. Therefore, \(v_{\rm Higgs}\) should also be expressed in \(m_{D}\) units. Figure 3 presents the Higgs field distribution \(\tilde{U}\) in the extra dimensions. In this section, we have found the conditions under which the initial parameter values written in Fig. 3 reproduce the Higgs Lagrangian with the observed parameters. The origin of small parameters has been discussed earlier, see the beginning of Sec. 4. Quantum fluctuations produce a variety of field amplitudes in the countable set of pocket universes. A small measure of them contains (extremely) small amplitudes \(\tilde{U}\). Field values of the order of \(\tilde{U}\sim 10^{-16}\) (see Fig. 3) are suitable for the relations (47), (48) and (51). ## 6 Low energies. The cosmological constant Analytical formulas for 4D gravity have been obtained in Sec. 4. It is assumed that they are approximately valid at the inflationary scale \(\sim 10^{13}\,\)GeV. The parameter \(a_{\rm eff}\) was obtained for specific values of the initial parameters \(a,c,m_{D}\). A calculation of \(c_{\rm eff}\), representing the effective cosmological constant (CC), is not necessary there, because its value could vary in a wide range without any effect on the inflationary process. This value is important at low energies, where the Hubble parameter is \(\sim 10^{-61}\approx 0\,\)GeV\({}^{2}\). Therefore, the extra metric must be found by solving the Einstein equations at \(H=0\). Luckily, the resulting metric depends weakly on the Hubble parameter if it varies within the interval \(0<H<0.01\), and Fig. 1 gives the appropriate impression. At the low energy scale, the Hubble parameter \(H\) is small, and the curvature squared \(R_{4}^{2}\) can be neglected in (25). In this case, the 4D Einstein equations lead to a relation between the CC and the Hubble parameter, \[c_{\rm eff}\equiv-2\Lambda=-6\,H^{2}, \tag{52}\] which means that \(c_{\rm eff}\) should be extremely small as well. On the other hand, the same value was found above starting from the initial D-dimensional action, see Eq. (29). It can be presented in the following form (see the Appendix): \[c_{\rm eff} = -6H^{2}+{\cal V}_{\rm n-1}\frac{m_{\rm D}^{\rm D-2}}{m_{\rm Pl}^{2}}\int_{u_{\rm min}}^{u_{\rm max}}\left[\left(f_{RR}\,R^{\prime}-f_{R}\,\gamma^{\prime}\right){\rm e}^{4\gamma}r^{\rm n-1}\right]^{\prime}du+O(H^{6}). \tag{53}\] A comparison of the expressions (52) and (53), derived with arbitrary initial parameters and boundary conditions, indicates that the integral in (53) must be zero. It makes sense to prove this statement directly. To this end, we should find the function \(\Phi(u)\equiv\left(f_{RR}\,R^{\prime}-f_{R}\,\gamma^{\prime}\right){\rm e}^{4\gamma}r^{\rm n-1}\) at the boundary points \(u_{\rm min}\) and \(u_{\rm max}\). Numerical simulations indicate that this function indeed tends to zero, see Fig. 4. Unfortunately, the accuracy is unsatisfactory while approaching the boundary points.
To clarify the situation, the following can be suggested: we modify the nonlinear term in the action, \(R^{2}\to R^{2}e^{-\epsilon R^{2}}\), from the beginning, with \(\epsilon\lll 1\) in \(m_{D}\) units, and put \(\epsilon=0\) at the end. This does not affect the equations of motion but smooths out the singularities. In this case, \(\Phi\to 0\) at the boundary points, where \(r=0\) by definition. Hence, the integral as a whole equals zero.

In this subsection, we have proved that the well-known relation \(H^{2}=\Lambda/3\) can be derived from D-dimensional gravity. In the standard notation, \(c_{\rm eff}=-2\Lambda\), where \(\Lambda\) is the cosmological constant. For a more general function \(f(R)\), terms proportional to \(H^{6}\) and other nontrivial terms can appear in the expression (53); in that case, the inflationary dynamics require a separate study.

## 7 The role of quantum fluctuations

### The smallest scale of compact extra dimensions

It is known that our instruments "feel" average values of a field, \(\bar{\zeta}(u)\), calculated as
\[\bar{\zeta}(u)=\zeta_{\rm classical}(u)+\delta\zeta(u),\]
where \(\zeta_{\rm classical}(u)\) is the classical part, and \(\delta\zeta(u)\) is a quantum correction to it. It makes sense to calculate the classical part only if \(\zeta_{\rm classical}(u)\gg\delta\zeta(u)\). This inequality holds only if the action \(S\gg 1\) (the steepest descent method). Thus, the classical description can be applied in our approach if
\[S=\int dv_{D}f(R)\gg 1, \tag{54}\]
or, in other words,
\[S\simeq\delta v_{D}\langle f(R)\rangle\gg 1, \tag{55}\]
where \(\delta v_{D}\simeq\delta u^{D}\) is a small volume parametrized by the coordinate \(u\), and \(\langle...\rangle\) stands for averaging over this volume. It means that the volume \(\delta v_{D}\) must not be smaller than \(\sim 1/\langle f(R)\rangle\). Thus, for a classical description to make sense (i.e., to approximately coincide with the averages), the averaging range must not be smaller than
\[\delta u\sim\langle f(R)\rangle^{-1/D}. \tag{56}\]
For example, for a seven-dimensional space and \(f(R)\sim 10\), we have \(\delta u\sim 10^{-1/7}\sim 1\). This means that the size of the extra dimensions should be larger than \(1/m_{D}\). It is also dangerous to draw physical conclusions from classical solutions in the vicinity of singular points, at distances smaller than \(1/m_{D}\).

### Fluctuations in the present epoch

The field value \(H_{P}\sim 10^{-17}\) is quite small. Let us estimate the probability of large fluctuations that could destroy the solution. The cosmological probability of finding a field value \(\chi_{2}\) at the instant \(t_{2}=t_{1}+t\) in a spatial region of the horizon size \(H^{-1}\) was studied in [59; 60]. Based on those results, it is possible to show that the probability of a fluctuation of the first-mode amplitude \(h_{1}\) can be written as
\[dP=dP_{1}=dh_{1}\cdot\sqrt{q_{1}/\pi}\exp[-q_{1}\,h_{1}^{2}],\quad t\to\infty, \tag{57}\]
where
\[q_{1}=\frac{\mu}{\sigma^{2}},\qquad\mu=\frac{m_{1}^{2}}{3H},\qquad\sigma=\frac{H^{3/2}}{2\pi}, \tag{58}\]
\(m_{1}\sim m_{D}\), and the present-day Hubble parameter is \(H=1.2\times 10^{-61}m_{\rm Pl}\). Knowing these values allows one to estimate the parameter \(q_{1}\). The fluctuation \(h_{1}\) should be of the order of the classical part, \(h_{1}\sim\tilde{U}\sim 10^{-17}\) (see Fig. 3), or larger to destroy it.
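The size of the suppression exponent \(q_{1}h_{1}^{2}\) implied by (58) can be checked numerically. The sketch below works in Planck units and assumes \(m_{1}=m_{D}\) and \(h_{1}\sim 10^{-17}\) in \(m_{D}\) units, as in the text:

```python
# Quick numerical check of the suppression exponent q1*h1^2 from Eq. (58).
import numpy as np

H = 1.2e-61                      # present-day Hubble parameter, in m_Pl units
for x in (1e-5, 1e-3, 1.0):      # x = m_D / m_Pl
    m1 = x                       # assumption: m1 ~ m_D
    h1 = 1e-17 * x               # assumption: h1 ~ 1e-17 in m_D units
    mu = m1**2 / (3.0 * H)
    sigma = H**1.5 / (2.0 * np.pi)
    q1 = mu / sigma**2
    print(x, q1 * h1**2, 1e211 * x**4)   # matches ~10^211 (m_D/m_Pl)^4
```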
Now we have everything needed to estimate the exponent,
\[q_{1}\,h_{1}^{2}\sim 10^{211}\left(\frac{m_{D}}{m_{\rm Pl}}\right)^{4},\qquad m_{D}>10^{-5}m_{\rm Pl}.\]
This estimate can be substituted into (57) to demonstrate how unlikely it is that even such a tiny classical field (\(\tilde{U}\sim 10^{-17}\)) will be destroyed.

### Quantum corrections

The essence of the Wilson approach is to fix a Lagrangian and its parameters at the highest scale and shift down to a low energy scale. This is achieved by sequentially integrating the Euclidean action over small slices of the momentum interval \(\Delta k_{E}\). The renormalization group equations thus obtained are widely used in this context [31]. The relations between low-energy parameter values and high-energy ones are discussed in [61]. Quantum fluctuations could also modify the form of the Lagrangian itself [62; 63].

The inclusion of a compact extra space complicates the procedure. Indeed, we cannot choose an arbitrarily small momentum interval because of the discreteness of the energy levels. For example, if the interval is small enough, \(\Delta k_{E}<1/r\), with \(r\) the scale of the extra dimensions, it contains no energy levels at all. A possible way to overcome this difficulty is discussed in [43], where the truncated Green functions
\[G_{T}(Z,Z^{\prime})\equiv\sum_{N\in\mathcal{N}}\frac{Y_{N}(Z)Y_{N}(Z^{\prime})^{*}}{\lambda_{N}}\]
were introduced. Here \(Y_{N}(Z)\) is a subset of the \((n+4)\)-dimensional eigenfunctions, and the coordinates \(Z\) describe both the 4D space and the compact extra space. This allows one to approximately calculate the parameters at low energies. As a result, the quantum corrections caused by a scalar field turn out to be proportional to its self-coupling. This means that such quantum effects cannot be responsible for reducing the parameter values by many orders of magnitude, from the Planck scale to the electroweak scale. The classical mechanism discussed in this paper was elaborated precisely for this aim. The procedure of quantum renormalization remains a necessary and unavoidable element that leads to fine tuning of the physical parameters at low energies.

## 8 Conclusion

In this paper, we have discussed an approach that provides a hierarchy of three energy scales: the inflationary, electroweak and cosmological ones. The necessary tools for the formation of small parameters and a successful solution of the problem are \(f(R)\) gravity and inhomogeneous extra dimensions.

The set of small parameters is formed in the following way. Slow rolling of a spatial domain from a sub-Planckian scale down to the inflationary one gives rise to several consequences: (1) nucleation of an infinite set of causally disconnected domains (pocket universes); (2) quantum fluctuations in each domain produce a variety of fields and extra-space metric distributions; (3) these distributions are stabilized when the energy scale is low enough.

Self-gravitating (scalar) fields do not necessarily settle at states with minimum energy. On the contrary: the physics of boson stars [55], for example, is based on the fact that self-gravitating scalar fields can settle at a continuum set of static states. Among them, there are states with arbitrarily small amplitudes, and these states are formed in a small but finite set of universes. As a result, a small but nonzero measure of universes contains small effective parameters, which are applied here to solve the hierarchy problem at the three energy scales.
The mechanism developed here should be accompanied by a renormalization group analysis aimed at correcting the initial parameter values.

## Acknowledgements

The work of SGR and KAB was funded by the Ministry of Science and Higher Education of the Russian Federation, Project "New Phenomena in Particle Physics and the Early Universe" FSWU-2023-0073 and the Kazan Federal University Strategic Academic Leadership Program. The work of AAP was funded by the development program of Volga Region Mathematical Center (agreement No. 075-02-2023-944). KAB also acknowledges support from Project No. FSSF-2023-0003.

## Appendix

The validity of Eq. (52) is not so trivial in the \((4+n)\)-dimensional case, and we discuss it here. We can exclude the terms with a scalar field from the definition of \(c_{\rm eff}\); using the expression (6), we obtain
\[c_{\rm eff} = {\cal V}_{n-1}\frac{m_{\rm D}^{D-2}}{m_{\rm Pl}^{2}}\int_{u_{\rm min}}^{u_{\rm max}}\Big{(}f\big{(}R_{\rm n}\big{)}-\big{(}\zeta^{\prime}\big{)}^{2}-2V\big{(}\zeta\big{)}\Big{)}\,{\rm e}^{4\gamma}\,r^{n-1}\,du \tag{59}\]
\[= {\cal V}_{n-1}\frac{m_{\rm D}^{D-2}}{m_{\rm Pl}^{2}}\int_{u_{\rm min}}^{u_{\rm max}}\Big{\{}f\big{(}R_{\rm n}\big{)}+2{R^{\prime}}^{2}f_{RRR}(R)+2\Big{[}R^{\prime\prime}+R^{\prime}\Big{(}3\gamma^{\prime}+({\rm n}-1)\frac{r^{\prime}}{r}\Big{)}\Big{]}f_{RR}(R)\]
\[-2\left(\gamma^{\prime\prime}+4{\gamma^{\prime}}^{2}+({\rm n}-1)\frac{\gamma^{\prime}r^{\prime}}{r}\right)f_{R}(R)+\frac{6H^{2}}{{\rm e}^{2\gamma(u)}}f_{R}-f(R)\Big{\}}\;{\rm e}^{4\gamma}r^{n-1}\,du=0.\]
Part of this expression can be transformed as follows:
\[{\cal V}_{n-1}\frac{m_{\rm D}^{D-2}}{m_{\rm Pl}^{2}}\int_{u_{\rm min}}^{u_{\rm max}}\Big{\{}2{R^{\prime}}^{2}f_{RRR}+2\Big{[}R^{\prime\prime}+R^{\prime}\Big{(}3\gamma^{\prime}+({\rm n}-1)\frac{r^{\prime}}{r}\Big{)}\Big{]}f_{RR}-2\Big{(}\gamma^{\prime\prime}+4{\gamma^{\prime}}^{2}+({\rm n}-1)\frac{\gamma^{\prime}r^{\prime}}{r}\Big{)}f_{R}\Big{\}}\;{\rm e}^{4\gamma}r^{n-1}\,du={\cal V}_{n-1}\frac{m_{\rm D}^{D-2}}{m_{\rm Pl}^{2}}\,2\int\Big{[}\Big{(}f_{RR}{R^{\prime}}-f_{R}\gamma^{\prime}\Big{)}\,{\rm e}^{4\gamma}r^{n-1}\Big{]}^{\prime}\,du. \tag{60}\]
The remaining part of the expression (59) can be rewritten in a more conventional form by substituting the expansion (24) and the definition (23):
\[{\cal V}_{\rm n-1}\frac{m_{\rm D}^{D-2}}{m_{\rm Pl}^{2}}\int\left[f(R_{n})+\frac{6H^{2}}{{\rm e}^{2\gamma}}f_{R}(R)-\Big{(}f(R_{n})+\frac{R_{4}}{{\rm e}^{2\gamma(u)}}f_{R}(R_{n})+\frac{R_{4}^{2}}{2{\rm e}^{4\gamma(u)}}f_{RR}(R_{n})\Big{)}+O\left(\frac{R_{4}^{3}}{{\rm e}^{6\gamma(u)}}f_{RRR}(R_{n})\right)\right]{\rm e}^{4\gamma}r^{n-1}\,du. \tag{61}\]
Since \(m_{\rm Pl}^{2}\) is defined by the expression (27) and \(R_{4}=12H^{2}\), Eq. (61) can be rewritten as
\[-6H^{2}+O(H^{6}). \tag{62}\]
Thus, we can present \(c_{\rm eff}\) in the form (53) by summing the expressions (60) and (62).
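The total-derivative identity behind (60) can also be verified symbolically. The following SymPy sketch uses a concrete cubic test function \(f(R)\) (chosen only so that \(f_{R}\), \(f_{RR}\) and \(f_{RRR}\) are all nonzero; it is not the particular \(f(R)\) used in the paper), while \(R(u)\), \(\gamma(u)\) and \(r(u)\) remain arbitrary:

```python
# Symbolic check of the total-derivative identity used in Eq. (60).
import sympy as sp

u = sp.symbols('u')
n, a, b, c = sp.symbols('n a b c')
R = sp.Function('R')(u)
g = sp.Function('gamma')(u)
r = sp.Function('r')(u)

# Concrete cubic test function f(R) and its derivatives f_R, f_RR, f_RRR.
Rs = sp.Symbol('Rs')
f = Rs**3 + b*Rs**2 + a*Rs + c
fR, fRR, fRRR = (f.diff(Rs, k).subs(Rs, R) for k in (1, 2, 3))

# Phi(u) = (f_RR R' - f_R gamma') e^{4 gamma} r^{n-1}, as defined in the text.
Phi = (fRR*R.diff(u) - fR*g.diff(u)) * sp.exp(4*g) * r**(n - 1)

# The left-hand-side integrand of Eq. (60).
integrand = (2*R.diff(u)**2*fRRR
             + 2*(R.diff(u, 2) + R.diff(u)*(3*g.diff(u) + (n-1)*r.diff(u)/r))*fRR
             - 2*(g.diff(u, 2) + 4*g.diff(u)**2 + (n-1)*g.diff(u)*r.diff(u)/r)*fR
             ) * sp.exp(4*g) * r**(n - 1)

print(sp.simplify(2*Phi.diff(u) - integrand))   # prints 0
```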
2306.14650
PhD Thesis: Exploring the role of (self-)attention in cognitive and computer vision architecture
We investigate the role of attention and memory in complex reasoning tasks. We analyze Transformer-based self-attention as a model and extend it with memory. By studying a synthetic visual reasoning test, we refine the taxonomy of reasoning tasks. Incorporating self-attention with ResNet50, we enhance feature maps using feature-based and spatial attention, achieving efficient solving of challenging visual reasoning tasks. Our findings contribute to understanding the attentional needs of SVRT tasks. Additionally, we propose GAMR, a cognitive architecture combining attention and memory, inspired by active vision theory. GAMR outperforms other architectures in sample efficiency, robustness, and compositionality, and shows zero-shot generalization on new reasoning tasks.
Mohit Vaishnav
2023-06-26T12:40:12Z
http://arxiv.org/abs/2306.14650v2
# DOCTORAT DE L'UNIVERSITE DE TOULOUSE

###### Abstract

Exploring the role of (self-)attention in cognitive and computer vision architecture

**JURY**

Timothee Masquelier, Jonathan D. Cohen, Hugues Talbot, Jessica B. Hamrick, Thomas Serre, Nicholas Asher

**Doctoral school and specialty:** _EDMITT : Ecole Doctorale Mathematiques, Informatique et Telecommunications de Toulouse - Informatique et Telecommunications_

**Research unit:** _Departement Sciences et technologies de l'information et de la communication_

**Thesis supervisors:** _Thomas SERRE_ and _Nicholas ASHER_

**Reviewers:** _Jonathan D. Cohen (Princeton University, USA)_ and _Hugues Talbot (CentraleSupelec, France)_

## Resume

A fundamental mechanism of cognition, necessary for performing complex reasoning tasks, is the ability to selectively process information (attention) and to retain it in an accessible state (memory). We systematically analyze the role of these two components, starting with self-attention based on the most popular attention model, the Transformer, and later extending the architecture with memory. The Transformer is the latest class of neural architecture and lies at the heart of the most fascinating demonstrations of deep learning; it has brought a paradigm shift to the field of artificial intelligence. It has replaced recurrent and convolutional networks with self-attention as the de facto architectural choice for most AI applications.

We first study the computational mechanisms involved in a synthetic visual reasoning test (SVRT), analyzing the ability of a popular computer vision architecture (ResNet) of different depths trained on datasets of different sizes. This led to a novel, finer taxonomy for the twenty-three SVRT tasks, consistent with the same-different (SD) and spatial-relation (SR) classes of reasoning tasks widely accepted in the literature. Next, we study the role of self-attention incorporated into ResNet50 in solving the SVRT challenge. Inspired by the two types of visual attention systems, we modeled self-attention to be used as feature-based and spatial attention to enrich the feature maps of a feedforward network. We evaluated the ability of these attention networks to solve the SVRT challenge and found the resulting architectures to be much more efficient at solving the hardest of these visual reasoning tasks. The novel taxonomy obtained earlier is also partially explained by the relative improvement of the two attention networks and leads to testable predictions regarding the attentional needs of SVRT tasks.

Finally, we develop a novel cognitive architecture integrating self-attention and memory. We propose GAMR: Guided Attention Model for visual Reasoning, motivated by the theory of active vision. GAMR operates in a manner similar to the brain, solving complex visual reasoning tasks via sequences of attention shifts that select and route the task-relevant visual information into memory. This shift of attention is implemented with the help of a self-attention module guided by an internally generated query.
We demonstrate that _GAMR_ is efficient, robust, and compositional compared to feedforward-, attention-, or memory-based architectures. Moreover, GAMR is capable of generalizing to completely novel reasoning tasks. Overall, our work analyzes the role of self-attention in cognitive and computer vision architectures through their ability to solve complex visual reasoning tasks that require attention as a key component.

## Abstract

A fundamental mechanism of cognition needed to perform complex reasoning tasks is the ability to selectively process information (attention) and retain information in an accessible state (memory). We systematically analyze the role of both these components, starting with Transformer-based self-attention as a model of attention and later extending the architecture with memory. The Transformer is the latest and seemingly most powerful class of neural architecture, and it has brought a paradigm shift to the field of artificial intelligence. It has replaced recurrence and convolution networks with self-attention as the de facto architectural choice for most AI applications.

We first study the computational mechanisms involved in a synthetic visual reasoning test (SVRT) challenge, analyzing the ability of a popular computer vision architecture (ResNet) of different depths trained on different dataset sizes. This led to a novel, finer taxonomy for the twenty-three SVRT tasks, consistent with the broadly accepted same-different (SD) and spatial-relation (SR) classes of reasoning tasks in the literature. Next, we study the role of self-attention incorporated into ResNet50 in solving the SVRT challenge. Inspired by the two types of visual attention systems, we modeled self-attention to be used as feature-based and spatial attention to enrich the feature maps of a feedforward network. We evaluated the ability of these attention networks to solve the SVRT challenge and found the resulting architectures to be much more efficient at solving the hardest of these visual reasoning tasks. The novel taxonomy obtained earlier is also partially explained by the relative improvement of the two attention networks and leads to testable predictions regarding the attentional needs of SVRT tasks.

At last, we develop a novel cognitive architecture integrating attention and memory. We propose GAMR: **G**uided **A**ttention **M**odel for (visual) **R**easoning, motivated by the theory of active vision. GAMR operates in a manner similar to the brain, solving complex visual reasoning tasks via sequences of attention shifts to select and route the task-relevant visual information into memory. This shift of attention is implemented with the help of an attention module guided by an internally generated query. We demonstrate that _GAMR_ is sample-efficient, robust, and compositional compared to feedforward-, attention-, or memory-based architectures. In addition, GAMR is shown to be capable of zero-shot generalization on completely novel reasoning tasks. Overall, our work analyzes the role of self-attention in cognitive and computer vision architectures through their ability to solve complex visual reasoning tasks that require attention as a key component.

To the former president, missile man of India, nuclear scientist, writer, poet, and educator Dr. A. P. J. Abdul Kalam
## Acknowledgments

Sailing through the past three years has been an unforgettable experience filled with countless challenges. I would like to use this opportunity to show how grateful I am to all the people who have helped me throughout this exciting journey toward completing my Ph.D.

First and foremost, I thank my academic advisors, Thomas Serre (Brown University, USA) and Nicholas Asher (ANITI, France), for accepting me at ANITI. Words cannot express my gratitude to them for their invaluable guidance and patience and for providing me with the intellectual freedom to work. I am particularly thankful to Thomas Serre for his unwavering support, assistance in bridging neuroscience and AI, and encouragement of my ideas. Our conversations have shaped my scientific thinking and helped me filter out the most exciting approaches to follow. His one-to-one review meetings and critical judgment have expanded my boundaries and enlightened me with countless ideas to progress.

I acknowledge the members of my thesis committee, Timothee Masquelier (Senior Research Scientist, CerCo, France), Jonathan D. Cohen (Princeton Neuroscience Institute, Princeton University, USA), Hugues Talbot (CentraleSupelec, France) and Jessica Hamrick (Senior Research Scientist, DeepMind, UK). Their expertise and insights have enriched my research and contributed to its quality. Additionally, this endeavor would not have been possible without the support of the Agence Nationale de la Recherche (ANR), which generously financed my research. I sincerely thank Corinne Joffre, Secretaire generale, ANITI, and her colleagues for supporting my multiple relocations between the USA and France. This journey would have remained unfinished without the help of the computing staff at the High-Performance Cluster (HPC) _Oscar_, Brown University, USA and _CALMIP_, Universite Federale de Toulouse Midi-Pyrenees (UFTMiP), France. They provided their expertise to help me handle computationally intensive jobs.

Special thanks go to Rufin VanRullen, Research Director at CerCo, for his reliable and practical scientific mentoring and for offering me office space alongside his team. I am grateful to my office mates Andrea Alamia and Aimen Zerroug, who became my friends and collaborators. Aimen has been my travel companion, and together we visited Brown University and brainstormed numerous ideas. I also want to acknowledge the NeuroAI team members at CerCo, including Milad Mozafari, Romain Bielawaski, Bhavin Choksi, Javier Cuadrado, Mathieu Chalvidal, Benjamin Devillers, Colin Decourt, Ismail Khalfaoui, and Sabine Muzellec. Our lab meetings provided a platform for exchanging scientific insights and fostering collaboration.

I want to thank the members of the _Serre Lab_, Aimen Zerroug, Thomas Fel, Mathieu Chalvidal, Lakshmi N. Govindarajan, Jacob Rose, Pachaya Sailamul, Ivan Rodriguez, Rex Liu, Lore Goetschalckx, Victor Boutin, and Drew Linsley, for their warm welcome, collaborative spirit, and sharing of knowledge. Their valuable feedback and insightful analyses have played a crucial role in refining my work. I also had the opportunity to collaborate with Remi Cadene (Senior Scientist, Tesla), who encouraged me to organize my thoughts before working on them, and
Drew Linsley (Asst. Professor of Research, Brown University), who inspired me with his choice of words in scientific writing and his positive attitude toward any new idea. I greatly benefited from conversations with Jonathan D. Cohen (Professor, Princeton University) and his group members, especially Taylor W. Webb (University of California, Los Angeles); their extensive discussions on the GAMR architecture helped me understand it better. I thank Peter Wilf (Professor, Pennsylvania State University) for having me on board with him to put my understanding of paleobotany into practice. Lastly, our ongoing collaboration with Experimental Neurosurgery and Neuroanatomy at KU Leuven introduced me to another aspiring scientist, Jesus G. Ramirez. This opportunity allowed me to understand neural visual reasoning mechanisms at the anatomical level.

I am deeply indebted to Ashwani Sharma (Asst. Professor, IIT Ropar), K. R. Ramakrishnan (Emeritus Prof, IISc Bangalore), Anil Kumar Tiwari (Assoc. Professor, IIT Jodhpur) and Ranjan Gangopadhyay (Emeritus Prof, IIT Kharagpur). They fueled my scientific curiosity during various stages of my life and supported me in becoming a keen researcher.

I would be remiss not to mention my family, including my parents Smt. Vijaylakshmi Vaishnav and Shri. Bharat Kumar Vaishnav (Commandant, CRPF), my sister Dr. Divya Vaishnav (Assistant Professor, Chandigarh University), my brother-in-law Mr. Sunil Sharma (Senior Scientist, ISRO), and my younger brother Mr. Gaurav Vaishnav (Provincial Civil Service, Govt. of Bihar). Their unwavering belief in me and constant moral support have been the pillars of my motivation throughout this process. In addition to my family, I owe a debt of gratitude to my friends, scattered across different parts of the world, whose support has been invaluable. Special thanks to Dr. Dinesh K. Chobey, Mohit Ahuja, Parita Verma, Himanshu Vaishnav, Malay Bateriwala, Pragnya Paramita, and many others whose names I regrettably cannot mention individually.

###### Contents

* 1 Introduction
* 1.1 Self-attention-based Transformer architecture
* 1.2 Self attention in vision tasks
* 1.3 Transformer-based vision architecture
* 1.4 Original Contributions
* 2 Computational Demands of Visual Reasoning
* 2.1 Introduction
* 2.2 Systematic analysis of SVRT tasks' learnability
* 2.3 An SVRT taxonomy
* 2.4 Conclusion
* 3 Role of self-attention in a computer vision architecture
* 3.1 Introduction
* 3.2 Experiment 1: Self-attention with ResNet50
* 3.3 Experiment 2: Feature vs. rule learning
* 3.4 Conclusion
* 4 Role of self-attention in a cognitive architecture
* 4.1 Introduction
* 4.2 Related Work
* 4.3 Proposed approach
* 4.4 Method
* 4.5 Benchmarking the system
* 4.6 Learning Compositionality
* 4.7 Zero-shot generalization
* 4.8 Ablation Study
* 4.9 Additional Experiment
* 4.10 Hyperparameters
* 4.11 Conclusion and limitations
* 5 Discussion and Future work
* 6 Publications
* 7 Summary in French

## Appendix

* A Synthetic Visual Reasoning Task
* B Computational Demands of Visual Reasoning

List of Figures

* 1.1 A summary of attention in Cognitive science and machine learning (source)
* 1.2 Transformer architecture proposed by Vaswani et al. (2017) (source)
* 1.3 Illustration of Multi-head attention mechanism in a Transformer network (source)
* 1.4 Vision Transformer architecture (Dosovitskiy et al., 2021)
* 1.5 Computational demands for training Transformers vs. CNNs. Compute needed to train a Transformer network has increased by 275 times in the last two years.
(source)
* 2.1 Two SVRT sample tasks from a set of twenty-three in total. For each task, the leftmost and rightmost two examples illustrate the two categories to be classified. Representative samples for the complete set of twenty-three tasks can be found in Figures A1 and A2.
* 2.2 Test accuracy for each of the twenty-three SVRT tasks as a function of the number of training samples for ResNets with depths 18, 50 and 152, resp. The color scheme reflects the identified taxonomy of SVRT tasks (see Figure 2.3 and text for details).
* 2.3 Dendrogram derived from an N-dim hierarchical clustering analysis on the test accuracy of N=15 ResNets[18/50/152] trained to solve each task over a range of training set sizes.
* 3.1 Location of the Transformer self-attention modules in our ResNet extensions.
* 3.2 Test accuracies for a baseline ResNet50 vs. the same architecture endowed with the two forms of attention for each of the twenty-three SVRT tasks when varying the number of training examples. A different axis scale is used for \(SR_{2}\) to improve visibility. These curves are constructed by joining task accuracy for five points representing dataset sizes.
* 3.3 Test accuracies for 50-layer ResNets with spatial attention (orange), feature-based attention (tan), or no attention (green). Each bar depicts performance after training from scratch on 10k samples.
* 3.4 The benefit of attention in solving the SVRT is greatest in data-limited training regimes. The x-axis depicts the number of samples for training, and the y-axis depicts the ratio of the average performance of models with attention to models without attention. A ratio greater than 1 indicates that attention helps; a ratio lower than 1 indicates that it hurts. This gives us five ratios per task and attention process, one per dataset size. We performed a linear fitting procedure for these points and calculated the corresponding slope. This slope characterizes the relative benefits of attention for that particular task as the number of available training examples increases. If the benefit of attention is most evident in lower training regimes, one would expect a relatively small slope; if it is most evident in higher training regimes, one would expect a large slope.
* 3.5 Principal component analysis of the twenty-three tasks using the 15-dimensional feature vectors derived from Experiment 1, representing the test accuracy obtained for each task for different dataset sizes and ResNets of varying depths (18, 50 & 152). The dotted red line represents 4 different bins in which these tasks can be clustered.
* 3.6 Test accuracies for a baseline ResNet50 trained from scratch ("No initialization") vs. the same architecture pre-trained on an auxiliary task in order to learn visual representations that are already adapted to the SVRT stimuli, for different numbers of training examples. The format is the same as used in Figure 3.2. A different axis scale is used for \(SR_{2}\) to improve visibility. These curves are constructed by joining task accuracy for five points representing dataset sizes.
* 4.1 Our proposed _GAMR_ architecture is composed of three components: an _encoder_ module (\(f_{e}\)) builds a representation (\(z_{img}\)) of an image, a _controller_ guides the attention module to dynamically shift attention, and selectively routes task-relevant object representations (\(z_{t}\)) to be stored in a memory bank (\(M\)).
The recurrent controller (\(f_{s}\)) generates a query vector (\(q_{int_{t}}\)) at each time step to guide the next shift of attention based on the current fixation. After a few shifts of attention, a _reasoning_ module (\(r_{\theta}\)) learns to identify the relationships between objects stored in memory.
* 4.2 Encoder module (\(f_{e}\)) used in _GAMR_. It consists of four convolutional blocks to process input images of 128\(\times\)128 resolution.
* 4.3 Bar plot analysis for the SVRT tasks grouped in same-different (\(SD\)) and spatially related (\(SR\)) tasks. We compared the accuracies of five baseline architectures with _GAMR_. We trained these models with 0.5k, 1k, 5k and 10k samples.
* 4.4 **Compositionality test**: We train the model with tasks containing specific rules (e.g., task \(1\) representing same-different discrimination and task _10_ involving identifying whether the four shapes form a square or not). We show that with its ability to compose already learned rules, _GAMR_ can quickly learn with 10 samples per class to adapt to a novel scenario (e.g., task _15_, where the rule is to identify whether the four shapes forming a square are identical or not).
* 4.5 We compared the average accuracy over two sub-clusters of SVRT obtained by _GAMR_ with its variants where we replaced the guided-attention module with self-attention (_GAMR-SA_) and where we completely gave away attention and made it a relational reasoning architecture (_GAMR w/o Atn (RN)_).
* 4.6 **Ablation studies**: We pruned separate parts of the model, one at a time: controller output (\(out\)), attention vector (\(w_{t}\)), relational vector (\(all_{obj}\)), feature channel gain factor (\(g\)) and instance normalization (\(iNorm\)); the bar plot shows the variation in performance on SD and SR tasks when trained with 1k samples.
* 4.7 **Time steps visualization**: Figure showing the shift of attention with each time step in a task-dependent manner. In the first row, the task is to answer if the two shapes are touching each other from the outside. At each time step, the network explores the area where the shapes are touching each other. In the other rows, attribution maps show the shifts over different shapes in an image. The controller module for the task in the respective rows shifts attention across different shapes at each time step.
* 4.8 **Abstract variable**: t-SNE plot of the output vector (\(out\)) obtained from the controller (\(f_{s}\)) for all 23 SVRT tasks independently. Each cluster can be clearly identified from the other clusters representing different relations learned. Tasks are represented as labels with the same colored box around them, placed at the mean location of the cluster.
* 4.9 **ART for _GAMR_**: (a) Same/different discrimination task. (b) Relational match-to-sample task (answer is 2). (c) Distribution-of-three task (answer is 1). (d) Identity rules task (ABA pattern, answer is 3).
* Test accuracy on ART with different holdout sets when the images are \(centered\), compared with the accuracy when shapes are \(jittered\) in every image. We find that, unlike other baselines, which experience a huge drop in performance when shapes are jittered, GAMR is stable. We plot the average accuracy over ten runs on the dataset. The \(x\) axis corresponds to the four types of tasks, and \(y\) represents the average accuracy score. These tasks are as follows: (a) same-different (SD) discrimination task; (b) Relation match to sample task (RMTS); (c) Distribution of three task (Dist3); and (d) Identity rule task (ID).
* **ART**: Comparing the average performance of _GAMR_ with other baselines over 10 runs for different holdout values (m = 0, 50, 85, 95). These models are evaluated on four types of tasks, i.e., Same-Different (SD), Relation match to sample (RMTS), Distribution of 3 (Dist3) and Identity rules (ID).
* A1 Sample images for Same Different (SD) tasks
* A2 Sample images for Spatial Relation (SR) tasks
* B2 Slope attained by linear fitting of points obtained after taking the ratio of the test accuracy of each network with a spatial attention module to the test accuracy of a ResNet50, for each task and training condition, for Spatial Relation (SR) tasks
* B3 Slope attained by linear fitting of points obtained after taking the ratio of the test accuracy of each network with a feature-based attention module to the test accuracy of a ResNet50, for each task and training condition, for Same Different (SD) tasks
* B4 Slope attained by linear fitting of points obtained after taking the ratio of the test accuracy of each network with a feature-based attention module to the test accuracy of a ResNet50, for each task and training condition, for Spatial Relation (SR) tasks
* B5 Test accuracies for a baseline ResNet50 trained from scratch ("No initialization") vs. the same architecture pre-trained on Imagenet data for different numbers of training examples. Also note that a different axis scale is used for \(SR_{2}\) to improve visibility.

List of Tables

* 1.1 Complexity comparison of different networks for a sequence of length \(n\), kernel size \(k\) and dimensionality \(d\) [20]
* 3.1 Pearson coefficient (\(r\)) and corresponding \(p\) values obtained by correlating the slope vectors of the spatial attention and the feature-based attention modules with the two principal components of Figure 3.5. See text for details.
* 4.1 Test accuracy to show whether the model learns the correct rules when we train it on one task and test on a different set of SVRT tasks, for _GAMR_, Attention with ResNet50 (_Attn-ResNet_) and ResNet50 (_ResNet_).
* 4.2 **ART**: Number of training and test samples used for four different types of tasks.
* 4.3 **ART**: For four different types of tasks, the number of epochs and learning rates (LR) used to train different architectures.
* A.1 Each cell represents the number of attempts participants took to reach seven consecutive correct categorizations. Here, rows and columns represent \(task\ number\) and \(participant\ number\). Entries containing "X" indicate that the participant failed to solve the problem, and those cells are not included in the marginal means. [19]

## Chapter 1 Introduction

###### Abstract

Early theories held that unattended signals are attenuated rather than filtered, reconciling opposing viewpoints. These models and theories contributed to the understanding of attention as a mechanism for coping with limited information processing capacity [Broadbent, 1958]. Broadbent's comprehensive attention model, grounded in cognitive psychology, viewed attention as a biological mechanism to cope with the limited capacity of information processing. This model sparked a debate regarding the stage at which selection occurs: early in the information pipeline, as proposed by Broadbent, or later in the selection process, as suggested by Norman [1968]. Treisman introduced the concept of attenuating unattended signals as a middle-ground perspective [Treisman and Gelade, 1980]. Milner [1974] proposed that attention not only selects relevant features but also provides feedback to early stages of information processing.
This framework was incorporated into Adaptive Resonance Theory by Grossberg [1975]. Subsequent neurophysiological evidence demonstrated the top-down influence of the attentional state on the activation of perceptual circuitry, indicating that feedback can occur at any stage of the information pipeline.

Many computational models of visual saliency originate from Treisman and Gelade's Feature Integration Theory (FIT) of spatial visual attention, proposed in 1980. The FIT aimed to explain the performance difference between pop-out stimuli and conjunction search. Texton theory by Bergen and Julesz [1983] demonstrated that certain features allow for rapid discrimination of a target and surrounding outliers (pop-out), while others do not. According to the FIT, a saliency map, often referred to as a "master map," integrates information from separate feature maps to identify salient locations. Treisman and Gelade also addressed the binding issue, which involves the cohesive representation of an object by binding its features (color, shape, location, etc.) together. Koch and Ullman [1987] proposed a computational implementation of the FIT, where the saliency map is computed as a weighted sum of the feature maps.

Visual attention is a specific form of attention that operates within the visual modality. It involves the selective processing and allocation of attentional resources to visual stimuli. Visual attention allows us to prioritize and focus on specific visual features, objects, or regions within our visual field while suppressing or inhibiting the processing of other visual inputs. It can operate at different levels. At the early perceptual level, it involves the selection and processing of basic visual features such as color, shape, and motion. This early selection process helps filter out irrelevant visual information and enhance the salience of relevant visual stimuli. At a higher cognitive level, visual attention enables us to selectively attend to objects or regions of interest, guiding our gaze and directing our focus within the visual scene.

The mechanisms of visual attention include both bottom-up and top-down processes. Bottom-up attention is driven by salient or physically distinctive features of stimuli that automatically capture our attention, such as a bright color or sudden movement. Top-down attention, on the other hand, is driven by our goals, expectations, and prior knowledge. It allows us to voluntarily direct our attention to specific stimuli based on their relevance or importance in a given context. Visual attention plays a crucial role in various cognitive processes, such as visual perception, object recognition, scene understanding, and visual search tasks. It helps us efficiently process and interpret visual information, guiding our interactions with the visual world.

The cognitive science literature depicts several aspects of attention: it can be concentrated, it can focus on a particular modality, it can be divided, it can be selective, and it can have a finite capacity. However, selectivity is its most characteristic feature. Selective attention is necessary because of the limited availability of resources. Visual attention and selective attention are closely interconnected concepts that involve the cognitive process of allocating attentional resources to specific stimuli while ignoring or suppressing others.
Visual attention refers to the selective processing and prioritization of visual information, while selective attention encompasses the broader ability to attend to stimuli across different sensory modalities. Selective attention is the cognitive process of focusing on one or a limited number of sensory stimuli while disregarding irrelevant inputs. Various theories have been proposed to explain selective attention, including bottleneck theories and load theories. Bottleneck theories, such as Filter Theory [Broadbent, 1958], Late Selection Theory [Deutsch and Deutsch, 1963], and Attenuation Theory [Treisman, 1964], focus on the flow and filtering of information. Load theories, like the Perceptual Load Theory of Lavie and Tsal [1994] and the Dilution Theory [Tsal and Benoni, 2010], address the allocation of perceptual and cognitive resources. However, operationalizing these constructs and validating the theories can be challenging. Selective attention is essential for daily functioning, preventing overload of the information processing system.

Early theories of attention, such as Donald Broadbent's Filter Theory, proposed a "bottleneck" model of selective attention, likened to a bottle with a narrow opening. According to Broadbent, stimuli enter a sensory buffer where their physical characteristics are assessed, allowing only a few to pass through the selective filter. Unselected stimuli decay in the buffer, while the selected ones proceed to be processed for meaning and determine how we respond. Broadbent used the dichotic listening task to study selective attention, finding that participants performed better when attending to one ear at a time. However, criticisms arose regarding where stimuli gain meaning within the attention process, with the cocktail party effect suggesting that analysis occurs before filtering. Deutsch and Deutsch proposed the late selection theory as an alternative to address the limitations of Broadbent's theory. They suggested that all stimuli are analyzed for meaning, but only selected ones pass the filter based on their physical characteristics and relevance. Anne Treisman introduced the attenuation theory, suggesting that stimuli are not filtered but attenuated, entering the sensory register at a lower intensity and gaining meaning early on. Her theory addresses the limitation of Broadbent's theory regarding the cocktail party effect. Treisman used the dichotic listening task with complete words and found that people often combine messages from both ears, implying that the unattended message still holds meaning regardless of retention. These early theories laid the groundwork for understanding selective attention and the processing of stimuli based on their relevance and physical characteristics.

Visual attention involves the mechanisms and processes that enable us to focus on relevant visual stimuli and filter out irrelevant or distracting visual inputs. It allows us to direct our attention to specific regions or objects within the visual field, selectively process their features, and integrate them into our perceptual experience. Visual attention plays a crucial role in various tasks, such as visual search, where we actively scan our environment to find a specific target among distractors. Selective attention, on the other hand, extends beyond the visual domain and encompasses attentional processes across different sensory modalities, including auditory, tactile, and cognitive inputs.
It involves the ability to prioritize and allocate attentional resources to relevant stimuli or information while disregarding or suppressing irrelevant or less important stimuli from all sensory channels. While visual attention is a specific subset of selective attention that focuses on the processing and filtering of visual information, it is interconnected with other modalities of attention. For example, during a complex task that requires both visual and auditory processing, selective attention allows us to prioritize the relevant visual stimuli while simultaneously attending to relevant auditory cues or instructions.

Recently, visual attention has gained tremendous interest in the field of artificial intelligence. Visual attention helps in answering _what_ to look at and _where_ to look. It has been vastly studied in psychology and neuroscience (Posner and Petersen, 1990; Cohen et al., 1990; Phaf et al., 1990; Bundesen, 1990; Desimone et al., 1995; Mozer and Sitton, 1998; Corbetta and Shulman, 2002; O'Reilly and Frank, 2006; Petersen and Posner, 2012; Moore and Zirnsak, 2017) and more recently by Flesch et al. (2022) and Dekker et al. (2022). These studies have acted as a source of inspiration for several artificial intelligence models (Khosla et al., 2007; Lindsay and Miller, 2018), including the ones proposed in this thesis. There are three categories of selectivity in a visual attention system: by spatial location _(space-based)_ (Posner, 1980; Posner et al., 1982), by object membership _(object-based)_ (Duncan, 1984; Egly et al., 1994a; Vecera and Farah, 1994; Kramer et al., 1997) and by particular features of the input _(feature-based)_ (Harms and Bundesen, 1983; Driver and Baylis, 1989; Kramer and Jacobson, 1991; Baylis and Driver, 1992; Duncan and Nimmo-Smith, 1996).

**Visual Spatial Attention** Our eyes make small and rapid movements several times every second, known as saccades. These eye movements change the locus of attention. Visible shifts of attention, such as saccades, are known as _overt_ visual attention. Another method, which emphasizes a spatial location without any overt shift of the fovea, is _covert_ attention. An example is the subject's fixation on a particular region throughout a task where the stimulus is likely to appear. This region is also referred to as the "_spotlight_" of attention. Certain visual patterns that involve edges, contrast, or motion automatically attract attention. These patterns are known as "_salient_" (Itti and Koch, 2001). In the presence of task-specific information, these saccadic movements are controlled in a top-down fashion around the particular visual target instead of the salient regions. Eye movements are one of the possible ways to control visual attention.

**Visual Feature Attention** When the focus of attention is on features like color, shape or orientation instead of location, it is known as feature-based attention. It is an example of covert visual attention. Cueing the right features enhances the system's performance. It is used in tasks such as visual search, combining covert feature-based attention with overt attention. Feature-based attention is global, as opposed to spatial attention; i.e., when attention is focused on a particular feature, neurons representing that feature across the visual space are also modulated [Saenz et al., 2002]. It is related to object attention; i.e., instead of attending to an abstract feature, attention is deployed at a specific object in a visual scene [Chen, 2012].
A single feedforward pass in the visual hierarchy can segregate the objects of a visual scene if there is a distinct salient difference between them, as opposed to a complex scene where recurrent and serial processing might be required [Lamme and Roelfsema, 2000]. In addition to feature-based or spatial attention, another widely accepted classification is characterized by the type of data processing [Connor et al., 2004, Buschman and Miller, 2007]. There are two types of data processing, _bottom-up_ and _top-down_. In a bottom-up attention process, external factors guide the attentional process because of their inherent properties, like their color or sudden motion in the scene. It is fast, primitive, and sensory-driven. In top-down attention, there is an internal attentional guidance mechanism based on prior knowledge and current goals, like searching for food if one is hungry. It can ignore salient stimuli and focus on the target object or event.

Attention is also involved while performing tasks requiring multiple sensory signals. In the presence of multiple tasks or sensory signals, the central executive controller helps to route the focus of attention. The central executive controller is responsible for coordinating activity within the cognitive system: directing attention, decision making and maintaining task goals. Context and history are deemed helpful for executing tasks optimally, making the controller highly related to working memory. Attention is furthermore seen as the output of the central controller. The controller selects the targets of attention and passes them to the system responsible for their implementation. There is a three-way relationship between executive control, working memory and attention, such that the focus of attention is selected by the executive controller based on the contents of working memory [Soto et al., 2008]. Although all the objects in working memory can influence attention, the executive controller helps decide which one should have the greatest effect [Olivers and Eimer, 2011].

These vast and extensive cognitive studies related to attention have inspired the field of AI and helped to boost its performance (Figure 1.1). The first attempt to adopt an attention mechanism in a neural network was made in the 1980s, when the improved version of the _Neocognitron_ [Fukushima, 1980] incorporated selective attention [Fukushima, 1987] to decompose the image into elementary features. Later, Fukushima and Imagawa (1993) modified the network to recognize and segment characters in cursive handwriting. Postma et al. (1997) proposed an attentional scanning model, _SCAN_, to attend to and identify object patterns without decomposing the scene into elementary features. As an alternative to these static neural approaches, Schmidhuber and Huber (1991) proposed a sequential model inspired by the sequential eye movements used for object detection. In this model, a neural controller learns the sequential generation of fovea trajectories to reach the target. The two types of data processing also inspired developments around the same time, leading to a model extracting the region of interest using bottom-up and top-down processing (Milanese et al., 1994). By the early 2000s, the influence of attention on the evolution of neural networks increased. Miau and Itti (2001) proposed a model of primate vision integrating both the _what_ and _where_ pathways.
The model has a fast visual attention-based front-end to select the most salient image areas and a slow back-end to recognize objects in those selected areas. Another model based on the primate selective mechanism is presented in Salah et al. (2002), with the idea of selectively attending to relevant parts of the input image. In this model, a neural network analyzes the input image and generates posterior probabilities for the Markov models. Attention has also been used for object recognition (Walther et al., 2002) and scene analysis (Schill et al., 2001).

Figure 1.1: A summary of attention in Cognitive science and machine learning (source)

The year 2015 marks the new beginning of attention-based architectures, with the introduction of the attentional model for Neural Machine Translation (NMT) [Bahdanau et al., 2014a, Luong et al., 2015] and image captioning [Xu et al., 2015]. In NMT, the expectation is to learn continuous representations of variable-length sequences. _Recurrent neural networks_ (RNNs) like LSTMs [Hochreiter and Schmidhuber, 1997a], GRUs [Cho et al., 2014a] and Quasi-RNNs [Bradbury et al., 2017] were some of the popular sequence models for representation learning at that time. While these RNNs' output depends on the previous elements in a sequence, traditional feedforward neural networks assume that inputs and outputs are independent of each other. Nonetheless, their limitations include the inability to parallelize computations, making them slow to train, and a fixed-size memory, which becomes a bottleneck for long-range interactions [Vaswani et al., 2017].

Models used for NMT typically consist of an encoder-decoder architecture [Cho et al., 2014b]. Typically, both encoder and decoder are RNNs, where the encoder takes an input sequence and represents it as a fixed-length vector. A decoder then takes this encoded vector to generate the output sequence token by token. However, this method has two challenges: first, the encoder compresses the input sequence into a fixed-length vector, which may lead to a loss of information [Cho et al., 2014a]. Second, the model is incapable of aligning the input and output sequences, which is essential for tasks such as translation or summarization [Young et al., 2018]. While generating the output sequence, the decoder also lacked a mechanism to selectively focus on relevant input tokens. Later, Bahdanau et al. [2014b] proposed sequence-to-sequence modeling with the help of soft attention, emphasizing the parts of the sentence relevant to predicting the target word. Bahdanau et al. [2014b] extended the basic encoder-decoder by letting the model search a set of input words while generating target words. It allowed the model to focus on the information needed to generate the subsequent target sequence.

In the following two years, the adoption of attentional mechanisms in neural networks diversified. A content-based soft attention mechanism [Goodfellow et al., 2016] is used in the Neural Turing Machine (NTM) [Graves et al., 2014] with end-to-end training. Around the same time, Cheng et al. [2016] used a form of attention called intra-attention in the Long Short-Term Memory (LSTM) [Hochreiter and Schmidhuber, 1997b] architecture. Cheng et al. [2016] embedded a memory network inside the LSTM architecture to store the contextual representation of the input. This memory network has a set of key and value vectors in the hidden state to represent what is stored in the memory.
These vectors are used to estimate the intra-attention with the previously stored tokens in memory, as opposed to the self-attention mechanism used by Vaswani et al. (2017), where interactions across the whole input sequence are estimated. One of the first uses of the self-attention mechanism in NLP was by Parikh et al. (2016). Since then, self-attention mechanisms have become an integral part of sequence modeling, allowing the network to model dependencies between input and output sequences irrespective of their distances. A self-attention layer calculates a single-shot interaction between all pairs of words in a sequence.

### 1.1 Self-attention-based Transformer architecture

In 2017, Vaswani et al. (2017) proposed a novel architecture for NLP, the _Transformer_. It is predominantly a self-attention network and has been driving the waves of advances in AI. A Transformer architecture (Figure 1.2) includes a stack of encoder and decoder blocks. Each encoder block is identical and contains a self-attention layer and a feedforward layer. The encoder's input flows through the self-attention layer, helping the encoder look at other words while encoding the current word. Its output is then fed to the feedforward layer. The same feedforward network is applied independently to each word. A decoder block contains an encoder-decoder attention module in addition to the self-attention layer and feedforward layer, helping the decoder focus on relevant parts of the input sequence.

In the NLP task, each word of the input sequence is first converted into an embedding vector. These are provided as input to the encoder block, passing through a self-attention layer and feedforward network. The obtained output vector is fed to the next encoder block. Using the self-attention layer, the Transformer models the relationship of the current word with other relevant words of the sequence. In a self-attention layer, the input vector is transformed into key (\(K\)), query (\(Q\)) and value (\(V\)) vectors of dimension \(d_{q}=d_{k}=d_{v}=512\) using a learnable matrix transformation. First, the score (\(S\)) is calculated to determine the amount of focus to place on the other words in the sequence while encoding the current word. This score is calculated using the dot product between the query and key vectors (\(S=Q\cdot K^{T}\)). It is normalized (\(S=S/\sqrt{d_{k}}\)) to stabilize the gradients and later, using a \(softmax\), converted into probabilities. The magnitude of the probability score shows the relevance of the current word to the other words in the sequence. This score is multiplied by the value vector (\(V\)) so that relevant words are given additional focus while irrelevant words are neglected in the subsequent layers.

\[Attention\,(Q,K,V)=softmax\Big{(}\frac{Q\cdot K^{T}}{\sqrt{d_{k}}}\Big{)}\cdot V\]

The self-attention mechanism proposed by Vaswani et al. (2017) has an additional feature called _multi-head_ attention (MHA). It helps to improve performance in two ways: by augmenting the network's ability to focus on multiple positions and by giving distinct representational subspaces to each word. For example, if there are eight heads, eight sets of \(K\), \(Q\) and \(V\) matrices exist, each representing a unique representational subspace. The head outputs are concatenated before passing through the feedforward network (Figure 1.3). The key characteristic of NLP tasks is the order of the words in a sequence. However, the operations we have discussed so far are permutation invariant.
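To make the above concrete, here is a minimal NumPy sketch of single-head scaled dot-product attention and its multi-head extension. The dimensions are illustrative, random matrices stand in for the learned projections, and the final output projection of the multi-head block is omitted:

```python
# Toy sketch of scaled dot-product attention and multi-head attention.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (sequence_length, d_k) arrays from learned projections.
    d_k = K.shape[-1]
    S = Q @ K.T / np.sqrt(d_k)                 # pairwise scores, normalized
    S = S - S.max(axis=-1, keepdims=True)      # numerical stability
    A = np.exp(S) / np.exp(S).sum(axis=-1, keepdims=True)  # row-wise softmax
    return A @ V                               # weighted sum of value vectors

n, d_model, h = 6, 512, 8                      # sequence length, model dim, heads
d_k = d_model // h
x = np.random.randn(n, d_model)                # input word embeddings

# Multi-head attention: one (W_q, W_k, W_v) triple per head; the head
# outputs are concatenated before the feedforward network.
heads = []
for _ in range(h):
    W_q, W_k, W_v = (np.random.randn(d_model, d_k) * 0.02 for _ in range(3))
    heads.append(scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v))
out = np.concatenate(heads, axis=-1)
print(out.shape)                               # (6, 512)
```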
To address this permutation invariance, positional embedding vectors are added to each input embedding vector. These vectors help the model estimate the position of each word in a sequence in the projection space (i.e., \(K/Q/V\)).

Figure 1.2: Transformer architecture proposed by Vaswani et al. (2017) (source)

Positional encoding in the Transformer is an active and vibrant research area. The vanilla Transformer (Vaswani et al., 2017) uses absolute positional encoding; however, more recent work (Devlin et al., 2018; Dosovitskiy et al., 2021) prefers a learned (Gehring et al., 2017) or relative positional encoding (Shaw et al., 2018). An absolute coordinate system does not encode translational equivariance, while a relative geometry can. Ramachandran et al. (2019) and Bello et al. (2019) studied different positional encoding techniques and established that relative positional encoding offers the best results while providing additional advantages, like encoding for unseen sequence lengths (refer to Wu et al. (2021) for a review). An overview of different positional encoding strategies used in NLP is discussed by Dufter et al. (2021).

The residual connection around the self-attention layer and the feedforward network is an essential module in an encoder block. It is followed by a layer normalization step (Baevski and Auli, 2018; Wang et al., 2019; Dosovitskiy et al., 2021). A residual connection is added to each sub-layer in the encoder (and decoder) to strengthen the flow of information and achieve higher performance.

At the end of the encoding steps, decoding begins.

Figure 1.3: Illustration of Multi-head attention mechanism in a Transformer network (source)

The decoder uses the key (\(K\)) and value (\(V\)) vectors from the top-most encoder block for its encoder-decoder attention layer. This helps it focus on the appropriate locations of the input sequence. At each step, the decoder provides an element of the final output sequence, which is fed back to the decoder at the next time step. This process continues until the end of the sequence. An independent set of positional encodings is applied on the decoder side. The encoder-decoder attention module is similar to the multi-head self-attention mechanism described earlier. The only difference is that the key \(K\) and value \(V\) vectors are obtained from the top-most encoder block, while the query vector \(Q\) is derived from the previous self-attention layer of the decoder. Unlike in the encoder, self-attention layers in the decoder are only allowed to access previously produced outputs, by masking the future words of a sequence. Masking future positions is done to prevent the decoder from cheating during the training phase; otherwise, it would already know what is coming next. The linear layer at the end of the decoder block is a fully connected neural network. It projects the vector obtained from the decoder layers into a logit vector. This logit vector spans the complete vocabulary of the target language. A softmax converts these logits into probabilities over the available vocabulary, from which the predicted word is selected.
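The masking step described above is easy to sketch: the "future" entries of the score matrix are set to \(-\infty\) before the softmax, so each position receives zero weight from the positions after it. This is a toy illustration of the idea, not the full decoder:

```python
# Causal (look-ahead) masking of attention scores, illustrated with NumPy.
import numpy as np

n, d_k = 6, 64
Q, K = np.random.randn(n, d_k), np.random.randn(n, d_k)
S = Q @ K.T / np.sqrt(d_k)                        # raw attention scores
mask = np.triu(np.ones((n, n), dtype=bool), k=1)  # True above the diagonal
S = np.where(mask, -np.inf, S)                    # hide future positions
A = np.exp(S - S.max(axis=-1, keepdims=True))     # softmax, numerically stable
A = A / A.sum(axis=-1, keepdims=True)
print(np.round(A, 2))   # row i has zero weight on all columns j > i
```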
\begin{table}
\begin{tabular}{l c c c}
\hline
**Layer Type** & **Complexity per Layer** & **Sequential operations** & **Maximum path length** \\
\hline
Self-Attention & \(\mathcal{O}(n^{2}\cdot d)\) & \(\mathcal{O}(1)\) & \(\mathcal{O}(1)\) \\
Recurrent & \(\mathcal{O}(n\cdot d^{2})\) & \(\mathcal{O}(n)\) & \(\mathcal{O}(n)\) \\
Convolutional & \(\mathcal{O}(k\cdot n\cdot d^{2})\) & \(\mathcal{O}(1)\) & \(\mathcal{O}(\log_{k}(n))\) \\
\hline
\end{tabular}
\end{table}
Table 1.1: Complexity comparison of different layer types for a sequence of length \(n\), kernel size \(k\) and dimensionality \(d\) (Vaswani et al., 2017)

### 1.2 Self-attention in vision tasks

**Why self-attention for vision?** In a self-attention mechanism, each word of a sequence is correlated with all the others, and thus contains information about the rest of the sequence; the effective receptive field grows to the length of the sequence. In some sense, images are no different from NLP sequences. Computer vision can take inspiration from the NLP domain to model long-range interactions between pixels, with the added benefit that multi-head attention helps parallelize these interactions. With multi-head attention, different heads can focus on modeling different relations between pixels. For example, in a visual reasoning task whose objective is to count the number of pairs of shapes in an image, one head can focus on finding a pair while another focuses on counting them. This helps the network model self-similarity within an image. Images such as natural scenes and paintings display a great amount of self-similarity. Such non-local self-similarity was explored earlier for applications such as texture synthesis [10], object detection and segmentation [20], bilateral filtering [14] and image classification [15]. Hereafter, the main focus of this thesis will be computer vision.

**Self-attention with CNN** In a computer vision task, the resolution of the images can reach around 1000\(\times\)1000 px. Applying a self-attention mechanism to all these pixels (\(10^{6}\) in number) is computationally expensive because of the quadratic complexity in the length of the sequence. Convolutional layers, on the other hand, do not have this bottleneck. However, they struggle to capture long-range interactions because their receptive fields grow only slowly with depth. To address this problem, there are predominantly two approaches. The first is to reduce the cost of the self-attention operation to a linear scale. Along this line of work, Ramachandran et al. (2019) proposed a pure stand-alone attention model for vision tasks by replacing the convolution operations with self-attention operations; nonetheless, the self-attention operation used in this approach is local. Another linear attention variant, Halo [21], uses block-wise local attention to improve speed and accuracy. The second approach is to build hybrid CNN-Transformer architectures, where convolution operations are used to encode the input image and attention is applied to the encoded features. Srinivas et al. (2021) explored a hybrid combination of CNNs and multi-head self-attention (MHSA) models and showed that replacing the \(3\times 3\) kernel convolutional layer in the bottleneck blocks of ResNet (He et al., 2016) with MHSA layers improves several CNN baselines.
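The bottleneck substitution just described can be sketched as follows; this is a hedged PyTorch illustration in the spirit of Srinivas et al. (2021), not the authors' code, and the module name, sizes, and the omission of the relative-position term are our simplifying assumptions.

```python
import torch
import torch.nn as nn

class BottleneckMHSA(nn.Module):
    """ResNet-style bottleneck where the 3x3 convolution is replaced by
    multi-head self-attention over the flattened spatial grid:
    1x1 conv (reduce) -> MHSA -> 1x1 conv (expand), plus a residual."""
    def __init__(self, c_in, c_mid, n_heads=4):
        super().__init__()
        self.reduce = nn.Conv2d(c_in, c_mid, kernel_size=1)
        self.attn = nn.MultiheadAttention(c_mid, n_heads, batch_first=True)
        self.expand = nn.Conv2d(c_mid, c_in, kernel_size=1)
        self.norm = nn.BatchNorm2d(c_in)

    def forward(self, x):                          # x: (B, C, H, W)
        z = self.reduce(x)
        b, c, h, w = z.shape
        tokens = z.flatten(2).transpose(1, 2)      # (B, H*W, c_mid): a token per pixel
        z, _ = self.attn(tokens, tokens, tokens)   # global pairwise interactions
        z = z.transpose(1, 2).reshape(b, c, h, w)
        return torch.relu(self.norm(x + self.expand(z)))

block = BottleneckMHSA(c_in=1024, c_mid=256)
y = block(torch.randn(2, 1024, 14, 14))            # a late, low-resolution stage
```

Because the substitution is made only in the late, low-resolution stages of the backbone, the quadratic cost in the number of tokens (here \(14\times 14=196\)) remains manageable.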
Interestingly, DETR (Carion et al., 2020) showed that concatenating a Transformer model at the end of the feature-extraction network is helpful for tasks like detection, localization, and segmentation. There are three broad categories of research incorporating the self-attention mechanism with CNNs, which are as follows:

**Inserting a few attention modules between residual blocks**: Along this line of work, Wang et al. (2018) and Chen et al. (2018) proposed a non-local block similar to Ramachandran et al. (2019) and used it for video-based applications. In these networks, features are gathered and propagated in a manner motivated by the squeeze-and-excitation network (Hu et al., 2018). As mentioned earlier, these methods only consider the spatial dimension when calculating non-local interactions, so Yue et al. (2018) added a correlation factor between the channels to improve model effectiveness. Similarly, Shen et al. (2021) proposed a method to bring the quadratic complexity of the self-attention mechanism down to a linear scale. We demonstrate a unique way to incorporate self-attention with a feedforward network in Chapter 3, where the intermediate features of the network are passed through a self-attention layer to find global associations. This attention is applied directly over the feature space, in contrast to previously used methods that squeeze the feature vector dimensions to save computation.

**Inserting attention modules at the end**: Usually, such models have a convolutional front-end acting as a feature-extraction module for a self-attention back-end. These models are used for tasks like object detection and semantic segmentation. Huang et al. (2019) designed criss-cross attention, which recurrently learns full-image dependencies for semantic segmentation tasks using dot-product attention. Moving away from this trend of using self-attention operations, Carion et al. (2020) proposed the DETR architecture by placing a Transformer model as the back-end.

**Replacing convolution layers by self-attention layers**: The self-attention mechanisms used in this line of research are primarily local in nature, to decrease the computational demand associated with the sequence length, which in an image is directly proportional to the total pixel count. Bello et al. (2019) made a unique attempt to augment the feature maps of convolutional layers with self-attention modules: feature maps obtained with the help of the self-attention module are concatenated with the feature maps of the CNN. They discovered that replacing all of the CNN's feature maps with feature maps from self-attention layers degrades the system's performance. Contrary to this finding, Ramachandran et al. (2019) proposed an architecture replacing all the convolution layers with local self-attention layers and achieved better performance than a fully convolutional network on the image classification task.

In addition to these categories of research, where the primary focus is on computer vision applications, cognitive studies also explore self-attention mechanisms. In one of the first such studies, Whittington et al. (2022) related neural representations of the hippocampal formation to the Transformer model. They established this correspondence with the help of the Tolman-Eichenbaum Machine (Whittington et al., 2020), a model of hippocampal formation.
This work showed that when recurrent positional encodings are used in a Transformer, they replicate spatial representations of hippocampal formations such as place cells and grid cells. From an attentional point of view, we study the role of a self-attention layer of the Transformer model in understanding visual reasoning tasks in Chapter 3. This layer is used as a feature-based or spatial attention layer. Such a multi-head self-attention layer is significantly different from other existing self-attention models, where the span of attention of the dot-product mechanism is local: there, we use a self-attention mechanism that can be applied globally over a feedforward network's spatial or feature space. This method gives the network higher representational power because of its ability to use multi-head attention. We also built a cognitive architecture, inspired by the active vision literature on the shifting of the spotlight of attention, in Chapter 4. This attention routing is implemented with a controller module consisting of an attention module and an LSTM layer, which generates a query to guide the shifting. More studies in the NLP domain focus on relating language models to brain activations; a similar trend is yet to be seen in the computer vision domain. These developments exploring self-attention mechanisms, fueled by the evolution of the NLP domain, propelled the field toward fully self-attention-based architectures for vision tasks.

### 1.3 Transformer-based vision architecture

The first fully self-attention-based architecture for vision, known as the Vision Transformer (ViT), was presented by [14] (Figure 1.4). In this architecture, an input image is divided into a sequence of image patches, called visual tokens, which are transformed before being passed to the network. The core idea is to treat each pixel as a token and pass it to the Transformer network; however, because the attention cost scales quadratically with the number of tokens, patches of 16\(\times\)16 pixels are used instead. Each patch is flattened and linearly projected to a vector of the desired dimension. As the network is agnostic to the positions of these patches w.r.t. the input image, position embeddings are added to learn the 2D structure; ViT learns this encoded structural information while training. A learnable class embedding token, inspired by [14], is also added at the beginning of the sequence and is learned along with the other patches while training the network. This learnable token eventually helps to predict the classification label with the help of a multi-layer perceptron (MLP) head.

Figure 1.4: Vision Transformer architecture [14]

When ViT is trained on a mid-sized dataset like ImageNet [13], outcomes are not impressive because of its lack of inductive biases such as translational equivariance and locality. ViT experiences difficulty learning image-specific inductive biases the way a CNN does, as the model never sees the complete 2D image during training but only a sequence of transformed patches. Such CNN-like biases are compensated for by training the model on massive databases like JFT-300M and fine-tuning it for downstream tasks. ViT learns the spatial relationships from scratch, which raises its demand for extra training data and longer training time.
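The tokenization step just described can be made concrete with a short sketch; this is a minimal PyTorch illustration using standard ViT-Base sizes and our own naming, not the reference implementation.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split the image into 16x16 patches, flatten and linearly project each,
    prepend a learnable class token, and add position embeddings."""
    def __init__(self, img_size=224, patch=16, in_ch=3, dim=768):
        super().__init__()
        n_patches = (img_size // patch) ** 2        # 196 patches for 224/16
        # A strided convolution is equivalent to flattening and projecting
        # each patch independently.
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))

    def forward(self, x):                           # x: (B, 3, H, W)
        tokens = self.proj(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        return torch.cat([cls, tokens], dim=1) + self.pos_embed

emb = PatchEmbedding()
seq = emb(torch.randn(2, 3, 224, 224))
print(seq.shape)   # torch.Size([2, 197, 768]): 196 visual tokens + 1 class token
```

The resulting sequence is what the stack of Transformer encoder blocks consumes; the class token's final state feeds the MLP head.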
Touvron et al. (2021) proposed DeiT, alleviating ViT's pre-training-related bottleneck using techniques like a teacher-student distillation approach and strong augmentation methods. DeiT, when trained on ImageNet with these methods, surpasses the performance of the ViT model.

Vision transformers are front-runners in capturing long-range dependencies in an image, yet they fail to account for local features the way CNNs do, and a wide gap is perceived between ViT and CNN learnability. Wu et al. (2021), Guo et al. (2022), Yuan et al. (2021), Graham et al. (2021), Dai et al. (2021) and Peng et al. (2021) analysed the potential weaknesses of directly applying the Transformer model from the NLP domain and proposed combinations with convolutional networks. Wu et al. (2021) proposed the Convolutional vision Transformer (CvT), presenting a convolution-based patch projection of image tokens along with a hierarchical design. Another alternative, LocalViT (Li et al., 2021), proposed depthwise convolutions to capture local features. Meanwhile, LeViT (Graham et al., 2021) enhanced the inference speed of ViT by designing a multistage transformer architecture and downsampling the image using attention. Yet another network, proposed by Zhou et al. (2021), incorporates locality without convolutions with the help of enhanced local self-attention using Hadamard attention and a ghost head. Hadamard attention is more computation-friendly than dot-product attention, while ghost heads increase the channel capacity by combining attention maps. A striking network, ConViT, proposed by d'Ascoli et al. (2021), took a step further to incorporate convolutional biases into the Transformer architecture: d'Ascoli et al. (2021) initialized self-attention layers as soft convolutions with the help of Gated Positional Self-Attention (GPSA). This self-attention block is characterized by a locality strength and a head-specific center of attention. The locality strength determines how much a head should focus around its center of attention, and for any given query patch, the head-specific center of attention decides which head attends to which position. With suitable parameter settings, ConViT can have ViT-like expressive power and can be trained in low-data regimes like CNNs. In a collaborative project with paleobotanists, we proposed the _conviformer_ (Vaishnav et al., 2022) to incorporate convolutional biases into any vision transformer with minimal architectural change. With the conviformer architecture, the network can also attend to higher-resolution images while remaining compatible with the base architecture used.

**Challenges** The Transformer architecture confronts a two-front challenge: it requires enormous amounts of data to learn the right inductive biases, and its computational cost grows with sequence length. Figure 1.5 compares the computational requirements of different Transformer and CNN models. An empirical study on the scalability of ViT was done by Zhai et al. (2022). They report that scaling up the training samples and the parameters of the model scales up its overall performance; nevertheless, performance plateaus quickly for smaller models as they cannot leverage additional data. This indicates that larger models have more scope to improve their representation-learning abilities. Training a Transformer model requires massive data to compensate for the lack of inductive biases, such as translation invariance, that CNNs enjoy.
The self-attention mechanisms in a Transformer learn such image-specific concepts over longer training times, thereby significantly increasing the compute requirements. Strong data augmentation techniques nowadays compensate for the vast dataset requirement.

Figure 1.5: Computational demands for training Transformers vs. CNNs. Compute needed to train a Transformer network has increased by 275 times in the last two years.

The Transformer architecture furthermore lacks explicit mechanisms to attend to local neighborhoods. A commonly accepted solution to this issue is to restrict the attention mechanism to a local area (Parmar et al., 2018) or to incorporate structural priors on attention such as sparsity (Child et al., 2019), which turns the dense attention matrix into a sparse one, limiting computation. Regardless, the approach has some limitations: sparse matrix multiplication operations are uncommon on hardware accelerators. An additional computational bottleneck is the dot-product operation in the self-attention layer. Existing techniques to handle this situation are half-precision training, gradient accumulation and gradient checkpointing. Tensor computations on modern hardware architectures are done effectively with 16-bit float tensors; sometimes higher precision is required while calculating the loss, which doubles the required memory, and this precision handling is carried out with the help of the _apex_ library1. On a fixed GPU/TPU machine, a large model may only fit a single-digit batch size, ultimately leading to unstable learning. Gradient accumulation simulates a larger batch size: the gradients of several mini-batches are summed, and the gradient-descent step is computed at the end. For even bigger models, the trade-off is to separate the model into different chunks and compute the gradients in a forward/backward pass for each chunk.

Footnote 1: [https://nvidia.github.io/apex/](https://nvidia.github.io/apex/)

Our proposed _conviformer_ (Vaishnav et al., 2022) addressed ViT's inability to process longer sequences, which restricted ViT to smaller-resolution images. In the conviformer, the input image is passed through a convolutional backbone, downsampling the image to 224\(\times\)224 (a commonly accepted input resolution). With the help of the convolutional front-end, the network introduces the inductive biases of a CNN. The feature vectors obtained by the CNN modules are then passed to the base vision transformer architecture. This technique preserves compatibility with the base model and provides a performance boost at insignificant additional computational cost.

Finally, training huge Transformer models has negatively impacted the environment. The compute cost and complexity associated with the Transformer are directly related to environmental factors such as \(CO_{2}\) emission (Strubell et al., 2020) and high energy consumption (You et al., 2020). There is also a cost associated with mining rare metals for manufacturing these hardware accelerators.

### 1.4 Original Contributions

Our contributions are as follows:

* We present a novel fine-grained taxonomy for the SVRT tasks by systematically analyzing the ability of feedforward neural networks.
* We first propose a self-attention-augmented feedforward network modeled as spatial or feature-based attention.
* Our analysis of attentional networks on SVRT tasks provides a granular computational account of visual reasoning and yields testable neuroscience predictions regarding the differential need for feature-based versus spatial attention depending on the type of visual reasoning problem.
* Next, we present a novel end-to-end trainable guided-attention module to learn to solve visual reasoning challenges in a data-efficient manner.
* We show that our guided-attention module learns to shift attention to task-relevant locations and gate relevant visual elements into a memory bank.
* We show that our architecture demonstrates zero-shot generalization ability and learns compositionally. GAMR is capable of learning efficiently by re-arranging previously learned elementary operations stored within a reasoning module.
* Our architecture sets new benchmarks on two visual reasoning challenges, SVRT [17] and ART [20].

The work presented in Chapter 2 and Chapter 3 is taken from the following publication:

* **Mohit Vaishnav**, Remi Cadene, Andrea Alamia, Drew Linsley, Rufin VanRullen, Thomas Serre; "Understanding the Computational Demands Underlying Visual Reasoning." _Neural Computation_ 2022; 34 (5): 1075-1099. doi: [https://doi.org/10.1162/neco_a_01485](https://doi.org/10.1162/neco_a_01485)

The work presented in Chapter 4 is taken from the following publication:

* **Mohit Vaishnav**, Thomas Serre. "GAMR: A Guided Attention model for (visual) Reasoning." _International Conference on Learning Representations (ICLR)_ 2023, [https://openreview.net/forum?id=iLMgk2IGNyv](https://openreview.net/forum?id=iLMgk2IGNyv)

## Chapter 2 Understanding the Computational Demands Underlying Visual Reasoning

### 2.1 Introduction

Humans can effortlessly reason about the visual world and provide rich and detailed descriptions of briefly presented real-life photographs (Fei-Fei et al., 2007), vastly outperforming the best current computer vision systems (Geman et al., 2015; Kreiman and Serre, 2020). For the most part, studies of visual reasoning in humans have sought to characterize the neural computations underlying the judgment of individual relations between objects, such as their spatial relations (e.g., Logan (1994a)) or whether they are the same or different (up to a transformation, e.g., Shepard and Metzler (1971)). It has also been shown that different visual reasoning problems have different attentional and working memory demands (Logan, 1994b; Moore et al., 1994; Rosielle et al., 2002; Holcombe et al., 2011; Van Der Ham et al., 2012; Kroger et al., 2002; Golde et al., 2010; Clevenger and Hummel, 2014; Brady and Alvarez, 2015). However, there is still little known about the neural computations that are engaged by different types of visual reasoning (see Ricci et al. (2021) for a recent review).

One benchmark that has been designed to probe abstract visual relational capabilities in humans and machines is the _Synthetic Visual Reasoning Test_ (SVRT) (Fleuret et al., 2011). The dataset consists of twenty-three hand-designed binary classification problems that test abstract relationships between objects posed on images of closed-contour shapes. Observers are never explicitly given the underlying rule for solving any given problem. Instead, they learn it while classifying positive and negative examples and receiving task feedback.
Examples from two representative tasks are depicted in Figure 2.1: observers must learn to recognize whether two shapes are the same or different (task _1_) or whether or not the smaller of the two shapes is near the boundary (task _2_). Additional abstract relationships tested in the challenge include "inside", "in between", "forming a square", "aligned in a row" or "finding symmetry" (see Figures A1 and A2 for examples). Most SVRT tasks are rapidly learned by human observers within twenty or fewer training examples (Fleuret et al., 2011) (see Table A.1; reproduced from the original study). On the other hand, modern deep neural network models require several orders of magnitude more training samples for some of the more challenging tasks (Ellis et al., 2015a; Kim et al., 2018; Messina et al., 2021; Stabinger et al., 2021, 2016a; Puebla and Bowers, 2021) (see Ricci et al. (2021) for review; see also Funke et al. (2021) for an alternative perspective).

Figure 2.1: Two SVRT sample tasks from a set of twenty-three in total. For each task, the leftmost and rightmost two examples illustrate the two categories to be classified. Representative samples for the complete set of twenty-three tasks can be found in Figures A1 and A2.

It is now clear that some SVRT tasks are more difficult to learn than others. For instance, tasks that involve spatial-relation (SR) judgments can be learned much more easily by deep convolutional neural networks (CNNs) than tasks that involve same-different (SD) judgments [Stabinger et al., 2016a, Kim et al., 2018, Yihe et al., 2019a]. In contrast, a very recent study [Puebla and Bowers, 2021] demonstrated that even when CNNs learn to detect whether objects are the same or different, they fail to generalize over small changes in appearance, meaning that they have only partially learned this abstract rule. The implication of the relative difficulty of learning SR versus SD tasks is that CNNs appear to need additional computations to solve SD tasks, beyond standard filtering, non-linear rectification, and pooling. Indeed, recent human electrophysiology work [Alamia et al., 2021a] has shown that SD tasks recruit cortical mechanisms associated with attention and working memory processes to a greater extent than SR tasks. Others have argued that SD tasks are central to human intelligence [14, 15].

Beyond this basic dichotomy of SR and SD tasks, little is known about the neural computations necessary to learn to solve SVRT tasks as efficiently as human observers. Here, we investigate the neural computations required for visual reasoning. In our experiment, we extend prior studies on the learnability of individual SVRT tasks by feedforward neural networks using a popular class of deep neural networks known as deep residual networks ("ResNets") [13]. We systematically analyze the ability of ResNets to learn all twenty-three SVRT tasks as a function of their expressiveness, parameterized by processing depth (number of layers), and their efficiency in learning a particular task. Through these experiments, we found that most of the performance variance in the space of SVRT tasks could be accounted for by two principal components, which reflected both the type of task (same-different vs. spatial-relation judgments) and the number of relations used to compose the underlying rules.
### 2.2 Systematic analysis of SVRT tasks' learnability

All experiments were carried out with the _Synthetic Visual Reasoning Test_ (SVRT) dataset using code provided by the authors to generate images of dimension _128 \(\times\) 128_ pixels (see Fleuret et al. [2011] for details). All images were normalized and resized to 256\(\times\)256 pixels for training and testing the models. No image augmentations were used during training.

In our first experiment, we wanted to measure how easy or difficult each task is for ResNets to learn. We did this by recording the SVRT performance of multiple ResNets, each with a different number of layers and trained with different numbers of examples. By varying model complexity and the number of samples provided to a model to learn any given task, we obtained complementary measures of the learnability of every SVRT task for ResNet architectures. In total, we trained 18-, 50-, and 152-layer ResNets separately on each of the SVRT's twenty-three tasks. Each of these models was trained with 0.5k, 1k, 5k, 10k, 15k, and 120k class-balanced samples. We also generated two unique sets of 40k positive and negative samples for each task: one was used as a validation set to select a stopping criterion for training the networks (if validation accuracy reaches 100%) and one was used as a test set to report model accuracy. In addition, we used three independent random initializations of the training weights for each configuration of architecture/task and selected the best model using the validation set. Models were trained for _100_ epochs using the \(Adam\) optimizer (Kingma and Ba, 2014) with a training schedule (an initial learning rate of 1\(e\)-3, changed to 1\(e\)-4 from the \(70^{th}\) epoch onward). As a control, because these tasks are quite different from each other, we also tested two additional initial learning rates (_1e-4, 1e-5_).

Consistent with prior work (Kim et al., 2018; Stabinger et al., 2016; Yihe et al., 2019), we found that some SVRT tasks are much easier for ResNets to learn than others (Figure 2.2). For instance, a ResNet50 needs only _500_ examples to perform well on tasks _2, 3, 4, 8, 10, 11, 18_, but the same network needs _120k_ samples to perform well on task _21_ (see Figures A1 and A2 for examples of these tasks). Similarly, with _500_ training examples, tasks _2, 3, 4 & 11_ can be learned well with only 18 layers, while tasks _9, 12, 15 & 23_ require as many as 152 layers. A key assumption of our work is that these differences in training set sizes and depth requirements between different SVRT tasks reflect different computational strategies that need to be discovered by the neural networks during training for different tasks. Our next goal is to characterize what these computational strategies are.

### 2.3 An SVRT taxonomy

To better understand the computational strategies needed to solve the SVRT, we analyzed ResNet performance on the tasks with a multi-variate clustering analysis. For each individual task, we created an \(N\)-dimensional vector by concatenating the test accuracy of all ResNet architectures (\(N=3\) depths \(\times\) 5 training set sizes = 15), which served as a signature of each task's computational requirements. We then passed a matrix of these vectors to an agglomerative hierarchical clustering analysis (Figure 2.3) using Ward's method. Our clustering analysis revealed a novel taxonomy for the SVRT.
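In code, this analysis amounts to only a few lines; the sketch below is illustrative (placeholder accuracies stand in for the real 23 \(\times\) 15 matrix, the variable names are ours, and plotting the dendrogram requires matplotlib).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

# acc[t] concatenates the test accuracies of the 15 ResNet configurations
# (3 depths x 5 training set sizes) for task t: a 23 x 15 signature matrix.
rng = np.random.default_rng(0)
acc = rng.uniform(0.5, 1.0, size=(23, 15))        # placeholder accuracies

Z = linkage(acc, method="ward")                   # agglomerative, Ward's method
labels = fcluster(Z, t=4, criterion="maxclust")   # e.g., the four sub-clusters
dendrogram(Z, labels=[f"task {i + 1}" for i in range(23)])
```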
At the coarsest level, it recapitulated the dichotomy between _same-different_ (SD; green branches) and _spatial-relation_ (SR; brown branches) categorization tasks originally identified by Kim et al. (2018) using shallow CNNs. Interestingly, two of the tasks which were classified as SR by Kim et al. (2018) (tasks _6 & 17_) were assigned to the SD cluster in our analysis. We examined the descriptions of these two tasks as given in Fleuret et al. (2011) (see also Figures A1 and A2) and found that these two tasks involve both SR and SD: they ask observers to tell whether shapes are the same or different and to judge the distance between the shapes. Specifically, task _6_ involves two pairs of identical shapes, with the distance between the two identical shapes being the same in one category vs. not in the other. Similarly, in task _17_, three of the four shapes are identical, and their distance from the non-identical one is the same in one category vs. different in the other. Thus, our data-driven dichotomization of SR vs. SD refines the original proposal of Kim et al. (2018). This could be due to our use of ResNets (as opposed to vanilla CNNs), deeper networks, and a greater variety of training set sizes (including much smaller training set sizes than those used by Kim et al. (2018)). The analysis by Fleuret et al. (2011) also revealed that several SD tasks (_6, 16, 17, 21_) are particularly challenging for human observers.

Figure 2.2: Test accuracy for each of the twenty-three SVRT tasks as a function of the number of training samples for ResNets with depths 18, 50 and 152, respectively. The color scheme reflects the identified taxonomy of SVRT tasks (see Figure 2.3 and text for details).

Our clustering analysis also revealed a finer organization than the main SR vs. SD dichotomy. The SR cluster could be further subdivided into two sub-clusters. The \(SR_{2}\) (dark-brown-colored) branch in Figure 2.3 captures tasks that involve relatively simple and basic relation rules, such as shapes making close contact [3, 11] or being close to one another [2], one shape being inside the other [4], or the shapes being arranged to form a symmetric pattern [8, 10, 18].

Figure 2.3: Dendrogram derived from an N-dimensional hierarchical clustering analysis on the test accuracy of N=15 ResNets (depths 18/50/152) trained to solve each task over a range of training set sizes.

In contrast, tasks that fall in the \(SR_{1}\) (light-brown-colored) branch involve the composition of more than two rules, such as comparing the size of multiple shapes to identify a subgroup before identifying the relationship between the members of the sub-groups. This includes tasks such as finding a _larger_ object _in between_ two smaller ones [9]; three shapes, of which two are small and one large (_identification of large and small objects_), with the two smaller ones either inside or outside in one category vs. one _inside_ and the other _outside_ in the second [23]; or _two small_ shapes _equally close_ to a bigger one [12], etc. These tasks also tend to be comparatively harder to learn, requiring ResNets with greater processing depth and more training samples. For instance, tasks _9, 12, 15, 23_ were harder to learn than tasks _2, 4, 11_, requiring more samples and/or more depth to solve well (Figure 2.2). We found that task _15_ gets assigned to this latter sub-cluster because the task requires finding whether four shapes in an image are identical vs. not.
One would expect this task to fall in the SD cluster, but we speculate that the deep networks are actually able to leverage a shortcut [Geirhos et al., 2020] by classifying the overall pattern as symmetric/square (when the four shapes are identical) vs. trapezoid (when the four shapes are different; see Figure A2), effectively turning an SD task into an SR task.

Our clustering analysis also reveals a further subdivision of the SD cluster. These tasks require recognizing shapes that are identical to at least one of the other shapes in the image. The first sub-cluster, \(SD_{2}\) (light-green branch), contains tasks that require the identification of simple task rules, like answering whether or not two shapes are identical (even if mirrored along the perpendicular bisector) (tasks _1, 20_; see Figure A1), determining if all the shapes in an image are the same [16, 22], or detecting if two pairs of identical shapes can be translated to become identical to each other [13]. Another set of tasks within this sub-cluster is defined by more complex rules that involve the composition of additional relational judgments. Sample tasks include identifying pairs/triplets of identical shapes and measuring their distance from the rest [6, 17], determining if an image consists of pairs of identical shapes [5], or detecting if one of the shapes is a scaled version of the other [19]. Finally, the second sub-cluster, \(SD_{1}\), shown in dark green, involves two tasks that require an understanding of shape transformations. One task asks observers to say if one of the shapes is a scaled, translated, or rotated version of the other one [21]. The other task asks observers to judge whether an image contains two groups of three identical shapes or three pairs of two identical shapes [7].

To summarize this first set of experiments, we have systematically evaluated the ability of ResNets spanning multiple depths to solve each of the twenty-three SVRT tasks for different training set sizes. This allowed us to represent SVRT tasks according to their learnability by ResNets of varying depth. By clustering these representations, we extracted a novel SVRT taxonomy that both recapitulated an already described SD-SR dichotomy [Kim et al., 2018] and also revealed a more granular task structure corresponding to the number of rules used to form each task. Tasks with more rules are harder for ResNets to learn. Our taxonomy also reveals an organization of tasks where the easier \(SR_{1}\) and \(SR_{2}\) sub-clusters fall closer to each other than the harder \(SD_{1}\) and \(SD_{2}\) sub-clusters.

### 2.4 Conclusion

The goal of the present study was to shed light on the computational mechanisms underlying visual reasoning using the Synthetic Visual Reasoning Test (SVRT) [Fleuret et al., 2011]. There are twenty-three binary classification problems in this challenge, which include a variety of same-different and spatial reasoning tasks. In our experiment, we systematically evaluated the ability of a battery of \(N=15\) deep convolutional neural networks (ResNets), varying in depth and trained using different training set sizes, to solve each of the SVRT problems. We found a range of accuracies across all twenty-three tasks: some tasks were easily learned by shallower networks with relatively small training sets, while other tasks were hardly solved even by much deeper networks given orders of magnitude more training examples.
Under the assumption that the computational complexity of individual tasks can be well characterized by the pattern of test accuracy across these \(N=15\) neural networks, we formed N-dimensional accuracy vectors for each task and ran a hierarchical clustering algorithm. The resulting analysis suggests a taxonomy of visual reasoning tasks: beyond two primary clusters corresponding to same-different (SD) vs. spatial relation (SR) judgments, we also identified a finer organization with sub-clusters reflecting the nature and the number of relations used to compose the rules defining the task. Our results are consistent with previous work by Kim et al. (2018), who first identified a dichotomy between SD and SR tasks. Our results also extend prior work (Fleuret et al., 2011; Kim et al., 2018; Yihe et al., 2019) in proposing a finer-level taxonomy of visual reasoning tasks. That the accuracy of neural networks reflects the number of relationships used to define the underlying rules is expected, but it deserves closer examination. Kim et al. (2018) have previously suggested that SD tasks "strain" convolutional neural networks. That is, while it is possible to find a network architecture of sufficient depth (or number of units) that can solve a version of the task up to a number of stimulus configurations (e.g., by forcing all stimuli to be contained within a \(\Delta H\times\Delta W\) window), it is relatively easy to render the same task unlearnable by the same network past a certain number of stimulus configurations (e.g., by increasing the size of the window that contains all stimuli). It is as if these convolutional networks are capable of learning the task if the number of stimulus configurations remains below their memory capacity, and fail beyond that. It remains an open question whether non-convolutional alternatives to the CNNs tested here, such as the now-popular transformer networks (Dosovitskiy et al., 2021; Touvron et al., 2021; Tolstikhin et al., 2021), would learn to solve some of the harder SVRT tasks more efficiently. As an initial experiment, we attempted to train and test a Vision Transformer (ViT)1 (Dosovitskiy et al., 2021) constrained to have a similar number of parameters (21M) to the ResNet-50 used here. We were not able to get these architectures to do well on most of the tasks that are difficult for ResNets, even with 100k samples (as also shown in Messina et al. (2021)). It is worth noting that even 100k samples remain a relatively small dataset size by modern-day standards, since the ViT was trained from scratch.

Footnote 1: [https://github.com/facebookresearch/dino](https://github.com/facebookresearch/dino)

Multi-layer perceptrons and convolutional neural networks, including ResNets and other architectures, can be formally shown to be universal approximators under certain architectural constraints. That is, they can learn arbitrary mappings from images to class labels. Depending on the complexity of the mapping, one might need an increasing number of hidden units to allow for enough expressiveness of the network; but provided enough units/depth and a sufficient amount of training examples, deep CNNs can learn arbitrary visual reasoning tasks.
While we cannot make any strong claim for the specific ResNet architectures used in this study (currently, the proof is limited to a single layer without max pooling or batch normalization (Lin and Jegelka, 2018)), we have found empirically that all SVRT tasks could indeed be learned by networks of sufficient depth, provided a sufficient amount of training examples. However, deep CNNs typically lack many of the human cognitive functions, such as attention and working memory. Such functions are likely to provide a critical advantage for a learner in solving some of these tasks (Marcus, 2001). CNNs might have to rely instead on function approximation, which could lead to a less general "brute-force" solution. Given this, an open question is whether the clustering of SVRT tasks derived from our CNN-based analyses will indeed hold for human studies. At the same time, the prediction by Kim et al. (2018) using CNNs that SD tasks are harder than SR tasks, and hence that they may demand additional computations (through feedback processes) such as attention and/or working memory, was successfully validated experimentally by Alamia et al. (2021) using EEG. Additional evidence for the benefits of feedback mechanisms for visual reasoning was provided by Linsley et al. (2018), who showed that contour-tracing tasks that can be solved efficiently with a single layer of a recurrent CNN may require several orders of magnitude more processing stages in a non-recurrent CNN. This ultimately translates into much greater sample efficiency for recurrent CNNs on natural image segmentation tasks (Linsley et al., 2020). The closely related task of "insideness" was also studied by Villalobos et al. (2021), who demonstrated the inability of CNNs to learn a general solution for this class of problems.

Universal approximators with minimal inductive biases, such as multi-layer perceptrons, CNNs and other feedforward or non-attentive architectures, can learn to solve visual reasoning tasks, but they might need a very large number of training examples to properly fit. Hence, beyond simply measuring the accuracy of very deep nets in high-data regimes (such as when millions of training examples are available), systematically assessing the performance of neural nets of varying depths under different training regimes may provide critical information about the complexity of different visual reasoning tasks.

## Chapter 3 Role of self-attention in a computer vision architecture

Chapter outline: 3.1 Introduction; 3.2 Experiment 1: Self-attention with ResNet50; 3.3 Experiment 2: Feature vs. rule learning; 3.4 Conclusion.

### 3.1 Introduction

Humans continue to outperform modern AI systems in their ability to flexibly parse and understand complex visual relations. Prior cognitive neuroscience work suggests that attention plays a key role in humans' visual reasoning ability. In the realm of artificial intelligence, attention mechanisms have become essential components of cutting-edge machine learning algorithms. Inspired by principles observed in neuroscience, attention models in AI enable machines to focus on salient features or regions of interest within input data, allowing them to allocate computational resources effectively and improve performance on various tasks.
By selectively attending to relevant information, AI systems can extract meaningful patterns, make informed decisions, and exhibit more human-like intelligence. The synergy between attention research in neuroscience and AI has led to significant advancements in both fields. Neuroscientists can validate their theories by testing their predictions on AI models, while AI researchers can leverage findings from neuroscience to design more biologically plausible and efficient attention models. This bidirectional flow of knowledge and insights has the potential to revolutionize our understanding of attention, cognitive processes, and the development of intelligent systems. By bridging the gap between neuroscience and artificial intelligence, we can unlock new perspectives on attention, fostering a deeper understanding of how attention shapes our cognitive abilities and paving the way for more sophisticated and efficient intelligent systems.

Attention, a fundamental cognitive process, plays a crucial role in shaping our perception, memory, and decision-making. It allows us to selectively focus on relevant information while filtering out distractions, enabling efficient and adaptive behavior in complex environments. By employing selective attention, we give priority to information that is behaviorally relevant while disregarding surrounding stimulation. In neuroscience, the study of attention has provided valuable insights into the workings of the human brain. Neuroscientists have identified distinct neural networks and mechanisms that govern attentional processes, shedding light on how the brain filters and processes sensory inputs, allocates cognitive resources, and guides behavior. Understanding the neural basis of attention has not only deepened our understanding of human cognition but has also provided inspiration for developing attention models in AI.

Attention can be consciously directed towards spatial properties or towards non-spatial properties; the latter is known as feature-based attention. By selectively directing our attention, we possess the ability to intentionally focus on particular aspects of our environment. This may include directing our attention to a specific position in space (spatial attention) or highlighting a specific feature, such as a particular color (feature-based attention). When a location or feature is correctly indicated (valid cue; attentional focusing), it results in improved performance for the subsequent stimulus. Conversely, when the cue is incorrect (invalid cue), performance declines, as it necessitates reorienting attention to the unexpected target stimulus [1, 19, 20]. The impact of cueing, measured as the difference between valid and invalid trials, can be observed in the activity modulations of neurons in early visual areas [15, 16, 17]. This effect is further supported by increased activity modulations in early visual areas as revealed by functional magnetic resonance imaging (fMRI) [14, 15]. Numerous studies have provided evidence that both spatial and feature attention have a modulatory effect on neuronal responses, resulting in an improved signal-to-noise ratio during the encoding of the attended stimulus. Additionally, it has been reported that both types of attention lead to increased neuronal response magnitudes across multiple visual areas [11].
Moreover, both spatial and feature attention contribute to enhancing the representation of the attended stimulus by reducing neuronal response variability (often measured using the Fano factor) and pairwise noise correlation [14, 15]. While there are notable similarities between feature and spatial attention, several differences have been observed as well. One prominent distinction is that spatial attention is confined to a specific location within the retinotopic map [20], whereas feature attention impacts processing across the entire visual field [17]. Furthermore, the temporal dynamics of sensory neuron modulation differ between the two types of attention [13]. In order to investigate the underlying neural mechanism that may account for both the similarities and the differences between spatial and feature attention, [20] conducted a study involving trained monkeys performing a direction-change detection task. During the task, neuronal activity was recorded from the middle temporal (MT) cortex. By manipulating the direction of the Gabors and the attended location, the researchers created three distinct task variants aimed at measuring the neuronal modulation induced by normalization, spatial attention, and feature attention.

Attention plays a crucial role in addressing the binding problem, which involves integrating various features of a stimulus into a unified object representation. Extensive research in the cognitive and neuroscience fields underscores the significance of attention in this process. The Feature Integration Theory (FIT) proposed by Treisman and Gelade (1980) emphasizes how attention facilitates the binding of features, enabling the formation of coherent object representations. The work of Desimone et al. (1995) focuses on the neural mechanisms of selective visual attention, elucidating how attention contributes to feature binding and information integration within the visual system. Reynolds and Chelazzi (2004) explore attention's impact on visual processing, highlighting its role in feature binding and in coordinating neural activity across different brain regions. Treisman (1998) delves into the intricate relationship between feature binding, attention, and object perception. Additionally, Corbetta and Shulman (2002) investigate the control of goal-directed and stimulus-driven attention, shedding light on their implications for solving the binding problem. Collectively, these studies underscore the indispensable role of attention in integrating features and effectively addressing the binding problem through both cognitive and neural processes.

Furthermore, other works contribute valuable insights into object vision and the temporal dynamics of attention during visual search tasks. Tanaka (1996) focuses on the inferotemporal cortex (IT) and its role in object vision, discussing the neural mechanisms involved in recognizing complex object features. Riesenhuber and Poggio (1999) address the "binding problem" by examining cortical models and proposing distributed representations and feature-based attention as potential solutions, challenging the notion that cortical models are limited by this problem. Woodman and Luck (2003) investigate the temporal dynamics of attention during visual search tasks and propose a two-stage model involving serial deployment of attention to select target locations, followed by parallel processing within those locations.
Together, these works contribute to our understanding of object recognition, the integration of visual features, and the temporal aspects of attention during visual search tasks, complementing the broader literature on attention and the binding problem in cognitive and neuroscience research. Addressing the binding problem has been a prominent focus in cognitive and neuroscientific research, with computational approaches providing valuable insights. Hommel (1998) introduces the concept of event files, which propose the automatic integration of stimulus-response episodes in memory. This work presents evidence supporting the automatic binding of stimuli and responses, suggesting the creation of temporary associations to optimize processing efficiency. Lisman and Jensen (2013) contribute to the field by discussing the theta-gamma neural code, highlighting the significance of synchronized theta and gamma oscillations in neural communication and information processing. Their review emphasizes the role of the theta phase as a temporal framework for precise encoding and integration of information in various cognitive processes. Additionally, Verguts (2017) presents a computational model known as "binding by random bursts" to elucidate cognitive control mechanisms. This model posits that cognitive control emerges from dynamic interactions between low-level sensory processing and high-level control processes, where random bursts of neural activity act as a binding mechanism, coordinating information flow and facilitating flexible cognitive control. Lastly, Senoussi et al. (2022) investigate time-based binding as a solution and limitation for flexible cognition. They propose that temporal associations between events are crucial for cognitive processing and the binding of information over time. The discussion explores how time-based binding can enhance cognitive flexibility but also introduces constraints in rapidly changing contexts. Together, these studies contribute to our understanding of the binding problem and offer computational models that shed light on cognitive control and the role of time-based binding in flexible cognition. In the previous chapter, we discussed a benchmark used to evaluate the abilities of machines in solving visual reasoning tasks and compare them with humans. We did this by systematically assessing the ability of modern deep convolutional neural networks (CNNs) to learn to solve the synthetic visual reasoning test (SVRT) challenge, a collection of 23 visual reasoning problems. Our analysis revealed a novel taxonomy of visual reasoning tasks, which can be primarily explained by the type of relations (same-different (SD) versus spatial-relation (SR) judgments) and the number of relations used to compose the underlying rules. Consistent with the speculated role of attention in solving the binding problem when reasoning about objects (Egly et al., 1994; Roelfsema et al., 1998), prior work by Kim et al. (2018) has shown that combining CNNs with an oracle model of attention and feature binding (i.e., preprocessing images so that they are explicitly and readily organized into discrete object channels) renders SD tasks as easy to learn by CNNs as SR tasks. Here, we build on this work and introduce CNN extensions incorporating spatial or feature-based attention. 
In the first set of experiments, we show that these attention networks learn difficult SVRT tasks with fewer training examples than their non-attentive (CNN) counterparts, but that the different forms of attention help on different tasks. This raises the question: how do attention mechanisms help with learning different visual reasoning problems? There are at least two possible computational benefits: attention could improve model performance by simply increasing its capacity, or attention could help models learn the abstract rules governing object relationships more efficiently. To adjudicate between these two possibilities, we measured the sample efficiency of ResNets pre-trained on SVRT images so that they only had to learn the abstract rules for each SVRT task. We found that attention ResNets and ResNets pre-trained on the SVRT were similarly sample-efficient in learning new SVRT tasks, indicating that attention helps discover abstract rules instead of merely increasing model capacity.

### 3.2 Experiment 1: Self-attention with ResNet50

We sought to identify computational mechanisms that could help ResNets learn the more challenging SVRT tasks revealed by our novel taxonomy. Attention has classically been implicated in visual reasoning in primates and humans [14, 15]. Attentional processes can be broadly divided into _spatial_ (e.g., attending to all features in a particular image location) vs. _feature-based_ (e.g., attending to a particular shape or color at all spatial positions) [13]. The importance of attention for perceiving and reasoning about challenging visual stimuli has also been realized by the computer vision community. There are now a number of attention modules proposed to extend CNNs, including spatial (e.g., Sharma et al. [2015], Chen et al. [2015], Yang et al. [2016], Xu and Saenko [2015], Ren and Zemel [2016]), feature-based (e.g., Stollenga et al. [2014], Chen et al. [2017], Hu et al. [2018]) and hybrid (e.g., Linsley et al. [2018], Woo et al. [2018]) approaches. Here, we adapt the increasingly popular Transformer architecture [21] to implement both forms of attention. These networks, which were originally developed for natural language processing, are now pushing the state of the art in computer vision [14, 15, 16]. Recent work [14] has also shown the benefits of such architectures, and especially of attention mechanisms, for solving higher-level reasoning problems.

Transformers are neural network modules usually consisting of at least one "self-attention" module followed by a feedforward layer. Here, we introduced different versions of the self-attention module into ResNets to better understand the computational demands of each SVRT task. A Transformer's self-attention is applied to and derived from the module's input. By reconfiguring standard Transformer self-attention, we developed versions capable of allocating either spatial or feature-based attention over the input. Specifically, we created these different forms of attention by reshaping the convolutional feature map input to a Transformer. For spatial attention, we reshaped the \(\mathcal{Z}\in\mathcal{R}^{H\times W\times C}\) (where \(H\) is the _height_, \(W\) the _width_ and \(C\) the _number of feature channels_) feature maps to \(\mathcal{Z}\in\mathcal{R}^{C\times HW}\) so that the Transformer's self-attention was allocated over all spatial locations.
For feature-based attention, we reshaped the convolutional feature maps to \(\mathcal{Z}\in\mathcal{R}^{HW\times C}\), enforcing attention over all features instead of spatial locations.

**Spatial Attention Module (SAM)** Our first attention module takes a feature map \(X\in\mathcal{R}^{d_{C}\times d_{H}\times d_{W}}\) as input, where \(d_{C}\), \(d_{H}\), and \(d_{W}\) respectively refer to the number of channels, the height and the width of the map, and outputs a feature map \(Y\) of the same dimensions. We flatten the spatial dimensions to obtain \(X^{\prime}\in\mathcal{R}^{d_{C}\times d_{N}}\), where \(d_{N}=d_{H}\times d_{W}\), and we apply the original multi-head self-attention module from Vaswani et al. (2017) as follows. We first apply independent linear mappings of the input \(X^{\prime}\) to obtain three feature maps of dimensions \(\mathcal{R}^{d\times d_{N}}\) for each attention head, out of a total of \(n_{H}\) heads. For the \(i^{th}\) head, these maps are known as the query \(Q_{i}\), the key \(K_{i}\) and the value \(V_{i}\), and are obtained as:

\[Q_{i}=W_{i}^{Q}\cdot X^{\prime}\qquad K_{i}=W_{i}^{K}\cdot X^{\prime}\qquad V_{i}=W_{i}^{V}\cdot X^{\prime}\]

The mappings are parametrized by three matrices \(W_{i}^{Q}\), \(W_{i}^{K}\) and \(W_{i}^{V}\) of dimensions \(\mathcal{R}^{d\times d_{C}}\) for each head. The symbol \(\cdot\) denotes a matrix multiplication. Then, we apply the scaled dot-product attention of Vaswani et al. (2017) to obtain \(n_{H}\) attention heads of dimensions \(\mathcal{R}^{d\times d_{N}}\):

\[H_{i}=SoftMax\left(\frac{Q_{i}\cdot K_{i}^{T}}{\sqrt{d}}\right)V_{i}\tag{3.1}\]

Afterwards, we concatenate all attention heads along the first dimension and apply a linear mapping to obtain \(Z\in\mathcal{R}^{d_{C}\times d_{N}}\):

\[Z=W^{O}\cdot Concat(H_{1},...,H_{n_{H}})\tag{3.2}\]

The mapping is parametrized by the matrix \(W^{O}\in\mathcal{R}^{d_{C}\times n_{H}d}\). As commonly done, we add a residual connection before applying a layer normalization [1]:

\[Y^{\prime}=LayerNorm(Z+X^{\prime})\tag{3.3}\]

Finally, we unflatten \(Y^{\prime}\) to obtain \(Y\in\mathcal{R}^{d_{C}\times d_{H}\times d_{W}}\). We obtain the best results with a representation space of 512 dimensions (\(d=512\)) and four attention heads (\(n_{H}=4\)).

**Feature-based Attention Module (FBAM)** Our second attention module is obtained simply by transposing the channel dimension with the spatial dimensions before applying the same transformations. In other words, we transpose the input \(X^{\prime}\) into \(\mathcal{R}^{d_{N}\times d_{C}}\) and transpose the output \(Y^{\prime}\) back into \(\mathcal{R}^{d_{C}\times d_{N}}\). While SAM models attention over the \(d_{H}\times d_{W}\) regions that compose the input feature map, FBAM models attention over the \(d_{C}\) feature channels. We obtain the best results with a representation space of 196 dimensions (\(d=196\)) and one attention head (\(n_{H}=1\)).

We added a single spatial or feature-based attention module after one of the four residual blocks in a ResNet-50, choosing the location where the addition of attention yielded the best validation accuracy across the SVRT tasks. Through this procedure, we inserted a spatial attention module after the second residual block and a feature-based attention module after the third residual block (Figure 3.1).
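Both modules can be sketched in a few lines; the following is a minimal PyTorch illustration of the description above (it adopts the conventional tokens-as-rows layout rather than the matrix convention of Equations 3.1-3.3, and the names are ours).

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Multi-head self-attention with a residual connection and LayerNorm.
    mode='spatial' attends over the d_H*d_W locations (SAM-like);
    mode='feature' attends over the d_C channels (FBAM-like)."""
    def __init__(self, d_model, n_heads, mode="spatial"):
        super().__init__()
        self.mode = mode
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        t = x.flatten(2)                        # (B, C, H*W)
        if self.mode == "spatial":
            t = t.transpose(1, 2)               # tokens = locations: (B, HW, C)
        out, _ = self.attn(t, t, t)
        t = self.norm(out + t)                  # residual + LayerNorm
        if self.mode == "spatial":
            t = t.transpose(1, 2)
        return t.reshape(b, c, h, w)

# SAM-like: after ResNet-50's second block (224x224 input -> 512 x 28 x 28).
sam = AttentionModule(d_model=512, n_heads=4, mode="spatial")
y = sam(torch.randn(2, 512, 28, 28))

# FBAM-like: after the third block (1024 x 14 x 14), so d = 14*14 = 196.
fbam = AttentionModule(d_model=196, n_heads=1, mode="feature")
z = fbam(torch.randn(2, 1024, 14, 14))
```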
To measure the effectiveness of different forms of attention for solving the SVRT, we compared the accuracy of three ResNet-50 models: one capable of spatial attention, one capable of feature-based attention, and one that had no attention mechanism ("vanilla") (Figure 3.2). Spatial attention consistently improved model accuracy on all tasks across all five training set sizes. The improvement in accuracy is particularly noticeable for the \(SD_{1}\) cluster: tasks in this sub-cluster are composed of two rules, which ResNets without attention struggled to learn, and attention helps ResNets learn these tasks more efficiently. The improvement is also evident for \(SD_{2}\) and \(SR_{1}\). However, the benefit of attention for \(SR_{2}\) is marginal, since ResNets without attention already perform well on these tasks. We find that feature-based attention leads to the largest improvements for \(SD_{1}\), especially when training on 5k or 10k examples (Figure 3.3). On the other hand, spatial attention leads to the largest improvements for \(SD_{2}\) and \(SR_{1}\); this improvement is pronounced when training on 500 or 1,000 examples. Taken together, the differential success of spatial versus feature-based attention reveals that their varying attentional demands can explain the task sub-clusters discovered in our data-driven taxonomy.

Figure 3.1: Location of the Transformer self-attention modules in our ResNet extensions.

Figure 3.2: Test accuracies for a baseline ResNet50 vs. the same architecture endowed with the two forms of attention, for each of the twenty-three SVRT tasks, when varying the number of training examples. A different axis scale is used for \(SR_{2}\) to improve visibility. These curves are constructed by joining task accuracy for five points representing dataset sizes.

To better understand how the ResNet-derived taxonomy found in Experiment 1 can be explained by the need for spatial and feature-based attention, we measured the relative improvement of each form of attention over the vanilla ResNet. For each attention model and task, we calculated the ratio of the test accuracies between the model and the vanilla ResNet50. We repeated this for every training dataset size, then fit a linear model to these ratios to calculate the slope across dataset sizes (see Figure 3.4 for representative examples). We repeated this procedure for all twenty-three tasks to produce two 23-dimensional vectors containing slopes for each model and every task.

We next used these slopes to understand the attentional demands of each SVRT task through a two-step procedure. First, we applied a principal component analysis (see Figure 3.5) to the vanilla ResNet performance feature vectors (\(N=15\)) derived from Experiment 1. Second, we correlated the principal components with the slope vectors from the two attention models. We restricted our analysis to the first two principal components, which captured \(\sim 93\%\) of the variance in the vanilla ResNet's performance (Figure 3.5).

Figure 3.3: Test accuracies for 50-layer ResNets with spatial attention (orange), feature-based attention (tan), or no attention (green). Each bar depicts performance after training from scratch on 10k samples.

This analysis revealed a dissociation between the two forms of attention: feature-based attention was most correlated with the first principal component, and spatial attention with the second principal component.
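The slope-and-correlation procedure just described can be summarized in a few lines of Python. The random arrays below stand in for the real accuracy tables, and fitting the ratios against log dataset size is our assumption, since the text does not specify the x-scale of the linear fit:

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
sizes = np.array([500, 1000, 5000, 10000, 15000])
acc_vanilla = rng.uniform(0.5, 1.0, (23, 5))        # placeholder accuracy tables
acc = {'spatial': np.clip(acc_vanilla + 0.05, 0, 1),
       'feature': np.clip(acc_vanilla + 0.03, 0, 1)}
vanilla_feats = rng.uniform(0.5, 1.0, (23, 15))     # Experiment 1 vectors

def attention_slopes(acc_attn, acc_base):
    """Per task, fit a line to the attention/vanilla accuracy ratios across
    dataset sizes and keep the slope (the quantity shown in Figure 3.4)."""
    ratios = acc_attn / acc_base                    # (23 tasks, 5 sizes)
    x = np.log10(sizes)                             # assumed x-scale
    return np.array([np.polyfit(x, r, 1)[0] for r in ratios])

# Two principal components of the 15-d vanilla vectors (~93% of variance),
# correlated with the per-task slope vector of each attention model.
pcs = PCA(n_components=2).fit_transform(vanilla_feats)
for name, a in acc.items():
    s = attention_slopes(a, acc_vanilla)
    for c in range(2):
        r, p = stats.pearsonr(pcs[:, c], s)
        print(f'{name} attention vs PC{c + 1}: r={r:+.3f} (p={p:.4f})')
```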
Additionally, along the first principal component we recover the broader dichotomy of these 23 tasks into \(SD\) and \(SR\) clusters, whereas the second principal component separates the tasks that responded best to spatial attention from tasks requiring either no attention or feature-based attention (dotted red lines along both axes in Figure 3.5). The corresponding Pearson coefficients \(r\) and \(p\) values are given in Table 3.1.

Figure 3.4: The benefit of attention in solving the SVRT is greatest in data-limited training regimes. The x-axis depicts the number of samples for training, and the y-axis depicts the ratio of the average performance of models with attention to models without attention: a ratio greater than 1 shows that attention helps, whereas a ratio lower than 1 shows that it hurts. This gives us five ratios per task and attention process, corresponding to each dataset size. We performed a linear fitting procedure on these points and calculated the corresponding slope, which characterizes the relative benefit of attention for that particular task as the number of available training examples increases. If the benefit of attention is most evident in lower training regimes, one would expect a relatively small slope; if it is most evident in higher training regimes, one would expect a large slope.

To summarize our results from Experiment 2, we have found that the task clusters derived from ResNet test accuracies computed over a range of depths and training set sizes can be explained in terms of attentional demands. We have shown that endowing these networks with attentional mechanisms helps them learn some of the most challenging problems with far fewer training examples. We also found that the relative improvements obtained over standard ResNets with feature-based and spatial attention are consistent with the taxonomy of visual reasoning tasks found in Experiment 1. More generally, our analysis shows how the relative need for feature vs. spatial attention accounts for a large fraction of the variance in the computational demands of these SVRT tasks.

Figure 3.5: Principal component analysis of the twenty-three tasks using the 15-dimensional feature vectors derived from Experiment 1, representing the test accuracy obtained for each task for different dataset sizes and ResNets of varying depths (18, 50 & 152). The dotted red lines represent four different bins into which these tasks can be clustered, as defined in Experiment 1 according to their learnability by ResNets.

### 3.3 Experiment 3: Feature vs. rule learning

The learnability of individual SVRT tasks reflects two components: the complexity of the task's visual features and, separately, the complexity of the rule needed to solve the task. To what extent are our estimates of learnability driven by either of these components? We tested this question by training a new set of ResNets without attention according to the procedure laid out in Experiment 1, but with different pre-training strategies. One ResNet was pre-trained to learn the visual statistics (but not the rules) of SVRT images, and another was pre-trained on ImageNet [a popular computer vision dataset containing natural object categories; Deng et al., 2009]. For pre-training on SVRT, we sampled 5,000 class-balanced images from each of the 23 tasks (5,000 \(\times\) 23 = 115,000 samples in total).
To ensure the networks did not learn any of the SVRT task rules, we shuffled images and binary class labels across all twenty-three problems while pre-training the network. We then trained models with binary cross-entropy to detect positive examples _without discriminating tasks_. Our assumption is that shuffling images and labels removes any semantic relation between individual images and SVRT rules. However, a network with sufficient capacity can still learn the corresponding mapping between arbitrary images and class labels (even though it cannot generalize it to novel samples). To learn this arbitrary mapping, the network has to be able to encode visual features; but by construction, it cannot learn the SVRT task rules. When training this model and the ImageNet-initialized model to solve individual SVRT tasks, we froze the weights of the convolutional layers and only fine-tuned the classification layers.

\begin{table} \begin{tabular}{c c c c c} \hline & \multicolumn{2}{c}{\(Spatial\)} & \multicolumn{2}{c}{\(Feature\)} \\ \hline & **r** & **p** & **r** & **p** \\ \hline \hline \(PC_{1}\) & 0.466 & 0.0249 & **0.649** & 0.0008 \\ \hline \(PC_{2}\) & **-0.652** & 0.0007 & -0.491 & 0.0174 \\ \hline \end{tabular} \end{table} Table 3.1: Pearson coefficient (\(r\)) and corresponding \(p\) values obtained by correlating the slope vectors of the spatial attention and the feature-based attention modules with the two principal components of Figure 3.5. See text for details.

Figure 3.6 shows a comparison between the different architectures in terms of their test accuracies on the sub-clusters discovered in Experiment 1. These results first confirm that the SVRT pre-training approach works, because it consistently outperforms pre-training on ImageNet (Figure B5) or training from scratch. Interestingly, for the \(SR_{2}\) sub-cluster, we found that the benefits of pre-training on SVRT shrink very quickly as the number of training examples grows. We interpret these results as reflecting the fact that generic visual features are sufficient for these tasks and that the rule can be learned very quickly (somewhere between 500 and 5,000 samples). For the \(SR_{1}\) sub-cluster, the benefits of starting from features learned on SVRT are somewhat more evident in low training regimes, but these advantages quickly vanish as more training examples become available (the tasks are learned by all architectures within 5,000 training samples). For \(SD_{1}\), while there appears to be a noteworthy advantage of pre-training on SVRT over ImageNet pre-training and training from scratch, the tasks never appear to be fully learned by any of the networks, even with 15,000 training examples. This demonstrates that the challenge of learning the rules associated with this sub-cluster goes beyond simply learning good visual representations. Finally, our results also show that the performance gap across all the architectures for \(SD_{2}\) vs. \(SD_{1}\) increases rapidly with more training examples, demonstrating that the abstract rules for \(SD_{2}\) tasks are learned more rapidly than those for \(SD_{1}\).

Finally, we carried out a similar analysis with the pre-trained network as in Experiment 2: we built test accuracy vectors for the SVRT pre-trained network trained using all five dataset sizes (.5k, 1k, 5k, 10k, 15k), searching over a range of learning rates (_1e-4, 1e-5, 1e-6_).
This led to a five-dimensional vector, which we normalized by dividing each entry by the corresponding test accuracy of a baseline ResNet50 trained from scratch. Hence, the normalized vector represents the improvement (ratio larger than 1) or reduction in accuracy (ratio smaller than 1) that results from pre-training on SVRT for that particular task and training set size. We then calculated the slope vector in \(\mathcal{R}^{23}\), which we correlated with the corresponding spatial and feature-based attention slope vectors from Experiment 2. We found that task improvements due to SVRT pre-training correlated more strongly with task improvements due to spatial (\(r=0.90\), \(p=4e-9\)) than feature-based attention (\(r=0.595\), \(p=0.002\)). This suggests that the observed improvements in accuracy derived from spatial attention are more consistent with learning better feature representations than those derived from feature-based attention.

Figure 3.6: Test accuracies for a baseline ResNet50 trained from scratch ("No initialization") vs. the same architecture pre-trained on an auxiliary task in order to learn visual representations that are already adapted to the SVRT stimuli, for different numbers of training examples. The format is the same as used in Figure 3.2. A different axis scale is used for \(SR_{2}\) to improve visibility. These curves are constructed by joining task accuracy for five points representing dataset sizes.

To summarize, in Experiment 3 we addressed the question of the learnability of SVRT features vs. rules. We found that using an auxiliary task to pre-train the networks on the SVRT stimuli, in order to learn visual representations beforehand, provides learning advantages compared to a network trained from scratch. We also found a noteworthy correlation between the test accuracy vector of a network pre-trained on SVRT visual statistics and that of a similar network endowed with spatial attention. This suggests that spatial attention helps discover the abstract rule more so than it helps improve the learning of good visual representations for the task.

### 3.4 Conclusion

Earlier, Kim et al. (2018) hypothesized that the straining exhibited by convolutional networks is due to their lack of attention mechanisms that would allow the explicit binding of image regions to mental objects. A similar point was made by Greff et al. (2020) in the context of contemporary neural networks' failure to carve sensory information into discrete chunks which can then be individually analyzed and compared (see also Tsotsos et al. (2007) for a similar point). Interestingly, this prediction was recently tested using human EEG by Alamia et al. (2021), who showed that the brain activity recorded during SD tasks is indeed compatible with greater attention and working memory demands than during SR tasks. At the same time, the fact that CNNs can learn SR tasks more efficiently than SD tasks does not necessarily mean that human participants can solve these tasks without attention. Indeed, Logan (1994) has shown that SR tasks such as judging insideness require attention under some circumstances.

To assess the role of attention in visual reasoning, we used Transformer modules to endow deep CNNs with spatial and feature-based attention. The relative improvements obtained by the CNNs with the two forms of attention varied across tasks: many tasks showed a larger improvement with spatial attention, and a smaller number benefited from feature-based attention.
Further, we found that the patterns of relative improvements accounted for much of the variance in the space of SVRT tasks derived in Experiment 1. Overall, the requirements for feature-based and spatial attention account well for the taxonomy of visual reasoning tasks identified in Experiment 1. Our computational analysis also led to testable predictions for human experiments by suggesting tasks that benefit from spatial attention (task _22_) or from feature-based attention (task _21_), tasks that benefit from either form of attention (task _19_), and tasks that do not benefit from attention (task _2_). Finally, our study has focused on the computational benefits of spatial and feature-based attention for visual reasoning; future work should consider the role of other forms of attention, including object-based attention [14], for visual reasoning.

In our final experiment, we studied the learnability of SVRT features vs. rules. We did this by pre-training the neural networks on auxiliary tasks in order to learn SVRT features before training them to learn the abstract rules associated with individual SVRT problems. Our pre-training methods led to networks that learn to solve the SVRT problems better than networks trained from scratch, as well as networks pre-trained to perform image categorization on the ImageNet dataset. We also found that such attention processes seem to contribute more to rule learning than to feature learning. For the \(SR_{1}\) sub-cluster, we find this type of pre-training to be advantageous in lower training regimes, but the benefits rapidly fade away in higher training regimes. In contrast, this pre-training does not allow the tasks from the \(SD_{1}\) sub-cluster to be learned even with 15k samples, suggesting that the key challenge with these tasks is not to discover good visual representations but rather to discover the rule. This points to the need for additional mechanisms beyond those implemented in ResNets, and it is also consistent with the improvements observed for these tasks with the addition of attention mechanisms.

In summary, our study compared the computational demands of different visual reasoning tasks. While our focus has been on understanding the computational benefits of attention and feature learning mechanisms, it is clear that additional mechanisms will be required to fully solve all SVRT tasks. These mechanisms are likely to include working memory, which is known to play a role in SD tasks [1]. Overall, this work illustrates the potential benefits of incorporating brain-like mechanisms in modern neural networks and provides a path forward to achieving human-level visual reasoning.

## Chapter 4: Role of self-attention in a cognitive architecture

### 4.1 Introduction

Abstract reasoning refers to our ability to analyze information and discover rules to solve arbitrary tasks, and it is fundamental to general intelligence in human and non-human animals [Gentner and Markman, 1997, Lovett and Forbus, 2017]. It is considered a critical component for the development of artificial intelligence (AI) systems and has rapidly started to gain attention. A growing body of literature suggests that current neural architectures exhibit significant limitations in their ability to solve relatively simple visual cognitive tasks in comparison to humans (see Ricci et al. [2021] for review).
Given the vast superiority of animals over state-of-the-art AI systems, it makes sense to turn to the brain sciences for inspiration, leveraging brain-like mechanisms to improve the ability of modern deep neural networks to solve complex visual reasoning tasks. Indeed, a recent human EEG study has shown that attention and memory processes are needed to solve same-different visual reasoning tasks [Alamia et al., 2021b]. This interplay between attention and memory was previously discussed in Buehner et al. [2006], Fougnie [2008], Cochrane et al. [2019], emphasizing that a model must learn to perform attention over memory in order to reason. It is thus not surprising that deep neural networks which lack an attention and/or memory system fail to robustly solve visual reasoning problems that involve such same-different judgments [Kim et al., 2018]. Recent computer vision work [Messina et al., 2021b], including our own work in Chapter 3, has provided further computational evidence for the benefits of attention mechanisms in solving a variety of visual reasoning tasks.

Interestingly, in both aforementioned studies, a Transformer module was used to implement a form of attention known as self-attention [Cheng et al., 2016, Parikh et al., 2016]. In such a static module, attention mechanisms are deployed in parallel across an entire visual scene. By contrast, modern cognitive theories of active vision postulate that the visual system explores the environment dynamically, via sequences of attention shifts, to select and route task-relevant information to memory. Psychophysics experiments [14] on overt visual attention have shown that eye movement patterns are driven according to task-dependent routines. Inspired by active vision theory, we describe a dynamic attention mechanism, which we call _guided attention_. Our proposed Guided Attention Module for (visual) Reasoning (GAMR) learns to shift attention dynamically, in a task-dependent manner, based on queries internally generated by an LSTM executive controller. Through extensive experiments on two visual reasoning challenges, the Synthetic Visual Reasoning Test (SVRT) by Fleuret et al. [2011] and the Abstract Reasoning Task (ART) by Webb et al. [2021], we demonstrate that our neural architecture is capable of learning complex compositions of relational rules in a data-efficient manner and performs better than other state-of-the-art neural architectures for visual reasoning. Using explainability methods, we further characterize the visual strategies leveraged by the model in order to solve representative reasoning tasks. We demonstrate that our model is compositional, in that it is able to generalize to novel tasks efficiently and learn novel visual routines by re-composing previously learned elementary operations. It also exhibits zero-shot generalization, translating knowledge across tasks that share similar abstract rules without the need for re-training.

### 4.2 Related Work

Multiple datasets have been used to assess the visual reasoning ability of neural networks. One of the first such challenges was the SVRT. The recently introduced Raven-style progressive matrices datasets, RAVEN [22] and PGM [Barrett et al., 2018], focus on learning a small set of unique rules (about seven) and choosing one of eight candidate answers. However, the RAVEN dataset was found to be seriously flawed, as neural architectures could solve its tasks by leveraging shortcuts [17, 23]; these were later removed in I-RAVEN [17].
Prior work on SVRT [14, 15, 16] has focused on the role of attention in solving some of its more challenging tasks. In SVRT, tasks that involve same-different (SD) judgments appear to be significantly harder for neural networks to learn than those involving spatial-relation (SR) judgments (Stabinger et al., 2016; Yihe et al., 2019; Kim et al., 2018) (see Ricci et al. (2021) for review). Motivated by neuroscience principles, Vaishnav et al. (2022) studied how the addition of feature-based and spatial attention mechanisms differentially affects the learnability of the tasks. These authors found that SVRT tasks could be further taxonomized according to their differential demands for these two types of attention. In another attempt to leverage a Transformer architecture to incorporate attention mechanisms for visual reasoning, Messina et al. (2021) proposed a recurrent extension of the classic Vision Transformer block (R-ViT). Spatial attention and feedback connections helped the Transformer learn visual relations better. The authors compared accuracy on four same-different SVRT tasks (tasks _1, 5, 20, 21_) to demonstrate the efficacy of their model.

With the introduction of the Transformer architecture, attention mechanisms started gaining popularity in computer vision. They can either complement (Bello et al., 2019; Vaishnav et al., 2022; d'Ascoli et al., 2021) or completely replace existing CNN architectures (Ramachandran et al., 2019; Touvron et al., 2021; Dosovitskiy et al., 2021). Augmenting convolutional architectures with attention lets them exploit the best of both worlds and train relatively faster; in contrast, stand-alone attention architectures take longer to develop inductive biases similar to those of CNNs. As initially introduced by Vaswani et al. (2017), a Transformer uses a self-attention layer, followed by a residual connection and layer normalization, and a linear projection layer to compute the association between input tokens. We use a similar system, except that in GAMR, instead of a self-attention module, a feature-based attention vector (an internally generated query) is obtained via an LSTM to guide the attention module toward the locations essential for the task; we thus call this mechanism _guided attention_. Since there can be more than one location the model must attend to, we also implemented a memory bank.

A model more closely aligned with the human visual system was proposed by Mnih et al. (2014): the Recurrent Attention Model (RAM). It learns a saccadic policy over visual images and is trained using reinforcement learning. The Mnih et al. system constitutes an example of overt attention; conversely, GAMR constitutes an example of a covert attention system and assumes a fixed acuity. We took inspiration for the memory bank from ESBN (Webb et al., 2021), where mechanisms for variable binding and indirection were introduced into an architecture for visual reasoning with the help of an external memory. Variable binding is the ability to bind two representations, and indirection is the mechanism involved in retrieving one representation to refer to the other. While ESBN was indeed a source of inspiration, we would like to emphasize that GAMR constitutes a substantial improvement over ESBN. First and foremost, ESBN lacks attention: it requires items/objects to be passed serially, one by one, and hence it cannot solve SVRT or other multi-object visual reasoning problems.
In a sense, the approach taken in ESBN is to assume an idealized front-end that uses hard attention to perfectly parse a scene into individual objects, which are then serially piped through the architecture. This is where our work makes a substantial contribution: we develop an attention front-end (soft rather than hard) that sequentially attends to relevant features and routes them into memory. We tested the template-matching behavior of the ESBN architecture by training it in the presence of Gaussian noise and spatial jittering, which led to chance-level performance. Here, we build on this work and describe an end-to-end trainable model that learns to individuate task-relevant scene elements and store their representations in memory, allowing the judgment of complex relations between objects. Finally, our relational mechanism is inspired by the work of Santoro et al. (2017), which introduced a plug-and-play module for computing relations between object-like representations in a network.

### 4.3 Proposed approach

Our model can be divided into three components: an encoder, a controller, and a relational module (see Fig. 4.1 for an overview). In the **encoder module**, a low-dimensional representation (\(z_{img}\)) of an input image (\(x_{in}\)) is created. It includes a feature extraction block (\(f_{e}\)) composed of five convolutional blocks (Figure 4.2). The output of the module is denoted \(z_{img}\in\mathcal{R}^{(128,hw)}\) (with \(h\) the height and \(w\) the width). We apply instance normalization (_iNorm_) (Ulyanov et al., 2016) over \(z_{img}\) before passing it to the controller for further processing, without which the network struggles to learn even simple relations.

The **controller module** is composed of two blocks: a recurrent neural network (\(f_{s}\)), which generates an internal query to guide the attention spotlight over task-relevant features, and a guided attention block (_GA_), which selects those features and sends them to the memory bank (\(M\)). After \(z_{img}\) is built, the _guided-attention_ block extracts the relevant visual information from the input image in a top-down manner at each time step (\(t\)). This block is responsible for generating a context vector (\(z_{t}\)) to be stored in the memory bank (\(M\)) along with all the previous context vectors; these are subsequently accessed by the reasoning module. This memory bank is inspired by the differentiable memory used in Webb et al. (2021). In the guided attention block, an attention vector (\(w_{t}\in\mathcal{R}^{128}\)) is obtained by normalizing the sum of the encoded features (\(z_{img}\)) and the internally generated query (\(q_{int_{t}}\)) produced by \(f_{s}\). This normalized attention vector is used to re-weight the features at every spatial location of \(z_{img}\) to generate the context vector \(z_{t}\in\mathcal{R}^{128}\). The recurrent controller (\(f_{s}\)) uses a Long Short-Term Memory (LSTM) to provide a query vector (\(q_{int_{t}}\in\mathcal{R}^{128}\)) in response to a task-specific goal, guiding attention for the current time step \(t\). \(f_{s}\) also independently generates a gate vector (\(g\in\mathcal{R}^{128}\)) and an output vector (\(out\in\mathcal{R}^{512}\)) with the help of linear layers.
The gate (\(g\)) is later used to shift attention to the next task-relevant feature based on the features previously stored in \(M\). The decision layer, in turn, uses the output vector (\(out\)) to produce the system's classification output.

Figure 4.1: Our proposed _GAMR_ architecture is composed of three components: an _encoder_ module (\(f_{e}\)) builds a representation (\(z_{img}\)) of an image; a _controller_ guides the attention module to dynamically shift attention and selectively routes task-relevant object representations (\(z_{t}\)) to be stored in a memory bank (\(M\)). The recurrent controller (\(f_{s}\)) generates a query vector (\(q_{int_{t}}\)) at each time step to guide the next shift of attention based on the current fixation. After a few shifts of attention, a _reasoning_ module (\(r_{\theta}\)) learns to identify the relationships between objects stored in memory.

The **relational module** is where the reasoning takes place over the context vectors (\(z_{t}\)) stored in the memory bank (\(M\)). This module is composed of a two-layered MLP (\(r_{\theta}\)) which produces a relational vector (\(all_{obj}\)), similar to the relational network (Santoro et al., 2017). As we will show in section 4.6, \(r_{\theta}\) learns elementary operations associated with basic relational judgments between the context vectors (\(z_{t}\)) stored in memory (\(M\)). The relational vector is concatenated with the output (\(out\)) of the controller (\(f_{s}\)) at the last time step (\(t=T\)) and passed through the decision layer (\(f_{\phi}\)) to predict the output (\(\hat{y}\)) for a particular task. We summarize the steps in Algorithm 1.

Figure 4.2: Encoder module (\(f_{e}\)) used in _GAMR_. It consists of four convolutional blocks to process an input image of 128\(\times\)128 resolution.

**Algorithm 1** Guided Attention Model for (visual) Reasoning (_GAMR_). \(LN\) represents layer normalization [1]; (\(||\)) indicates the concatenation of two vectors, forming a new vector; {, } indicates the concatenation of a matrix and a vector, forming a matrix with one additional row; \(\odot\) represents element-wise multiplication; and (\(\cdot\)) represents the product between a scalar and a vector. (h, w) corresponds to the height and width of the feature map obtained from the encoder (\(f_{e}\)).

\(k_{r_{t=1}}\gets 0\) \(\triangleright\in\mathcal{R}^{128}\)
\(h_{t=1}\gets 0\) \(\triangleright\in\mathcal{R}^{512}\)
\(M_{t=1}\leftarrow\{\}\)
\(z_{img}\gets f_{e}(x_{in})\) \(\triangleright\in\mathcal{R}^{(hw,128)}\)
**for** \(t\) in \(1...T\) **do**
  \(out,\ g,\ q_{int_{t}},\ h_{t}\gets f_{s}(h_{t-1},\ k_{r_{t-1}})\) \(\triangleright out\in\mathcal{R}^{512},g\in\mathcal{R}^{128},q_{int_{t}}\in\mathcal{R}^{128}\)
  \(w_{t}\gets LN(z_{img}+q_{int_{t}}.repeat(hw,axis=1))\) \(\triangleright\in\mathcal{R}^{(hw,128)}\)
  \(z_{t}\leftarrow(z_{img}\ \odot\ w_{t}).sum(axis=1)\) \(\triangleright\in\mathcal{R}^{128}\)
  **if** \(t\) is 1 **then** \(k_{r_{t}}\gets 0\) **else** \(k_{r_{t}}\gets g\ \odot\ M_{t-1}.sum(axis=1)\) **end if**
  \(M_{t}\leftarrow\{M_{t-1},\ z_{t}\}\) \(\triangleright\in\mathcal{R}^{(t,128)}\)
**end for**
\(all_{obj}\gets r_{\theta}(\sum_{i,j=1}^{T}(M_{v_{i}}\ ||\ M_{v_{j}}))\)
\(\hat{y}\gets f_{\phi}(all_{obj}\ ||\ out)\)
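To make the control flow concrete, below is a minimal PyTorch reading of Algorithm 1. The module sizes follow the dimensions quoted in the text; the exact reduction axes of the pseudocode and the linear heads producing \(out\), \(g\) and \(q_{int_{t}}\) are our assumptions, so this is an illustrative sketch rather than the reference implementation.

```python
import torch
import torch.nn as nn

class GAMRSketch(nn.Module):
    """Minimal reading of Algorithm 1: T guided-attention steps each write
    one 128-d context vector into memory M; a relational MLP r_theta is then
    summed over all slot pairs and combined with the controller output."""
    def __init__(self, d=128, h=512, T=4, r_dim=256):
        super().__init__()
        self.d, self.h, self.T = d, h, T
        self.lstm = nn.LSTMCell(d, h)          # recurrent controller f_s
        self.to_query = nn.Linear(h, d)        # q_int_t: internal query
        self.to_gate = nn.Linear(h, d)         # g: gate over stored memory
        self.to_out = nn.Linear(h, h)          # out: controller output
        self.ln = nn.LayerNorm(d)
        self.r_theta = nn.Sequential(          # two-layer relational MLP
            nn.Linear(2 * d, r_dim), nn.ReLU(), nn.Linear(r_dim, r_dim))
        self.f_phi = nn.Linear(r_dim + h, 1)   # decision layer

    def forward(self, z_img):                  # z_img: (B, hw, 128), iNorm'd
        B = z_img.size(0)
        hx = z_img.new_zeros(B, self.h)
        cx = z_img.new_zeros(B, self.h)
        k_r = z_img.new_zeros(B, self.d)       # k_r_1 = 0
        memory = []
        for t in range(self.T):
            hx, cx = self.lstm(k_r, (hx, cx))
            out = self.to_out(hx)
            g = self.to_gate(hx)
            q = self.to_query(hx)
            # Guided attention: the query biases every location, and the
            # normalized map re-weights z_img into one context vector z_t.
            w = self.ln(z_img + q.unsqueeze(1))         # (B, hw, 128)
            z_t = (z_img * w).sum(dim=1)                # (B, 128)
            if t > 0:                                   # gated recap of M_{t-1}
                k_r = g * torch.stack(memory, 1).sum(dim=1)
            memory.append(z_t)
        M = torch.stack(memory, 1)                      # (B, T, 128)
        # Relational module: sum r_theta over all T*T ordered slot pairs.
        idx_i, idx_j = torch.meshgrid(
            torch.arange(self.T), torch.arange(self.T), indexing='ij')
        pairs = torch.cat([M[:, idx_i.reshape(-1)],
                           M[:, idx_j.reshape(-1)]], dim=-1)
        all_obj = self.r_theta(pairs).sum(dim=1)        # (B, r_dim)
        return self.f_phi(torch.cat([all_obj, out], dim=-1))
```

For a binary SVRT task, `GAMRSketch()(z)` with `z` of shape `(B, hw, 128)` (the flattened encoder output) yields one logit per image, to be trained with binary cross-entropy.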
### 4.4 Method

**Dataset.** The SVRT dataset is composed of _twenty-three_ different binary classification challenges, each representing either a single rule or a composition of multiple rules. A complete list of tasks with sample images from each category is shown in the Appendix (Fig. A1, A2). We formed four different datasets with 0.5k, 1k, 5k, and 10k training samples to train our model, and used unique sets of 4k and 40k samples for validation and test purposes. Classes are balanced for all the analyses. We trained the model from scratch for a maximum of 100 epochs with an early stopping criterion of 99% accuracy on the validation set, as in Vaishnav et al. (2022), using the Adam optimizer (Kingma and Ba, 2014) and a binary cross-entropy loss. We used the hyperparameter optimization framework _Optuna_ (Akiba et al., 2019) to find the best learning rates and weight decays for these tasks, and we report the test accuracy of the models that gave the best validation scores.

**Baselines.** As a first baseline on this dataset, we compared our architecture to a Relational Network (\(RN\)), a popular architecture for reasoning in VQA. The \(RN\) uses the same CNN backbone as _GAMR_, with feature maps of dimension \(\mathcal{R}^{128,hw}\) where \(h=8\) and \(w=8\). We consider each spatial location of the encoded feature representation as an object (i.e., \(N=8\times 8=64\) object representations). We compute all pairwise combinations between all 64 representations using an MLP shared across all possible pairs (totalling 4,096 pairs). These combinations are then averaged and processed through another MLP to compute a relational feature vector (\(all_{obj}\)) before the final prediction layer (\(f_{\phi}\)). In a sense, _GAMR_ is a special case of an \(RN\) endowed with the ability to attend to a task-relevant subset (\(N=4\)) of these representations with the help of a controller, instead of exhaustively computing all 4,096 possible relations - thus reducing the compute and memory requirements of the architecture very significantly. As additional baseline models, we used a 50-layer _ResNet_ (He et al., 2016) and its Transformer-based self-attention extension (_Attn-ResNet_) introduced in Vaishnav et al. (2022), following the training procedure defined in that paper. These have previously been evaluated on SVRT tasks (Funke et al., 2021; Vaishnav et al., 2022; Messina et al., 2021a,b). _Attn-ResNet_ serves as a powerful baseline because it has more free parameters and a self-attention module against which to compare the proposed active attention component of _GAMR_: in our method, the controller shifts attention sequentially to individual task-relevant features, whereas a standard self-attention module attends to all task-relevant features simultaneously. We also evaluated a memory-based architecture, ESBN (Webb et al., 2021), for which we used an encoder similar to that of _GAMR_ and passed the images in sequential order, with each shape as a single stimulus and the number of time steps equal to the number of shapes present in the SVRT task. To train these models, we used images of dimension \(128\times 128\) for _RN_, _ESBN_ and _GAMR_, and \(256\times 256\) for _ResNet_ and _Attn-ResNet_ (to be consistent with the configuration in Vaishnav et al. (2022)).
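As a sketch of the \(RN\) baseline's exhaustive pairing just described (the all-pairs structure follows the text; the layer widths are our assumptions):

```python
import torch
import torch.nn as nn

class RelationNetworkHead(nn.Module):
    """RN baseline as described: every pair of the 64 spatial cells
    (an 8x8 grid of 128-d vectors) goes through a shared MLP g; the
    4,096 pair codes are averaged, passed to f, then to the decision."""
    def __init__(self, d=128, hidden=256):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.decision = nn.Linear(hidden, 1)   # plays the role of f_phi

    def forward(self, z):                      # z: (B, 64, 128) "objects"
        B, N, d = z.shape
        zi = z.unsqueeze(2).expand(B, N, N, d)      # object i broadcast
        zj = z.unsqueeze(1).expand(B, N, N, d)      # object j broadcast
        pairs = torch.cat([zi, zj], dim=-1)         # (B, N, N, 2d)
        all_obj = self.g(pairs).mean(dim=(1, 2))    # shared MLP + average
        return self.decision(self.f(all_obj))
```

_GAMR_ replaces this \(64\times 64\) grid of pairs with its \(T=4\) attended slots, i.e., 16 pairs instead of 4,096, which is where the savings in compute and memory come from.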
ResNet-50 (_ResNet_) has 23M parameters, the Relation Network (_RN_) has 5.1M parameters, ResNet-50 with attention (_Attn-ResNet_) has 24M parameters, and _GAMR_ and _ESBN_ both have 6.6M parameters.

### 4.5 Benchmarking the system

All twenty-three tasks in the SVRT dataset can be broadly divided into two categories based on the relations involved: same-different (SD) and spatial relations (SR). Same-different tasks (_1, 5, 6, 7, 13, 16, 17, 19, 20, 21, 22_) have been found to be harder for neural networks (Ellis et al., 2015; Kim et al., 2018; Stabinger et al., 2016, 2021; Puebla and Bowers, 2021; Messina et al., 2021; Vaishnav et al., 2022) compared to spatial relations tasks (_2, 3, 4, 8, 9, 10, 11, 12, 14, 15, 18, 23_).

Figure 4.3: Bar plot analysis for the SVRT tasks grouped into same-different (\(SD\)) and spatially related (\(SR\)) tasks. We compare the accuracies of five baseline architectures with _GAMR_, training these models with .5k, 1k, 5k and 10k samples.

### 4.6 Learning Compositionality

Compositionality is the capacity to understand novel combinations from previously known components. While the human brain learns compositionally, deep learning models generally work on a single-task, single-model principle. Below, we provide evidence that _GAMR_ is capable of harnessing compositionality. We looked for triplets of tasks \((x,y,z)\) such that \(z\) would be a composition of tasks \(x\) and \(y\). We systematically looked for all such available triplets in the SVRT dataset and found three: [15, 1, 10], [18, 16, 10] and [21, 19, 25]. We study the ability of the network to learn to compose a new relation with very few training samples, given that it has previously learned the individual rules. We first train the model on tasks \(x\) and \(y\) so that the rules are learned by the reasoning module \(r_{\theta}\), a two-layered MLP. We expect the first layer to learn elementary operations over the context vectors stored in the memory block (\(M\)), while the second layer learns to combine those operations for task \(z\). We freeze the model after training on tasks \(x\) and \(y\) and only fine-tune (i) the layer that learns to combine elementary operations and (ii) the decision layer (\(f_{\phi}\)) on task \(z\), with ten samples per category and 100 epochs in total. Results are shown in Figure 4.4. As our baseline, we trained the model from scratch on task \(z\) of each triplet \((x,y,z)\) to show that the model is indeed exploiting compositionality. We also ran an additional control experiment, choosing a random pair of tasks (\(x=5\), \(y=17\)) whose rules are not components of the target tasks \(z\) (15, 18 and 21); in this setup, the network performed at chance level, in line with our claim.

We now describe the triplet corresponding to each target task (15, 18, 21) used for composition. Task _15_ contains four identical shapes forming a square. It can be composed from task _1_, which helps to identify identical shapes, and task _10_, which helps to learn whether four shapes form a square. In task _18_, the rule to be learned relates to symmetry along the perpendicular bisector of the image. It can be taken as a composition of task _16_, which requires learning mirror reflection of the image along its perpendicular bisector, and task _10_, in which symmetry can be discovered between four shapes (forming a square).
Finally, we took task _21_, which involves both scaling and rotation between two shapes in an image. As its compositional elements, we designed a variant with only rotation and no scaling, which we denote task _25_, and combined it with the counterpart of _21_ that involves scaling but no rotation, i.e., task _19_.

Figure 4.4: **Compositionality test**: We train the model with tasks containing specific rules (e.g., task _1_, representing same-different discrimination, and task _10_, involving identifying whether four shapes form a square). We show that, with its ability to compose already-learned rules, _GAMR_ can quickly learn from 10 samples per class to adapt to a novel scenario (e.g., task _15_, where the rule is to identify whether the four shapes forming a square are identical).

### 4.7 Zero-shot generalization

We hypothesize that if a model has learned the abstract rule underlying a given task, it should be able to re-use its knowledge of this task on other novel tasks that share a similar rule. To verify that _GAMR_ is indeed able to generalize across tasks that share similar rules, we searched for pairs of SVRT tasks composed of at least one common elementary relation [20]. For example, in the pair [1, 22], task _1_ involves the identification of _two_ similar shapes in category 1 and task _22_ involves the identification of _three_ similar shapes in category 1. In each selected pair, the categories that embody the shared rule should carry the same class label (category 1 in the above example) so that we test the right form of learnability. We systematically identified a set \(x\) of tasks (_1, 5, 7, 21, 23_) representing elementary relations such as identifying same-different (1, 5), grouping (7), learning transformations like scaling and rotation (21), and learning insideness (23). We then paired them with other tasks sharing similar relations. These pairs are task _1_ with each of _5_, _15_ and _22_; task _5_ with each of _1_, _15_ and _22_; and, similarly, the pairs [7, 22], [21, 15] and [23, 8]. We separately trained the model on the tasks in set \(x\) and tested the same model on their respective pairs, without fine-tuning on any samples from the test task (zero-shot classification). We observed that _GAMR_ could easily generalize from one task to another without re-training. On the contrary, the chance-level performance of _ResNet_ reveals shortcut learning and rote memorization of task-dependent features.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Training & Test & \multicolumn{3}{c}{Test Accuracy} \\ \cline{3-5} Task & Task & GAMR & Attn-ResNet & ResNet \\ \hline \hline \multirow{3}{*}{1} & 5 & 72.07 & 53.03 & **73.04** \\ & 15 & **92.53** & 92.07 & 78.87 \\ & 22 & **84.91** & 80.10 & 67.15 \\ \hline \multirow{3}{*}{5} & 1 & **92.64** & 85.73 & 92.28 \\ & 15 & **84.36** & 62.69 & 49.95 \\ & 22 & **76.47** & 55.69 & 50.19 \\ \hline 7 & 22 & **83.80** & 79.11 & 50.37 \\ \hline 21 & 15 & **90.53** & 50.00 & 49.76 \\ \hline 23 & 8 & **85.84** & 58.90 & 59.25 \\ \hline \hline \end{tabular} \end{table} Table 4.1: Test accuracy showing whether the model learns the correct rules when trained on one task and tested on a different set of SVRT tasks, for _GAMR_, ResNet-50 with attention (_Attn-ResNet_) and ResNet-50 (_ResNet_).
In comparison, _GAMR_ exhibits far greater abstraction abilities - demonstrating an ability to comprehend rules in unseen tasks without any training at all. We further explored the strategies learned by _GAMR_ using attribution methods for all the tasks. These attribution methods confirm that _GAMR_ does indeed use a similar visual routine between the original task on which it was trained and the new task on which it was never trained. Table 4.1 summarizes these results.

### 4.8 Ablation Study

**Benchmarking guided attention.** We evaluated our guided-attention module (_GAMR_) and compared it with alternative systems built on a comparable base architecture but endowed with self-attention (_GAMR with-SA_) or with no attention and/or memory (_GAMR w/o Atn (RN)_), over the 23 SVRT tasks and for the same number of time steps. In _GAMR with-SA_, we add a self-attention layer in the guided attention module, and all three input vectors to the attention module are the same (\(z_{img}\)). Our intuition is that, at each time step, the self-attention mechanism should learn to attend to different objects in a scene. As a side note, _GAMR with-SA_ turns out to be similar to ARNe [Hahne et al., 2019], used for solving Raven's tasks. We found that, on average, our guided attention model's relative performance is 11.1% better than its self-attention counterpart and 35.6% better than a comparable system lacking attention (or memory) on \(SD\) tasks; the corresponding relative improvements on \(SR\) tasks are 4.5% and 10.4%. This shows that _GAMR_ is efficient, as it yields higher performance for the same number (1k) of training samples. Results are shown in Figure 4.5.

Figure 4.5: We compare the average accuracy over the two sub-clusters of SVRT obtained by _GAMR_ with variants in which the guided-attention module is replaced by self-attention (_GAMR-SA_) or in which attention is removed entirely, yielding a relational reasoning architecture (_GAMR w/o Atn (RN)_).

_GAMR_ is a complex model with several components, so we now proceed to study what role the different components of the proposed architecture play in its ability to learn reasoning tasks, examining their effect on the SD and SR categories. Our lesioning study revealed that _iNorm_ plays a vital role in the model's reasoning and generalization capability, even for learning the simple rules of \(SR\) tasks: normalizing every sample individually helps the model learn the abstract rules involved in a task. We also found that, for \(SD\) tasks, excluding the vector \(out\) from the decision-making process is detrimental.

Figure 4.6: **Ablation studies**: We pruned separate parts of the model, one at a time: controller output (\(out\)), attention vector (\(w_{t}\)), relational vector (\(all_{obj}\)), feature channel gain factor (\(g\)) and instance normalization (\(iNorm\)). The bar plot shows the variation in performance on SD and SR tasks when trained with 1k samples.

Figure 4.7: **Time steps visualization**: Shift of attention with each time step in a task-dependent manner. In the first row, the task is to answer whether the two shapes are touching each other from the outside; at each time step, the network explores the area where the shapes touch. In the other rows, attribution maps show the shifts over different shapes in an image: the controller module shifts attention across different shapes at each time step.
The t-SNE plot shows that \(out\) encodes an independent abstract representation for each SVRT task (Figure 4.8). We systematically ran an ablation study to show that each of these components is essential to making the model efficient. In \(no\_all_{obj}\), the model takes its decision based on the final output (\(out\)) of the recurrent module (\(f_{s}\)); in \(no\_w_{t}\), the output of the attention block, after projection onto \(z_{img}\), is used to obtain \(z_{t}\) instead; in \(no\_g\), equal weighting is applied to the feature space of the context vectors stored in the memory block. We summarize the results in Figure 4.6. We also plot the attribution maps of the model at each time step in Figure 4.7, showing how the model attends to task-dependent features while learning the rule.

Figure 4.8: **Abstract variable:** t-SNE plot of the output vector (_out_) obtained from the controller (\(f_{s}\)) for all 23 SVRT tasks independently. Each cluster can be clearly distinguished from the others, representing the different relations learned. Tasks are represented as labels, with a same-colored box around them, placed at the mean location of the cluster.

### 4.9 Additional Experiment

**Dataset.** Webb et al. (2021) proposed four visual reasoning tasks (Figure 4.9), which we will henceforth refer to as the _Abstract Reasoning Task_ (ART): (1) a same-different (_SD_) discrimination task, (2) a relation match-to-sample task (_RMTS_), (3) a distribution-of-three task (_Dist3_) and (4) an identity rule task (_ID_). These four tasks utilize shapes from a set of 100 unique Unicode character images.1 They are divided into training and test sets under four generalization regimes, using different holdout character sets (m = 0, 50, 85, and 95) out of the 100 characters. We describe the training and test samples and the hyperparameters for all four tasks in section 4.10.

Footnote 1: [https://github.com/taylorwebb/emergent_symbols](https://github.com/taylorwebb/emergent_symbols)

**Baseline models.** As baselines, we chose the ESBN (Webb et al., 2021) along with two other prevalent reasoning architectures, the Transformer (Vaswani et al., 2017) and the Relation Network (RN) (Santoro et al., 2017). These three share a similar encoder backbone with _GAMR_. We also ran an additional baseline with _ResNet50_ to verify whether, in a multi-object scenario, _GAMR_ exploits some bias (like visual entropy) that is otherwise not present when segmented images are passed. In order to make our baselines stronger, we evaluated these models in their natural order, i.e., by passing a single image at a time. We added a random translation (jittering) of the shapes within an area of \(\pm 5\) pixels around the center to prevent these architectures from performing template matching. This jittering increases the complexity of the tasks, and hence we had to increase the number of time steps from 4 to 6; this explains the differences in results compared to the implementation in Webb et al. (2021). For _GAMR_ and _ResNet50_, we present the task-relevant images together as a single stimulus (Figure 4.9) while jittering each shape. We also report ART results where each image is centered and composed into a single stimulus in Figure 4.10. In order to make our architecture choose one option from multiple stimuli (_RMTS_: 2; _Dist3_ and _ID_: 4), we concatenate the relational vector (\(all_{obj}\)) of every stimulus and pass them to a linear layer for the final decision.
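A sketch of that multiple-choice readout follows. The `relational_vector` accessor is hypothetical (it stands for whatever computes \(all_{obj}\) for one candidate stimulus, as in the GAMR sketch earlier), and the 256-d width is an assumption:

```python
import torch
import torch.nn as nn

def multi_choice_logits(model, stimuli, choice_head):
    """Score K candidate stimuli (K=2 for RMTS; K=4 for Dist3 and ID):
    compute all_obj for each candidate, concatenate the K relational
    vectors, and map them to K logits with a single linear layer."""
    rel = [model.relational_vector(s) for s in stimuli]  # K x (B, 256)
    return choice_head(torch.cat(rel, dim=-1))           # (B, K)

# e.g., four candidates for Dist3/ID, trained with cross-entropy:
choice_head = nn.Linear(4 * 256, 4)
```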
Figure 4.10: Test accuracy on ART with different holdout sets when the images are _centered_, compared with the accuracy when shapes are _jittered_ in every image. We find that, unlike the other baselines, which experience a huge drop in performance when shapes are jittered, GAMR is stable. We plot the average accuracy over ten runs on the dataset; the \(x\) axis corresponds to the four types of tasks and the \(y\) axis to the average accuracy score. The tasks are: (a) same-different (SD) discrimination; (b) relation match-to-sample (RMTS); (c) distribution of three (Dist3); and (d) identity rule (ID).

Figure 4.9: **ART for _GAMR_**: (a) Same/different discrimination task. (b) Relational match-to-sample task (answer is 2). (c) Distribution-of-three task (answer is 1). (d) Identity rules task (ABA pattern, answer is 3).

**Results.** We found near-chance (50%) accuracy for all the baseline models, in all four generalization regimes, on the _SD_ and _RMTS_ tasks (Figure 4.11); the same models otherwise performed almost perfectly when the images were centered. Our proposed architecture, in contrast, is robust to this jittering, as shown in Figure 4.10, where we compare its performance with and without jittered images. For the other two tasks, _Dist3_ and _ID_, the baseline models performed better than chance level (25%). _ESBN_ showed an increasing trend in accuracy for progressively easier generalization conditions approaching 0 holdouts. This points toward the fact that the first three shapes in both tasks allow _ESBN_ to estimate a translation factor while comparing the next three shapes, letting it choose the correct option appropriately. _RN_ and _Transformer_ consistently struggled to generalize. The performance of _ESBN_ (a memory-based model) on the SD tasks in both visual reasoning datasets shows that attention is needed for reasoning.

Figure 4.11: **ART**: Comparing the average performance of _GAMR_ with other baselines over 10 runs for different holdout values (m = 0, 50, 85, 95). These models are evaluated on four types of tasks: same-different (SD), relation match-to-sample (RMTS), distribution of three (Dist3) and identity rules (ID).

### 4.10 Hyperparameters

**Holdout set.** For example, holdout \(0\) represents a generalization regime in which the test sets contain the same characters as those used during training. At the other extreme, in holdout _95_, the training set contains a minimal number of characters, most of which are actually held out for testing. Hence, it is necessary to learn the abstract rule in order to generalize to the characters in this regime.

\begin{table} \begin{tabular}{c c c c c c} \hline **Tasks** & & m=0 & m=50 & m=85 & m=95 \\ \hline \multirow{2}{*}{SD} & Training & 18,810 & 4,900 & 420 & 40 \\ & Test & 990 & 4,900 & 10,000 & 10,000 \\ \hline RMTS & Training & 10,000 & 10,000 & 10,000 & 480 \\ Dist3 & Training & 10,000 & 10,000 & 10,000 & 360 \\ ID & Training & 10,000 & 10,000 & 10,000 & 8,640 \\ & Test & 10,000 & 10,000 & 10,000 & 10,000 \\ \hline \end{tabular} \end{table} Table 4.2: **ART**: Number of training and test samples used for the four different types of tasks.

### 4.11 Conclusion and limitations

In this chapter, we described a novel Guided Attention Module for (visual) Reasoning (_GAMR_) to bridge the gap between the reasoning abilities of humans and machines.
Inspired by the cognitive science literature, our module learns to dynamically allocate attention to task-relevant image locations and store the relevant information in memory. Our proposed guided-attention mechanism is shown to outperform the self-attention mechanisms commonly used in vision transformers. Our ablation study demonstrated that an interplay between attention and memory is critical to achieving robust abstract visual reasoning. Furthermore, we demonstrated that the resulting systems are capable of solving novel tasks efficiently, by simply rearranging the elemental processing steps to learn new rules without any additional training. We demonstrated GAMR's versatility, robustness, and ability to generalize compositionally through an array of experiments, achieving state-of-the-art accuracy on the two main visual reasoning challenges in the process.

One limitation of the current approach is that it only deals with a fixed number of time steps. Training the model with four time steps was sufficient to solve all SVRT tasks efficiently; however, a more flexible approach is needed to allow the model to automatically allocate a number of time steps according to the computational demand of the task. GAMR is also limited to covert attention, unlike biological systems, where both covert and overt attention are demonstrated.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **Tasks** & \multicolumn{2}{c}{m=0} & \multicolumn{2}{c}{m=50} & \multicolumn{2}{c}{m=85} & \multicolumn{2}{c}{m=95} \\ \cline{2-9} & Epoch & LR & Epoch & LR & Epoch & LR & Epoch & LR \\ \hline \multicolumn{9}{c}{GAMR} \\ \hline SD & 50 & 0.0001 & 50 & 0.0005 & 100 & 0.0005 & 200 & 0.001 \\ RMTS & 50 & 0.00005 & 50 & 0.0001 & 50 & 0.0005 & 300 & 0.0005 \\ Dist3 & 50 & 0.00005 & 50 & 0.0001 & 50 & 0.00005 & 300 & 0.0005 \\ ID & 50 & 0.00005 & 50 & 0.00005 & 50 & 0.0005 & 100 & 0.0005 \\ \hline \multicolumn{9}{c}{Other baselines} \\ \hline SD & 50 & 0.0005 & 50 & 0.0005 & 100 & 0.0005 & 200 & 0.0005 \\ RMTS & 50 & 0.0005 & 50 & 0.0005 & 50 & 0.0005 & 300 & 0.0005 \\ Dist3 & 50 & 0.0005 & 50 & 0.0005 & 50 & 0.0005 & 300 & 0.0005 \\ ID & 50 & 0.0005 & 50 & 0.0005 & 50 & 0.0005 & 100 & 0.0005 \\ \hline \hline \end{tabular} \end{table} Table 4.3: **ART**: Number of epochs and learning rates (LR) used to train the different architectures on the four tasks.

## Chapter 5: Discussion and Future Work

Attention, a concept extensively explored in cognitive science and machine learning, has made significant strides in the fields of computer vision and natural language processing. Self-attention-based architectures, prevalent in these domains, have demonstrated remarkable performance on various benchmarks. The self-attention mechanism employed by Transformer networks bears resemblance to the central executive controller attention theory proposed in cognitive psychology, as both are concerned with attentional control and cognitive processing. According to this theory, attentional control relies on a central executive system that monitors and governs the flow of information within working memory. This system selectively attends to different components of the working memory representation based on task demands and the relevance of the information. Similarly, the self-attention mechanism in Transformer networks enables models to selectively attend to specific elements of the input sequence, guided by their relevance to the current task.
This mechanism can be viewed as an abstraction of the central executive system, allowing the model to dynamically adjust its attentional focus in response to the input and its current state. Furthermore, both the central executive system and the self-attention mechanism in Transformers involve the integration of multiple information sources, including internal representations and external cues, to facilitate attentional control and cognitive processing. In Transformers, this integration is achieved through multiple self-attention layers, enabling the model to construct an internal representation of the input sequence that captures the most pertinent features for the task at hand. Although the self-attention mechanism in Transformers and the central executive system in cognitive psychology are not identical, they share similar principles of attentional control and cognitive processing, enabling flexible and adaptive behavior to address evolving task demands and environmental cues.

Contrasting attention in Transformers with attention in the human mind reveals several distinguishing characteristics. In Transformers, attention "heads" operate independently in relation to the previous layer, indicating a lack of integration and coherence when compared to the integrated attention observed in the human mind. Transformers exhibit offline processing during learning, focusing solely on the present configuration, while attention in the human mind is influenced by past states, shaping the interpretation of the present and future. Additionally, representations in Transformers are static, whereas in the human mind, representations are dynamic. Transformers employ features for scene segmentation, while in the human mind, features serve as parametric operators that facilitate scene predictions. Moreover, attention in Transformers does not drive active perception, unlike attention in the human mind, which plays a crucial role in guiding active perception and goal-directed cognition. In addition, attention in the human mind integrates latent features into a scene graph, enhancing the coherence and organization of information, a characteristic not explicitly addressed in the context of Transformers. These comparisons highlight the distinctions between attention mechanisms in Transformers and the human mind, emphasizing the integrated, dynamic, and goal-directed nature of attention within the human cognitive system.

Furthermore, attention, as a cognitive process enabling focused concentration on relevant stimuli, plays a vital role in enhancing human reasoning ability. To deepen our understanding of the self-attention mechanism, this thesis investigates its role in cognitive and computer vision architectures, particularly within the domain of visual reasoning. By exploring the interaction between self-attention and cognitive processes, we aim to contribute to the advancement of both cognitive science and machine learning, ultimately enhancing our comprehension and utilization of attention mechanisms.

Visual reasoning is the process of analyzing visual information in order to solve a task. It is considered an important part of fluid intelligence, which involves thinking and reasoning independent of learning, education, and experience. This ability has been shown not only in primates (Gentner et al., 2021) but also in bees (Giurfa et al., 2001) and in newborn ducklings (Martinho and Kacelnik, 2016).
In contrast, prior studies (Puebla and Bowers, 2021; Kim et al., 2018; Ricci et al., 2021; Messina et al., 2021), including our own work, have shown that modern neural networks struggle to solve simple visual reasoning tasks when tested on a popular benchmark, the Synthetic Visual Reasoning Test (SVRT) by Fleuret et al. (2011), that is otherwise simple for humans. We found a similar trend when we tested popular reasoning architectures such as the Relational Network (Santoro et al., 2017), the Transformer (Vaswani et al., 2017), and ESBN (Webb et al., 2021) on the Abstract Reasoning Task (ART), where the stimulus contains a simple Unicode character. As a result, visual reasoning has become an increasingly popular topic of research in recent years, with the emergence of numerous fluid intelligence tests for AI algorithms, including tests for Compositional Visual Reasoning (CVR) (Zerroug et al., 2022), Raven's Progressive Matrices (RPM) (Barrett et al., 2018; Zhang et al., 2019), Visual Progressive Matrices (V-PROM) (Barrett et al., 2018; Teney et al., 2020), as well as the Abstract Reasoning Corpus (ARC) (Chollet, 2019).

We began this thesis by studying the computational mechanisms involved in solving the Synthetic Visual Reasoning Test (SVRT) challenge (Fleuret et al., 2011). This challenge consists of twenty-three binary classification tasks, each involving unique abstract relations in their formulation. Previous studies have identified two broad categories of SVRT tasks (Stabinger et al., 2016; Kim et al., 2018; Yihe et al., 2019): tasks involving spatial-relation (_SR_) judgments and tasks involving same-different (_SD_) judgments. The same-different tasks are found to be harder for neural networks than the spatial-relation tasks (Ellis et al., 2015; Kim et al., 2018; Stabinger et al., 2016, 2021; Puebla and Bowers, 2021; Messina et al., 2021; Vaishnav et al., 2022). Consistent with this work, we proposed a novel taxonomy beyond the two primary clusters, reflecting the number of relationships used to define a particular task. A closer examination is needed to better understand the trend, reflected in network accuracy, with respect to the number of relations involved in defining a particular task. An earlier study by Kim et al. (2018) also reported that feedforward neural networks demonstrate a 'straining' effect in solving tasks involving same-different relations, and hypothesized that this straining effect might be due to the lack of attention. The same was also shown in a human EEG experiment by Alamia et al. (2021), in which higher activity was recorded in the lower \(\beta\) band while solving same-different judgments compared to spatial-relation judgments, indicating higher demands for attention and/or working memory. To test this hypothesis, in the next chapter we focused on understanding the role of attention in solving visual reasoning tasks. Inspired by the two types of visual attention, we proposed a self-attention module that can be used as _feature-based_ or _spatial_ attention to augment the features of a feedforward network (ResNet50 [He et al., 2016]). We evaluated both types of attention-augmented neural networks on the SVRT tasks and found that our proposed attentional models could solve the most challenging SVRT tasks efficiently. The relative improvements obtained by feedforward networks endowed with the two different forms of attention varied across SVRT tasks.
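To make the contrast between the two modes concrete, the following is a minimal, self-contained sketch of how a single self-attention operation over a convolutional feature map can act either as spatial attention (positions attend to positions) or as feature-based attention (channels attend to channels). It is written in PyTorch, omits the learned query/key/value projections, and is an illustration of the idea rather than the exact module used in the thesis.

```python
import torch
import torch.nn.functional as F

def attention_augment(x: torch.Tensor, mode: str = "spatial") -> torch.Tensor:
    """Schematic self-attention over a convolutional feature map x of shape
    (B, C, H, W). mode='spatial': each of the H*W positions attends to all
    positions; mode='feature': each of the C channels attends to all channels.
    Learned projections are omitted for brevity."""
    B, C, H, W = x.shape
    tokens = x.flatten(2)                                   # (B, C, H*W)
    if mode == "spatial":
        tokens = tokens.transpose(1, 2)                     # (B, H*W, C): positions as tokens
    d = tokens.shape[-1]
    attn = F.softmax(tokens @ tokens.transpose(1, 2) / d ** 0.5, dim=-1)
    out = attn @ tokens                                     # re-weighted tokens
    if mode == "spatial":
        out = out.transpose(1, 2)                           # back to (B, C, H*W)
    return x + out.reshape(B, C, H, W)                      # residual augmentation
```

In this schematic, spatial attention re-weights where information is pooled from, while feature-based attention re-weights which feature channels interact; the thesis's module plugs such an operation into a ResNet50 to augment its features.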
We observed that many tasks benefited from spatial attention mechanisms, whereas a few benefited from feature-based attention and showed a significant improvement. Our computational analysis also leads to testable predictions for human experiments by suggesting tasks that benefit from spatial attention (task 22) or feature-based attention (task 21), tasks that benefit from either form of attention (task 19), and tasks that do not benefit from attention (task 2). While we evaluated two types of attention systems, a future possibility is to add experiments with a third type of attention, object-based attention [Duncan, 1984, Egly et al., 1994a, Vecera and Farah, 1994, Kramer et al., 1997], which focuses on a particular object rather than its spatial location or corresponding features.

In the last part of the thesis, we proposed a novel architecture, the Guided Attention Model for (visual) Reasoning (_GAMR_), which integrates the two cognitive abilities humans use in solving reasoning tasks: attention and memory. It draws inspiration from the cognitive science literature on active vision, where the spotlight of attention is routed through the visual system to gather task-relevant information. According to the theory of active vision, the visual world is explored using rapid eye movements guided by shifts of visual attention. We designed a controller, akin to the mechanisms involved in the active vision framework, to route the spotlight of attention and send the task-relevant representations to the memory block later used for reasoning. In GAMR, the controller is implemented with a key/query/value-based self-attention layer. Contrary to the standard formulation, in which the key, query, and value vectors are all derived from the same input, the query in our model is generated internally at each time step, which helps the controller shift the spotlight of attention. One of the limitations of the current approach is the fixed number of time steps. I believe that a future continuation of this work could incorporate a mechanism to adapt the number of time steps based on the complexity of the task. For now, we have set the number of time steps to four for all the tasks; however, a simple task might require fewer time steps to arrive at a decision with high confidence. To make the model adaptive to the situation, one possibility could be to train it with a confidence variable as a stopping criterion. While we have limited our analysis to synthetic visual reasoning datasets, a future possibility is to test the models on a real-world dataset like V-PROM, which consists of images organized in a Raven's style of reasoning, with some context images and some choice images from which the correct answer is selected. Another possible direction is to design an architecture that considers two important traits: efficient use of data and efficient use of computational resources. One way to design this architecture is by incorporating a read-and-write mechanism similar to a Neural Turing Machine (Graves et al., 2014). Both of these mechanisms would help the network read already-stored relations from memory and write relations into memory if they are novel. We expect such a cognitive architecture to demonstrate higher-order reasoning ability, continual learning, compositionality, and meta-learnability. We also evaluated ViT (Dosovitskiy et al., 2021), a full self-attention architecture, on SVRT tasks and found that it struggles to learn even the simplest SVRT tasks; however, Messina et al.
(2021) conducted a similar study on a smaller subset of four SVRT tasks trained on 28k samples and found that a recurrent version of ViT, an attentional network with a convolutional backbone, can learn those tasks. Adding convolutions in the early layers of ViT has been found to yield better accuracy and to reduce sensitivity to the optimization settings (Xiao et al., 2021). This observation motivated us to propose _Conviformer_ (Vaishnav et al., 2022) for another collaborative project, on leaf-fossil classification. We propose to incorporate a convolutional network as the front end of a full self-attention-based vision transformer, enhancing its ability to process higher-resolution images. While larger images hold great importance in computer vision applications like object detection, segmentation, and fine-grained classification, they cannot readily be used with vision transformers because of the associated computational memory demand. _Conviformer_ improves the performance of vision transformers by incorporating local features and infusing convolutional priors into a transformer architecture. We would like to see how convolution-induced vision transformers perform on SVRT tasks.

Concept learning is yet another exciting direction of research. One of the key features of human intelligence is the ability to quickly learn new concepts and use them to generalize to a novel scenario. A _concept_ can be an idea representing a class of events (e.g., walking), objects (e.g., cats), or their properties (e.g., the color blue). To test the concept-learning ability of neural networks in a few-shot manner, we recently introduced a novel visual reasoning dataset, Compositional Visual Reasoning (_CVR_) (Zerroug et al., 2022). This dataset is based on the principle of odd-one-out reasoning: three out of four samples follow a similar concept (rule) in their formulation, while the fourth does not. Each sample contains shapes similar to those used in the SVRT challenge. The dataset extends the variety of relations used in the formulation compared to previously defined datasets like SVRT or RPM. We have also included a compositionality prior in the dataset, whereby certain elementary relations are used to compose the various tasks. The motivation is to push the community to build compositional and sample-efficient networks.

In this thesis, we embarked on one of the pioneering endeavors to investigate self-attention through the lens of visual reasoning. Attention assumes a pivotal role in visual reasoning capabilities, and an enhanced attentional model holds promise for improved reasoning abilities. We elucidated how self-attention operations can serve as a computational model of a visual attention system, encompassing spatial and feature-based attention, while also acting as a model of active vision. Our findings indicate that self-attention is as effective in addressing reasoning tasks as it is in tackling other challenges in the realm of vision. However, further analysis is required to unravel the underlying mechanisms within a comprehensive self-attention model that may constrain its sample-efficient learnability for reasoning tasks. Collectively, this work exemplifies the potential advantages of incorporating self-attention mechanisms into cognitive and computer vision architectures to conquer visual reasoning tasks.

## Chapter 6 Publications

* **Mohit Vaishnav**, Thomas Serre. "GAMR: A Guided Attention Model for (visual) Reasoning."
_International Conference on Learning Representations (ICLR)_ 2023, [https://openreview.net/forum?id=iLMgk2IGNyv](https://openreview.net/forum?id=iLMgk2IGNyv)
* **Mohit Vaishnav**, Remi Cadene, Andrea Alamia, Drew Linsley, Rufin VanRullen, Thomas Serre. "Understanding the Computational Demands Underlying Visual Reasoning." _Neural Computation_ 2022; 34 (5): 1075-1099. doi: [https://doi.org/10.1162/neco_a_01485](https://doi.org/10.1162/neco_a_01485)
* Aimen Zerroug, **Mohit Vaishnav**, Julien Colin, Sebastian Musslick, Thomas Serre. "A Benchmark for Compositional Visual Reasoning." _In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks_ abs/2206.05379 (2022)
* **Mohit Vaishnav**, Thomas Fel, Ivan Rodriguez, Thomas Serre. "Conviformers: Convolutionally Guided Vision Transformer." _ArXiv_ abs/2208.08900 (2022) (_in preparation_)

## Chapter 7 Summary in French

Attention is a widely discussed and studied topic in neuroscience, psychology, cognitive science, and machine learning [Chun et al., 2011, Cho et al., 2015]. Attention is the process of selectively concentrating on a discrete aspect of information while ignoring other perceptible information. A widely accepted characteristic of attention is that it facilitates the efficient use of the available computational resources. Although attention has been studied for decades, it is still far from being a simple or unified concept [Lindsay, 2020]. The cognitive science literature depicts several aspects of attention, such as the fact that it can be concentrated, focused on a particular modality, divided, selective, and of finite capacity. Selectivity, however, remains its most characteristic trait, and it is necessary because of the limited availability of resources. Recently, visual attention has attracted considerable interest in the field of artificial intelligence. Visual attention [Ahmad, 1991] is the ability to prioritize information while neglecting irrelevant information in order to contain the data overload in our visual system. It answers the questions of _what_ to look at and _where_ to look. Visual attention has been extensively studied in psychology and neuroscience [Posner and Petersen, 1990, Bundesen, 1990, Desimone et al., 1995, Corbetta and Shulman, 2002, Petersen and Posner, 2012], and these studies have been a source of inspiration for several artificial intelligence models [Khosla et al., 2007, Lindsay and Miller, 2018, Vaishnav et al., 2022b, Vaishnav and Serre, 2023]. There are three categories of selectivity in a visual attention system: by spatial location _(space-based)_ [Posner, 1980, Posner et al., 1982], by object membership _(object-based)_ [Duncan, 1984, Egly et al., 1994a, Vecera and Farah, 1994, Kramer et al., 1997], and by particular features of the input _(feature-based)_ [Harms and Bundesen, 1983, Driver and Baylis, 1989, Kramer and Jacobson, 1991, Baylis and Driver, 1992, Duncan and Nimmo-Smith, 1996]. Attention is also recruited when performing tasks that involve multiple sensory signals. In the presence of multiple tasks or sensory signals, the central executive controller helps direct attention.
The central executive controller is responsible for coordinating activity within the cognitive system to direct attention, make decisions, and maintain task goals. Context and history are considered useful for optimal task execution, which ties them closely to working memory. Attention is further regarded as an output of the central controller: the controller selects the targets of attention and passes them on to the system responsible for implementing it. There is a tripartite relationship between executive control, working memory, and attention, such that the focus of attention is selected by the executive controller based on the contents of working memory [Soto et al., 2008]. Although all items in working memory can influence attention, the executive controller helps decide which one should affect it the most [Olivers and Eimer, 2011]. These broad and deep cognitive studies of attention have inspired the field of AI and have helped boost its performance. Attention has profoundly shaped computer vision and natural language processing (NLP), which have seen a surge of _self-attention_-based architectures achieving state-of-the-art performance on many benchmarks. Moreover, attention is a cognitive process that enables concentration on a relevant stimulus, a characteristic that plays an essential role in enriching the reasoning ability of humans. To better understand the self-attention mechanism, this thesis studied its role in cognitive and computer vision architectures through the lens of visual reasoning.

Visual reasoning is the process of analyzing the available visual information in order to solve a task. It is considered an important part of fluid intelligence, which involves thinking and reasoning independently of learning, education, and experience. This ability has been demonstrated not only in primates [Gentner et al., 2021b] but also in bees [Giurfa et al., 2001] and in newborn ducklings [Martinho and Kacelnik, 2016]. In contrast, prior studies [Puebla and Bowers, 2021, Kim et al., 2018, Ricci et al., 2021, Messina et al., 2021c] (including our own work) have shown that modern neural networks struggle to solve simple visual reasoning tasks when tested on a popular benchmark called the Synthetic Visual Reasoning Test (SVRT) by Fleuret et al. [2011], which is otherwise simple for humans. We found a similar trend when we tested popular reasoning architectures such as the _Relational Network_ [Santoro et al., 2017], the _Transformer_ [Vaswani et al., 2017], and _ESBN_ [Webb et al., 2021] on the Abstract Reasoning Task (ART), in which the stimulus contains a simple Unicode character. Consequently, visual reasoning has become an increasingly popular research topic in recent years, particularly with the emergence of numerous fluid-intelligence tests for AI algorithms, including Compositional Visual Reasoning (CVR) [Zerroug et al., 2022], Raven's Progressive Matrices (RPM) [Barrett et al., 2018, Zhang et al., 2019], Visual Progressive Matrices (V-PROM) [Barrett et al., 2018, Teney et al., 2020], and the Abstract Reasoning Corpus (ARC) [Chollet, 2019].
The goal of the first study in this thesis was to shed light on the computational mechanisms underlying visual reasoning using the _Synthetic Visual Reasoning Test_ (SVRT) [Fleuret et al., 2011]. This challenge comprises twenty-three binary classification problems spanning a variety of same-different and spatial reasoning tasks. In our experiment, we systematically evaluated the ability of a battery of \(N=15\) deep convolutional neural networks (_ResNets_), varying in depth and trained with different training-set sizes, to solve each of the SVRT problems. We found a wide range of accuracies across the twenty-three tasks. Some tasks were easily learned by shallow networks with relatively small training sets, whereas other tasks were barely solved by much deeper networks trained with orders of magnitude more training examples. Under the hypothesis that the computational complexity of individual tasks can be adequately characterized by the test-accuracy patterns of these \(N=15\) networks, we formed N-dimensional accuracy vectors for each task and ran a hierarchical clustering algorithm. The resulting analysis suggests a taxonomy of visual reasoning tasks: beyond two primary clusters corresponding to _same-different_ (SD) vs. _spatial-relation_ (SR) judgments, we also identified a finer organization with subclusters reflecting the nature and the number of relations used to compose the rules defining each task. Our results are consistent with the earlier work of Kim et al. (2018), who were the first to identify a dichotomy between SD and SR tasks. Our results also extend prior work (Fleuret et al., 2011, Kim et al., 2018, Yihe et al., 2019) by proposing a finer taxonomy of visual reasoning tasks. That network accuracy reflects the number of relations used to define the underlying rules is expected, but merits closer examination. Kim et al. (2018) previously suggested that SD tasks "strain" convolutional neural networks: although it is possible to find a network architecture of sufficient depth (or with a sufficient number of units) that can solve a version of the task under a certain stimulus configuration (e.g., by forcing all stimuli to fit within a \(\Delta H\times\Delta W\) window), it is also relatively easy to make the same task unlearnable by that same architecture beyond a certain range of stimulus configurations (e.g., by increasing the size of the window that contains all the stimuli). It is as if these convolutional networks were able to learn the task as long as the stimulus variability remains below their memory capacity, and fail beyond it. Whether non-convolutional alternatives to the convolutional neural networks (CNNs) tested here, such as the now-popular networks of (Dosovitskiy et al., 2021, Touvron et al., 2021, Tolstikhin et al., 2021), could learn to solve the most difficult SVRT tasks more efficiently remains an open question.
As an initial experiment, we attempted to train and test a Vision _Transformer_ (ViT) (Dosovitskiy et al., 2021) constrained to have a number of parameters (21M) equal to that of the ResNet-50 model used here. For most of the difficult SVRT tasks, we were unable to obtain results with these vision transformers beyond those achieved by ResNet-type networks, even when using 100,000 samples (the same result was also shown in Messina et al. [2021b]). It should be noted that a dataset of 100,000 samples remains relatively small by current standards in vision, since the ViT was trained from scratch. It can be shown that, under certain architectural constraints, multilayer perceptrons and convolutional neural networks, including ResNets as well as other architectures, are universal approximators. In other words, these networks can learn arbitrary mappings between images and class labels. Depending on the complexity of the mapping, a growing number of hidden neurons may be needed to give the network sufficient expressivity; but given sufficient depth and a sufficient number of training examples, deep CNNs can learn arbitrary visual reasoning tasks. Although we cannot make a strong claim specifically for the ResNet architectures used in this study (since the universal-approximation proof was established for a single layer without max pooling or batch normalization [Lin and Jegelka, 2018]), we found empirically that all SVRT tasks could indeed be learned by networks of sufficient depth, provided a sufficient number of training examples. However, deep CNNs generally lack many human cognitive functions, such as attention and working memory. Such functions are likely to provide a considerable advantage to a learner in solving some of these tasks [Marcus, 2001]. In place of these cognitive functions, CNNs would rely on their universal-approximation capabilities, leading to a "brute-force" solution that would be less general. Under these conditions, an open question is whether the SVRT task taxonomy derived from our CNN-based analyses would indeed hold in a study with human subjects. Moreover, the prediction made by Kim et al. [2018] using CNNs, namely that SD tasks are harder to solve than SR tasks and may therefore require additional computations (via feedback processes) such as attention and/or working memory, was successfully validated experimentally by Alamia et al. [2021a] using EEG data. Further evidence for the benefits of feedback mechanisms in visual reasoning was provided by Linsley et al. [2018a], who showed that contour-tracing tasks that can be solved efficiently with a single layer of a recurrent CNN may require several orders of magnitude more processing steps in a non-recurrent CNN to solve the same task.
This ultimately translates into much greater sample efficiency for recurrent CNNs on natural image segmentation tasks [Linsley et al., 2020]. The closely related insideness task was also studied by Villalobos et al. [2021], who demonstrated the inability of CNNs to learn a general solution for this class of problems. Universal approximators with minimal inductive biases, such as multilayer perceptrons, CNNs, and other feedforward or non-attentive architectures, can learn to solve visual reasoning tasks, but they may need a very large number of training examples to fit properly. Consequently, beyond simply measuring the accuracy of very deep networks in high-data regimes (such as when millions of training examples are available), systematically evaluating the performance of neural networks of varying depths across different training regimes can provide essential insight into the complexity of different visual reasoning tasks. Earlier, Kim et al. [2018] hypothesized that this straining of convolutional networks is due to their lack of attention mechanisms that would allow image regions to be explicitly bound to mental objects. A similar remark was made by Greff et al. [2020] in the context of the inability of contemporary neural networks to carve sensory information into distinct chunks that can then be analyzed and compared individually (see also Tsotsos et al. [2007] for a similar point). Interestingly, this prediction was recently tested with human EEG by Alamia et al. [2021a], who showed that brain activity recorded during SD tasks is indeed consistent with greater attention and working-memory demands than during SR tasks. At the same time, the fact that CNNs can learn SR tasks more efficiently than SD tasks does not necessarily mean that human participants can solve these tasks without attention. Indeed, Logan [1994b] showed that SR tasks such as insideness judgments require attention under certain circumstances. To assess the role of attention in visual reasoning, we used _Transformer_ modules to endow deep CNNs with spatial and feature-based attention. The relative improvements obtained by CNNs with the two forms of attention varied across tasks. Many tasks showed a larger improvement with spatial attention, and a smaller number benefited from feature-based attention. Moreover, we found that the patterns of relative improvement explained a large portion of the variance in the SVRT task space derived in Experiment 1. Overall, we found that the existence of spatial and feature-based attention accounts well for the taxonomy of visual reasoning tasks identified in Experiment 1.
Our computational analysis also led to testable predictions for human experiments by suggesting tasks that benefit from either spatial attention (task 22) or feature-based attention (task 21), tasks that benefit from both forms of attention (task 19), and tasks that do not benefit from attention (task 2). Finally, our study focused on the computational benefits of spatial and feature-based attention for visual reasoning. Future work should consider the role of other forms of attention, including object-based attention [Egly et al., 1994b], in visual reasoning. In our second experiment, we studied the learnability of SVRT features as opposed to rules. To do so, we pre-trained neural networks on auxiliary tasks to learn the SVRT features before training them to learn the abstract rules associated with the individual SVRT problems. Our pre-training methods led to networks that learn to solve the SVRT problems better than networks trained from scratch, as well as better than networks pre-trained for image categorization on the ImageNet dataset. We also found that these attention processes appear to contribute more to rule learning than to feature learning. For the \(SR_{1}\) subcluster, we find that this type of pre-training is advantageous in lower training regimes, but the benefits quickly disappear in higher training regimes. In contrast, this pre-training does not enable learning of the tasks in the \(SD_{1}\) subcluster, even with 15,000 samples, which suggests that the main challenge of these tasks is not discovering good visual representations but rather discovering the rule. This points to the need for additional mechanisms beyond those implemented in ResNets, and is also consistent with the improvements observed on these tasks with the addition of attention mechanisms. In summary, our study compared the computational demands of different visual reasoning tasks. Although we focused on understanding the computational benefits of attention and feature-learning mechanisms, it is clear that additional mechanisms will be needed to fully solve all SVRT tasks. These mechanisms are likely to include working memory, which is known to play a role in SVRT tasks [1]. Overall, this work illustrates the potential benefits of incorporating brain-like mechanisms into modern neural networks and provides a path toward achieving human-level visual reasoning.

In the last part of the thesis, we proposed a novel architecture, the Guided Attention Model for (visual) Reasoning (_GAMR_). We integrated the two cognitive abilities used by humans, attention and memory, in solving reasoning tasks. Modern cognitive theories of active vision postulate that the visual system explores the environment dynamically through sequences of attention shifts to select and route task-relevant information to memory [1984, 1987].
Psychophysics experiments on overt visual attention [1000] have shown that eye-movement patterns are directed according to task-dependent routines. GAMR draws inspiration from the cognitive science literature on active vision, in which overt attention is directed within the visual system to gather task-relevant information. According to the theory of active vision, the visual world is explored using rapid eye movements guided by shifts of visual attention. We designed a controller, akin to the mechanisms involved in the active vision framework, to direct the spotlight of attention and to send the task-relevant representations to the memory block later used for reasoning. In GAMR, the controller is implemented with a transformer-based encoder layer. Unlike the existing design, in which self-attention is followed by addition and layer normalization with a linear layer added at the end of this operation, we remove the self-attention layer and replace it with the feature-based attention obtained from the LSTM controller module. This helps the controller shift the spotlight of attention. Our proposed guided attention module for (visual) reasoning (GAMR) learns to shift attention dynamically, in a task-dependent manner, based on queries generated internally by an LSTM executive controller. Through extensive experiments on the two main visual reasoning challenges, the Synthetic Visual Reasoning Test (SVRT) [111] and the Abstract Reasoning Task (ART) [2021], we demonstrate that our neural architecture is able to learn complex compositions of relational rules in a data-efficient manner and outperforms other state-of-the-art neural architectures for visual reasoning. Using explainability methods, we further characterize the visual strategies the model uses to solve representative reasoning tasks. We demonstrate that our model is compositional: it is able to generalize efficiently to new tasks and to learn new visual routines by recomposing previously learned elementary operations. The memory bank was inspired by Webb et al. [2021], in which variable-binding and indirection mechanisms were introduced into an architecture for visual reasoning with an external memory. Variable binding is the ability to bind two representations, and indirection is the mechanism involved in retrieving one representation in order to refer to the other. These authors also introduce temporal context normalization (TCN) [2020], which proves beneficial for out-of-distribution generalization in relational reasoning tasks. However, the model has important limitations: it assumes an object-centric image representation, in which objects are presented individually in a sequence. We cannot evaluate such an architecture on the SVRT challenge, since the images of each task contain multiple objects that require individuation. There are also certain relations, such as "touching", that this individuation (or any object-centric architecture) cannot represent.
ESBN also lacks an attentional mechanism and works best in a scenario where strong attention at the pre-processing level simplifies the tasks. We tested this template-matching behavior of the architecture by training it in the presence of Gaussian noise, which led to chance-level performance. Here, we build on this work and describe an end-to-end trainable model that learns to individuate task-relevant scene elements and to store their representations in memory so that complex relations between these objects can be judged. Finally, our relational mechanism is inspired by the work of Santoro et al. [2017], who introduced a plug-and-play module for computing relations between object-like representations in a network. One of the limitations of the current approach is the fixed number of time steps. I believe that a future continuation of this work could incorporate a mechanism to adapt the number of time steps according to the complexity of the task. For now, we have set the number of time steps to four for all tasks; however, a simple task might require fewer time steps to reach a decision with high confidence. To make the model adaptive to the situation, one possibility could be to train it with a confidence variable as a stopping criterion. Although we have limited our analysis to synthetic visual reasoning datasets, it would be possible in the future to test the models on a real-world dataset such as V-PROM, which consists of images organized in the Raven's style of reasoning, with context images and choice images from which the correct answer is selected. Another possible direction is to design an architecture that accounts for two important characteristics: efficient use of data and efficient use of computational resources. One way to design such an architecture is to incorporate a read-and-write mechanism similar to a Neural Turing Machine. These two mechanisms would help the network read relations already stored in memory and write them to memory when they are novel. We expect such a cognitive architecture to demonstrate higher-order reasoning, continual learning, compositionality, and meta-learning. We also evaluated ViT [Dosovitskiy et al., 2021], a full self-attention architecture, on SVRT tasks and found that it struggles to learn even the simplest SVRT tasks; however, Messina et al. [2021b] conducted a similar study on a smaller subset of four SVRT tasks trained on 28k samples and found that a recurrent version of ViT, an attentional network with a convolutional backbone, can learn those tasks. Adding convolutions in the early layers of ViT has been found to yield better accuracy and to reduce sensitivity to the optimization settings [Xiao et al., 2021]. This observation motivated us to propose _Conviformer_ [Vaishnav et al., 2022a] for another collaborative project, on leaf-fossil classification.
We propose a network that incorporates a convolutional network as the front end of a full self-attention-based vision transformer, thereby enhancing its ability to process higher-resolution images. Although larger images are of great importance in computer vision applications such as object detection, segmentation, and fine-grained classification, they cannot readily be used with vision transformers because of the associated computational memory demand. _Conviformer_ improves the performance of vision transformers by incorporating local features and infusing convolutional priors into a transformer architecture. We would like to see how convolution-induced vision transformers perform on SVRT tasks.

Concept learning is another exciting research direction. One of the key features of human intelligence is the ability to quickly learn new concepts and use them to generalize to a novel scenario. A _concept_ can be an idea representing a class of events (e.g., walking), objects (e.g., cats), or their properties (e.g., the color blue). To test the few-shot concept-learning ability of neural networks, we recently introduced a novel visual reasoning dataset, Compositional Visual Reasoning (_CVR_) [Zerroug et al., 2022]. This dataset is based on the principle of odd-one-out reasoning: three out of four samples follow a similar concept (rule) in their formulation, while the fourth does not. Each sample contains shapes similar to those used in the SVRT challenge. The dataset extends the variety of relations used in the formulation compared to previously defined datasets such as SVRT or RPM. We have also included a compositionality prior in the dataset, whereby certain elementary relations are used to compose the various tasks. The motivation is to push the community to build compositional and sample-efficient networks.

In this thesis, we made one of the very first attempts to explore self-attention from the perspective of visual reasoning. Attention plays a crucial role in demonstrating visual reasoning capabilities, and a better attentional model is expected to be better at reasoning. We showed how self-attention operations can be used as a computational model of a visual attention system representing spatial and feature-based attention, as well as a model of active vision. Although we found that self-attention is as effective at solving reasoning tasks as it is at other vision-related challenges, additional analyses are needed to understand the fundamental mechanisms of a full self-attention model that limit its sample-efficient learning on reasoning tasks. Overall, this work demonstrates the potential benefits of adding self-attention mechanisms to cognitive and computer vision architectures to solve visual reasoning tasks.
## Appendix A Synthetic Visual Reasoning Task

Table A.1: Number of attempts each participant took to reach seven consecutive correct categorizations; rows correspond to task number and columns to participant number. Entries containing "X" indicate that the participant failed to solve the problem, and those cells are not included in the marginal means. The marginal rows below give each participant's mean number of attempts and number of failed tasks.

\begin{table}
\begin{tabular}{|l|c c c c c c c c c c c c c c c c c c c c|}
\hline
 & \multicolumn{20}{c|}{**Participant No.**} \\
 & **1** & **2** & **3** & **4** & **5** & **6** & **7** & **8** & **9** & **10** & **11** & **12** & **13** & **14** & **15** & **16** & **17** & **18** & **19** & **20** \\
\hline
**Mean** & 4.45 & 9.33 & 3.59 & 6.78 & 6.33 & 6.05 & 4.65 & 6.7 & 7.18 & 7.2 & 7.5 & 6.14 & 8.55 & 2.37 & 5.15 & 4.14 & 4.4 & 3.11 & 6.9 & 3.05 \\
**No. of Fails** & 1 & 5 & 1 & 5 & 2 & 2 & 0 & 3 & 12 & 3 & 3 & 1 & 3 & 4 & 3 & 1 & 3 & 4 & 3 & 4 \\
\hline
\end{tabular}
\end{table}

Figure A1: Sample images for Same-Different (SD) tasks.

Figure A2: Sample images for Spatial-Relation (SR) tasks.

## Appendix B Computational Demands of Visual Reasoning

Figure B1: Slope obtained by linearly fitting, for each task and training condition, the ratio of the test accuracy of each network with a spatial attention module to that of a ResNet50, for Same-Different (SD) tasks.

Figure B2: Slope obtained by linearly fitting, for each task and training condition, the ratio of the test accuracy of each network with a spatial attention module to that of a ResNet50, for Spatial-Relation (SR) tasks.

Figure B3: Slope obtained by linearly fitting, for each task and training condition, the ratio of the test accuracy of each network with a feature-based attention module to that of a ResNet50, for Same-Different (SD) tasks.

Figure B4: Slope obtained by linearly fitting, for each task and training condition, the ratio of the test accuracy of each network with a feature-based attention module to that of a ResNet50, for Spatial-Relation (SR) tasks.

Figure B5: Test accuracies for a baseline ResNet50 trained from scratch ("No initialization") vs. the same architecture pre-trained on ImageNet, for different numbers of training examples. A different axis scale is used for \(SR_{2}\) to improve visibility.

## References

* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in Neural Information Processing Systems_, pages 5998-6008, 2017.
* Dosovitskiy et al. (2021) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net, 2021. URL [https://openreview.net/forum?id=YicbFdNTTy](https://openreview.net/forum?id=YicbFdNTTy).
* Fleuret et al.
(2011) Francois Fleuret, Ting Li, Charles Dubout, Emma K. Wampler, Steven Yantis, and Donald Geman. Comparing machines and humans on a visual categorization test. _Proceedings of the National Academy of Sciences_, 108(43):17621-17625, 2011.
* Chun et al. (2011) Marvin M. Chun, Julie D. Golomb, and Nicholas B. Turk-Browne. A taxonomy of external and internal attention. _Annual Review of Psychology_, 62(1):73-101, 2011.
* Cho et al. (2015) Kyunghyun Cho, Aaron Courville, and Yoshua Bengio. Describing multimedia content using attention-based encoder-decoder networks. _IEEE Transactions on Multimedia_, 17(11):1875-1886, 2015.
* Descartes and Rodis-Lewis (1649) Rene Descartes and Genevieve Rodis-Lewis. _Les passions de l'ame_. Le Gras, Paris, 1649.
* Nakayama and Mackeben (1989) Ken Nakayama and Manfred Mackeben. Sustained and transient components of focal visual attention. _Vision Research_, 29(11):1631-1647, 1989.
* Kohler (1947) Wolfgang Kohler. Gestalt psychology today. 1947.
2310.18954
Mask Propagation for Efficient Video Semantic Segmentation
Video Semantic Segmentation (VSS) involves assigning a semantic label to each pixel in a video sequence. Prior work in this field has demonstrated promising results by extending image semantic segmentation models to exploit temporal relationships across video frames; however, these approaches often incur significant computational costs. In this paper, we propose an efficient mask propagation framework for VSS, called MPVSS. Our approach first employs a strong query-based image segmentor on sparse key frames to generate accurate binary masks and class predictions. We then design a flow estimation module utilizing the learned queries to generate a set of segment-aware flow maps, each associated with a mask prediction from the key frame. Finally, the mask-flow pairs are warped to serve as the mask predictions for the non-key frames. By reusing predictions from key frames, we circumvent the need to process a large volume of video frames individually with resource-intensive segmentors, alleviating temporal redundancy and significantly reducing computational costs. Extensive experiments on VSPW and Cityscapes demonstrate that our mask propagation framework achieves SOTA accuracy and efficiency trade-offs. For instance, our best model with Swin-L backbone outperforms the SOTA MRCFA using MiT-B5 by 4.0% mIoU, requiring only 26% FLOPs on the VSPW dataset. Moreover, our framework reduces up to 4x FLOPs compared to the per-frame Mask2Former baseline with only up to 2% mIoU degradation on the Cityscapes validation set. Code is available at https://github.com/ziplab/MPVSS.
Yuetian Weng, Mingfei Han, Haoyu He, Mingjie Li, Lina Yao, Xiaojun Chang, Bohan Zhuang
2023-10-29T09:55:28Z
http://arxiv.org/abs/2310.18954v1
# Mask Propagation for Efficient Video Semantic Segmentation ###### Abstract Video Semantic Segmentation (VSS) involves assigning a semantic label to each pixel in a video sequence. Prior work in this field has demonstrated promising results by extending image semantic segmentation models to exploit temporal relationships across video frames; however, these approaches often incur significant computational costs. In this paper, we propose an efficient mask propagation framework for VSS, called MPVSS. Our approach first employs a strong query-based image segmentor on sparse key frames to generate accurate binary masks and class predictions. We then design a flow estimation module utilizing the learned queries to generate a set of segment-aware flow maps, each associated with a mask prediction from the key frame. Finally, the mask-flow pairs are warped to serve as the mask predictions for the non-key frames. By reusing predictions from key frames, we circumvent the need to process a large volume of video frames individually with resource-intensive segmentors, alleviating temporal redundancy and significantly reducing computational costs. Extensive experiments on VSPW and Cityscapes demonstrate that our mask propagation framework achieves SOTA accuracy and efficiency trade-offs. For instance, our best model with Swin-L backbone outperforms the SOTA MRCFA using MiT-B5 by 4.0% mIoU, requiring only 26% FLOPs on the VSPW dataset. Moreover, our framework reduces up to 4\(\times\) FLOPs compared to the per-frame Mask2Former baseline with only up to 2% mIoU degradation on the Cityscapes validation set. Code is available at [https://github.com/ziplab/MPVSS](https://github.com/ziplab/MPVSS). ## 1 Introduction Video Semantic Segmentation (VSS), a fundamental task in computer vision, seeks to assign a semantic category label to each pixel in a video sequence. Previous research on VSS has leveraged developments in image semantic segmentation models, _e.g._, FCN [40] and Deeplab [83; 4], which made tremendous progress in the field. However, adapting image semantic segmentation models to VSS remains challenging. On one hand, sophisticated temporal modeling is required to capture the intricate dynamics among video frames. On the other hand, videos contain a significantly larger volume of data compared to images. Processing every frame with a strong image segmentor can incur significant computational costs. Prior work in VSS mainly focuses on leveraging both temporal and spatial information to enhance the accuracy of pixel-wise labeling on each video frame, building upon the pixel-wise classification paradigm of traditional image semantic segmentors. Specifically, these methods exploit temporal information by integrating visual features from previous frames into the target frame, employing techniques such as recurrent units [26; 43], temporal shift [33; 1], or spatial-temporal attention modules [19; 44; 28; 16; 53], among others. More recently, with the prevalence of the DETR [2] framework, DETR-like segmentors [8; 7; 51] have dominated the latest advances and achieved state-of-the-art performance in semantic segmentation. Notably, MaskFormer series [8; 7] learn a set of queries representing target segments, and employ bipartite matching with ground truth segments as the training objective. Each query is learned to predict a binary mask and its associated class prediction. The final result is obtained by combining the binary masks and class predictions of all queries via matrix multiplication. 
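For concreteness, that aggregation step can be sketched in a few lines of PyTorch. The shapes below are illustrative, and the snippet mirrors the standard MaskFormer-style semantic inference (softmax over classes excluding the "no object" category, sigmoid over masks, then a matrix product over the query dimension); the exact implementation details may differ.

```python
import torch

N, K, H, W = 100, 19, 128, 256                      # illustrative: N queries, K classes
class_logits = torch.randn(N, K + 1)                # per-query class logits (+1 "no object")
mask_logits = torch.randn(N, H, W)                  # per-query binary mask logits

class_probs = class_logits.softmax(dim=-1)[:, :-1]  # (N, K), drop the "no object" column
mask_probs = mask_logits.sigmoid()                  # (N, H, W)

# Matrix multiplication over the query dimension combines the two predictions.
semantic_map = torch.einsum("nk,nhw->khw", class_probs, mask_probs)
labels = semantic_map.argmax(dim=0)                 # (H, W) per-pixel class labels
```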
However, these models require computing a high-resolution per-pixel embedding for each video frame using a powerful image encoder, leading to significant computational costs. For instance, processing an input video clip with 30\(\times\)1024\(\times\)2048 RGB frames using the strong Mask2Former [7] requires more than 20.46T Floating-point Operations (FLOPs) in total. Such computational demands render it impractical to deploy these models in real-world VSS systems. A typical solution is to incorporate temporal information guided by optical flow [50; 83; 22; 67] to reduce the redundancy. The related literature proposes to propagate feature maps from the key frame to other non-key frames using optical flow. The motivation mainly comes from two aspects. First, videos exhibit high temporal redundancy due to the similar appearance of objects or scenes in consecutive frames [13], especially in videos with a high frame rate. As shown in Fig. 1, we empirically observe that semantic information in consecutive video frames is highly correlated. Additionally, previous methods [50; 83; 22; 30; 25; 45] highlight the gradual changes in semantic features within deep neural networks. They attempt to reduce the computational costs by reusing feature maps from preceding frames for the target frame, facilitated by the estimated optical flow between the neighboring frames. However, these methods may still suffer from degraded performance due to inaccurate and noisy optical flow estimation, which only measures pixel-to-pixel correspondence between adjacent frames. Recent advancements in Transformer-based optical flow methods [58; 20; 63; 64] demonstrate the importance of incorporating global correspondence information for each pixel, resulting in more precise per-pixel optical flow estimation. Nevertheless, these methods still focus solely on pixel-level correspondence and do not account for segment-level information among video frames, as illustrated in Fig. 1, which may be insufficient for the task of VSS.

In this paper, we present an efficient _mask propagation_ framework for VSS, namely MPVSS, as shown in Fig. 2. This framework relies on two key components: _a strong query-based image segmentor_ for the key frame (upper part of Fig. 2), and _a powerful query-based flow estimator_ for the non-key frames (lower part of Fig. 2). Specifically, we employ the state-of-the-art query-based image segmentation model, _i.e._, Mask2Former [7], to process sparse key frames and generate accurate binary masks and class predictions. We then propose to estimate an individual flow map corresponding to each segment-level mask prediction of the key frame. To achieve this, we leverage the learned queries from the key frame to aggregate segment-level motion information between adjacent frame pairs and capture segment-specific movements along the temporal dimension. These queries are fed to a flow head to predict a set of segment-aware flow maps, followed by a refinement step.

Figure 1: Motivation of the proposed MPVSS. Three consecutive frames with ground truth category labels are sampled from a video in VSPW [42], illustrating a strong correlation between video frames along the temporal dimension. This observation underscores the significant temporal redundancy present in videos. The remaining columns contrast the proposed query-based flow and traditional optical flow, each showing a normalized flow map, a mask prediction, and the final predicted semantic map, respectively.

The mask
predictions from key frames are subsequently warped to other non-key frames through the generated query-based flow maps, where each flow map is tailored to the corresponding mask predicted by the associated query in the key frame. Finally, we derive semantic maps for the non-key frames by aggregating the warped mask predictions with the class probabilities predicted from the key frame. By warping the predicted segmentation masks from key frames, our model avoids processing each individual video frame using computationally intensive semantic segmentation models. This not only reduces temporal redundancy but also significantly lowers the computational costs involved. In summary, our main contributions are threefold:

* We propose \(\mathtt{MPVSS}\), a novel mask propagation framework for efficient VSS, which reduces computational costs and redundancy by propagating accurate mask predictions from key frames to non-key frames.
* We devise a novel query-based flow estimation module that leverages the learned queries from the key frame to model motion cues for each segment-level mask prediction, yielding a collection of accurate flow maps. These flow maps are utilized to propagate mask predictions from the key frame to other non-key frames.
* Extensive experiments on standard benchmarks, VSPW [42] and Cityscapes [9], demonstrate that our \(\mathtt{MPVSS}\) achieves SOTA accuracy and efficiency trade-offs.

## 2 Related Work

**Image semantic segmentation.** As a fundamental task in computer vision, image semantic segmentation seeks to assign a semantic label to each pixel in an image. The pioneering work of Fully Convolutional Networks (FCNs) [40] first adopts fully convolutional networks to perform pixel-wise classification in an end-to-end manner and applies a classification loss to each output pixel, which naturally groups pixels in an image into regions of different categories. Building upon the per-pixel classification formulation, various segmentation methods have been proposed in the past few years. Some works aim at learning representative features, proposing to use atrous convolutional layers to enlarge the receptive field [69; 4; 5; 46; 68], to aggregate contextual information from multi-scale feature maps via a pyramid architecture [17; 36] or an encoder-decoder architecture [49; 6], or to utilize attention modules to capture global dependencies [27; 77; 14; 80; 79; 61]. Another line of work focuses on introducing boundary information to improve prediction accuracy for details [10; 29; 71; 78]. More recent methods demonstrate the effectiveness of Transformer-based architectures for semantic segmentation. SegFormer [62], Segmentor [51], SETR [79], and MaskFormer [8; 7] replace traditional convolutional backbones [18] with Vision Transformers [11; 59; 39; 34] and/or implement the head with Transformers following the DETR-like framework [2]. Building on these advances, we propose an efficient framework for video semantic segmentation that leverages the mask predictions generated by DETR-like image segmentors. Our framework focuses on propagating these mask predictions across video frames, enabling accurate and consistent segmentation results throughout the entire video sequence. By incorporating the strengths of DETR-like models in image segmentation, we achieve high-quality and efficient video semantic segmentation.

**Video semantic segmentation.** Video semantic segmentation has recently captured the attention of researchers due to its dynamic nature and practical significance in real-world scenarios.
Compared with image semantic segmentation, most existing VSS methods pay attention to exploiting temporal information, and they can be divided into two groups. The first group of approaches exploits temporal relationships to improve prediction accuracy and consistency. For example, some works utilize recurrent units [26; 43] or design an attention propagation module [19] to warp the features or results of several reference frames [82; 15], or aggregate neighbouring frames based on optical flow estimation [15; 37]. In particular, ETC [38] proposes temporal consistency regularization during training, [74] instead exploits perceptual consistency in videos, MRCFA [54] mines relationships of cross-frame affinities, STT [28] employs a spatial-temporal transformer, and CFFM-VSS [53] disentangles static and dynamic context for video frames. The second line of methods aims to alleviate the huge temporal redundancy among video frames and thus reduce the computational cost. Observing the high similarity of the semantic representations in deep layers, ClockNet [50] designs a pipeline schedule to adaptively reuse feature maps from previous frames in certain network layers. DFF [83] utilizes optical flow to propagate feature maps from key frames to non-key frames, achieving remarkable computational efficiency. Accel [22] further proposes to execute a large model on key frames and a compact one on non-key frames, while DVN [67] proposes a region-based execution scheme on non-key frames, as well as a dynamic key-frame scheduling policy. LLVS [30] develops an efficient framework involving both adaptive key frame selection and selective feature propagation. Unlike methods that primarily rely on optical flow, which models dense pixel-to-pixel correspondence, we introduce a novel query-based flow module that focuses on segment-level flow fields between adjacent frames. By operating at the segment level, our method captures higher-level motion information that is more accurate for VSS.

**Motion estimation by optical flow.** Optical flow estimation is a fundamental module in video analysis, which estimates a dense field of pixel displacements between adjacent frames. In the deep learning era, FlowNet [12] first utilizes convolutional neural networks to learn from labeled data. Most follow-up methods propose to employ spatial pyramid networks [52] or utilize recurrent layers [58] to apply iterative refinement to the predicted flow in a coarse-to-fine manner [52; 47; 24; 55]. More recent methods pioneer the use of Transformers to capture the global dependencies of the cost volume [58; 20] or to extract representative feature maps for global matching [63], addressing the challenge of large displacements. As optical flow represents the pixel-to-pixel correspondence between adjacent frames, it is utilized in a wide range of video analysis tasks, _e.g._, as motion information for action recognition [3; 56] and temporal action detection [57; 32], for improving the feature representation in video segmentation [75] and object detection [82], and as a simultaneous task modeling pixel correspondence for video interpolation [23; 48; 21], video inpainting [72; 73; 31], etc.

## 3 Mask Propagation via Query-based Flow

In Sec. 3.1, we introduce the overall pipeline of the proposed \(\mathtt{MPVSS}\). In Sec. 3.2, we briefly revisit Mask2Former, upon which \(\mathtt{MPVSS}\) is built. Finally, in Sec.
3.3, we introduce our query-based flow estimation method, which yields a set of flow maps for efficiently propagating mask predictions from key frames to other non-key frames.

### Overview

The overall architecture is shown in Fig. 2. Let \(\{\mathbf{I}^{t}\in\mathbb{R}^{H\times W\times 3}\}_{t=1}^{T}\) be an input video clip with \(T\) frames, where \(H\) and \(W\) denote the height and width of frames in the video. Without loss of generality, we select key frames from the video clip at fixed intervals, _e.g._, every 5 frames, considering other frames within these intervals as non-key frames. For simplicity, we denote each key frame as \(\mathbf{I}^{k}\) and each non-key frame as \(\mathbf{I}^{j}\). To reduce redundant computation, we only run the heavy image segmentation model on the sampled key frames and propagate the predictions from key frames to non-key frames. Specifically, for a key frame \(\mathbf{I}^{k}\), we adopt Mask2Former [7] (upper part of Fig. 2) to generate \(N\) mask predictions \(\mathbf{M}^{k}\) and class predictions. Then, we use the flow estimation module (bottom part of Fig. 2) to predict a set of flow maps \(\mathcal{F}^{j\to k}\in\mathbb{R}^{N\times 2\times H\times W}\) indicating the motion changes from \(\mathbf{I}^{j}\) to \(\mathbf{I}^{k}\), which will be discussed in Sec. 3.3. Subsequently, we apply the bilinear warping function on all locations to propagate the per-segment mask predictions \(\mathbf{M}^{k}\) and obtain \(\mathbf{M}^{j}\) for the non-key frames, _i.e._, \(\mathbf{M}^{j}=\mathcal{W}(\mathbf{M}^{k},\mathcal{F}^{j\to k})\). Consequently, we segment the non-key frame \(\mathbf{I}^{j}\) by propagating the predicted masks from its preceding key frame \(\mathbf{I}^{k}\), avoiding the need for computationally intensive image segmentation models on every video frame.

Figure 2: Overall architecture of the proposed \(\mathtt{MPVSS}\).

During training, we apply the loss function of Mask2Former [7] to the warped mask predictions \(\mathbf{M}^{j}\) and perform bipartite matching with the ground truth masks for the current frame. The training approach is similar to the process described in [83], where the loss gradients are back-propagated throughout the model to update the proposed flow module.

### Mask2Former for Segmenting Key Frames

Mask2Former consists of three components: a backbone, a pixel decoder and a transformer decoder, as depicted in the upper part of Fig. 2. The backbone extracts feature maps from the key frame \(\mathbf{I}^{k}\in\mathbb{R}^{H\times W\times 3}\). The pixel decoder builds a multi-scale feature pyramid as well as generates a high-resolution per-pixel embedding, following FPN [35] and Deformable DETR [81]. In the transformer decoder, the target segments are represented as a set of learned queries \(\mathcal{Q}^{k}_{\mathcal{O}}\in\mathbb{R}^{N\times C}\), where \(C\) is the channel dimension. After gradually enriching their representations with stacked decoder blocks, \(\mathcal{Q}^{k}_{\mathcal{O}}\) is fed into a class and a mask head to yield \(N\) class probabilities and \(N\) mask embeddings, respectively. After decoding the binary mask predictions \(\mathbf{M}^{k}\in\mathbb{R}^{N\times H\times W}\) from the mask embeddings and the per-pixel embeddings, we can finally derive the semantic maps for the key frame by aggregating the \(N\) binary mask predictions with their corresponding predicted class probabilities via matrix multiplication. We refer readers to [8; 7] for more details.
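To make the warping step concrete, the following is a minimal PyTorch-style sketch of the bilinear warping \(\mathbf{M}^{j}=\mathcal{W}(\mathbf{M}^{k},\mathcal{F}^{j\to k})\); the tensor layouts and the helper name `warp_masks` are our own assumptions for illustration, not taken from a released implementation.

```python
import torch
import torch.nn.functional as F

def warp_masks(masks_k, flows_jk):
    """Bilinearly warp key-frame mask logits to a non-key frame.

    masks_k:  (N, H, W)    per-segment mask predictions M^k of the key frame
    flows_jk: (N, 2, H, W) per-segment flow maps F^{j->k} in pixels,
                           pointing from frame j back to key frame k
    returns:  (N, H, W)    warped mask predictions M^j for the non-key frame
    """
    n, h, w = masks_k.shape
    # Base sampling grid in pixel coordinates (x, y order).
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float()            # (2, H, W)
    coords = base.unsqueeze(0) + flows_jk                  # (N, 2, H, W)
    # Normalise to [-1, 1], the coordinate range expected by grid_sample.
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = coords.permute(0, 2, 3, 1)                      # (N, H, W, 2)
    # Each of the N segments is warped by its own flow map.
    warped = F.grid_sample(masks_k.unsqueeze(1), grid,
                           mode="bilinear", align_corners=True)
    return warped.squeeze(1)
```

Treating the \(N\) segments as a batch lets a single `grid_sample` call apply a different flow field to each mask, which is what distinguishes the per-segment warping here from warping all masks with one shared optical flow map.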
### Query-based Flow Estimation

To generate the segmentation masks for the non-key frames efficiently, we propose to estimate a set of flow maps, each corresponding to a mask prediction of the key frame. We start by introducing the process of generating \(N\) query-based flow maps between the current non-key frame \(\mathbf{I}^{j}\) and its preceding key frame \(\mathbf{I}^{k}\), where \(0<j-k\leq T\). The overall flow module comprises three key elements: a motion encoder to encode the motion feature pyramid between the pair of video frames \(\{\mathbf{I}^{k},\mathbf{I}^{j}\}\), a motion decoder that leverages the learned queries \(\mathcal{Q}^{k}_{\mathcal{O}}\) from the key frame to extract motion information from the motion feature maps for each segment, and a flow head responsible for predicting the \(N\) query-based flow maps \(\mathcal{F}^{j\to k}\), where each flow map is associated with warping a segment (mask prediction) of the key frame to the current non-key frame.

**Motion encoder.** To obtain motion features between the paired frames \(\{\mathbf{I}^{k},\mathbf{I}^{j}\}\), we concatenate the two frames along the channel dimension as the input of the motion encoder. The motion encoder, utilizing the same architecture as the flow encoder in FlowNet [12], is employed to extract motion features at each location in a downsampling manner. Following [12; 7], we reverse the order of the original feature maps, generating a motion feature pyramid \(\{\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3},\mathbf{b}_{4}\}\) at resolutions \(1/32\), \(1/16\), \(1/8\) and \(1/4\) of the original video frame, respectively. Subsequently, the feature pyramid is fed to the motion decoder for the decoding of flow maps from the lowest to the highest resolution.

**Motion decoder.** We then devise a motion decoder to aggregate motion information from the motion feature pyramid using \(N\) flow queries \(\mathcal{Q}_{\mathcal{F}}\). First, the motion decoder takes as input the first three feature maps \(\{\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}\}\) from the motion feature pyramid and the flow queries \(\mathcal{Q}_{\mathcal{F}}\). We initialize the flow queries with the \(N\) learned queries \(\mathcal{Q}^{k}_{\mathcal{O}}\) of the preceding key frame, _i.e._, \(\mathcal{Q}^{1}_{\mathcal{F}}=\mathcal{Q}^{k}_{\mathcal{O}}\), where \(\mathcal{Q}^{1}_{\mathcal{F}}\) denotes the input flow queries to the motion decoder. Following [7], the feature map at each level is projected to \(C\) channels using a linear layer, and a sinusoidal positional embedding and a learnable level embedding are then added. This results in three scales of motion feature maps in total, _i.e._, \(\{\mathbf{B}^{1}_{1},\mathbf{B}^{1}_{2},\mathbf{B}^{1}_{3}\}\), as the input of the following layers. Next, the motion decoder is divided into \(S\) stages, each indexed by \(s\), with each stage involving \(L\) blocks, each indexed by \(l\). We adopt \(S=3\) and \(L=3\) for our model. Inspired by [7; 65], starting from \(s=1\) and \(l=1\), for each block in each stage, we first concatenate \(\mathbf{B}^{s}_{l}\) and \(\mathcal{Q}^{s}_{\mathcal{F}}\) and then feed them into a number of Transformer layers, each of which performs information propagation between \(\mathbf{B}_{l}^{s}\) and \(\mathcal{Q}_{\mathcal{F}}^{s}\):
\[\mathbf{Z}_{l}^{s} =\left[\mathcal{Q}_{\mathcal{F}}^{s};\mathbf{B}_{l}^{s}\right], \tag{1}\] \[\mathbf{\hat{Z}}_{l}^{s} =\mathrm{LN}(\mathrm{MSA}\left(\mathbf{Z}_{l}^{s}\right)+\mathbf{Z}_{l}^{s}), \tag{2}\] \[\hat{\mathcal{Q}}_{\mathcal{F}}^{s},\mathbf{\hat{B}}_{l}^{s} =\mathrm{LN}(\mathrm{FFN}(\mathbf{\hat{Z}}_{l}^{s})+\mathbf{\hat{Z}}_{l}^{s}), \tag{3}\] where \([;]\) denotes the concatenation operation, and \(\mathrm{LN}(\cdot)\), \(\mathrm{MSA}(\cdot)\) and \(\mathrm{FFN}(\cdot)\) denote a LayerNorm layer, Multi-head Self-Attention and a Feed-forward Neural Network, respectively. Therefore, in each stage, we learn flow queries and update motion feature maps via self-attention. The self-attention mechanism aggregates motion information globally from both the motion feature maps and the flow query features simultaneously. Finally, the learned flow queries \(\hat{\mathcal{Q}}_{\mathcal{F}}\) and updated motion feature maps \(\{\mathbf{\hat{B}}_{1},\mathbf{\hat{B}}_{2},\mathbf{\hat{B}}_{3}\}\) are fed to the flow head for flow prediction. As each query of \(\mathcal{Q}_{\mathcal{O}}^{k}\) encodes global information for each segment in the key frame, using \(\mathcal{Q}_{\mathcal{O}}^{k}\) as initial flow queries enables each flow query to extract motion information for the corresponding segment in the key frame, which facilitates query-based motion modeling.

**Flow head.** The flow head generates the final flow maps \(\mathcal{F}^{j\to k}\in\mathbb{R}^{N\times 2\times H\times W}\) by combining the query-based flow \(\mathcal{F}^{QF}\) and the pixel-wise flow \(\mathcal{F}^{PF}\). To obtain the query-based flow \(\mathcal{F}^{QF}\), an MLP with 2 hidden layers is employed to convert the learned flow queries \(\hat{\mathcal{Q}}_{\mathcal{F}}\) to flow embeddings \(\mathcal{E}_{\mathcal{F}}\in\mathbb{R}^{N\times C}\), each with \(C\) channels. Inspired by the pixel decoder design in Mask2Former, we project the motion feature map \(\mathbf{b}_{4}\) from the motion encoder to \(\mathcal{E}_{B}\in\mathbb{R}^{2C\times\frac{H}{4}\times\frac{W}{4}}\) using another MLP with 2 hidden layers. Let \(\mathcal{E}_{B}^{x}\) and \(\mathcal{E}_{B}^{y}\) denote the first and last \(C\) channels of \(\mathcal{E}_{B}\); they are used to obtain the query-based flow predictions in the \(x\) and \(y\) directions, respectively. Then, for the \(n^{th}\) query, we get the flow map via a dot product between the \(n^{th}\) flow embedding \(\mathcal{E}_{\mathcal{F}_{n}}\) and the projected motion map \(\mathcal{E}_{B}\), _i.e._, \(\mathcal{F}_{n}^{QF}(x)=\mathcal{E}_{\mathcal{F}_{n}}\cdot\mathcal{E}_{B}^{x}\) and \(\mathcal{F}_{n}^{QF}(y)=\mathcal{E}_{\mathcal{F}_{n}}\cdot\mathcal{E}_{B}^{y}\) for the \(x\) and \(y\) directions, respectively. We thus obtain the predicted query-based flow \(\mathcal{F}_{n}^{QF}=[\mathcal{F}_{n}^{QF}(x);\mathcal{F}_{n}^{QF}(y)]\in \mathbb{R}^{2\times\frac{H}{4}\times\frac{W}{4}}\) for the \(n^{th}\) query. The dot product of the flow embeddings and projected motion maps quantifies the similarity between the segment-level motion vector and the encoded motion information at each location on the feature maps, which reflects the movement of each query over the spatial dimensions. Additionally, we use the updated motion feature maps \(\{\mathbf{\hat{B}}_{1},\mathbf{\hat{B}}_{2},\mathbf{\hat{B}}_{3}\}\) to predict a pixel-wise flow \(\mathcal{F}^{PF}\in\mathbb{R}^{2\times\frac{H}{4}\times\frac{W}{4}}\) in a coarse-to-fine manner, following [12], which is used to refine the query-based flow.
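As a sketch of the dot-product step described above (tensor names and shapes are our assumptions, not from a released implementation), each flow embedding acts as a per-segment kernel applied to the two halves of the projected motion map:

```python
import torch

def query_based_flow(flow_emb, motion_emb):
    """Compute per-segment flow F^QF from flow embeddings.

    flow_emb:   (N, C)           flow embeddings E_F produced by the MLP
    motion_emb: (2C, H/4, W/4)   projected motion map E_B; the first C
                                 channels are for the x direction, the
                                 last C channels for the y direction
    returns:    (N, 2, H/4, W/4) query-based flow F^QF
    """
    c = flow_emb.shape[1]
    ex, ey = motion_emb[:c], motion_emb[c:]            # E_B^x, E_B^y
    fx = torch.einsum("nc,chw->nhw", flow_emb, ex)     # F_n^QF(x)
    fy = torch.einsum("nc,chw->nhw", flow_emb, ey)     # F_n^QF(y)
    return torch.stack((fx, fy), dim=1)
```

The refinement with the pixel-wise flow \(\mathcal{F}^{PF}\), followed by convolution and upsampling, is then applied as formalised in Eq. (4) below.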
To obtain the final flow map \(\mathcal{F}_{n}^{j\to k}\in\mathbb{R}^{2\times H\times W}\) for the \(n^{th}\) query, we apply a 2D convolutional layer to the concatenated \(\mathcal{F}_{n}^{QF}\) and \(\mathcal{F}^{PF}\), and then upsample the flow map to the original resolution: \[\mathcal{F}_{n}^{j\to k}=\mathrm{Upsample}\left(\mathrm{Conv2D}\left( \left[\mathcal{F}_{n}^{QF};\mathcal{F}^{PF}\right]\right)\right). \tag{4}\]

**Remark.** We use \(N\) flow queries, initialized with the learned per-segment queries \(\mathcal{Q}_{\mathcal{O}}^{k}\) from the key frame, to capture the motion of each mask predicted on the key frame. In contrast to existing optical flow estimation methods, which model pixel-to-pixel dense correspondence, we propose a novel query-based flow that incorporates segment-level motion information between adjacent frames, enabling more accurate mask propagation in VSS.

## 4 Experiments

### Experimental Setup

**Datasets.** We evaluate our method on two benchmark datasets: VSPW [42] and Cityscapes [9]. VSPW is the largest video semantic segmentation benchmark, consisting of 2,806 training clips (198,244 frames), 343 validation clips (24,502 frames), and 387 test clips (28,887 frames). Each video in VSPW has densely annotated frames at a rate of 15 frames per second, covering 124 semantic categories across both indoor and outdoor scenes. Additionally, we present results on the Cityscapes dataset, which provides annotations every 30 frames. This dataset serves as an additional evaluation to showcase the effectiveness of our proposed method.

**Implementation details.** By default, all experiments are trained with a batch size of 16 on 8 NVIDIA GPUs. Note that pre-training is important for DETR-like segmentors. For the VSPW dataset, we pretrain Mask2Former on the image semantic segmentation task, serving as the per-frame baseline. For Cityscapes, we adopt pretrained Mask2Former models with ResNet [18] and Swin Transformer [39] backbones from [7]. For the motion encoder, we utilize the weights of the FlowNet [12] encoder, which is pre-trained on the synthetic Flying Chairs dataset; the motion decoder and flow head are randomly initialized. We freeze most parameters of the pretrained Mask2Former, fine-tuning only its classification and mask heads, as well as the proposed flow module. We apply the loss functions of Mask2Former, including a classification loss and a binary mask loss, to the class embeddings and warped mask embeddings, respectively. All models are trained with the AdamW optimizer [41] for a maximum of 90k iterations, using the polynomial learning rate decay schedule [4] with an initial learning rate of 5e-5.

**Compared methods.** To validate the effectiveness of our proposed model, we include the following methods in our study: _Per-frame_: employing Mask2Former [7] to independently predict masks for each video frame. _Optical flow_: warping mask predictions from key frames to non-key frames using optical flow estimated by a lightweight optical flow model, _i.e._, FlowNet [12]. _Query-random_: implementing the proposed flow estimation method with randomly initialized flow queries. _Query-learned_: implementing the proposed flow estimation method using the learned queries \(\mathcal{Q}_{\mathcal{O}}^{k}\) as initialization. _Query-for-PF_: using the pixel-wise flow \(\mathcal{F}^{PF}\), which incorporates segment-level information from the learned flow queries, to warp predictions.
**Evaluation metrics.** Following previous works, we use mean Intersection over Union (mIoU) at single-scale inference and Weighted IoU (WIoU) to evaluate the segmentation performance. We also compare models in terms of their computational complexity in FLOPs to evaluate the efficiency of these VSS models. We calculate FLOPs for each single frame by averaging over a video clip with 15 frames, and fix the image resolution to \(480\times 853\) and \(1024\times 2048\) for VSPW and Cityscapes, respectively. For the VSPW dataset, we adopt video consistency (VC) [42] to evaluate the visual smoothness of the predicted segmentation maps across the temporal dimension.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline Methods & Backbone & mIoU\(\uparrow\) & WIoU\(\uparrow\) & mVC\({}_{8}\uparrow\) & mVC\({}_{16}\uparrow\) & GFLOPs\(\downarrow\) & Params(M)\(\downarrow\) & FPS\(\uparrow\) \\ \hline DeepLab3+ [4] & R101 & 34.7 & 58.8 & 83.2 & 78.2 & 379.0 & 62.7 & 9.25 \\ UperNet [60] & R101 & 36.5 & 58.6 & 82.6 & 76.1 & 403.6 & 83.2 & 16.05 \\ PSPNet [76] & R101 & 36.5 & 58.1 & 84.2 & 79.6 & 401.8 & 70.5 & 13.84 \\ OCRNet [70] & R101 & 36.7 & 59.2 & 84.0 & 79.0 & 361.7 & 58.1 & 14.39 \\ ETC [38] & OCRNet & 37.5 & 59.1 & 84.1 & 79.1 & 361.7 & - & - \\ NetWarp [15] & OCRNet & 37.5 & 58.9 & 84.0 & 79.0 & 1207 & - & - \\ TCB [42] & R101 & 37.8 & 59.5 & 87.9 & 84.0 & 1692 & - & - \\ Segformer [62] & MiT-B2 & 43.9 & 63.7 & 86.0 & 81.2 & 100.8 & 24.8 & 16.16 \\ Segformer & MiT-B5 & 48.9 & 65.1 & 87.8 & 83.7 & 185.0 & 82.1 & 9.48 \\ CFFM-VSS [53] & MiT-B2 & 44.9 & 64.9 & 89.8 & 85.8 & 143.2 & 26.5 & 10.08 \\ CFFM-VSS & MiT-B5 & 49.3 & 65.8 & 90.8 & 87.1 & 413.5 & 85.5 & 4.58 \\ MRCFA [54] & MiT-B2 & 45.3 & 64.7 & 90.3 & 86.2 & 127.9 & 27.3 & 10.7 \\ MRCFA & MiT-B5 & 49.9 & 66.0 & 90.9 & 87.4 & 373.0 & 84.5 & 5.02 \\ \hline Mask2Former [7] & R50 & 38.5 & 60.2 & 81.3 & 76.4 & 110.6 & 44.0 & 19.4 \\ Mask2Former & R101 & 39.3 & 60.1 & 82.5 & 77.6 & 141.3 & 63.0 & 16.90 \\ Mask2Former & Swin-T & 41.2 & 62.6 & 84.5 & 80.0 & 114.4 & 47.4 & 17.13 \\ Mask2Former & Swin-S & 42.1 & 63.1 & 84.7 & 79.3 & 152.2 & 68.9 & 14.52 \\ Mask2Former & Swin-B & 54.1 & 70.3 & 86.6 & 82.9 & 223.5 & 107.1 & 11.45 \\ Mask2Former & Swin-L & 56.1 & 70.8 & 87.6 & 84.1 & 402.7 & 215.1 & 8.41 \\ MPVSS & R50 & 37.5 & 59.0 & 84.1 & 77.2 & 38.9 & 84.1 & 33.93 \\ MPVSS & R101 & 38.8 & 59.0 & 84.8 & 79.6 & 45.1 & 103.1 & 32.38 \\ MPVSS & Swin-T & 39.9 & 62.0 & 85.9 & 80.4 & 39.7 & 114.0 & 32.86 \\ MPVSS & Swin-S & 40.4 & 62.0 & 86.0 & 80.7 & 47.3 & 108.0 & 30.61 \\ MPVSS & Swin-B & 52.6 & 68.4 & 89.5 & 85.9 & 61.5 & 147.0 & 27.38 \\ MPVSS & Swin-L & 53.9 & 69.1 & 89.6 & 85.8 & 97.3 & 255.4 & 23.22 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparisons with state-of-the-art methods on the VSPW dataset. We report mean IoU (mIoU) and Weighted IoU (WIoU) for the performance evaluation and Video Consistency (VC) for the temporal consistency comparison. We measure FLOPs (G) for computational costs, averaging over a 15-frame clip with resolution of \(480\times 853\). Frames per second (FPS) is measured on a single NVIDIA V100 GPU with 3 repeated runs.

### Comparison with State-of-the-art Methods

The comparisons on the VSPW dataset are shown in Tab. 1. We also report the backbone used by each method and the computational complexity (GFLOPs), averaged over a video clip of 15 frames, with each frame of resolution 480\(\times\)853. For our proposed models, we use 5 as the default key frame interval for comparison.
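For intuition on how the key frame interval trades accuracy for computation, the average per-frame cost mixes one expensive key frame with several cheap non-key frames. The back-of-the-envelope sketch below illustrates this; the non-key cost is our rough estimate implied by the reported numbers, not a measured value.

```python
def amortized_gflops(cost_key, cost_nonkey, interval):
    """Average per-frame cost when one frame out of `interval` runs the
    full segmentor and the remaining frames run only the flow module."""
    return (cost_key + (interval - 1) * cost_nonkey) / interval

# Illustration with Tab. 1 (Swin-L): Mask2Former costs 402.7 GFLOPs per
# frame, and MPVSS reports a 97.3 GFLOPs average at interval 5, which
# implies roughly (5 * 97.3 - 402.7) / 4 ~= 21 GFLOPs per non-key frame.
print(amortized_gflops(402.7, 21.0, 5))   # ~97.3
```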
We have the following analysis: **1)** The proposed \(\mathtt{MPVSS}\) achieves comparable accuracy with significantly reduced computational cost when compared to the strong baseline method, Mask2Former [7]. Specifically, compared to Mask2Former, our \(\mathtt{MPVSS}\) reduces the computational cost in terms of FLOPs by 71.7G, 96.2G, 74.7G, 104.9G, 162.0G, and 305.4G on R50, R101, Swin-T, Swin-S, Swin-B, and Swin-L backbones, respectively, while only experiencing 1.0%, 0.5%, 1.3%, 1.7%, 1.5%, and 2.2% degradation in the mIoU score. These results demonstrate a promising trade-off between accuracy and efficiency of our framework. **2)** Our models with different backbones perform on par with the state-of-the-art VSS methods on the VSPW dataset with far fewer FLOPs. Specifically, our \(\mathtt{MPVSS}\) model with the Swin-L backbone achieves an impressive mIoU of 53.9%, surpassing CFFM-VSS with the MiT-B5 backbone by 4.6%. Notably, our approach achieves this performance with only 24% of the computational cost in terms of FLOPs. **3)** At comparable computational cost, our model consistently outperforms other state-of-the-art methods. For instance, \(\mathtt{MPVSS}\) with the Swin-L backbone surpasses Segformer with the MiT-B2 backbone by 10% mIoU at a similar cost of around 100 GFLOPs. These compelling results highlight the exceptional performance and efficiency of our \(\mathtt{MPVSS}\) against the compared VSS approaches.

In terms of temporal consistency, our proposed \(\mathtt{MPVSS}\) achieves comparable mVC scores when compared to approaches that explicitly exploit cross-frame temporal context [54] or incorporate temporal consistency constraints during training [38]. We provide an explanation of the VC score and a discussion in the supplementary material. We further evaluate our \(\mathtt{MPVSS}\) on the Cityscapes dataset and achieve state-of-the-art results with relatively low computational complexity.
Notably, the Cityscapes dataset only provides sparse annotations, for one out of every 30 frames in each video. Therefore, for a fair comparison with previous efficient VSS methods, we report the results following the same evaluation process and compute their average FLOPs per frame over all video frames. Specifically, compared to Mask2Former, our MPVSS reduces the computational cost in terms of FLOPs by 0.36T, 0.48T, 0.37T, 0.52T, 0.78T and 1.5T on R50, R101, Swin-T, Swin-S, Swin-B, and Swin-L backbones, respectively, with only 1.0%, 1.9%, 1.4%, 1.3%, 1.6% and 1.7% degradation in the mIoU score. The strong accuracy-efficiency trade-offs demonstrate that our proposed framework also works on sparsely annotated video data.

\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline Methods & Backbone & mIoU & GFLOPs & Params(M) & FPS \\ \hline FCN [40] & R101 & 76.6 & 2203.3 & 68.5 & 2.83 \\ PSPNet [76] & R101 & 78.5 & 2048.9 & 67.9 & 2.88 \\ DFF [83] & R101 & 68.7 & 100.8 & - & - \\ DVSN [67] & R101 & 70.3 & 978.4 & - & - \\ Accel [22] & R101 & 72.1 & 824.4 & - & - \\ ETC [38] & R18 & 71.1 & 434.1 & - & - \\ SegFormer [62] & MiT-B1 & 78.5 & 243.7 & 13.8 & 20.7 \\ SegFormer & MiT-B5 & 82.4 & 1460.4 & 84.7 & 7.20 \\ \hline CFFM-VSS [53] & MiT-B0 & 74.0 & 80.7 & 4.6 & 15.79 \\ CFFM-VSS & MiT-B1 & 75.1 & 158.7 & 15.4 & 11.71 \\ MRCFA [54] & MiT-B0 & 72.8 & 77.5 & 4.2 & 16.55 \\ MRCFA & MiT-B1 & 75.1 & 145 & 14.9 & 12.97 \\ \hline Mask2Former [7] & R50 & 79.4 & 529.9 & 44.0 & 6.58 \\ Mask2Former & R101 & 80.1 & 685.5 & 63.0 & 5.68 \\ Mask2Former & Swin-T & 82.1 & 543.6 & 47.4 & 5.41 \\ Mask2Former & Swin-S & 82.6 & 730.1 & 68.7 & 4.31 \\ Mask2Former & Swin-B & 83.3 & 1057.0 & 107.0 & 3.26 \\ Mask2Former & Swin-L & 83.3 & 1911.3 & 215.0 & 2.11 \\ Mask2Former-DFF & R101 & 77.1 & 457.4 & 101.7 & 7.14 \\ Mask2Former-DFF & Swin-B & 79.9 & 525.3 & 145.7 & 6.09 \\ Mask2Former-Accel & R101+R50 & 78.9 & 594.8 & 145.7 & 5.78 \\ Mask2Former-Accel & Swin-B+Swin-T & 81.4 & 680.1 & 193.1 & 4.40 \\ \(\mathtt{MPVSS}\) & R50 & 78.4 & 173.2 & 84.1 & 13.43 \\ \(\mathtt{MPVSS}\) & R101 & 78.2 & 204.3 & 103.1 & 12.55 \\ \(\mathtt{MPVSS}\) & Swin-T & 80.7 & 175.9 & 114.0 & 12.33 \\ \(\mathtt{MPVSS}\) & Swin-S & 81.3 & 213.2 & 108.0 & 10.98 \\ \(\mathtt{MPVSS}\) & Swin-B & 81.7 & 278.6 & 147.0 & 9.54 \\ \(\mathtt{MPVSS}\) & Swin-L & 81.6 & 449.5 & 255.4 & 7.24 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance comparisons with the state-of-the-art VSS methods on Cityscapes. We report mIoU for the semantic segmentation performance and FLOPs (G) for the computational cost comparison, averaged over a video clip of 15 frames with a resolution of \(1024\times 2048\). Frames per second (FPS) is measured on a single NVIDIA V100 GPU with 3 repeated runs.

### Ablation Study

**Effects of the query-based flow.** We compare the performance of the proposed query-based flow and traditional optical flow in Table 3(a). Overall, our approach demonstrates superior performance compared with the models relying solely on optical flow for mask propagation from key frames to non-key frames. To be specific, the utilization of the proposed query-based flow for mask warping consistently boosts the performance by 1.4%, 0.6%, 1.5%, 0.3%, 0.8% and 0.7% in terms of mIoU on R50, R101, Swin-T, Swin-S, Swin-B and Swin-L, respectively, which leads to less degradation relative to the per-frame baseline compared to models using pixel-wise optical flow.
Moreover, directly copying the mask predictions from the key frame leads to a clear performance drop.

**Effects of each component.** In Table 3, we verify the effect of each component of our query-based flow design. We conduct experiments on VSPW using Swin-T and Swin-B. The performance of _Query-random_ on the two backbones is slightly inferior to using optical flow. This suggests that, without per-segment query initialization, the query-based flow struggles to retrieve specific motion information from the motion features for one-to-one mask warping from the key frame, resulting in suboptimal performance. Then, by using key-frame queries as the initialization for flow queries, we observe an improvement of 1.1% and 1.5% in mIoU for _Query-learned_ on Swin-T and Swin-B respectively, compared to _Query-random_. Furthermore, we develop a variant design, _Query-for-PF_, which introduces an additional performance gain of 0.1% and 0.2% over optical flow, respectively. These results underscore two findings: **1)** learning query-based flow for each segment requires the learned queries from the key frame for extracting meaningful motion features for each segment; **2)** integrating segment-level information benefits the learning of pixel-wise flow. Therefore, we incorporate a refinement step using the enhanced pixel-wise flow when producing each query-based flow, which results in the best performance on both backbones. These outcomes highlight that warping mask predictions using our query-based flow consistently outperforms methods using traditional optical flow for VSS.

**Effects of the key frame interval.** In Fig. 3(a) and Fig. 3(b), we show the mIoU scores and TFLOPs versus the key frame interval for our MPVSS with different backbones, respectively. Overall, all results achieve a significant FLOPs reduction with a modest accuracy drop, smoothly trading accuracy for computational resources and flexibly fitting different application needs. In Fig. 3(c), we present the performance of warping mask predictions using our query-based flow and traditional optical flow, as well as the direct-copying variant, on Swin-T. We observe that, when the key frame interval increases from 2 to 10, the performance of the proposed query-based flow degrades only slightly, by 1.1% and 0.7% in terms of mIoU and WIoU, while the scores decrease by 2.63% and 2.96% using optical flow. These results indicate that the proposed query-based flow is more robust for capturing long-term temporal changes in videos. We provide comparisons on other backbones in the supplementary material.

Table 3: Ablation study on the VSPW dataset.

### Qualitative Results

In Fig. 4, we present two examples from the VSPW dataset to visually compare the prediction quality of the per-frame baseline, the method using optical flow, and the proposed MPVSS. We observe that the proposed query-based flow exhibits superior performance in capturing the movement of small objects when compared to optical flow. Additionally, the proposed mask propagation framework demonstrates enhanced category consistency compared to the per-frame baseline. For instance, the per-frame baseline exhibits category inconsistency as each frame is independently predicted. This highlights the advantage of our proposed approach in maintaining category coherence across consecutive frames.

## 5 Conclusion and Future Work

In this paper, we have presented a simple yet effective mask propagation framework, dubbed MPVSS, for efficient VSS.
Specifically, we have employed a strong query-based image segmentor to process key frames and generate accurate binary masks and class predictions. We have then proposed to estimate a specific flow map for each segment-level mask prediction of the key frame. Finally, the mask predictions from key frames are warped to other non-key frames via the proposed query-based flow maps. Extensive experiments on VSPW and Cityscapes have demonstrated that our MPVSS achieves SOTA accuracy and efficiency trade-offs. Future work could explore extending MPVSS to other video dense prediction tasks, _e.g._, video object and instance segmentation.

**Limitations and broader impacts.** Although our framework significantly reduces the computational costs of processing videos, the new module increases the number of parameters in the models. Additionally, fine-tuning the new module or optimizing hyperparameters may be needed to achieve the best performance, which can lead to increased carbon emissions, indicating a potential negative societal impact.

Figure 4: Qualitative results. We present two examples from the VSPW dataset. In each example, we display a series of consecutive frames from left to right. From top to bottom: (a) the ground truth; (b) the predictions of the per-frame baseline; (c) the predictions of the method using optical flow; (d) the predictions of the proposed MPVSS.

Figure 3: Effects of the key frame interval. We compare the trade-off between accuracy and efficiency for various backbones and warping strategies with respect to the key frame interval. The number of FLOPs (T) is calculated based on a clip of 15 frames with a resolution of \(480\times 853\).
2303.16699
HyperLTL Satisfiability Is Highly Undecidable, HyperCTL* is Even Harder
Temporal logics for the specification of information-flow properties are able to express relations between multiple executions of a system. The two most important such logics are HyperLTL and HyperCTL*, which generalise LTL and CTL* by trace quantification. It is known that this expressiveness comes at a price, i.e. satisfiability is undecidable for both logics. In this paper we settle the exact complexity of these problems, showing that both are in fact highly undecidable: we prove that HyperLTL satisfiability is $\Sigma_1^1$-complete and HyperCTL* satisfiability is $\Sigma_1^2$-complete. These are significant increases over the previously known lower bounds and the first upper bounds. To prove $\Sigma_1^2$-membership for HyperCTL*, we prove that every satisfiable HyperCTL* sentence has a model that is equinumerous to the continuum, the first upper bound of this kind. We also prove this bound to be tight. Furthermore, we prove that both countable and finitely-branching satisfiability for HyperCTL* are as hard as truth in second-order arithmetic, i.e. still highly undecidable. Finally, we show that the membership problem for every level of the HyperLTL quantifier alternation hierarchy is $\Pi_1^1$-complete.
Marie Fortin, Louwe B. Kuijer, Patrick Totzke, Martin Zimmermann
2023-03-29T13:51:41Z
http://arxiv.org/abs/2303.16699v2
# HyperLTL Satisfiability Is Highly Undecidable, HyperCTL* Is Even Harder

###### Abstract

Temporal logics for the specification of information-flow properties are able to express relations between multiple executions of a system. The two most important such logics are HyperLTL and HyperCTL*, which generalise LTL and CTL* by trace quantification. It is known that this expressiveness comes at a price, i.e. satisfiability is undecidable for both logics. In this paper we settle the exact complexity of these problems, showing that both are in fact highly undecidable: we prove that HyperLTL satisfiability is \(\Sigma^{1}_{1}\)-complete and HyperCTL* satisfiability is \(\Sigma^{2}_{1}\)-complete. These are significant increases over the previously known lower bounds and the first upper bounds. To prove \(\Sigma^{2}_{1}\)-membership for HyperCTL*, we prove that every satisfiable HyperCTL* sentence has a model that is equinumerous to the continuum, the first upper bound of this kind. We also prove this bound to be tight. Furthermore, we prove that both countable and finitely-branching satisfiability for HyperCTL* are as hard as truth in second-order arithmetic, i.e. still highly undecidable. Finally, we show that the membership problem for every level of the HyperLTL quantifier alternation hierarchy is \(\Pi^{1}_{1}\)-complete.

Key words and phrases: HyperLTL, HyperCTL*, Satisfiability, Analytical Hierarchy

## 1. Introduction

Most classical temporal logics like LTL and CTL* refer to a single execution trace at a time while information-flow properties, which are crucial for security-critical systems, require reasoning about multiple executions of a system. Clarkson and Schneider [10] coined the term _hyperproperties_ for such properties which, structurally, are sets _of sets_ of traces.
Just like ordinary trace and branching-time properties, hyperproperties can be specified using temporal logics, most importantly HyperLTL and HyperCTL\({}^{*}\), which generalise LTL and CTL\({}^{*}\), respectively, by trace quantification.
**Our Contributions.** In this paper, we settle the complexity of the satisfiability problems for HyperLTL and HyperCTL\({}^{*}\) by determining exactly how undecidable they are. That is, we provide matching lower and upper bounds in terms of the analytical hierarchy and beyond, where decision problems (encoded as subsets of \(\mathbb{N}\)) are classified based on their definability by formulas of higher-order arithmetic, namely by the type of objects one can quantify over and by the number of alternations of such quantifiers. We refer to Rogers' textbook [14] for fully formal definitions. For our purposes, it suffices to recall the following classes. \(\Sigma^{0}_{1}\) contains the sets of natural numbers of the form \[\{x\in\mathbb{N}\mid\exists x_{0}.\ \cdots\exists x_{k}.\ \psi(x,x_{0},\ldots,x_{k })\}\] where quantifiers range over natural numbers and \(\psi\) is a quantifier-free arithmetic formula. The notation \(\Sigma^{0}_{1}\) signifies that there is a single block of existential quantifiers (the subscript 1) ranging over natural numbers (type 0 objects, explaining the superscript 0). Analogously, \(\Sigma^{1}_{1}\) is induced by arithmetic formulas with existential quantification of type 1 objects (functions mapping natural numbers to natural numbers) and arbitrary (universal and existential) quantification of type 0 objects. Finally, \(\Sigma^{2}_{1}\) is induced by arithmetic formulas with existential quantification of type 2 objects (functions mapping type 1 objects to natural numbers) and arbitrary quantification of type 0 and type 1 objects.
So, \(\Sigma^{0}_{1}\) is part of the first level of the arithmetic hierarchy, \(\Sigma^{1}_{1}\) is part of the first level of the analytical hierarchy, while \(\Sigma^{2}_{1}\) is not even analytical. In terms of this classification, we prove that HyperLTL satisfiability is \(\Sigma^{1}_{1}\)-complete while HyperCTL\({}^{*}\) satisfiability is \(\Sigma^{2}_{1}\)-complete, thereby settling the complexity of both problems and showing that they are highly undecidable. In both cases, this significantly raises the known lower bound and provides the first upper bound.

First, let us consider HyperLTL satisfiability. The \(\Sigma^{1}_{1}\) lower bound is a straightforward reduction from the recurrent tiling problem, a standard \(\Sigma^{1}_{1}\)-complete problem asking whether \(\mathbb{N}\times\mathbb{N}\) can be tiled by a given finite set of tiles. So, let us consider the upper bound: \(\Sigma^{1}_{1}\) allows quantification over type 1 objects: functions from natural numbers to natural numbers, or, equivalently, over sets of natural numbers, i.e. countable objects. On the other hand, HyperLTL formulas are evaluated over sets of infinite traces, i.e. uncountable objects. Thus, to show that quantification over type 1 objects is sufficient, we need to apply a result of Finkbeiner and Zimmermann proving that every satisfiable HyperLTL formula has a countable model [10]. Then, we can prove \(\Sigma^{1}_{1}\)-membership by expressing the existence of a model and the existence of appropriate Skolem functions for the trace quantifiers by type 1 quantification. We also prove that the satisfiability problem remains \(\Sigma^{1}_{1}\)-complete when restricted to ultimately periodic traces, or, equivalently, when restricted to finite traces.

Then, we turn our attention to HyperCTL\({}^{*}\) satisfiability. Recall that HyperCTL\({}^{*}\) formulas are evaluated over (possibly infinite) transition systems, which can be much larger than type 2 objects, as the cardinality of type 2 objects is bounded by \(\mathfrak{c}\), the cardinality of the continuum. Hence, to obtain our upper bound on the complexity we need, just like in the case of HyperLTL, an upper bound on the size of minimal models of satisfiable HyperCTL\({}^{*}\) formulas. To this end, we generalise the proof of Finkbeiner and Zimmermann to HyperCTL\({}^{*}\), showing that every satisfiable HyperCTL\({}^{*}\) formula has a model of size \(\mathfrak{c}\). We also exhibit a satisfiable HyperCTL\({}^{*}\) formula \(\varphi_{\mathfrak{c}}\) whose models all have at least cardinality \(\mathfrak{c}\), as they have to encode all subsets of \(\mathbb{N}\) by disjoint paths. Thus, our upper bound \(\mathfrak{c}\) is tight. With this upper bound on the cardinality of models, we are able to prove \(\Sigma^{2}_{1}\)-membership of HyperCTL\({}^{*}\) satisfiability by expressing with type 2 quantification the existence of a model and the existence of a winning strategy in the induced model checking game. The matching lower bound is proven by directly encoding the arithmetic formulas inducing \(\Sigma^{2}_{1}\) as instances of the HyperCTL\({}^{*}\) satisfiability problem. To this end, we use the formula \(\varphi_{\mathfrak{c}}\) whose models have for each subset \(A\subseteq\mathbb{N}\) a path encoding \(A\).
Now, quantification over type 0 objects (natural numbers) is simulated by quantification of a path encoding a singleton set, quantification over type 1 objects (which can be assumed to be sets of natural numbers) is simulated by quantification over the paths encoding such subsets, and existential quantification over type 2 objects (which can be assumed to be subsets of \(2^{\mathbb{N}}\)) is simulated by the choice of the model, i.e. a model encodes \(k\) subsets of \(2^{\mathbb{N}}\) if there are \(k\) existential type 2 quantifiers. Finally, the arithmetic operations can easily be implemented in HyperLTL, and therefore also in HyperCTL\({}^{*}\).

Using variations of these techniques, we also show that HyperCTL\({}^{*}\) satisfiability restricted to countable or to finitely branching models is equivalent to truth of second-order arithmetic, i.e. the question whether a given sentence of second-order arithmetic is satisfied in the structure \((\mathbb{N},0,1,+,\cdot,<)\). Restricting the class of models makes the problem simpler, but it is still highly undecidable.

After settling the complexity of satisfiability, we turn our attention to the HyperLTL quantifier alternation hierarchy and its relation to satisfiability. Rabe remarks that the hierarchy is strict [14]. On the other hand, Mascle and Zimmermann show that every HyperLTL formula has a polynomial-time computable equi-satisfiable formula with one quantifier alternation [13]. Here, we present a novel proof of strictness by embedding the FO\([<]\) alternation hierarchy, which is also strict [1, 10]. We use our construction to prove that for every \(n>0\), deciding whether a given formula is equivalent to a formula with at most \(n\) quantifier alternations is \(\Pi^{1}_{1}\)-complete (\(\Pi^{1}_{1}\) is the co-class of \(\Sigma^{1}_{1}\), i.e. containing the complements of sets in \(\Sigma^{1}_{1}\)).

## 2. Preliminaries

Fix a finite set AP of atomic propositions. A _trace_ over AP is a map \(t\colon\mathbb{N}\to 2^{\mathrm{AP}}\), denoted by \(t(0)t(1)t(2)\cdots\). It is _ultimately periodic_ if \(t=x\cdot y^{\omega}\) for some \(x,y\in(2^{\mathrm{AP}})^{+}\), i.e. there are \(s,p>0\) with \(t(n)=t(n+p)\) for all \(n\geq s\). The set of all traces over AP is \((2^{\mathrm{AP}})^{\omega}\).

A transition system \(\mathcal{T}=(V,E,v_{I},\lambda)\) consists of a set \(V\) of vertices, a set \(E\subseteq V\times V\) of (directed) edges, an initial vertex \(v_{I}\in V\), and a labelling \(\lambda\colon V\to 2^{\mathrm{AP}}\) of the vertices by sets of atomic propositions. A path \(\rho\) through \(\mathcal{T}\) is an infinite sequence \(\rho(0)\rho(1)\rho(2)\cdots\) of vertices with \((\rho(n),\rho(n+1))\in E\) for every \(n\geq 0\). The trace of \(\rho\) is defined as \(\lambda(\rho(0))\lambda(\rho(1))\lambda(\rho(2))\cdots\).
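For illustration, these definitions can be transcribed almost literally into code; the following minimal Python sketch (class and helper names are our own, chosen only for this example) represents a finite transition system and reads off the trace of a path prefix:

```python
from dataclasses import dataclass

@dataclass
class TransitionSystem:
    vertices: set     # V
    edges: set        # E, a set of (v, v') pairs
    initial: object   # v_I
    label: dict       # the labelling: vertex -> frozenset of propositions

def is_path_prefix(ts, rho):
    """Check that consecutive vertices of rho are connected by edges."""
    return all((u, v) in ts.edges for u, v in zip(rho, rho[1:]))

def trace_prefix(ts, rho):
    """The trace of a path prefix: the sequence of vertex labels."""
    return [ts.label[v] for v in rho]

# Example over AP = {a}: two vertices alternating between {a} and {}.
ts = TransitionSystem(
    vertices={0, 1},
    edges={(0, 1), (1, 0)},
    initial=0,
    label={0: frozenset({"a"}), 1: frozenset()},
)
rho = [0, 1, 0, 1]            # prefix of the unique path from v_I
assert is_path_prefix(ts, rho)
print(trace_prefix(ts, rho))  # prefix of the ultimately periodic trace ({a}{})^omega
```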
The semantics of HyperLTL is defined with respect to a _trace assignment_, a partial mapping \(\Pi\colon\mathcal{V}\to(2^{\mathrm{AP}})^{\omega}\). The assignment with empty domain is denoted by \(\Pi_{\emptyset}\). Given a trace assignment \(\Pi\), a variable \(\pi\), and a trace \(t\) we denote by \(\Pi[\pi\to t]\) the assignment that coincides with \(\Pi\) everywhere but at \(\pi\), which is mapped to \(t\). Furthermore, \(\Pi[j,\infty)\) denotes the trace assignment mapping every \(\pi\) in \(\Pi\)'s domain to \(\Pi(\pi)(j)\Pi(\pi)(j+1)\Pi(\pi)(j+2)\cdots\), its suffix from position \(j\) onwards. For sets \(T\) of traces and trace assignments \(\Pi\) we define * \((T,\Pi)\models a_{\pi}\) if \(a\in\Pi(\pi)(0)\), * \((T,\Pi)\models\neg\psi\) if \((T,\Pi)\not\models\psi\), * \((T,\Pi)\models\psi_{1}\vee\psi_{2}\) if \((T,\Pi)\models\psi_{1}\) or \((T,\Pi)\models\psi_{2}\), * \((T,\Pi)\models\mathbf{X}\,\psi\) if \((T,\Pi[1,\infty))\models\psi\), * \((T,\Pi)\models\psi_{1}\,\mathbf{U}\,\psi_{2}\) if there is a \(j\geq 0\) such that \((T,\Pi[j,\infty))\models\psi_{2}\) and for all \(0\leq j^{\prime}<j\): \((T,\Pi[j^{\prime},\infty))\models\psi_{1}\), * \((T,\Pi)\models\exists\pi.\)\(\varphi\) if there exists a trace \(t\in T\) such that \((T,\Pi[\pi\to t])\models\varphi\), and * \((T,\Pi)\models\forall\pi.\)\(\varphi\) if for all traces \(t\in T\): \((T,\Pi[\pi\to t])\models\varphi\). We say that \(T\)_satisfies_ a sentence \(\varphi\) if \((T,\Pi_{\emptyset})\models\varphi\). In this case, we write \(T\models\varphi\) and say that \(T\) is a _model_ of \(\varphi\). Two HyperLTL sentences \(\varphi\) and \(\varphi^{\prime}\) are equivalent if \(T\models\varphi\) if and only if \(T\models\varphi^{\prime}\) for every set \(T\) of traces. Although HyperLTL sentences are required to be in prenex normal form, they are closed under Boolean combinations, which can easily be seen by transforming such a formula into an equivalent formula in prenex normal form. ### HyperCTL\({}^{*}\) The formulas of HyperCTL\({}^{*}\) are given by the grammar \[\varphi\,{::=}\,a_{\pi}\mid\neg\varphi\mid\varphi\vee\varphi\mid\mathbf{X}\, \varphi\mid\varphi\,\mathbf{U}\,\varphi\mid\exists\pi.\ \varphi\mid\forall\pi.\ \varphi\] where \(a\) ranges over atomic propositions in AP and where \(\pi\) ranges over a fixed countable set \(\mathcal{V}\) of _(path) variables_, and where we require that each temporal operator appears in the scope of a path quantifier. Again, other Boolean connectives and temporal operators are derived as usual. Sentences are formulas without free variables. Let \(\mathcal{T}\) be a transition system. The semantics of HyperCTL\({}^{*}\) is defined with respect to a _path assignment_, a partial mapping \(\Pi\) from variables in \(\mathcal{V}\) to paths of \(\mathcal{T}\). The assignment with empty domain is denoted by \(\Pi_{\emptyset}\). Given a path assignment \(\Pi\), a variable \(\pi\), and a path \(\rho\) we denote by \(\Pi[\pi\to\rho]\) the assignment that coincides with \(\Pi\) everywhere but at \(\pi\), which is mapped to \(\rho\). Furthermore, \(\Pi[j,\infty)\) denotes the path assignment mapping every \(\pi\) in \(\Pi\)'s domain to \(\Pi(\pi)(j)\Pi(\pi)(j+1)\Pi(\pi)(j+2)\cdots\), its suffix from position \(j\) onwards. 
For transition systems \(\mathcal{T}\) and path assignments \(\Pi\) we define * \((\mathcal{T},\Pi)\models a_{\pi}\) if \(a\in\lambda(\Pi(\pi)(0))\), where \(\lambda\) is the labelling function of \(\mathcal{T}\), * \((\mathcal{T},\Pi)\models\neg\psi\) if \((\mathcal{T},\Pi)\not\models\psi\), * \((\mathcal{T},\Pi)\models\psi_{1}\vee\psi_{2}\) if \((\mathcal{T},\Pi)\models\psi_{1}\) or \((\mathcal{T},\Pi)\models\psi_{2}\), * \((\mathcal{T},\Pi)\models\mathbf{X}\,\psi\) if \((\mathcal{T},\Pi[1,\infty))\models\psi\), * \((\mathcal{T},\Pi)\models\psi_{1}\,\mathbf{U}\,\psi_{2}\) if there exists a \(j\geq 0\) such that \((\mathcal{T},\Pi[j,\infty))\models\psi_{2}\) and for all \(0\leq j^{\prime}<j\): \((\mathcal{T},\Pi[j^{\prime},\infty))\models\psi_{1}\), * \((\mathcal{T},\Pi)\models\exists\pi.\)\(\varphi\) if there exists a path \(\rho\) of \(\mathcal{T}\), starting in rcnt(\(\Pi\)), such that \((\mathcal{T},\Pi[\pi\to\rho])\models\varphi\), and * \((\mathcal{T},\Pi)\models\forall\pi.\)\(\varphi\) if for all paths \(\rho\) of \(\mathcal{T}\) starting in rcnt(\(\Pi\)): \((\mathcal{T},\Pi[\pi\to\rho])\models\varphi\). Here, \(\operatorname{rcnt}(\Pi)\) is the initial vertex of \(\Pi(\pi)\), where \(\pi\) is the path variable most recently added to or changed in \(\Pi\), and the initial vertex of \(\mathcal{T}\) if \(\Pi\) is empty.1 We say that \(\mathcal{T}\)_satisfies_ a sentence \(\varphi\) if \((\mathcal{T},\Pi_{\emptyset})\models\varphi\). In this case, we write \(\mathcal{T}\models\varphi\) and say that \(\mathcal{T}\) is a _model_ of \(\varphi\). Footnote 1: For the sake of simplicity, we refrain from formalising this notion properly, which would require to keep track of the order in which variables are added to or changed in \(\Pi\). ### Complexity Classes for Undecidable Problems A type \(0\) object is a natural number \(n\in\mathbb{N}\), a type \(1\) object is a function \(f\colon\mathbb{N}\to\mathbb{N}\), and a type \(2\) object is a function \(f\colon(\mathbb{N}\to\mathbb{N})\to\mathbb{N}\). As usual, predicate logic with quantification over type \(0\) objects (first-order quantifiers) is called first-order logic. Second- and third-order logic are defined similarly. We consider formulas of arithmetic, i.e. predicate logic with signature \((0,1,+,\cdot,<)\) evaluated over the natural numbers. With a single free variable of type \(0\), such formulas define sets of natural numbers (see, e.g. Rogers [14] for more details): * \(\Sigma^{0}_{1}\) contains the sets of the form \(\{x\in\mathbb{N}\mid\exists x_{0}.\ \cdots\exists x_{k}.\ \psi(x,x_{0},\ldots,x_{k})\}\) where \(\psi\) is a quantifier-free arithmetic formula and the \(x_{i}\) are variables of type \(0\). * \(\Sigma^{1}_{1}\) contains the sets of the form \(\{x\in\mathbb{N}\mid\exists x_{0}.\ \cdots\exists x_{k}.\ \psi(x,x_{0},\ldots,x_{k})\}\) where \(\psi\) is an arithmetic formula with arbitrary (existential and universal) quantification over type \(0\) objects and the \(x_{i}\) are variables of type \(1\). * \(\Sigma^{2}_{1}\) contains the sets of the form \(\{x\in\mathbb{N}\mid\exists x_{0}.\ \cdots\exists x_{k}.\ \psi(x,x_{0},\ldots,x_{k})\}\) where \(\psi\) is an arithmetic formula with arbitrary (existential and universal) quantification over type \(0\) and type \(1\) objects and the \(x_{i}\) are variables of type \(2\). Note that there is a bijection between functions of the form \(f\colon\mathbb{N}\to\mathbb{N}\) and subsets of \(\mathbb{N}\), which is implementable in arithmetic. 
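For a concrete instance of the first of these classes (the example is ours), the set of composite numbers is in \(\Sigma^{0}_{1}\), as witnessed by

\[\{x\in\mathbb{N}\mid\exists x_{0}.\ \exists x_{1}.\ (x_{0}+1+1)\cdot(x_{1}+1+1)=x\}\,,\]

since a number is composite if and only if it is the product of two factors that are each at least \(2\). The lower bounds below rely instead on a known \(\Sigma^{1}_{1}\)-complete problem, the recurring tiling problem introduced in Section 3.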
A similar bijection exists between functions of the form \(f\colon(\mathbb{N}\to\mathbb{N})\to\mathbb{N}\) and subsets of \(2^{\mathbb{N}}\), and it is again implementable in arithmetic. Thus, whenever convenient, we use quantification over sets of natural numbers and over sets of sets of natural numbers, instead of quantification over type \(1\) and type \(2\) objects; in particular when proving lower bounds. We then include \(\in\) in the signature. Also, note that \(0\) and \(1\) are definable in first-order arithmetic. Thus, whenever convenient, we drop \(0\) and \(1\) from the signature of arithmetic. In the same vein, every fixed natural number is definable in first-order arithmetic.

## 3. HyperLTL satisfiability is \(\Sigma^{1}_{1}\)-complete

In this section we settle the complexity of the satisfiability problem for HyperLTL: given a HyperLTL sentence, determine whether it has a model.

**Theorem 3.1**.: _HyperLTL satisfiability is \(\Sigma^{1}_{1}\)-complete._

We should contrast this result with the \(\Sigma^{0}_{1}\)-completeness of HyperLTL satisfiability restricted to _finite_ sets of ultimately periodic traces [13, Theorem 1]. The \(\Sigma^{1}_{1}\)-completeness of HyperLTL satisfiability in the general case implies that, in particular, the set of satisfiable HyperLTL sentences is neither recursively enumerable nor co-recursively enumerable. A semi-decision procedure, like the one introduced in [13] for finite sets of ultimately periodic traces, therefore cannot exist in general.

### HyperLTL satisfiability is in \(\Sigma^{1}_{1}\)

The \(\Sigma^{1}_{1}\) upper bound relies on the fact that every satisfiable HyperLTL formula has a countable model [10]. This allows us to represent these models, and Skolem functions on them, by sets of natural numbers, which are type 1 objects. In this encoding, trace assignments are type 0 objects, as traces in a countable set can be identified by natural numbers. With some more existential type 1 quantification one can then express the existence of a function witnessing that every trace assignment consistent with the Skolem functions satisfies the quantifier-free part of the formula under consideration.

**Lemma 3.2**.: _HyperLTL satisfiability is in \(\Sigma^{1}_{1}\)._

Proof.: Let \(\varphi\) be a HyperLTL formula, let \(\Phi\) denote the set of quantifier-free subformulas of \(\varphi\), and let \(\Pi\) be a trace assignment whose domain contains the variables of \(\varphi\). The expansion of \(\varphi\) on \(\Pi\) is the function \(e_{\varphi,\Pi}\colon\Phi\times\mathbb{N}\to\{0,1\}\) with

\[e_{\varphi,\Pi}(\psi,j)=\begin{cases}1&\text{if $\Pi[j,\infty)\models\psi$, and}\\ 0&\text{otherwise.}\end{cases}\]

The expansion is completely characterised by the following consistency conditions:

* \(e_{\varphi,\Pi}(a_{\pi},j)=1\) if and only if \(a\in\Pi(\pi)(j)\).
* \(e_{\varphi,\Pi}(\neg\psi,j)=1\) if and only if \(e_{\varphi,\Pi}(\psi,j)=0\).
* \(e_{\varphi,\Pi}(\psi_{1}\vee\psi_{2},j)=1\) if and only if \(e_{\varphi,\Pi}(\psi_{1},j)=1\) or \(e_{\varphi,\Pi}(\psi_{2},j)=1\).
* \(e_{\varphi,\Pi}(\mathbf{X}\,\psi,j)=1\) if and only if \(e_{\varphi,\Pi}(\psi,j+1)=1\).
* \(e_{\varphi,\Pi}(\psi_{1}\,\mathbf{U}\,\psi_{2},j)=1\) if and only if there is a \(j^{\prime}\geq j\) such that \(e_{\varphi,\Pi}(\psi_{2},j^{\prime})=1\) and \(e_{\varphi,\Pi}(\psi_{1},j^{\prime\prime})=1\) for all \(j^{\prime\prime}\) in the range \(j\leq j^{\prime\prime}<j^{\prime}\).

Every satisfiable HyperLTL sentence has a countable model [10].
Hence, to prove that the HyperLTL satisfiability problem is in \(\Sigma^{1}_{1}\), we express, for a given HyperLTL sentence encoded as a natural number, the existence of the following type 1 objects (relying on the fact that there is a bijection between finite sequences over \(\mathbb{N}\) and \(\mathbb{N}\) itself):

* A countable set of traces over the propositions of \(\varphi\) encoded as a function \(T\) from \(\mathbb{N}\times\mathbb{N}\) to \(\mathbb{N}\), mapping trace names and positions to (encodings of) subsets of the set of propositions appearing in \(\varphi\).
* A function \(S\) from \(\mathbb{N}\times\mathbb{N}^{*}\) to \(\mathbb{N}\) to be interpreted as Skolem functions for the existentially quantified variables of \(\varphi\), i.e. we map a variable (identified by a natural number) and a trace assignment of the variables preceding it (encoded as a sequence of natural numbers) to a trace name.
* A function \(E\) from \(\mathbb{N}\times\mathbb{N}\times\mathbb{N}\) to \(\mathbb{N}\), where, for a fixed \(a\in\mathbb{N}\) encoding a trace assignment \(\Pi\), the function \(x,y\mapsto E(a,x,y)\) is interpreted as the expansion of \(\varphi\) on \(\Pi\), i.e. \(x\) encodes a subformula in \(\Phi\) and \(y\) is a position.

Then, we express the following properties using only type 0 quantification: For every trace assignment of the variables in \(\varphi\), encoded by \(a\in\mathbb{N}\), if \(a\) is consistent with the Skolem function encoded by \(S\), then the function \(x,y\mapsto E(a,x,y)\) satisfies the consistency conditions characterising the expansion, and we have \(E(a,x_{0},0)=1\), where \(x_{0}\) is the encoding of the maximal quantifier-free subformula of \(\varphi\). We leave the tedious, but standard, details to the industrious reader.

### HyperLTL satisfiability is \(\Sigma^{1}_{1}\)-hard

To prove a matching lower bound, we reduce from the recurring tiling problem [10], a standard \(\Sigma^{1}_{1}\)-complete problem.

**Lemma 3.3**.: _HyperLTL satisfiability is \(\Sigma^{1}_{1}\)-hard._

Proof.: A _tile_ is a function \(\tau\colon\{\mathit{east},\mathit{west},\mathit{north},\mathit{south}\}\to\mathcal{C}\) that maps directions into a finite set \(\mathcal{C}\) of colours. Given a finite set \(\mathit{Ti}\) of tiles, a _tiling of the positive quadrant_ with \(\mathit{Ti}\) is a function \(\mathit{ti}\colon\mathbb{N}\times\mathbb{N}\to\mathit{Ti}\) with the property that:

* if \(\mathit{ti}(i,j)=\tau_{1}\) and \(\mathit{ti}(i+1,j)=\tau_{2}\), then \(\tau_{1}(\mathit{east})=\tau_{2}(\mathit{west})\) and
* if \(\mathit{ti}(i,j)=\tau_{1}\) and \(\mathit{ti}(i,j+1)=\tau_{2}\), then \(\tau_{1}(\mathit{north})=\tau_{2}(\mathit{south})\).

The _recurring tiling problem_ is to determine, given a finite set \(\mathit{Ti}\) of tiles and a designated \(\tau_{0}\in\mathit{Ti}\), whether there is a tiling \(\mathit{ti}\) of the positive quadrant with \(\mathit{Ti}\) such that there are infinitely many \(j\in\mathbb{N}\) with \(\mathit{ti}(0,j)=\tau_{0}\). This problem is known to be \(\Sigma^{1}_{1}\)-complete [10], so reducing it to HyperLTL satisfiability will establish the desired hardness result.
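Before giving the reduction, the two matching conditions can be made concrete on finite patches; the following sketch (ours, with illustrative names, assuming a rectangular patch) checks them directly:

```python
# Illustrative sketch (not part of the reduction): tiles as maps from
# directions to colours; check the matching conditions on a finite
# rectangular patch ti, where ti[i][j] is the tile at position (i, j).
from typing import Dict, List

Tile = Dict[str, str]  # keys: 'east', 'west', 'north', 'south'

def valid_patch(ti: List[List[Tile]]) -> bool:
    for i in range(len(ti)):
        for j in range(len(ti[i])):
            # east colour must match the west colour of the right neighbour
            if i + 1 < len(ti) and ti[i][j]['east'] != ti[i + 1][j]['west']:
                return False
            # north colour must match the south colour of the tile above
            if j + 1 < len(ti[i]) and ti[i][j]['north'] != ti[i][j + 1]['south']:
                return False
    return True

# a single tile whose opposite sides match tiles any patch
t = {'east': 'r', 'west': 'r', 'north': 'g', 'south': 'g'}
assert valid_patch([[t, t], [t, t]])
```

Note that the recurrence condition on \(\tau_{0}\) is invisible on every finite patch; it is this condition that pushes the problem up to \(\Sigma^{1}_{1}\).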
In our reduction, each \(x\)-coordinate in the positive quadrant will be represented by a trace, and each \(y\)-coordinate by a point in time.2 In order to keep track of which trace represents which \(x\)-coordinate, we use one designated atomic proposition \(x\) that holds on exactly one time point in each trace: \(x\) holds at time \(i\) if and only if the trace represents \(x\)-coordinate \(i\).

Footnote 2: Note that this means that if we were to visually represent this construction, traces would be arranged vertically.

For this purpose, let \(\mathit{Ti}\) and \(\tau_{0}\) be given, and define the following formulas over \(\mathrm{AP}=\{x\}\cup\mathit{Ti}\):

* Every trace has exactly one point where \(x\) holds: \[\varphi_{1}=\forall\pi.\ (\neg x_{\pi}\,\mathbf{U}\,(x_{\pi}\wedge\mathbf{X}\,\mathbf{G}\,\neg x_{\pi}))\]
* For every \(i\in\mathbb{N}\), there is a trace with \(x\) in the \(i\)-th position: \[\varphi_{2}=(\exists\pi.\ x_{\pi})\wedge(\forall\pi_{1}.\ \exists\pi_{2}.\ \mathbf{F}(x_{\pi_{1}}\wedge\mathbf{X}\,x_{\pi_{2}}))\]
* If two traces represent the same \(x\)-coordinate, then they contain the same tiles: \[\varphi_{3}=\forall\pi_{1},\pi_{2}.\ (\mathbf{F}(x_{\pi_{1}}\wedge x_{\pi_{2}})\to\mathbf{G}(\bigwedge_{\tau\in\mathit{Ti}}(\tau_{\pi_{1}}\leftrightarrow\tau_{\pi_{2}})))\]
* Every time point in every trace contains exactly one tile: \[\varphi_{4}=\forall\pi.\ \mathbf{G}\bigvee_{\tau\in\mathit{Ti}}(\tau_{\pi}\wedge\bigwedge_{\tau^{\prime}\in\mathit{Ti}\setminus\{\tau\}}\neg(\tau^{\prime})_{\pi})\]
* Tiles match vertically: \[\varphi_{5}=\forall\pi.\ \mathbf{G}\bigvee_{\tau\in\mathit{Ti}}(\tau_{\pi}\wedge\bigvee_{\tau^{\prime}\in\{\tau^{\prime}\in\mathit{Ti}\mid\tau(\mathit{north})=\tau^{\prime}(\mathit{south})\}}\mathbf{X}\,(\tau^{\prime})_{\pi})\]
* Tiles match horizontally: \[\varphi_{6}=\forall\pi_{1},\pi_{2}.\ (\mathbf{F}(x_{\pi_{1}}\wedge\mathbf{X}\,x_{\pi_{2}})\to\mathbf{G}\bigvee_{\tau\in\mathit{Ti}}(\tau_{\pi_{1}}\wedge\bigvee_{\tau^{\prime}\in\{\tau^{\prime}\in\mathit{Ti}\mid\tau(\mathit{east})=\tau^{\prime}(\mathit{west})\}}(\tau^{\prime})_{\pi_{2}}))\]
* Tile \(\tau_{0}\) occurs infinitely often at \(x\)-position \(0\): \[\varphi_{7}=\exists\pi.\ (x_{\pi}\wedge\mathbf{G}\,\mathbf{F}\,(\tau_{0})_{\pi})\]

Finally, take \(\varphi_{\mathit{Ti}}=\bigwedge_{1\leq n\leq 7}\varphi_{n}\). Technically \(\varphi_{\mathit{Ti}}\) is not a HyperLTL formula, since it is not in prenex normal form, but it can be trivially transformed into one.

Collectively, subformulas \(\varphi_{1}\)–\(\varphi_{3}\) are satisfied in exactly those sets of traces that can be interpreted as \(\mathbb{N}\times\mathbb{N}\). Subformulas \(\varphi_{4}\)–\(\varphi_{6}\) then hold if and only if the \(\mathbb{N}\times\mathbb{N}\) grid is correctly tiled with \(\mathit{Ti}\). Subformula \(\varphi_{7}\), finally, holds if and only if the tiling uses the tile \(\tau_{0}\) infinitely often at \(x\)-coordinate \(0\). Overall, this means \(\varphi_{\mathit{Ti}}\) is satisfiable if and only if \(\mathit{Ti}\) can recurrently tile the positive quadrant. The \(\Sigma^{1}_{1}\)-hardness of HyperLTL satisfiability therefore follows from the \(\Sigma^{1}_{1}\)-hardness of the recurring tiling problem [1].
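Since \(\varphi_{4}\)–\(\varphi_{6}\) are Boolean combinations indexed by the finite tile set, they can be generated mechanically from \(\mathit{Ti}\). As a sketch (ours; the output syntax is an ad-hoc rendering, not the input format of any tool), here is a generator for the vertical-matching constraint \(\varphi_{5}\):

```python
# Illustrative sketch: render phi_5 ("tiles match vertically") as a string,
# given a tile set as a dict mapping tile names to colour maps.
def phi5(tiles):
    disjuncts = []
    for name, tau in tiles.items():
        # tiles whose south colour matches tau's north colour may sit above
        nxt = [f"X {m}_pi" for m, sig in tiles.items()
               if tau['north'] == sig['south']]
        succ = " | ".join(nxt) if nxt else "false"
        disjuncts.append(f"({name}_pi & ({succ}))")
    return "forall pi. G (" + " | ".join(disjuncts) + ")"

tiles = {"t0": {"north": "g", "south": "g", "east": "r", "west": "r"}}
print(phi5(tiles))  # forall pi. G ((t0_pi & (X t0_pi)))
```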
The \(\Sigma^{1}_{1}\)-completeness of HyperLTL satisfiability still holds if we restrict to ultimately periodic traces.

**Theorem 3.4**.: _HyperLTL satisfiability restricted to sets of ultimately periodic traces is \(\Sigma^{1}_{1}\)-complete._

Proof.: The problem of whether there is a tiling of \(\{(i,j)\in\mathbb{N}^{2}\mid i\geq j\}\), i.e. the part of \(\mathbb{N}\times\mathbb{N}\) below the diagonal, such that a designated tile \(\tau_{0}\) occurs on every row, is also \(\Sigma^{1}_{1}\)-complete [1].3 We reduce this problem to HyperLTL satisfiability on ultimately periodic traces.

Footnote 3: The proof in [1] is for the part _above_ the diagonal with \(\tau_{0}\) occurring on every column, but that is easily seen to be equivalent.

The reduction is very similar to the one discussed above, with the necessary changes being: (i) every time point beyond \(x\) satisfies the special tile "null", (ii) horizontal and vertical matching are only checked at or before time point \(x\), and (iii) for every trace \(\pi_{1}\) there is a trace \(\pi_{2}\) such that \(\pi_{2}\) has the designated tile \(\tau_{0}\) at the time where \(\pi_{1}\) satisfies \(x\) (so \(\tau_{0}\) holds at least once in every row). Membership in \(\Sigma^{1}_{1}\) can be shown similarly to the proof of Lemma 3.2. So, the problem is \(\Sigma^{1}_{1}\)-complete.

Furthermore, a careful analysis of the proof of Theorem 3.4 shows that we can restrict ourselves to ultimately periodic traces of the form \(x\cdot\emptyset^{\omega}\), i.e. to essentially finite traces.

## 4. The HyperLTL Quantifier Alternation Hierarchy

The number of quantifier alternations in a formula is a crucial parameter in the complexity of HyperLTL model-checking [11, 12]. A natural question is then to understand _which_ properties can be expressed with \(n\) quantifier alternations, that is, given a sentence \(\varphi\), determine if there exists an equivalent one with at most \(n\) alternations. In this section, we show that this problem is in fact exactly as hard as the HyperLTL unsatisfiability problem (which asks whether a HyperLTL sentence has no model), and therefore \(\Pi^{1}_{1}\)-complete. Here, \(\Pi^{1}_{1}\) is the co-class of \(\Sigma^{1}_{1}\), i.e. it contains the complements of the \(\Sigma^{1}_{1}\) sets.

### Definition and strictness of the hierarchy

Formally, the HyperLTL quantifier alternation hierarchy is defined as follows. Let \(\varphi\) be a HyperLTL formula. We say that \(\varphi\) is a \(\Sigma_{0}\)- or a \(\Pi_{0}\)-formula if it is quantifier-free. It is a \(\Sigma_{n}\)-formula if it is of the form \(\varphi=\exists\pi_{1}.\ \cdots\exists\pi_{k}.\ \psi\) and \(\psi\) is a \(\Pi_{n-1}\)-formula. It is a \(\Pi_{n}\)-formula if it is of the form \(\varphi=\forall\pi_{1}.\ \cdots\forall\pi_{k}.\ \psi\) and \(\psi\) is a \(\Sigma_{n-1}\)-formula. We do not require each block of quantifiers to be non-empty, i.e. we may have \(k=0\) and \(\varphi=\psi\). Note that formulas in \(\Sigma_{0}=\Pi_{0}\) have free variables. As we are only interested in sentences, we disregard \(\Sigma_{0}=\Pi_{0}\) in the following and only consider the levels \(\Sigma_{n}\) and \(\Pi_{n}\) for \(n>0\). By a slight abuse of notation, we also let \(\Sigma_{n}\) denote the set of hyperproperties definable by a \(\Sigma_{n}\)-sentence, that is, the set of all \(L(\varphi)=\{T\subseteq(2^{\mathrm{AP}})^{\omega}\mid T\models\varphi\}\) such that \(\varphi\) is a \(\Sigma_{n}\)-sentence of HyperLTL.
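Since blocks may be empty, every \(\Sigma_{n}\)-sentence is also a \(\Sigma_{m}\)-sentence for all \(m\geq n\), so the interesting quantity is the minimal level of a given quantifier prefix. For a concrete prefix this is easy to compute; a small sketch of ours:

```python
# Illustrative sketch: the minimal level of a non-empty prenex quantifier
# prefix, written as a string over 'E' (exists) and 'A' (forall).
def hierarchy_level(prefix: str) -> str:
    blocks = 1
    for q1, q2 in zip(prefix, prefix[1:]):
        if q1 != q2:          # a change of quantifier starts a new block
            blocks += 1
    # a prefix starting existentially yields a Sigma-formula, else Pi
    return ("Sigma_" if prefix[0] == "E" else "Pi_") + str(blocks)

assert hierarchy_level("EEAAE") == "Sigma_3"
assert hierarchy_level("AE") == "Pi_2"
```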
**Theorem 4.1** ([1, Corollary 5.6.5]).: _The quantifier alternation hierarchy of HyperLTL is strict: for all \(n>0\), \(\Sigma_{n}\subsetneq\Sigma_{n+1}\)._

The strictness of the hierarchy also holds if we restrict our attention to sentences whose models consist of finite sets of traces that end in the suffix \(\emptyset^{\omega}\), i.e. that are essentially finite.

**Theorem 4.2**.: _For all \(n>0\), there exists a \(\Sigma_{n+1}\)-sentence \(\varphi\) of HyperLTL that is not equivalent to any \(\Sigma_{n}\)-sentence, and such that for all \(T\subseteq(2^{\mathrm{AP}})^{\omega}\), if \(T\models\varphi\) then \(T\) contains finitely many traces and \(T\subseteq(2^{\mathrm{AP}})^{*}\emptyset^{\omega}\)._

This property is a necessary ingredient for our argument that membership at some fixed level of the quantifier alternation hierarchy is \(\Pi_{1}^{1}\)-hard. It could be derived from a small adaptation of the proof in [11], and we provide for completeness an alternative proof by exhibiting a connection between the HyperLTL quantifier alternation hierarchy and the quantifier alternation hierarchy for first-order logic over finite words, which is known to be strict [1, 12].

The remainder of the subsection is dedicated to the proof of Theorem 4.2. The proof is organised as follows. We first define an encoding of finite words as sets of traces. We then show that every first-order formula can be translated into an equivalent (modulo encodings) HyperLTL formula with the same quantifier prefix. Finally, we show how to translate back HyperLTL formulas into \(\mathrm{FO}[\leq]\) formulas with the same quantifier prefix, so that if the HyperLTL quantifier alternation hierarchy collapsed, then so would the hierarchy for \(\mathrm{FO}[\leq]\).

### First-Order Logic over Words

Let \(\mathrm{AP}\) be a finite set of atomic propositions. A finite word over \(\mathrm{AP}\) is a finite sequence \(w=w(0)w(1)\cdots w(k)\) with \(w(i)\in 2^{\mathrm{AP}}\) for all \(i\). We let \(|w|\) denote the _length_ of \(w\), and \(\mathit{pos}(w)=\{0,\ldots,|w|-1\}\) the set of _positions_ of \(w\). The set of all finite words over \(\mathrm{AP}\) is \(\left(2^{\mathrm{AP}}\right)^{*}\). Assume a countably infinite set of variables _Var_. The set of \(\mathrm{FO}[\leq]\) formulas is given by the grammar

\[\varphi::=a(x)\mid x\leq y\mid\neg\varphi\mid\varphi\vee\varphi\mid\exists x.\ \varphi\mid\forall x.\ \varphi\,,\]

where \(a\in\mathrm{AP}\) and \(x,y\in\mathit{Var}\). The set of free variables of \(\varphi\) is denoted \(\mathrm{Free}(\varphi)\). A sentence is a formula without free variables. The semantics is defined as follows, \(w\in\left(2^{\mathrm{AP}}\right)^{*}\) being a finite word and \(\nu\colon\mathrm{Free}(\varphi)\to\mathit{pos}(w)\) an interpretation mapping variables to positions in \(w\):

* \((w,\nu)\models a(x)\) if \(a\in w(\nu(x))\).
* \((w,\nu)\models x\leq y\) if \(\nu(x)\leq\nu(y)\).
* \((w,\nu)\models\neg\varphi\) if \((w,\nu)\not\models\varphi\).
* \((w,\nu)\models\varphi\vee\psi\) if \((w,\nu)\models\varphi\) or \((w,\nu)\models\psi\).
* \((w,\nu)\models\exists x.\ \varphi\) if there exists a position \(n\in\mathit{pos}(w)\) such that \((w,\nu[x\mapsto n])\models\varphi\).
* \((w,\nu)\models\forall x.\ \varphi\) if for all positions \(n\in\mathit{pos}(w)\): \((w,\nu[x\mapsto n])\models\varphi\).

If \(\varphi\) is a sentence, we write \(w\models\varphi\) instead of \((w,\nu)\models\varphi\).
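This semantics can be executed directly. The following sketch (ours, with an ad-hoc tuple representation of formulas) implements exactly the clauses above:

```python
# Illustrative sketch: recursive evaluator for FO[<=] over finite words.
# A word is a list of sets of propositions; a formula is a nested tuple.
def fo_eval(w, phi, nu=None):
    nu = nu if nu is not None else {}
    op = phi[0]
    if op == 'atom':                  # a(x)
        _, a, x = phi
        return a in w[nu[x]]
    if op == 'le':                    # x <= y
        _, x, y = phi
        return nu[x] <= nu[y]
    if op == 'not':
        return not fo_eval(w, phi[1], nu)
    if op == 'or':
        return fo_eval(w, phi[1], nu) or fo_eval(w, phi[2], nu)
    if op in ('exists', 'forall'):    # quantifiers range over pos(w)
        _, x, sub = phi
        values = (fo_eval(w, sub, {**nu, x: n}) for n in range(len(w)))
        return any(values) if op == 'exists' else all(values)
    raise ValueError(f'unknown operator: {op}')

# "some position carries a and is <=-below every position" (position 0 here)
w = [{'a'}, {'b'}, {'a', 'b'}]
phi = ('exists', 'x', ('not', ('or', ('not', ('atom', 'a', 'x')),
                               ('exists', 'y', ('not', ('le', 'x', 'y'))))))
assert fo_eval(w, phi)
```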
As for HyperLTL, an \(\mathrm{FO}[\leq]\) formula in prenex normal form is a \(\Sigma_{n}\)-formula if its quantifier prefix consists of \(n\) alternating blocks of quantifiers (some of which may be empty), starting with a block of existential quantifiers. We let \(\Sigma_{n}(\mathrm{FO}[\leq])\) denote the class of languages of finite words definable by \(\Sigma_{n}\)-sentences.

**Theorem 4.3** ([14, 15]).: _The quantifier alternation hierarchy of \(\mathrm{FO}[\leq]\) is strict: for all \(n\geq 0\), \(\Sigma_{n}(\mathrm{FO}[\leq])\subsetneq\Sigma_{n+1}(\mathrm{FO}[\leq])\)._

### Encodings of Words

The idea is to encode a word \(w\in\left(2^{\mathrm{AP}}\right)^{*}\) as a set of traces \(T\) where each trace in \(T\) corresponds to a position in \(w\); letters in the word are reflected in the label of the first position of the corresponding trace in \(T\), while the total order \(<\) is encoded using a fresh proposition \(o\notin\mathrm{AP}\). More precisely, each trace has a unique position labelled \(o\), distinct from one trace to another, and traces are ordered according to the order of appearance of the proposition \(o\). Note that there are several possible encodings for the same word, and we may fix a canonical one when needed. This is defined more formally below.

A _stretch function_ is a strictly increasing function \(f\colon\mathbb{N}\to\mathbb{N}\setminus\{0\}\), i.e. it satisfies \(0<f(0)<f(1)<\cdots\). For all words \(w\in\left(2^{\mathrm{AP}}\right)^{*}\) and stretch functions \(f\), we define the set of traces \(\mathit{enc}(w,f)=\{t_{n}\mid n\in\mathit{pos}(w)\}\subseteq(2^{\mathrm{AP}\cup\{o\}})^{*}\emptyset^{\omega}\) as follows: for all \(i\in\mathbb{N}\),

* for all \(a\in\mathrm{AP}\), \(a\in t_{n}(i)\) if and only if \(i=0\) and \(a\in w(n)\), and
* \(o\in t_{n}(i)\) if and only if \(i=f(n)\).

It will be convenient to consider encodings with arbitrarily large spacing between \(o\)'s positions. To this end, for every \(N\in\mathbb{N}\), we define a particular encoding

\[\mathit{enc}_{N}(w)=\mathit{enc}(w,n\mapsto N(n+1))\,.\]

So in \(\mathit{enc}_{N}(w)\), two positions with non-empty labels are at distance at least \(N\) from one another. Given \(T=\mathit{enc}(w,f)\) and a trace assignment \(\Pi\colon\mathcal{V}\to T\), we let \(T^{(N)}=\mathit{enc}_{N}(w)\), and \(\Pi^{(N)}\colon\mathcal{V}\to T^{(N)}\) the trace assignment defined by shifting the \(o\) position in each \(\Pi(\pi)\) accordingly, i.e.

* \(o\in\Pi^{(N)}(\pi)(N(i+1))\) if and only if \(o\in\Pi(\pi)(f(i))\) and
* for all \(a\in\mathrm{AP}\): \(a\in\Pi^{(N)}(\pi)(0)\) if and only if \(a\in\Pi(\pi)(0)\).

### From FO to HyperLTL

We associate with every \(\mathrm{FO}[\leq]\) formula \(\varphi\) in prenex normal form a HyperLTL formula \(\mathit{enc}(\varphi)\) over \(\mathrm{AP}\cup\{o\}\) by replacing in \(\varphi\):

* \(a(x)\) with \(a_{x}\), and
* \(x\leq y\) with \(\mathbf{F}(o_{x}\wedge\mathbf{F}\,o_{y})\).

In particular, \(\mathit{enc}(\varphi)\) has the same quantifier prefix as \(\varphi\), which means that we treat variables of \(\varphi\) as trace variables of \(\mathit{enc}(\varphi)\).
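For concreteness, the encoding can be computed explicitly; in the following sketch (ours) each trace is represented by a finite prefix, the remainder being \(\emptyset^{\omega}\):

```python
# Illustrative sketch: compute enc(w, f) as finite trace prefixes.
# Assumes f is a stretch function, i.e. strictly increasing with f(0) > 0.
def enc(w, f):
    traces = []
    for n, letter in enumerate(w):
        prefix = [set(letter)] + [set() for _ in range(f(n))]
        prefix[f(n)].add('o')  # the unique o-position encodes position n
        traces.append(prefix)
    return traces

# enc_N(w) from the text corresponds to the stretch function n -> N*(n+1)
for t in enc([{'a'}, {'b'}], lambda n: 2 * (n + 1)):
    print(t)
# [{'a'}, set(), {'o'}]
# [{'b'}, set(), set(), set(), {'o'}]
```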
**Lemma 4.4**.: _For every \(\mathrm{FO}[\leq]\) sentence \(\varphi\) in prenex normal form, \(\varphi\) is equivalent to \(\mathit{enc}(\varphi)\) in the following sense: for all \(w\in\left(2^{\mathrm{AP}}\right)^{*}\) and all stretch functions \(f\),_

\[w\models\varphi\quad\text{if and only if}\quad\mathit{enc}(w,f)\models\mathit{enc}(\varphi)\,.\]

Proof.: By induction over the construction of \(\varphi\), relying on the fact that traces in \(\mathit{enc}(w,f)\) are in bijection with positions in \(w\).

In particular, note that the evaluation of \(\mathit{enc}(\varphi)\) on \(\mathit{enc}(w,f)\) does not depend on \(f\). We call such a formula _stretch-invariant_: a HyperLTL sentence \(\varphi\) is _stretch-invariant_ if for all finite words \(w\) and all stretch functions \(f\) and \(g\),

\[\mathit{enc}(w,f)\models\varphi\quad\text{if and only if}\quad\mathit{enc}(w,g)\models\varphi\,.\]

**Lemma 4.5**.: _For all \(\varphi\in\mathrm{FO}[\leq]\), \(\mathit{enc}(\varphi)\) is stretch-invariant._

Proof.: By induction over the construction of \(\mathit{enc}(\varphi)\), relying on the fact that the only temporal subformulas of \(\mathit{enc}(\varphi)\) are of the form \(\mathbf{F}(o_{x}\wedge\mathbf{F}\,o_{y})\).

### Going Back from HyperLTL to FO

Let \(\mathit{enc}(\mathrm{FO}[\leq])\) denote the fragment of HyperLTL consisting of all formulas \(\mathit{enc}(\varphi)\), where \(\varphi\) is an \(\mathrm{FO}[\leq]\) formula in prenex normal form. Equivalently, \(\psi\in\mathit{enc}(\mathrm{FO}[\leq])\) if it is a HyperLTL formula of the form \(\psi=Q_{1}x_{1}\cdots Q_{k}x_{k}.\ \psi_{0}\), where \(\psi_{0}\) is a Boolean combination of formulas of the form \(a_{x}\) or \(\mathbf{F}(o_{x}\wedge\mathbf{F}\,o_{y})\).

Let us prove that every HyperLTL sentence is equivalent, over sets of traces of the form \(\mathit{enc}(w,f)\), to a sentence in \(\mathit{enc}(\mathrm{FO}[\leq])\) with the same quantifier prefix. This means that if a HyperLTL sentence \(\mathit{enc}(\varphi)\) is equivalent to a HyperLTL sentence with a smaller number of quantifier alternations, then it is also equivalent over all word encodings to one of the form \(\mathit{enc}(\psi)\), which in turn implies that the \(\mathrm{FO}[\leq]\) sentences \(\varphi\) and \(\psi\) are equivalent.

The _temporal depth_ of a quantifier-free formula in HyperLTL is defined inductively as

* \(\mathit{depth}(a_{\pi})=0\),
* \(\mathit{depth}(\neg\varphi)=\mathit{depth}(\varphi)\),
* \(\mathit{depth}(\varphi\vee\psi)=\max(\mathit{depth}(\varphi),\mathit{depth}(\psi))\),
* \(\mathit{depth}(\mathbf{X}\,\varphi)=1+\mathit{depth}(\varphi)\), and
* \(\mathit{depth}(\varphi\,\mathbf{U}\,\psi)=1+\max(\mathit{depth}(\varphi),\mathit{depth}(\psi))\).

For a general HyperLTL formula \(\varphi=Q_{1}\pi_{1}\cdots Q_{k}\pi_{k}.\ \psi\), we let \(\mathit{depth}(\varphi)=\mathit{depth}(\psi)\).

**Lemma 4.6**.: _Let \(\psi\) be a quantifier-free formula of HyperLTL. Let \(N=\mathit{depth}(\psi)+1\). There exists a quantifier-free formula \(\widehat{\psi}\in\mathit{enc}(\mathrm{FO}[\leq])\) such that for all \(T=\mathit{enc}(w,f)\) and trace assignments \(\Pi\),_

\[(T^{(N)},\Pi^{(N)})\models\psi\quad\text{if and only if}\quad(T,\Pi)\models\widehat{\psi}\,.\]

Proof.: Assume that \(\mathrm{Free}(\psi)=\{\pi_{1},\ldots,\pi_{k}\}\) is the set of free variables of \(\psi\). Note that the value of \((T^{(N)},\Pi^{(N)})\models\psi\) depends only on the traces \(\Pi^{(N)}(\pi_{1}),\ldots,\Pi^{(N)}(\pi_{k})\).
We see the tuple \((\Pi^{(N)}(\pi_{1}),\ldots,\Pi^{(N)}(\pi_{k}))\) as a single trace \(w_{T,\Pi,N}\) over the set of propositions \(\mathrm{AP}^{\prime}=\{a_{\pi}\mid a\in\mathrm{AP}\cup\{o\}\wedge\pi\in\mathrm{Free}(\psi)\}\), and \(\psi\) as an LTL formula over \(\mathrm{AP}^{\prime}\). We are going to show that the evaluation of \(\psi\) over words \(w_{T,\Pi,N}\) is entirely determined by the ordering of \(o_{\pi_{1}},\ldots,o_{\pi_{k}}\) in \(w_{T,\Pi,N}\) and the label of \(w_{T,\Pi,N}(0)\), which we can both describe using a formula in \(\mathit{enc}(\mathrm{FO}[\leq])\). The intuition is that non-empty labels in \(w_{T,\Pi,N}\) are at distance at least \(N\) from one another, and a temporal formula of depth less than \(N\) cannot distinguish between \(w_{T,\Pi,N}\) and other words with the same sequence of non-empty labels and sufficient spacing between them. More generally, the following can be easily proved via Ehrenfeucht–Fraïssé games:

**Claim 4.7**.: Let \(m,n\geq 0\), \((a_{i})_{i\in\mathbb{N}}\) be a sequence of letters in \(2^{\mathrm{AP}^{\prime}}\), and

\[w_{1},w_{2}\in\emptyset^{m}a_{0}\emptyset^{n}\emptyset^{*}a_{1}\emptyset^{n}\emptyset^{*}a_{2}\emptyset^{n}\emptyset^{*}\cdots\]

Then for all LTL formulas \(\varphi\) such that \(\mathit{depth}(\varphi)\leq n\), \(w_{1}\models\varphi\) if and only if \(w_{2}\models\varphi\).

Here we are interested in words of a particular shape. Let \(L_{N}\) be the set of infinite words \(w\in\left(2^{\mathrm{AP}^{\prime}}\right)^{\omega}\) such that:

* For all \(\pi\in\{\pi_{1},\ldots,\pi_{k}\}\), there is a unique \(i\in\mathbb{N}\) such that \(o_{\pi}\in w(i)\). Moreover, \(i\geq N\).
* If \(o_{\pi}\in w(i)\) and \(o_{\pi^{\prime}}\in w(i^{\prime})\), then \(|i-i^{\prime}|\geq N\) or \(i=i^{\prime}\).
* If \(a_{\pi}\in w(i)\) for some \(a\in\mathrm{AP}\) and \(\pi\in\{\pi_{1},\ldots,\pi_{k}\}\), then \(i=0\).

Notice that \(w_{T,\Pi,N}\in L_{N}\) for all \(T\) and all \(\Pi\). For \(w_{1},w_{2}\in L_{N}\), we write \(w_{1}\sim w_{2}\) if \(w_{1}\) and \(w_{2}\) differ only in the spacing between non-empty positions, that is, if there are \(\ell\leq k\) and \(a_{0},\ldots,a_{\ell}\in 2^{\mathrm{AP}^{\prime}}\) such that \(w_{1},w_{2}\in a_{0}\emptyset^{*}a_{1}\emptyset^{*}\cdots a_{\ell}\emptyset^{\omega}\). Notice that \(\sim\) is of finite index. Moreover, we can distinguish between its equivalence classes using formulas defined as follows. For all \(A\subseteq\{a_{\pi}\mid a\in\mathrm{AP}\wedge\pi\in\{\pi_{1},\ldots,\pi_{k}\}\}\) and all total preorders \(\preceq\) over \(\{\pi_{1},\ldots,\pi_{k}\}\),4 we let

\[\varphi_{A,\preceq}=\bigwedge_{a\in A}a\wedge\bigwedge_{a\notin A}\neg a\wedge\bigwedge_{\pi_{i}\preceq\pi_{j}}\mathbf{F}(o_{\pi_{i}}\wedge\mathbf{F}\,o_{\pi_{j}})\,.\]

Footnote 4: i.e. \(\preceq\) is required to be transitive and for all \(\pi,\pi^{\prime}\in\{\pi_{1},\ldots,\pi_{k}\}\), we have \(\pi\preceq\pi^{\prime}\) or \(\pi^{\prime}\preceq\pi\) (or both)

Note that every word \(w\in L_{N}\) satisfies exactly one formula \(\varphi_{A,\preceq}\), and that all words in an equivalence class satisfy the same one. We denote by \(L_{A,\preceq}\) the equivalence class of \(L_{N}/\!\sim\) consisting of words satisfying \(\varphi_{A,\preceq}\). So we have \(L_{N}=\biguplus L_{A,\preceq}\). Since \(\psi\) is of depth less than \(N\), by Claim 4.7 (with \(n=N-1\) and \(m=0\)), for all \(w_{1}\sim w_{2}\) we have \(w_{1}\models\psi\) if and only if \(w_{2}\models\psi\).
Now, define \(\widehat{\psi}\) as the disjunction of all \(\varphi_{A,\preceq}\) such that \(\psi\) is satisfied by elements in the class \(L_{A,\preceq}\). Then \(\widehat{\psi}\in\mathit{enc}(\mathrm{FO}[\leq])\), and

\[\text{for all }w\in L_{N},\quad w\models\widehat{\psi}\text{ if and only if }w\models\psi\,.\]

In particular, for every \(T\) and every \(\Pi\), we have \((T^{(N)},\Pi^{(N)})\models\widehat{\psi}\) if and only if \((T^{(N)},\Pi^{(N)})\models\psi\). Since the preorder between propositions \(o_{\pi}\) and the label of the initial position are the same in \((T^{(N)},\Pi^{(N)})\) and \((T,\Pi)\), we also have \((T,\Pi)\models\widehat{\psi}\) if and only if \((T^{(N)},\Pi^{(N)})\models\widehat{\psi}\). Therefore,

\[(T^{(N)},\Pi^{(N)})\models\psi\quad\text{if and only if}\quad(T,\Pi)\models\widehat{\psi}\,.\qed\]

For a quantified HyperLTL sentence \(\varphi=Q_{1}\pi_{1}\cdots Q_{k}\pi_{k}.\ \psi\), we let \(\widehat{\varphi}=Q_{1}\pi_{1}\ldots Q_{k}\pi_{k}.\ \widehat{\psi}\), where \(\widehat{\psi}\) is the formula obtained through Lemma 4.6.

**Lemma 4.8**.: _For all HyperLTL formulas \(\varphi\), for all \(T=\mathit{enc}(w,f)\) and trace assignments \(\Pi\),_

\[(T^{(N)},\Pi^{(N)})\models\varphi\text{ if and only if }(T,\Pi)\models\widehat{\varphi}\,,\]

_where \(N=\mathit{depth}(\varphi)+1\)._

Proof.: We prove the result by induction. Quantifier-free formulas are covered by Lemma 4.6. We have

\[\begin{array}{lll}(T,\Pi)\models\exists\pi.\ \widehat{\psi}&\Leftrightarrow&\exists t\in T\ \text{such that}\ (T,\Pi[\pi\mapsto t])\models\widehat{\psi}\\ &\Leftrightarrow&\exists t\in T\ \text{such that}\ (T^{(N)},(\Pi[\pi\mapsto t])^{(N)})\models\psi\quad\text{(IH)}\\ &\Leftrightarrow&\exists t\in T^{(N)}\ \text{such that}\ (T^{(N)},\Pi^{(N)}[\pi\mapsto t])\models\psi\\ &\Leftrightarrow&(T^{(N)},\Pi^{(N)})\models\exists\pi.\ \psi\,,\end{array}\]

and similarly,

\[\begin{array}{lll}(T,\Pi)\models\forall\pi.\ \widehat{\psi}&\Leftrightarrow&\forall t\in T,\ \text{we have}\ (T,\Pi[\pi\mapsto t])\models\widehat{\psi}\\ &\Leftrightarrow&\forall t\in T,\ \text{we have}\ (T^{(N)},(\Pi[\pi\mapsto t])^{(N)})\models\psi\quad\text{(IH)}\\ &\Leftrightarrow&\forall t\in T^{(N)},\ \text{we have}\ (T^{(N)},\Pi^{(N)}[\pi\mapsto t])\models\psi\\ &\Leftrightarrow&(T^{(N)},\Pi^{(N)})\models\forall\pi.\ \psi\,.\end{array}\qed\]

As a consequence, we obtain the following equivalence.

**Lemma 4.9**.: _For all stretch-invariant HyperLTL sentences \(\varphi\) and for all \(T=\mathit{enc}(w,f)\),_

\[T\models\varphi\quad\text{if and only if}\quad T\models\widehat{\varphi}\,.\]

Proof.: By definition of \(\varphi\) being stretch-invariant, we have \(T\models\varphi\) if and only if \(T^{(N)}\models\varphi\), which by Lemma 4.8 is equivalent to \(T\models\widehat{\varphi}\).

We are now ready to prove the strictness of the HyperLTL quantifier alternation hierarchy.

Proof of Theorem 4.2.: Suppose towards a contradiction that the hierarchy collapses at level \(n>0\), i.e. every HyperLTL \(\Sigma_{n+1}\)-sentence is equivalent to some \(\Sigma_{n}\)-sentence. Let us show that the \(\mathrm{FO}[\leq]\) quantifier alternation hierarchy also collapses at level \(n\), contradicting Theorem 4.3.

Fix a \(\Sigma_{n+1}\)-sentence \(\varphi\) of \(\mathrm{FO}[\leq]\). The HyperLTL sentence \(\mathit{enc}(\varphi)\) has the same quantifier prefix as \(\varphi\), i.e. is also a \(\Sigma_{n+1}\)-sentence. Due to the assumed hierarchy collapse, there exists a HyperLTL \(\Sigma_{n}\)-sentence \(\psi\) that is equivalent to \(\mathit{enc}(\varphi)\); since \(\mathit{enc}(\varphi)\) is stretch-invariant by Lemma 4.5, so is \(\psi\).
Then the HyperLTL sentence \(\widehat{\psi}\) defined above is also a \(\Sigma_{n}\)-sentence. Moreover, since \(\widehat{\psi}\in\mathit{enc}(\mathrm{FO}[\leq])\), there exists an \(\mathrm{FO}[\leq]\) sentence \(\varphi^{\prime}\) such that \(\widehat{\psi}=\mathit{enc}(\varphi^{\prime})\), which has the same quantifier prefix as \(\widehat{\psi}\), i.e. \(\varphi^{\prime}\) is a \(\Sigma_{n}\)-sentence of \(\mathrm{FO}[\leq]\). For all words \(w\in(2^{\mathrm{AP}})^{*}\), we now have

\[\begin{array}{lll}w\models\varphi&\text{if and only if}&\mathit{enc}(w,f)\models\mathit{enc}(\varphi)\quad\text{(Lemma 4.4)}\\ &\text{if and only if}&\mathit{enc}(w,f)\models\psi\quad\text{(assumption)}\\ &\text{if and only if}&\mathit{enc}(w,f)\models\widehat{\psi}\quad\text{(Lemma 4.9 and Lemma 4.5)}\\ &\text{if and only if}&\mathit{enc}(w,f)\models\mathit{enc}(\varphi^{\prime})\quad\text{(definition)}\\ &\text{if and only if}&w\models\varphi^{\prime}\quad\text{(Lemma 4.4)}\end{array}\]

for an arbitrary stretch function \(f\). Therefore, \(\Sigma_{n+1}(\mathrm{FO}[\leq])=\Sigma_{n}(\mathrm{FO}[\leq])\), yielding the desired contradiction.

This proves not only that for all \(n>0\), there is a HyperLTL \(\Sigma_{n+1}\)-sentence that is not equivalent to any \(\Sigma_{n}\)-sentence, but also that there is one of the form \(\mathit{enc}(\varphi)\). Now, the proof still goes through if we replace \(\mathit{enc}(\varphi)\) by any formula equivalent to \(\mathit{enc}(\varphi)\) over all \(\mathit{enc}(w,f)\), and in particular if we replace \(\mathit{enc}(\varphi)\) by \(\mathit{enc}(\varphi)\wedge\psi\), where the sentence

\[\psi=\exists\pi.\ \forall\pi^{\prime}.\ (\mathbf{F}\,\mathbf{G}\,\emptyset_{\pi})\wedge\mathbf{G}(\mathbf{G}\,\emptyset_{\pi}\to\mathbf{G}\,\emptyset_{\pi^{\prime}})\]

with \(\emptyset_{\pi}=\bigwedge_{a\in\mathrm{AP}}\neg a_{\pi}\) selects models that contain finitely many traces, all in \((2^{\mathrm{AP}})^{*}\cdot\emptyset^{\omega}\). Indeed, all \(\mathit{enc}(w,f)\) satisfy \(\psi\). Notice that \(\psi\) is a \(\Sigma_{2}\)-sentence, and since \(n+1\geq 2\), (the prenex normal form of) \(\mathit{enc}(\varphi)\wedge\psi\) is still a \(\Sigma_{n+1}\)-sentence.

### Membership problem

In this subsection, we investigate the complexity of the membership problem for the HyperLTL quantifier alternation hierarchy. Our goal is to prove the following result.

**Theorem 4.10**.: _Fix \(n>0\). The problem of deciding whether a given HyperLTL sentence is equivalent to some \(\Sigma_{n}\)-sentence is \(\Pi^{1}_{1}\)-complete._

The easier part of the proof will be the upper bound, since a corollary of Theorem 3.1 is that the problem of deciding whether two HyperLTL formulas are equivalent is \(\Pi^{1}_{1}\)-complete. The lower bound will be proven by a reduction from the HyperLTL unsatisfiability problem. The proof relies on Theorem 4.2: given a sentence \(\varphi\), we are going to combine \(\varphi\) with some \(\Sigma_{n+1}\)-sentence \(\varphi_{n+1}\) witnessing the strictness of the hierarchy, to construct a sentence \(\psi\) such that \(\varphi\) is unsatisfiable if and only if \(\psi\) is equivalent to a \(\Sigma_{n}\)-sentence.

Intuitively, the formula \(\psi\) will describe models consisting of the "disjoint union" of a model of \(\varphi_{n+1}\) and a model of \(\varphi\). Here "disjoint" is to be understood in a strong sense: we split both the set of traces and the time domain into two parts, used respectively to encode the models of \(\varphi_{n+1}\) and those of \(\varphi\). To make this more precise, let us introduce some notation.
We assume a distinguished symbol \(\$\notin\mathrm{AP}\). We say that a set of traces \(T\subseteq(2^{\mathrm{AP}\cup\{\$\}})^{\omega}\) is _bounded_ if there exists \(b\in\mathbb{N}\) such that \(T\subseteq(2^{\mathrm{AP}})^{b}\cdot\{\$\}^{\omega}\).

**Lemma 4.11**.: _There exists a \(\Pi_{1}\)-sentence \(\varphi_{bd}\) such that for all \(T\subseteq(2^{\mathrm{AP}\cup\{\$\}})^{\omega}\), we have \(T\models\varphi_{bd}\) if and only if \(T\) is bounded._

Proof.: We let

\[\varphi_{bd}=\forall\pi,\pi^{\prime}.\ (\neg\$_{\pi}\,\mathbf{U}\,\mathbf{G}\,\$_{\pi})\wedge\bigwedge_{a\in\mathrm{AP}}\mathbf{G}(\neg(a_{\pi}\wedge\$_{\pi}))\wedge\mathbf{F}\,(\neg\$_{\pi}\wedge\neg\$_{\pi^{\prime}}\wedge\mathbf{X}\,\$_{\pi}\wedge\mathbf{X}\,\$_{\pi^{\prime}})\,.\]

The conjunct \((\neg\$_{\pi}\,\mathbf{U}\,\mathbf{G}\,\$_{\pi})\wedge\bigwedge_{a\in\mathrm{AP}}\mathbf{G}(\neg(a_{\pi}\wedge\$_{\pi}))\) ensures that every trace is in \((2^{\mathrm{AP}})^{*}\cdot\{\$\}^{\omega}\), while \(\mathbf{F}\,(\neg\$_{\pi}\wedge\neg\$_{\pi^{\prime}}\wedge\mathbf{X}\,\$_{\pi}\wedge\mathbf{X}\,\$_{\pi^{\prime}})\) ensures that the \(\$\)'s in any two traces \(\pi\) and \(\pi^{\prime}\) start at the same position.

We say that a nonempty set \(T\) of traces is _split_ if there exist a \(b\in\mathbb{N}\) and \(T_{1}\), \(T_{2}\) such that \(T=T_{1}\uplus T_{2}\), \(T_{1}\subseteq\left(2^{\mathrm{AP}}\right)^{b}\cdot\{\$\}^{\omega}\), and \(T_{2}\subseteq\{\$\}^{b}\cdot\left(2^{\mathrm{AP}}\right)^{\omega}\). Note that \(b\) as well as \(T_{1}\) and \(T_{2}\) are unique then. Hence, we define the left and right part of \(T\) as \(T_{\ell}=T_{1}\) and \(T_{r}=\{t\in\left(2^{\mathrm{AP}}\right)^{\omega}\mid\{\$\}^{b}\cdot t\in T_{2}\}\), respectively (see Figure 1).

Figure 1. Example of a split set of traces where each row represents a trace and \(b=3\).

It is easy to combine HyperLTL specifications for the left and right part of a split model into one global formula.

**Lemma 4.12**.: _For all HyperLTL sentences \(\varphi_{\ell},\varphi_{r}\), one can construct a sentence \(\psi\) such that for all split \(T\subseteq\left(2^{\mathrm{AP}\cup\{\$\}}\right)^{\omega}\), it holds that \(T_{\ell}\models\varphi_{\ell}\) and \(T_{r}\models\varphi_{r}\) if and only if \(T\models\psi\)._

Proof.: Let \(\widehat{\varphi_{r}}\) denote the formula obtained from \(\varphi_{r}\) by replacing:

* every existential quantification \(\exists\pi.\ \varphi\) with \(\exists\pi.\ ((\mathbf{F}\,\mathbf{G}\,\neg\$\,_{\pi})\wedge\varphi)\);
* every universal quantification \(\forall\pi.\ \varphi\) with \(\forall\pi.\ ((\mathbf{F}\,\mathbf{G}\,\neg\$\,_{\pi})\rightarrow\varphi)\);
* the quantifier-free part \(\varphi\) of \(\varphi_{r}\) with \(\$\,_{\pi}\,\mathbf{U}(\neg\$\,_{\pi}\wedge\varphi)\), where \(\pi\) is some free variable in \(\varphi\).

Here, the first two replacements restrict quantification to traces in the right part while the last one requires the formula to hold at the first position of the right part. We define \(\widehat{\varphi_{\ell}}\) by similarly relativizing quantifications in \(\varphi_{\ell}\). The formula \(\widehat{\varphi_{\ell}}\wedge\widehat{\varphi_{r}}\) can then be put back into prenex normal form to define \(\psi\).

Conversely, any HyperLTL formula that only has split models can be decomposed into a Boolean combination of formulas that only talk about the left or right part of the model. This is formalised in the lemma below.
**Lemma 4.13**.: _For all HyperLTL \(\Sigma_{n}\)-sentences \(\varphi\) there exists a finite family \((\varphi_{\ell}^{i},\varphi_{r}^{i})_{i}\) of \(\Sigma_{n}\)-sentences such that for all split \(T\subseteq\left(2^{\mathrm{AP}\cup\{\$\}}\right)^{\omega}\): \(T\models\varphi\) if and only if there is an \(i\) with \(T_{\ell}\models\varphi_{\ell}^{i}\) and \(T_{r}\models\varphi_{r}^{i}\)._ Proof.: To prove this result by induction, we need to strengthen the statement to make it dual and allow for formulas with free variables. We let \(\mathrm{Free}(\varphi)\) denote the set of free variables of a formula \(\varphi\). We prove the following result, which implies Lemma 4.13. **Claim 4.14**.: For all HyperLTL \(\Sigma_{n}\)-formulas (resp. \(\Pi_{n}\)-formulas) \(\varphi\), there exists a finite family of \(\Sigma_{n}\)-formulas (resp. \(\Pi_{n}\)-formulas) \((\varphi_{\ell}^{i},\varphi_{r}^{i})_{i}\) such that for all \(i\), \(\mathrm{Free}(\varphi)=\mathrm{Free}(\varphi_{\ell}^{i})\uplus\mathrm{Free}( \varphi_{r}^{i})\), and for all split \(T\) and \(\Pi\): \((T,\Pi)\models\varphi\) if and only if there exists \(i\) such that * For all \(\pi\in\mathrm{Free}(\varphi)\), \(\Pi(\pi)\in T_{\ell}\) if and only if \(\pi\in\mathrm{Free}(\varphi_{\ell}^{i})\) (and thus \(\Pi(\pi)\in T\setminus T_{\ell}\) if and only if \(\pi\in\mathrm{Free}(\varphi_{r}^{i})\)). * \((T_{\ell},\Pi)\models\varphi_{\ell}^{i}\); * \((T_{r},\Pi^{\prime})\models\varphi_{r}^{i}\), where \(\Pi^{\prime}\) maps every \(\pi\in\mathrm{Free}(\varphi_{r}^{i})\) to the trace in \(T_{r}\) corresponding to \(\Pi(\pi)\) in \(T\) (i.e. \(\Pi(\pi)=\{\$\}^{b}\cdot\Pi^{\prime}(\pi)\) for some \(b\)). To simplify, we can assume that the partition of the free variables of \(\varphi\) into a left and right part is fixed, i.e. we take \(V_{\ell}\subseteq\mathrm{Free}(\varphi)\) and \(V_{r}=\mathrm{Free}(\varphi)\setminus V_{\ell}\), and we restrict our attention to split \(T\) and \(\Pi\) such that \(\Pi(V_{\ell})\subseteq T_{\ell}\) and \(\Pi(V_{r})\subseteq T\setminus T_{\ell}\). The formulas \((\varphi_{\ell}^{i},\varphi_{r}^{i})_{i}\) we are looking for should then be such that \(\mathrm{Free}(\varphi_{\ell}^{i})=V_{\ell}\) and \(\mathrm{Free}(\varphi_{r}^{i})=V_{r}\). If we can define sets of formulas \((\varphi_{\ell}^{i},\varphi_{r}^{i})\) for each choice of \(V_{\ell},V_{r}\), then the general case is solved by taking the union of all of those. So we focus on a fixed \(V_{\ell},V_{r}\), and prove the result by induction on the quantifier depth of \(\varphi\). **Base case.** If \(\varphi\) is quantifier-free, then it can be seen as an LTL formula over the set of propositions \(\{a_{\pi},\$_{\pi}\mid\pi\in\mathrm{Free}(\varphi),a\in\mathrm{AP}\}\), and any split model of \(\varphi\) consistent with \(V_{\ell},V_{r}\) can be seen as a word in \(\Sigma_{\ell}^{*}\cdot\Sigma_{r}^{\omega}\), where \[\Sigma_{\ell} =\left\{\alpha\cup\{\$_{\pi}\mid\pi\in V_{r}\}\mid\alpha\subseteq \{a_{\pi}\mid\pi\in V_{\ell}\wedge a\in\mathrm{AP}\}\right\}\text{ and }\] \[\Sigma_{r} =\left\{\alpha\cup\{\$_{\pi}\mid\pi\in V_{\ell}\}\mid\alpha \subseteq\{a_{\pi}\mid\pi\in V_{r}\wedge a\in\mathrm{AP}\}\right\}.\] Note in particular that \(\Sigma_{\ell}\cap\Sigma_{r}=\emptyset\). We can thus conclude by applying the following standard result of formal language theory: **Claim 4.15**.: Let \(L\subseteq\Sigma_{1}^{*}\cdot\Sigma_{2}^{\omega}\), where \(\Sigma_{1}\cap\Sigma_{2}=\emptyset\). 
If \(L=L(\varphi)\) for some LTL formula \(\varphi\), then there exists a finite family \((\varphi_{1}^{i},\varphi_{2}^{i})_{i}\) of LTL formulas such that \(L=\bigcup_{1\leq i\leq k}L(\varphi_{1}^{i})\cdot L(\varphi_{2}^{i})\) and for all \(i\), \(L(\varphi_{1}^{i})\subseteq\Sigma_{1}^{*}\) and \(L(\varphi_{2}^{i})\subseteq\Sigma_{2}^{\omega}\).

Proof.: A language is definable in LTL if and only if it is accepted by some counter-free automaton [1, 15]. Let \(\mathcal{A}\) be a counter-free automaton for \(L\). For every state \(q\) in \(\mathcal{A}\), let

\[L_{1}^{q}=\left\{w\in\Sigma_{1}^{*}\mid q_{0}\xrightarrow{w}q\text{ for some initial state }q_{0}\right\}\text{ and }\]
\[L_{2}^{q}=\left\{w\in\Sigma_{2}^{\omega}\mid\text{there is an accepting run on }w\text{ starting from }q\right\}.\]

We have \(L=\bigcup_{q}L_{1}^{q}\cdot L_{2}^{q}\). Moreover, \(L_{1}^{q}\) and \(L_{2}^{q}\) are still recognisable by counter-free automata, and therefore LTL definable.

**Case \(\varphi=\exists\pi.\ \psi\).** Let \((\psi_{\ell,1}^{i},\psi_{r,1}^{i})\) and \((\psi_{\ell,2}^{i},\psi_{r,2}^{i})\) be the formulas constructed respectively for \((\psi,V_{\ell}\cup\{\pi\},V_{r})\) and \((\psi,V_{\ell},V_{r}\cup\{\pi\})\). We take the union of all \((\exists\pi.\ \psi_{\ell,1}^{i},\psi_{r,1}^{i})\) and \((\psi_{\ell,2}^{i},\exists\pi.\ \psi_{r,2}^{i})\).

**Case \(\varphi=\forall\pi.\ \psi\).** Let \((\xi_{\ell}^{i},\xi_{r}^{i})_{1\leq i\leq k}\) be the formulas obtained for \(\exists\pi.\ \neg\psi\). We have \((T,\Pi)\models\varphi\) if and only if for all \(i\), \((T_{\ell},\Pi)\not\models\xi_{\ell}^{i}\) or \((T_{r},\Pi^{\prime})\not\models\xi_{r}^{i}\); or, equivalently, if there exists \(h\colon\{1,\ldots,k\}\to\{\ell,r\}\) such that \((T_{\ell},\Pi)\models\bigwedge_{h(i)=\ell}\neg\xi_{\ell}^{i}\) and \((T_{r},\Pi^{\prime})\models\bigwedge_{h(i)=r}\neg\xi_{r}^{i}\). Take the family \((\varphi_{\ell}^{h},\varphi_{r}^{h})_{h}\), where \(\varphi_{\ell}^{h}=\bigwedge_{h(i)=\ell}\neg\xi_{\ell}^{i}\) and \(\varphi_{r}^{h}=\bigwedge_{h(i)=r}\neg\xi_{r}^{i}\). Since \(\varphi=\forall\pi.\ \psi\) is a \(\Pi_{n}\)-formula, the formula \(\exists\pi.\ \neg\psi\) and by induction all \(\xi_{\ell}^{i}\) and \(\xi_{r}^{i}\) are \(\Sigma_{n}\)-formulas. Then all \(\neg\xi_{\ell}^{i}\) and \(\neg\xi_{r}^{i}\) are \(\Pi_{n}\)-formulas, and since \(\Pi_{n}\)-formulas are closed under conjunction (up to formula equivalence), all \(\varphi_{\ell}^{h}\) and \(\varphi_{r}^{h}\) are \(\Pi_{n}\)-formulas as well.

We are now ready to prove Theorem 4.10.

Proof of Theorem 4.10.: The upper bound is an easy consequence of Theorem 3.1: Given a HyperLTL sentence \(\varphi\), we express the existence of a \(\Sigma_{n}\)-sentence \(\psi\) using first-order quantification and encode equivalence of \(\psi\) and \(\varphi\) via the formula \((\neg\varphi\wedge\psi)\vee(\varphi\wedge\neg\psi)\), which is unsatisfiable if and only if \(\varphi\) and \(\psi\) are equivalent. Altogether, this shows membership in \(\Pi_{1}^{1}\), as \(\Pi_{1}^{1}\) is closed under existential first-order quantification (see, e.g. [1, Page 82]).

We prove the lower bound by reduction from the unsatisfiability problem for HyperLTL. So given a HyperLTL sentence \(\varphi\), we want to construct \(\psi\) such that \(\varphi\) is unsatisfiable if and only if \(\psi\) is equivalent to a \(\Sigma_{n}\)-sentence. We first consider the case \(n>1\).
Fix a \(\Sigma_{n+1}\)-sentence \(\varphi_{n+1}\) that is not equivalent to any \(\Sigma_{n}\)-sentence, and such that every model of \(\varphi_{n+1}\) is bounded. The existence of such a formula is a consequence of Theorem 4.2. By Lemma 4.12, there exists a computable \(\psi\) such that for all split models \(T\), we have \(T\models\psi\) if and only if \(T_{\ell}\models\varphi_{n+1}\) and \(T_{r}\models\varphi\).

First, it is clear that if \(\varphi\) is unsatisfiable, then \(\psi\) is unsatisfiable as well, and thus equivalent to \(\exists\pi.\ a_{\pi}\wedge\neg a_{\pi}\), which is a \(\Sigma_{n}\)-sentence since \(n\geq 1\). Conversely, suppose towards a contradiction that \(\varphi\) is satisfiable and that \(\psi\) is equivalent to some \(\Sigma_{n}\)-sentence. Let \((\psi^{i}_{\ell},\psi^{i}_{r})_{i}\) be the finite family of \(\Sigma_{n}\)-sentences given by Lemma 4.13 for \(\psi\). Fix a model \(T_{\varphi}\) of \(\varphi\). For a bounded \(T\), we let \(\overline{T}\) denote the unique split set of traces such that \(\overline{T}_{\ell}=T\) and \(\overline{T}_{r}=T_{\varphi}\). For all \(T\), we then have \(T\models\varphi_{n+1}\) if and only if \(T\) is bounded and \(\overline{T}\models\psi\). Recall that the set of bounded models can be defined by a \(\Pi_{1}\)-sentence \(\varphi_{bd}\) (Lemma 4.11), which is also a \(\Sigma_{n}\)-sentence since \(n>1\). We then have \(T\models\varphi_{n+1}\) if and only if \(T\models\varphi_{bd}\) and there exists \(i\) such that \(T\models\psi^{i}_{\ell}\) and \(T_{\varphi}\models\psi^{i}_{r}\). So \(\varphi_{n+1}\) is equivalent to

\[\varphi_{bd}\wedge\bigvee_{i\text{ with }T_{\varphi}\models\psi^{i}_{r}}\psi^{i}_{\ell}\,,\]

which, since \(\Sigma_{n}\)-sentences are closed (up to logical equivalence) under conjunction and disjunction, is equivalent to a \(\Sigma_{n}\)-sentence. This contradicts the definition of \(\varphi_{n+1}\).

We are left with the case \(n=1\). Similarly, we construct \(\psi\) such that \(\varphi\) is unsatisfiable if and only if \(\psi\) is unsatisfiable, and if and only if \(\psi\) is equivalent to a \(\Sigma_{1}\)-sentence. However, we do not need to use bounded or split models here. Every satisfiable \(\Sigma_{1}\)-sentence has a model with finitely many traces. Therefore, a simple way to construct \(\psi\) so that it is not equivalent to any \(\Sigma_{1}\)-sentence (unless it is unsatisfiable) is to ensure that every model of \(\psi\) contains infinitely many traces.

Let \(x\notin\mathrm{AP}\), and \(T_{\omega}=\{\emptyset^{n}\{x\}\emptyset^{\omega}\mid n\in\mathbb{N}\}\). As seen in the proof of Lemma 3.3, \(T_{\omega}\) is definable in HyperLTL: There is a sentence \(\varphi_{\omega}\) such that \(T\subseteq(2^{\mathrm{AP}\cup\{x\}})^{\omega}\) is a model of \(\varphi_{\omega}\) if and only if \(T=T_{\omega}\). By relativising quantifiers in \(\varphi_{\omega}\) and \(\varphi\) to traces with or without the atomic proposition \(x\), one can construct a HyperLTL sentence \(\psi\) such that \(T\models\psi\) if and only if \(T_{\omega}\subseteq T\) and \(T\setminus T_{\omega}\models\varphi\). Again, if \(\varphi\) is unsatisfiable then \(\psi\) is unsatisfiable and therefore equivalent to \(\exists\pi.\ a_{\pi}\wedge\neg a_{\pi}\), a \(\Sigma_{1}\)-sentence. Conversely, all models of \(\psi\) contain infinitely many traces and therefore, if \(\psi\) is equivalent to a \(\Sigma_{1}\)-sentence then it is unsatisfiable, and so is \(\varphi\).
## 5. HyperCTL\({}^{*}\) satisfiability is \(\Sigma_{1}^{2}\)-complete

Here, we consider the HyperCTL\({}^{*}\) satisfiability problem: given a HyperCTL\({}^{*}\) sentence, determine whether it has a model \(\mathcal{T}\) (of arbitrary size). We prove that it is much harder than HyperLTL satisfiability. As a key step of the proof, which is interesting in its own right, we also prove that every satisfiable sentence admits a model of cardinality at most \(\mathfrak{c}\) (the cardinality of the continuum). Conversely, we exhibit a satisfiable HyperCTL\({}^{*}\) sentence whose models are all of cardinality at least \(\mathfrak{c}\).

**Theorem 5.1**.: _HyperCTL\({}^{*}\) satisfiability is \(\Sigma_{1}^{2}\)-complete._

### An Upper Bound on the Size of HyperCTL\({}^{*}\) Models

Before we begin proving membership in \(\Sigma^{2}_{1}\), we obtain a bound on the size of minimal models of satisfiable HyperCTL\({}^{*}\) sentences. For this, we use an argument based on Skolem functions, which is a transfinite generalisation of the proof that all satisfiable HyperLTL sentences have a countable model [10]. Later, we complement this upper bound by a matching lower bound, which will be applied in the hardness proof. In the following, we use \(\omega\) and \(\omega_{1}\) to denote the first infinite and the first uncountable ordinal, respectively, and write \(\aleph_{0}\) and \(\aleph_{1}\) for their cardinalities. As \(\mathfrak{c}\) is uncountable, we have \(\aleph_{1}\leq\mathfrak{c}\).

**Lemma 5.2**.: _Each satisfiable HyperCTL\({}^{*}\) sentence \(\varphi\) has a model of size at most \(\mathfrak{c}\)._

The proof of Lemma 5.2 uses a Skolem function to create a model. Before giving this proof, we should therefore first introduce Skolem functions for HyperCTL\({}^{*}\). Let \(\varphi\) be a HyperCTL\({}^{*}\) formula. A quantifier in \(\varphi\) occurs _with polarity 0_ if it occurs inside the scope of an even number of negations, and _with polarity 1_ if it occurs inside the scope of an odd number of negations. We then say that a quantifier _occurs existentially_ if it is an existential quantifier with polarity 0, or a universal quantifier with polarity 1. Otherwise the quantifier _occurs universally_.

A Skolem function will map choices for the universally occurring quantifiers to choices for the existentially occurring quantifiers. For ease of notation, it is convenient to consider a single Skolem function for all existentially occurring quantifiers in a HyperCTL\({}^{*}\) formula \(\varphi\), so the output of the function is an \(l\)-tuple of paths, where \(l\) is the number of existentially occurring quantifiers in \(\varphi\). The input consists of a \(k\)-tuple of paths, where \(k\) is the number of universally occurring quantifiers in \(\varphi\), plus an \(l\)-tuple of natural numbers. The reason for these numbers is that we need to keep track of the time point at which the existentially occurring quantifiers are invoked.

Consider, for example, a HyperCTL\({}^{*}\) formula of the form \(\forall\pi_{1}.\ \mathbf{G}\,\exists\pi_{2}.\ \psi\). This formula states that for every path \(\pi_{1}\), and for every future point \(\pi_{1}(i)\) on that path, there is some \(\pi_{2}\) starting in \(\pi_{1}(i)\) satisfying \(\psi\). So the choice of \(\pi_{2}\) depends not only on \(\pi_{1}\), but also on \(i\). For each existentially occurring quantifier, we need one natural number to represent the time point at which it is invoked.
A HyperCTL\({}^{*}\) Skolem function for a formula \(\varphi\) on a transition system \(\mathcal{T}\) is therefore a function \(f\) of the form \(f\colon\mathit{paths}(\mathcal{T})^{k}\times\mathbb{N}^{l}\to\mathit{paths}(\mathcal{T})^{l}\), where \(\mathit{paths}(\mathcal{T})\) is the set of paths over \(\mathcal{T}\), \(k\) is the number of universally occurring quantifiers in \(\varphi\) and \(l\) is the number of existentially occurring quantifiers. Note that not every function of this form is a Skolem function, but for our upper bound it suffices that every Skolem function is of that form. Now, we are able to prove that every satisfiable HyperCTL\({}^{*}\) formula has a model of size at most \(\mathfrak{c}\).

Proof of Lemma 5.2.: If \(\varphi\) is satisfiable, let \(\mathcal{T}\) be one of its models, and let \(f\) be a Skolem function witnessing the satisfaction of \(\varphi\) on \(\mathcal{T}\). We create a sequence of transition systems \(\mathcal{T}_{\alpha}\) as follows.

* \(\mathcal{T}_{0}\) is a single, arbitrarily chosen, path of \(\mathcal{T}\) starting in the initial vertex.
* \(\mathcal{T}_{\alpha+1}\) contains exactly those vertices and edges from \(\mathcal{T}\) that are (i) part of \(\mathcal{T}_{\alpha}\) or (ii) among the outputs of the Skolem function \(f\) when restricted to input paths from \(\mathcal{T}_{\alpha}\).
* if \(\alpha\) is a limit ordinal, then \(\mathcal{T}_{\alpha}=\bigcup_{\alpha^{\prime}<\alpha}\mathcal{T}_{\alpha^{\prime}}\).

Note that if \(\alpha\) is a limit ordinal then \(\mathcal{T}_{\alpha}\) may contain paths \(\rho(0)\rho(1)\rho(2)\cdots\) that are not included in any \(\mathcal{T}_{\alpha^{\prime}}\) with \(\alpha^{\prime}<\alpha\), as long as each finite prefix \(\rho(0)\cdots\rho(i)\) is included in some \(\mathcal{T}_{\alpha^{\prime}_{i}}\) with \(\alpha^{\prime}_{i}<\alpha\).

First, we show that this procedure reaches a fixed point at \(\alpha=\omega_{1}\). Suppose towards a contradiction that \(\mathcal{T}_{\omega_{1}+1}\neq\mathcal{T}_{\omega_{1}}\). Then there are \(\vec{\rho}=(\rho_{1},\ldots,\rho_{k})\in\mathit{paths}(\mathcal{T}_{\omega_{1}})^{k}\) and \(\vec{n}\in\mathbb{N}^{l}\) such that \(f(\vec{\rho},\vec{n})\not\in\mathit{paths}(\mathcal{T}_{\omega_{1}})^{l}\). Then for every \(i\in\mathbb{N}\) and every \(1\leq j\leq k\), there is an ordinal \(\alpha_{i,j}<\omega_{1}\) such that the finite prefix \(\rho_{j}(0)\cdots\rho_{j}(i)\) is contained in \(\mathcal{T}_{\alpha_{i,j}}\). The set \(\{\alpha_{i,j}\mid i\in\mathbb{N},1\leq j\leq k\}\) is countable, and because \(\alpha_{i,j}<\omega_{1}\) each \(\alpha_{i,j}\) is also countable. A countable union of countable sets is itself countable, so \(\sup\{\alpha_{i,j}\mid i\in\mathbb{N},1\leq j\leq k\}=\bigcup_{i\in\mathbb{N}}\bigcup_{1\leq j\leq k}\alpha_{i,j}=\beta<\omega_{1}\). But then the \(\vec{\rho}\) are all contained in \(\mathcal{T}_{\beta}\), and therefore \(f(\vec{\rho},\vec{n})\in\mathit{paths}(\mathcal{T}_{\beta+1})^{l}\). But \(\beta+1<\omega_{1}\), so this contradicts the assumption that \(f(\vec{\rho},\vec{n})\not\in\mathit{paths}(\mathcal{T}_{\omega_{1}})^{l}\). From this contradiction we obtain \(\mathcal{T}_{\omega_{1}+1}=\mathcal{T}_{\omega_{1}}\), so we have reached a fixed point. Furthermore, because \(\mathcal{T}_{\omega_{1}}\) is contained in \(\mathcal{T}\) and closed under the Skolem function and \(\mathcal{T}\) satisfies \(\varphi\), we obtain that \(\mathcal{T}_{\omega_{1}}\) also satisfies \(\varphi\).
Left to do, then, is to bound the size of \(\mathcal{T}_{\omega_{1}}\), by bounding the number of vertices that get added at each step of its construction. We show by induction that \(|\mathcal{T}_{\alpha}|\leq\mathfrak{c}\) for every \(\alpha\). So, in particular, we have \(|\mathcal{T}_{\omega_{1}}|\leq\mathfrak{c}\), as required. As the base case, we have \(|\mathcal{T}_{0}|\leq\aleph_{0}<\mathfrak{c}\), since \(\mathcal{T}_{0}\) consists of a single path. Consider then \(|\mathcal{T}_{\alpha+1}|\). For each possible input to \(f\), there are at most \(l\) new paths, and therefore at most \(l\cdot\aleph_{0}\) new vertices in \(\mathcal{T}_{\alpha+1}\). Further, there are \(|\mathit{paths}(\mathcal{T}_{\alpha})|^{k}\times|\mathbb{N}|^{l}\) such inputs. By the induction hypothesis, \(|\mathcal{T}_{\alpha}|\leq\mathfrak{c}\), which implies that \(|\mathit{paths}(\mathcal{T}_{\alpha})|\leq\mathfrak{c}\), as every path is a function from \(\mathbb{N}\) to the vertices of \(\mathcal{T}_{\alpha}\) and \(\mathfrak{c}^{\aleph_{0}}=\mathfrak{c}\). As such, the number of added vertices in each step is limited to \(\mathfrak{c}^{k}\times\aleph_{0}^{l}\times\aleph_{0}\times l=\mathfrak{c}\). So \(|\mathcal{T}_{\alpha+1}|\leq|\mathcal{T}_{\alpha}|+\mathfrak{c}=\mathfrak{c}\). If \(\alpha\) is a limit ordinal, \(\mathcal{T}_{\alpha}\) is a union of at most \(\aleph_{1}\) sets, each of which has, by the induction hypothesis, a size of at most \(\mathfrak{c}\). Hence \(|\mathcal{T}_{\alpha}|\leq\aleph_{1}\times\mathfrak{c}=\mathfrak{c}\).

### HyperCTL\({}^{*}\) satisfiability is in \(\Sigma^{2}_{1}\)

With the upper bound on the size of models at hand, we can place HyperCTL\({}^{*}\) satisfiability in \(\Sigma^{2}_{1}\), as the existence of a model of size \(\mathfrak{c}\) can be captured by quantification over type 2 objects.

**Lemma 5.3**.: _HyperCTL\({}^{*}\) satisfiability is in \(\Sigma^{2}_{1}\)._

Proof.: As every satisfiable HyperCTL\({}^{*}\) sentence has a model of size at most \(\mathfrak{c}\), these models can be represented by objects of type 2. Checking whether a formula is satisfied in a transition system is equivalent to the existence of a winning strategy for Verifier in the induced model checking game. Such a strategy is again a type 2 object, which is existentially quantified. Finally, whether it is winning can be expressed by quantification over individual elements and paths, which are objects of types 0 and 1. Checking the satisfiability of a HyperCTL\({}^{*}\) formula \(\varphi\) therefore amounts to existential third-order quantification (to choose a model and a strategy) followed by a second-order formula to verify that \(\varphi\) holds on the model (i.e. that the chosen strategy is winning). Hence, HyperCTL\({}^{*}\) satisfiability is in \(\Sigma^{2}_{1}\).

Formally, we encode the existence of a winning strategy for Verifier in the HyperCTL\({}^{*}\) model checking game \(\mathcal{G}(\mathcal{T},\varphi)\) induced by a transition system \(\mathcal{T}\) and a HyperCTL\({}^{*}\) sentence \(\varphi\). This game is played between Verifier and Falsifier, one of them aiming to prove that \(\mathcal{T}\models\varphi\) and the other aiming to prove \(\mathcal{T}\not\models\varphi\). It is played in a graph whose positions correspond to the subformulas to be checked (together with path assignments for their free variables): each vertex, say one representing a subformula \(\psi\), belongs to one of the players, who has to pick a successor, which represents a subformula of \(\psi\). A play ends at an atomic proposition, at which point the winner can be determined.
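To make the game concrete, the following is a minimal, hedged Python sketch of the successor relation for a few representative move types, following the formal definition given next; the tuple encoding of vertices and all helper names are our own illustrative choices, not part of the formal development.

```python
# A sketch of the moves of the model checking game G(T, phi); the
# vertex encoding (tuples) and all helper names are illustrative.

def negation_successors(Pi, formula, b):
    # (Pi, ~psi, b) has the unique successor (Pi, psi, b + 1 mod 2).
    _, psi = formula
    return [(Pi, psi, (b + 1) % 2)]

def disjunction_successors(Pi, formula, b):
    # (Pi, psi1 v psi2, b) has the successors (Pi, psi_i, b); with
    # b == 0 Verifier moves here, with b == 1 Falsifier does.
    _, psi1, psi2 = formula
    return [(Pi, psi1, b), (Pi, psi2, b)]

def until_successors(Pi, formula, b, horizon):
    # (Pi, psi1 U psi2, b) has one auxiliary successor per j in N;
    # the finite `horizon` is only here so the sketch terminates.
    return [(Pi, formula, b, j) for j in range(horizon)]

def until_aux_successors(Pi, formula, b, j):
    # (Pi, psi1 U psi2, b, j) offers psi2 at offset j and psi1 at
    # every offset j' < j; shift(Pi, j) stands for Pi[j, infinity).
    _, psi1, psi2 = formula
    return [(shift(Pi, j), psi2, b)] + \
           [(shift(Pi, jp), psi1, b) for jp in range(j)]

def shift(Pi, j):
    # Drop the first j positions of every assigned path.
    return {var: path[j:] for var, path in Pi.items()}

# Example: the two successors of (Pi, a_pi v b_pi, 0), Verifier's pick.
Pi = {"pi": ["v0", "v1", "v2"]}
print(disjunction_successors(Pi, ("or", ("ap", "a", "pi"), ("ap", "b", "pi")), 0))
```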
Formally, a vertex of the game is of the form \((\Pi,\psi,b)\) where \(\Pi\) is a path assignment, \(\psi\) is a subformula of \(\varphi\), and \(b\in\{0,1\}\) is a flag used to count the number of negations encountered along the play; the initial vertex is \((\Pi_{\emptyset},\varphi,0)\). Furthermore, for until-subformulas \(\psi\), we need auxiliary vertices of the form \((\Pi,\psi,b,j)\) with \(j\in\mathbb{N}\). The vertices of Verifier are * of the form \((\Pi,\psi,0)\) with \(\psi=\psi_{1}\vee\psi_{2}\), \(\psi=\psi_{1}\,\mathbf{U}\,\psi_{2}\), or \(\psi=\exists\pi.\)\(\psi^{\prime}\), * of the form \((\Pi,\forall\pi.\)\(\psi^{\prime},1)\), or * of the form \((\Pi,\psi_{1}\,\mathbf{U}\,\psi_{2},1,j)\). The moves of the game are defined as follows: * A vertex \((\Pi,a_{\pi},b)\) is terminal. It is winning for Verifier if \(b=0\) and \(a\in\lambda(\Pi(\pi)(0))\) or if \(b=1\) and \(a\notin\lambda(\Pi(\pi)(0))\), where \(\lambda\) is the labelling function of \(\mathcal{T}\). * A vertex \((\Pi,\neg\psi,b)\) has a unique successor \((\Pi,\psi,b+1\bmod 2)\). * A vertex \((\Pi,\psi_{1}\vee\psi_{2},b)\) has two successors of the form \((\Pi,\psi_{i},b)\) for \(i\in\{1,2\}\). * A vertex \((\Pi,\mathbf{X}\,\psi,b)\) has a unique successor \((\Pi[1,\infty),\psi,b)\). * A vertex \((\Pi,\psi_{1}\,\mathbf{U}\,\psi_{2},b)\) has a successor \((\Pi,\psi_{1}\,\mathbf{U}\,\psi_{2},b,j)\) for every \(j\in\mathbb{N}\). * A vertex \((\Pi,\psi_{1}\,\mathbf{U}\,\psi_{2},b,j)\) has the successor \((\Pi[j,\infty),\psi_{2},b)\) as well as successors \((\Pi[j^{\prime},\infty),\psi_{1},b)\) for every \(0\leq j^{\prime}<j\). * A vertex \((\Pi,\exists\pi.\)\(\psi,b)\) has successors \((\Pi[\pi\mapsto\rho],\psi,b)\) for every path \(\rho\) of \(\mathcal{T}\) starting in \(\mathrm{rcnt}(\Pi)\). * A vertex \((\Pi,\forall\pi.\)\(\psi,b)\) has successors \((\Pi[\pi\mapsto\rho],\psi,b)\) for every path \(\rho\) of \(\mathcal{T}\) starting in \(\mathrm{rcnt}(\Pi)\). A play of the model checking game is a finite path through the graph, starting at the initial vertex and ending at a terminal vertex. It is winning for Verifier if the terminal vertex is winning for her. Note that the length of a play is bounded by \(2d\), where \(d\) is the depth5 of \(\varphi\), as the formula is simplified during at least every other move. Footnote 5: The depth is the maximal nesting of quantifiers, Boolean connectives, and temporal operators. A strategy \(\sigma\) for Verifier is a function mapping each of her vertices \(v\) to some successor of \(v\). A play \(v_{0}\cdots v_{k}\) is consistent with \(\sigma\), if \(v_{k^{\prime}+1}=\sigma(v_{k^{\prime}})\) for every \(0\leq k^{\prime}<k\) such that \(v_{k^{\prime}}\) is a vertex of Verifier. A straightforward induction shows that Verifier has a winning strategy for \(\mathcal{G}(\mathcal{T},\varphi)\) if and only if \(\mathcal{T}\models\varphi\). Recall that every satisfiable HyperCTL\({}^{*}\) sentence has a model of cardinality \(\mathfrak{c}\) (Lemma 5.2). Thus, to place HyperCTL\({}^{*}\) satisfiability in \(\Sigma^{2}_{1}\), we express, for a given natural number encoding a HyperCTL\({}^{*}\) formula \(\varphi\), the existence of the following type 2 objects (using suitable encodings): * A transition system \(\mathcal{T}\) of cardinality \(\mathfrak{c}\). * A function \(\sigma\) from \(V\) to \(V\), where \(V\) is the set of vertices of \(\mathcal{G}(\mathcal{T},\varphi)\). Note that a single vertex of \(V\) is a type 1 object. 
Then, we express that \(\sigma\) is a strategy for Verifier, which is easily expressible using quantification over type 1 objects. Thus, it remains to express that \(\sigma\) is winning by stating that every play (a sequence of type 1 objects of bounded length) that is consistent with \(\sigma\) ends in a terminal vertex that is winning for Verifier. Again, we leave the tedious, but standard, details to the reader.

### HyperCTL\({}^{*}\) satisfiability is \(\Sigma^{2}_{1}\)-hard

Next, we prove a matching lower bound. We first describe a satisfiable HyperCTL\({}^{*}\) sentence \(\varphi_{\mathfrak{c}}\) that does not have any model of cardinality less than \(\mathfrak{c}\) (more precisely, the initial vertex must have uncountably many successors), thus matching the upper bound from Lemma 5.2. We construct \(\varphi_{\mathfrak{c}}\) with one particular model \(\mathcal{T}_{\mathfrak{c}}\) in mind, defined below, though it also has other models. The idea is that we want all possible subsets \(A\subseteq\mathbb{N}\) to be represented in \(\mathcal{T}_{\mathfrak{c}}\) in the form of paths \(\rho_{A}\) such that \(\rho_{A}(i)\) is labelled by \(1\) if \(i\in A\), and by \(0\) otherwise. By ensuring that the first vertices of these paths are pairwise distinct, we obtain the desired lower bound on the cardinality.

We express this in HyperCTL\({}^{*}\) as follows: First, we express that there is a part of the model (labelled by fbt) where every reachable vertex has two successors, one labelled with \(0\) and one labelled with \(1\), i.e. the unravelling of this part contains the full binary tree. Thus, this part has a path \(\rho_{A}\) as above for every subset \(A\), but their initial vertices are not necessarily distinct. Hence, we also express that there is another part (labelled by set) that contains a copy of each path in the fbt-part, and that these paths indeed start at distinct successors of the initial vertex. We let \(\mathcal{T}_{\mathfrak{c}}=(V_{\mathfrak{c}},E_{\mathfrak{c}},t_{\varepsilon},\lambda_{\mathfrak{c}})\) (see Figure 2), where

* \(V_{\mathfrak{c}}=\{t_{u}\mid u\in\{0,1\}^{*}\}\cup\{s^{i}_{A}\mid i\in\mathbb{N}\wedge A\subseteq\mathbb{N}\}\),
* \(E_{\mathfrak{c}}=\{(t_{u},t_{u0}),(t_{u},t_{u1})\mid u\in\{0,1\}^{*}\}\cup\{(t_{\varepsilon},s^{0}_{A})\mid A\subseteq\mathbb{N}\}\cup\{(s^{i}_{A},s^{i+1}_{A})\mid A\subseteq\mathbb{N},i\in\mathbb{N}\}\),
* and the labelling \(\lambda_{\mathfrak{c}}\) is defined as
  * \(\lambda_{\mathfrak{c}}(t_{\varepsilon})=\{\texttt{fbt}\}\),
  * \(\lambda_{\mathfrak{c}}(t_{u\cdot 0})=\{\texttt{fbt},0\}\),
  * \(\lambda_{\mathfrak{c}}(t_{u\cdot 1})=\{\texttt{fbt},1\}\), and
  * \(\lambda_{\mathfrak{c}}(s^{i}_{A})=\begin{cases}\{\texttt{set},0\}&\text{if $i\notin A$,}\\ \{\texttt{set},1\}&\text{if $i\in A$.}\end{cases}\)

**Lemma 5.4**.: _There is a satisfiable HyperCTL\({}^{*}\) sentence \(\varphi_{\mathfrak{c}}\) that has only models of cardinality at least \(\mathfrak{c}\)._

Proof.: The formula \(\varphi_{\mathfrak{c}}\) is defined as the conjunction of the formulas below:

1. The label of the initial vertex is \(\{\texttt{fbt}\}\) and the labels of non-initial vertices are \(\{\texttt{fbt},0\}\), \(\{\texttt{fbt},1\}\), \(\{\texttt{set},0\}\), or \(\{\texttt{set},1\}\): \[\forall\pi.\ (\texttt{fbt}_{\pi}\land\neg 0_{\pi}\land\neg 1_{\pi}\land\neg\texttt{set}_{\pi})\land\mathbf{X}\,\mathbf{G}\left((\texttt{set}_{\pi}\leftrightarrow\neg\texttt{fbt}_{\pi})\land(0_{\pi}\leftrightarrow\neg 1_{\pi})\right)\]

Figure 2. A depiction of \(\mathcal{T}_{\mathfrak{c}}\).
Vertices in black (on the left, including the initial vertex) are labelled by fbt, those in red (on the right, excluding the initial vertex) are labelled by set.

2. All fbt-labelled vertices have a successor with label \(\{\texttt{fbt},0\}\) and one with label \(\{\texttt{fbt},1\}\), and all fbt-labelled vertices that are additionally labelled by \(0\) or \(1\) have no set-labelled successor: \[\forall\pi.\ \mathbf{G}\left(\texttt{fbt}_{\pi}\to((\exists\pi_{0}.\ \mathbf{X}(\texttt{fbt}_{\pi_{0}}\wedge 0_{\pi_{0}}))\wedge(\exists\pi_{1}.\ \mathbf{X}(\texttt{fbt}_{\pi_{1}}\wedge 1_{\pi_{1}}))\wedge((0_{\pi}\lor 1_{\pi})\to\forall\pi^{\prime}.\ \mathbf{X}\,\texttt{fbt}_{\pi^{\prime}}))\right)\]
3. From set-labelled vertices, only set-labelled vertices are reachable: \[\forall\pi.\ \mathbf{G}(\texttt{set}_{\pi}\to\mathbf{G}\,\texttt{set}_{\pi})\]
4. For every path of fbt-labelled vertices starting at a successor of the initial vertex, there is a path of set-labelled vertices (also starting at a successor of the initial vertex) with the same \(\{0,1\}\) labelling: \[\forall\pi.\ \big{(}(\mathbf{X}\,\texttt{fbt}_{\pi})\to\exists\pi^{\prime}.\ \mathbf{X}(\texttt{set}_{\pi^{\prime}}\wedge\mathbf{G}(0_{\pi}\leftrightarrow 0_{\pi^{\prime}}))\big{)}\]
5. Any two paths starting in the same set-labelled vertex have the same sequence of labels: \[\forall\pi.\ \mathbf{G}\left(\texttt{set}_{\pi}\to\forall\pi^{\prime}.\ \mathbf{G}(0_{\pi}\leftrightarrow 0_{\pi^{\prime}})\right).\]

It is easy to check that \(\mathcal{T}_{\mathfrak{c}}\models\varphi_{\mathfrak{c}}\). Note, however, that it is not the only model of \(\varphi_{\mathfrak{c}}\): for instance, some paths may be duplicated, or merged after some steps if their label sequences share a common suffix. So, consider an arbitrary transition system \(\mathcal{T}=(V,E,v_{I},\lambda)\) such that \(\mathcal{T}\models\varphi_{\mathfrak{c}}\). By condition 2, for every set \(A\subseteq\mathbb{N}\), there is a path \(\rho_{A}\) starting at a successor of \(v_{I}\) such that \(\lambda(\rho_{A}(i))=\{\texttt{fbt},1\}\) if \(i\in A\) and \(\lambda(\rho_{A}(i))=\{\texttt{fbt},0\}\) if \(i\notin A\). Condition 4 implies that there is also a set-labelled path \(\rho^{\prime}_{A}\) such that \(\rho^{\prime}_{A}\) starts at a successor of \(v_{I}\) and has the same \(\{0,1\}\) labelling as \(\rho_{A}\). Finally, by condition 5, if \(A\neq B\) then \(\rho^{\prime}_{A}(0)\neq\rho^{\prime}_{B}(0)\). So, the initial vertex has at least as many successors as there are subsets of \(\mathbb{N}\), i.e., at least \(\mathfrak{c}\) many.

Before moving to the proof that HyperCTL\({}^{*}\) satisfiability is \(\Sigma^{2}_{1}\)-hard, we introduce one last auxiliary formula that will be used in the reduction, showing that addition and multiplication can be defined in HyperCTL\({}^{*}\), and in fact even in HyperLTL, as follows: Let \(\text{AP}=\{\texttt{arg1},\texttt{arg2},\texttt{res},\texttt{add},\texttt{mult}\}\) and let \(T_{(+,\cdot)}\) be the set of all traces \(t\in\left(2^{\text{AP}}\right)^{\omega}\) such that

* there are unique \(n_{1},n_{2},n_{3}\in\mathbb{N}\) with \(\texttt{arg1}\in t(n_{1})\), \(\texttt{arg2}\in t(n_{2})\), and \(\texttt{res}\in t(n_{3})\), and
* either \(\texttt{add}\in t(n)\) and \(\texttt{mult}\notin t(n)\) for all \(n\) and \(n_{1}+n_{2}=n_{3}\), or \(\texttt{mult}\in t(n)\) and \(\texttt{add}\notin t(n)\) for all \(n\) and \(n_{1}\cdot n_{2}=n_{3}\).
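For intuition, membership in \(T_{(+,\cdot)}\) can be checked mechanically on finitely-represented traces; the following is a small, hedged Python sketch in which a trace is given by a finite list of frozensets plus the convention that it continues forever with just the operation symbol. This encoding and all helper names are our own illustrative assumptions, not part of the formal development.

```python
# Illustrative membership check for T_{(+,.)}: a trace is represented
# by a finite list of frozensets of propositions, continued forever
# with only the operation symbol.

def in_T_plus_times(prefix: list) -> bool:
    ops = {op for step in prefix for op in step & {"add", "mult"}}
    if len(ops) != 1:                    # exactly one of add/mult, globally
        return False
    op = ops.pop()
    if any(op not in step for step in prefix):
        return False                     # the operation holds everywhere
    positions = {}
    for a in ("arg1", "arg2", "res"):
        occ = [i for i, step in enumerate(prefix) if a in step]
        if len(occ) != 1:                # unique occurrence of each marker
            return False
        positions[a] = occ[0]
    n1, n2, n3 = positions["arg1"], positions["arg2"], positions["res"]
    return n3 == (n1 + n2 if op == "add" else n1 * n2)

# 3 + 2 = 5: arg1 at position 3, arg2 at position 2, res at position 5.
t = [frozenset({"add"})] * 7
t[3] = frozenset({"add", "arg1"})
t[2] = frozenset({"add", "arg2"})
t[5] = frozenset({"add", "res"})
assert in_T_plus_times(t)
```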
**Lemma 5.5**.: _There is a HyperLTL sentence \(\varphi_{(+,\cdot)}\) which has \(T_{(+,\cdot)}\) as unique model._

Proof.: Consider the conjunction of the following HyperLTL sentences:

1. For every trace \(t\) there are unique \(n_{1},n_{2},n_{3}\in\mathbb{N}\) with \(\texttt{arg1}\in t(n_{1})\), \(\texttt{arg2}\in t(n_{2})\), and \(\texttt{res}\in t(n_{3})\): \[\forall\pi.\ \bigwedge_{a\in\{\texttt{arg1},\texttt{arg2},\texttt{res}\}}(\neg a_{\pi})\,\mathbf{U}(a_{\pi}\wedge\mathbf{X}\,\mathbf{G}\neg a_{\pi})\]
2. Every trace \(t\) satisfies either \(\texttt{add}\in t(n)\) and \(\texttt{mult}\notin t(n)\) for all \(n\) or \(\texttt{mult}\in t(n)\) and \(\texttt{add}\notin t(n)\) for all \(n\): \[\forall\pi.\ \mathbf{G}(\texttt{add}_{\pi}\wedge\neg\texttt{mult}_{\pi})\lor\mathbf{G}(\texttt{mult}_{\pi}\wedge\neg\texttt{add}_{\pi})\]

In the following, we only consider traces satisfying these formulas, as all others are not part of a model. Thus, we will speak of addition traces (if add holds) and multiplication traces (if mult holds). Furthermore, every trace encodes two unique arguments (given by the positions \(n_{1}\) and \(n_{2}\) such that \(\mathtt{arg1}\in t(n_{1})\) and \(\mathtt{arg2}\in t(n_{2})\)) and a unique result (the position \(n_{3}\) such that \(\mathtt{res}\in t(n_{3})\)). Next, we need to express that all possible arguments are represented in a model, i.e. for every \(n_{1}\) and every \(n_{2}\) there are two traces \(t\) with \(\mathtt{arg1}\in t(n_{1})\) and \(\mathtt{arg2}\in t(n_{2})\), one addition trace and one multiplication trace. We do so inductively.

3. There are two traces with both arguments being zero (i.e. \(\mathtt{arg1}\) and \(\mathtt{arg2}\) hold in the first position), one for addition and one for multiplication: \[\bigwedge_{a\in\{\mathtt{add},\mathtt{mult}\}}\exists\pi.\ a_{\pi}\wedge\mathtt{arg1}_{\pi}\wedge\mathtt{arg2}_{\pi}\]
4. Now, we express that for every trace, say encoding the arguments \(n_{1}\) and \(n_{2}\), the argument combinations \((n_{1}+1,n_{2})\) and \((n_{1},n_{2}+1)\) are also represented in the model, again both for addition and multiplication (here we rely on the fact that either add or mult holds at every position, as specified above): \[\forall\pi.\ \exists\pi_{1},\pi_{2}.\ \left(\bigwedge_{i\in\{1,2\}}\mathtt{add}_{\pi}\leftrightarrow\mathtt{add}_{\pi_{i}}\right)\wedge\mathbf{F}(\mathtt{arg1}_{\pi}\wedge\mathbf{X}\,\mathtt{arg1}_{\pi_{1}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge\mathtt{arg2}_{\pi_{1}})\wedge\mathbf{F}(\mathtt{arg1}_{\pi}\wedge\mathtt{arg1}_{\pi_{2}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge\mathbf{X}\,\mathtt{arg2}_{\pi_{2}})\]

Every model of these formulas contains a trace representing each possible combination of arguments, both for addition and multiplication. To conclude, we need to express that the result in each trace is correct. We do so by capturing the inductive definition of addition in terms of repeated increments (which can be expressed by the next operator) and the inductive definition of multiplication in terms of repeated addition. Formally, this is captured by the next formulas:

5. For every trace \(t\): if \(\{\mathtt{add},\mathtt{arg1}\}\subseteq t(0)\) then \(\mathtt{arg2}\) and \(\mathtt{res}\) have to hold at the same position (this captures \(0+n=n\)): \[\forall\pi.\ (\mathtt{add}_{\pi}\wedge\mathtt{arg1}_{\pi})\rightarrow\mathbf{F}(\mathtt{arg2}_{\pi}\wedge\mathtt{res}_{\pi})\]
6. For each trace \(t\) with \(\mathtt{add}\in t(0)\), \(\mathtt{arg1}\in t(n_{1})\), \(\mathtt{arg2}\in t(n_{2})\), and \(\mathtt{res}\in t(n_{3})\) such that \(n_{1}>0\) there is a trace \(t^{\prime}\) such that \(\mathtt{add}\in t^{\prime}(0)\), \(\mathtt{arg1}\in t^{\prime}(n_{1}-1)\), \(\mathtt{arg2}\in t^{\prime}(n_{2})\), and \(\mathtt{res}\in t^{\prime}(n_{3}-1)\) (this captures \(n_{1}+n_{2}=n_{3}\Leftrightarrow(n_{1}-1)+n_{2}=n_{3}-1\) for \(n_{1}>0\)): \[\forall\pi.\ \exists\pi^{\prime}.\ (\mathtt{add}_{\pi}\wedge\neg\mathtt{arg1}_{\pi})\rightarrow(\mathtt{add}_{\pi^{\prime}}\wedge\mathbf{F}(\mathbf{X}\,\mathtt{arg1}_{\pi}\wedge\mathtt{arg1}_{\pi^{\prime}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge\mathtt{arg2}_{\pi^{\prime}})\wedge\mathbf{F}(\mathbf{X}\,\mathtt{res}_{\pi}\wedge\mathtt{res}_{\pi^{\prime}}))\]
7. For every trace \(t\): if \(\{\mathtt{mult},\mathtt{arg1}\}\subseteq t(0)\) then also \(\mathtt{res}\in t(0)\) (this captures \(0\cdot n=0\)): \[\forall\pi.\ (\mathtt{mult}_{\pi}\wedge\mathtt{arg1}_{\pi})\rightarrow\mathtt{res}_{\pi}\]
8. Similarly, for each trace \(t\) with \(\mathtt{mult}\in t(0)\), \(\mathtt{arg1}\in t(n_{1})\), \(\mathtt{arg2}\in t(n_{2})\), and \(\mathtt{res}\in t(n_{3})\) such that \(n_{1}>0\) there is a trace \(t^{\prime}\) such that \(\mathtt{mult}\in t^{\prime}(0)\), \(\mathtt{arg1}\in t^{\prime}(n_{1}-1)\), \(\mathtt{arg2}\in t^{\prime}(n_{2})\), and \(\mathtt{res}\in t^{\prime}(n_{3}-n_{2})\). The latter requirement is expressed by the existence of a trace \(t^{\prime\prime}\) with \(\mathtt{add}\in t^{\prime\prime}(0)\), \(\mathtt{arg2}\in t^{\prime\prime}(n_{2})\), \(\mathtt{res}\in t^{\prime\prime}(n_{3})\), and \(\mathtt{arg1}\) holding in \(t^{\prime\prime}\) at the same time as \(\mathtt{res}\) in \(t^{\prime}\), which implies \(\mathtt{res}\in t^{\prime}(n_{3}-n_{2})\). Altogether, this captures \(n_{1}\cdot n_{2}=n_{3}\Leftrightarrow(n_{1}-1)\cdot n_{2}=n_{3}-n_{2}\) for \(n_{1}>0\). \[\forall\pi.\ \exists\pi^{\prime},\pi^{\prime\prime}.\ (\mathtt{mult}_{\pi}\wedge\neg\mathtt{arg1}_{\pi})\to(\mathtt{mult}_{\pi^{\prime}}\wedge\mathtt{add}_{\pi^{\prime\prime}}\wedge\mathbf{F}(\mathbf{X}\,\mathtt{arg1}_{\pi}\wedge\mathtt{arg1}_{\pi^{\prime}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge\mathtt{arg2}_{\pi^{\prime}}\wedge\mathtt{arg2}_{\pi^{\prime\prime}})\wedge\mathbf{F}(\mathtt{res}_{\pi^{\prime}}\wedge\mathtt{arg1}_{\pi^{\prime\prime}})\wedge\mathbf{F}(\mathtt{res}_{\pi}\wedge\mathtt{res}_{\pi^{\prime\prime}}))\]

Now, \(T_{(+,\cdot)}\) is a model of the conjunction \(\varphi_{(+,\cdot)}\) of these eight formulas. Conversely, every model of \(\varphi_{(+,\cdot)}\) contains all possible combinations of arguments (both for addition and multiplication) due to Formulas (3) and (4). Now, Formulas (5) to (8) ensure that the result is _correct_ on these traces. Altogether, this implies that \(T_{(+,\cdot)}\) is the unique model of \(\varphi_{(+,\cdot)}\).

To establish \(\Sigma^{2}_{1}\)-hardness, we give an encoding of formulas of existential third-order arithmetic into HyperCTL\({}^{*}\), i.e. every formula of the form \(\exists x_{1}.\ \ldots\exists x_{n}.\ \psi\), where \(x_{1},\ldots,x_{n}\) are third-order variables and \(\psi\) is a formula of second-order arithmetic, can be translated into a HyperCTL\({}^{*}\) sentence.
As explained in Section 2, we can (and do for the remainder of the section) assume that first-order (type 0) variables range over natural numbers, second-order (type 1) variables range over sets of natural numbers, and third-order (type 2) variables range over sets of sets of natural numbers.

**Lemma 5.6**.: _One can effectively translate sentences \(\varphi\) of existential third-order arithmetic into HyperCTL\({}^{*}\) sentences \(\varphi^{\prime}\) such that \((\mathbb{N},+,\cdot,<,\in)\) is a model of \(\varphi\) if and only if \(\varphi^{\prime}\) is satisfiable._

Proof.: The idea of the proof is as follows. We represent sets of natural numbers as infinite paths with labels in \(\{0,1\}\), so that quantification over sets of natural numbers in \(\psi\) can be replaced by HyperCTL\({}^{*}\) path quantification. First-order quantification is handled in the same way, but using paths where exactly one vertex is labelled \(1\). In particular, we encode first- and second-order variables \(x\) of \(\varphi\) as path variables \(\pi_{x}\) of \(\varphi^{\prime}\). For this to work, we need to make sure that every possible set has a path representative in the transition system (possibly several isomorphic ones). This is where the formula \(\varphi_{\mathfrak{c}}\) defined in Lemma 5.4 is used. For arithmetical operations, we rely on the formula \(\varphi_{(+,\cdot)}\) from Lemma 5.5. Finally, we associate with every existentially quantified third-order variable \(x_{i}\) an atomic proposition \(a_{i}\), so that for a second-order variable \(y\), the atomic formula \(y\in x_{i}\) is interpreted as the atomic proposition \(a_{i}\) being true on the second vertex of \(\pi_{y}\). This is all explained in more detail below.

Let \(\varphi=\exists x_{1}.\ \ldots\exists x_{n}.\ \psi\) where \(x_{1},\ldots,x_{n}\) are third-order variables and \(\psi\) is a formula of second-order arithmetic. We use the atomic propositions \[\mathrm{AP}=\{a_{1},\ldots,a_{n},0,1,\mathtt{set},\mathtt{fbt},\mathtt{arg1},\mathtt{arg2},\mathtt{res},\mathtt{mult},\mathtt{add}\}.\]

Given an interpretation \(\nu:\{x_{1},\ldots,x_{n}\}\to 2^{(2^{\mathbb{N}})}\) of the third-order variables of \(\varphi\), we denote by \(\mathcal{T}_{\nu}\) the transition system over \(\mathrm{AP}\) obtained as follows: We start from \(\mathcal{T}_{\mathfrak{c}}\), and extend it with an \(\{a_{1},\ldots,a_{n}\}\)-labelling by setting \(a_{i}\in\lambda(\rho_{A}(0))\) if \(A\in\nu(x_{i})\); then, we add to this transition system all traces in \(T_{(+,\cdot)}\) as disjoint paths below the initial vertex.

From the formulas \(\varphi_{\mathfrak{c}}\) and \(\varphi_{(+,\cdot)}\) defined in Lemmas 5.4 and 5.5, it is not difficult to construct a formula \(\varphi_{(\mathfrak{c},+,\cdot)}\) such that:

* For all \(\nu:\{x_{1},\ldots,x_{n}\}\to 2^{(2^{\mathbb{N}})}\), the transition system \(\mathcal{T}_{\nu}\) is a model of \(\varphi_{(\mathfrak{c},+,\cdot)}\).
* Conversely, in any model \(\mathcal{T}=(V,E,v_{I},\lambda)\) of \(\varphi_{(\mathfrak{c},+,\cdot)}\), the following conditions are satisfied:
  1. For every path \(\rho\) starting at a set-labelled successor of the initial vertex \(v_{I}\), the vertex \(\rho(0)\) has a label of the form \(\lambda(\rho(0))=\{\mathtt{set},b\}\cup\ell\) with \(b\in\{0,1\}\) and \(\ell\subseteq\{a_{1},\ldots,a_{n}\}\), and every vertex \(\rho(i)\) with \(i>0\) has a label \(\lambda(\rho(i))=\{\mathtt{set},0\}\) or \(\lambda(\rho(i))=\{\mathtt{set},1\}\).
  2. For every \(A\subseteq\mathbb{N}\), there exists a set-labelled path \(\rho_{A}\) starting at a successor of \(v_{I}\) such that \(1\in\lambda(\rho_{A}(i))\) if \(i\in A\), and \(0\in\lambda(\rho_{A}(i))\) if \(i\notin A\). Moreover, all such paths have the same \(\{a_{1},\ldots,a_{n}\}\) labelling; this can be expressed by the formula \[\forall\pi,\pi^{\prime}.\ \mathbf{X}\left(\Big{(}\,\mathbf{G}(\mathtt{set}_{\pi}\wedge\mathtt{set}_{\pi^{\prime}}\wedge(1_{\pi}\leftrightarrow 1_{\pi^{\prime}}))\Big{)}\rightarrow\bigwedge_{a\in\{a_{1},\ldots,a_{n}\}}a_{\pi}\leftrightarrow a_{\pi^{\prime}}\right)\,.\]
  3. For every path \(\rho\) starting at an add- or mult-labelled successor of the initial vertex, the label sequence \(\lambda(\rho(0))\lambda(\rho(1))\cdots\) of \(\rho\) is in \(T_{(+,\cdot)}\).
  4. Conversely, for every trace \(t\in T_{(+,\cdot)}\), there exists a path \(\rho\) starting at a successor of the initial vertex such that \(\lambda(\rho(0))\lambda(\rho(1))\cdots=t\).

We then let \(\varphi^{\prime}=\varphi_{(\mathfrak{c},+,\cdot)}\wedge h(\psi)\), where \(h(\psi)\) is defined inductively from the second-order body \(\psi\) of \(\varphi\) as follows:

* \(h(\psi_{1}\vee\psi_{2})=h(\psi_{1})\lor h(\psi_{2})\) and \(h(\neg\psi_{1})=\neg h(\psi_{1})\).
* If \(x\) ranges over sets of natural numbers, \[h(\exists x.\ \psi_{1})=\exists\pi_{x}.\ ((\mathbf{X}\,\mathtt{set}_{\pi_{x}})\wedge h(\psi_{1})),\] and \[h(\forall x.\ \psi_{1})=\forall\pi_{x}.\ ((\mathbf{X}\,\mathtt{set}_{\pi_{x}})\to h(\psi_{1})).\]
* If \(x\) ranges over natural numbers, \[h(\exists x.\ \psi_{1})=\exists\pi_{x}.\ ((\mathbf{X}\,\mathtt{set}_{\pi_{x}})\wedge\mathbf{X}(0_{\pi_{x}}\,\mathbf{U}(1_{\pi_{x}}\wedge\mathbf{X}\,\mathbf{G}\,0_{\pi_{x}}))\wedge h(\psi_{1})),\] and \[h(\forall x.\ \psi_{1})=\forall\pi_{x}.\ ((\mathbf{X}\,\mathtt{set}_{\pi_{x}})\wedge\mathbf{X}(0_{\pi_{x}}\,\mathbf{U}(1_{\pi_{x}}\wedge\mathbf{X}\,\mathbf{G}\,0_{\pi_{x}}))\to h(\psi_{1})).\] Here, the subformula \(0_{\pi_{x}}\,\mathbf{U}(1_{\pi_{x}}\wedge\mathbf{X}\,\mathbf{G}\,0_{\pi_{x}})\) expresses that there is a single \(1\) on the trace assigned to \(\pi_{x}\), i.e. the path represents a singleton set.
* If \(y\) ranges over sets of natural numbers, \(h(y\in x_{i})=\mathbf{X}(a_{i})_{\pi_{y}}\).
* If \(x\) ranges over natural numbers and \(y\) over sets of natural numbers, \(h(x\in y)=\mathbf{F}(1_{\pi_{x}}\wedge 1_{\pi_{y}})\).
* \(h(x<y)=\mathbf{F}(1_{\pi_{x}}\wedge\mathbf{X}\,\mathbf{F}\,1_{\pi_{y}})\).
* \(h(x+y=z)=\exists\pi.\ (\mathbf{X}\,\mathtt{add}_{\pi})\wedge\mathbf{F}(\mathtt{arg1}_{\pi}\wedge 1_{\pi_{x}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge 1_{\pi_{y}})\wedge\mathbf{F}(\mathtt{res}_{\pi}\wedge 1_{\pi_{z}})\), and \(h(x\cdot y=z)=\exists\pi.\ (\mathbf{X}\,\mathtt{mult}_{\pi})\wedge\mathbf{F}(\mathtt{arg1}_{\pi}\wedge 1_{\pi_{x}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge 1_{\pi_{y}})\wedge\mathbf{F}(\mathtt{res}_{\pi}\wedge 1_{\pi_{z}})\).

If \(\psi\) is true under some interpretation \(\nu\) of \(x_{1},\ldots,x_{n}\) as sets of sets of natural numbers, then the transition system \(\mathcal{T}_{\nu}\) defined above is a model of \(\varphi^{\prime}\).
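For concreteness, here is a small worked instance of the translation \(h\), assembled by us from the clauses above: for the arithmetic formula \(\exists x.\ \exists y.\ x<y\) with first-order \(x\) and \(y\), unfolding \(h\) yields

\[h(\exists x.\ \exists y.\ x<y)=\exists\pi_{x}.\ \big((\mathbf{X}\,\mathtt{set}_{\pi_{x}})\wedge\mathbf{X}(0_{\pi_{x}}\,\mathbf{U}(1_{\pi_{x}}\wedge\mathbf{X}\,\mathbf{G}\,0_{\pi_{x}}))\wedge\exists\pi_{y}.\ \big((\mathbf{X}\,\mathtt{set}_{\pi_{y}})\wedge\mathbf{X}(0_{\pi_{y}}\,\mathbf{U}(1_{\pi_{y}}\wedge\mathbf{X}\,\mathbf{G}\,0_{\pi_{y}}))\wedge\mathbf{F}(1_{\pi_{x}}\wedge\mathbf{X}\,\mathbf{F}\,1_{\pi_{y}})\big)\big),\]

i.e. two singleton-representing set-paths are required to exist whose unique \(1\)-positions are ordered, mirroring \(x<y\).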
Conversely, if \(\mathcal{T}\models\varphi^{\prime}\) for some transition system \(\mathcal{T}\), then for all sets \(A\subseteq\mathbb{N}\) there is a path \(\rho_{A}\) matching \(A\) in \(\mathcal{T}\), and all such paths have the same \(\{a_{1},\ldots,a_{n}\}\)-labelling, so we can define an interpretation \(\nu\) of \(x_{1},\ldots,x_{n}\) by taking \(A\in\nu(x_{i})\) if and only if \(a_{i}\in\lambda(\rho_{A}(0))\). Under this interpretation \(\psi\) holds, and thus \(\varphi\) is true, as first- and second-order quantification in \((\mathbb{N},+,\cdot,<,\in)\) is mimicked by path quantification in \(\mathcal{T}\). Now, we have all the tools at hand to prove the lower bound on the HyperCTL\({}^{*}\) satisfiability problem. **Lemma 5.7**.: _HyperCTL\({}^{*}\) satisfiability is \(\Sigma^{2}_{1}\)-hard._ Proof.: Let \(N\) be a \(\Sigma^{2}_{1}\) set, i.e. \(N=\{x\in\mathbb{N}\mid\exists x_{0}.\ \cdots\exists x_{k}.\ \psi(x,x_{0},\dots,x_{k})\}\) for some second-order arithmetic formula \(\psi\) with existentially quantified third-order variables \(x_{i}\). For every \(n\in\mathbb{N}\), we define the sentence \[\varphi_{n}=\exists x_{0}.\ \cdots\exists x_{k}.\ \psi(n,x_{0},\dots,x_{k})\,.\] Recall that every fixed natural number \(n\) is definable in first-order arithmetic, which is the reason we can use \(n\) in \(\psi\). Then \(\varphi_{n}\) is true if and only if \(n\in N\). Combining this with Lemma 5.6, we obtain a computable function that maps any \(n\in\mathbb{N}\) to a HyperCTL\({}^{*}\) formula \(\varphi^{\prime}_{n}\) such that \(n\in N\) if and only if \(\varphi^{\prime}_{n}\) is satisfiable. ### Variations of HyperCTL\({}^{*}\) Satisfiability The general HyperCTL\({}^{*}\) satisfiability problem, as studied above, asks for the existence of a model of arbitrary size. In the \(\Sigma^{2}_{1}\)-hardness proof we relied on uncountable models with infinite branching. Hence, it is natural to ask whether satisfiability is easier when we consider restricted classes of transition systems. In the remainder of this section, we study the following variations of satisfiability. * The HyperCTL\({}^{*}\)_finite satisfiability problem_: given a HyperCTL\({}^{*}\) sentence, determine whether it has a finite model. * The HyperCTL\({}^{*}\)_finitely-branching satisfiability problem_: given a HyperCTL\({}^{*}\) sentence, determine whether it has a finitely-branching model.6 Footnote 6: A transition system is finitely-branching, if every vertex has only finitely many successors. * The HyperCTL\({}^{*}\)_countable satisfiability problem_: given a HyperCTL\({}^{*}\) sentence, determine whether it has a countable model. Let us begin with finite satisfiability. In contrast to general satisfiability, it is much simpler, but still undecidable. **Theorem 5.8**.: _HyperCTL\({}^{*}\) finite satisfiability is \(\Sigma^{0}_{1}\)-complete._ Proof.: The upper bound follows from HyperCTL\({}^{*}\) model checking being decidable [14] (therefore, the finite satisfiability problem is recursively enumerable and thus in \(\Sigma^{0}_{1}\)) while the matching lower bound is inherited from HyperLTL [16]. 
Next, we show that the complexity of HyperCTL\({}^{*}\) finitely-branching satisfiability and countable satisfiability lies between that of finite satisfiability and general satisfiability: both are equivalent to _truth in second-order arithmetic_, that is, the problem of deciding whether a given sentence of second-order arithmetic is satisfied in the standard model \((\mathbb{N},0,1,+,\cdot,<,\in)\) of second-order arithmetic. **Theorem 5.9**.: _All of the following problems are effectively interreducible:_ 1. _HyperCTL\({}^{*}\) countable satisfiability._ 2. _HyperCTL\({}^{*}\) finitely-branching satisfiability._ 3. _Truth in second-order arithmetic._ To prove Theorem 5.9, we show the implication \((1)\Rightarrow(3)\) in Lemma 5.10 and the implication \((2)\Rightarrow(3)\) in Lemma 5.11. Then, in Lemma 5.15 we show both converse implications simultaneously. We start by showing that countable satisfiability can be effectively reduced to truth in second-order arithmetic. As every countable set is in bijection with the natural numbers, countable satisfiability asks for the existence of a model whose set of vertices is the set of natural numbers. This can easily be expressed in second-order arithmetic, leading to a fairly straightforward reduction to truth in second-order arithmetic. **Lemma 5.10**.: _There is an effective reduction from HyperCTL\({}^{*}\) countable satisfiability to truth in second-order arithmetic._ Proof.: Let \(\varphi\) be a HyperCTL\({}^{*}\) sentence. We construct a sentence \(\varphi^{c}\) of second-order arithmetic such that \((\mathbb{N},0,1,+,\cdot,<,\in)\models\varphi^{c}\) if and only if \(\varphi\) has a countable model, or, equivalently, if and only if \(\varphi\) has a model of the form \(\mathcal{T}=(\mathbb{N},E,0,\lambda)\) with vertex set \(\mathbb{N}\), which implies that the set \(E\) of edges is a subset of \(\mathbb{N}\times\mathbb{N}\). Note that we assume (w.l.o.g.) that the initial vertex is \(0\). The labeling function \(\lambda\) maps each natural number (that is, each vertex) to a set of atomic propositions. We assume a fixed encoding of valuations in \(2^{\mathrm{AP}}\) as natural numbers in \(\{0,\ldots,|2^{\mathrm{AP}}|-1\}\), so that we can equivalently view \(\lambda\) as a function \(\lambda:\mathbb{N}\to\mathbb{N}\) such that \(\lambda(n)<|2^{\mathrm{AP}}|\) for all \(n\in\mathbb{N}\). Note that binary relations over \(\mathbb{N}\) can be encoded by functions from natural numbers to natural numbers, and the encoding can be implemented in first-order arithmetic. The formula \(\varphi^{c}\) is defined as \[\varphi^{c}=\exists E.\,\exists\lambda.\,\left(\forall x.\,\lambda(x)<|2^{ \mathrm{AP}}|\right)\wedge\varphi^{\prime}(E,\lambda,0)\,,\] where \(E\) is a second-order variable ranging over subsets of \(\mathbb{N}\times\mathbb{N}\), \(\lambda\) a second-order variable ranging over functions from \(\mathbb{N}\to\mathbb{N}\), and \(\varphi^{\prime}(E,\lambda,i)\), defined below, expresses the fact that the transition system \((\mathbb{N},E,0,\lambda)\) is a model of \(\varphi\). We use the following abbreviations: * Given a second-order variable \(f\) ranging over functions from \(\mathbb{N}\) to \(\mathbb{N}\), the formula \(\mathit{path}(f,E)=\forall n.\,\left(f(n),f(n+1)\right)\in E\) expresses the fact that \(f(0)f(1)f(2)\ldots\) is a path in \((\mathbb{N},E,0,\lambda)\). 
* Given second-order variables \(f\) and \(f^{\prime}\) ranging over functions from \(\mathbb{N}\) to \(\mathbb{N}\) and a first-order variable \(i\) ranging over natural numbers, we let \[\mathit{branch}(f,f^{\prime},i,E)=\mathit{path}(f,E)\wedge\mathit{path}(f^{\prime},E)\wedge\forall j\leq i.\,\,f(j)=f^{\prime}(j)\,.\] This formula is satisfied by paths \(f\) and \(f^{\prime}\) if \(f\) and \(f^{\prime}\) coincide up to (and including) position \(i\). We will use it to restrict path quantification to paths that start at a given position of a given path.

We define \(\varphi^{\prime}\) inductively from \(\varphi\), therefore considering in general HyperCTL\({}^{*}\) formulas with free variables \(\pi_{1},\ldots,\pi_{k}\), in which case the formula \(\varphi^{\prime}\) has free variables \(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i\). The variable \(i\) is interpreted as the current time point. If \(\varphi\) is a sentence, \(i\) is not free in \(\varphi^{\prime}\), as we use \(0\) in that case. Also, the translation depends on an ordering of the free variables of \(\varphi\), i.e. quantified paths start at position \(i\) of the largest variable, as path quantification depends on the context of a formula with free variables. In the following, we indicate the ordering by the naming of the variables, i.e. we have \(\pi_{1}<\cdots<\pi_{k}\).

* \(a^{\prime}_{\pi_{j}}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i)=\bigvee_{\{v\in 2^{\mathrm{AP}}\mid a\in v\}}\lambda(f_{\pi_{j}}(i))=[v]\), where \([v]\) is the encoding of \(v\) as a natural number.
* If \(\varphi(\pi_{1},\ldots,\pi_{k})=\neg\psi\) then \(\varphi^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i)=\neg(\psi^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i))\).
* If \(\varphi(\pi_{1},\ldots,\pi_{k})=\psi_{1}\vee\psi_{2}\) then \[\varphi^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i)=(\psi^{\prime}_{1}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i))\vee(\psi^{\prime}_{2}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i)).\]
* If \(\varphi(\pi_{1},\ldots,\pi_{k})=\mathbf{X}\,\psi\), then we define \[\varphi^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i)=\psi^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i+1)\,.\]
* If \(\varphi(\pi_{1},\ldots,\pi_{k})=\psi_{1}\,\mathbf{U}\,\psi_{2}\), then we define \[\varphi^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i)=\exists j.\,j\geq i\wedge\psi_{2}^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},j)\wedge\forall j^{\prime}.\,\,(i\leq j^{\prime}<j\to\psi_{1}^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},j^{\prime}))\,.\]
* If \(\varphi=\exists\pi_{1}.\ \psi(\pi_{1})\) is a sentence, then we define \[\varphi^{\prime}(E,\lambda)=\exists f_{\pi_{1}}.\,\,\mathit{path}(f_{\pi_{1}},E)\wedge f_{\pi_{1}}(0)=0\wedge\psi^{\prime}(E,\lambda,f_{\pi_{1}},0)\,.\] Recall that \(f_{\pi_{1}}\) ranges over functions from \(\mathbb{N}\) to \(\mathbb{N}\) and note that the formula requires \(f_{\pi_{1}}\) to encode a path and to start at the initial vertex \(0\).
* If \(\varphi(\pi_{1},\ldots,\pi_{k})=\exists\pi_{k+1}.\,\,\psi(\pi_{1},\ldots,\pi_{k},\pi_{k+1})\) with \(k>0\), then we define \[\varphi^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i)=\exists f_{\pi_{k+1}}.\,\,\mathit{branch}(f_{\pi_{k+1}},f_{\pi_{k}},i,E)\wedge\psi^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},f_{\pi_{k+1}},i)\,.\] Here, we make use of the ordering of the free variables of \(\varphi\), as the translated formula requires the function assigned to \(f_{\pi_{k+1}}\) to encode a path branching off the path encoded by the function assigned to \(f_{\pi_{k}}\).
* If \(\varphi=\forall\pi_{1}.\,\,\psi(\pi_{1})\) is a sentence, then we define \[\varphi^{\prime}(E,\lambda)=\forall f_{\pi_{1}}.\,\,(\mathit{path}(f_{\pi_{1}},E)\wedge f_{\pi_{1}}(0)=0)\to\psi^{\prime}(E,\lambda,f_{\pi_{1}},0)\,.\]
* If \(\varphi(\pi_{1},\ldots,\pi_{k})=\forall\pi_{k+1}.\,\,\psi(\pi_{1},\ldots,\pi_{k},\pi_{k+1})\) with \(k>0\), then we define \[\varphi^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},i)=\forall f_{\pi_{k+1}}.\,\,\mathit{branch}(f_{\pi_{k+1}},f_{\pi_{k}},i,E)\to\psi^{\prime}(E,\lambda,f_{\pi_{1}},\ldots,f_{\pi_{k}},f_{\pi_{k+1}},i)\,.\]

Now, \(\varphi\) has a countable model if and only if the second-order sentence \(\varphi^{c}\) is true in \((\mathbb{N},0,1,+,\cdot,<,\in)\).

Since every finitely-branching model has countably many vertices that are reachable from the initial vertex, the previous proof can be easily adapted for the case of finitely-branching satisfiability.

**Lemma 5.11**.: _There is an effective reduction from HyperCTL\({}^{*}\) finitely-branching satisfiability to truth in second-order arithmetic._

Proof.: Let \(\varphi\) be a HyperCTL\({}^{*}\) sentence. We construct a second-order arithmetic formula \(\varphi^{\prime b}\) such that \((\mathbb{N},0,1,+,\cdot,<,\in)\models\varphi^{\prime b}\) if and only if \(\varphi\) has a finitely-branching model, which we can again assume without loss of generality to be of the form \(\mathcal{T}=(\mathbb{N},E,0,\lambda)\), where the set of vertices is \(\mathbb{N}\), the set \(E\) of edges is a subset of \(\mathbb{N}\times\mathbb{N}\), the initial vertex is \(0\), and the labeling function \(\lambda\) is encoded as a function from \(\mathbb{N}\) to \(\mathbb{N}\). The formula \(\varphi^{\prime b}\) is almost identical to \(\varphi^{c}\), only adding the finite branching requirement: \[\varphi^{\prime b}=\exists E.\,\,\exists\lambda.\,\,(\forall x.\,\,\lambda(x)<|2^{\text{AP}}|)\wedge(\forall x.\,\,\exists y.\,\,\forall z.\,\,(x,z)\in E\to z<y)\wedge\varphi^{\prime}(E,\lambda,0)\,.\qed\]

Now, we consider the converse, i.e. that truth of second-order arithmetic can be reduced to countable and finitely-branching satisfiability. To this end, we adapt the \(\Sigma^{2}_{1}\)-hardness proof for HyperCTL\({}^{*}\). Recall that we constructed a formula whose models contain all \(\{0,1\}\)-labelled paths, which we used to encode the subsets of \(\mathbb{N}\). In that proof, we needed to ensure that the initial vertices of all these paths are pairwise different in order to encode existential third-order quantification, which resulted in uncountably many successors of the initial vertex. Also, we used the traces in \(T_{(+,\cdot)}\) to encode arithmetic operations. Here, we only have to encode first- and second-order quantification, so we can drop the requirement on the initial vertices of the paths encoding subsets, which simplifies our construction and removes one source of infinite branching.
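As an aside, the objects quantified in the two reductions above can be made concrete on finite approximations. The following is a minimal, hedged Python sketch; the tiny proposition set, the bitmask encoding, the finite horizon, and all names are our own illustrative assumptions, not part of the formal development.

```python
# Illustrative sketch of the arithmetisation in Lemmas 5.10 and 5.11:
# a transition system over N with edges as a set of pairs, labels
# encoded as numbers below |2^AP|, and finite-horizon checks of the
# path and branch predicates used in the translation.

AP = ["set", "fbt"]                      # an assumed, tiny proposition set

def encode_label(props: set) -> int:
    # Encode a valuation in 2^AP as a number < |2^AP| via a bitmask.
    return sum(1 << i for i, a in enumerate(AP) if a in props)

def is_path(f, E, horizon: int) -> bool:
    # path(f, E): (f(n), f(n+1)) is an edge for all n (checked up to
    # a finite horizon only, since this is just a sketch).
    return all((f(n), f(n + 1)) in E for n in range(horizon))

def branches(f, g, E, i: int, horizon: int) -> bool:
    # branch(f, g, i, E): both are paths and they agree up to and
    # including position i.
    return (is_path(f, E, horizon) and is_path(g, E, horizon)
            and all(f(j) == g(j) for j in range(i + 1)))

# A two-vertex example system: 0 -> 1, 1 -> 1, with 1 labelled {set}.
E = {(0, 1), (1, 1)}
labels = {0: encode_label(set()), 1: encode_label({"set"})}
rho = lambda n: 0 if n == 0 else 1       # the path 0 1 1 1 ...
assert is_path(rho, E, horizon=10)
assert branches(rho, rho, E, i=3, horizon=10)
```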
There is, however, a second source of infinite branching: the infinitely many traces in \(T_{(+,\cdot)}\), which all start at successors of the initial vertex. This is unavoidable: to obtain formulas that always have finitely-branching models, we can no longer work with \(T_{(+,\cdot)}\). We begin by explaining the reason for this and then explain how to adapt the construction to obtain the desired result.

Recall that we defined \(T_{(+,\cdot)}\) over \(\mathrm{AP}=\{\mathtt{arg1},\mathtt{arg2},\mathtt{res},\mathtt{add},\mathtt{mult}\}\) as the set of all traces \(t\in\left(2^{\mathrm{AP}}\right)^{\omega}\) such that

* there are unique \(n_{1},n_{2},n_{3}\in\mathbb{N}\) with \(\mathtt{arg1}\in t(n_{1})\), \(\mathtt{arg2}\in t(n_{2})\), and \(\mathtt{res}\in t(n_{3})\), and
* either \(\mathtt{add}\in t(n)\) and \(\mathtt{mult}\notin t(n)\) for all \(n\) and \(n_{1}+n_{2}=n_{3}\), or \(\mathtt{mult}\in t(n)\) and \(\mathtt{add}\notin t(n)\) for all \(n\) and \(n_{1}\cdot n_{2}=n_{3}\).

An application of König's Lemma [14] shows that there is no finitely-branching transition system whose set of traces is \(T_{(+,\cdot)}\). The reason is that \(T_{(+,\cdot)}\) is not (topologically) closed (see definitions below), while the set of traces of a finitely-branching transition system is always closed.

Let \(\mathrm{Pref}(t)\subseteq(2^{\mathrm{AP}})^{*}\) denote the set of finite prefixes of a trace \(t\in(2^{\mathrm{AP}})^{\omega}\). Furthermore, let \(\mathrm{Pref}(T)=\bigcup_{t\in T}\mathrm{Pref}(t)\) be the set of finite prefixes of a set \(T\subseteq(2^{\mathrm{AP}})^{\omega}\) of traces. The closure \(\mathrm{cl}(T)\subseteq(2^{\mathrm{AP}})^{\omega}\) of such a set \(T\) is defined as \[\mathrm{cl}(T)=\{t\in(2^{\mathrm{AP}})^{\omega}\mid\mathrm{Pref}(t)\subseteq\mathrm{Pref}(T)\}.\] For example, \(\{\mathtt{add}\}^{\omega}\in\mathrm{cl}(T_{(+,\cdot)})\) and \(\{\mathtt{mult}\}^{*}\{\mathtt{arg2},\mathtt{mult}\}\{\mathtt{mult}\}^{\omega}\subseteq\mathrm{cl}(T_{(+,\cdot)})\). Note that we have \(T\subseteq\mathrm{cl}(T)\) for every \(T\). As usual, we say that \(T\) is closed if \(T=\mathrm{cl}(T)\).

Let \(\mathrm{AP}\) be finite and let \(T\subseteq(2^{\mathrm{AP}})^{\omega}\) be closed. Furthermore, let \(\mathcal{T}(T)\) be the finitely-branching transition system \((\mathrm{Pref}(T),E,\varepsilon,\lambda)\) with \[E=\{(w,wv)\mid wv\in\mathrm{Pref}(T)\text{ and }v\in 2^{\mathrm{AP}}\},\] \(\lambda(\varepsilon)=\emptyset\), and \(\lambda(wv)=v\) for all \(wv\in\mathrm{Pref}(T)\) with \(v\in 2^{\mathrm{AP}}\).

**Remark 5.12**.: The set of traces of paths of \(\mathcal{T}(T)\) starting at the successors of the initial vertex \(\varepsilon\) is exactly \(T\).

In the following, we show that we can replace the use of \(T_{(+,\cdot)}\) by \(\mathrm{cl}(T_{(+,\cdot)})\) and still capture addition and multiplication in HyperLTL. We begin by characterising the difference between \(T_{(+,\cdot)}\) and \(\mathrm{cl}(T_{(+,\cdot)})\) and then show that \(\mathrm{cl}(T_{(+,\cdot)})\) is also the unique model of some HyperLTL sentence \(\varphi_{(+,\cdot)}^{cl}\). Intuitively, a trace is in \(\mathrm{cl}(T_{(+,\cdot)})\setminus T_{(+,\cdot)}\) if at least one of the arguments (the propositions \(\mathtt{arg1}\) and \(\mathtt{arg2}\)) is missing. In all but one case, this also implies that \(\mathtt{res}\) does not occur in the trace, as the position of \(\mathtt{res}\) is (almost) always greater than the positions of the arguments.
The only exception is when \(\mathtt{mult}\) holds and \(\mathtt{res}\) holds at the first position, i.e. in the limit of traces encoding \(0\cdot n=0\) for \(n\) tending towards infinity. Let \(D\) be the set of traces \(t\) over \(\mathrm{AP}=\{\mathtt{arg1},\mathtt{arg2},\mathtt{res},\mathtt{add},\mathtt{mult}\}\) such that

* for each \(a\in\{\mathtt{arg1},\mathtt{arg2},\mathtt{res}\}\) there is at most one \(n\) such that \(a\in t(n)\),
* either \(\mathtt{add}\in t(n)\) and \(\mathtt{mult}\notin t(n)\) for all \(n\), or \(\mathtt{mult}\in t(n)\) and \(\mathtt{add}\notin t(n)\) for all \(n\),
* there is at least one \(a\in\{\mathtt{arg1},\mathtt{arg2}\}\) such that \(a\notin t(n)\) for all \(n\), and
* if there is an \(n\) such that \(\mathtt{res}\in t(n)\), then \(\mathtt{mult}\in t(0)\), \(n=0\), and either \(\mathtt{arg1}\in t(0)\) or \(\mathtt{arg2}\in t(0)\).

**Remark 5.13**.: \(\mathrm{cl}(T_{(+,\cdot)})\setminus T_{(+,\cdot)}=D\)_._

Now, we show the analogue of Lemma 5.5 for \(\mathrm{cl}(T_{(+,\cdot)})\).

**Lemma 5.14**.: _There is a HyperLTL sentence \(\varphi_{(+,\cdot)}^{cl}\) which has \(\mathrm{cl}(T_{(+,\cdot)})\) as unique model._

Proof.: We adapt the formula \(\varphi_{(+,\cdot)}\) presented in the proof of Lemma 5.5 having \(T_{(+,\cdot)}\) as unique model. Consider the conjunction of the following HyperLTL sentences:

1. For every trace \(t\) and every \(a\in\{\mathtt{arg1},\mathtt{arg2},\mathtt{res}\}\) there is at most one \(n\) such that \(a\in t(n)\): \[\forall\pi.\ \bigwedge_{a\in\{\mathtt{arg1},\mathtt{arg2},\mathtt{res}\}}(\mathbf{G}\neg a_{\pi})\vee(\neg a_{\pi})\,\mathbf{U}(a_{\pi}\wedge\mathbf{X}\,\mathbf{G}\neg a_{\pi})\]
2. For all traces \(t\): If both \(\mathtt{arg1}\) and \(\mathtt{arg2}\) appear in \(t\), then so does \(\mathtt{res}\) (this captures the fact that the position of \(\mathtt{res}\) is determined by the positions of \(\mathtt{arg1}\) and \(\mathtt{arg2}\)): \[\forall\pi.\ (\mathbf{F}\,\mathtt{arg1}_{\pi}\wedge\mathbf{F}\,\mathtt{arg2}_{\pi})\rightarrow\mathbf{F}\,\mathtt{res}_{\pi}\]
3. Every trace \(t\) satisfies either \(\mathtt{add}\in t(n)\) and \(\mathtt{mult}\notin t(n)\) for all \(n\) or \(\mathtt{mult}\in t(n)\) and \(\mathtt{add}\notin t(n)\) for all \(n\): \[\forall\pi.\ \mathbf{G}(\mathtt{add}_{\pi}\wedge\neg\mathtt{mult}_{\pi})\vee\mathbf{G}(\mathtt{mult}_{\pi}\wedge\neg\mathtt{add}_{\pi})\]
4. For all traces \(t\): If there is an \(a\in\{\mathtt{arg1},\mathtt{arg2}\}\) such that \(a\notin t(n)\) for all \(n\), but \(\mathtt{res}\in t(n_{3})\) for some \(n_{3}\), then \(\{\mathtt{mult},\mathtt{res}\}\subseteq t(0)\) and \(\{\mathtt{arg1},\mathtt{arg2}\}\cap t(0)\neq\emptyset\): \[\forall\pi.\ \left(\mathbf{F}\,\mathtt{res}_{\pi}\wedge\bigvee_{a\in\{\mathtt{arg1},\mathtt{arg2}\}}\mathbf{G}\neg a_{\pi}\right)\rightarrow\left(\mathtt{mult}_{\pi}\wedge\mathtt{res}_{\pi}\wedge\bigvee_{a\in\{\mathtt{arg1},\mathtt{arg2}\}}a_{\pi}\right)\]

We again only consider traces satisfying these formulas in the remainder of the proof, as all others are not part of a model. Also, we again speak of addition traces (if \(\mathtt{add}\) holds) and multiplication traces (if \(\mathtt{mult}\) holds). Furthermore, if a trace satisfies the (guard) formula \(\varphi_{g}=\mathbf{F}\,\mathtt{arg1}_{\pi}\wedge\mathbf{F}\,\mathtt{arg2}_{\pi}\), then it encodes two unique arguments (given by the unique positions \(n_{1}\) and \(n_{2}\) such that \(\mathtt{arg1}\in t(n_{1})\) and \(\mathtt{arg2}\in t(n_{2})\)).
As the above formulas are satisfied, such a trace also encodes a result via the unique position \(n_{3}\) such that \(\mathtt{res}\in t(n_{3})\). As before, we next express that every combination of inputs is present:

5. There are two traces with both arguments being zero, one for addition and one for multiplication: \[\bigwedge_{a\in\{\mathtt{add},\mathtt{mult}\}}\exists\pi.\ a_{\pi}\wedge\mathtt{arg1}_{\pi}\wedge\mathtt{arg2}_{\pi}\]
6. For every trace encoding the arguments \(n_{1}\) and \(n_{2}\), the argument combinations \((n_{1}+1,n_{2})\) and \((n_{1},n_{2}+1)\) are also represented in the model, again both for addition and multiplication (here we rely on the fact that either \(\mathtt{add}\) or \(\mathtt{mult}\) holds at every position, as specified above). Note, however, that not every trace will encode two inputs, which is why we have to use the guard \(\varphi_{g}\). \[\forall\pi.\ \varphi_{g}\rightarrow\exists\pi_{1},\pi_{2}.\ \left(\bigwedge_{i\in\{1,2\}}\mathtt{add}_{\pi}\leftrightarrow\mathtt{add}_{\pi_{i}}\right)\wedge\mathbf{F}(\mathtt{arg1}_{\pi}\wedge\mathbf{X}\,\mathtt{arg1}_{\pi_{1}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge\mathtt{arg2}_{\pi_{1}})\wedge\mathbf{F}(\mathtt{arg1}_{\pi}\wedge\mathtt{arg1}_{\pi_{2}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge\mathbf{X}\,\mathtt{arg2}_{\pi_{2}})\]

Every model of these formulas contains a trace representing each possible combination of arguments, both for multiplication and addition. To conclude, we need to express that the result in each trace is correct by again capturing the inductive definition of addition in terms of repeated increments and the inductive definition of multiplication in terms of repeated addition. The formulas differ from those in the proof of Lemma 5.5 only in the use of the guard \(\varphi_{g}\).

7. For every trace \(t\): if \(\{\mathtt{add},\mathtt{arg1}\}\subseteq t(0)\) and \(\mathtt{arg2}\) appears in \(t\), then \(\mathtt{arg2}\) and \(\mathtt{res}\) have to hold at the same position (this captures \(0+n=n\)): \[\forall\pi.\ (\varphi_{g}\wedge\mathtt{add}_{\pi}\wedge\mathtt{arg1}_{\pi})\to\mathbf{F}(\mathtt{arg2}_{\pi}\wedge\mathtt{res}_{\pi})\]
8. For each trace \(t\) with \(\mathtt{add}\in t(0)\), \(\mathtt{arg1}\in t(n_{1})\), \(\mathtt{arg2}\in t(n_{2})\), and \(\mathtt{res}\in t(n_{3})\) such that \(n_{1}>0\) there is a trace \(t^{\prime}\) such that \(\mathtt{add}\in t^{\prime}(0)\), \(\mathtt{arg1}\in t^{\prime}(n_{1}-1)\), \(\mathtt{arg2}\in t^{\prime}(n_{2})\), and \(\mathtt{res}\in t^{\prime}(n_{3}-1)\) (this captures \(n_{1}+n_{2}=n_{3}\Leftrightarrow(n_{1}-1)+n_{2}=n_{3}-1\) for \(n_{1}>0\)): \[\forall\pi.\ \exists\pi^{\prime}.\ (\varphi_{g}\wedge\mathtt{add}_{\pi}\wedge\neg\mathtt{arg1}_{\pi})\to(\mathtt{add}_{\pi^{\prime}}\wedge\mathbf{F}(\mathbf{X}\,\mathtt{arg1}_{\pi}\wedge\mathtt{arg1}_{\pi^{\prime}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge\mathtt{arg2}_{\pi^{\prime}})\wedge\mathbf{F}(\mathbf{X}\,\mathtt{res}_{\pi}\wedge\mathtt{res}_{\pi^{\prime}}))\]
9. For every trace \(t\): if \(\{\mathtt{mult},\mathtt{arg1}\}\subseteq t(0)\) then also \(\mathtt{res}\in t(0)\) (this captures \(0\cdot n=0\)): \[\forall\pi.\ (\mathtt{mult}_{\pi}\wedge\mathtt{arg1}_{\pi})\to\mathtt{res}_{\pi}\]
10. Similarly, for each trace \(t\) with \(\mathtt{mult}\in t(0)\), \(\mathtt{arg1}\in t(n_{1})\), \(\mathtt{arg2}\in t(n_{2})\), and \(\mathtt{res}\in t(n_{3})\) such that \(n_{1}>0\) there is a trace \(t^{\prime}\) such that \(\mathtt{mult}\in t^{\prime}(0)\), \(\mathtt{arg1}\in t^{\prime}(n_{1}-1)\), \(\mathtt{arg2}\in t^{\prime}(n_{2})\), and \(\mathtt{res}\in t^{\prime}(n_{3}-n_{2})\). The latter requirement is expressed by the existence of a trace \(t^{\prime\prime}\) with \(\mathtt{add}\in t^{\prime\prime}(0)\), \(\mathtt{arg2}\in t^{\prime\prime}(n_{2})\), \(\mathtt{res}\in t^{\prime\prime}(n_{3})\), and \(\mathtt{arg1}\) holding in \(t^{\prime\prime}\) at the same time as \(\mathtt{res}\) in \(t^{\prime}\), which implies \(\mathtt{res}\in t^{\prime}(n_{3}-n_{2})\). Altogether, this captures \(n_{1}\cdot n_{2}=n_{3}\Leftrightarrow(n_{1}-1)\cdot n_{2}=n_{3}-n_{2}\) for \(n_{1}>0\). \[\forall\pi.\ \exists\pi^{\prime},\pi^{\prime\prime}.\ (\varphi_{g}\wedge\mathtt{mult}_{\pi}\wedge\neg\mathtt{arg1}_{\pi})\to(\mathtt{mult}_{\pi^{\prime}}\wedge\mathtt{add}_{\pi^{\prime\prime}}\wedge\mathbf{F}(\mathbf{X}\,\mathtt{arg1}_{\pi}\wedge\mathtt{arg1}_{\pi^{\prime}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge\mathtt{arg2}_{\pi^{\prime}}\wedge\mathtt{arg2}_{\pi^{\prime\prime}})\wedge\mathbf{F}(\mathtt{res}_{\pi^{\prime}}\wedge\mathtt{arg1}_{\pi^{\prime\prime}})\wedge\mathbf{F}(\mathtt{res}_{\pi}\wedge\mathtt{res}_{\pi^{\prime\prime}}))\]

Now, \(\operatorname{cl}(T_{(+,\cdot)})\) is a model of the conjunction \(\varphi_{(+,\cdot)}^{cl}\) of these ten formulas. Conversely, every model of \(\varphi_{(+,\cdot)}^{cl}\) contains all possible combinations of arguments (both for addition and multiplication) due to Formulas (5) and (6). Now, Formulas (7) to (10) ensure that the result is _correct_ on these traces. Furthermore, all traces in \(D\), but no others, are also contained due to the first four formulas. Altogether, this implies that \(\operatorname{cl}(T_{(+,\cdot)})\) is the unique model of \(\varphi_{(+,\cdot)}^{cl}\).

We are now ready to prove the lower bounds for HyperCTL\({}^{*}\) countable and finitely-branching satisfiability.

**Lemma 5.15**.: _There is an effective reduction from truth in second-order arithmetic to HyperCTL\({}^{*}\) countable and finitely-branching satisfiability._

Proof.: We proceed as in the proof of Lemma 5.6. Given a sentence \(\varphi\) of second-order arithmetic, we construct a HyperCTL\({}^{*}\) formula \(\varphi^{\prime}\) such that \((\mathbb{N},+,\cdot,<,\in)\) is a model of \(\varphi\) if and only if \(\varphi^{\prime}\) is satisfied by a countable and finitely-branching model. As before, we represent sets of natural numbers as infinite paths with labels in \(\{0,1\}\), so quantification over natural numbers and over sets of natural numbers is captured by path quantification. The major difference to the proof of Lemma 5.6 is that we do not need to deal with third-order quantification here. This means we only need every possible \(\{0,1\}\)-labelled path to be present in our models, but not with pairwise distinct initial vertices. In particular, the finite (and therefore finitely-branching) transition system \(\mathcal{T}_{f}\) depicted in Figure 3 has all such paths.
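For intuition, the prefix-tree construction \(\mathcal{T}(T)\) used below can be sketched concretely. The following minimal Python sketch is our own illustration (prefixes encoded as tuples of frozensets, standing in for a finite fragment of \(\mathrm{Pref}(T)\)); it builds the vertex set, edge set, initial vertex, and labelling of \(\mathcal{T}(T)\).

```python
# Illustrative sketch of the prefix-tree transition system T(T):
# vertices are finite prefixes, the empty prefix is initial, edges
# extend a prefix by one valuation, and each non-initial vertex is
# labelled by its last valuation.

def prefix_tree(prefixes):
    # `prefixes` must be prefix-closed and contain the empty prefix ().
    vertices = set(prefixes)
    edges = {(w[:-1], w) for w in vertices if len(w) > 0}
    labels = {w: (w[-1] if w else frozenset()) for w in vertices}
    return vertices, edges, (), labels

# Stand-in for a few prefixes of cl(T_{(+,.)}): the trace {add}^omega
# is in the closure, so all of its finite prefixes appear.
add = frozenset({"add"})
prefixes = [(), (add,), (add, add), (add, add, add)]
V, E, init, lab = prefix_tree(prefixes)
assert ((add,), (add, add)) in E and lab[(add, add)] == add
```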
For arithmetical operations, we rely on the HyperLTL sentence \(\varphi^{cl}_{(+,\cdot)}\) from Lemma 5.14, with its unique model \(\operatorname{cl}(T_{(+,\cdot)})\), and the transition system \(\mathcal{T}(\operatorname{cl}(T_{(+,\cdot)}))\), which is countable, finitely-branching, and whose set of traces starting at the successors of the initial vertex is exactly \(\operatorname{cl}(T_{(+,\cdot)})\). We combine \(\mathcal{T}_{f}\) and \(\mathcal{T}(\operatorname{cl}(T_{(+,\cdot)}))\) by identifying their respective initial vertices, but taking the disjoint union of all other vertices. The resulting transition system \(\mathcal{T}_{0}\) contains all traces encoding the subsets of the natural numbers as well as the traces required to model arithmetical operations. Furthermore, it is still countable and finitely-branching.

Let \(\operatorname{AP}=\{0,1,\operatorname{fbt},\operatorname{arg1},\operatorname{arg2},\operatorname{res},\operatorname{mult},\operatorname{add}\}\). Using parts of the formula \(\varphi_{\mathfrak{c}}\) defined in Lemma 5.4 and the formula \(\varphi^{cl}_{(+,\cdot)}\) defined in Lemma 5.14, it is not difficult to construct a formula \(\varphi^{cl}_{(\mathfrak{c},+,\cdot)}\) such that:

* The transition system \(\mathcal{T}_{0}\) is a model of \(\varphi^{cl}_{(\mathfrak{c},+,\cdot)}\).
* Conversely, in any model \(\mathcal{T}=(V,E,v_{I},\lambda)\) of \(\varphi^{cl}_{(\mathfrak{c},+,\cdot)}\), the following conditions are satisfied:
  1. For every path \(\rho\) starting at a fbt-labelled successor of the initial vertex \(v_{I}\), every vertex \(\rho(i)\) with \(i\geq 0\) has a label \(\lambda(\rho(i))=\{\operatorname{fbt},0\}\) or \(\lambda(\rho(i))=\{\operatorname{fbt},1\}\).
  2. For every \(A\subseteq\mathbb{N}\), there exists a fbt-labelled path \(\rho_{A}\) starting at a successor of \(v_{I}\) such that \(1\in\lambda(\rho_{A}(i))\) if \(i\in A\), and \(0\in\lambda(\rho_{A}(i))\) if \(i\notin A\).
  3. For every path \(\rho\) starting at an add- or mult-labelled successor of the initial vertex, the label sequence \(\lambda(\rho(0))\lambda(\rho(1))\cdots\) of \(\rho\) is in \(\operatorname{cl}(T_{(+,\cdot)})\).
  4. Conversely, for every trace \(t\in\operatorname{cl}(T_{(+,\cdot)})\), there exists a path \(\rho\) starting at a successor of the initial vertex such that \(\lambda(\rho(0))\lambda(\rho(1))\cdots=t\).

We then let \(\varphi^{\prime}=\varphi^{cl}_{(\mathfrak{c},+,\cdot)}\wedge h(\varphi)\), where \(h\) is defined inductively as in the proof of Lemma 5.6:

* \(h(\psi_{1}\vee\psi_{2})=h(\psi_{1})\lor h(\psi_{2})\) and \(h(\neg\psi_{1})=\neg h(\psi_{1})\).
* If \(x\) ranges over sets of natural numbers, \[h(\exists x.\ \psi_{1})=\exists\pi_{x}.\ ((\mathbf{X}\operatorname{fbt}_{\pi_{x}})\wedge h(\psi_{1})),\] and \[h(\forall x.\ \psi_{1})=\forall\pi_{x}.\ ((\mathbf{X}\operatorname{fbt}_{\pi_{x}})\to h(\psi_{1})).\]
* If \(x\) ranges over natural numbers, \[h(\exists x.\ \psi_{1})=\exists\pi_{x}.\ ((\mathbf{X}\operatorname{fbt}_{\pi_{x}})\wedge\mathbf{X}(0_{\pi_{x}}\operatorname{\mathbf{U}}(1_{\pi_{x}}\wedge\mathbf{X}\operatorname{\mathbf{G}}0_{\pi_{x}}))\wedge h(\psi_{1})),\] and \[h(\forall x.\ \psi_{1})=\forall\pi_{x}.\ ((\mathbf{X}\operatorname{fbt}_{\pi_{x}})\wedge\mathbf{X}(0_{\pi_{x}}\operatorname{\mathbf{U}}(1_{\pi_{x}}\wedge\mathbf{X}\operatorname{\mathbf{G}}0_{\pi_{x}}))\to h(\psi_{1})).\]

Figure 3. A depiction of \(\mathcal{T}_{f}\). All vertices but the initial one are labelled by fbt.
Here, the subformula \(0_{\pi_{x}}\,\mathbf{U}(1_{\pi_{x}}\wedge\mathbf{X}\,\mathbf{G}\,0_{\pi_{x}})\) expresses that there is a single \(1\) on the trace assigned to \(\pi_{x}\), i.e. the path represents a singleton set. * If \(x\) ranges over natural numbers and \(y\) over sets of natural numbers, \(h(x\in y)=\mathbf{F}(1_{\pi_{x}}\wedge 1_{\pi_{y}})\). * \(h(x<y)=\mathbf{F}(1_{\pi_{x}}\wedge\mathbf{X}\,\mathbf{F}\,1_{\pi_{y}})\). * \(h(x\cdot y=z)=\exists\pi.\ (\mathbf{X}\,\mathtt{mult}_{\pi})\wedge\mathbf{F}(\mathtt{arg1}_{\pi}\wedge 1_{\pi_{x}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge 1_{\pi_{y}})\wedge\mathbf{F}(\mathtt{res}_{\pi}\wedge 1_{\pi_{z}})\), and \(h(x+y=z)=\exists\pi.\ (\mathbf{X}\,\mathtt{add}_{\pi})\wedge\mathbf{F}(\mathtt{arg1}_{\pi}\wedge 1_{\pi_{x}})\wedge\mathbf{F}(\mathtt{arg2}_{\pi}\wedge 1_{\pi_{y}})\wedge\mathbf{F}(\mathtt{res}_{\pi}\wedge 1_{\pi_{z}})\). If \(\varphi\) is true in \((\mathbb{N},+,\cdot,<,\in)\), then the countable and finitely-branching transition system \(\mathcal{T}_{0}\) defined above is a model of \(\varphi^{\prime}\). Conversely, if \(\mathcal{T}\models\varphi^{\prime}\) for some transition system \(\mathcal{T}\), then for all sets \(A\subseteq\mathbb{N}\) there is a path \(\rho_{A}\) matching \(A\) in \(\mathcal{T}\), and trace quantification in \(\mathcal{T}\) mimics first- and second-order quantification in \((\mathbb{N},+,\cdot,<,\in)\). Thus, \(\varphi\) is true in \((\mathbb{N},+,\cdot,<,\in)\). Note that the preceding proof shows that even HyperCTL\({}^{*}\) bounded-branching satisfiability is equivalent to truth in second-order arithmetic, i.e., the question whether a given sentence is satisfied by a transition system where each vertex has at most \(k\) successors, for some uniform \(k\in\mathbb{N}\). ## 6. Conclusion In this work, we have settled the complexity of the satisfiability problems for HyperLTL and HyperCTL\({}^{*}\). In both cases, we significantly increased the lower bounds, i.e. from \(\Sigma^{0}_{1}\) and \(\Sigma^{1}_{1}\) to \(\Sigma^{1}_{1}\) and \(\Sigma^{2}_{1}\), respectively, and presented the first upper bounds, which are tight in both cases. Along the way, we also determined the complexity of restricted variants, e.g. HyperLTL satisfiability restricted to ultimately periodic traces (or, equivalently, to finite traces) is still \(\Sigma^{1}_{1}\)-complete while HyperCTL\({}^{*}\) satisfiability restricted to finite transition systems is \(\Sigma^{0}_{1}\)-complete. Furthermore, we proved that both countable and finitely-branching satisfiability for HyperCTL\({}^{*}\) are as hard as truth in second-order arithmetic. As a key step in our proofs, we showed a tight bound of \(\mathfrak{c}\) on the size of minimal models for satisfiable HyperCTL\({}^{*}\) sentences. Finally, we also showed that deciding membership in any level of the HyperLTL quantifier alternation hierarchy is \(\Pi^{1}_{1}\)-complete. Thus our results show that satisfiability is highly undecidable, both for HyperLTL and even more so for HyperCTL\({}^{*}\). Note that both logics are synchronous, i.e. time passes at the same rate on all traces/paths under consideration. Recently, several asynchronous logics for the specification of hyperproperties have been presented [1, 1, 1, 1, 1]. Similarly, logics for probabilistic hyperproperties have been introduced [1, 1, 1, 1]. For both classes, the exact complexity of the satisfiability problems has, to the best of our knowledge, not been studied in detail.
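The translation \(h\) is entirely mechanical, and its existential fragment can be rendered as a short recursive procedure over a syntax tree. The sketch below is our illustration only: the tuple-based abstract syntax, the ASCII spelling of path variables and temporal operators, and the restriction to existential quantifiers are assumptions of the sketch, not notation from this paper.

```python
# h maps formulas of second-order arithmetic (as nested tuples) to
# HyperCTL*-style strings, mirroring the clauses defined above.
def h(phi):
    op = phi[0]
    if op == "or":
        return f"({h(phi[1])} | {h(phi[2])})"
    if op == "not":
        return f"!{h(phi[1])}"
    if op == "exists_set":   # x ranges over sets of natural numbers
        x, body = phi[1], phi[2]
        return f"exists p{x}. ((X fbt_p{x}) & {h(body)})"
    if op == "exists_num":   # x ranges over natural numbers: singleton path
        x, body = phi[1], phi[2]
        single = f"X (0_p{x} U (1_p{x} & X G 0_p{x}))"
        return f"exists p{x}. ((X fbt_p{x}) & {single} & {h(body)})"
    if op == "in":           # x in y
        return f"F (1_p{phi[1]} & 1_p{phi[2]})"
    if op == "lt":           # x < y
        return f"F (1_p{phi[1]} & X F 1_p{phi[2]})"
    if op == "mul":          # x * y = z, witnessed by a mult-labelled trace
        x, y, z = phi[1], phi[2], phi[3]
        return (f"exists p. (X mult_p) & F (arg1_p & 1_p{x}) "
                f"& F (arg2_p & 1_p{y}) & F (res_p & 1_p{z})")
    raise ValueError(f"unsupported connective: {op}")

print(h(("exists_num", "x", ("in", "x", "Y"))))  # universal duals are analogous
```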
## Acknowledgment This work was partially funded by EPSRC grants EP/S032207/1 and EP/V025848/1 and DIREC - Digital Research Centre Denmark. We thank Karoliina Lehtinen and Wolfgang Thomas for fruitful discussions.
2302.08956
AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages
Africa is home to over 2,000 languages from more than six language families and has the highest linguistic diversity among all continents. These include 75 languages with at least one million speakers each. Yet, there is little NLP research conducted on African languages. Crucial to enabling such research is the availability of high-quality annotated datasets. In this paper, we introduce AfriSenti, a sentiment analysis benchmark that contains a total of >110,000 tweets in 14 African languages (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yor\`ub\'a) from four language families. The tweets were annotated by native speakers and used in the AfriSenti-SemEval shared task (The AfriSenti Shared Task had over 200 participants. See website at https://afrisenti-semeval.github.io). We describe the data collection methodology, annotation process, and the challenges we dealt with when curating each dataset. We further report baseline experiments conducted on the different datasets and discuss their usefulness.
Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Abinew Ali Ayele, Nedjma Ousidhoum, David Ifeoluwa Adelani, Seid Muhie Yimam, Ibrahim Sa'id Ahmad, Meriem Beloucif, Saif M. Mohammad, Sebastian Ruder, Oumaima Hourrane, Pavel Brazdil, Felermino Dário Mário António Ali, Davis David, Salomey Osei, Bello Shehu Bello, Falalu Ibrahim, Tajuddeen Gwadabe, Samuel Rutunda, Tadesse Belay, Wendimu Baye Messelle, Hailu Beshada Balcha, Sisay Adugna Chala, Hagos Tesfahun Gebremichael, Bernard Opoku, Steven Arthur
2023-02-17T15:40:12Z
http://arxiv.org/abs/2302.08956v5
# AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages ###### Abstract Africa is home to over 2,000 languages from over six language families and has the highest linguistic diversity among all continents. This includes 75 languages with at least one million speakers each. Yet, there is little NLP research conducted on African languages. Crucial to enabling such research is the availability of high-quality annotated datasets. In this paper, we introduce AfriSenti, which consists of 14 sentiment datasets of 110,000+ tweets in 14 African languages (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yorùbá) from four language families, annotated by native speakers. The data is used in SemEval 2023 Task 12, the first Afro-centric SemEval shared task. We describe the data collection methodology, the annotation process, and the related challenges when curating each of the datasets. We conduct experiments with different sentiment classification baselines and discuss their usefulness. We hope AfriSenti enables new work on under-represented languages.1 Footnote 1: The dataset is available at [https://github.com/afrisenti-semeval/afrisent-semeval-2023](https://github.com/afrisenti-semeval/afrisent-semeval-2023). ## 1 Introduction Africa has a long and rich linguistic history, experiencing language contact, language expansion, development of trade languages, language shift, and language death on several occasions. The continent is incredibly linguistically diverse and home to over 2,000 languages. This includes 75 languages with at least one million speakers each. Africa has a rich tradition of storytelling, poems, songs, and literature (Carter-Black, 2007; Banks-Wallace, 2002), while recent years have seen a proliferation of communication in digital and social media. Code-switching is common in these new forms of communication, where speakers alternate between two or more languages in the context of a single conversation (Santy et al., 2021; Angel et al., 2020; Thara and Poornachandran, 2018). However, despite this linguistic richness, African languages have been comparatively under-represented in natural language processing (NLP) research. An influential sub-area of NLP deals with sentiment, valence, emotions, and affect in language (Liu, 2020). Computational analysis of emotion states in language and the creation of systems that predict these states from utterances have applications in literary analysis and culturomics (Reagan et al., 2016; Hamilton et al., 2016), commerce (e.g., tracking feelings towards products), and research in psychology and social science (Dodds et al., 2015; Hamilton et al., 2016). Despite a tremendous amount of work in this important space over the last two decades, there is little work on African languages, partially due to a lack of high-quality annotated data. Figure 1: Countries and languages represented in the AfriSenti data collection (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yorùbá).
To enable sentiment analysis research in African languages, we present **AfriSenti**, the largest sentiment analysis benchmark for under-represented languages--covering 110,000+ annotated tweets in 14 African languages2 (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yorùbá) from four language families (Afro-Asiatic, English Creole, Indo-European, and Niger-Congo). We show the represented countries and languages in Figure 1 and provide examples of the data in Table 1. Table 1: Examples of tweets in the AfriSenti languages together with their sentiment labels. AfriSenti is an extension of NaijaSenti (Muhammad et al., 2022), a sentiment corpus in four major Nigerian languages: Hausa, Igbo, Nigerian Pidgin, and Yorùbá. Footnote 2: For simplicity, we use the term language to refer to language varieties, including dialects. The datasets are used in the first Afro-centric SemEval shared task, _SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSenti-SemEval)_. AfriSenti allows the research community to build sentiment analysis systems for various African languages and enables the study of sentiment and contemporary language use in African languages. We publicly release the corpora, which provide further opportunities to investigate the difficulty of sentiment analysis for African languages. Our contributions are: (1) the creation of the largest Twitter dataset for sentiment analysis in African languages by annotating 10 new datasets and curating four existing ones (Muhammad et al., 2022), (2) the discussion of the data collection and annotation process in 14 low-resource African languages, (3) the release of sentiment lexicons for all languages, and (4) the presentation of classification baseline results using our datasets. ## 2 Related Work Research in sentiment analysis has developed from the early days of lexicon-based sentiment analysis approaches (Turney, 2002; Taboada et al., 2011; Mohammad et al., 2013) to more advanced machine learning (Agarwal and Mittal, 2016; Le and Nguyen, 2020), deep learning-based methods (Zhang et al., 2018; Yadav and Vishwakarma, 2020), and hybrid approaches that combine lexicon and machine learning-based approaches (Gupta and Joshi, 2020; Kaur et al., 2022). Nowadays, pretrained language models (PLMs), such as XLM-R (Conneau et al., 2020), mDeBERTaV3 (He et al., 2021), AfriBERTa (Ogueji et al., 2021), AfroXLMR (Alabi et al., 2022), and XLM-T (Barbieri et al., 2022) provide state-of-the-art performance for sentiment classification. Recent work in sentiment analysis focused on sub-tasks that tackle new challenges, including aspect-based (Chen et al., 2022), multimodal (Liang et al., 2022), explainable (neuro-symbolic) (Cambria et al., 2022), and multilingual sentiment analysis (Muhammad et al., 2022). On the other hand, standard sentiment analysis sub-tasks such as polarity classification (positive, negative, neutral) are widely considered saturated and solved (Poria et al., 2020), with an accuracy of 97.5% in certain domains (Raffel et al., 2020; Jiang et al., 2020). However, while this may be true for high-resource languages in relatively clean, long-form text domains such as movie reviews, noisy user-generated data in under-represented languages still presents a challenge (Yimam et al., 2020).
Additionally, African languages present new challenges for sentiment analysis, such as dealing with tone, code-switching, and digraphia (Adebara and Abdul-Mageed, 2022). Existing work in sentiment analysis for African languages has therefore mainly focused on polarity classification (Mataoui et al., 2016; El Abdouli et al., 2017; Moudjari et al., 2020; Yimam et al., 2020; Muhammad et al., 2022; Martin et al., 2021). We present with AfriSenti the largest and most multilingual dataset for sentiment analysis in African languages. ## 3 Overview of the AfriSenti Datasets AfriSenti covers 14 African languages, each with unique linguistic characteristics and writing systems, which are shown in Table 2. As shown in Figure 2, the dataset includes six languages of the Afro-Asiatic family, six languages of the Niger-Congo family, one from the English Creole family, and one from the Indo-European family. **Writing Systems.** Scripts serve not only as a means of transcribing spoken language, but also as powerful cultural symbols that reflect people's identity (Sterponi and Lai, 2014). For instance, the Bamun script is deeply connected to the identity of Bamun speakers in Cameroon, while the Geez/Ethiopic script (for Amharic and Tigrinya) evokes the strength and significance of Ethiopian culture (Sterponi and Lai, 2014). \begin{table} \begin{tabular}{l l l l l} \hline \hline **Language** & **ISO Code** & **Subregion** & **Spoken In** & **Script** \\ \hline Amharic & amh & East Africa & Ethiopia & Ethiopic \\ Algerian Arabic/Darija & arq & North Africa & Algeria & Arabic \\ Hausa & hau & West Africa & Northern Nigeria, Ghana, Cameroon & Latin \\ Igbo & ibo & West Africa & Southeastern Nigeria & Latin \\ Kinyarwanda & kin & East Africa & Rwanda & Latin \\ Moroccan Arabic/Darija & ary & North Africa & Morocco & Arabic/Latin \\ Mozambican Portuguese & pt-MZ & Southeastern Africa & Mozambique & Latin \\ Nigerian Pidgin & pcm & West Africa & Nigeria, Ghana, Cameroon & Latin \\ Oromo & orm & East Africa & Ethiopia & Latin \\ Swahili & swa & East Africa & Tanzania, Kenya, Mozambique & Latin \\ Tigrinya & tir & East Africa & Ethiopia & Ethiopic \\ Twi & twi & West Africa & Ghana & Latin \\ Xitsonga & tso & Southern Africa & Mozambique, South Africa, Zimbabwe, Eswatini & Latin \\ Yorùbá & yor & West Africa & Southwestern and Central Nigeria & Latin \\ \hline \hline \end{tabular} \end{table} Table 2: African languages included in our study (Lewis, 2009). Figure 2: Language families (shown in green) in the AfriSenti datasets. Similarly, the Ajami script, a variant of the Arabic script used in various African languages such as Hausa, serves as a reminder of the rich African cultural heritage of the Hausa community (Gee, 2005). African languages, with a few exceptions, use the Latin script, written from left to right, or the Arabic script, written from right to left (Gee, 2005; Meshesha and Jawahar, 2008), with the Latin script being the most widely used in Africa (Eberhard et al., 2020). Ten of the fourteen languages in AfriSenti are written in Latin script, two in Arabic script, and two in Ethiopic (or Geez) script. On social media, people may write Moroccan Arabic (Darija) and Algerian Arabic (Darja) in both Latin and Arabic characters due to various reasons, including access to technology, i.e., the fact that Arabic keyboards were not easily accessible on commonly used devices for many years, code-switching, and other phenomena.
This makes Algerian and Moroccan Arabic digraphic, i.e., their texts can be written in multiple scripts on social media. Similarly, Amharic is digraphic and is written in both Latin and Geez script (Belay et al., 2021). This constitutes an additional challenge for the processing of these languages in NLP.3 Footnote 3: Table 1 shows an example of Moroccan Darija tweets written in Latin and Arabic script. For Algerian Arabic/Darija and Amharic, AfriSenti includes data in only Arabic and Geez scripts. **Geographic Representation.** AfriSenti covers the majority of African sub-regions. Many African languages are spoken in neighbouring countries within the same sub-regions. For instance, variations of Hausa are spoken in Nigeria, Ghana, and Cameroon, while Swahili variants are widely spoken in East African countries, including Kenya, Tanzania, and Uganda. AfriSenti also includes datasets in the top three languages with the highest numbers of speakers in Africa (Swahili, Amharic, and Hausa). We show the geographic distribution of the languages in AfriSenti in Figure 1. **New and Existing Datasets.** AfriSenti includes existing and newly created datasets, as shown in Table 3. For the existing datasets whose test sets are public, we created new test sets to further evaluate their performance in the AfriSenti-SemEval shared task. ## 4 Data Collection and Processing **Twitter's Limited Support for African Languages.** Since many people share their opinions on Twitter, the platform is widely used to study sentiment analysis (Muhammad et al., 2022). However, the Twitter API's support for African languages is limited, which makes it difficult for researchers to collect data. Specifically, the Twitter language API currently supports only Amharic out of more than 2,000 African languages4. This disparity in language coverage highlights the need for further research and development in NLP for low-resource languages. Footnote 4: [https://blog.twitter.com/engineering/en_us/a/2015/evaluating-language-identification-performance](https://blog.twitter.com/engineering/en_us/a/2015/evaluating-language-identification-performance) ### Tweet Collection We used the Twitter Academic API to collect tweets. However, as the API does not provide language identification for tweets in African languages, we use location-based and vocabulary-based heuristics to collect the datasets. #### 4.1.1 Location-based Data Collection For all languages except Algerian Arabic and Afaan Oromo, we used a location-based collection approach to filter the results. Hence, tweets were collected based on the names of the countries where the majority of the target language speakers are located. For Afaan Oromo, tweets were collected globally due to the small size of the data collected from Ethiopia. \begin{table} \begin{tabular}{l l l l} \hline \hline **Lang.** & **New** & **Existing** & **Source** \\ \hline amh & test & train, dev & Yimam et al. (2020) \\ arq & all & ✗ & - \\ ary & all & ✗ & - \\ hau & ✗ & all & Muhammad et al. (2022) \\ ibo & ✗ & all & Muhammad et al. (2022) \\ kin & all & ✗ & - \\ orm & all & ✗ & - \\ pcm & ✗ & all & Muhammad et al. (2022) \\ pt-MZ & all & ✗ & - \\ swa & all & ✗ & - \\ tir & all & ✗ & - \\ tso & all & ✗ & - \\ twi & all & ✗ & - \\ yor & ✗ & all & Muhammad et al. (2022) \\ \hline \hline \end{tabular} \end{table} Table 3: The AfriSenti datasets. We show the new and previously available datasets (with their sources).
#### 4.1.2 Vocabulary-based Data Collection As different languages are spoken within the same regions in Africa (Amfo and Anderson, 2019), the location-based approach did not help in all cases. For instance, searching for tweets from _"Lagos"_ (Nigeria) returned tweets in multiple languages, such as _Yorùbá_, _Igbo_, _Hausa_, _Pidgin_, _English_, etc. To address these challenges, we combined the location-based approach with vocabulary-based collection strategies. These included the use of stopwords, sentiment lexicons, and a language detection tool. For the languages that use the Geez script, we used the Ethiopic Twitter Dataset for Amharic (ETD-AM), which includes tweets that have been collected since 2014 (Yimam et al., 2019). **Data collection using stopwords.** Most African languages do not have curated stopword lists (Emezue et al., 2022). Therefore, we created stopword lists for some AfriSenti languages and used them to collect data. We used corpora from different domains, i.e., news data and religious texts, to rank words based on their frequency (Adelani et al., 2021). We filtered out the top 100 words by deleting domain-specific words (e.g., the word _God_ in religious texts) and created lists based on the top 50 words that appeared across domains. We also used a word co-occurrence-based approach to extract stopwords (Liang et al., 2009) using text sources from different domains. We lower-cased the text, removed punctuation symbols and numbers, constructed a co-occurrence graph, and filtered out the words that occurred most often. Native speakers verified the generated lists before use. This approach worked best for Xitsonga. **Data collection using sentiment lexicons.** As data collection based on stopwords sometimes results in tweets that are inadequate for sentiment analysis (e.g., too many neutral tweets), we used a sentiment lexicon--a dictionary of positive and negative words--for tweet collection. This allows for a balanced collection across sentiment classes (positive/negative/neutral). For Moroccan Darija, we used the emotion word list curated by Outchakoucht and Es-Samaali (2021). Table 4 provides details on the sentiment lexicons in AfriSenti and indicates whether they were manually created or translated. \begin{table} \begin{tabular}{l l l l} \hline \hline **Lang.** & **Manually** & **Translated** & **Source** \\ \hline amh & ✓ & ✗ & Yimam et al. (2020) \\ arq & ✓ & ✗ & - \\ hau & ✓ & ✓ & Muhammad et al. (2022) \\ ibo & ✓ & ✓ & Muhammad et al. (2022) \\ ary & ✗ & ✗ & - \\ orm & ✓ & ✗ & Yimam et al. (2020) \\ pcm & ✓ & ✗ & Muhammad et al. (2022) \\ pt-MZ & ✓ & ✗ & - \\ kin & ✗ & ✓ & - \\ swa & ✗ & ✗ & - \\ tir & ✓ & ✗ & Yimam et al. (2020) \\ tso & ✗ & ✓ & - \\ twi & ✗ & ✗ & - \\ yor & ✓ & ✓ & Muhammad et al. (2022) \\ \hline \hline \end{tabular} \end{table} Table 4: Manually collected and translated lexicons in AfriSenti. **Data collection using mixed lists of words.** Besides stopwords and sentiment lexicons, native speakers provided lists of language-specific terms, including generic words. For instance, this strategy helped us collect Algerian Arabic tweets, and the generic terms included equivalents of words such as _the crowd_ and names of Algerian cities. ### Language Detection As we mainly used heuristics for data collection, the results included tweets in languages different from the target one. For instance, when collecting tweets using lists of Amharic words, some returned tweets were in Tigrinya, due to Amharic-Tigrinya code-mixing. Similarly, we applied an additional manual filtering step in the case of Tunisian, Moroccan, and Modern Standard Arabic tweets that were returned when searching for Algerian Arabic ones, due to overlapping terms. Hence, we used different techniques for language detection as a post-processing step.
**Language detection using existing tools.** Few African languages have pre-existing language detection tools (Keet, 2021). We used Google CLD35 and the Pycld2 library6 for the supported AfriSenti languages (Amharic, Oromo, and Tigrinya). Footnote 5: [https://github.com/google/cld3](https://github.com/google/cld3) Footnote 6: [https://pypi.org/project/pycld2/](https://pypi.org/project/pycld2/) **Manual language detection.** For languages that do not have a pre-existing tool, the detection was conducted by native speakers. For instance, annotators who are native speakers of Twi and Xitsonga manually labeled 2,000 tweets in these languages. In addition, as native speakers collected the Algerian Arabic tweets, they deleted all tweets that were expressed in another language or Arabic variation. **Language detection using pre-trained language models.** To reduce the effort spent on language detection, we also used a pretrained language model fine-tuned on 2,000 manually annotated tweets (Caswell et al., 2020) to identify Twi and Xitsonga. Despite our efforts to detect the right languages, it is worth mentioning that, as multilingualism is common in African societies, the final dataset contains many code-mixed tweets. ### Tweet Anonymization and Pre-processing We anonymized the tweets by replacing all _@mentions_ by _@user_ and removed all URLs. For the Nigerian language test sets, we further lower-cased the tweets (Muhammad et al., 2022). ## 5 Data Annotation Challenges Tweet samples were randomly selected based on the different collection strategies. Then, with the exception of the Ethiopian languages, each tweet was annotated by three native speakers. We followed the sentiment annotation guidelines of Mohammad (2016) and used majority voting (Davani et al., 2021) to determine the final sentiment label for each tweet (Muhammad et al., 2022). We discarded the cases where all annotators disagreed. The datasets of the three Ethiopian languages (Amharic, Tigrinya, and Oromo) were annotated by two independent annotators, and then curated by a third, more experienced individual who decided on the final gold labels. Prabhakaran et al. (2021) showed that the majority vote conceals systematic disagreements between annotators resulting from their sociocultural backgrounds and experiences. Therefore, we release all the individual labels to the research community. We report the free marginal multi-rater kappa scores (Randolph, 2005) in Table 5, since chance-adjusted scores such as Fleiss-\(\kappa\) can be low despite a high agreement due to imbalanced label distributions (Randolph, 2005; Falotico and Quatto, 2015; Matheson, 2019). We obtained intermediate to good levels of agreement (\(0.40-0.75\)) across all languages, except for Oromo, where we obtained a low agreement score due to the annotation challenges that we discuss below. Figure 3: Label distributions for the different AfriSenti datasets. Table 6 shows the number of tweets in each of the 14 datasets. The Hausa collection of tweets is the largest AfriSenti dataset and the Xitsonga dataset is the smallest one.
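For concreteness, the aggregation and agreement statistics described above admit a small sketch: majority voting with full-disagreement cases discarded, and Randolph's free-marginal multi-rater kappa, \(\kappa_{free}=(P_{o}-1/q)/(1-1/q)\) for \(q\) categories, where \(P_{o}\) is the mean pairwise agreement. The three-annotator setup and the toy labels below are illustrative assumptions.

```python
from collections import Counter

def majority_label(labels, min_votes=2):
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= min_votes else None  # None: all annotators disagree

def free_marginal_kappa(items, q=3):
    # items: one list of annotator labels per tweet; P_o = mean pairwise agreement
    per_item = []
    for labels in items:
        pairs = [(a, b) for i, a in enumerate(labels) for b in labels[i + 1:]]
        per_item.append(sum(a == b for a, b in pairs) / len(pairs))
    p_o = sum(per_item) / len(per_item)
    return (p_o - 1 / q) / (1 - 1 / q)

anns = [["pos", "pos", "neg"], ["neu", "neu", "neu"], ["pos", "neg", "neu"]]
gold = [majority_label(a) for a in anns if majority_label(a) is not None]
print(round(free_marginal_kappa(anns), 3), gold)  # -> 0.167 ['pos', 'neu']
```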
We observe that the distribution for some languages such as hau is fairly equitable, while in others such as pcm, the proportion of tweets in each class varies widely. Sentiment annotation for African languages presents some challenges (Muhammad et al., 2022) that we highlight in the following. **Twi.** A significant portion of tweets in _Twi_ were ambiguous, making it difficult to accurately categorize sentiment. Some tweets contained symbols that are not in the Twi alphabet, which is a frequent occurrence due to the lack of support for certain Twi letters on keyboards (Scannell, 2011). For example, "ɔ" is replaced by the English letter "c", and "ɛ" is replaced by "3". Additionally, tweets are more often annotated as negative (cf. Figure 3). This is due to some common expressions that can be seen as offensive depending on the context. For instance, "_Tweaa_" was once considered an insult but has become a playful expression through trolling, and "_gyae gyemii_" is commonly used by young people to say "stop" while its literal meaning is "stop fooling". **Mozambican Portuguese and Xitsonga.** One of the significant challenges for the Mozambican Portuguese and Xitsonga data annotators was the presence of code-mixed and sarcastic tweets. Code-mixing in tweets made it challenging for the annotators to determine the intended meaning of a tweet, as it involved multiple languages spoken in Mozambique that some annotators did not understand. Similarly, the presence of the two variants of Xitsonga spoken in Mozambique (Changana and Ronga) added to the complexity of the annotation task. Additionally, sarcasm was a source of disagreement among annotators, leading to the exclusion of many tweets from the final dataset. **Ethiopian languages.** For Oromo and Tigrinya, challenges included finding annotators and the lack of a reliable Internet connection and access to personal computers. Although we trained the Oromo annotators, we observed severe problems in the quality of the annotated data, which led to a low agreement score. **Algerian Arabic.** For Algerian Arabic, the main challenge was the use of sarcasm. When this caused a disagreement among the annotators, the tweet was further labeled by two additional annotators. If all the annotators did not agree on one final label, we discarded it. As Twitter is also commonly used to discuss controversial topics in the region, we removed offensive tweets. ## 6 Experiments ### Setup For our baseline experiments, we considered three settings: (1) monolingual baseline models based on multilingual pre-trained language models for the 12 AfriSenti languages with training data, (2) multilingual training on all 12 languages and evaluation on the combined test set of all 12 languages, and (3) zero-shot transfer to Oromo (orm) and Tigrinya (tir) from any of the 12 languages with available training data. **Monolingual baseline models.** We _fine-tune_ massively multilingual pre-trained language models (PLMs) trained on 100 languages from around the world and Africa-centric PLMs trained exclusively on languages spoken in Africa. For the massively multilingual PLMs, we selected two representative PLMs: XLM-R-{base & large} (Conneau et al., 2020) and mDeBERTaV3 (He et al., 2021). For the Africa-centric models, we make use of AfriBERTa-large (Ogueji et al., 2021) and AfroXLMR-{base & large} (Alabi et al., 2022), an XLM-R model adapted to African languages.
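In code, each monolingual baseline amounts to standard sequence-classification fine-tuning. The following minimal sketch assumes the HuggingFace `transformers` and `datasets` libraries; the checkpoint name, file names, column names, and hyperparameters are illustrative assumptions, not the exact configuration behind the reported numbers.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "Davlan/afro-xlmr-base"  # assumed name; any PLM above would do
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

label2id = {"negative": 0, "neutral": 1, "positive": 2}

def encode(batch):
    enc = tok(batch["tweet"], truncation=True, max_length=128)
    enc["label"] = [label2id[l] for l in batch["label"]]
    return enc

# assumed TSV splits with "tweet" and "label" columns (here: Hausa)
data = load_dataset("csv", data_files={"train": "hau_train.tsv", "dev": "hau_dev.tsv"},
                    delimiter="\t").map(encode, batched=True,
                                        remove_columns=["tweet", "label"])

args = TrainingArguments(output_dir="afrisenti-hau", num_train_epochs=5,
                         per_device_train_batch_size=32, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=data["train"],
        eval_dataset=data["dev"], tokenizer=tok).train()
```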
AfriBERTa was pre-trained from scratch on 11 African languages, including nine of the AfriSenti languages, while AfroXLMR supports 10 of the AfriSenti languages. Additionally, we fine-tune XLM-T (Barbieri et al., 2022), an XLM-R model adapted to the multilingual Twitter domain, supporting over 30 languages but fewer African languages due to a lack of coverage by Twitter's language API (cf. §4). ### Experimental Results \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & & & & & & & & & & & & \multicolumn{3}{c}{**2-way**} \\ \cline{13-15} **Lang.** & **arq** & **ary** & **hau** & **ibo** & **kin** & **pcm** & **pt-MZ** & **swa** & **tso** & **twi** & **yor** & **amh** & **orm** & **tir** \\ \hline \(\kappa\) & \(0.41\) & \(0.62\) & \(0.66\) & \(0.61\) & \(0.43\) & \(0.60\) & \(0.50\) & \(-\) & \(0.50\) & \(0.51\) & \(0.65\) & \(0.47\) & \(0.20\) & \(0.51\) \\ \hline \hline \end{tabular} \end{table} Table 5: Inter-annotator agreement scores using the free marginal multi-rater kappa (Randolph, 2005) for the different languages. \begin{table} \begin{tabular}{l r r r r r r r r r r r r r r} \hline \hline & **amh** & **arq** & **hau** & **ibo** & **ary** & **orm** & **pcm** & **pt-MZ** & **kin** & **swa** & **tir** & **tso** & **twi** & **yor** \\ \hline **train** & 5,985 & 1,652 & 14,173 & 10,193 & 5,584 & - & 5,122 & 3,064 & 3,303 & 1,811 & - & 805 & 3,482 & 8,523 \\ **dev** & 1,498 & 415 & 2,678 & 1,842 & 1,216 & 397 & 1,282 & 768 & 828 & 454 & 399 & 204 & 389 & 2,091 \\ **test** & 2,000 & 959 & 5,304 & 3,683 & 2,962 & 2,097 & 4,155 & 3,663 & 1,027 & 749 & 2,001 & 255 & 950 & 4,516 \\ \hline **Total** & 9,483 & 3,062 & 22,155 & 15,718 & 9,762 & 2,494 & 10,559 & 7,495 & 5,158 & 3,014 & 2,400 & 1,264 & 4,821 & 15,130 \\ \hline \hline \end{tabular} \end{table} Table 6: Sizes and splits of the AfriSenti datasets. We do not allocate training splits for Oromo (orm) and Tigrinya (tir) due to the limited size of the data and only evaluate on them in a zero-shot transfer setting in §6. Table 7 shows the results of the monolingual baseline models on AfriSenti. AfriBERTa obtained the worst performance on average (\(61.7\)), especially for languages it was not pre-trained on, e.g., \(<50\) for the Arabic dialects. However, it achieved good results for languages it has been pre-trained on, such as hau, ibo, swa, and yor. XLM-R-base led to a performance that is comparable to AfriBERTa on average: worse for most African languages, but better for the Arabic dialects and pt-MZ. On the other hand, AfroXLMR-base and mDeBERTaV3 achieve similar performance, although AfroXLMR-base performs slightly better for kin and pcm compared to other models. Overall, considering models with up to 270M parameters, XLM-T achieves
Table 8 shows the performance of multilingual models that were fine-tuned on the combined training data and evaluated on the combined test data of all languages. Similar to before, AfroXLMR-large achieves the best performance, outperforming AfroXLMR-base, XLM-R-large, and XLM-T-base by more than 2.5 F1 points. Finally, Table 9 shows the zero-shot cross-lingual transfer performance from models trained on different source languages with available training data to the test-only languages orm and tr. The best source languages are Hausa or Amharic for orm, and Hausa or Yoruba for tr. Hausa even outperforms a multilingually trained model. The impressive performance for transfer between Hausa and Oromo may be because both are from the same language family and share a similar Latin script. In addition, Hausa has the largest training dataset in AfriSenti. Both linguistic similarity and size of source language data have been shown to correlate with successful cross-lingual transfer Lin et al. (2019). However, it is unclear why Yoruba performs particularly well for tr despite the difference in script. One hypothesis is that Yoruba may be a good source language in general, as shown in Adelani et al. (2022) where Yoruba is the second best source language for named entity recognition in African languages. ## 7 Conclusion and Future Work We presented AfriSenti, a collection of sentiment Twitter datasets annotated by native speakers in 14 African languages used in the first Afro-centric SemEval shared task--SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSentiSemEval). We reported the challenges faced during data collection and annotation as well as experimental results using state-of-the-art pre-trained language models. We release the datasets, and data resources to the research community. AfriSenti opens up new avenues for sentiment analysis research in under-represented languages. In the future, we plan to extend _AfriSenti_ to more African languages and different sentiment analysis sub-tasks. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Lang.**} & **In XLM-R or** & **In** & **In** & **In** & **AfriBERTa** & **XLM-R** & **AfroXLMR** & **mb+BERTa** & **XLM-T** & **XLM-R** & **AfroXLMR** \\ & **mb+BERTa?** & **AfriBERTa** & **AfroXLMR** & **XLM-T** & **large** & **base** & **base** & **base** & **large** & **large** \\ \hline anh & ✓ & ✓ & ✓ & ✓ & 56.9 & 60.2 & 54.9 & 57.6 & 60.8 & **61.8** & 61.6 \\ arq & ✓ & ✗ & ✓ & ✓ & 47.7 & 65.9 & 65.5 & 65.7 & **69.5** & 63.9 & 68.3 \\ ary & ✓ & ✗ & ✓ & ✓ & 44.1 & 50.9 & 52.4 & 55.0 & **58.3** & 57.7 & 56.6 \\ hau & ✓ & ✓ & ✓ & ✗ & 78.7 & 73.2 & 77.5 & 73.3 & 75.7 & **80.7** \\ libo & ✗ & ✓ & ✓ & ✗ & 78.6 & 75.6 & 76.3 & 77.5 & 76.1 & 76.5 & **79.5** \\ kin & ✗ & ✓ & ✓ & ✗ & 62.7 & 56.7 & 67.2 & 65.5 & 59.0 & 55.7 & **70.6** \\ pcm & ✗ & ✓ & ✓ & ✗ & 62.3 & 63.8 & 67.6 & 66.2 & 66.6 & 67.2 & **68.7** \\ pt-Mz & ✓ & ✗ & ✓ & 58.3 & 70.1 & 66.6 & 68.6 & 71.3 & 71.6 & **71.6** \\ swa & ✓ & ✓ & ✓ & ✗ & 61.5 & 57.8 & 60.8 & 59.5 & 58.4 & 61.4 & **63.4** \\ tso & ✗ & ✗ & ✗ & 51.6 & 47.4 & 45.9 & 47.4 & **53.8** & 43.7 & 47.3 \\ twi & ✗ & ✗ & ✗ & **65.2** & 61.4 & 62.6 & 63.8 & 65.1 & 59.9 & 64.3 \\ yor & ✗ & ✓ & ✓ & ✗ & 72.9 & 62.7 & 70.0 & 68.4 & 64.2 & 62.4 & **74.1** \\ AVG & - & - & - & - & 61.7 & 61.9 & 63.9 & 64.2 & 64.7 & 63.1 & **67.2** \\ \hline \hline \end{tabular} \end{table} Table 7: Accuracy scores of monolingual baselines for AfriSenti on the 12 languages with training splits. 
Results are averaged over 5 runs. \begin{table} \begin{tabular}{l c} \hline \hline **Model** & **F1** \\ \hline AfriBERTa-large & 64.7 \\ XLM-R-base & 64.3 \\ AfroXLMR-base & 68.4 \\ mDeBERTaV3-base & 66.1 \\ XLM-T-base & 65.9 \\ XLM-R-large & 66.9 \\ AfroXLMR-large & **71.2** \\ \hline \hline \end{tabular} \end{table} Table 8: Multilingual training and evaluation on the combined test sets of all languages. Averages over 5 runs. ## 8 Ethics Statement Sentiment and emotions are complex and nuanced mental states. Additionally, each individual expresses sentiment differently through language, which results in large amounts of variation. Therefore, several ethical considerations should be accounted for when working on sentiment analysis. See Mohammad (2022, 2023) for a comprehensive discussion of ethical considerations relevant to sentiment and emotion analysis. ## Acknowledgements We thank all the volunteer annotators involved in this project. Without their support and valuable contributions, this project would not have been possible. This research was partly funded by the Lacuna Fund, an initiative co-founded by The Rockefeller Foundation, Google.org, and Canada's International Development Research Centre. The views expressed herein do not necessarily represent those of Lacuna Fund, its Steering Committee, its funders, or Meridian Institute. We are grateful to Adnan Ozturel for helpful comments on a draft of this paper. We thank Tal Perry for providing the LightTag (Perry, 2021) annotation tool. We also thank the Language Technology Group, University of Hamburg, for allowing us to use the WebAnno (Yimam et al., 2013) annotation tool for all the Ethiopian language annotation tasks. David Adelani acknowledges the support of the DeepMind Academic Fellowship Programme.
2310.07874
Refined Mechanism Design for Approximately Structured Priors via Active Regression
We consider the problem of a revenue-maximizing seller with a large number of items $m$ for sale to $n$ strategic bidders, whose valuations are drawn independently from high-dimensional, unknown prior distributions. It is well-known that optimal and even approximately-optimal mechanisms for this setting are notoriously difficult to characterize or compute, and, even when they can be found, are often rife with various counter-intuitive properties. In this paper, following a model introduced recently by Cai and Daskalakis~\cite{cai2022recommender}, we consider the case that bidders' prior distributions can be well-approximated by a topic model. We design an active learning component, responsible for interacting with the bidders and outputting low-dimensional approximations of their types, and a mechanism design component, responsible for robustifying mechanisms for the low-dimensional model to work for the approximate types of the former component. On the active learning front, we cast our problem in the framework of Randomized Linear Algebra (RLA) for regression problems, allowing us to import several breakthrough results from that line of research, and adapt them to our setting. On the mechanism design front, we remove many restrictive assumptions of prior work on the type of access needed to the underlying distributions and the associated mechanisms. To the best of our knowledge, our work is the first to formulate connections between mechanism design, and RLA for active learning of regression problems, opening the door for further applications of randomized linear algebra primitives to mechanism design.
Christos Boutsikas, Petros Drineas, Marios Mertzanidis, Alexandros Psomas, Paritosh Verma
2023-10-11T20:34:17Z
http://arxiv.org/abs/2310.07874v1
# Refined Mechanism Design for Approximately Structured Priors via Active Regression ###### Abstract We consider the problem of a revenue-maximizing seller with a large number of items \(m\) for sale to \(n\) strategic bidders, whose valuations are drawn independently from high-dimensional, unknown prior distributions. It is well-known that optimal and even approximately-optimal mechanisms for this setting are notoriously difficult to characterize or compute, and, even when they can be found, are often rife with various counter-intuitive properties. In this paper, following a model introduced recently by Cai and Daskalakis [1], we consider the case that bidders' prior distributions can be well-approximated by a topic model. We design an active learning component, responsible for interacting with the bidders and outputting low-dimensional approximations of their types, and a mechanism design component, responsible for robustifying mechanisms for the low-dimensional model to work for the approximate types of the former component. On the active learning front, we cast our problem in the framework of Randomized Linear Algebra (RLA) for regression problems, allowing us to import several breakthrough results from that line of research, and adapt them to our setting. On the mechanism design front, we remove many restrictive assumptions of prior work on the type of access needed to the underlying distributions and the associated mechanisms. To the best of our knowledge, our work is the first to formulate connections between mechanism design and RLA for active learning of regression problems, opening the door for further applications of randomized linear algebra primitives to mechanism design. ## 1 Introduction The design of revenue-optimal auctions is a central problem in Economics and Computer Science. In this problem, a revenue-maximizing seller has \(m\) heterogeneous items for sale to \(n\) strategic bidders. Each bidder \(i\) has a type \(\mathbf{t}_{i}\in\mathbb{R}^{d}\) which contains enough information to encode the bidder's willingness to pay for every subset of items. Bidders' types are private information, and thus, in order to provide meaningful guarantees on the seller's revenue, the standard approach in Economics is to make a Bayesian assumption: types are drawn from a joint distribution \(\mathcal{D}\). Assuming a single item for sale and bidders' types that are drawn independently from known distributions, Myerson's seminal work [21] provides a closed-form solution for the revenue-optimal mechanism. Beyond this single-item case, however, multi-item mechanism design remains an active research agenda, even forty years later. Optimal mechanisms are no longer tractable, in any sense of the word, and they exhibit various counter-intuitive properties [23, 1, 13, 14, 15, 16, 17, 18, 19]; see [15] for a survey. On the other hand, significant research effort has culminated in numerous compelling positive results, such as simple and approximately optimal auctions [1, 1, 18, 19, 20, 21, 22, 23], as well as efficient algorithms for computing near-optimal auctions [1, 1, 18, 19, 20, 21, 22, 24], even with just sampling access to the type distribution [1, 1, 13, 13, 13, 14, 15, 16, 17, 18, 19, 20, 21]. Despite all this progress, however, there are key challenges in applying these results in practice.
First, the computational complexity, sample complexity, approximation guarantees, and communication complexity (i.e., the amount of information the bidder should communicate to the seller) of these results often depend on the number of items \(m\), which could be prohibitively large (e.g., think of \(m\) as the number of items on Amazon.com). Second, bidders' type distributions are typically high-dimensional, or otherwise complex objects, that are not known, nor can they be sampled. Instead, the designer might have an estimated distribution \(\hat{\mathcal{D}}\), e.g., through market research, that is close to the real distribution \(\mathcal{D}\). Motivated by these issues, Cai and Daskalakis [1] introduce a model where the true type distribution \(\mathcal{D}_{i}\) of bidder \(i\) is close to a structured distribution \(\hat{\mathcal{D}}_{i}\). Specifically, they assume that there is an underlying design matrix \(\mathbf{A}\in\mathbb{R}^{m\times k}\) of \(k\) "archetypes," with \(k\ll m\). Intuitively, bidder \(i\) can be approximated by a linear combination of \(k\) archetypal bidders. This same assumption has been central in the study of recommender systems. In this model, [1] give a framework for transforming a mechanism \(\hat{\mathcal{M}}\) for the low-dimensional distribution \(\hat{\mathcal{D}}_{z}\) into a mechanism for the true type distribution with good revenue guarantees, and whose computational and communication complexity does not depend on the number of items \(m\). The impact of their work notwithstanding, the above results require strong structural assumptions on: the design matrix \(\mathbf{A}\); the bidders' valuation functions; and very specific access to (or exact knowledge of) the structured distribution \(\hat{\mathcal{D}}_{z}\) and the mechanism \(\hat{\mathcal{M}}\). _Our work connects the recommender system approaches for mechanism design with recent progress on Randomized Linear Algebra for active learning for regression problems. We relax, and even remove, these restrictive assumptions, and open the door to future exploration of more elaborate recommender system models in the context of mechanism design, using randomized linear algebra primitives._ **The framework and results of [1].** To place our results in context, we start with a brief overview of [1], which considers a setting where the type distribution \(\mathcal{D}_{i}\) of bidder \(i\) is close to a distribution \(\hat{\mathcal{D}}_{i}\) in the Prokhorov distance, under the \(\ell_{\infty}\) norm (Definition 1). Here, \(\hat{\mathcal{D}}_{i}\) first samples a vector \(\mathbf{z}\in[0,1]^{k}\) from a low-dimensional distribution \(\hat{\mathcal{D}}_{z,i}\), and then outputs \(\mathbf{A}\mathbf{z}\), where \(\mathbf{A}\in\mathbb{R}^{m\times k}\) is a known matrix. The proposed framework has a learning and a mechanism design component.
Their learning component consists of a communication-efficient1 query protocol \(\mathcal{Q}\) for interacting with each bidder \(i\) such that, if the type \(\mathbf{t}_{i}\) of bidder \(i\) satisfies \(\|\mathbf{t}_{i}-\mathbf{A}\mathbf{z}\|_{\infty}\leq\varepsilon\), the query protocol outputs a vector \(\mathcal{Q}(\mathbf{t}_{i})\) such that \(\|\mathcal{Q}(\mathbf{t}_{i})-\mathbf{z}\|_{\infty}\leq\zeta_{\infty}\).2[1] give such query protocols under strong (distinct) conditions on \(\mathbf{A}\), and specifically when \(\mathbf{A}\): _(i)_ satisfies an assumption similar to the separability condition of [13], _(ii)_ is generated from a distribution where each archetype is an independent copy of an \(m\)-dimensional Gaussian, or _(iii)_ is generated from a distribution where each archetype is an independent copy of an \(m\)-dimensional, bounded distribution with weak dependence. We discuss these restrictions in more detail in Appendix D.4.1. The query complexity, as well as the error \(\zeta_{\infty}\), depend on them, but, importantly, are independent of the number of items \(m\). Footnote 1: By efficient we mean a query protocol that asks each bidder a small number of queries. See also Definition 3. Footnote 2: Recall that for any (integer) \(1\leq p<\infty\) and a vector \(\mathbf{x}\in\mathbb{R}^{d}\), \(\|\mathbf{x}\|_{p}^{p}=\sum_{i=1}^{d}|\mathbf{x}_{i}|^{p}\); for \(p=\infty\), \(\|\mathbf{x}\|_{\infty}=\max_{i=1\dots d}|\mathbf{x}_{i}|\). See [1] for an exact expression for \(\zeta_{\infty}\). Their mechanism design component is a refinement of a robustification result of [1]. For this transformation to work, one needs to interact with the mechanism and the underlying distributions using highly non-trivial operations, which are computationally demanding and require exact knowledge of bidders' valuation functions. In our work we overcome these issues by developing new reductions and plugging them into the framework established by [1] and [1]. The overall interplay between the different mechanism design and active regression components can be seen in Fig. 1. Combining the two components, [1] obtain mechanisms for \(\mathcal{D}\), for constrained-additive bidders.3 In these mechanisms, each bidder is required to answer a small number of simple Yes/No queries of the form "are you willing to pay \(p\) for item \(j\)?", such that the loss in revenue and the violation in incentive compatibility do not depend on the number of items \(m\). Footnote 3: A function is constrained-additive if \(v(t,S)=\max\limits_{T\in\mathcal{I}\cap 2^{S}}\sum_{j\in T}t_{j}\), where \(\mathcal{I}\) is a downward-closed set system. ### Our contributions **Randomized Linear Algebra (RLA) for active learning.** RLA for active learning has focused on solving regression problems of the form \[\mathbf{z}=\arg\min_{\mathbf{x}}\|\mathbf{t}-\mathbf{A}\mathbf{x}\|_{p},\] for \(\ell_{p}\) norms with \(1\leq p<\infty\), by querying _only_ a subset of the elements of the vector \(\mathbf{t}\). In prior work on RLA for active learning, the focus has been on recovering an approximate solution that achieves a relative error or constant factor approximation to the optimum loss. We adapt these bounds to our setting, which could include noisy instead of exact queries, and prove bounds for the \(\ell_{p}\) norm error of the exact minus the approximate solution. Specifically, we bound \(\|\mathcal{Q}(\mathbf{t}_{i})-\mathbf{z}\|_{p}\leq\zeta_{p}\). We provide bounds on \(\zeta_{p}\) for all integers \(1\leq p<\infty\).
Our bounds depend on the modelling error \(\|\mathbf{t}-\mathbf{A}\mathbf{x}\|_{p}\) and some measure of the query noise (see Definitions 4 and 5); both dependencies are expected. Importantly, our bounds hold for _arbitrary archetype matrices_, very much unlike the work of [1], which focused on very restricted classes of matrices. A single property of the archetype matrix, the smallest singular value with respect to the \(\ell_{p}\) induced matrix norm, \(\sigma_{\min,p}(\mathbf{A})\), can characterize the quality of the error. As \(\sigma_{\min,p}(\mathbf{A})\) decreases, the error \(\zeta_{p}\) grows. Intuitively, \(\sigma_{\min,p}(\mathbf{A})\) is a measure of independence of the archetypes, with small values corresponding to linearly dependent archetypes. The query complexity needed to achieve error \(\zeta_{p}\) is almost linear (up to logarithmic factors) in \(k\) for \(p=1\) and \(p=2\), and grows with \(k^{p/2}\) for \(p\geq 3\). Our query complexity bounds have no dependency on \(m\) (the number of items) or \(d\) (the dimensionality of a type), enabling us to produce results well beyond constrained-additive valuations, as \(d=2^{m}\) dimensions suffice to encode _arbitrary_ valuation functions. It is critical to highlight that our ability to provide bounds on the approximation error for arbitrary archetype matrices is, at least partly, due to leveraging information from the archetype matrix \(\mathbf{A}\). Specifically, we use this information to select which bidder types to query, instead of just querying types uniformly at random. This information involves the computation or approximation of the well-studied leverage scores of \(\mathbf{A}\) for \(p=2\) and of the so-called Lewis weights for all other values of \(p\) (see Section 3.1). We do note that in our framework, the errors due to the modeling of the bidder type \(\mathbf{t}\) as the product \(\mathbf{A}\mathbf{z}\) and the query noise are always bounded by the respective \(\ell_{p}\) norm. Thus, our models are more restrictive compared to the \(\ell_{\infty}\) norm models of [1]. However, to the best of our knowledge, even assuming such restrictive models, the results and tools of prior work do not extend to arbitrary archetype matrices. This is precisely the gap that is bridged by our work, using RLA for active learning of \(\ell_{p}\) norm regression for \(1\leq p<\infty\). **Mechanism Design.** On the mechanism design front, our main contribution is relaxing the assumptions of [1, 1] on the type of access needed to the low-dimensional distribution and the mechanism for it. Specifically, we further refine the robustification result of Brustle et al. [1] and remove the need for the aforementioned strong oracle. The main difficulty of transforming mechanisms for one distribution into mechanisms for another distribution is that the two distributions might not share the same support. The crux of the issue is that the incentive constraints are very delicate; a small change in the underlying distribution may drastically change the agents' _valuation distribution_ over the mechanisms' outcomes. One way to tame the distribution of outcomes is to map bids that are not in the support of the initial distribution to bids that are. Brustle et al.
[1] do this by "optimally misreporting" on behalf of the bidder, by calculating \(\operatorname*{argmax}_{\mathbf{t}_{i}^{\prime}\in supp(\widetilde{\mathcal{D}}_{z,i})}\mathbb{E}_{\mathbf{b}_{-i}\sim\widetilde{\mathcal{D}}_{z,-i}}[u_{i}(v_{i},\hat{\mathcal{M}}(\mathbf{t}_{i}^{\prime},\mathbf{b}_{-i}))]\), where \(\widetilde{\mathcal{D}}_{z,i}\) is a rounded-down version of \(\hat{\mathcal{D}}_{z,i}\). As we've discussed, for this operation to be viable, many things need to be assumed about what the designer knows and can compute about \(\hat{\mathcal{M}}\), \(\hat{\mathcal{D}}_{z,i}\), and the bidder's valuation function. Our approach, instead, leverages the fact that when two distributions are close in Prokhorov distance, under any \(\ell_{p}\) norm, _any_ point on the support of one distribution is close to a point on the support of the other, with high probability. Our construction simply maps a report \(\mathbf{w}_{i}\) to the "valid" report (approximately) closest to \(\mathbf{w}_{i}\) in \(\ell_{p}\) distance. This operation is linear in the support size. Furthermore, our overall robustification result holds for all norms, not just \(\ell_{\infty}\), and our construction is completely agnostic to bidders' valuation functions. **Combining the components.** Combining the two components we can, without any assumptions on \(\mathbf{A}\), given a mechanism for the low-dimensional prior, design mechanisms with comparable revenue guarantees, where each bidder is required to answer a small number of queries. Our queries ask a bidder her value for a subset of items, and our mechanism can accommodate _any_ valuation function, significantly extending the scope of our results. **Related Work.** Aside from the work of [1], on the mechanism design front, the most relevant works are [1], that consider learning multi-item auctions given "approximate" distributions, and [1], that consider learning multi-item auctions when type distributions are correlated, yet admit special structure. On the RLA front, we leverage and adapt multiple recent results on approximating \(\ell_{p}\) regression problems in an active learning setting. We discuss prior work on RLA for active learning and its connections to our setting in Section 3.1 and Appendix D. ## 2 Preliminaries Let \([n]\coloneqq\{1,2,\ldots,n\}\) denote the first \(n\) natural numbers. A revenue-maximizing seller has a set \([m]\) of \(m\) heterogeneous items for sale to \(n\) strategic bidders indexed by \([n]\). Bidder \(i\) has a private type vector \(\mathbf{t}_{i}\in\mathbb{R}^{d}\), and the types of all bidders are represented by a _valuation profile_ \(\mathbf{t}=(\mathbf{t}_{1},\ldots,\mathbf{t}_{n})\). Bidder \(i\) has a valuation function \(v_{i}:\mathbb{R}^{d}\times 2^{[m]}\rightarrow\mathbb{R}_{+}\) that takes as input the bidder's type and a (possibly randomized4) set of items \(S\subseteq[m]\) and outputs the bidder's value for \(S\). Note that \(d\leq 2^{m}\), since expressing a valuation function requires at most one real number per subset of items. Types are drawn independently. Let \(\mathcal{D}=\times_{i\in[n]}\mathcal{D}_{i}\) be the distribution over bidders' types, \(\mathcal{D}_{-i}=\times_{j\in[n]\setminus\{i\}}\mathcal{D}_{j}\) be the distribution of all bidders excluding \(i\), and \(supp(\mathcal{D})\) be the support of a distribution \(\mathcal{D}\). Footnote 4: Each subset might be selected according to a probability distribution.
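As a preview of the preliminaries that follow, the report-mapping step of our mechanism design component is short enough to sketch: an off-support report is replaced by the (approximately) closest point, in \(\ell_{p}\) distance, on the support of the structured prior. The function names and the brute-force scan below are illustrative assumptions; the construction only needs time linear in the support size.

```python
import numpy as np

def map_to_support(report, support, p=2):
    """Return the support point closest to `report` in l_p distance.

    support: array of shape (s, k) listing the points of supp(D_z);
    report:  array of shape (k,), a possibly off-support latent report.
    """
    dists = np.linalg.norm(support - report, ord=p, axis=1)  # one pass, O(s)
    return support[np.argmin(dists)]

support = np.array([[0.0, 0.1], [0.5, 0.5], [0.9, 0.2]])
print(map_to_support(np.array([0.45, 0.52]), support, p=1))  # -> [0.5 0.5]
```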
**Mechanisms.** A mechanism \(\mathcal{M}=(x,p)\) is a tuple where \(x:\mathbb{R}_{+}^{nd}\to 2^{[nm]}\) is the allocation rule, and \(p:\mathbb{R}_{+}^{nd}\rightarrow\mathbb{R}_{+}^{n}\) is the payment rule, which map _reported_ types to allocations of the items and payments, respectively. Specifically, \(x_{i,j}(\mathbf{b})\coloneqq(x(\mathbf{b}))_{i,j}\) denotes the probability that bidder \(i\) receives item \(j\) for input valuation profile \(\mathbf{b}\), and \(p_{i}(\mathbf{b})\coloneqq(p(\mathbf{b}))_{i}\) denotes the amount bidder \(i\) has to pay. Let \(u_{i}(\mathbf{t}_{i},\mathcal{M}(\mathbf{b}))\) be the utility of bidder \(i\) with type \(\mathbf{t}_{i}\) for participating in mechanism \(\mathcal{M}\), under reports \(\mathbf{b}\). Bidders are risk-neutral and quasi-linear, i.e., \(u_{i}(\mathbf{t}_{i},\mathcal{M}(\mathbf{b}))=\mathbb{E}\left[v_{i}(\mathbf{t}_{i},x(\mathbf{b}))-p_{i}(\mathbf{b})\right]\), where the expectation is taken over the randomness of the allocation rule. Since we only consider truthful mechanisms, unless stated otherwise, reported types will be the same as the true types. A bidder's objective is to maximize her utility. The seller strives to design mechanisms that incentivize bidders to report truthfully. We use the following notions of truthfulness. A mechanism \(\mathcal{M}\) is _\(\varepsilon\)-Bayesian Incentive Compatible (\(\varepsilon\)-BIC)_, if for each bidder \(i\in[n]\), any type \(\mathbf{t}_{i}\) and misreport \(\mathbf{t}_{i}^{\prime}\) we have that: \(\mathbb{E}_{\mathbf{t}_{-i}\sim\mathcal{D}_{-i}}[u_{i}(\mathbf{t}_{i},\mathcal{M}(\mathbf{t}_{i},\mathbf{t}_{-i}))]\geq\mathbb{E}_{\mathbf{t}_{-i}\sim\mathcal{D}_{-i}}[u_{i}(\mathbf{t}_{i},\mathcal{M}(\mathbf{t}_{i}^{\prime},\mathbf{t}_{-i}))]-\varepsilon\). A mechanism \(\mathcal{M}\) is \((\varepsilon,\delta)\)-BIC if for each bidder \(i\in[n]\), and any misreport \(\mathbf{t}_{i}^{\prime}\) we have that: \[\mathbb{P}_{\mathbf{t}_{i}\sim\mathcal{D}_{i}}\left[\mathbb{E}_{\mathbf{t}_{-i}\sim\mathcal{D}_{-i}}[u_{i}(\mathbf{t}_{i},\mathcal{M}(\mathbf{t}_{i},\mathbf{t}_{-i}))]\geq\mathbb{E}_{\mathbf{t}_{-i}\sim\mathcal{D}_{-i}}[u_{i}(\mathbf{t}_{i},\mathcal{M}(\mathbf{t}_{i}^{\prime},\mathbf{t}_{-i}))]-\varepsilon\right]\geq 1-\delta.\] An \((\varepsilon,0)\)-BIC mechanism is an \(\varepsilon\)-BIC mechanism; a \(0\)-BIC mechanism is simply BIC. Finally, a mechanism \(\mathcal{M}\) is _ex-post Individually Rational (IR)_ if for all valuation profiles \(\mathbf{t}\) and all bidders \(i\in[n]\), \(u_{i}(\mathbf{t}_{i},\mathcal{M}(\mathbf{t}_{i},\mathbf{t}_{-i}))\geq 0\). The seller's objective is to maximize her expected _revenue_. For a mechanism \(\mathcal{M}\) and distribution \(\mathcal{D}\) we denote the expected revenue as \(Rev(\mathcal{M},\mathcal{D})=\mathbb{E}_{\mathbf{t}\sim\mathcal{D}}\left[\sum_{i\in[n]}p_{i}(\mathbf{t})\right]\). Note that we are calculating revenue assuming truthful reports, even for, e.g., \((\varepsilon,\delta)\)-BIC mechanisms.
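The incentive notions just defined can also be probed empirically: the interim utility of truth-telling is compared with that of candidate misreports, the expectation over \(\mathbf{t}_{-i}\) being estimated by Monte Carlo. The sketch below is our illustration; the single-item second-price auction and all concrete inputs are assumptions, not objects from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def interim_utility(mechanism, value_fn, t_i, report_i, sample_others, trials=20_000):
    total = 0.0
    for _ in range(trials):
        alloc, pay = mechanism(report_i, sample_others(rng))
        total += value_fn(t_i, alloc) - pay
    return total / trials

def eps_bic_violation(mechanism, value_fn, t_i, misreports, sample_others):
    truthful = interim_utility(mechanism, value_fn, t_i, t_i, sample_others)
    return max(interim_utility(mechanism, value_fn, t_i, r, sample_others) - truthful
               for r in misreports)

def spa(report_i, t_others):
    # single-item second-price auction, seen from bidder i; BIC, so the
    # estimated violation should be (up to sampling noise) at most 0
    rival = max(t_others)
    return (1.0, rival) if report_i > rival else (0.0, 0.0)

value = lambda t, alloc: t * alloc
others = lambda rng: rng.uniform(0.0, 1.0, size=2)
print(eps_bic_violation(spa, value, 0.7, [0.4, 0.9], others))
```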
}\pi(x,y)<\varepsilon\}\) where \(\pi\) is some distance metric. Two probability measures \(P\), \(Q\) on \(\mathcal{B}\) have Prokhorov distance: \(\inf\{\varepsilon>0:P(A)\leq Q(A^{\varepsilon})+\varepsilon\text{ and }Q(A)\leq P(A^{\varepsilon})+\varepsilon,\forall A\in\mathcal{B}\}\). We choose \(\pi\) to be the \(\ell_{p}\)-distance, and we denote the Prokhorov distance between measures \(P,Q\) as \(\pi_{p}(P,Q)\)._

The following is an equivalent characterization of the Prokhorov distance due to Strassen [14].

**Lemma 1** ([14]).: _Let \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) be two distributions supported on \(\mathbb{R}^{n}\). \(\pi_{p}(\mathcal{D},\mathcal{D}^{\prime})\leq\varepsilon\) iff there exists a coupling \(\gamma\) of \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\), such that \(\mathbb{P}_{(\mathbf{x},\mathbf{y})\sim\gamma}[\|\mathbf{x}-\mathbf{y}\|_{p}>\varepsilon]\leq\varepsilon\)._

**Recommendation system-inspired model.** We assume that, for each bidder, there exists a known design matrix \(\mathbf{A}\in\mathbb{R}^{d\times k}\)5, where the columns of \(\mathbf{A}\) represent \(k\) "archetypes", for a constant \(k\). Our results hold if these matrices are different for each bidder; however, for ease of notation we will assume all bidders have the same design matrix. For each bidder \(i\) there exists a distribution \(\hat{\mathcal{D}}_{z,i}\) supported on the latent space \([0,1]^{k}\). Let \(\hat{\mathcal{D}}_{i}=\mathbf{A}\circ\hat{\mathcal{D}}_{z,i}\) be the distribution induced by multiplying a sample from \(\hat{\mathcal{D}}_{z,i}\) with the design matrix \(\mathbf{A}\), i.e., \(\hat{\mathcal{D}}_{i}\) is the distribution of \(\mathbf{A}\mathbf{y}\) where \(\mathbf{y}\sim\hat{\mathcal{D}}_{z,i}\).

Footnote 5: Notice that we consider the more general case of having \(d\) rows (instead of \(m\)).

The valuation function over the latent types is defined as \(v_{i}^{\mathbf{A}}(\mathbf{z}_{i},S)\coloneqq v_{i}(\mathbf{A}\mathbf{z}_{i},S)\) for any bundle \(S\subseteq[m]\). We will use the following notion of Lipschitz continuity for valuation functions.

**Definition 2** (Lipschitz Valuation).: _A valuation function \(v(\cdot,\cdot):\mathbb{R}^{d}\times[0,1]^{m}\rightarrow\mathbb{R}_{+}\) is \(\mathcal{L}\)-Lipschitz if, for any two types \(\mathbf{t},\mathbf{t}^{\prime}\in\mathbb{R}^{d}\) and any bundle \(S\subseteq[m]\), \(|v(\mathbf{t},S)-v(\mathbf{t}^{\prime},S)|\leq\mathcal{L}\|\mathbf{t}-\mathbf{t}^{\prime}\|_{\infty}\)._

Table 1 in the appendix collects all the notation used throughout the paper.

## 3 Active Learning for Regression and Mechanism Design

In this section, we state our main results, deferring all technical proofs to the appendix. We present mechanisms that are completely agnostic with respect to \(\mathcal{D}\), the distribution from which bidders' types are drawn. However, we have limited access (described later in this section) to _(i)_ a design matrix \(\mathbf{A}\); _(ii)_ distributions \(\hat{\mathcal{D}}_{z,i}\) over \(\mathbb{R}^{k}\), where for all \(i\in[n]\), \(\pi_{p}(\mathcal{D}_{i},\mathbf{A}\circ\hat{\mathcal{D}}_{z,i})\leq\varepsilon_{\texttt{mdl},p}\) for some \(\varepsilon_{\texttt{mdl},p}>0\); and _(iii)_ a mechanism \(\hat{\mathcal{M}}\) for \(\hat{\mathcal{D}}_{z}=\times_{i\in[n]}\hat{\mathcal{D}}_{z,i}\). 
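As a concrete illustration of this setup, the following minimal sketch (with illustrative assumptions: uniform latents on \([0,1]^{k}\), hypothetical dimensions, and an explicit coupling in the spirit of Lemma 1) generates a true type within \(\varepsilon\) of \(\mathbf{A}\mathbf{z}\) in \(\ell_{p}\) norm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 1000, 5          # high-dimensional types, few archetypes (illustrative sizes)
A = rng.random((d, k))  # hypothetical design matrix; columns are the k archetypes

def sample_true_type(eps=0.01, p=2):
    """Draw a latent type z (uniform on [0,1]^k here, an assumption), map it
    through A, and perturb within eps in l_p norm: one explicit coupling under
    which the true types and A composed with the latent prior are eps-close."""
    z = rng.random(k)
    noise = rng.standard_normal(d)
    noise *= eps / np.linalg.norm(noise, ord=p)  # scale so that ||noise||_p == eps
    return A @ z + noise, z

t, z = sample_true_type()
print(np.linalg.norm(t - A @ z, ord=2))  # equals 0.01 by construction
```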
This limited access to the design matrix motivates the use of active learning, which deals precisely with settings where the algorithm is allowed to (interactively) query a subset of the available data points for their respective labels (see [11, 12] for precise definitions of the active learning setting in regression problems). Our approach is modular and starts by building an active learning component for regression problems (Section 3.1), followed by the mechanism design component (Section 3.2). We combine the two components to get an overall mechanism for \(\mathcal{D}\) in Section 3.3.

### Active learning for regression via Randomized Linear Algebra

Our objective is to design a communication-efficient, active learning query protocol for the seller that interacts with each bidder \(i\) and infers her type \(\mathbf{t}_{i}\in\mathbb{R}^{d}\) by accessing only a small subset of elements of the type vector (as \(d\) is very large). We use \(\mathcal{Q}\) to denote the query protocol, whose output is a vector in the low-dimensional latent space \(\mathbb{R}^{k}\). A bidder interacts with the query protocol _truthfully_ if it is in her best interest to evaluate functions requested by the protocol on her true private type \(\mathbf{t}_{i}\). We use \(\mathcal{Q}(\mathbf{t}_{i})\in\mathbb{R}^{k}\) to denote the output of \(\mathcal{Q}\) when interacting with a truthful bidder with type \(\mathbf{t}_{i}\). We now define the notion of an \((\varepsilon_{\texttt{mdl},p},\zeta_{p},p)\)-query protocol and the notion of _query noise_.

**Definition 3** (\((\varepsilon_{\texttt{mdl},p},\zeta_{p},p)\)-query protocol).: \(\mathcal{Q}\) _is called an \((\varepsilon_{\texttt{mdl},p},\zeta_{p},p)\)-query protocol if, for all \(\mathbf{t}\in\mathbb{R}^{d}\) and \(\mathbf{z}\in\mathbb{R}^{k}\) satisfying \(\|\mathbf{t}-\mathbf{A}\mathbf{z}\|_{p}\leq\varepsilon_{\texttt{mdl},p}\), we have \(\|\mathbf{z}-\mathcal{Q}(\mathbf{t})\|_{p}\leq\zeta_{p}\)._

**Definition 4** (Query noise).: _Let \(\mathbf{t}_{i}\) be the true type of a bidder. Our query protocol can access entries of \(\mathbf{t}_{i}+\boldsymbol{\epsilon}_{\texttt{sq},p}\), where \(\boldsymbol{\epsilon}_{\texttt{sq},p}\) is an (unknown) vector. The query noise \(\varepsilon_{\texttt{sq},p}\) satisfies \(\|\boldsymbol{\epsilon}_{\texttt{sq},p}\|_{p}\leq\varepsilon_{\texttt{sq},p}\)._

The query noise depends on the specifics of the interactions between the seller and the bidder. For example, if the seller is only allowed to ask queries of the form "what is your value for the subset \(S\)?", the query noise \(\varepsilon_{\texttt{sq},p}\) is equal to zero. Our bounds will also depend on the _model error_.

**Definition 5** (Model error).: _Given a valuation profile \(\mathbf{t}\in\mathbb{R}^{nd}\), the model error is \(\varepsilon_{\texttt{mdl},p}\) if, for all \(i\in[n]\), there exists a \(\mathbf{z}_{i}\in\mathbb{R}^{k}\) such that \(\|\mathbf{t}_{i}-\mathbf{A}\mathbf{z}_{i}\|_{p}\leq\varepsilon_{\texttt{mdl},p}\)._

Note that we do not have bounds of the form "\(\|\mathbf{t}_{i}-\mathbf{A}\mathbf{z}_{i}\|\leq\varepsilon\)" for individual types, but only for the distributions \(\mathcal{D}_{i}\) and \(\mathbf{A}\circ\hat{\mathcal{D}}_{z,i}\). The characterization of the Prokhorov distance (Lemma 1) allows us to relate the two quantities in the proofs that follow. We now rephrase the above discussion in order to cast it in the framework of Randomized Linear Algebra _(RLA)_ and active learning. 
Dropping the index \(i\) for notational simplicity, we assume that \(\mathbf{A}\mathbf{z}\approx\mathbf{t}\) and we seek to recover an approximate solution vector \(\mathcal{Q}(\mathbf{t})\) such that the \(\ell_{p}\) norm error between the approximate and the optimal solution is bounded. _Importantly_, the query protocol \(\mathcal{Q}\) is _not_ allowed full access to the vector \(\mathbf{t}\) in order to construct the approximate solution vector. This is a well-studied problem in the context of RLA: the learner is given a large collection of \(k\)-dimensional data points (the \(d\gg k\) rows of the design matrix \(\mathbf{A}\in\mathbb{R}^{d\times k}\)), but can only query a small subset of the real-valued labels associated with each data point (elements of the vector \(\mathbf{t}\in\mathbb{R}^{d}\)). Prior work in RLA and active learning has studied this problem in order to identify the optimal number of queries that allow efficient, typically relative error, approximations of the loss function. In our parlance, prior work has explored the existence of query protocols that construct a vector \(\mathcal{Q}(\mathbf{t})\) such that \[\|\mathbf{t}-\mathbf{A}\mathcal{Q}(\mathbf{t})\|_{p}\leq\gamma_{p}\|\mathbf{t}-\mathbf{A}\mathbf{z}\|_{p}, \tag{1}\] where \(\gamma_{p}>1\) is an error parameter that controls the approximation accuracy. Of particular interest in the RLA literature are _relative error_ approximations, with \(\gamma_{p}=1+\epsilon\), for some small \(\epsilon>0\); see [12, 12] for a detailed discussion. However, relative error approximations are less important in our setting, since our protocols in Section 3.2 necessitate \(\zeta_{p}\geq\varepsilon_{\texttt{mdl},p}\). For \(p=2\), the underlying problem is active learning for least-squares regression: [12] analyzed the complexity (namely, the number of queries) of query protocols in this setting, eventually providing matching upper and lower bounds. Similarly, for \(p=1\), the underlying problem is active learning for least absolute deviation regression, a robust version of least-squares regression: [14] analyzed the complexity of query protocols in this setting. The query protocols of [13, 14] are straightforward: they sample a small set of labels (i.e., bidder types) and elicit the bidder's preferences for this set. Then, the respective \(\ell_{p}\) norm regression problem is solved on the smaller set and the resulting solution is returned as an approximation to the original solution.6 The types to be sampled (see Appendix D.1 for details) are selected using distributions that can be constructed by accessing _only_ the design matrix \(\mathbf{A}\). Specifically, for the \(p=2\) case, one needs to compute or approximate the _leverage scores_ of the rows of the matrix \(\mathbf{A}\). For the \(p=1\) case, one needs to compute or approximate the _Lewis weights_ of the design matrix \(\mathbf{A}\). (The Lewis weights are an extension of the leverage scores to \(\ell_{p}\) norms for \(p\neq 2\).) The work of [13, 14, 14, 14, 14] for the \(p=2\) case involves more elaborate query protocols, using primitives such as volume sampling and the Batson-Spielman-Srivastava sparsifier to improve the query complexity. Finally, the \(p>2\) case for active learning for regression problems was recently resolved in [13, 13]; we discuss their approach in our context in Appendix D.2.

Footnote 6: To be precise, multiple smaller problems have to be solved and a "good enough" solution has to be chosen in order to boost the success probability.
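To make the \(p=2\) protocol concrete, here is a minimal sketch (illustrative only; the actual protocols repeat this and select a "good enough" solution, per footnote 6). It samples row indices of \(\mathbf{A}\) proportionally to exact leverage scores, queries the bidder only at those indices, and solves the reweighted least-squares problem. The names `query`, `s`, and `rng` are assumptions of this sketch, with `query(i)` standing in for the bidder interaction:

```python
import numpy as np

def leverage_scores(A):
    """Exact leverage scores of A: squared row norms of the Q factor of a thin QR."""
    Q, _ = np.linalg.qr(A)
    return np.sum(Q**2, axis=1)

def query_protocol_l2(A, query, s, rng):
    """Sample s row indices with probability proportional to leverage scores,
    query only those entries of the type vector, and solve the reweighted
    least-squares problem on the sampled rows."""
    d, k = A.shape
    probs = leverage_scores(A)
    probs /= probs.sum()
    idx = rng.choice(d, size=s, replace=True, p=probs)
    w = 1.0 / np.sqrt(s * probs[idx])            # importance-sampling weights
    A_s = w[:, None] * A[idx]                    # reweighted sampled rows
    b_s = w * np.array([query(i) for i in idx])  # s queried (possibly noisy) labels
    z_hat, *_ = np.linalg.lstsq(A_s, b_s, rcond=None)
    return z_hat
```

With \(s=O(k\ln k\ln(n/\delta))\) samples per bidder, estimators of this form are the backbone of the guarantees stated in Theorem 1 below.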
See Appendix D for details. To the best of our knowledge, our work is the first one to formulate connections between mechanism design and Randomized Linear Algebra for active learning. Two technical points of departure that are needed in order to adapt the RLA work for active learning to the mechanism design framework are: _(i)_ we need to derive bounds of the form of eqn. (1) for the \(\ell_{p}\) norm distance between the exact and approximate solutions, whereas prior work typically bounds the error of the _loss_ function when an approximate solution is used; and _(ii)_ the entries of the bidder's type vector \(\mathbf{t}\) might not be known exactly, but only up to a small error. The latter assumption corresponds to the use of _noisy queries_ in the model of [13] and is known to be equivalent, up to logarithmic factors, to _threshold queries_ via binary search. Our work addresses both technicalities and seamlessly combines the RLA work for active learning with mechanism design.

Prior to stating our main result, we need to define a fundamental property of the design matrix \(\mathbf{A}\in\mathbb{R}^{d\times k}\) that will affect the approximation error. Let \[\sigma_{\min,p}(\mathbf{A})=\min_{\mathbf{x}\in\mathbb{R}^{k},\ \|\mathbf{x}\|_{p}=1}\|\mathbf{A}\mathbf{x}\|_{p}. \tag{2}\] For \(p=2\), this is simply the smallest singular value of the matrix \(\mathbf{A}\). For other values of \(p\), the above definition is the standard generalization of the smallest singular value of \(\mathbf{A}\) for the induced matrix \(\ell_{p}\) norm. Notice that \(\sigma_{\min,p}(\mathbf{A})\) is a property of the matrix \(\mathbf{A}\) and can be computed _a priori_ via, say, the QR factorization or the Singular Value Decomposition (SVD) for \(p=2\) and via linear programming for \(p=1\). As we will see in Theorem 1 below, smaller values of \(\sigma_{\min,p}(\mathbf{A})\) result in increased sample complexity for our query protocols.

**Theorem 1**.: _Let \(\mathbf{A}\in\mathbb{R}^{d\times k}\) be the design matrix, and recall the definitions of the model error \(\varepsilon_{\texttt{mdl},p}\) (Definition 5) and the query noise \(\varepsilon_{\texttt{sq},p}\) (Definition 4). For all integers \(1\leq p<\infty\), there exist query protocols \(\mathcal{Q}\) using \(s_{p}\) queries for each bidder \(i\in[n]\), such that, with probability at least \(1-\delta\),_ \[\|\mathbf{z}_{i}-\mathcal{Q}(\mathbf{t}_{i})\|_{p}\leq\frac{c_{p}(\varepsilon_{\texttt{mdl},p}+\varepsilon_{\texttt{sq},p})}{\sigma_{\min,p}(\mathbf{A})}=\zeta_{p}\] _holds for all \(n\) bidders \(i\in[n]\). Here \(c_{p}\) is a small constant that depends on \(p\).7 The respective query complexities for \(p=1\) and \(p=2\) are (asymptotically) identical:_

Footnote 7: We make no attempt to optimize constants and focus on simplicity of presentation. In our proofs, \(c_{1}=2.5\); \(c_{2}=7.5\); and for \(p\geq 3\), \(c_{p}=18\cdot(200)^{1/p}+3\). Notice that the last constant converges to \(21\) as \(p\) increases.

\[s_{1}=s_{2}=O\left(k\cdot\ln k\cdot\ln\nicefrac{{n}}{{\delta}}\right). \tag{3}\]

_For \(p\geq 3\), the query complexity is_

\[s_{p}=O\left(k^{\nicefrac{{p}}{{2}}}\cdot\ln^{3}k\cdot\ln\nicefrac{{n}}{{\delta}}\right). \tag{4}\]

Several comments are in order. _(i)_ The error \(\zeta_{p}\) is a small constant times the modelling error plus the error due to noisy queries. In the limit case where the modelling error is equal to zero and the queries are noiseless, the bidders' types can be recovered exactly in our framework. 
However, as the modelling error and the query noise increase, approximating user types becomes harder and less accurate. _(ii)_ Importantly, the approximation accuracy of Theorem 1 grows linearly with the inverse of the smallest \(\ell_{p}\) norm singular value of the design matrix \(\mathbf{A}\). Our results indicate that the approximation accuracy of the query model \(\mathcal{Q}\) depends on this simple property of the design matrix \(\mathbf{A}\). For example, focusing on the \(p=2\) case, our theorem shows that as the archetypes (columns of the matrix \(\mathbf{A}\)) become more linearly dependent and the smallest singular value approaches zero, the error of our approximation worsens. This is quite reasonable: if archetypes are linearly dependent, then it is increasingly difficult to approximate the respective entries of the vector \(\mathbf{z}\). _(iii)_ The query complexities \(s_{1}\) and \(s_{2}\) are asymptotically identical, growing linearly with \(k\ln k\), where \(k\) is the number of archetypes. They both depend on the log of the number of bidders (due to a union bound) and on the log of \(\nicefrac{{1}}{{\delta}}\), where \(\delta\) is the failure probability. The query complexity for \(p\geq 3\) is larger and is dominated by the \(k^{\nicefrac{{p}}{{2}}}\) term. Importantly, the query complexity remains independent of \(d\), the dimension of the bidders' types, which, in the worst case, could be exponential in the number of underlying items. _(iv)_ Improving the sampling complexities \(s_{1}\) and \(s_{2}\) has been a topic of intense interest in the RLA community and we refer the reader to [11, 12], which has essentially provided matching upper and lower bounds for various values of \(p\). We just note that for the well-studied \(p=2\) case, volume sampling approaches [11, 12, 12, 13] achieve essentially matching bounds, while the work of [10] removes (at least in expectation) the \(\ln k\) factor from \(s_{2}\), at the expense of significant additional protocol complexity.

From a technical perspective, we note that \(\zeta_{p}\geq\varepsilon_{\texttt{mdl},p}\), as necessitated in Theorem 2, and that our query protocols are all one-round protocols. Finally, notice that our theorem works for all \(p\geq 1\), but not for \(p=\infty\), which was the setting of [14]. In Appendix D.4, we present a (modest) improvement of the result of [14] and explain why it seems improbable that the \(p=\infty\) case can be generalized to a much broader class of design matrices. This is a strong motivating factor to explore properties of mechanism design for the recommender system setting for other \(\ell_{p}\) norms, as we do in this work.

### The Mechanism Design component

The goal of the mechanism design component is to transform a mechanism \(\hat{\mathcal{M}}\) for \(\hat{\mathcal{D}}_{z}\) into a mechanism \(\mathcal{M}\) for \(\mathcal{D}\). We first define exactly the type of access to \(\hat{\mathcal{D}}_{z}\) and \(\hat{\mathcal{M}}\) our construction requires. 
**Definition 6** (Access to \(\hat{\mathcal{M}}\)).: _By "query access to \(\hat{\mathcal{M}}\)" we mean access to an oracle which, given a valuation profile \(\mathbf{t}\), outputs the allocation and payments of \(\hat{\mathcal{M}}\) on input \(\mathbf{t}\)._

**Definition 7** (Access to \(\hat{\mathcal{D}}_{z}\)).: _By "oracle access to \(\hat{\mathcal{D}}_{z}\)" we mean oracle access to (1) a sampling algorithm \(\mathcal{S}_{i}\) for each \(i\in[n]\), where \(\mathcal{S}_{i}(\mathbf{x},\delta)\) draws a sample from the conditional distribution of \(\hat{\mathcal{D}}_{z,i}\) on the \(k\)-dimensional cube \(\times_{j\in[k]}[x_{j},x_{j}+\delta_{j})\), and (2) an oracle which, given as input a type \(\mathbf{t}_{i}\) for bidder \(i\), outputs the type in the support of \(\hat{\mathcal{D}}_{z,i}\) that is closest to \(\mathbf{t}_{i}\) in \(\ell_{p}\) distance, i.e., outputs \(argmin_{\mathbf{t}_{i}^{\prime}\in supp(\hat{\mathcal{D}}_{z,i})}\|\mathbf{t}_{i}-\mathbf{t}_{i}^{\prime}\|_{p}\)._

If the allocation is randomized, our approach works even if the query to the oracle returns a (correct) deterministic instantiation of the randomized allocation.8

Footnote 8: That is, if, for example, \(\hat{\mathcal{M}}\) allocates item \(j\) to bidder \(i\) with probability \(1/2\) and with the remaining probability item \(j\) is not allocated, our construction does not need to know this distribution/fractional allocation and works even if nature samples and returns an integral allocation for item \(j\).

In Definition 7, the first part of our oracle access (sampling from the conditional) is also necessary in [14]. The second part is new to our work, and replaces a strong requirement in [14]. In more detail, given a type \(\mathbf{t}_{i}\), Cai and Daskalakis [14] (as well as Brustle et al. [1]) need to know if \(\mathbf{t}_{i}\in supp(\widetilde{\mathcal{D}}_{z,i})\), and, if not, need access to \(argmax_{\mathbf{t}_{i}^{\prime}\in supp(\widetilde{\mathcal{D}}_{z,i})}\operatorname{\mathbb{E}}_{\mathbf{b}_{-i}\sim\widetilde{\mathcal{D}}_{z,-i}}[u_{i}(v_{i},\hat{\mathcal{M}}(\mathbf{t}_{i}^{\prime},\mathbf{b}_{-i}))]\), where \(\widetilde{\mathcal{D}}_{z,i}\) is a rounded-down version of \(\hat{\mathcal{D}}_{z,i}\). However, for arbitrary distributions and mechanisms, this task might be computationally inefficient, or simply infeasible. In our work, we need access to \(argmin_{\mathbf{t}_{i}^{\prime}\in supp(\hat{\mathcal{D}}_{z,i})}\|\mathbf{t}_{i}-\mathbf{t}_{i}^{\prime}\|_{p}\) in the "no" case. Given these definitions, our main theorem for this component is stated as follows.

**Theorem 2**.: _Let \(\mathcal{D}=\times_{i=1}^{n}\mathcal{D}_{i}\) be the bidders' type distribution and \(v_{i}:\mathbb{R}^{d}\times 2^{[m]}\to\mathbb{R}_{+}\) be an \(\mathcal{L}\)-Lipschitz valuation function for each bidder \(i\in[n]\). Also, let \(\mathbf{A}\in\mathbb{R}^{d\times k}\) be a design matrix and \(\hat{\mathcal{D}}_{z}=\times_{i=1}^{n}\hat{\mathcal{D}}_{z,i}\), where \(\hat{\mathcal{D}}_{z,i}\) is a distribution over \(\mathbb{R}^{k}\) for each \(i\in[n]\)._

_Suppose that we are given (1) query access to a mechanism \(\hat{\mathcal{M}}\) that is IR and BIC w.r.t. \(\hat{\mathcal{D}}_{z}\) and valuations \(\{v_{i}^{\mathbf{A}}\}_{i\in[n]}\), (2) oracle access to \(\hat{\mathcal{D}}_{z}\), and (3) any \((\varepsilon_{\texttt{mdl},p},\zeta_{p},p)\)-query protocol \(\mathcal{Q}\) with \(\zeta_{p}\geq\varepsilon_{\texttt{mdl},p}\). 
Then, we can construct a mechanism \(\mathcal{M}\) that is oblivious to \(\mathcal{D}\) and \(v(\cdot,\cdot)\), such that for all \(\mathcal{D}\) that satisfy \(\pi_{p}(\mathcal{D}_{i},\mathbf{A}\circ\hat{\mathcal{D}}_{z,i})\leq\varepsilon_{\texttt{mdl},p}\) for all \(i\in[n]\), the following hold: (1) \(\mathcal{M}\) only interacts with every bidder using \(\mathcal{Q}\) once, (2) \(\mathcal{M}\) is IR and \((\eta,\mu)\)-BIC w.r.t. \(\mathcal{D}\), where \(\mu=O(\sqrt{\zeta_{p}})\) and \(\eta=O(n\|\mathbf{A}\|_{\infty}\mathcal{L}\sqrt{\zeta_{p}})\), and (3) the expected revenue of \(\mathcal{M}\) is at least \(Rev(\hat{\mathcal{M}},\hat{\mathcal{D}}_{z})-O(n\eta)\)._

Note that \(\mathcal{M}\) is an indirect mechanism, so it is slightly imprecise to call it \((\eta,\mu)\)-BIC. Formally, interacting with \(\mathcal{Q}\) truthfully is an approximate Bayes-Nash equilibrium. In order to prove Theorem 2, we use a key lemma, Lemma 2, which establishes the robustness guarantees of Theorem 2, but in the space of latent types. Intuitively, let \(\mathbf{t}_{i}\) be the type of bidder \(i\), and \(\mathbf{z}_{i}\) be a random variable distributed according to \(\hat{\mathcal{D}}_{z,i}\). We know that \(\pi_{p}(\mathcal{D}_{i},\mathbf{A}\circ\hat{\mathcal{D}}_{z,i})\leq\varepsilon_{\texttt{mdl},p}\leq\zeta_{p}\). Due to Lemma 1, there exists a coupling such that with probability greater than \(1-\zeta_{p}\), \(\|\mathbf{t}_{i}-\mathbf{A}\mathbf{z}_{i}\|_{p}\leq\zeta_{p}\). Since the seller uses an \((\varepsilon_{\texttt{mdl},p},\zeta_{p},p)\)-query protocol, with probability at least \(1-\zeta_{p}\), \(\|\mathcal{Q}(\mathbf{t}_{i})-\mathbf{z}_{i}\|_{p}\leq\zeta_{p}\). Note that this implies that \(\mathbf{z}_{i}\) and \(\mathcal{Q}(\mathbf{t}_{i})\) are distributed such that their Prokhorov distance is at most \(\zeta_{p}\). At this step, Lemma 2 provides us with a mechanism \(\widetilde{\mathcal{M}}\), constructed from \(\hat{\mathcal{M}}\), that we can execute on types \(\mathcal{Q}(\mathbf{t}_{1}),\ldots,\mathcal{Q}(\mathbf{t}_{n})\), obtained by interacting with the bidders via the query protocol. With probability at least \(1-\zeta_{p}\), we have that \(\|\mathbf{t}_{i}-\mathbf{A}\mathbf{z}_{i}\|_{\infty}\leq\|\mathbf{t}_{i}-\mathbf{A}\mathbf{z}_{i}\|_{p}\leq\varepsilon_{\texttt{mdl},p}\) and thus, using the fact that the query protocol ensures \(\|\mathcal{Q}(\mathbf{t}_{i})-\mathbf{z}_{i}\|_{p}\leq\zeta_{p}\) as well, we have \(\|\mathbf{t}_{i}-\mathbf{A}\mathcal{Q}(\mathbf{t}_{i})\|_{\infty}\leq\varepsilon_{\texttt{mdl},p}+k\|\mathbf{A}\|_{\infty}\zeta_{p}\). The guarantees of \(\widetilde{\mathcal{M}}\) for the distribution over the \(\mathcal{Q}(\mathbf{t}_{i})\)s are therefore translated into guarantees (with a small error) of the overall mechanism for \(\mathcal{D}\).

The proof of Lemma 2 is quite involved and is the main focus of our analysis. Here, we sketch the key ideas behind the proof, and defer all formal arguments to Appendix C. The proof uses the following notion of a rounded distribution.

**Definition 8** (Rounded Distribution).: _Let \(\mathcal{F}\) be a distribution supported on \(\mathbb{R}^{k}_{\geq 0}\). For any \(\delta>0\) and \(\ell\in[0,\delta]^{k}\), we define the function \(r^{(\ell,\delta)}:\mathbb{R}^{k}_{\geq 0}\mapsto\mathbb{R}^{k}_{\geq 0}\) such that \(r^{(\ell,\delta)}_{i}(\mathbf{x})=\max\left\{\left\lfloor\frac{x_{i}-\ell_{i}}{\delta}\right\rfloor\cdot\delta+\ell_{i},0\right\}\) for all \(i\in[k]\). Let \(\mathbf{x}\) be a random vector sampled from \(\mathcal{F}\). 
We define \(\left\lfloor\mathcal{F}\right\rfloor_{\ell,\delta}\) as the distribution of the random variable \(r^{(\ell,\delta)}(\mathbf{x})\), and we call \(\left\lfloor\mathcal{F}\right\rfloor_{\ell,\delta}\) the rounded distribution of \(\mathcal{F}\)._

We follow an approach similar to Brustle et al. [1]. The main idea is that arguing directly about mechanisms for distributions that are close in Prokhorov distance is difficult. On the flip side, arguing about mechanisms for distributions that are close in total variation distance is much easier, since the total variation distance is a more stringent (and hence more tamable) notion of distance. The key observation is that, if two distributions are close in Prokhorov distance then, in expectation over the random parameter \(\ell\), their rounded-down versions are close in total variation distance. Our overall construction is via three reductions. First, in Lemma 3, given a mechanism for \(\hat{\mathcal{F}}_{z}\) we design a mechanism for the rounded-down version. Second, in Lemma 4, given a mechanism for the rounded-down \(\hat{\mathcal{F}}_{z}\) we design a mechanism for \(\left\lfloor\mathcal{F}_{z}\right\rfloor_{\ell,\delta}\), which maintains its guarantees if \(\pi_{p}(\mathcal{F}_{z},\hat{\mathcal{F}}_{z})\) is small. Third, in Lemma 5, given a mechanism for \(\left\lfloor\mathcal{F}_{z}\right\rfloor_{\ell,\delta}\) we design a mechanism for \(\mathcal{F}_{z}\). Fig. 1 presents a detailed overview of the overall design architecture, and how the RLA and different mechanism design components interact. Our proofs for Lemmas 3 and 5 are adaptations of the corresponding lemmas of [1], where our main task is to flesh out the exact dependence on the dimensionality of the latent space and the \(\ell_{p}\)-norm (versus the \(\ell_{1}\)-norm in [1]). The novelty of our approach comes in the construction and analysis of Lemma 4. The difficulty of transforming mechanisms for \(\left\lfloor\hat{\mathcal{F}}_{z}\right\rfloor_{\ell,\delta}\) into mechanisms for \(\left\lfloor\mathcal{F}_{z}\right\rfloor_{\ell,\delta}\) is that the two distributions might not share the same support. Thus, we need a way to map bids that are not in the support of \(\left\lfloor\hat{\mathcal{F}}_{z}\right\rfloor_{\ell,\delta}\) to bids that are. Brustle et al. [1] do this by "optimally misreporting" on behalf of the bidder, by calculating \(argmax_{\mathbf{z}\in supp(\left\lfloor\hat{\mathcal{F}}_{z,i}\right\rfloor_{\ell,\delta})}\mathbb{E}_{\mathbf{b}_{-i}\sim\left\lfloor\hat{\mathcal{F}}_{z,-i}\right\rfloor_{\ell,\delta}}[u_{i}(v_{i},\hat{\mathcal{M}}(\mathbf{z},\mathbf{b}_{-i}))]\), and then picking matching payments that make the overall mechanism IR. Our approach leverages the fact that \(\hat{\mathcal{F}}_{z,i}\) and \(\mathcal{F}_{z,i}\) are close in Prokhorov distance, and thus any point on the support of one distribution is close to a point on the support of the other, with high probability. An ideal construction would map a report \(\mathbf{w}_{i}\) to the "valid" report (i.e., a report in the support of \(\left\lfloor\hat{\mathcal{F}}_{z,i}\right\rfloor_{\ell,\delta}\)) that minimizes the \(\ell_{p}\) distance to \(\mathbf{w}_{i}\). This operation is linear in the size of the support of \(\left\lfloor\hat{\mathcal{F}}_{z,i}\right\rfloor_{\ell,\delta}\), and does not need any information on the valuation functions, nor on the actual probabilities with which elements of the distribution are sampled. 
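A minimal sketch of this nearest-valid-report operation (illustrative; it assumes the rounded support is given explicitly as a finite array, which is precisely what our oracle access does not grant, as discussed next):

```python
import numpy as np

def nearest_valid_report(w, support, p=2):
    """Map a report w to the closest point of a finite support in l_p distance.
    `support` is an (s, k) array listing the valid (rounded) latent reports.
    The scan is linear in the support size and needs no knowledge of the
    bidders' valuations or of the sampling probabilities."""
    dists = np.linalg.norm(support - w, ord=p, axis=1)
    return support[np.argmin(dists)]
```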
Unfortunately, our assumption on what "oracle access" means does not allow us to do this operation (finding the closest point w.r.t. \(\ell_{p}\)) on \(\left\lfloor\hat{\mathcal{F}}_{z,i}\right\rfloor_{\ell,\delta}\), but only on \(\hat{\mathcal{F}}_{z,i}\); we prove that, by incurring a small loss, our assumption suffices.

### Putting everything together

Combining Theorems 1 and 2, we can give mechanisms for concrete settings. Formally, we have the following theorem.

**Theorem 3**.: _Under the same setting as in Theorem 2, for bidders with arbitrary valuation functions, we can construct a mechanism \(\mathcal{M}\) using only query access to the mechanism \(\hat{\mathcal{M}}\) (Definition 6) and oracle access to the distribution \(\hat{\mathcal{D}}_{z}\) (Definition 7), and oblivious to the true type distribution \(\mathcal{D}\). We consider queries (to each bidder \(i\)) of the form "What is your value for the subset of items \(S\)?"_

_Mechanism \(\mathcal{M}\) is IR and \((\eta,\mu)\)-BIC w.r.t. \(\mathcal{D}\), where \(\mu=O(\sqrt{\zeta_{p}})\) and \(\eta=O(n\|\mathbf{A}\|_{\infty}\mathcal{L}\sqrt{\zeta_{p}})\), and the expected revenue of \(\mathcal{M}\) is at least \(\text{Rev}(\hat{\mathcal{M}},\hat{\mathcal{D}}_{z})-O(n\eta)\). Additionally, with probability at least \(1-\delta\),_ \[\zeta_{p}=c_{p}\cdot(\sigma_{\min,p}(\mathbf{A}))^{-1}\cdot\varepsilon_{\texttt{mdl},p}\] _for a small constant \(c_{p}\) that depends on the parameter \(p\) (see footnote 7). The number of queries is \(O\left(k\cdot\ln k\cdot\ln\left(\nicefrac{{n}}{{\delta}}\right)\right)\) and \(O\left(k^{\nicefrac{{p}}{{2}}}\cdot\ln^{3}k\cdot\ln\left(\nicefrac{{n}}{{\delta}}\right)\right)\) for \(p=1,2\) and \(p\geq 3\), respectively._

The proof of Theorem 3 follows from Theorem 2 and Theorem 1, and is deferred to Appendix B. As we've already discussed, the main mechanism of Cai and Daskalakis [13] requires bidders to have constrained-additive valuations9, as well as \(\mathbf{A}\) to satisfy a number of restrictions. Here, we completely remove both conditions. On the flip side, [13] ask bidders weaker queries, of the form "are you willing to pay price \(\tau\) for item \(j\)?" Using such queries, one can binary search over \(\tau\), and drive down the query noise (see Definition 4). For \(\ell_{\infty}\), the extra cost of such an operation would be \(\ln\left(\|\mathbf{A}\|_{\infty}/\varepsilon\right)\), where \(\varepsilon\) is the desired accuracy. However, for other \(p\)-norms, for the same target accuracy, this operation requires an extra factor of \(\Theta(\log(d^{1/p}))\) queries, giving a dependence on \(d\).

Figure 1: Agents interact with the query protocol \(\mathcal{Q}\), which learns their latent types \(\mathcal{Q}(\mathbf{t}_{i})\). The mechanism design component (which is oblivious to the distribution \(\mathcal{D}\) of the true agents' types) then uses these to produce the final allocation and payments, utilizing only query access to \(\hat{\mathcal{M}}\) and sampling access to \(\hat{\mathcal{D}}_{z}\), such that the overall framework is approximately \((\eta,\mu)\)-BIC w.r.t. \(\mathcal{D}\).

## 4 Conclusions and Future Work

In this paper, we study mechanism design for prior distributions close to a topic model, inspired by the recommender systems literature. 
We formulate connections between mechanism design and Randomized Linear Algebra for active learning in regression problems, import state-of-the-art results from Randomized Linear Algebra to mechanism design, and alleviate or relax restrictive assumptions of prior work. Developing a deeper understanding of such connections is an important direction for future research. For example, one could study this and other topic models in the context of mechanism design for correlated bidders, two-sided markets, information structure design, etc. Additionally, another interesting open problem would be to develop a framework for proving lower bounds for mechanism design (e.g., lower bounds on the query complexity of single-round or multi-round protocols used to communicate with the bidders) using known limitations of algorithms in active learning, and vice versa.

### Acknowledgements

Christos Boutsikas, Petros Drineas and Marios Mertzanidis are supported in part by a DOE award SC0022085, and NSF awards CCF-1814041, CCF-2209509, and DMS-2152687. Alexandros Psomas and Paritosh Verma are supported in part by an NSF CAREER award CCF-2144208, a Google Research Scholar Award, and a Google AI for Social Good award.
2304.12458
Model-Free Learning and Optimal Policy Design in Multi-Agent MDPs Under Probabilistic Agent Dropout
This work studies a multi-agent Markov decision process (MDP) that can undergo agent dropout and the computation of policies for the post-dropout system based on control and sampling of the pre-dropout system. The central planner's objective is to find an optimal policy that maximizes the value of the expected system given a priori knowledge of the agents' dropout probabilities. For MDPs with a certain transition independence and reward separability structure, we assume that removing agents from the system forms a new MDP comprised of the remaining agents with new state and action spaces, transition dynamics that marginalize the removed agents, and rewards that are independent of the removed agents. We first show that under these assumptions, the value of the expected post-dropout system can be represented by a single MDP; this "robust MDP" eliminates the need to evaluate all $2^N$ realizations of the system, where N denotes the number of agents. More significantly, in a model-free context, it is shown that the robust MDP value can be estimated with samples generated by the pre-dropout system, meaning that robust policies can be found before dropout occurs. This fact is used to propose a policy importance sampling (IS) routine that performs policy evaluation for dropout scenarios while controlling the existing system with good pre-dropout policies. The policy IS routine produces value estimates for both the robust MDP and specific post-dropout system realizations and is justified with exponential confidence bounds. Finally, the utility of this approach is verified in simulation, showing how structural properties of agent dropout can help a controller find good post-dropout policies before dropout occurs.
Carmel Fiscko, Soummya Kar, Bruno Sinopoli
2023-04-24T21:29:41Z
http://arxiv.org/abs/2304.12458v2
# Model-Free Learning and Optimal Policy Design in Multi-Agent MDPs Under Probabilistic Agent Dropout

###### Abstract

This work studies a multi-agent Markov decision process (MDP) that can undergo agent dropout and the computation of policies for the post-dropout system based on control and sampling of the pre-dropout system. The controller's objective is to find an optimal policy that maximizes the value of the expected system given a priori knowledge of the agents' dropout probabilities. Finding an optimal policy for any specific dropout realization is a special case of this problem. For MDPs with a certain transition independence and reward separability structure, we assume that removing agents from the system forms a new MDP comprised of the remaining agents with new state and action spaces, transition dynamics that marginalize the removed agents, and rewards that are independent of the removed agents. We first show that under these assumptions, the value of the expected post-dropout system can be represented by a single MDP; this "robust MDP" eliminates the need to evaluate all \(2^{N}\) realizations of the system, where \(N\) denotes the number of agents. More significantly, in a model-free context, it is shown that the robust MDP value can be estimated with samples generated by the pre-dropout system, meaning that robust policies can be found before dropout occurs. This fact is used to propose a policy importance sampling (IS) routine that performs policy evaluation for dropout scenarios while controlling the existing system with good pre-dropout policies. The policy IS routine produces value estimates for both the robust MDP and specific post-dropout system realizations and is justified with exponential confidence bounds. Finally, the utility of this approach is verified in simulation, showing how structural properties of agent dropout can help a controller find good post-dropout policies before dropout occurs.

## 1 Introduction

Research in the modeling and control of multi-agent systems (MAS) attempts to describe how the decisions of individuals translate into group behavior. For example, consider social media interactions, financial markets with liquidity providers, and robot swarms collaborating on a common goal [1]. Regulators who aim to achieve objectives on such systems must therefore understand these interactions when designing satisfactory control policies. In this work we consider hierarchical control of MAS behavior where the agents' decision processes are affected by a central planner (CP). The agents select actions in pursuit of their local objectives, but their decisions may be altered by the CP's exerted controls. For example, the CP might control a bandwidth budget, or they might select different pricing schemes that affect the agents. If the objectives of the CP are aligned with those of the agents, then the problem reduces to a centralized solution approach for the local agent policies. Otherwise, the agents will act to best achieve their local objectives given the constraints imposed. The CP selects actions with the purpose of corralling the agents' choices and thus controlling the resulting stochastic process to a desired outcome. To achieve this goal, the CP must consider their own control capabilities, the actions of the agents, and the limitations of the problem in terms of visibility, data, and computation ability. 
A basic Markov decision process (MDP) model may be constructed where the state space describes the agents' behavior, the action space represents the CP's controls, and the reward function encodes the CP's control objectives. The state-action to state transitions encode the agents' decision behavior, which may be produced by another process. For example, the field of multi-agent RL (MARL) describes how agents can learn individually optimal policies [2][3]. Given the MDP model, the CP can solve for a policy to achieve their control objectives [4]. Under certain independence conditions, the MDP model can be expressed as a factored [5] or transition-independent MDP [6], which reduces the scale of the problem through inherent structural properties [7]. An issue that often arises is that many theoretical guarantees on policy performance require the MDP model to be stationary, which is at odds with controlling real-world systems. This assumption becomes an issue in practice as a control policy that had been optimal may become arbitrarily poor upon some disturbance to the system. One simple solution is to learn system parameters such as the transition matrix or the Q values and gradually update them over time, but this option can be slow to respond to changes and lacks strong theoretical guarantees. Many recent investigations have approached this problem by assuming the MDP is stationary over discrete time units, identifying the system for each block of time, and choosing a policy based on the temporary model [8][9][10].

### Agent Dropout

In this paper, we focus on a specific type of non-stationarity: removal of agents from the MAS. Agent dropout occurs when, after some time has passed under normal operation, an agent or a group of agents leaves the system. This is a change to the fundamental structure of the MAS, which in turn changes all aspects of the model. The CP must know what control policy to enact if this disturbance happens. For example, consider a centrally controlled swarm of drones maintaining a formation where each drone is equidistant from one another. If a drone were to lose power or connectivity, the CP would need to change the exerted control policy to maintain the same objective for the new number of agents. Versions of this problem in terms of the graph links have been studied in [11] and [12]. If dropout occurs and the CP uses Q-learning or a non-stationary method like learning a model for a fixed block of time, it is likely that finding a tolerable post-dropout policy will take too long. In some literature, agent dropout is viewed as link failure within a communication graph. For example, in distributed settings, update protocols have been developed that are robust to such link failures and stochastic networks [13], [14]. These papers necessitate the assumption that the graph is fully connected on average, thus ensuring convergence of the agents' value estimates. This work focuses on a related but different formulation: we assume that agents have a long-term notion of joining or leaving the group. In this case, the assumption of average full connectivity over time will be violated. We also do not assume that the CP's control is necessarily related to the link structure of the graph. We therefore consider adding or removing an agent as the initialization of a _new_ MDP with a redefined state space, action space, reward function, and transition dynamics. 
In this case of structural change, the objective is to solve for an optimal policy for the _post-dropout_ system based on samples from the _pre-dropout_ system. A related goal is to find a control policy that produces good value for both the pre-dropout and the post-dropout system. This is relevant in applications where the dropout may not be detected immediately, so a policy that is robust to both scenarios is desirable. In this paper, we consider a probabilistic form of the dropout problem, where the CP knows each agent's probability of dropout _a priori_. To give the problem tractability, we assume that the post-dropout MDP retains structure reflective of its pre-dropout counterpart. Specifically, we assume that the new transition kernel marginalizes the removed agents from the original transitions. We assume that the reward associated to a removed agent is independent of the system state and action; for example, this can also correspond to marginalization, or it can be zeroed out. Under these assumptions, the model-based version of the stated objectives is straightforward to solve. Thus, the focus in this paper is to develop a model-free solution technique. Most standard RL methods will require the two basic steps of policy evaluation and policy search. For example, one method for safe policy improvement is to evaluate policies with a high probability confidence bound and select new policies associated with high lower bounds [15]. Attempting to perform policy evaluation on the post-dropout system, however, is not straightforward, as we can only sample from the pre-dropout system; the post-dropout system does not yet exist. In addition, the policy search step is also difficult. The pre-dropout system must be controlled with policies that yield good performance, but a good post-dropout policy may be a bad pre-dropout policy. We studied a form of this problem in [16], which proposed a policy evaluation method based on policy importance sampling (IS) [17] for deterministic dropout of a single agent. Policy IS is desirable for the dropout problem as it would successfully avoid the issue of sampling from the pre-dropout system with a bad policy. Policy IS cannot be applied directly to this problem, however, as the pre- and post-dropout systems have different models, meaning the IS estimator cannot be evaluated. The proposed solution enabled policy IS by leveraging structural connections between the pre- and post-dropout MDPs. We demonstrated that the target policy can be represented in the dimension of the pre-dropout policy; then, policy IS can be applied as usual with samples from the pre-dropout MDP to find an estimated value, which then can be marginalized to find the value of the post-dropout MDP.

### Main Contributions

In this paper, we expand on the initial results of [16] to consider probabilistic dropout of any combination of agents. The CP's goal is to solve the probabilistic dropout problem: to find a policy that maximizes the value of the expected system given _a priori_ knowledge of the agents' dropout probabilities. Note that finding optimal policies for any specific dropout realization is a special case of this problem. A naive approach to the probabilistic dropout problem would estimate a policy's value for each realization of the post-dropout system and then take the expectation. If there were \(N\) agents, then there would be \(2^{N}\) system realizations with values needing estimation. 
As each estimate needs many trajectories to produce a usable result, this approach quickly becomes intractable. In this paper, we show that an alternate approach to solving the probabilistic dropout problem is to define an equivalent single MDP, which we call the _robust MDP_. We show that a similar policy IS technique can be defined for the probabilistic multi-agent case on the robust MDP. Thus, the robust MDP may be used to evaluate dropout realizations given samples generated by the pre-dropout MDP. This avoids the complexity issues of the naive approach, reducing the scale of the problem. In Section 2, the multi-agent MDP model is defined. The probabilistic agent dropout problem is defined in Section 3. The robust MDP formulation is presented in Section 4, and the model-free policy IS method and its performance are discussed in Section 5. Finally, simulations are in Section 6.

## 2 Preliminaries

### Multi-Agent MDPs

Consider a multi-agent system modeled as a _Markov Decision Process_ (MDP) \(\mathcal{M}=(\mathcal{X},\mathcal{A},r,T,\gamma)\) consisting of the state space of the system \(\mathcal{X}\), the action space \(\mathcal{A}\), a reward function \(r:\mathcal{X}\times\mathcal{A}\rightarrow\mathbb{R}\), a probabilistic state-action to state transition function \(T:\mathcal{X}\times\mathcal{A}\rightarrow\Delta(\mathcal{X})\), and a discount parameter \(\gamma\in(0,1)\). Next, consider a finite set of agents \(\mathcal{N}=\{1,\dots,N\}\). Each agent selects some substate \(x_{n}\in\mathcal{X}_{n}\), where \(x_{n}\) could model some environmental state and/or personal action. We assume that the sizes \(|\mathcal{X}_{n}|\) are finite and identical across \(n\). A state of the MDP is thus the behavior across all the agents, \(x=\{x_{1},\dots,x_{N}\}\), and the state space is \(\mathcal{X}=\bigotimes_{n\in\mathcal{N}}\mathcal{X}_{n}\). In general, a state written with a set subscript such as \(x_{\mathcal{B}}\) refers to the substates realized in state \(x\) by the agents in \(\mathcal{B}\), i.e., \(x_{\mathcal{B}}=\{x_{b}|b\in\mathcal{B}\}\), and the notation \(-\tilde{n}\) will refer to the set \(\{n|n\in\mathcal{N},n\neq\tilde{n}\}\). In this setup, we consider a CP that can broadcast a unique signal \(\alpha_{n}\in\mathcal{A}_{n}\) to each agent, where \(\mathcal{A}_{n}\) is a finite set of options. The overall control space is thus \(\mathcal{A}=\bigotimes_{n\in\mathcal{N}}\mathcal{A}_{n}\), where one control is \(\boldsymbol{\alpha}=\{\alpha_{1},\dots,\alpha_{N}\}\). The next element of the MDP is the _state transition function_, which defines the state-action to state transition densities in the form \(p(x^{\prime}|x,\boldsymbol{\alpha})\). By design, each agent only sees the signal assigned to it by the CP. Furthermore, let the agents be connected by a communication structure encoded by a directed graph \((\mathcal{N},G)\). Then, with \(pa(n)\) denoting the parent set of \(n\) as given by \(G\), we assert the following assumption.

**Assumption 1**.: _The agents' decision processes are Markovian and time-homogeneous:_ \[p(x_{n}^{t+1}|x^{t},\dots,x^{0},\alpha_{n}^{t},\dots,\alpha_{n}^{0})=p(x_{n}^{t+1}|x_{pa(n)}^{t},\alpha_{n}^{t}),\;\forall\;\alpha_{n}\in\mathcal{A}_{n},\;x_{n}\in\mathcal{X}_{n},\;n\in\mathcal{N},\;x\in\mathcal{X},\;t\geq 0. 
\tag{1}\] _Furthermore, each agent's decision process is independent of the CP actions assigned to the other agents:_ \[p(x_{n}^{\prime}|x_{pa(n)},\boldsymbol{\alpha})=p(x_{n}^{\prime}|x_{pa(n)},\alpha_{n}),\;\forall\;\alpha_{n}\in\mathcal{A}_{n},\;x_{n}\in\mathcal{X}_{n},\;n\in\mathcal{N},\;x\in\mathcal{X},\;t\geq 0. \tag{2}\]

Equation (2) describes MAS where each agent makes its decision independently after observing the current state and control. For example, this includes general non-cooperative games [18]. Furthermore, the time homogeneity property means that the agents have learned their decision processes _a priori_, such as through MARL, game theory, or another paradigm. The CP is agnostic to the learning processes used by the agents as long as they satisfy the Markov and time homogeneity assumptions. As a result of Assumption 1, the overall state-action to state transition probabilities satisfy the following factored structure: \[p(x^{\prime}|x,\mathbf{\alpha})=\prod_{n\in\mathcal{N}}p(x^{\prime}_{n}|x_{pa(n)},\alpha_{n}). \tag{3}\]

MDPs whose transitions satisfy (3) are known as _transition independent MDPs_ (TI-MDPs). For ease of notation, the explicit dependence on \(pa(n)\) may be dropped, which implicitly defines the most general model of a fully-connected graph. Next, the following assumption will be made on the reward function of the MDP.

**Assumption 2**.: _The reward function \(r(x,\mathbf{\alpha})\) satisfies the following separable structure:_ \[r(x,\mathbf{\alpha})\triangleq\sum_{n\in\mathcal{N}}r_{n}(x_{n},\alpha_{n}), \tag{4}\] _where each function \(r_{n}\) is non-negative, deterministic, and bounded for all \(x\in\mathcal{X}\), \(\mathbf{\alpha}\in\mathcal{A}\)._

Assumption 2 means that the CP's objective can be encoded per individual agent. For example, this reward can be the proportion of agents in desired goal states. TI-MDPs with separable reward functions defined over the same scope are known as _factored MDPs_ [19, 5]. Throughout the rest of the paper, it will be assumed that Assumptions 1 and 2 hold. Finally, the CP may solve the MDP for some policy \(\pi:\mathcal{X}\rightarrow\mathcal{A}\). In this work we consider the standard discounted _value function_. The finite horizon value function is, \[V_{H}^{\pi}(x)=\mathbb{E}\left[\sum_{t=0}^{H}\gamma^{t}r(x_{t},\mathbf{\alpha}_{t})|x_{0}=x,\mathbf{\alpha}_{t}\sim\pi(x_{t}),x_{t+1}|x_{t},\mathbf{\alpha}_{t}\sim T\right]. \tag{5}\] The infinite horizon value function can similarly be calculated as \(V^{\pi}(x)=\lim_{H\rightarrow\infty}V_{H}^{\pi}(x).\) For brevity, the notation \(V^{\pi}\in\mathbb{R}^{|\mathcal{X}|}\) will refer to the vector of values \(V^{\pi}(x)\ \forall\ x\in\mathcal{X}\). An optimal policy \(\pi^{*}\) is one that maximizes the value function, \(\pi^{*}\in\arg\max_{\pi}V^{\pi}(x)\). The optimal value \(V^{*}(x)\) is known to be unique, and for finite stationary MDPs there exists an optimal stationary deterministic policy [20]. Under a fixed policy, the MDP will evolve as a Markov chain. The _stationary distribution_ \(\mu\) of a Markov chain under policy \(\pi\) is the left eigenvector corresponding to eigenvalue 1 of the transition matrix. Commonly used in dynamic programming methods to solve an MDP is the Bellman operator. 
**Definition 1**.: _The Bellman operator applied to the value function \(V(x)\) can be defined as,_ \[\mathbf{T}^{\pi}V(x) =\mathbb{E}_{\mathbf{\alpha}}\left[r(x,\mathbf{\alpha})+\gamma V(x^{\prime})\Big{|}\mathbf{\alpha}\sim\pi,\ x^{\prime}|x,\mathbf{\alpha}\sim T,\ x_{0}=x\right], \tag{6}\] \[\mathbf{T}V(x) =\max_{\mathbf{\alpha}\in\mathcal{A}}\mathbb{E}\left[r(x,\mathbf{\alpha})+\gamma V(x^{\prime})\Big{|}\,x^{\prime}|x,\mathbf{\alpha}\sim T,\ x_{0}=x\right]. \tag{7}\]

For factored MDPs, the Bellman operator can further be reduced by observing that, \[V_{k+1}(x)=\mathbf{T}V_{k}(x)=\sum_{n=1}^{N}\mathbf{T}_{n}V_{k}(x)=\sum_{n=1}^{N}V_{k+1}(x_{n}), \tag{8}\] where the local agent values are defined iteratively as \(V_{0}(x_{n})=0\) and, \[\mathbf{T}_{n}V_{k}(x)=\max_{\alpha_{n}\in\mathcal{A}_{n}}\mathbb{E}\left[r_{n}(x_{n},\alpha_{n})+\gamma V_{k}(x^{\prime}_{n})\Big{|}x^{\prime}_{n}|x,\alpha_{n}\sim T_{n}\right], \tag{9}\] where \(T_{n}=P(x^{\prime}_{n}|x,\alpha_{n})\) is one of the factors of the factored transition matrix [7].

## 3 Problem Statement

Consider a multi-agent system modeled as the MDP \(\mathcal{M}\). While the CP may have found a policy \(\pi\) that produces high value, this policy is only guaranteed to perform well for as long as the model \(\mathcal{M}\) is time-invariant, i.e., stationary. In this section, the objective is to formalize the model of the post-dropout system and discuss its structural relationship to the pre-dropout system.

### Probabilistic Agent Dropout

Let the probability of dropout for agent \(n\) be \(1-\beta_{n}\), which is sampled independently of the other agents. Let the vector of probabilities be denoted by \(B=[\beta_{1},\ldots,\beta_{N}]\). Let the vector \(W\) denote a realization of the system via binary flags, where \(w_{n}=1\) means the agent is in the system and \(w_{n}=0\) means the agent has left the system. Define the shorthand \(\mathcal{N}_{W}=\{n|w_{n}=1\}\) to be the set of agents active within the system, and similarly the shorthand \(-\mathcal{N}_{W}=\{n|w_{n}=0\}\) to be the set of dropped agents. In addition, define the shorthand notation \(\{x\}_{-\mathcal{N}_{W}}=\{x^{\prime}|x^{\prime}_{-\mathcal{N}_{W}}=x_{-\mathcal{N}_{W}}\}\) to be the set of states whose substates for agents \(-\mathcal{N}_{W}\) match those in the given state \(x\). The size of the set \(\{x\}_{-\mathcal{N}_{W}}\) will be \(|\mathcal{X}_{n}|^{|\mathcal{N}_{W}|}\). Given \(\mu(x)\), the distribution \(\mu(x_{-\mathcal{N}_{W}})\) is the summation, \[\mu(x_{-\mathcal{N}_{W}})=\sum_{\{x\}_{-\mathcal{N}_{W}}}\mu(x). 
\tag{10}\]

**Definition 2**.: _Define a realization of the multi-agent MDP subject to dropout: \((\mathcal{M}|W)=(\mathcal{N},\mathcal{X},\mathcal{A},T,r,\gamma|W)\), where:_

* \(\{\mathcal{X}|W\}\triangleq\bigotimes_{n\in\mathcal{N}_{W}}\mathcal{X}_{n}\)_, where one state given the dropout configuration is referred to as_ \(\bar{x}\)_._
* \(\{\mathcal{A}|W\}\triangleq\bigotimes_{n\in\mathcal{N}_{W}}\mathcal{A}_{n}\)_, where one action given the dropout configuration is referred to as_ \(\bar{\boldsymbol{\alpha}}\)_._
* \(T(x,\boldsymbol{\alpha},x^{\prime}|W):\{\mathcal{X}|W\}\times\{\mathcal{A}|W\}\rightarrow\Delta\{\mathcal{X}|W\}\)_._
* \(r(x,\boldsymbol{\alpha}|W)\triangleq\sum_{n\in\mathcal{N}_{W}}r_{n}(x_{n},\alpha_{n}|w_{n}=1)\)_, as the rewards to dropped agents are defined to be zero:_ \(r_{n}(x_{n},\alpha_{n}|w_{n}=0)\triangleq 0\)_._

_The policy and value of the realized MDP will be referred to as:_

* \(\bar{\pi}(\bar{\boldsymbol{\alpha}}|\bar{x}):\{\mathcal{X}|W\}\rightarrow\Delta\{\mathcal{A}|W\}\)_, where the policy conditioned on the dropout configuration is_ \(\bar{\pi}=\prod_{n\in\mathcal{N}_{W}}P(\boldsymbol{\alpha}_{n}|\bar{x})\)_._
* \(\bar{V}^{\bar{\pi}}(\bar{x})\) _with corresponding finite time value_ \(\bar{V}_{H}^{\bar{\pi}}(\bar{x})\)_._

**Definition 3**.: _The pre-dropout system is equal to \((\mathcal{M}|W=\boldsymbol{1})\). This is equivalent to the original MDP \(\mathcal{M}\)._

**Assumption 3**.: _The pre-dropout system is an ergodic Markov chain under any fixed policy._

**Assumption 4**.: _The state-action-state transition probabilities of system \((\mathcal{M}|W)\) are equal to the marginalization of agents \(\{n|w_{n}=0\}\) from the transition probabilities of \(\mathcal{M}\) under \(\pi\):_ \[P(x^{\prime}_{n}|\bar{x},\bar{\alpha}_{n})=\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}_{\alpha_{-\mathcal{N}_{W}}}[P(x^{\prime}_{n}|x,\boldsymbol{\alpha})]. \tag{11}\]

The overall transitions may be related as, \[P(\bar{x}^{\prime}|\bar{x},\bar{\boldsymbol{\alpha}}) =P(\bar{x}^{\prime}|x_{\mathcal{N}_{W}},\alpha_{\mathcal{N}_{W}}),\] \[=\mathbb{E}_{x_{-\mathcal{N}_{W}},\alpha_{-\mathcal{N}_{W}}}P(\bar{x}^{\prime}|x_{\mathcal{N}_{W}},x_{-\mathcal{N}_{W}},\alpha_{\mathcal{N}_{W}},\alpha_{-\mathcal{N}_{W}}),\] \[=\mathbb{E}_{x_{-\mathcal{N}_{W}},\alpha_{-\mathcal{N}_{W}}}P(\bar{x}^{\prime}|x,\boldsymbol{\alpha}). \tag{12}\]

The main objective is thus to solve for \(\bar{\pi}^{*}\in\arg\max_{\bar{\pi}}\bar{V}^{\bar{\pi}}(\bar{x})\) for any \(W\). A secondary objective is to solve for a robust policy that maximizes the value over the expected system realization, \(\pi^{*}_{R}\in\arg\max_{\pi}\mathbb{E}_{W}[V^{\pi}(x|W)]\). This is the probabilistic dropout problem. It will become clear that these goals are related. If the model \(T\) of the original system is known, then the optimal post-dropout policy may easily be computed. We thus focus our attention on the case where the model \(T\) is unknown and a policy must be learned from experience. All trajectories are sampled from the _original system_, i.e., the multi-agent MDP before any dropout has occurred. This has two critical implications. First, we are unable to sample from any realization of the post-dropout MDP, as these systems do not yet exist. Second, we cannot exert a "bad" policy on the existing system; we always want to control the existing system with a policy that yields an acceptable amount of value. These observations are formalized in the following assumption. 
**Assumption 5**.: _The CP can generate trajectories from the pre-dropout system \(\mathcal{M}\) and cannot sample from any post-dropout realization of the system. The generating transition function \(P(x^{\prime}|x,\boldsymbol{\alpha})\) is unknown._

### Policy Importance Sampling

To control the existing system with a good policy while simultaneously enabling exploration of the policy space, we are interested in a policy importance sampling (IS) method. Given a dataset generated by a behavioral policy \(\pi\), the goal of policy IS is to estimate the sample return had the trajectories instead been produced by a target policy \(\phi\). In this way, the system can always be controlled with a behavioral policy known to produce good value while facilitating evaluation of alternate policies. In this section, we briefly recap standard policy IS and introduce prior work on adapting it for the dropout evaluation scenario. The sample return of a trajectory with horizon length \(H\) beginning at \(x_{0}=x\) is defined as, \[G_{H}(x)\triangleq\sum_{t=0}^{H-1}\gamma^{t}r(x_{t},\mathbf{\alpha}_{t}). \tag{13}\] Given a sample trajectory \(\tau=(x_{0},\mathbf{\alpha}_{0},r_{0},\dots,x_{H-1},\mathbf{\alpha}_{H-1},r_{H-1})\) generated by behavioral policy \(\pi\), standard policy IS estimates the return had \(\tau\) instead been generated under some target policy \(\phi\). For tractability, the following assumption on the policies must hold.

**Assumption 6**.: \(\phi\) _is fully supported on \(\pi\), i.e., for all \(x\) and \(\mathbf{\alpha}\) such that \(\phi(\mathbf{\alpha}|x)>0\), it holds that \(\pi(\mathbf{\alpha}|x)>0\)._

With \(p\) as the joint distribution of \(\tau\) under \(\phi\) and \(q\) the joint distribution of \(\tau\) under \(\pi\), the estimated value is, \[V_{H}^{\phi}(x)=\mathbb{E}_{\tau\sim q}\left[\frac{p(\tau)}{q(\tau)}\sum_{t=0}^{H-1}\gamma^{t}r_{t}(x_{t},\mathbf{\alpha}_{t})\right]=\mathbb{E}_{\tau\sim q}\left[\frac{p(\tau)}{q(\tau)}G_{H}(x)\right]. \tag{14}\] In comparing two policies on the same MDP, the IS ratio is, \[\frac{p(\tau)}{q(\tau)}=\frac{d(x_{0})\phi(\mathbf{\alpha}_{1}|x_{1})P(x_{2}|x_{1},\mathbf{\alpha}_{1})\dots\phi(\mathbf{\alpha}_{H}|x_{H})}{d(x_{0})\pi(\mathbf{\alpha}_{1}|x_{1})P(x_{2}|x_{1},\mathbf{\alpha}_{1})\dots\pi(\mathbf{\alpha}_{H}|x_{H})}=\frac{\phi(\mathbf{\alpha}_{1}|x_{1})\dots\phi(\mathbf{\alpha}_{H}|x_{H})}{\pi(\mathbf{\alpha}_{1}|x_{1})\dots\pi(\mathbf{\alpha}_{H}|x_{H})}=\prod_{t=1}^{H}\frac{\phi(\mathbf{\alpha}_{t}|x_{t})}{\pi(\mathbf{\alpha}_{t}|x_{t})},\] which only depends on the chosen policies. The return estimate (14) can thus be evaluated purely from sampled and known information. In the dropout scenario, however, the objective is to evaluate policies for the _post-dropout_ system. Evaluating the probability ratio for a dropout realization \((\mathcal{M}|W)\) yields, \[\frac{p(\tau)}{q(\tau)}=\frac{d(\bar{x}_{0})\phi(\bar{\mathbf{\alpha}}_{1}|\bar{x}_{1})P(\bar{x}_{2}|\bar{x}_{1},\bar{\mathbf{\alpha}}_{1})\dots\phi(\bar{\mathbf{\alpha}}_{H}|\bar{x}_{H})}{d(x_{0})\pi(\mathbf{\alpha}_{1}|x_{1})P(x_{2}|x_{1},\mathbf{\alpha}_{1})\dots\pi(\mathbf{\alpha}_{H}|x_{H})}, \tag{15}\] which will not cancel to a ratio of the policies. This problem arises because the samples were generated from the pre-dropout MDP, which has a different model from the target post-dropout system. As (15) cannot be evaluated, policy IS for a specific dropout realization may not be used by directly applying the existing method. 
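For reference, here is a minimal sketch of the standard same-MDP estimator (14), with tabular policies stored as nested dictionaries (the names and data layout are illustrative assumptions). As argued above, this cannot be applied directly to a dropout realization, since the ratio in (15) does not reduce to a product of policy ratios:

```python
import numpy as np

def is_value_estimate(trajectories, pi, phi, gamma):
    """Ordinary importance-sampling estimate of V^phi per eq. (14).
    Each trajectory is a list of (x, a, r) tuples generated under the
    behavioral policy pi; pi[x][a] and phi[x][a] give action probabilities.
    Valid only when both policies act on the same MDP, so the transition
    terms cancel in the likelihood ratio."""
    estimates = []
    for tau in trajectories:
        ratio, ret = 1.0, 0.0
        for t, (x, a, r) in enumerate(tau):
            ratio *= phi[x][a] / pi[x][a]  # requires pi[x][a] > 0 (Assumption 6)
            ret += gamma**t * r
        estimates.append(ratio * ret)
    return float(np.mean(estimates))
```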
In [16], we investigated adapting policy IS to the dropout scenario under the case of deterministic dropout of one agent. It is straightforward to expand this approach to consider multiple dropped agents and perform policy evaluation for one \((\mathcal{M}|W)\). One strategy to find an acceptable post-dropout policy could thus be to implement a policy search routine that leverages this policy IS method for the policy evaluation step. While this proposed method is valid, it scales poorly in terms of the data required to find acceptable policies for all possible combinations of dropped agents. For a set of \(N\) agents, there are \(2^{N}\) possible realizations of \(W\); therefore, to fully solve the post-dropout system, the solution algorithm must be run \(2^{N}\) times. In terms of both computational and data complexity, this proposed approach becomes untenable. In the following section, we will demonstrate an alternate method for solving the probabilistic dropout problem based on solving a single MDP.

## 4 Robust MDP

In this section, we develop an analytical understanding of node dropout in the multi-agent MDP. In particular, we demonstrate that the probabilistic dropout problem can be reduced to a single MDP, dubbed the _robust MDP_, which can be used to find a control policy robust to any dropout realization. The next definition establishes the robust MDP, which is the expected system over all the system realizations.

**Definition 4**.: _Define the **robust multi-agent MDP**: \(\mathcal{M}^{R}=(\mathcal{N},B,\mathcal{X},\mathcal{A},T,r^{R},\gamma)\), where the reward is defined as:_

\[r^{R}(x,\boldsymbol{\alpha}) =\sum_{n\in\mathcal{N}}r_{n}^{R}(x_{n},\alpha_{n}) \tag{16}\]
\[r_{n}^{R}(x_{n},\alpha_{n}) \triangleq\mathbb{E}_{w_{n}}[r_{n}(x_{n},\alpha_{n}|w_{n})]=\beta_{n}r_{n}(x_{n},\alpha_{n}|w_{n}=1) \tag{17}\]

Let \(J\) be the value function of this system with associated policy \(\pi_{J}\). The function \(J\) will satisfy,

\[J^{\pi}(x)=\mathbb{E}_{\boldsymbol{\alpha}}\left[\mathbb{E}_{W}[r(x,\boldsymbol{\alpha}|W)]+\gamma\mathbb{E}_{x^{\prime}}[J^{\pi}(x^{\prime})|x,\boldsymbol{\alpha}]\right]. \tag{18}\]

As \(B\) is known _a priori_, the expected rewards over dropout realizations can easily be evaluated. This fact, combined with the assumption that trajectories from \(T\) can be generated, means that \(J\) can be estimated in a model-free setting.

### Value Theorems

In this section, an analysis of the robust MDP will be performed to provide structure for the probabilistic dropout problem. The first theorem relates the value of a realization of the post-dropout MDP to samples generated by the pre-dropout system.

**Theorem 1**.: _Value of a Realized Post-Dropout System: Consider a realization of the system \(W\) and some policy \(\bar{\pi}\). Let \(\pi_{U}\) denote a uniform policy. Define the policy,_

\[\pi(\boldsymbol{\alpha}|x)=\bar{\pi}(\bar{\boldsymbol{\alpha}}|\bar{x})\prod_{n\in-\mathcal{N}_{W}}\pi_{U}(\boldsymbol{\alpha}_{n}|x_{n}).
\tag{19}\]

_Then the finite horizon value of system \((\mathcal{M}|W)\) is equal to,_

\[\overline{V}_{H}^{\bar{\pi}}(\bar{x})=\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}\left[\sum_{t=0}^{H-1}\gamma^{t}r(x_{t},\boldsymbol{\alpha}_{t}|W)\big{|}x_{0}=x,\boldsymbol{\alpha}_{t}|x_{t}\sim\pi,x_{t+1}|x_{t},\boldsymbol{\alpha}_{t}\sim T\right], \tag{20}\]

_and the infinite horizon value is equal to,_

\[\overline{V}^{\bar{\pi}}(\bar{x})=\lim_{H\to\infty}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}\left[\sum_{t=0}^{H-1}\gamma^{t}r(x_{t},\boldsymbol{\alpha}_{t}|W)\big{|}x_{0}=x,\boldsymbol{\alpha}_{t}|x_{t}\sim\pi,x_{t+1}|x_{t},\boldsymbol{\alpha}_{t}\sim T\right], \tag{21}\]

_where the expectation is with respect to the marginal distribution of the pre-dropout system under \(\pi\)._ As a consequence of Theorem 1 and under the policy design in (19), it holds that \(\overline{V}^{\bar{\pi}}(\bar{x})=V^{\pi}(x|W)\).

Proof.: In this proof we construct a fictitious MDP with state space \(\mathcal{X}\), action space \(\mathcal{A}\), transition \(T\), and reward function \(r(x,\boldsymbol{\alpha}|W)\). This system is identically distributed to the pre-dropout system. We will use the notation \(U\) to denote the value function associated with this system. To prove the theorem, the goal will be to show that the expression \(\overline{V}_{t}^{\bar{\pi}}(\bar{x})=\mathbb{E}_{x_{-\mathcal{N}_{W}}}U_{t}^{\pi}(x)\) holds for all \(t\). Initial case:

\[\overline{V}_{H}^{\bar{\pi}}(\bar{x})=\mathbb{E}_{\bar{\boldsymbol{\alpha}}}[r(\bar{x},\bar{\boldsymbol{\alpha}})]=\mathbb{E}_{\bar{\boldsymbol{\alpha}}}\left[\sum_{n\in\mathcal{N}_{W}}r_{n}(x_{n},\alpha_{n})\right].\]

Note that for \(n\in\mathcal{N}_{W}\), \(r_{n}(x_{n},\alpha_{n}|w_{n}=1)\) is independent of \(x_{-\mathcal{N}_{W}}\) and \(\boldsymbol{\alpha}_{-\mathcal{N}_{W}}\) because agents in \(-\mathcal{N}_{W}\) have been removed from the system. Furthermore, \(r_{n}(x_{n},\alpha_{n}|w_{n}=0)=0\). Let \(\prod_{n\in-\mathcal{N}_{W}}\pi_{U}(\boldsymbol{\alpha}_{n}|x_{n})=\pi_{U}(\boldsymbol{\alpha}_{-\mathcal{N}_{W}}|x_{-\mathcal{N}_{W}})\). Then for \(x_{-\mathcal{N}_{W}}\) and \(\boldsymbol{\alpha}_{-\mathcal{N}_{W}}\) sampled from the fictitious system, the following operations on the value may be taken:

\[=\sum_{\bar{\mathbf{\alpha}}}\bar{\pi}(\bar{\mathbf{\alpha}}|\bar{x})\left[\sum_{n\in\mathcal{N}_{W}}r_{n}(x_{n},\alpha_{n})\right],\]
\[=\sum_{x_{-\mathcal{N}_{W}}}\mu(x_{-\mathcal{N}_{W}})\sum_{\mathbf{\alpha}_{-\mathcal{N}_{W}}}\pi_{U}(\mathbf{\alpha}_{-\mathcal{N}_{W}}|x_{-\mathcal{N}_{W}})\sum_{\bar{\mathbf{\alpha}}}\bar{\pi}(\bar{\mathbf{\alpha}}|\bar{x})\left[\sum_{n\in\mathcal{N}_{W}}r_{n}(x_{n},\alpha_{n})\right],\]
\[=\sum_{x_{-\mathcal{N}_{W}}}\mu(x_{-\mathcal{N}_{W}})\sum_{\mathbf{\alpha}}\pi(\mathbf{\alpha}|x)\left[\sum_{n\in\mathcal{N}_{W}}r_{n}(x_{n},\alpha_{n})\right],\]
\[=\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}_{\mathbf{\alpha}}\left[\sum_{n\in\mathcal{N}_{W}}r_{n}(x_{n},\alpha_{n}|w_{n}=1)+\sum_{n\notin\mathcal{N}_{W}}r_{n}(x_{n},\alpha_{n}|w_{n}=0)\right],\]
\[=\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}_{\mathbf{\alpha}}[U_{H}^{\pi}(x)].\]

Now use induction for general \(t\). Note that as \(x_{-\mathcal{N}_{W}}\) and \(\mathbf{\alpha}_{-\mathcal{N}_{W}}\) are sampled from the fictitious system, the transition matrices are related according to (12). Assume the claim holds for \(t+1\): \(\overline{V}_{t+1}^{\bar{\pi}}(\bar{x}^{\prime})=\mathbb{E}_{x_{-\mathcal{N}_{W}}^{\prime}}U_{t+1}^{\pi}(x^{\prime})\).
Then, stepping backwards,

\[\overline{V}_{t}^{\bar{\pi}}(\bar{x}) =\mathbb{E}_{\mathbf{\alpha}_{\mathcal{N}_{W}}}\left[r(x,\mathbf{\alpha}|W)+\gamma\mathbb{E}_{\bar{x}^{\prime}}[\overline{V}_{t+1}^{\bar{\pi}}(\bar{x}^{\prime})|\bar{x},\bar{\mathbf{\alpha}}]\right]\]
\[=\mathbb{E}_{\mathbf{\alpha}_{\mathcal{N}_{W}}}[\mathbb{E}_{x_{-\mathcal{N}_{W}},\alpha_{-\mathcal{N}_{W}}}r(x,\mathbf{\alpha}|W)+\gamma\mathbb{E}_{x_{-\mathcal{N}_{W}},\alpha_{-\mathcal{N}_{W}}}\mathbb{E}_{x^{\prime}}[\mathbb{E}_{x_{-\mathcal{N}_{W}}^{\prime}}U_{t+1}^{\pi}(x^{\prime})|x,\mathbf{\alpha}]] \tag{22}\]
\[=\mathbb{E}_{x_{-\mathcal{N}_{W}}}[U_{t}^{\pi}(x)]. \tag{23}\]

As the pre-dropout system is assumed to be an ergodic chain for all policies in Assumption 3, the marginalization will be well defined. To verify the infinite horizon case, note that all rewards are non-negative and,

\[\overline{V}^{\bar{\pi}}(\bar{x})=\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}\left[\sum_{t=0}^{H-1}\gamma^{t}r(x_{t},\mathbf{\alpha}_{t}|W)\right]+\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}\left[\sum_{t=H}^{\infty}\gamma^{t}r(x_{t},\mathbf{\alpha}_{t}|W)\right]. \tag{24}\]

The rightmost term is lower bounded by 0 as \(r\) is bounded to be non-negative. In addition, it can be expressed as,

\[\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}\left[\sum_{t=H}^{\infty}\gamma^{t}r(x_{t},\mathbf{\alpha}_{t}|W)\right]\leq\mathbb{E}_{x_{-\mathcal{N}_{W}}}r_{\max}\frac{\gamma^{H}}{1-\gamma}=r_{\max}\frac{\gamma^{H}}{1-\gamma}.\]

Take the limit as \(H\to\infty\) to show the upper bound goes to 0, completing the proof.

Theorem 1 establishes that samples generated by the pre-dropout MDP can be used to evaluate policies for realizations of the post-dropout system based on a marginalization relationship. This resolves the issue of being unable to sample from the desired post-dropout realization. Building on this observation, the next result is that the value function of the robust MDP is equal to the value of the expected realization of the system.

**Theorem 2**.: _Value of Expected System: The value of the expected system is,_

\[V_{R}^{\pi}(x)=\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}J^{\pi}(x), \tag{25}\]

_and a robust policy can be computed as,_

\[\pi_{R}(\mathbf{\alpha}|x)=\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\pi_{J}(\mathbf{\alpha}|x). \tag{26}\]

Proof.: Show that the value function of the expected system is equivalent to the robustness criterion. Initial case. Note that again \(r_{n}=0\) for removed agents in \(-\mathcal{N}_{W}\). Thus the expectation can be taken, \(V_{RH}^{\pi}(x)=\mathbb{E}_{\mathbf{\alpha}}\mathbb{E}_{W}[r(x,\mathbf{\alpha}|W)]=\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}_{\mathbf{\alpha}}\mathbb{E}_{W}[r(x,\mathbf{\alpha}|W)]\). Now use induction for general \(t\). Assume the claim holds for \(t+1\): \(V_{R,t+1}^{\pi}(x^{\prime})=\mathbb{E}_{x_{-\mathcal{N}_{W}}^{\prime}}J_{t+1}^{\pi}(x^{\prime})\).
Then, stepping backwards,

\[V_{R,t}^{\pi}(x) =\mathbb{E}_{W}[V_{t}^{\pi}(x|W)]\]
\[=\mathbb{E}_{W}\mathbb{E}_{\mathbf{\alpha}}[r(x,\mathbf{\alpha}|W)+\gamma\mathbb{E}_{x^{\prime}}[V_{t}^{\pi}(x^{\prime})|\bar{x},\mathbf{\bar{\alpha}}]]\]
\[=\mathbb{E}_{W}\mathbb{E}_{\mathbf{\bar{\alpha}}}[r(x,\mathbf{\alpha}|W)+\gamma\mathbb{E}_{x^{\prime}}[\mathbb{E}_{x^{\prime}_{-\mathcal{N}_{W}}}J_{t+1}^{\pi}(x^{\prime})|\bar{x},\mathbf{\bar{\alpha}}]] \tag{27}\]
\[=\mathbb{E}_{W}\mathbb{E}_{\mathbf{\bar{\alpha}}}[r(x,\mathbf{\alpha}|W)+\gamma\mathbb{E}_{x^{\prime}}[J_{t+1}^{\pi}(x^{\prime})|\bar{x},\mathbf{\bar{\alpha}}]] \tag{28}\]
\[=\mathbb{E}_{W}\mathbb{E}_{\mathbf{\alpha}}[\mathbb{E}_{x_{-\mathcal{N}_{W}},\mathbf{\alpha}_{-\mathcal{N}_{W}}}r(x,\mathbf{\alpha}|W)\]
\[\quad+\gamma\mathbb{E}_{x_{-\mathcal{N}_{W}},\mathbf{\alpha}_{-\mathcal{N}_{W}}}\mathbb{E}_{x^{\prime}}[J_{t+1}^{\pi}(x^{\prime})|x,\mathbf{\alpha}]]\]
\[=\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}_{\mathbf{\alpha}}[r(x,\mathbf{\alpha}|W)+\gamma\mathbb{E}_{x^{\prime}}[J_{t+1}^{\pi}(x^{\prime})|x,\mathbf{\alpha}]] \tag{30}\]

Then note that as no reward is associated with non-existent agents:

\[\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}_{\mathbf{\alpha}}[r(x,\mathbf{\alpha}|W)]\]
\[=\mathbb{E}_{W}\mathbb{E}_{\mathbf{\alpha}}[r(x,\mathbf{\alpha}|W)]\]
\[=\sum_{n\in\mathcal{N}}\mathbb{E}_{w_{n}}\mathbb{E}_{\mathbf{\alpha}_{n}}[r(x_{n},\mathbf{\alpha}_{n}|w_{n})]\]
\[=\sum_{n\in\mathcal{N}}\mathbb{E}_{\mathbf{\alpha}_{n}}\mathbb{E}_{w_{n}}[r(x_{n},\mathbf{\alpha}_{n}|w_{n})]\]
\[=\mathbb{E}_{\mathbf{\alpha}}[\mathbb{E}_{W}r(x,\mathbf{\alpha}|W)]\]
\[=\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}_{\mathbf{\alpha}}[\mathbb{E}_{W}r(x,\mathbf{\alpha}|W)] \tag{31}\]

Then continuing from (30),

\[=\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}_{\mathbf{\alpha}}[r(x,\mathbf{\alpha}|W)+\gamma\mathbb{E}_{x^{\prime}}[J_{t+1}^{\pi}(x^{\prime})|x,\mathbf{\alpha}]]\]
\[=\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}_{\mathbf{\alpha}}[\mathbb{E}_{W}r(x,\mathbf{\alpha}|W)+\gamma\mathbb{E}_{x^{\prime}}[J_{t+1}^{\pi}(x^{\prime})|x,\mathbf{\alpha}]]\]
\[=\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}J_{t}^{\pi}(x) \tag{32}\]

To verify the infinite horizon case, note that all rewards are non-negative and,

\[V_{R}^{\pi}(x)=\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}\left[\sum_{t=0}^{H-1}\gamma^{t}\mathbb{E}_{W}r(x_{t},\mathbf{\alpha}_{t}|W)\right]+\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}\left[\sum_{t=H}^{\infty}\gamma^{t}\mathbb{E}_{W}r(x_{t},\mathbf{\alpha}_{t}|W)\right].\]

The rightmost term is lower bounded by 0 as \(r\) is bounded to be non-negative. In addition, it can be expressed as,

\[\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}\left[\sum_{t=H}^{\infty}\gamma^{t}\mathbb{E}_{W}r(x_{t},\mathbf{\alpha}_{t}|W)\right]\leq\mathbb{E}_{x_{-\mathcal{N}_{W}}}r_{\max}\frac{\gamma^{H}}{1-\gamma}=r_{\max}\frac{\gamma^{H}}{1-\gamma}.\]

Take the limit as \(H\to\infty\) to show the upper bound goes to 0, completing the proof. Equation (26) then follows.

**Remark 1**.: _Retaining the transition function as the original \(T\) is a key necessary condition for policy IS.
Replacing the reward with \(\mathbb{E}_{W}r(x,\mathbf{\alpha}|W)\) will then enable policy IS to be evaluated on a single system, rather than needing to estimate a value for each of the \(2^{N}\) combinations of \(W\)._

Given Theorem 2, it can be established that the optimal robust value and policy can similarly be found as a mixture.

**Theorem 3**.: _Optimal Value and Policy of the Expected System: The optimal robust value satisfies,_

\[V_{R}^{*}(x)=\mathbb{E}_{W\sim B}[V^{*}(x|W)], \tag{33}\]

_and an optimal robust policy satisfies,_

\[\pi_{R}^{*}(\mathbf{\alpha}|x)=\mathbb{E}_{W\sim B}[\pi^{*}(\mathbf{\alpha}|x,W)]. \tag{34}\]

Proof.: Follows from Theorem 2.

### Performance of Robust Policy

The next two results will relate the performance of a single policy across both the original system \(\mathcal{M}\) and the robust model \(\mathcal{M}^{R}\). Given the value of a policy on the pre-dropout system, this first result evaluates the robust policy's performance.

**Lemma 1**.: _Let all agents have identical probabilities \(\beta_{n}\equiv\beta\). The performance of some policy \(\pi\) on the robust model \(\mathcal{M}^{R}\) and the original system \(\mathcal{M}\) can be related as,_

\[V_{R}^{\pi}(x)=\beta\mathbb{E}_{W}\mathbb{E}_{x_{-N_{W}}}V^{\pi}(x|W=\mathbf{1}). \tag{35}\]

Proof.: Consider the value function \(J\) as in (18) with the expected reward function and original transitions. Base case of induction:

\[J_{0}^{\pi}(x) =\mathbb{E}_{\mathbf{\alpha}}[r^{R}(x,\alpha)],\]
\[=\mathbb{E}_{\mathbf{\alpha}}\mathbb{E}_{W}[r(x,\alpha|W)]\]
\[=\mathbb{E}_{\mathbf{\alpha}}\sum_{n\in\mathcal{N}}\mathbb{E}_{w_{n}}[r_{n}(x_{n},\alpha_{n}|w_{n})],\]
\[=\mathbb{E}_{\mathbf{\alpha}}\sum_{n\in\mathcal{N}}\left(\beta[r_{n}(x_{n},\alpha_{n}|w_{n}=1)]+(1-\beta)[r_{n}(x_{n},\alpha_{n}|w_{n}=0)]\right),\]
\[=\beta\mathbb{E}_{\mathbf{\alpha}}[r(x,\mathbf{\alpha}|W=\mathbf{1})]+(1-\beta)\mathbb{E}_{\mathbf{\alpha}}[r(x,\mathbf{\alpha}|W=\mathbf{0})].\]

Then:

\[J_{1}^{\pi}(x) =\mathbb{E}_{\mathbf{\alpha}}[r^{R}(x,\mathbf{\alpha})+\gamma\mathbb{E}_{x^{\prime}}[J_{0}^{\pi}(x^{\prime})|x,\mathbf{\alpha}]],\]
\[=\mathbb{E}_{\mathbf{\alpha}}[\mathbb{E}_{W}[r(x,\mathbf{\alpha}|W)]+\gamma\mathbb{E}_{x^{\prime}}[J_{0}^{\pi}(x^{\prime})|x,\mathbf{\alpha}]],\]
\[=\mathbb{E}_{\mathbf{\alpha}}\left[\sum_{n\in\mathcal{N}}\mathbb{E}_{w_{n}}[r_{n}(x_{n},\alpha_{n}|w_{n})]+\gamma\mathbb{E}_{x^{\prime}}[\beta\mathbb{E}_{\mathbf{\alpha}}[r(x^{\prime},\mathbf{\alpha}|W=\mathbf{1})]+(1-\beta)\mathbb{E}_{\mathbf{\alpha}}[r(x^{\prime},\mathbf{\alpha}|W=\mathbf{0})]|x,\mathbf{\alpha}]\right],\]
\[=\mathbb{E}_{\mathbf{\alpha}}[\beta r(x,\mathbf{\alpha}|W=\mathbf{1})+(1-\beta)r(x,\mathbf{\alpha}|W=\mathbf{0})\]
\[\quad+\gamma\mathbb{E}_{x^{\prime}}[\beta\mathbb{E}_{\mathbf{\alpha}}[r(x^{\prime},\mathbf{\alpha}|W=\mathbf{1})]+(1-\beta)\mathbb{E}_{\mathbf{\alpha}}[r(x^{\prime},\mathbf{\alpha}|W=\mathbf{0})]|x,\mathbf{\alpha}]],\]
\[=\mathbb{E}_{\mathbf{\alpha}}[\beta r(x,\mathbf{\alpha}|W=\mathbf{1})+\gamma\mathbb{E}_{x^{\prime}}[\beta\mathbb{E}_{\mathbf{\alpha}^{\prime}}[r(x^{\prime},\mathbf{\alpha}^{\prime}|W=\mathbf{1})]|x,\mathbf{\alpha}]]\]
\[\quad+\mathbb{E}_{\mathbf{\alpha}}[(1-\beta)r(x,\mathbf{\alpha}|W=\mathbf{0})+\gamma\mathbb{E}_{x^{\prime}}[(1-\beta)\mathbb{E}_{\mathbf{\alpha}^{\prime}}[r(x^{\prime},\mathbf{\alpha}^{\prime}|W=\mathbf{0})]|x,\mathbf{\alpha}]],\]
\[=\beta\mathbf{T}^{\pi}J_{0}(x|W=\mathbf{1})+(1-\beta)\mathbf{T}^{\pi}J_{0}(x|W=\mathbf{0}) \tag{36}\]

Next, the inductive step must be
established. Assume the following expression holds for iteration \(k\):

\[J_{k}^{\pi}(x) =\beta\mathbf{T}^{\pi}J_{k-1}(x|W=\mathbf{1})+(1-\beta)\mathbf{T}^{\pi}J_{k-1}(x|W=\mathbf{0}),\]
\[=\beta\mathbb{E}_{\mathbf{\alpha}}[r(x,\mathbf{\alpha}|W=\mathbf{1})+\gamma\mathbb{E}_{x^{\prime}}[J_{k-1}^{\pi}(x^{\prime}|W=\mathbf{1})|x,\mathbf{\alpha}]]\]
\[\quad+(1-\beta)\mathbb{E}_{\mathbf{\alpha}}[r(x,\mathbf{\alpha}|W=\mathbf{0})+\gamma\mathbb{E}_{x^{\prime}}[J_{k-1}^{\pi}(x^{\prime}|W=\mathbf{0})|x,\mathbf{\alpha}]].\]

Then for \(k+1\):

\[J_{k+1}^{\pi}(x) =\mathbb{E}_{\mathbf{\alpha}}[r^{R}(x,\mathbf{\alpha})+\gamma\mathbb{E}_{x^{\prime}}[J_{k}^{\pi}(x^{\prime})|x,\mathbf{\alpha}]],\]
\[=\mathbb{E}_{\mathbf{\alpha}}[\mathbb{E}_{W}[r(x,\mathbf{\alpha}|W)]+\gamma\mathbb{E}_{x^{\prime}}[J_{k}^{\pi}(x^{\prime})|x,\mathbf{\alpha}]],\]
\[=\mathbb{E}_{\mathbf{\alpha}}\left[\sum_{n\in\mathcal{N}}\mathbb{E}_{w_{n}}[r_{n}(x_{n},\alpha_{n}|w_{n})]+\gamma\mathbb{E}_{x^{\prime}}[\beta\mathbf{T}^{\pi}J_{k-1}(x^{\prime}|W=\mathbf{1})+(1-\beta)\mathbf{T}^{\pi}J_{k-1}(x^{\prime}|W=\mathbf{0})|x,\mathbf{\alpha}]\right],\]
\[=\mathbb{E}_{\mathbf{\alpha}}[\beta r(x,\mathbf{\alpha}|W=\mathbf{1})+(1-\beta)r(x,\mathbf{\alpha}|W=\mathbf{0})\]
\[\quad+\gamma\mathbb{E}_{x^{\prime}}[\beta\mathbb{E}_{\mathbf{\alpha}^{\prime}}[r(x^{\prime},\mathbf{\alpha}^{\prime}|W=\mathbf{1})+\gamma\mathbb{E}_{x^{\prime\prime}}[J_{k-1}^{\pi}(x^{\prime\prime}|W=\mathbf{1})|x^{\prime},\mathbf{\alpha}^{\prime}]]\]
\[\quad+(1-\beta)\mathbb{E}_{\mathbf{\alpha}^{\prime}}[r(x^{\prime},\mathbf{\alpha}^{\prime}|W=\mathbf{0})+\gamma\mathbb{E}_{x^{\prime\prime}}[J_{k-1}^{\pi}(x^{\prime\prime}|W=\mathbf{0})|x^{\prime},\mathbf{\alpha}^{\prime}]]|x,\mathbf{\alpha}]],\]
\[=\mathbb{E}_{\mathbf{\alpha}}[\beta r(x,\mathbf{\alpha}|W=\mathbf{1})+\gamma\mathbb{E}_{x^{\prime}}[\beta\mathbb{E}_{\mathbf{\alpha}^{\prime}}[r(x^{\prime},\mathbf{\alpha}^{\prime}|W=\mathbf{1})+\gamma\mathbb{E}_{x^{\prime\prime}}[J_{k-1}^{\pi}(x^{\prime\prime}|W=\mathbf{1})|x^{\prime},\mathbf{\alpha}^{\prime}]]|x,\mathbf{\alpha}]\]
\[\quad+\mathbb{E}_{\mathbf{\alpha}}[(1-\beta)r(x,\mathbf{\alpha}|W=\mathbf{0})+\gamma\mathbb{E}_{x^{\prime}}[(1-\beta)\mathbb{E}_{\mathbf{\alpha}^{\prime}}[r(x^{\prime},\mathbf{\alpha}^{\prime}|W=\mathbf{0})+\gamma\mathbb{E}_{x^{\prime\prime}}[J_{k-1}^{\pi}(x^{\prime\prime}|W=\mathbf{0})|x^{\prime},\mathbf{\alpha}^{\prime}]]|x,\mathbf{\alpha}],\]
\[=\beta\mathbb{E}_{\mathbf{\alpha}}[r(x,\mathbf{\alpha}|W=\mathbf{1})+\gamma\mathbb{E}_{x^{\prime}}[J_{k}^{\pi}(x^{\prime}|W=\mathbf{1})|x,\mathbf{\alpha}]]\]
\[\quad+(1-\beta)\mathbb{E}_{\mathbf{\alpha}}[r(x,\mathbf{\alpha}|W=\mathbf{0})+\gamma\mathbb{E}_{x^{\prime}}[J_{k}^{\pi}(x^{\prime}|W=\mathbf{0})|x,\mathbf{\alpha}]],\]
\[=\beta\mathbf{T}^{\pi}J_{k}(x|W=\mathbf{1})+(1-\beta)\mathbf{T}^{\pi}J_{k}(x|W=\mathbf{0}) \tag{37}\]

Taking \(k\rightarrow\infty\):

\[J^{\pi}(x)\]
\[=\lim_{k\rightarrow\infty}J^{\pi}_{k}(x)\]
\[=\left[\beta\lim_{k\rightarrow\infty}J^{\pi}_{k}(x|W=\mathbf{1})+(1-\beta)\lim_{k\rightarrow\infty}J^{\pi}_{k}(x|W=\mathbf{0})\right]\]
\[=\beta J^{\pi}(x|W=\mathbf{1})+(1-\beta)J^{\pi}(x|W=\mathbf{0})\]

By Theorem 2,

\[V^{\pi}_{R}(x) =\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}J^{\pi}(x)\]
\[=\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\left[\beta J^{\pi}(x|W=\mathbf{1})+(1-\beta)J^{\pi}(x|W=\mathbf{0})\right] \tag{38}\]
\[=\mathbb{E}_{W}\mathbb{E}_{x_{-\mathcal{N}_{W}}}\left[\beta J^{\pi}(x|W=\mathbf{1})+\frac{1-\beta}{1-\gamma}\bar{\tau}\right]\]
\[=\beta\mathbb{E}_{W}V^{\pi}(x|W=\mathbf{1})+\frac{1-\beta}{1-\gamma}\bar{\tau}\]
As \(r(x,\boldsymbol{\alpha}|W=\mathbf{0})=0\), the term \((1-\beta)\mathbf{T}^{\pi}V_{0}(x|W=\mathbf{0})\) in (36) becomes \(0\) and the term \((1-\beta)\mathbf{T}^{\pi}V_{k}(x|W=\mathbf{0})\) in (37) becomes \(0\). Equation (38) follows.

The next lemma establishes the optimality gap produced by controlling the pre-dropout system with the optimal robust policy. This is the maximum loss in value accrued by controlling the system with the robust policy if dropout were to never occur.

**Lemma 2**.: _Controlling the pre-dropout MDP with \(\pi^{*}_{R}\) yields,_

\[V^{*}(x|W=\mathbf{1})-V^{\pi^{*}_{R}}(x)\leq(1-\beta^{N})[V^{*}(x|W=\mathbf{1})-V^{\pi_{U}}(x|W=\mathbf{1})]. \tag{39}\]

Proof.: Decompose the error as:

\[V^{*}(x)-V^{\pi^{*}_{R}}(x)\]
\[=V^{*}(x|W=\mathbf{1})-V^{\pi^{*}_{R}}(x|W=\mathbf{1}),\]
\[=V^{*}(x|W=\mathbf{1})-\mathbb{E}_{W^{\prime}}V^{\pi^{*}_{W^{\prime}}}(x|W=\mathbf{1}),\]
\[=\mathbb{E}_{W^{\prime}}[V^{*}(x|W=\mathbf{1})-V^{\pi^{*}_{W^{\prime}}}(x|W=\mathbf{1})],\]
\[=\mathbb{E}_{W^{\prime}}\left[\sum_{n\in\mathcal{N}_{W^{\prime}}}V^{*}(x_{n}|W=\mathbf{1})+\sum_{n\notin\mathcal{N}_{W^{\prime}}}V^{*}(x_{n}|W=\mathbf{1})-\sum_{n\in\mathcal{N}_{W^{\prime}}}V^{\pi^{*}_{W^{\prime}}}(x_{n}|W=\mathbf{1})-\sum_{n\notin\mathcal{N}_{W^{\prime}}}V^{\pi^{*}_{W^{\prime}}}(x_{n}|W=\mathbf{1})\right].\]

The last expression uses the separable properties of factored MDPs in (8). Note that \(\pi^{*}_{W^{\prime}}\) satisfies,

\[\pi^{*}_{W^{\prime}} \in\arg\max_{\pi}\mathbb{E}_{x_{-\mathcal{N}_{W^{\prime}}}}\left[\sum_{n\in\mathcal{N}_{W^{\prime}}}J^{\pi}(x_{n}|W=\mathbf{1})\right],\]
\[\in\arg\max_{\pi}\sum_{n\in\mathcal{N}_{W^{\prime}}}V^{\pi}(x_{n}|W=\mathbf{1}),\]
\[\Rightarrow\sum_{n\in\mathcal{N}_{W^{\prime}}}V^{\pi^{*}_{W^{\prime}}}(x_{n}|W=\mathbf{1})\geq\sum_{n\in\mathcal{N}_{W^{\prime}}}V^{*}(x_{n}|W=\mathbf{1}).\]

Then the value difference is equal to:

\[=\mathbb{E}_{W^{\prime}}\left[\sum_{n\in\mathcal{N}_{W^{\prime}}}V^{*}(x_{n}|W=\mathbf{1})+\sum_{n\notin\mathcal{N}_{W^{\prime}}}V^{*}(x_{n}|W=\mathbf{1})-\sum_{n\in\mathcal{N}_{W^{\prime}}}V^{\pi^{*}_{W^{\prime}}}(x_{n}|W=\mathbf{1})-\sum_{n\notin\mathcal{N}_{W^{\prime}}}V^{\pi^{*}_{W^{\prime}}}(x_{n}|W=\mathbf{1})\right],\]
\[\leq\mathbb{E}_{W^{\prime}}\left[\sum_{n\notin\mathcal{N}_{W^{\prime}}}V^{*}(x_{n}|W=\mathbf{1})-\sum_{n\notin\mathcal{N}_{W^{\prime}}}V^{\pi^{*}_{W^{\prime}}}(x_{n}|W=\mathbf{1})\right].\]

As the policy \(\pi^{*}_{W^{\prime}}\) was optimized for values associated with \(n\in\mathcal{N}_{W^{\prime}}\), it is independent of \(n\notin\mathcal{N}_{W^{\prime}}\). The policy \(\pi^{*}_{W^{\prime}}(\bar{\mathbf{\alpha}}|\bar{x})\) must be augmented by uniform policies for \(n\notin\mathcal{N}_{W^{\prime}}\) according to (19) to be in the form \(\pi^{*}_{W^{\prime}}(\mathbf{\alpha}|x)\).
With uniform policy \(\pi_{U}\), the value difference becomes:

\[=\mathbb{E}_{W^{\prime}}\sum_{n\notin\mathcal{N}_{W^{\prime}}}[V^{*}(x_{n}|W=\mathbf{1})-V^{\pi_{U}}(x_{n}|W=\mathbf{1})]\]
\[=\sum_{W^{\prime}}P(W^{\prime})\sum_{n\notin\mathcal{N}_{W^{\prime}}}[V^{*}(x_{n}|W=\mathbf{1})-V^{\pi_{U}}(x_{n}|W=\mathbf{1})]\]
\[=\sum_{k=1}^{N}P(|W^{\prime}=0|=k)\sum_{\{W^{\prime}\mid|W^{\prime}=0|=k\}}\sum_{n\notin\mathcal{N}_{W^{\prime}}}[V^{*}(x_{n}|W=\mathbf{1})-V^{\pi_{U}}(x_{n}|W=\mathbf{1})]\]
\[=\sum_{k=1}^{N}\beta^{k}(1-\beta)^{N-k}\binom{N}{k}\frac{k}{N}\left[V^{*}(x|W=\mathbf{1})-V^{\pi_{U}}(x|W=\mathbf{1})\right]\]
\[=\sum_{k=1}^{N}\beta^{k}(1-\beta)^{N-k}\binom{N-1}{k-1}\left[V^{*}(x|W=\mathbf{1})-V^{\pi_{U}}(x|W=\mathbf{1})\right]\]
\[=\sum_{k=0}^{N-1}\binom{N-1}{k}\beta^{k}(1-\beta)^{N-k}[V^{*}(x|W=\mathbf{1})-V^{\pi_{U}}(x|W=\mathbf{1})]\]
\[=[V^{*}(x|W=\mathbf{1})-V^{\pi_{U}}(x|W=\mathbf{1})](1-\beta^{N})\]

### Suboptimality of Pre-Dropout Policy

Given the structural relationship between the pre- and post-dropout MDPs, a natural question is whether this relationship extends to value optimality of the pre-dropout policy without defining the robust MDP. To demonstrate that this cannot hold in general, we present a counterexample. Consider a system with optimal value \(V^{*}(x|W=\mathbf{1})\) where the post-dropout rewards are defined as the marginalization \(r(x,\mathbf{\alpha}|W)=\mathbb{E}_{x,\mathbf{\alpha}}r(x,\mathbf{\alpha}|W=\mathbf{1})\). Then, \(V(x|W)=\mathbb{E}_{x_{-\mathcal{N}_{W}}}V(x|W=\mathbf{1})\). Unfortunately, the value \(V^{*}(x|W\neq\mathbf{1})\) cannot be calculated as \(\mathbb{E}_{x_{-\mathcal{N}_{W}}}[V^{*}(x|W=\mathbf{1})]\). To verify this, define \(b_{n}\equiv w_{n}\) to force the desired realization. Then,

\[V_{R}(x)=V(x|W)=\mathbb{E}_{-\mathcal{N}_{W}}V(x|W=\mathbf{1}) \tag{40}\]

To see this, note that the Bellman optimality criterion necessitates that \(V^{*}(s)=\mathbf{T}V^{*}(s)=\max_{\pi}\mathbb{E}_{\mathbf{\alpha}\sim\pi}[r(s,\mathbf{\alpha})+\gamma V^{*}(s^{\prime})]\). Therefore, if \(\tilde{V}\) are optimal values, then,

\[\tilde{V}(x|W\neq\mathbf{1}) \tag{41}\]
\[=\mathbf{T}\tilde{V}(x|W\neq\mathbf{1}),\]
\[=\mathbf{T}\mathbb{E}_{-\mathcal{N}_{W}}V^{*}(x|W=\mathbf{1}),\]
\[=\max_{\pi}\mathbb{E}_{\mathbf{\alpha}\sim\pi}\mathbb{E}_{-\mathcal{N}_{W}}[r(x,\mathbf{\alpha})+\gamma\mathbb{E}_{x^{\prime}}[V^{*}(x^{\prime}|W=\mathbf{1})|x,\mathbf{\alpha}]]. \tag{42}\]

However, we find that,

\[\tilde{V}(x|W\neq\mathbf{1}) \tag{43}\]
\[=\mathbb{E}_{-\mathcal{N}_{W}}V^{*}(x|W=\mathbf{1}),\]
\[=\mathbb{E}_{-\mathcal{N}_{W}}\mathbb{E}_{\mathbf{\alpha}\sim\pi^{*}}[r(x,\mathbf{\alpha})+\gamma\mathbb{E}_{x^{\prime}}[V^{*}(x|W=\mathbf{1})|x,\mathbf{\alpha}]],\]
\[=\mathbb{E}_{-\mathcal{N}_{W}}\max_{\pi}\mathbb{E}_{\mathbf{\alpha}\sim\pi}[r(x,\mathbf{\alpha})+\gamma\mathbb{E}_{x^{\prime}}[V^{*}(x|W=\mathbf{1})|x,\mathbf{\alpha}]]. \tag{44}\]

Clearly, (42) and (44) are not guaranteed to coincide, as the maximization and marginalization operations do not commute. This calculation has merely evaluated the value of the _new_ system under the policy developed for the _old_ system. Therefore, one possibility is to evaluate \(V^{*}(s)\) for other policies \(\pi\) and find one such that the principle of optimality holds. Solving for such a policy is difficult to do analytically, so another approach could be to evaluate several candidate policies and choose the best-performing option.
In practice, however, it is more reasonable to use the robust MDP formulation to automatically relate all possible realizations of the system.

## 5 Model-Free Policy Evaluation

### Method

Given Theorem 1, policy IS can now be easily adapted for the objective of estimating values of policies for the robust MDP and for specific post-dropout realizations based on trajectories generated by the pre-dropout system. The first step is to represent the desired post-dropout policy \(\phi\) in a usable format that satisfies (19). For the robust MDP, the desired policy \(\phi\) can be used as-is because the state and action spaces between the compared systems are identical. To perform policy evaluation for a realization \((\mathcal{M}|W)\), however, the post-dropout policy \(\phi^{\prime}(\bar{\boldsymbol{\alpha}}|\bar{x})\) must be augmented to be a function of \(\boldsymbol{\alpha}\) and \(x\). As \(\phi^{\prime}\) is independent of \(-\mathcal{N}_{W}\), the policies for the removed agents can be augmented with uniform distributions as in (19). Next, trajectories are generated on the pre-dropout MDP using \(\pi\), and the policy IS estimate can be formed. By performing both the sampling and the policy IS routines on the pre-dropout system, the issue of non-cancellation of the transition probabilities in (15) is resolved. The estimate can then be transformed to the post-dropout value via marginalization according to Theorem 2:

\[V_{H}^{\phi}(x|W)=\mathbb{E}_{x_{-\mathcal{N}_{W}}}\mathbb{E}_{\tau\sim q}\left[\frac{p(\tau)}{q(\tau)}\sum_{t=0}^{H-1}\gamma^{t}r_{t}^{R}(x_{t},\boldsymbol{\alpha}_{t})\right]. \tag{45}\]

This reweighted sample return may be constructed via any standard policy IS technique, such as the step-wise estimator, weighted estimator, or doubly robust estimator [17]:

\[\widehat{J}(x)=\frac{1}{|D|}\sum_{i=1}^{|D|}\left[\frac{p(\tau_{i})}{q(\tau_{i})}\sum_{t=0}^{H-1}\gamma^{t}r_{t}^{R}(x_{t},\boldsymbol{\alpha}_{t})\right], \tag{46}\]

where \(p(\tau_{i})/q(\tau_{i})=\prod_{t=0}^{H-1}\phi(\boldsymbol{\alpha}_{t}^{(i)}|x_{t}^{(i)})/\pi(\boldsymbol{\alpha}_{t}^{(i)}|x_{t}^{(i)})\), a superscript \((i)\) and subscript \(t\) denote the state or action taken at time \(t\) in trajectory \(i\), and \(|D|\) is the size of the dataset used. Under the assumption of bounded rewards in Assumption 2 and full support in Assumption 6, the IS estimator will be bounded by some maximum value \(\tilde{J}_{\max}\). The marginalization step may not be computed with respect to the true stationary distribution, as it is assumed that the true transition matrix is unknown. However, the empirical stationary distribution \(\widehat{\mu}(x_{-\mathcal{N}_{W}})\) may be estimated and used in place of the true distribution. To estimate a stationary distribution of a Markov chain empirically, it is beneficial to use one trajectory with a long horizon; this is the basis of Markov chain Monte Carlo techniques, such as the Metropolis-Hastings algorithm, that require a burn-in period [21]. To this end, we suggest generating a trajectory \(x_{1},\ldots,x_{H_{\mu}}\) of length \(H_{\mu}\) separate from the trajectories used for the IS routine. The _empirical stationary distribution_ is defined as,

\[\widehat{\mu}(x)=H_{\mu}^{-1}\sum_{i=0}^{H_{\mu}-1}\mathbb{1}\left[x_{i}=x\right]. \tag{47}\]

Similarly to (10), the empirical distribution \(\widehat{\mu}(x_{-\mathcal{N}_{W}})\) is the summation,

\[\widehat{\mu}(x_{-\mathcal{N}_{W}})=\sum_{\{x\}_{-\mathcal{N}_{W}}}\widehat{\mu}(x).
\tag{48}\]

To finish the value estimate of the robust MDP, a final marginalization step over \(W\) may be completed under the assumption that the dropout probabilities are known _a priori_:

\[V_{RH}^{\phi}(x)=\mathbb{E}_{W}[V_{H}^{\phi}(x|W)]. \tag{49}\]

The estimator can similarly be constructed as,

\[\widehat{V}_{H}^{\phi}(x|W) =\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})\widehat{J}(x), \tag{50}\]
\[=\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})\widehat{J}(x_{\mathcal{N}_{W}},x_{-\mathcal{N}_{W}}), \tag{51}\]
\[\widehat{V}_{RH}^{\phi}(x) =\sum_{W}p(W)\widehat{V}_{H}^{\phi}(x|W), \tag{52}\]
\[=\sum_{W}\prod_{n=1}^{N}\beta_{n}^{w_{n}}(1-\beta_{n})^{1-w_{n}}\widehat{V}_{H}^{\phi}(x|W). \tag{53}\]

To compute \(\widehat{V}_{H}^{\phi}(x|W)\), note that \(\widehat{J}_{W}(x)\) will need to be known for all \(\{x^{\prime}|x^{\prime}_{\mathcal{N}_{W}}=x_{\mathcal{N}_{W}}\}\), as evident in (51). Computing \(\widehat{V}_{RH}^{\phi}(x)\) will thus require calculation of \(\widehat{J}_{W}(x)\) for all \(x\). If \(|D|\) trajectories are used to compute each estimate, a total of \(|\mathcal{X}||D|\) trajectories will be needed for policy IS. If \(\beta_{n}\equiv\beta\), the following simplification can be made:

\[\widehat{V}_{RH}^{\phi}(x)=\sum_{W}\beta^{|W=1|}(1-\beta)^{|W=0|}\widehat{V}_{H}^{\phi}(x|W). \tag{54}\]

Given the described policy evaluation technique, policy search can be implemented according to the parameters of the application. A key benefit of the policy IS technique is that it resolves the conflicting objectives of controlling the existing system for good value while evaluating post-dropout policies. As discussed in Section 4.3, optimality of the pre- and post-dropout systems may be unrelated, so the pre-dropout system should not necessarily be controlled with the optimal post-dropout policy. By selecting behavioral policies that produce good pre-dropout value, while evaluating target policies for the robust or post-dropout models, good policy evaluation and system execution can be completed.

### Performance

In this section, the performance of the estimator \(\widehat{V}_{RH}^{\phi}\) will be analyzed.

**Lemma 3**.: _Define,_

\[V_{H}^{\max}=\frac{1-\gamma^{H}}{1-\gamma}\max_{\mathbf{\alpha},x}r^{R}(x,\mathbf{\alpha}). \tag{55}\]

_Then \(V_{H}^{\max}\geq J_{H}(x)\)._

The following result (Prop. 2.19 from [22]) provides a concentration bound on the convergence of the empirical stationary distribution. This bound depends on the _mixing time_ of the MDP, a structural property defined as,

\[d(t)\triangleq\sup_{x\in\mathcal{X}}d_{TV}(P^{t}(x,\cdot),\mu),\qquad t_{mix}(\epsilon)\triangleq\min\{t:d(t)\leq\epsilon\}, \tag{56}\]
\[t_{mix}\triangleq t_{mix}(1/4). \tag{57}\]

This property is a measure of the time required by a Markov chain for the distance to stationarity to become small.

**Lemma 4**.: _Consider a uniformly ergodic Markov chain with a countable state space, unique stationary distribution, and mixing time \(t_{mix}\). For any \(\delta\geq 0\),_

\[P(|d_{TV}(\widehat{\mu},\mu)-\mathbb{E}_{\mu}[d_{TV}(\widehat{\mu},\mu)]|\geq\delta)\leq 2\exp(-\delta^{2}H_{\mu}/(4.5t_{mix})). \tag{58}\]

Under the assumptions of a finite state space in the problem formulation and ergodicity in Assumption 3, we satisfy the necessary conditions to apply Lemma 4.
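To make the quantities analyzed in Lemmas 4-6 concrete, the following is a minimal sketch of the estimation pipeline of Section 5.1, under the assumptions that states are tuples of per-agent states and that `J_hat` is a dictionary of per-state IS estimates computed as in (46); the function names are illustrative, not the authors' implementation. It computes the empirical stationary distribution (47), its marginal over the dropped agents (48), and the marginalized value estimate (50)-(51).

```python
from collections import Counter

def empirical_stationary(states):
    """mu_hat(x): visit frequencies along one long trajectory of length H_mu (eq. 47)."""
    H_mu = len(states)
    return {x: c / H_mu for x, c in Counter(states).items()}

def marginal_over_dropped(mu_hat, kept):
    """mu_hat(x_{-N_W}): marginal of the dropped agents' coordinates (eq. 48).

    States are tuples (x_1, ..., x_N); kept is the set of indices n with w_n = 1.
    """
    marg = Counter()
    for x, p in mu_hat.items():
        marg[tuple(x_n for n, x_n in enumerate(x) if n not in kept)] += p
    return dict(marg)

def assemble(x_kept, x_dropped, kept, N):
    """Interleave kept and dropped coordinates back into a full state tuple."""
    it_k, it_d = iter(x_kept), iter(x_dropped)
    return tuple(next(it_k) if n in kept else next(it_d) for n in range(N))

def marginalized_value(J_hat, mu_marg, x_kept, kept, N):
    """V_hat_H(x|W): weight the per-state IS estimates by mu_hat (eqs. 50-51)."""
    return sum(p * J_hat[assemble(x_kept, xd, kept, N)] for xd, p in mu_marg.items())
```

The robust estimate (52)-(54) is then the average of these realization estimates weighted by the known probabilities \(p(W)\).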
**Lemma 5**.: _For stationary chains, the expected total variation distance between the empirical distribution and the stationary distribution is bounded by a quantity \(C(H_{\mu})\) that decreases with \(H_{\mu}\): \(\mathbb{E}_{\mu}[d_{TV}(\widehat{\mu},\mu)]\leq C(H_{\mu})\)._

Proof.: See Proposition 3.21 in [22]. Note that \(\mathbb{E}_{\mu}[d_{TV}(\widehat{\mu},\mu)]\to 0\) as \(H_{\mu}\to\infty\).

**Remark 2**.: _If an unbiased policy IS technique is used, then \(\widehat{V}_{RH}^{\phi}(x)\) is an asymptotically unbiased estimator for \(V_{RH}^{\phi}(x)\) as \(H_{\mu}\to\infty\)._

**Lemma 6**.: _Error Produced by Empirical Marginalization. Let \(t_{mix}\) be the mixing time of \(\mathcal{M}\). Let \(\widehat{\mu}\) be the empirical stationary distribution computed from a trajectory of length \(H_{\mu}\). The error produced by empirical marginalization can be bounded by,_

\[P\left(\left|\mathbb{E}_{x_{-\mathcal{N}_{W}}}J_{H}(x)-\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})J_{H}(x)\right|>\epsilon\right)\leq 2\exp\left(-\left(\frac{\epsilon}{|\mathcal{X}_{n}|^{|W=1|}\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)}-C(H_{\mu})\right)^{2}\frac{H_{\mu}}{4.5t_{mix}}\right). \tag{59}\]

Proof.:

\[P\left(\left|\mathbb{E}_{x_{-\mathcal{N}_{W}}}J_{H}(x)-\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})J_{H}(x)\right|>\epsilon\right)\]
\[=P\left(\left|\sum_{x_{-\mathcal{N}_{W}}}\mu(x_{-\mathcal{N}_{W}})J_{H}(x)-\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})J_{H}(x)\right|>\epsilon\right),\]
\[\leq P\left(\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)\left|\mu(x_{-\mathcal{N}_{W}})-\widehat{\mu}(x_{-\mathcal{N}_{W}})\right|>\epsilon\right),\]
\[=P\left(\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)\left|\sum_{\{x\}_{-\mathcal{N}_{W}}}\mu(x)-\sum_{\{x\}_{-\mathcal{N}_{W}}}\widehat{\mu}(x)\right|>\epsilon\right),\]
\[\leq P\left(\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)\sum_{\{x\}_{-\mathcal{N}_{W}}}|\mu(x)-\widehat{\mu}(x)|>\epsilon\right),\]
\[\leq P\left(\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)|\mathcal{X}_{n}|^{|W=1|}d_{TV}(\mu,\widehat{\mu})>\epsilon\right),\]
\[=P\left(d_{TV}(\mu,\widehat{\mu})>\frac{\epsilon}{|\mathcal{X}_{n}|^{|W=1|}\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)}\right),\]
\[=P\left(d_{TV}(\mu,\widehat{\mu})-\mathbb{E}_{\mu}d_{TV}(\mu,\widehat{\mu})>\frac{\epsilon}{|\mathcal{X}_{n}|^{|W=1|}\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)}-C(H_{\mu})\right),\]
\[\leq P\left(|d_{TV}(\mu,\widehat{\mu})-\mathbb{E}_{\mu}d_{TV}(\mu,\widehat{\mu})|>\frac{\epsilon}{|\mathcal{X}_{n}|^{|W=1|}\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)}-C(H_{\mu})\right) \tag{60}\]

Applying Lemma 4 to (60), the final result can be obtained.

**Theorem 4**.: _Performance of Policy IS Estimate of Realized System: Let \(\widehat{V}_{RH}^{\pi}(x|W)\) be the estimated value of \((\mathcal{M}|W)\) formed by estimating \(\widehat{J}_{H}\) with policy IS and then marginalizing with respect to \(\widehat{\mu}(x_{-\mathcal{N}_{W}})\). Let \(V_{R}^{\pi}(x|W)\) be the corresponding true value. Let the selected IS estimator have bounded bias \(|J_{H}(x)-\mathbb{E}J_{H}(x)|\leq B_{IS}(H)\) and use a dataset of \(|D|\) i.i.d. trajectories, each of length \(H\), for each \(x\). Let the empirical stationary distribution \(\hat{\mu}(x_{-\mathcal{N}_{W}})\) be formed from an additional trajectory of length \(H_{\mu}\). Let \(r_{\max}\triangleq\max_{x,\boldsymbol{\alpha}}r^{R}(x,\boldsymbol{\alpha})\), and let \(\epsilon^{\prime}=\frac{\gamma^{H}}{1-\gamma}r_{\max}+B_{IS}(H)\).
Then for \(\delta\geq 0\),_

\[P(|V_{R}^{\pi}(x|W)-\widehat{V}_{RH}^{\pi}(x|W)|\geq\delta+\epsilon^{\prime})\]
\[\leq 2\left(\exp\left(-\left(\frac{\delta}{2|\mathcal{X}_{n}|^{|W=1|}\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)}-C(H_{\mu})\right)^{2}\frac{H_{\mu}}{4.5t_{mix}}\right)+\exp\left(\frac{-|D|\delta^{2}}{4\tilde{J}_{\max}^{2}}\right)\right). \tag{61}\]

Proof.: The error can be decomposed as,

\[V_{R}^{\pi}(x|W)-\widehat{V}_{RH}(x|W)\]
\[=\mathbb{E}_{x_{-\mathcal{N}_{W}}}J(x)-\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})\widehat{J}_{H}(x),\]
\[=\mathbb{E}_{x_{-\mathcal{N}_{W}}}\left[J(x)-J_{H}(x)+J_{H}(x)\right]+\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})\left(-J_{H}(x)+J_{H}(x)-\mathbb{E}J_{H}(x)+\mathbb{E}J_{H}(x)-\widehat{J}_{H}(x)\right),\]
\[=\mathbb{E}_{x_{-\mathcal{N}_{W}}}[J(x)-J_{H}(x)]+\left[\mathbb{E}_{x_{-\mathcal{N}_{W}}}J_{H}(x)-\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})J_{H}(x)\right]\]
\[\quad+\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})(J_{H}(x)-\mathbb{E}J_{H}(x))+\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})(\mathbb{E}J_{H}(x)-\widehat{J}_{H}(x)),\]
\[\leq\Delta_{H}+\Delta_{\mu}+\Delta_{B}+\Delta_{IS},\]

where \(\Delta_{H}\) is the error of using a trajectory of length \(H\) to estimate the infinite horizon reward, \(\Delta_{\mu}\) is the error due to empirical marginalization, \(B_{IS}\) is the upper bound on the (possible) bias of the IS estimator, and \(\Delta_{IS}\) is the error of the selected IS estimator. For any value function, it is known that \(|J(x)-J_{H}(x)|\leq r_{\max}\gamma^{H}/(1-\gamma)=\tilde{r}\), so \(\Delta_{H}\) can be deterministically bounded. The stochasticity in the error therefore comes from the importance sampling step. With substitution, the triangle inequality, and noting that \(\tilde{r}\geq 0\) and \(B_{IS}\geq 0\),

\[P(|\Delta_{H}+\Delta_{\mu}+B_{IS}+\Delta_{IS}|\geq\epsilon)\]
\[\leq P(|\Delta_{\mu}+\Delta_{IS}|\geq\epsilon-\tilde{r}-B_{IS}).\]

Define \(\epsilon=\delta+\tilde{r}+B_{IS}(H)\). Then,

\[=P(|\Delta_{\mu}+\Delta_{IS}|\geq\epsilon-\tilde{r}-B_{IS}(H))\]
\[=P(|\Delta_{\mu}+\Delta_{IS}|\geq\delta)\]
\[\leq P\left(|\Delta_{\mu}|\geq\frac{1}{2}\delta\right)+P\left(|\Delta_{IS}|\geq\frac{1}{2}\delta\right).\]

Next, bounding \(\Delta_{\mu}\) can be accomplished via Lemma 6.

\[P\left(|\Delta_{\mu}|\geq\frac{1}{2}\delta\right)\]
\[=P\left(\left|\mathbb{E}_{x_{-\mathcal{N}_{W}}}J_{H}(x)-\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})J_{H}(x)\right|>\frac{1}{2}\delta\right),\]
\[\leq 2\exp\left(-\left(\frac{\delta}{2|\mathcal{X}_{n}|^{|W=1|}\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)}-C(H_{\mu})\right)^{2}\frac{H_{\mu}}{4.5t_{mix}}\right).\]

Since the rewards are bounded and given Assumption 6, the estimates of \(J\) produced by policy IS are bounded by \(\tilde{J}_{\max}\). With i.i.d. samples of bounded random variables, a Hoeffding confidence bound may be used:

\[P(|\Delta_{IS}|>\epsilon_{0})\]
\[=P\left(\sum_{x_{-\mathcal{N}_{W}}}\widehat{\mu}(x_{-\mathcal{N}_{W}})|\mathbb{E}J_{H}(x)-\widehat{J}_{H}(x)|>\epsilon_{0}\right),\]
\[\leq P\left(\max_{x}|\mathbb{E}J_{H}(x)-\widehat{J}_{H}(x)|>\epsilon_{0}\right).\]

By the Hoeffding bound, every \(P\left(|\mathbb{E}J_{H}(x)-\widehat{J}_{H}(x)|>\epsilon_{0}\right)\leq 2\exp\left(\frac{-|D|\epsilon_{0}^{2}}{\tilde{J}_{\max}^{2}}\right)\).
As this bound holds for every \(x\), it will hold for the max:

\[P\left(\max_{x}|\mathbb{E}J_{H}(x)-\widehat{J}_{H}(x)|>\epsilon_{0}\right)\leq 2\exp\left(\frac{-|D|\epsilon_{0}^{2}}{\widetilde{J}_{\max}^{2}}\right).\]

Therefore,

\[P\left(|\Delta_{IS}|>\frac{1}{2}\delta\right)\leq 2\exp\left(\frac{-|D|\delta^{2}}{4\widetilde{J}_{\max}^{2}}\right).\]

Theorem 4 gives an overall exponential confidence interval for an estimator constructed from finite trajectories subject to IS and empirical marginalization. The first term arises due to marginalization and the second due to the IS estimator. The IS error will go to zero as \(|D|\rightarrow\infty\), and the marginalization error will go to zero as \(H_{\mu}\rightarrow\infty\). Note that \(H\ll H_{\mu}\) is beneficial, as the variance of standard IS estimators increases rapidly with \(H\), but \(H_{\mu}\) must be large for the empirical distribution to reflect the true stationary distribution. This bound justifies the marginalized IS estimator as a valid technique, as \(\widehat{V}_{R}(x|W)\) is exponentially concentrated around \(V_{R}(x|W)\). It is possible to obtain more sophisticated bounds for the IS estimator (see [15]), but these bounds are often pessimistic for practical applications. A discussion of normal approximations for IS estimator bounds can be found in [23].

**Theorem 5**.: _Performance of Policy IS Estimate of Robust System: Let \(\widehat{V}_{RH}^{\pi_{R}}(x)\) be the estimated value of the robust MDP for policy \(\pi_{R}\), and let \(V_{R}^{\pi_{R}}(x)\) be the corresponding true value. Let the selected IS estimator have bounded bias \(|J_{H}(x)-\mathbb{E}J_{H}(x)|\leq B_{IS}(H)\) and use a dataset of \(|D|\) i.i.d. trajectories, each of length \(H\), for each \(x\). Let the empirical stationary distribution \(\widehat{\mu}\) be formed from an additional trajectory of length \(H_{\mu}\). Let \(r_{\max}\triangleq\max_{x,\boldsymbol{\alpha}}r^{R}(x,\boldsymbol{\alpha})\), and let \(\epsilon^{\prime}=\frac{\gamma^{H}}{1-\gamma}r_{\max}+B_{IS}(H)\). Then for \(\delta\geq 0\),_

\[P(|V_{R}^{\pi_{R}}(x)-\widehat{V}_{RH}^{\pi_{R}}(x)|>\delta+\epsilon^{\prime})\]
\[\leq 2\Bigg{(}\exp\left(-\left(\frac{\delta}{2|\mathcal{X}|V_{H}^{\max}}-C(H_{\mu})\right)^{2}\frac{H_{\mu}}{4.5t_{mix}}\right)+\exp\left(\frac{-|D|\delta^{2}}{4\widetilde{J}_{\max}^{2}}\right)\Bigg{)}. \tag{62}\]

Proof.: The steps of this proof follow similarly to those of the proof of Theorem 4, except now with the outer expectation over \(W\). As it is assumed the distribution of \(W\) is known, no additional error is incurred from that step.

\[|V_{R}^{\pi_{R}}(x)-\widehat{V}_{RH}^{\pi_{R}}(x)|\]
\[\leq\mathbb{E}_{W}|V_{R}^{\pi_{W}}(x|W)-\widehat{V}_{RH}^{\pi_{W}}(x|W)|,\]
\[\leq\mathbb{E}_{W}|\Delta_{H}^{W}+\Delta_{\mu}^{W}+\Delta_{B}^{W}+\Delta_{IS}^{W}|.\]

Note that \(\tilde{r}\) and \(B_{IS}(H)\) are both deterministic upper bounds over all \(W\).
Then,

\[P(\mathbb{E}_{W}|\Delta_{H}^{W}+\Delta_{\mu}^{W}+\Delta_{B}^{W}+\Delta_{IS}^{W}|\geq\epsilon)\]
\[\leq P(\mathbb{E}_{W}|\tilde{r}|+\mathbb{E}_{W}|B_{IS}|+\mathbb{E}_{W}|\Delta_{\mu}^{W}+\Delta_{IS}^{W}|\geq\epsilon),\]
\[=P(\mathbb{E}_{W}|\Delta_{\mu}^{W}+\Delta_{IS}^{W}|\geq\delta),\]
\[\leq P\left(\max_{W}|\Delta_{\mu}^{W}|\geq\frac{1}{2}\delta\right)+P\left(\max_{W}|\Delta_{IS}^{W}|\geq\frac{1}{2}\delta\right).\]

The established Hoeffding bound holds for all \(\Delta_{IS}^{W}\): \(P\left(|\Delta_{IS}^{W}|>\frac{1}{2}\delta\right)\leq 2\exp\left(\frac{-|D|\delta^{2}}{4\widetilde{J}_{\max}^{2}}\right)\). Therefore,

\[P\left(\max_{W}|\Delta_{IS}^{W}|>\frac{1}{2}\delta\right)\leq 2\exp\left(\frac{-|D|\delta^{2}}{4\widetilde{J}_{\max}^{2}}\right).\]

To bound the error from marginalization, first note that \(\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)\leq|\mathcal{X}_{n}|^{|W=0|}V_{H}^{\max}\) by Lemma 3. Then,

\[P\left(\left|\Delta_{\mu}^{W}\right|>\frac{1}{2}\delta\right) \leq 2\exp\left(-\left(\frac{\delta}{2|\mathcal{X}_{n}|^{|W=1|}\sum_{x_{-\mathcal{N}_{W}}}J_{H}(x)}-C(H_{\mu})\right)^{2}\frac{H_{\mu}}{4.5t_{mix}}\right), \tag{63}\]
\[\leq 2\exp\left(-\left(\frac{\delta}{2|\mathcal{X}|V_{H}^{\max}}-C(H_{\mu})\right)^{2}\frac{H_{\mu}}{4.5t_{mix}}\right). \tag{64}\]

As the bound holds for all \(|\Delta_{\mu}^{W}|\), it will hold for the maximum:

\[P\left(\max_{W}|\Delta_{\mu}^{W}|\geq\frac{1}{2}\delta\right)\leq 2\exp\left(-\left(\frac{\delta}{2|\mathcal{X}|V_{H}^{\max}}-C(H_{\mu})\right)^{2}\frac{H_{\mu}}{4.5t_{mix}}\right).\]

Theorem 5 gives an overall exponential confidence bound for the estimator of the robust value function. In comparison to Theorem 4, the robust value function requires an additional expectation over the dropout probabilities. As this distribution is known _a priori_, no error from this operation is introduced. The denominators of the terms vary, as the marginalization must now be taken over the whole state space rather than a subset. This result shows that the robust value function estimator may be computed without incurring significantly more error than the estimator for one dropout realization.

## 6 Simulations

### Robust Policies

The first experiment, shown in Figure 1, examines the loss in value incurred when the system undergoes agent dropout. This experiment is done in the full-model setting to demonstrate the utility of the robust policy. Four agents are controlled under an optimal policy for 500 time steps. The rewards assigned to the agents are equally weighted. The sample return to the current time is shown in green; under no intervention, this estimate will approximate the true value. At \(t=500\), however, the system is disturbed and loses two of the agents. If the CP finds a policy optimal for the post-dropout system, then they can achieve about 42% of the pre-dropout value. If the policy is not adapted and the optimal pre-dropout policy is used, then the return drops to 22% of the pre-dropout value. This is because the pre-dropout policy does not account for the change to the transition matrices from marginalizing out the removed agents. The loss in post-dropout value is then compared to the performance of the optimal robust policy, calculated for \(\beta_{n}=0.5\) for all agents. Note that before dropout occurs, the return of the robust policy is 95% of the optimal pre-dropout value. This shows that while the robust policy is suboptimal, the loss is negligible in this experiment.
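For reference, in this full-model setting the robust MDP of Definition 4 can be solved directly, since only the reward changes relative to the pre-dropout system. The following value-iteration sketch is an illustration under assumed tabular arrays, not the authors' implementation; note that with identical \(\beta_{n}\equiv\beta\) the reward is uniformly rescaled, so the robust policies evaluated in these experiments are presumably constructed as the mixture of realization-optimal policies in Theorem 3 rather than as the greedy policy of this simplified recursion.

```python
import numpy as np

def solve_robust_mdp(T, r_agents, betas, gamma, tol=1e-8):
    """Value iteration on the robust MDP (Definition 4): original transitions T,
    expected reward r_R(x, a) = sum_n beta_n * r_n (eqs. 16-17).

    T:        (S, A, S) transition array of the pre-dropout system.
    r_agents: list of (S, A) arrays, agent n's reward evaluated at the joint
              state-action pair (a flattened stand-in for r_n(x_n, alpha_n)).
    betas:    per-agent probabilities beta_n of remaining in the system.
    """
    r_R = sum(b * r_n for b, r_n in zip(betas, r_agents))
    S = T.shape[0]
    J = np.zeros(S)
    while True:
        Q = r_R + gamma * (T @ J)       # Q[x, a] = r_R(x, a) + gamma * E[J(x') | x, a]
        J_new = Q.max(axis=1)
        if np.max(np.abs(J_new - J)) < tol:
            return J_new, Q.argmax(axis=1)  # value J and a greedy policy pi_J
        J = J_new
```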
Figure 1: Experimental comparison of various policies on the pre- and post-dropout system. Note that the robust policy (shown in black) performs almost as well as the optimal pre-dropout policy for \(t<500\), and performs much better than the optimal pre-dropout policy for \(t>500\). While the robust policy is optimal for the post-dropout regime, it does not require the central planner to manually change the exerted policy when dropout occurs, and it can be pre-computed with the policy IS regime.

Bounds for this optimality gap are shown in Figure 2 for varying values of \(\beta\). The true loss is displayed with the theoretical upper bound from Lemma 2. The benefits of the robust policy are shown after dropout occurs; the robust policy yields 37% of the pre-dropout optimal value, a difference of five points from the optimal post-dropout policy, and an improvement of fifteen points over the optimal pre-dropout policy. This shows that the robust policy is better at automatically adjusting to any realization of the system than policies designed for the pre-dropout system. To demonstrate the robust policy in a general setting, results averaged over 1000 randomly generated systems are reported in Figure 3. This experiment considers a system of five agents where \(|\mathcal{X}_{n}|=2\) and \(|\mathcal{A}_{n}|=2\). For each instantiation, each possible dropout combination was considered. For each number of lost agents, the robust policy was found with the corresponding value of \(\beta\); for example, all combinations with one lost agent used \(\beta_{n}=0.2\). The result in red is the loss in value between the optimal post-dropout policy and the optimal pre-dropout policy. The stars show the average loss across all dropout combinations, and the error bars report the minimum and maximum. The corresponding results in black show the value loss between the optimal post-dropout policy and the optimal robust policy in the post-dropout regime. In comparing the red and black results, note that the robust policy performs better on average for \(\beta\in\{0.2,0.4\}\) and better in the maximum for \(\beta\in\{0.2,0.4,0.6\}\). This makes sense, as a post-dropout system with fewer dropped agents will be more similar to the pre-dropout system, so the optimal pre-dropout policy should yield less loss. In comparison, the robust policy needs to perform well over the average of all possible dropout combinations, so it may have higher loss for a specific realization for high \(\beta\). When the number of dropped agents is high, the pre- and post-dropout systems are less similar, so the robust policy outperforms the pre-dropout policy. The last data point is the dot, which is the loss in value incurred by controlling the pre-dropout system with the robust policy instead of the optimal pre-dropout policy. There are no error bars, as this metric is reported only for the pre-dropout regime. These results show that controlling the existing system with the robust policy results in at most a 10% loss in performance from the optimum. This experiment shows that the robust policy can be a promising strategy for systems that undergo agent dropout.

Figure 2: Experimental demonstration of the optimality gap and associated upper bound (Lemma 2) produced by controlling the pre-dropout system with the robust MDP. This was calculated for \(N=4\) agents, where \(|\mathcal{X}_{n}|=3\), \(|\mathcal{A}_{n}|=3\), and where the rewards were assigned such that \(|r_{n}(x_{n},\alpha_{n}|w_{n}=1)|\leq 1\) and \(r_{n}(x_{n},\alpha_{n}|w_{n}=0)=0\).

### Importance Sampling

The next experiment finds good robust policies from data. The CP gathers data under the policy \(\pi^{*}\) optimal for the pre-dropout system \(W=\mathbf{1}\) (or a near-optimal policy), and then uses the proposed policy IS method to evaluate the candidate robust policy. Figure 4 demonstrates this technique; in this example, evaluating the candidate policy via direct testing leads to a 36% loss in value on the existing system. Figure 4 also demonstrates how the proposed policy IS method can be used to estimate candidate robust policies. The policy IS routine was implemented with a first-visit doubly robust estimator [17], \(|D|=100\), \(H=500\), and \(H_{\mu}=5000\). The horizontal dashed line shows the true value of the policy normalized to one, and the solid green horizontal line shows the estimated value of the same policy as found by the described policy IS routine. The vertical red line shows that if the candidate policy were evaluated by directly controlling the existing system and observing the sample return, then 300 time steps would be needed to reach 95% of the true value. This experiment demonstrates that the policy IS routine can be used to accurately evaluate candidate robust policies in the model-free setting while preserving good performance for the existing system. Works such as [15] have investigated how to perform the policy search and improvement steps.

## 7 Conclusion

In this paper we consider a multi-agent MDP that may undergo agent dropout. The goal of the controller is to ensure that good control policies are known for both the pre- and post-dropout regimes of operation. The challenge is that these policies must be found before dropout occurs, using samples produced by the existing MDP, because the post-dropout MDP cannot be observed. We demonstrate that by assuming a model of probabilistic agent dropout, this problem can be reduced to a single MDP that can be analyzed using data from the pre-dropout system. This model produces two key takeaways: (1) a robust policy that automatically yields good value under either regime, and (2) a framework for finding policies for specific post-dropout realizations. To complete these objectives in a model-free setting, we propose a policy IS method that uses pre-dropout observations. Experiments validate that the robust policy can perform well in both regimes if dropout occurs. Future work can consider when the values of \(\beta\) suggest that the robust policy should be used.
2308.12955
A new framework for global data regulation
Under the current regulatory framework for data protections, the protection of human rights writ large and the corresponding outcomes are regulated largely independently from the data and tools that both threaten those rights and are needed to protect them. This separation between tools and the outcomes they generate risks overregulation of the data and tools themselves when not linked to sensitive use cases. In parallel, separation risks under-regulation if the data can be collected and processed under a less-restrictive framework, but used to drive an outcome that requires additional sensitivity and restrictions. A new approach is needed to support differential protections based on the genuinely high-risk use cases within each sector. Here, we propose a regulatory framework designed to apply not to specific data or tools themselves, but to the outcomes and rights that are linked to the use of these data and tools in context. This framework is designed to recognize, address, and protect a broad range of human rights, including privacy, and suggests a more flexible approach to policy making that is aligned with current engineering tools and practices. We test this framework in the context of open banking and describe how current privacy-enhancing technologies and other engineering strategies can be applied in this context and that of contact tracing applications. This approach for data protection regulations more effectively builds on existing engineering tools and protects the wide range of human rights defined by legislation and constitutions around the globe.
Ellie Graeden, David Rosado, Tess Stevens, Mallory Knodel, Rachele Hendricks-Sturrup, Andrew Reiskind, Ashley Bennett, John Leitner, Paul Lekas, Michelle DeMooy
2023-08-24T17:48:56Z
http://arxiv.org/abs/2308.12955v1
# A new framework for global data regulation

###### Abstract

Under the current regulatory framework for data protections, the protection of human rights writ large and the corresponding outcomes are regulated largely independently from the data and tools that both threaten those rights and are needed to protect them. This separation between tools and the outcomes they generate risks overregulation of the data and tools themselves when not linked to sensitive use cases. In parallel, separation risks under-regulation if the data can be collected and processed under a less-restrictive framework, but used to drive an outcome that requires additional sensitivity and restrictions. A new approach is needed to support differential protections based on the genuinely high-risk use cases within each sector. Here, we propose a regulatory framework designed to apply not to specific data or tools themselves, but to the outcomes and rights that are linked to the use of these data and tools in context. This framework is designed to recognize, address, and protect a broad range of human rights, including privacy, and suggests a more flexible approach to policy making that is aligned with current engineering tools and practices. We test this framework in the context of open banking and describe how current privacy-enhancing technologies and other engineering strategies can be applied in this context and that of contact tracing applications. This approach for data protection regulations more effectively builds on existing engineering tools and protects the wide range of human rights defined by legislation and constitutions around the globe.

Data protection regulations

Data are the abstract representation of the world and can now be used to describe nearly every aspect of our physical and digital experience. Smartwatches capture movement patterns and track other nearby watches (and the people who wear them) [(1)]. Cars collect data on function and speed, alerting the driver when tire pressure is low and the insurance company when driving is erratic or dangerous [(2)]. Credit card data collected by banking and finance apps and platforms can be used to provide access to financial accounts while individuals are at home or traveling, to market products, and to perform transactions within and across platforms [(3)]. As data collection and use have expanded, data protection has become a topic at the forefront of discussion across much of the world [(4)]. With a rapidly expanding number of artificial intelligence applications demonstrating the power of data processing and interpretation at scale, and as we are faced with increasing complexity and speed in technological development, there is an ever-growing and immediate need for robust, sophisticated governance to keep pace [(5, 6)]. For most of the world, data protections embedded at all levels of policy (i.e., local, state, provincial, national, institutional) have focused on the individual right to privacy, a long-standing legal foundation for data governance. While the focus on privacy regulation has intensified in the last few years, the legal basis for these policies has been in development for nearly 150 years.1 The French Supreme Court recognized the right to protection of a private life in 1868 [(7)].
The Right to Privacy, written by Samuel Warren and Louis Brandeis in 1890, was the first publication to argue for individual privacy legislation in the United States (US) [(8, 9)].2 The modern Supreme Court found, 75 years later, in Griswold, that various amendments in the Constitution created a "zone of privacy," an implied right to privacy for Americans [(10)]. Nearly 100 years later, in 1980, the Organisation for Economic Co-operation and Development (OECD) published Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, the first major international protections focused specifically on privacy in data [(11)].3 The OECD guidance was rapidly followed by the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data ("Convention 108"), the 1981 treaty that established European policy on privacy, trade, and communications [(12, 13)]. Convention 108 was recently modernized and aligned to the European Union's (EU) General Data Protection Regulation (GDPR), introduced in 2018 [(14)]. GDPR was one of the first modern-era laws to protect individual privacy as a human right under a comprehensive omnibus law with enforcement for data protections. However, these protections lack clarity and do not adequately address the complexity of the data and technological landscape now in play, including the numerous and consequential ways in which data can be acquired and used (e.g., law enforcement, employment, insurance, etc.), leading to critical gaps in the regulatory framework. Footnote 1: For the purposes of this paper, we use the terms laws, regulations, and policies interchangeably to refer to government action initiated and implemented by policies written and enforced by government actors. ### The current model for regulating data protections Given the rapid expansion in the total volume of data and artificial intelligence models, there is an increasing imperative to transition from an atomistic approach to data protections to a systemic approach that accounts for the value chain of how data are used. Most critically, though privacy and data protection policies are centered on data, data do not stand alone. Far beyond their raw form, data from a wide variety of sources are collected, processed, and stored by tools. These tools subsequently yield outcomes that affect individuals and populations interacting with and impacted by how those data are used. These outcomes are the measured impacts in the world that can then be evaluated with the goal of safeguarding one or more human rights. Each of these elements - the data, tools, outcomes, and rights - can be envisioned as tiers of a pyramid, linked by use cases that extend from data to rights, though each component is currently regulated largely independently (Figure 1). Specific examples of regulations related to each tier are shown in Figure 1a. Figure 1. Current framework for regulating data protections. Each horizontal tier (shaded gray) represents a category of regulatory targets. For b-e, the white dots represent examples of what is regulated at each layer: specific datasets or categories of data in the data layer, specific tools or classes of tools in the tool layer, unique outcomes or impacts of how those data and tools are used in the outcomes layer, and specific rights in the rights layer. There are fewer dots at each layer as the pyramid narrows, representing the large number of, for example, data types as compared to rights. 
Specific examples are color-coded and described below. a) Rights build on outcomes, which are generated by tools that use, store, and process data. Examples of policies are shown for each tier. b) Current framework for regulating data. Red dots represent types of health data requiring protection under HIPAA; blue dots represent types of personal data that fall under GDPR; the red and blue dot represents a specific source or type of data that requires protection both under HIPAA and GDPR (e.g., consumer-generated health data, geolocation data, ride-sharing history data, etc.). c) Current framework for regulating tools. Magenta dots represent tools that process or store financial data and fall under financial regulations; teal dots represent artificial intelligence models that are regulated under the new EU AI regulations; the magenta and teal dot represents an example of a model that processes financial data and also falls under the EU AI regulations. d) Current framework for regulating outcomes. The green dot represents a fair housing outcome; the orange dot represents outcomes related to driving safety. e) Current framework for regulation of rights. The purple dot represents the right to freedom of speech; the teal dot represents the right to privacy. #### Regulating data The current legal framework for data protections is largely based on the data themselves, the foundational tier in the pyramid. These protections typically fall into two categories: omnibus legislation and legislation specific to a single category of data. GDPR is a clear example of the former, as it targets protections for all personal data, a classification that is broad enough to capture nearly any data collected about people for any purpose [(15)]. By contrast, the Health Insurance Portability and Accountability Act (HIPAA) regulates individually identifiable health information used or disclosed by defined 'covered entities,' which are limited to healthcare providers, insurance companies, and healthcare exchanges, while the Children's Online Privacy Protection Act (COPPA) regulates data collected, used, or disclosed online about a specific class of people: children under the age of 13, in this case [(16)]. Legal data protections tend to codify the data ecosystem of a given sector as it existed at the time of enactment: either by specific data types or by the industry or organization that collects and processes them (Figure 1b). For example, HIPAA was first developed in 1996 as part of the Social Security Act, and its protections were designed to allow individuals to maintain health insurance coverage between jobs. HIPAA covers health providers but does not extend to entities that, today, routinely access health data, such as mobile health apps and wearable providers, leaving large gaps in data protections. Similarly, COPPA was enacted by Congress in 1998 as a way to make parents more aware of their children's online activities, a law that could not have anticipated the launch of the iPhone, and the corresponding seismic change it would bring to the digital world, just over a year later. Corresponding strategies for data protections based on changes in the digital space and technical characteristics are often then described in the regulation (e.g., specific strategies for de-identification), yielding a regulatory environment that can quickly become outdated as more effective privacy engineering solutions or privacy-enhancing technologies (PETs) are developed and new use cases for the data are implemented. 
Data regulations attempt to focus on the _how_ of data protections; however, technological advancements are moving too quickly, data are expanding too rapidly, and data flows are too dynamic for the _how_ to remain relevant for long. The disconnect in the rate of technological innovation versus that of developing policy to regulate those innovations has become an acute problem. Particularly as the amount of data expands, our knowledge of exactly which data were used by and for which tools decreases, and the tools used to analyze and model data generate even more data, amplifying the regulatory challenge exponentially. The current explosion of artificial intelligence models - from image generators to large language models and industry-specific models being developed in almost every field - has dramatically expanded not only the amount of data needed, but the amount generated. Under the current framework, each set of derivative results can require its own unique data protections, while the impacts and outcomes become harder to anticipate and evaluate. Footnote 3: The use of artificial intelligence will be regulated by the AI Act with key priorities to make AI safe, transparent and traceable. The Act requires such systems to be assessed based on the risk they pose to the users and requires Generative AI to comply with transparency requirements. #### Regulating tools The tools that process, store, and transmit data are also currently regulated as standalone entities. The limitations placed on those systems are defined not by the specific function of the tool, but by its category as a tool to process data (Figure 1c). For example, the current proposed AI legislation from the EU would regulate AI models based on whether they are designed for and used in the context of high-risk applications [17, 18].4 Similarly, financial regulations apply to any tools that store or process financial data, including policies on data retention (e.g., storage) [19] and security requirements for data transfer systems [20]6. Footnote 4: By contrast, the US National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework in January 2023 is a voluntary tool that is risk-based. However, coupled with a system engineering approach related to models and the data needed to train them, it provides flexibility for a broad range of organizations to achieve specified outcomes including privacy, security, safety, and fairness, which also encourages continual research and innovation to achieve more effective solutions for identified risks. Because current regulations are focused on the intended use of data and models, other outcomes influenced or impacted by the models that fall outside of the originally-identified target can be missed. 
For example, an algorithm designed to assess home values might not be categorized as high-risk, but could be considered high-risk if used in a different context, such as to identify newly gentrifying areas or to target policing to less affluent or distressed neighborhoods.7 This type of "off-label" use would fall outside the current regulatory framework, despite the potential for harm. Conversely, this approach risks overregulation when tools are categorically defined as high-risk based on potential use versus specific use cases. For example, deploying a customer service chatbot in financial services is not the same level of risk as deploying machine learning to detect fraud, which could potentially increase discriminatory denial of services. Likewise, deploying a customer service chatbot to help insurance beneficiaries navigate their user portals online is substantially lower risk than using artificial intelligence to approve or deny prior authorization for potentially lifesaving health care services or treatments. Conversely, focusing on the tools themselves means that regulations rarely account for the data used to train the model or stored by the system. Without taking the data types into account, regulations on tools can work at cross-purposes. For example, a new law proposed in Utah requires that social media companies verify age to ensure children cannot open new accounts.8 In practice, however, this regulation could significantly increase the amount of sensitive personal data accessed or collected by the companies in the process of verifying age (e.g., by requiring upload of an image of a government identification card). This collection could be lessened if privacy-enhancing technologies (PETs) suitable for identity use cases are deployed, but in many cases these types of engineering solutions are not addressed or supported as a risk mitigation tool because the regulatory approach is focused solely on the data. Footnote 8: The new Utah laws—H.B. 311 and S.B. 152—require that social media companies verify the age of any Utah resident. #### Regulating outcomes Most laws implemented by governments globally are focused on the outcome or impact of actions by individuals or institutions. Examples of regulations focused on outcomes include laws targeting housing discrimination, vehicle safety, child welfare, and many other areas (Figure 1d). Unlike the policies regulating data and tools, regulations and laws targeting outcomes tend to be narrower, focused on context-specific domains. For example, housing discrimination is illegal, whether implemented using a paper map and a red pen (i.e. red-lining) or using race-based credit worthiness algorithms [(21)]. In another, more specific example, the Equal Credit Opportunity Act states that individuals applying for credit can only be evaluated using factors related to their creditworthiness and prohibits any form of discrimination, such as on the basis of race, gender, color, religion, or age [(22)]. This framing focuses on the data sources for establishing creditworthiness, not the outcome itself. However, even when specific identifying characteristics of the individual are not included in the model, other proxy variables can often be used that yield the same result. Even a few data elements about the individuals' digital footprint, such as the type of device, operating system, time, email, and email domain, are correlated with protected classes and are often used as measures of creditworthiness [(23)]. 
If, instead, the Act were oriented around the outcome of biased assessments of creditworthiness, the law could be more effectively applied to the rapidly evolving technologies that are yielding discriminatory outcomes despite meeting the letter of the law in terms of the data used. Impact should be the critical measure of whether a law is broken, not the data and tools used. #### Regulating rights Protecting human rights is a primary goal of law and policy. However, when we implement protections for each right independently, we risk privileging one right over another unintentionally - decreasing protections for one as an unintended effect of increasing protections for another (Figure 1e). Even in the earliest cases of privacy in the courts, such as those litigated by Brandeis, there was already a gray area between the right to privacy and the right to free speech and freedom of the press. The line between what can or cannot be published about civilians has been deemed different from what can be published about public personas or those running for office: that which might otherwise be considered private information is considered in the public interest if the person is up for election.9 It is these very conflicts that are the critical purview of legal scholarship and regulatory application. In a 2023 example, the Federal Trade Commission (FTC) sued GoodRx for violating the FTC Act and the Health Breach Notification Rule, for inappropriate sharing of health data as though it were consumer data, stating that the company had violated HIPAA by using health-derived data for targeted advertising [24]. While the case was settled before it went to court, the challenge from the FTC highlights the regulatory ambiguity for health-adjacent data that could arguably be defined as either consumer or health data, which dramatically changes the legal framework under which the data are regulated. The challenge facing policy makers is how to effectively protect the privacy of health data, fair use of personal data, and the right to consumer protections related to the products we buy. Without a clear regulatory framework that differentiates between the use of data in each context, these rights become conflated and their protection diluted. Footnote 9: The Ethics in Government Act of 1978 requires high-level federal officials to publicly disclose their personal financial interests. The public filing of this information is intended to prevent financial conflicts of interest. Since early privacy legislation was developed, the way we collect and share information about the world around us and about each other has changed dramatically. The question now is whether or not our current regulatory framework, focused on the regulation of **data** by category and limitations on the **tools** that store and process those data, adequately protects the **outcomes** and **rights** that we want to protect. And, if it does not adequately address these outcomes and rights, how do we shift the framework to protect not just privacy, but the wide range of human rights described in global constitutions from the right to free speech and press to equal protection under the law, from fair markets to the right to meet our basic needs for housing, food, and health? 
#### A new regulatory paradigm Under the current regulatory framework for data protections, the protection of human rights writ large and the corresponding outcomes are regulated largely independently from the data and tools that both threaten those rights and are needed to protect them. This separation between tools and the outcomes they generate risks overregulation of the data and tools themselves when not linked to sensitive use cases. In parallel, separation risks under-regulation if, as in the GoodRx case, the data can be plausibly collected and processed under a less-restrictive framework, but used to drive an outcome that requires additional sensitivity and restrictions. A new approach is needed to support differential protections based on the genuinely high-risk use cases within each sector. Here, we propose a new framework in which data protection regulations are organized vertically to capture the entire value chain of data use, from the data and tools to their applied use cases, outcomes, and associated rights (Figure 2). Instead of each horizontal layer being regulated individually, this regulatory paradigm shifts the emphasis toward how data are used in specific contexts as a part of a process, and away from their regulation as a good or bad outcome in and of themselves. By shifting to a vertically-aligned regulatory framework that honors both the context and direction in which data is collected, used, or shared, we gain immense flexibility to limit the use of data and tools in one context for one outcome while allowing the use of those same data and tools under a different regulatory framework to drive toward a different outcome, all while protecting a wide range of rights. This model (Figure 2b) would support, for example, the use of health data in the aggregate for public health response efforts while limiting their use to assess insurability. By establishing regulations based on use or outcome, the tools and the data that drive them can be used to benefit population-level health while still protecting the human right to health and healthcare in addition to the right to privacy [25]. Conversely, a single right or multiple rights can be more effectively protected when linked to specific use cases for the data and models that drive a diversity of outcomes (see Figure 2c). Figure 2. A vertically-aligned paradigm for regulating data protections. Each horizontal gray shaded tier represents a category of regulatory targets. Each colored line shows one example of a right, linked to one or more white dots which represent outcomes with data processed by one or more tools from one or more data sources. Each color represents a different regulatory framework based on a different combination of elements from each tier. a) A single right and associated outcome drives the regulation of tools and data that are used to achieve that outcome. b) Different outcomes fall under separate regulations, though the data tools used to achieve those outcomes may overlap. c) A representation of the new framework showing multiple linked regulatory targets, accounting for heterogeneity and multi-use of data and models while organizing regulation around the protection of diverse rights and corresponding outcomes. #### Applying the framework A vertical framework for data protection regulations requires a significant intellectual shift. However, the existing regulations and engineering methods needed to implement the new approach are readily available and already in place in many cases. 
Below, we describe one example of how this framework can be applied. #### Open banking Financial regulations are highly prescriptive, responsible for governing the movement of data globally for more than 70 million transactions daily between individuals and organizations [26]. Open banking describes the provisions that regulate this process, ensuring the data - and the money - can be managed and transferred securely and privately between any set of financial service providers upon request by the customer [3]. Currently, the data, the tools to process those data, and the outcomes of how the data are used are regulated independently. The financial and personal **data** required for open banking can include account balances; types of institutions/entities involved in the transaction; transaction geolocations; personal information like names, addresses, and social security numbers; measures of creditworthiness; and more. These data are regulated under highly-prescriptive consumer privacy protections [27]. Similarly, the **tools** that handle and transfer data currently need to satisfy uptime specifications, access control mechanisms, and encryption standards, including record retention, and are regulated by a large number of national and international policies, treaties, and agreements, all of which define the requirements for open banking services.10,11,12 The **outcomes** of open banking are regulated independently, from the ability to purchase goods directly from your bank account to the ability of consumers to dispute their credit scores and the responsibility to report financial fraud [22, 28, 29].13 These outcomes impact and support a wide range of individual **rights**, from the right to own and exchange your purchases, to consumer protections, to the right to privacy. The protection of additional rights, including protection from corruption, fraud, and other illegal activities masked by money laundering such as human trafficking and the contraband of endangered species and drugs, requires systems-wide analysis of large amounts of population-level data, which can conflict with personal privacy legislation in the financial domain. Footnote 10: 15 U.S.C. 6801(b), 6805(b)(2) Part 314. GLBA’s Safeguard Rule “sets forth standards for developing, implementing, and maintaining reasonable administrative, technical, and physical safeguards to protect the security, confidentiality, and integrity of customer information.” Footnote 11: Federal Trade Commission. Disputing Errors on your Credit Report. “As long as the information is correct, a credit bureau can report most negative information for seven years, and bankruptcy information for 10 years.” Footnote 12: The Revised Payment Services Directive (PSD2) is a European directive, administered by the European Commission to regulate payment services and providers throughout the European Union and the European Economic Area. Footnote 13: The Currency and Foreign Transactions Reporting Act of 1970 - commonly referred to as the “Bank Secrecy Act” (BSA) - requires U.S. financial institutions to assist U.S. government agencies to detect and prevent money laundering. If we shift the legal framework to a top-down vertical approach based on specific outcomes that meet specific rights, we can apply a more systems-based approach to risk mitigation and protection of both individual and societal rights. For example, the right to access personal funds to make purchases is currently treated as a standalone right. 
However, that right is predicated on a system that links the right to the outcome: the ability to make a point of sale purchase. Given a vertical framework, the **right** to access personal funds at the point of sale and the **outcome** of successfully completing a purchase can be linked to a **tool** that performs fraud detection. Fraud detection tools require access to anonymized **data** about typical transactions by similar customers to provide the statistical foundation for anomaly detection as well as personal information about the individual's spending and travel habits. This specific combination of information, while needed for rapid assessment of point-of-sale transactions, would not be relevant nor should it be accessible or used to assess mortgage creditworthiness. In the case of mortgage creditworthiness, the individual **right** to fair access to housing, as supported by an **outcome** of bias-free assessment of creditworthiness, is driven by models **(tools)** that, like fraud detection, require aggregate **data** about the general population and specific information about the individual applying for credit. While the data used are similar and require similar privacy protections in each case, a model of mortgage creditworthiness requires significant bias testing (in the United States, under the Fair Housing Act) while fraud detection does not. By structuring the data protection regulations vertically, the rights and associated outcomes in each case can be met, while preventing overregulation and burdensome requirements that are decoupled from the relevant outcome and associated rights. #### Engineering for a vertical regulatory framework The engineering tools needed to build rights-protecting data platforms are already available. These strategies, including PETs, can be used not only to manage risk associated with data storage and transfer, but to minimize the data collected in the first place [(30)]. Notably, privacy engineering is a systems-wide approach with linked methods applied to the data and tools to generate specific outcomes associated with user requirements. Therefore, aligning the regulatory framework to vertically-oriented technical applications amplifies the value of the existing engineering tools and strengthens the implementation of both the regulations and engineering strategies. Contact tracing for infectious disease outbreaks provides a useful example of outcomes-oriented privacy engineering [(31)]. The success of global outbreak response depends largely on the ability to effectively and rapidly share the data needed to address both individual and public health. Contact tracing applications, mobile software designed to collect information about infection, were launched in many countries during the Covid-19 pandemic to integrate user location data with test results showing infection status. These data were used by individuals to assess their risk of infection and also shared by governments or other public health officials to assess risk across the population or in specific communities [(32)]. For example, the Exposure Notifications System introduced jointly by Apple and Google is a privacy-preserving contact tracing application specifically engineered to alert users of potential exposures. The application enabled public health authorities to collect aggregated data to monitor the evolution of the pandemic, while upholding strong privacy principles [(33)]. 
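As a concrete, hypothetical illustration of the kind of privacy engineering discussed here, the sketch below adds calibrated Laplace noise to aggregated exposure counts before release, the basic mechanism of differential privacy. The region names, counts, and privacy parameters are invented for illustration and do not describe the Exposure Notifications System's actual implementation.

```python
import numpy as np

def dp_aggregate(exposure_counts, epsilon=1.0, sensitivity=1.0, seed=None):
    """Release per-region exposure totals with Laplace noise.

    Assumes each user contributes at most `sensitivity` to any single
    count, so noise drawn with scale sensitivity/epsilon gives
    epsilon-differential privacy for each released total.
    """
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return {region: count + rng.laplace(loc=0.0, scale=scale)
            for region, count in exposure_counts.items()}

# Invented counts standing in for aggregated exposure notifications.
raw = {"region_a": 120, "region_b": 45, "region_c": 230}
print(dp_aggregate(raw, epsilon=0.5, seed=7))
```

Smaller values of epsilon add more noise and give stronger privacy; the regulator's role under the proposed framework would be to tie such parameter choices to the outcome being protected rather than to the data type alone.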
The architecture used in the Exposure Notifications System builds on cryptographic secure aggregation and differential privacy, and relies on proximity to another infected user, not location tracking, to minimize data collection. When coupled with encryption during data transfer and differential privacy to limit access, these methods significantly reduced the risk of data transfer in good part by limiting the amount of information that needed to be transmitted between systems. By linking methods applied to both the data and tools, the applications could more effectively and safely address outcomes required for both individual and public health response. Given the current regulatory framework, these systems were subject to a long list of regulations, each based on the type of data and the way it was shared. HIPAA and related global health data regulations applied to data that were collected by applications managed or made available by healthcare providers or insurers; COPPA applied specifically to data collected about children; GDPR applied to data collected about European Union citizens; and both the Communications Act and Electronic Communications Privacy Act applied to data collected on telephones or other regulated electronic devices [34]. If the proposed vertical regulatory framework were applied instead, we could regulate these data and tools based on the specific use case - individual health and privacy or population-level well-being. The new approach would build on the vertically-integrated engineering tools that collect and process the data, protect the right to individual privacy, and more effectively protect the right to health by supporting and ensuring access to population-scale data collected from individuals. This vertical framework, applied to engineering data protections, is closely aligned with and would support the application of new outcomes- and risk-based guidance published by the National Institute of Standards and Technology and the latest artificial intelligence regulations proposed by the European Union. Both the guidance and the regulations are organized around the use case of the models and the sensitivity of the data used to train them. Starting with outcomes ranked by risk, each model and the underlying data it uses are then required to meet different standards, driving a more systems-wide approach to regulation. Expanding this approach beyond artificial intelligence models into the broader technology domain would significantly reduce the costs to implementation and innovation while more effectively protecting all human rights. ## Conclusion Current privacy regulations are focused narrowly on the data and associated tools. This regulatory approach risks prioritizing privacy over other equally critical human rights and, from a tactical perspective, the current legal structures have failed to keep up with the pace of technological development. By designing regulation for data and related tools as we do in most other regulatory contexts - governing outcomes - we protect our diverse human rights and the outcomes that are the expression of those rights. While this shift requires a significant change in how data regulations are structured, the engineering strategies needed to implement the new approach are already in place and, indeed, better aligned to a vertical, systems-based approach to regulation than the current regulations that treat each tier independently. 
By changing our focus from regulating data and tools to regulating the outcomes and uses of those data and tools, we can build a regulatory framework that is flexible, enduring, and effective. ## Acknowledgements This work was developed in collaboration with participants in the Georgetown Data Policy and Engineering Symposium held on February 9, 2023 at Georgetown University, including Ashley Bennett, Aneesh Chopra, Alissa Cooper, Marc Crandall, Ryan Donaghy, Ellie Graeden, Rachele Hendricks-Sturrup, Mallory Knodel, Naomi Lefkovitz, John Leitner, Paul Lekas, Kobbi Nissim, Pamela Peele, Andrew Reiskind, and Rob Sherman. The symposium was supported by the Georgetown research team at the Center for Global Health Science and Security of Hailey Robertson, David Rosado, Tess Stevens, and Ryan Zimmerman. The Symposium was funded by a gift from Meta.
2306.04443
Poissonian cellular Potts models reveal nonequilibrium kinetics of cell sorting
Cellular Potts models are broadly applied across developmental biology and cancer research. We overcome limitations of the traditional approach, which reinterprets a modified Metropolis sampling as ad hoc dynamics, by introducing a physical timescale through Poissonian kinetics and by applying principles of stochastic thermodynamics to separate thermal and relaxation effects from athermal noise and nonconservative forces. Our method accurately describes cell-sorting dynamics in mouse-embryo development and identifies the distinct contributions of nonequilibrium processes, e.g. cell growth and active fluctuations.
Roman Belousov, Sabrina Savino, Prachiti Moghe, Takashi Hiiragi, Lamberto Rondoni, Anna Erzberger
2023-06-07T14:03:17Z
http://arxiv.org/abs/2306.04443v4
# When time matters: Poissonian cellular Potts models reveal nonequilibrium kinetics of cell sorting ###### Abstract Cellular Potts models are broadly applied across developmental biology and cancer research. We overcome limitations of the traditional approach, which reinterprets a modified Metropolis sampling as ad hoc dynamics, by introducing an interpretable timescale through Poissonian kinetics and by applying principles of stochastic thermodynamics to separate thermal and athermal sources of noise. Our method accurately describes cell-sorting dynamics in mouse embryo development and identifies the distinct contributions of nonequilibrium processes, e.g. cell growth and active fluctuations. The dynamics of many nonequilibrium systems can be described by a time-dependent phenomenological Hamiltonian which actively controls transitions through a sequence of target states. Widely adopted frameworks of vertex, cellular Potts, and other methods rely on this effective energy-based principle to explain spatial organization in living systems [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. Whereas the optimum of the system's energy specifies a target state of such an active transformation, the unfolding of the modeled process in time is determined by its kinetic parameters. In vertex or subcellular-element models with a continuous phase space, these parameters correspond to the transport properties--the damping coefficients. However, the traditional cellular Potts models, whose discrete-state dynamics are implemented by a modified Metropolis sampling, lack an explicit control over such kinetic parameters. Transport properties and the timescales they control are especially important when multiple processes evolve interdependently. In the course of embryonic development, numerous cellular and tissue-level processes require precise mutual coordination [12]. For example, the sorting of cell types in the early mouse embryo must be completed before the subsequent morphogenetic events commence [13; 14]. To introduce kinetic parameters into cellular Potts models (CPMs) we invoke the theory of stochastic thermodynamics [15; 16], which comprehensively describes discrete-state physical processes driven by changes of free energy. As shown further, the transport properties control the system's _frenetic activity_ [17], which constitutes the time-symmetric component of a _stochastic action_--the complement of entropic changes of a system's trajectory. While being less demanding than subcellular-element methods, CPMs can treat composite materials and describe more intricate shapes than vertex models [18; 19; 20; 21; 4]. Each cell in three dimensions corresponds to a contiguous collection of voxels with the same "spin" value--labels distinguishing individual objects in the system [Fig. 1(a)]. CPMs were first introduced by Graner and Glazier [4] to study how differences of surface energy between homotypic and heterotypic contacts cause cell sorting in development. Figure 1: (a) Schematic of a cellular Potts model: Simply connected regions of a voxel grid with “spin” values 1 (green) and 2 (red) represent two cells in a medium with value 0. (b) Poissonian dynamics of three Ising spins \(\mathbf{\sigma}=(\sigma_{1},\sigma_{2},\sigma_{3})\): The system’s configuration \(\mathbf{\sigma}(t)\) may change within a small time \(dt\) into one of the target states \(\mathbf{\sigma}^{\prime}\), \(\mathbf{\sigma}^{\prime\prime}\), and \(\mathbf{\sigma}^{\prime\prime\prime}\), which differ from the original one by the value of a single spin \(\sigma_{i=1,2,3}\), because Poissonian events never occur simultaneously. 
Using the modified Metropolis algorithm they showed that clusters of cells emerge in a typical configuration favored by the system's energy function \[E=\sum_{ij}\frac{J_{ij}(\sigma_{i},\sigma_{j})}{2}+\sum_{k}\frac{\kappa_{k}(V_{k }-\bar{V}_{k})^{2}}{2}, \tag{1}\] in which the first sum runs over spin pairs \(\sigma_{i}\) and \(\sigma_{j}\) with symmetric coefficients \(J_{ij}(\sigma_{i},\sigma_{j})=J_{ji}(\sigma_{i},\sigma_{j})\) encoding the surface interactions, whereas the second term penalizes deviations of the volume \(V_{k}\) of the \(k^{\text{th}}\) cell from its preferred value \(\bar{V}_{k}\). Usually \(J_{ij}(\sigma_{i},\sigma_{j})\) are identically zero unless the spins \(\sigma_{i}\) and \(\sigma_{j}\) are in direct contact. As the method of Graner and Glazier [4] evolved beyond a mere proof of concept, it was further generalized to include nonequilibrium aspects, such as cell division and active motility [4; 11; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. The modified Metropolis algorithm adopted in modern CPMs is not the most general kinetic model for discrete systems [37; 38; 39; 40; 41] and has some limitations [29]. In fact, the Metropolis scheme was originally designed to bypass potentially slow simulations of systems' dynamics when sampling equilibrium ensembles [42; 43, Chapter 7]. In presence of nonequilibrium aspects, the physical interpretation of this algorithm is questionable because its underlying assumptions of detailed balance and a time-independent Hamiltonian are violated. The dynamic CPM framework we propose here takes into account both energetic costs and kinetic properties of a system. Relying on the principles of stochastic thermodynamics, we separate thermal and athermal fluctuations, whereby the physical temperature of the environment is accounted for directly--unlike in traditional CPMs which regard temperature as a fictitious algorithmic parameter. Moreover, Poissonian dynamics introduces into CPM simulations an unambiguously interpretable timescale. _Framework._--As a simple illustration of our approach we first consider a paradigmatic example of discrete systems--an Ising chain \(\mathbf{\sigma}=(\sigma_{1},\sigma_{2},...,\sigma_{N})\) with nearest-neighbor interactions, which is a special case of the Potts model [44]. Arranged on a one-dimensional lattice, \(N\) spins \(\sigma_{i=1,2...N}\in\{-1,1\}\) with a periodic boundary condition \(\sigma_{N+1}=\sigma_{1}\) are described by the Hamiltonian \[H=\sum_{i=1}^{N}\frac{J}{2}\sigma_{i}\sigma_{i+1}\] with an interaction constant \(J\). To define dynamics of the Ising chain we assume that each spin flips its sign with a Poissonian transition rate \(k_{i}(\mathbf{\sigma})\), which in general depends on the current state \(\mathbf{\sigma}\) [Fig. 1(b)]. Within a sufficiently small time \(dt\) at most one spin can change its value. In equilibrium, the detailed-balance condition for such a spin \(\sigma_{i}\) requires \[\exp\left[-\frac{H(\sigma_{i})}{k_{\text{B}}T}\right]k_{i}(\sigma_{i})=\exp \left[-\frac{H(-\sigma_{i})}{k_{\text{B}}T}\right]k_{i}(-\sigma_{i}), \tag{2}\] in which \(H(\sigma_{i})\) and \(k_{i}(\sigma_{i})\) are respectively the energy and the transition rate of the spin \(\sigma_{i}\), with the values of \(\sigma_{j\neq i}\) given. From Eq. (2) it follows then \[\frac{k_{i}(\sigma_{i})}{k_{i}(-\sigma_{i})}=\exp\left[-\frac{\Delta H(-\sigma_ {i})}{k_{\text{B}}T}\right], \tag{3}\] in which \(\Delta H(-\sigma_{i})=H(-\sigma_{i})-H(\sigma_{i})\). 
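To make the energy bookkeeping concrete, the following is a minimal sketch (ours, not from the paper) of the Ising-chain energy and the local energy difference \(\Delta H(-\sigma_{i})\) entering Eq. (3); the spin configuration and the coupling \(J\) are illustrative.

```python
import numpy as np

def chain_energy(spins: np.ndarray, J: float) -> float:
    """Total energy H = sum_i (J/2) s_i s_{i+1} with periodic boundaries."""
    return 0.5 * J * float(np.sum(spins * np.roll(spins, -1)))

def delta_H(spins: np.ndarray, i: int, J: float) -> float:
    """Energy change from flipping spin i; only its two bonds contribute."""
    n = len(spins)
    left, right = spins[(i - 1) % n], spins[(i + 1) % n]
    return -J * spins[i] * (left + right)

# Sanity check on a random 10-spin configuration (illustrative values).
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=10)
i = 3
flipped = spins.copy()
flipped[i] *= -1
assert np.isclose(chain_energy(flipped, 1.0) - chain_energy(spins, 1.0),
                  delta_H(spins, i, 1.0))
```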
The stochastic kinetics of the above discrete system is given by a master equation, once a common factor between \(k_{i}(\sigma_{i})\) and \(k_{i}(-\sigma_{i})\) in Eq. (3) is specified [47; 48; 49; 37]. In a more general context each spin may be characterized by a state-dependent _action rate_\(\alpha_{i}(\sigma_{i})\), which determines the probability \[1-e^{-\alpha_{i}(\sigma_{i})dt}\approx\alpha_{i}(\sigma_{i})dt\] for the \(i^{\text{th}}\) spin to _attempt_ a sign change. When such an attempt occurs, the transition probability \(p(\sigma_{i}\rightarrow\sigma_{i}^{\prime})\) is given by a _directing function_\(L(\sigma_{i}^{\prime})\) with a normalization constant \(Z\)[50] \[p(\sigma_{i}\rightarrow\sigma_{i}^{\prime})=\frac{e^{L(\sigma_{i}^{\prime})}} {Z}. \tag{4}\] For the two possible outcomes--no sign change, \(\sigma_{i}^{\prime}=\sigma_{i}\), and a transition, \(\sigma_{i}^{\prime}=-\sigma_{i}\)--the normalization constant expands to \(Z=e^{L(\sigma_{i})}+e^{L(-\sigma_{i})}\) and Eq. (4) yields \[p(\sigma_{i}\rightarrow\sigma_{i})= \frac{1}{1+e^{\Delta L(-\sigma_{i})}}, \tag{5}\] \[p(\sigma_{i}\rightarrow-\sigma_{i})= \frac{e^{\Delta L(-\sigma_{i})}}{1+e^{\Delta L(-\sigma_{i})}}, \tag{6}\] with \(\Delta L(-\sigma_{i})=L(-\sigma_{i})-L(\sigma_{i})\). With the transition rate given by the product of the attempt rate and the transition probability \[k_{i}(\sigma_{i})=\alpha_{i}(\sigma_{i})\frac{e^{\Delta L(-\sigma_{i})}}{1+e^{ \Delta L(-\sigma_{i})}}, \tag{7}\] we find from Eq. (3) \[e^{\Delta L(-\sigma_{i})}=\frac{\alpha_{i}(-\sigma_{i})}{\alpha_{i}(\sigma_{i})} \exp\left[-\frac{\Delta H(-\sigma_{i})}{k_{\text{B}}T}\right]. \tag{8}\] A complete specification of the Ising chain now requires both the Hamiltonian and the spins' action rates. Such a system can be simulated exactly in continuous time by the standard techniques for master equations, or approximately by using a tau-leap algorithm with a step \(dt\)[51] which is also easily parallelizable in high-performance computations. The action rates do not compromise the canonical distribution of the Ising chain in equilibrium [Fig. 2(a)]. These kinetic parameters control the unfolding of dynamical processes, such as relaxation of transients. For example, chains initially prepared in equilibrium at temperature \(T_{0}\) and subject to a sudden temperature change \(\Delta T\) relax to the new steady state faster when the spins have larger action rates [Fig. 2(b)]. By design, the Metropolis scheme renders samples of the target equilibrium ensemble after a very short transient trajectory, which can not be controlled by algorithmic or system parameters. _Model analysis._--To analyze the role of the introduced kinetic parameters we apply the theory of stochastic thermodynamics to the Ising model. Any given trajectory of the system \(\theta\) from an initial state \(\mathbf{\sigma}^{0}\) to a final state \(\mathbf{\sigma}^{M}\) can be decomposed into a sequence of \(M\)_elementary paths_ \[\theta=\mathcal{T}_{M}\mathcal{T}_{M-1}...\mathcal{T}_{1}.\] Each path \(\mathcal{T}\) consists of \(n\) quiescent intervals of arbitrarily small time \(dt\) in some initial configuration \(\mathbf{\sigma}\), followed by a sign change of an \(i\)-th spin which produces a new state \(\mathbf{\sigma}^{\prime}\). The probability of this change is \(p_{i}=k_{i}(\sigma_{i})dt\), whereas the probability of the quiescent period lasting \(n\) steps is \[q_{n}=\left[1-dt\sum_{j}k_{j}(\mathbf{\sigma})\right]^{n}\approx e^{-ndt\sum_{j}k _{j}(\mathbf{\sigma})}. 
\tag{9}\] With these definitions, the probability of the elementary path from a given initial condition is \[p(\mathcal{T}|\mathbf{\sigma})=q_{n}p_{i}. \tag{10}\] Now we can decompose the _stochastic action_\(\mathcal{A}\) of the elementary path into the entropic and frenetic components [17], \(\Delta\mathcal{S}\) and \(\mathcal{D}\) respectively: \[\mathcal{A}=-k_{\mathrm{B}}\ln p(\mathcal{T}|\mathbf{\sigma})=\mathcal{D}-\frac{1 }{2}\Delta\mathcal{S}. \tag{11}\] Indeed consider the probability of a time-reverse trajectory \(\tilde{\mathcal{T}}\): a change of the \(i^{\mathrm{th}}\) spin followed by \(n\) quiescent steps and conditioned on the initial configuration \(\mathbf{\sigma}^{\prime}\) has the probability \[p(\tilde{\mathcal{T}}|\mathbf{\sigma}^{\prime})=p^{\prime}_{i}q_{n} \tag{12}\] in which \(p^{\prime}_{i}=k_{i}(-\sigma_{i})dt\). Due to Eqs. (7) and (8) the time-asymmetric part of the action is \[\Delta\mathcal{S}= -k_{\mathrm{B}}\ln\frac{p(\mathcal{T}|\mathbf{\sigma})}{p(\tilde{ \mathcal{T}}|\mathbf{\sigma}^{\prime})}\] \[= -k_{\mathrm{B}}\ln\frac{k_{i}(\sigma_{i})}{k_{i}(-\sigma_{i})}= \frac{\Delta H(-\sigma_{i})}{T}. \tag{13}\] The time-symmetric part yields a more involved expression which for a small \(dt\) can be approximated by \[\mathcal{D}= \frac{k_{\mathrm{B}}}{2}\ln\left[p(\mathcal{T}|\mathbf{\sigma})p( \tilde{\mathcal{T}}|\mathbf{\sigma}^{\prime})\right]\approx-ndtk_{\mathrm{B}}\sum _{j}k_{j}(\sigma_{i})\] \[+k_{\mathrm{B}}\ln[\sqrt{k_{i}(\sigma_{i})k_{i}(-\sigma_{i})}dt]. \tag{14}\] The total action of the whole trajectory is \(\mathcal{A}(\theta)=\sum_{m=0}^{M}\mathcal{A}(\mathcal{T}_{m})\) with the components \[\Delta\mathcal{S}(\theta)= \frac{1}{T}\left\{H\left[\mathbf{\sigma}^{M}\right]-H(\mathbf{\sigma}^{0} )\right\}, \tag{15}\] \[\mathcal{D}(\theta)= \sum_{m=1}^{M}\mathcal{D}(\mathcal{T}_{m}). \tag{16}\] Figure 2: Simulations of an Ising chain with \(N=10\) spins. (a) Distribution of the total energy in the canonical equilibrium ensemble at temperature \(T=2J/k_{\mathrm{B}}\) (\(J=1\) arb.u.). The probability of states in such a small system is calculated exactly (Theory). Tau-leap simulations in discrete time (DT) with a step \(dt=10^{-4}\) arb.u. match the theory with a p-value of the multinomial test \(0.997\)[45, 46]. The exact continuous-time (CT) simulations and the Metropolis algorithm (MA) produce comparable results with the p-values \(0.966\) and \(0.986\) respectively. In the DT and CT models the chain is inhomogeneous with action rates \(\alpha_{2i+1}=0.1\) for the odd indices, and \(\alpha_{2i}=0.3\) arb.u. for the even indices. Error bars are given by three standard deviations of \(10^{4}\) realizations. (b) Relaxation of the chain energy between two equilibrium states with average values \(\langle H\rangle_{0}\) and \(\langle H\rangle_{\Delta}\) at temperatures \(T(t<t_{0})=T_{0}=1.8|J|\) and \(T(t\gg t_{0})=T_{0}+\Delta T=2.0|J|/k_{\mathrm{B}}\) respectively (\(J=-1\) arb.u.). The results of MA sampling are reported alongside the CT simulations of a slow (\(\alpha_{2i}=0.05\) and \(\alpha_{2i+1}=0.08\) arb.u.) and fast (\(\alpha_{2i}=0.5\) and \(\alpha_{2i+1}=0.7\) arb.u.) kinetics. Each curve traces an average over \(10^{4}\) trajectories with a standard-error band. Without compromising the entropic activity, action rates control the system's frenesy through transition rates \(k_{j}\), cf. Eqs (7), (14), and (16). 
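As a rough illustration of how action rates enter the dynamics, here is a sketch of a tau-leap simulation of the Poissonian Ising chain. It assumes, for simplicity, that each action rate is independent of the spin's sign, in which case the ratio \(\alpha_{i}(-\sigma_{i})/\alpha_{i}(\sigma_{i})\) in Eq. (8) equals one and the rate of Eq. (7) reduces to a Glauber-like form; the inhomogeneous rates below mimic, but are not taken from, the setup of Fig. 2.

```python
import numpy as np

def flip_rate(spins, i, J, kT, alpha):
    """Poissonian flip rate k_i, Eqs. (7)-(8), with sign-independent alpha."""
    n = len(spins)
    dH = -J * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
    boltz = np.exp(-dH / kT)   # exp(dL) reduces to the Boltzmann factor
    return alpha[i] * boltz / (1.0 + boltz)

def tau_leap(spins, J, kT, alpha, dt, steps, rng):
    """Each spin flips within dt with probability k_i * dt (approximate)."""
    energies = []
    for _ in range(steps):
        rates = np.array([flip_rate(spins, i, J, kT, alpha)
                          for i in range(len(spins))])
        spins[rng.random(len(spins)) < rates * dt] *= -1
        energies.append(0.5 * J * float(np.sum(spins * np.roll(spins, -1))))
    return spins, energies

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=10)
alpha = np.where(np.arange(10) % 2 == 0, 0.3, 0.1)   # inhomogeneous rates
spins, E = tau_leap(spins, J=-1.0, kT=2.0, alpha=alpha,
                    dt=1e-2, steps=5000, rng=rng)
print(f"final energy: {E[-1]:.2f}")
```

Larger action rates shorten the relaxation transient without changing the stationary distribution, consistent with the behavior shown in Fig. 2(b).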
The entropy change of a relaxation process is entirely determined by the energy difference between the initial and final configurations of the system [Eq. (14), Fig. 2(b)]. In contrast, the frenesy depends on the kinetics of each state transition in a system's trajectory. _Poissonian cellular Potts models_.--We now construct a three-dimensional CPM using the Poissonian framework. Voxels on a cubic lattice describe the state of \(K\) distinct cells and a medium, taking values \(\sigma_{i}\in\{0,1,2,...,K\}\) (Fig. 1). The coefficients \(J_{ij}(\sigma_{i},\sigma_{j})\) in Eq. (1) vanish when voxels \(i\) and \(j\) are both occupied by the same object, or when the voxel \(i\) is not within the Moore neighborhood of the voxel \(j\)[29]. Otherwise \(J_{ij}\) assume constant positive values encoding the surface interactions between objects. For each of the total \(\nu\) object types, our framework introduces a Poissonian state-dependent action rate \(\alpha(\sigma_{i})\in\{\alpha_{0},\alpha_{1},...,\alpha_{\nu}\}\), with which an \(i^{\text{th}}\) voxel attempts to change its current value \(\sigma_{i}\). Its possible target values \(\sigma_{i}^{(j)}\) are chosen from the Von Neumann neighborhood like in the standard CPMs [29], with the transition probabilities given by a general version of Eqs. (4) and (8) \[p(\sigma_{i}\rightarrow\sigma_{i}^{(j)})= \frac{e^{\Delta L(\sigma_{i}^{(j)})}}{\sum_{j}e^{\Delta L(\sigma_ {i}^{(j)})}}, \tag{17}\] \[e^{\Delta L(\sigma_{i}^{(j)})}= \frac{\alpha(\sigma_{i}^{(j)})}{\alpha(\sigma_{i})}\exp\left[- \frac{H(\sigma_{i}^{(j)})-H(\sigma_{i})}{k_{\text{B}}T}\right]. \tag{18}\] In traditional CPM simulations, the temperature is an algorithmic parameter manipulated to adjust the overall level of fluctuations [52]. In contrast, our approach regards it as a physical variable that can be set at an experimentally controlled value. To prevent cell fragmentation, usually suppressed by a periodically applied annealing, we adopt instead a local-connectivity test of Durand and Guesnet [29] in a modified form (see Computational Details in Supplemental Materials). The level of noise in the system may be amplified by nonequilibrium processes present inside cells, for example due to the activity of molecular motors or the active polymerization of cytoskeletal filaments [53, 54, 55]. An extension of the directing-function formalism [50, Appendix B] incorporates such _active_ fluctuations into a perturbation of Eq. (17) \[p(\sigma_{i}\rightarrow\sigma_{i}^{(j)})=\frac{e^{\Delta L(\sigma_{i}^{(j)})+ \phi(\sigma_{i},\sigma_{i}^{(j)})}}{\sum_{j}e^{\Delta L(\sigma_{i}^{(j)})+ \phi(\sigma_{i},\sigma_{i}^{(j)})}}, \tag{19}\] in which \(\phi(\sigma_{i},\sigma_{i}^{(j)})\) is a function associated with a specific transition \(\sigma_{i}\rightarrow\sigma_{i}^{(j)}\). When this perturbation breaks the detailed-balance condition, such a transition incurs an irreversible thermodynamic work. For example, if we set \[\phi(\sigma_{i},\sigma_{i})\equiv 0,\qquad\phi(\sigma_{i},\sigma_{i}^{(j)} \neq\sigma_{i})=\bar{\phi}=\text{const}, \tag{20}\] all transitions \(\sigma_{i}\rightarrow\sigma_{i}^{(j)}\), except for the trivial ones \(\sigma_{i}^{(j)}\equiv\sigma_{i}\) are promoted. Active processes can also be modeled more explicitly. Cell growth is typically implemented by a time-dependent preferred volume \(\bar{V}_{k}\) in Eq. (1). Additional terms of the Hamiltonian can account for persistent cell motility [30, 32, 33, 34]. 
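The voxel update of Eqs. (17)-(20) can likewise be sketched in a few lines; this is an illustrative fragment, not the authors' code, and the energy differences, action-rate ratios, and bias \(\bar{\phi}\) below are made-up numbers.

```python
import numpy as np

def voxel_transition_probs(dH, alpha_ratio, kT, phi=0.0):
    """Probabilities for one voxel update, Eqs. (17)-(19).

    Entry 0 encodes the trivial option (keep the current value), so
    dH[0] = 0 and alpha_ratio[0] = 1.  alpha_ratio[j] is
    alpha(target_j) / alpha(current); phi is the constant active bias of
    Eq. (20), applied to all genuine changes (phi != 0 breaks detailed
    balance and models athermal, active fluctuations).
    """
    dL = np.log(alpha_ratio) - dH / kT   # Eq. (18), in log space
    dL[1:] = dL[1:] + phi                # Eq. (20): bias non-trivial moves
    w = np.exp(dL - dL.max())            # softmax, numerically stable
    return w / w.sum()                   # Eqs. (17) and (19)

# Hypothetical voxel with two candidate target values from its
# Von Neumann neighborhood: stay put, join a neighboring cell
# (dH = +2 kT), or revert to medium (dH = -1 kT); all values illustrative.
p = voxel_transition_probs(dH=np.array([0.0, 2.0, -1.0]),
                           alpha_ratio=np.array([1.0, 1.0, 0.4]),
                           kT=1.0, phi=0.3)
print(p)
```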
_Cell sorting during embryonic development_.--As a biophysical example we consider the sorting of epiblast (EPI) and primitive endoderm (PrE) cells in the early mouse embryo. These cells form the inner cell mass (ICM) aggregate and sort into an outer single layer of PrE cells separating EPI cells in the bulk from the medium [56] [Fig. 3(a)]. Recent advances provide unprecedented experimental access to the dynamics of isolated ICMs [57, 58, 59, 60, 61]. We quantify the segregation of the two cell types by a _sorting score_, using the distances of EPI and PrE cells (\(r_{i}^{\text{EPI}}\) and \(r_{j}^{\text{PrE}}\) respectively) to their common geometric center: \[s=\frac{1}{N^{\text{PrE}}\ N^{\text{EPI}}}\sum_{i=1}^{N^{\text{EPI}}}\sum_{j=1 }^{N^{\text{PrE}}}\text{sign}\left(r_{j}^{\text{PrE}}-r_{i}^{\text{EPI}} \right),\] in which \(N^{\text{PrE}}\) and \(N^{\text{EPI}}\) are the numbers of PrE and EPI cells. By definition the score \(s\in[-1,1]\) is close to zero for unsorted cells, and \(-1\) or \(1\) when all PrE cells are inside or outside the aggregate respectively. To model the sorting process, we chose the five interaction constants \(J_{\text{medium:EPI}}\), \(J_{\text{medium:PrE}}\), \(J_{\text{EPI:EPI}}\), \(J_{\text{EPI:PrE}}\), \(J_{\text{PrE:PrE}}\) from a physiologically relevant range of the EPI and PrE surface tensions, set the temperature to the experimental value at \(310.15\,\text{K}\), and calibrated the growth parameters to match the observed proliferation dynamics (see Computational Details in Supplemental Materials). As expected, the action rate parameters control the relaxation dynamics of the sorting process [Fig. 3(c)-(d)]. We sampled 100 combinations of action rates \(\{\alpha_{0},\alpha_{\text{EPI}},\alpha_{\text{PrE}}\}\), with each entry chosen from the interval \((0.10,3.57)\) min\({}^{-1}\). Faster kinetics of PrE cells seems to promote the sorting process in agreement with Ref. [60]. Almost perfect sorting is achieved within \(480\,\text{min}\) of CPM simulations for a wide range of parameters [Fig. 3(b)]. Without either growth and division [Fig. 3(e)], or active fluctuations [Fig. 3(f)] however, the process is hindered. Both mechanisms do a thermodynamic work on the system: the growth of cells generates stresses, and new cell boundaries increase the total surface energy, whereas the active fluctuations inject energy due to the broken detailed balance. In response to these nonequilibrium processes the system more rapidly acquires the energetically favored sorted state. For our further findings on ICM sorting _in vivo_ see [61]. _Conclusions_.--Poissonian CPMs provide a physically consistent framework to study the dynamics of complex heterogeneous materials with active properties. The introduced kinetic parameters control generalized transport coefficients and timescales of the system, and thus permit an unambiguous interpretation of time in simulations. Thanks to the principles of stochastic thermodynamics, thermal and athermal sources of noise are clearly separated, and active fluctuations can be incorporated independently of the physical temperature of the environment. We applied this framework to examine the roles of distinct nonequilibrium processes in embryonic cell sorting, and show that either growth and division or active shape fluctuations are required for successful segregation of cell types. 
Our framework is also generally applicable to other discrete-state models, where identifying action rates and directing functions will provide insights into the distinct roles of transport properties and nonequilibrium processes [47; 48; 49; 62]. _Acknowledgements_.--R.B. is grateful to Marc Durand for stimulating discussions on the fragmentation-free CPM approach for 3D systems and to Florian Berger for creative suggestions on the front matter. L.R. acknowledges the support of the Italian National Group of Mathematical Physics (GNFM) of INDAM. The authors also express their gratitude to Francois Graner for providing constructive feedback on the theoretical aspects of CPMs, as well as to Amitabha Nandi, Pamela Guruciaga, Jan Rombouts, Tim Dullweber, Jenna Elliott, Ergin Kohen, and Pietro Zamberlan for their feedback.
2307.12108
An Empirical Study & Evaluation of Modern CAPTCHAs
For nearly two decades, CAPTCHAs have been widely used as a means of protection against bots. Throughout the years, as their use grew, techniques to defeat or bypass CAPTCHAs have continued to improve. Meanwhile, CAPTCHAs have also evolved in terms of sophistication and diversity, becoming increasingly difficult to solve for both bots (machines) and humans. Given this long-standing and still-ongoing arms race, it is critical to investigate how long it takes legitimate users to solve modern CAPTCHAs, and how they are perceived by those users. In this work, we explore CAPTCHAs in the wild by evaluating users' solving performance and perceptions of unmodified currently-deployed CAPTCHAs. We obtain this data through manual inspection of popular websites and user studies in which 1,400 participants collectively solved 14,000 CAPTCHAs. Results show significant differences between the most popular types of CAPTCHAs: surprisingly, solving time and user perception are not always correlated. We performed a comparative study to investigate the effect of experimental context -- specifically the difference between solving CAPTCHAs directly versus solving them as part of a more natural task, such as account creation. Whilst there were several potential confounding factors, our results show that experimental context could have an impact on this task, and must be taken into account in future CAPTCHA studies. Finally, we investigate CAPTCHA-induced user task abandonment by analyzing participants who start and do not complete the task.
Andrew Searles, Yoshimichi Nakatsuka, Ercan Ozturk, Andrew Paverd, Gene Tsudik, Ai Enkoji
2023-07-22T15:36:13Z
http://arxiv.org/abs/2307.12108v1
# An Empirical Study & Evaluation of Modern CAPTCHAs ###### Abstract For nearly two decades, captchas have been widely used as a means of protection against bots. Throughout the years, as their use grew, techniques to defeat or bypass captchas have continued to improve. Meanwhile, captchas have also evolved in terms of sophistication and diversity, becoming increasingly difficult to solve for both bots (machines) and humans. Given this long-standing and still-ongoing arms race, it is critical to investigate how long it takes legitimate users to solve modern captchas, and how they are perceived by those users. In this work, we explore captchas _in the wild_ by evaluating users' solving performance and perceptions of _unmodified currently-deployed_ captchas. We obtain this data through manual inspection of popular websites and user studies in which \(1,400\) participants collectively solved \(14,000\) captchas. Results show significant differences between the most popular types of captchas: surprisingly, solving time and user perception are not always correlated. We performed a comparative study to investigate the effect of _experimental context_ - specifically the difference between solving captchas directly versus solving them as part of a more natural task, such as account creation. Whilst there were several potential confounding factors, our results show that experimental context could have an impact on this task, and must be taken into account in future captcha studies. Finally, we investigate captcha-induced user task _abandonment_ by analyzing participants who start and do not complete the task. ## 1 Introduction Automated bots pose a significant challenge for, and danger to, many website operators and providers. Masquerading as legitimate human users, these bots are often programmed to scrape content, create accounts, post fake comments or reviews, consume scarce resources, or generally (ab)use other website functionality intended for human use [31, 46]. If left unchecked, bots can perform these nefarious actions at scale. Captchas are a widely-deployed defense mechanism that aims to prevent bots from interacting with websites by forcing each user to perform a task, such as solving a challenge [5]. Ideally, the task should be straightforward for humans, yet difficult for machines [68]. The earliest captchas asked users to transcribe random distorted text from an image. However, advances in computer vision and machine learning have dramatically increased the ability of bots to recognize distorted text [35, 41, 74], and by 2014, automated tools achieved over \(99\%\) accuracy [39, 62]. Alternatively, bots often outsource solving to captcha farms - sweatshop-like operations where humans are paid to solve captchas [54]. In light of this, captchas have changed and evolved significantly over the years. Popular captcha tasks currently include object recognition (e.g., "select squares with..."), parsing distorted text, puzzle solving (e.g., "slide the block..."), and user behavior analysis [39, 62]. It is therefore critical to understand and quantify how long it takes legitimate users to solve current captchas, and how these captchas are perceived by users. Several prior research efforts have explored captcha solving times, e.g., [24, 27, 33, 37, 58, 67]. For example, over a decade ago, Bursztein et al. [27] performed a large-scale user study, using over \(1,100\) unique participants from Amazon Mechanical Turk (MTurk) [3] as well as captcha farms. 
Their results showed that captchas were often more difficult or took longer to solve than was expected. There was a loose correlation between time-to-annoyance and abandonment, with higher abandonment rates observed for captchas that took longer to solve. The same study also showed several demographic trends, e.g., users outside the US typically took longer to solve English-language captcha schemes. However, since this study, the captcha ecosystem has changed substantially: new captcha types emerged, input methods evolved, and Web use boomed. More recently, Feng et al. [33] used a similar methodology, with 202 participants, to study the usability of their newly proposed SenCAPTCHA in comparison to text, audio, image, and video-based captchas. They found that SenCAPTCHA outperformed the alternatives, both in terms of solving time and user preference. They used Securimage [55], a free open-source PHP script, to generate text and audio captchas, and they implemented their own image and video captchas. Building upon and complementing prior work, this paper evaluates captchas _in the wild_ - specifically, the solving times and user perceptions of _unmodified_ (i.e., not re-implemented) _currently-deployed_ captcha types. We first performed a manual inspection of \(200\) popular websites, based on the Alexa Top websites list [2], to ascertain: (1) _how many_ websites use captchas, and (2) _what types_ of captchas they use. Next, we conducted a \(1,000\)-participant user study using Amazon MTurk, wherein each participant was required to solve \(10\) different types of captchas. We collected information about participants' captcha solving times, relative preferences for captcha types, types of devices used, and various demographic information. One notable aspect of our user study is that we attempted to measure the impact of experimental context on participants' captcha solving times. Half of the participants were directly asked to solve captchas, whilst the other half were asked to create accounts, which involved solving captchas as part of the task. The latter setting was designed to measure captcha solving times _in the context_ of a typical web activity. One inherent limitation of any user study, especially when using MTurk, is that we cannot ensure that all participants who begin the study will complete it. All of our results should therefore be interpreted as referring to _users who are willing to solve captchas_, rather than users in general. Indeed, having noted that some participants began but did not complete our main study, we conducted a secondary MTurk study specifically designed to quantify how many users abandon their intended web activity when confronted with different types of captchas. We believe that captcha-induced _user abandonment_ is an important - yet understudied - consideration, since every abandoned task (e.g., purchase, account creation) represents a potential loss for the website. To facilitate reproducibility and enable further analysis, we provide the entire anonymized dataset collected during our user studies, along with our analysis code.2 Footnote 2: [https://github.com/sprout-uci/captcha-study](https://github.com/sprout-uci/captcha-study) ## 2 Research Questions & Main Findings We now present our research questions and summarize our main findings. Table 1 shows how our findings relate to prior work at a high level, with detailed comparisons in Section 7. 
**RQ1: How long do human users take to solve different types of captchas?** Specifically, we aimed to measure solving times for captchas that users are likely to encounter (e.g., those used on popular websites). Our results align with previous findings [24, 27, 33] in showing that there are significant differences in mean solving times between captcha types. For comparison, we also identified the current fastest attacks on each type of captcha (Table 3). **RQ2: What captcha types do users prefer?** In order to understand users' relative preference for various types of captchas, we asked participants to rate all captcha types on a Likert scale of \(1-5\), from least to most enjoyable. Our results show that there are marked differences in participants' preferences, with average preference scores ranging from \(2.76\) to \(3.94\). Our results also show that average solving time is _not fully correlated_ with participants' preferences, which means that other factors, beyond the amount of time required to solve a captcha, influence their preferences. Our analysis of data from prior studies [33, 48, 66] shows that their data supports this finding (even if they do not discuss it explicitly). **RQ3: Does experimental context affect solving time?** Specifically, we aimed to quantify the difference in solving times between the setting where participants are directly tasked with solving captchas versus the setting in which participants solve captchas as part of a typical web activity, such as user account creation. We therefore ran two separate versions of our main user study: _direct_ and _contextualized_, which we describe in detail in Section 4.2. Whilst there were several potential confounding factors in our study, our results show that experimental context could have an impact on captcha user studies, with the difference in mean solving times as high as \(57.5\%\) in our study. **RQ4: Do demographics affect solving time?** We analyzed different self-reported metrics including age, gender, country of residence, education, Internet usage, device type, and input method. In line with prior results [27], we found that all types of captchas take longer for older participants. Specifically, [27] reported an increase in solving time for text-based captchas of \(0.03\) seconds per year of participant age. Our results show an even stronger dependence, with an average increase across all captcha types of \(0.09\) seconds per year. Additionally, [27] showed that participants with a PhD solved captchas faster than all other educational groups. In contrast, our results show that our participants' self-reported level of education does not correlate with their solving times. **RQ5: Does experimental context influence abandonment?** Specifically, we aimed to quantify the extent to which abandonment within a captcha user study is influenced by i) experimental context, and ii) the amount of compensation offered. For different combinations of the above variables, we found that between \(18\%\) and \(45\%\) of participants abandoned the study after the presentation of the first captcha. Only one prior captcha user study [27] disclosed their observed rate of abandonment, which is similar to that observed in our study. Overall, participants in the contextualized setting were \(120\%\) more likely to abandon than their peers in the direct setting. This connection between experimental context and user abandonment is a new finding. 
## 3 Website Inspection To understand the landscape of modern captchas and guide the design of the subsequent user study, we manually inspected the 200 most popular websites from the Alexa Top Website list [2]. Where applicable, we use the terminology from the taxonomy proposed by Guerar et al. [40]. Our goal was to imitate a normal user's web experience and trigger captchas in a natural setting. Although captchas can be used to protect any section or action on a website, they are often encountered during user account creation to prevent bots from creating accounts. Thus, for each website, we investigated the process of creating an account (wherever available). Of the inspected websites, 185 had some type of account creation process, and we could successfully create accounts on 142 websites. Distinct domains operated by the same organization (e.g., amazon.com and amazon.co.jp) were counted separately. We visited each website twice: once with Google Chrome in incognito mode, and once with the Tor browser over the Tor network [17]. We used incognito mode to avoid websites changing their behavior based on cookies presented by our browser. We used Tor since anecdotal evidence suggests Tor users are asked to solve captchas more frequently and with greater difficulty than non-Tor users. If no captchas were displayed, we searched the page source for the string "captcha" (case insensitive). **Ethical considerations:** Based on the Guidelines for Internet Measurement Activities [28], we did not engage in malicious behavior that might trigger additional captchas. We used only manual analysis to avoid various challenges that arise from automated website crawling. ### Results and analysis Figure 1 shows the distribution of captcha types we observed during our inspection. The most prevalent types were: **reCAPTCHA** [11, 14, 15] was the most prevalent, appearing on 68 websites (34% of the inspected websites). This is a Google-owned and operated service that presents users with "click" tasks, which include behavioral analytics and may potentially result in an image challenge. reCAPTCHA allows website operators to select a difficulty level, ranging from "easiest for users" to "most secure". **Slider-based captchas** appeared on 14 websites (7%). These typically ask users to slide a puzzle piece into a corresponding empty spot using a drag interaction. The timing and accuracy are checked for bot-like behavior. **Distorted Text captchas** appeared on 14 websites (7%). We observed differences in terms of text type, color, length, masking, spacing, movement, and background. Text type varied in several ways: 2D or 3D, solid or hollow, font, and level of distortion. Certain captchas used masking, i.e., lines or shapes obscured parts of the letters. **Game-based captchas** appeared on 9 websites (4.5%). These present users with dynamic games and compute a risk profile from the results. For example, users are asked to rotate an image or select the correctly oriented image. **hCAPTCHA** [9] appeared on 1 website. This is a service provided by Intuition Machines, Inc. that was recently adopted by Cloudflare [57] and is gaining popularity. **Invisible captchas** were found on 12 websites (6%). These websites did not display any visible captchas, but contained the string "captcha" in the page source. 
**Other Captchas** found during our inspection included: a captcha resembling a scratch-off lottery ticket; a captcha asking users to locate Chinese characters within an image; and a proprietary captcha service called "NuCaptcha" [13]. ### Potential limitations **Choice of website list:** There are several lists of _"popular"_ websites that could be used for this type of study, including the Alexa Top Website list [2], Cisco Umbrella [6], Majestic [16], TRANCO [56], Cloudflare Radar [7], and SecRank TopDomain [71]. These lists vary because of the differences in the methodology used to identify and rank websites. Following the work of Bursztein et al. [27] and the recommendation of Scheitle et al. [60], we used the Alexa list.

\begin{table}
\begin{tabular}{p{142.3pt} p{142.3pt} p{142.3pt} p{142.3pt}}
\hline \hline
 & **Findings supporting prior work** & **Findings contradicting prior work** & **New findings on captchas** \\
\hline
**RQ1: How long does it take humans to solve different types of captchas?** & Solving time across captcha types has a large degree of variance. [24, 27, 33] & & \\
\hline
**RQ2: What captcha types do users prefer?** & Solving time is not correlated with user preference. [33, 48, 66] & & \\
\hline
**RQ3: Does experimental context affect solving time?** & & & Solving time is heavily influenced by experimental context, with differences in means up to 57.5\%. \\
\hline
**RQ4: Do demographics affect solving time?** & Age has an effect on solving time. [27] & Self-reported education does not correlate with solving time. [27] & \\
\hline
**RQ5: Does experimental context influence abandonment?** & High abandonment rates observed in captcha user studies. [27] & & Experimental context directly affects the rate of abandonment. \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Summary of research questions and main findings.

**Number of inspected websites:** Since our website inspection was a manual process, we could only inspect the top 200 websites. This may also introduce a degree of systematic bias towards the types of captchas used on the most popular websites. However, we specifically chose these websites because they are visited by large numbers of users. **Lower bound:** Since we did not exercise all possible functionality of every website, it is possible that we might not have encountered all captchas. Therefore, our results represent a lower bound, while the actual number of deployed captchas may be higher. Nevertheless, we believe that we identified the most prevalent captcha types across all inspected websites. **Timing:** Web page rankings change on a daily basis, and the captchas shown by a given service may change. Given that our inspection was performed at a particular point in time, the precise results would likely differ if the inspection were repeated at a different point in time. However, as explained above, we believe that the identified set of captcha types is representative of currently-deployed captchas. **Other types of captchas:** We only inspected mainstream websites (i.e., those that would appear on a top websites list). This means that there could be captchas that are prevalent on other types of websites (e.g., on the dark web) but are not included in our study. However, studying these _special-purpose_ captchas might require recruiting participants who have prior experience solving them, which was beyond the scope of our study. 
**Impact of limitations:** The above limitations could have had an impact on the set of captcha types we identified and subsequently used in our user study. However, we have high confidence that the captcha types we identified are a realistic sample of those a real user would encounter during typical web browsing. For instance, BuiltWith [5] has analyzed a dataset of 673 million websites and identified 15.2 million websites that use captchas. reCAPTCHA accounts for 97.3% and hCAPTCHA for a further 1.4%. The captcha types used in our study therefore account for over 98% of captchas in this large-scale dataset. ## 4 User Study Having identified the relevant captcha types, we conducted a \(1,000\)-participant online user study to evaluate real users' solving times and preferences for these types of captchas. Our study was run using Amazon MTurk and can be summarized in the following four phases: **1. Introduction:** Participants were first given an overview of the study and details of the tasks to complete. **2. Pre-study questions:** All participants were then asked to provide demographic information by answering the pre-study questions shown in Table 11 in Appendix B. **3. Tasks:** Participants were asked to complete tasks, which included solving exactly ten captchas, presented in random order. Unless otherwise stated, each captcha was _unique_ (i.e., freshly generated per participant). Participants had to solve each captcha in order to progress to the next step, thus preventing them from speeding through the study. **4. Post-study questions:** Finally, participants were asked questions about the captchas they had just solved. The exact questions and possible answers are shown in Table 11 in Appendix B. ### Choice of Captchas Based on our website inspection (Section 3), we selected the following ten types of captchas: * Two reCAPTCHA v2 captchas: one with the setting _easiest for users_ and the other with _most secure_. Note that we do not have control over whether the user is shown an image-based (Figure 2(a)) challenge in addition to the click-based (Figure 2(b)) task. * Two game-based captchas from Arkose Labs [4]: one required using arrows to rotate an object (Figure 3(a)) and the other required selecting the upright object (Figure 3(b)). * Two hCAPTCHAs [9]: one with easy and one with difficult settings (Figure 5). * One slider-based captcha from Geetest [8]: we selected Geetest because it was used on several of the inspected websites and offers a convenient API (Figure 4). * Three types of distorted text captchas (Figure 6): (a) the _simple_ version had four unobscured characters, (b) the _masked_ version had five characters and included some masking effects, and (c) the _moving_ version contained moving characters.

Figure 1: Discrete distribution of discovered captchas (full data available in the accompanying dataset).

These form a representative sample of captchas we encountered in our website inspection. Although hCAPTCHA only appeared once, we included it since it is an emerging image-based approach, which claims to be the largest independent captcha service [10]. ### Direct vs. contextualized settings We initially hypothesized that we would observe a difference in behavior depending on experimental context. In order to evaluate this, we designed two settings of the study: 500 participants completed the _direct setting_, whilst the other 500 completed the _contextualized setting_. In both settings, each participant solved exactly ten captchas in random order. 
**Direct setting:** This setting was designed to match previous captcha user studies, in which participants are directly asked to solve captchas. The MTurk study title was "CAPTCHA User Study" and the instructions in the first phase informed users that their task was to solve captchas. In the second phase, in addition to the basic demographic information, participants were asked about their experience with and perception of captchas; see Table 11 in Appendix B. In the third phase, participants were shown ten captchas in random order. The fourth phase was the same for both settings. **Contextualized setting:** This setting was designed to measure captcha solving behavior _in the context_ of a typical web activity. We selected the task of user account creation, as this often includes solving a captcha. The MTurk study title was "Account Creation User Study" and the first and second phases did not mention captchas. In the third phase, participants were asked to complete ten typical user account creation forms, each displaying a captcha _after_ the participant clicked submit, as is often the case on real websites. This sequencing allowed us to precisely measure the captcha solving time in isolation from the rest of the account creation task. The account creation task was a basic web form asking for a randomized subset of: name, email address, phone number, password, and address. To avoid collecting personally identifiable information, participants were provided with synthetic information at each step. Each page also included a large banner clearly stating not to enter any personal information. The fact that we were specifically measuring captcha solving time was only revealed to participants after they completed the first three phases.

Figure 3: Arkose Labs [4]. Figure 4: Geetest [8]. Figure 5: hCAPTCHA [9]. Figure 6: Distorted text captchas.

### Timeline and compensation The primary study ran for two months with a total of \(1,000\) distinct participants.3 Participants were initially paid $0.30 for completing the direct version and $0.75 for the contextualized version, as the latter involved a larger workload. After completing the study, we realized we may have unintentionally under-compensated participants,4 since the median HIT completion time was 4.4 and 11.5 minutes for direct and contextualized versions. We therefore retroactively doubled all participants' compensation to $0.60 and $1.50, which equates to approximately $7.80 - $8.20 per hour. Footnote 3: To the best of our knowledge, all participants were distinct. We configured Amazon MTurk to only allow unique accounts to participate. Footnote 4: In terms of US federal minimum wage. ### Ethical considerations This user study was duly approved by the Institutional Review Board (IRB) of the primary authors' organization. No sensitive or personally identifiable information was collected from participants. We used the pseudonymous MTurk worker IDs only to check that participants were unique. Since the contextualized setting did not inform participants of the actual aim of the study beforehand, two additional documents were filed and approved by the IRB: (1) _"Use of deception/incomplete disclosure"_ and (2) _"Waiver or Alteration of the Consent"_. After each participant completed the contextualized setting, we disclosed the study's actual goal and asked whether they gave us permission to use their data. No data were collected from participants who declined. 
### User study implementation The user study implementation comprised a front-end webpage and a back-end server. The front-end was a single HTML page that implemented the four phases described above. To prevent any inconsistencies, participants were prevented from going back to a previous phase or retrying a task once they had progressed. Timing events were captured with millisecond precision using the native JavaScript Date library and were recorded at several points for each captcha: request, serve, load, display, submit, and server response. We measured _solving time_ as the time between a captcha being displayed and the participant submitting a solution, as is done in prior captcha user studies [23, 24, 27, 34, 37, 38, 43, 47, 48, 52, 58, 67, 75]. Depending on the type of captcha, this might include multiple rounds or attempts. We used Amazon MTurk to recruit participants, host the front-end, and collect data. While most types of captchas shown by the front-end were served from their respective providers, distorted text captchas were not available from a third-party provider, as these are usually hosted by the websites themselves. We therefore set up our own back-end server to serve distorted text captchas. Specifically, we downloaded a total of 1,000 unique distorted text captchas of three different types, and stored these in a local MongoDB [19] database. We used a Node.js [20] server to retrieve and serve captchas from the database. Every participant was served one text captcha of each type, and each unique text captcha was served to three different participants. Table 2 shows the demographic information of the participants who completed the study. The demographics of the two subgroups who completed the direct and contextualized studies are very similar to each other. ### Potential limitations **Use of MTurk:** Webb et al. [69] reported several potential concerns regarding the quality of data collected from MTurk. Of their six criteria, our study did not implement two: consent quiz (1) and examination of qualitative responses (2), which we acknowledge as a limitation. The remaining four criteria can either be evaluated through collected data or are not an issue for our study. Eligibility (3) and attention check (4) can be verified via the accuracy of text-based captcha responses, which confirms that nearly all of our participants were focused and provided correct data. Response time (5) was within our expected range. Study completion (6) was not an issue, since each participant had to complete every captcha to proceed. **Bots and farms:** Similarly, Chmielewski et al. [30] reported a decrease in data quality, citing bot and farm activity.

\begin{table}
\begin{tabular}{l|l|l|l|l|l|l}
\hline \hline
**Age** & **Residence** & **Education** & **Gender** & **Device Type** & **Input Method** & **Internet Use** \\
\hline
30 - 39 (531) & USA (985) & Bachelors (822) & Male (832) & Computer (1301) & Keyboard (1261) & Work (860) \\
20 - 29 (403) & India (240) & Masters (243) & Female (557) & Phone (74) & Touch (125) & Web surf (397) \\
40 - 49 (271) & Brazil (50) & High school (210) & Non-Binary (11) & Tablet (25) & Other (14) & Education (87) \\
50 - 59 (106) & Italy (27) & Associate (98) & & & & Gaming (30) \\
\(\geq\) 60 (58) & UK (24) & PhD (24) & & & & Other (26) \\
18 - 19 (31) & Other (74) & No degree (3) & & & & \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Summary of demographic data for the \(1,400\) participants of the main user study. 
However, Moss and Litman [53] subsequently used several bot-detection measures to evaluate whether bots could be contaminating MTurk data, and found no evidence of bot activity. Every participant who completed our study solved ten modern captchas, which, although possible, would be more difficult for bots. Since we configured MTurk to only allow one completion per MTurk account, farm activity was also limited. Therefore, we are reasonably confident that our results are not influenced by bots or farms. **Choice of captchas:** One consequence of using the captcha types we identified in Section 3 is that our user study results are not directly comparable with those from prior captcha user studies. In general, it is difficult to directly compare such studies, as even if the same _types_ of captchas are studied, different implementations may be used, e.g., reCAPTCHA and hCAPTCHA are both image-based captchas, but could give different results. **Unmodified captchas:** In order to maximize the level of realism in our study, we used existing unmodified captchas. We therefore did not have fine-grained control over the precise behavior of these captchas, nor the ability to obtain more fine-grained measurements of participants' accuracy or performance beyond overall solving time. However, like previous studies, we consider overall solving time to be the most important measurable quantity. **Invalid inputs:** Unfortunately, the input field for the captcha preference question in our post-study questionnaire was a free text field rather than a pull-down menu. This allowed some participants to provide preference scores outside the requested 1-5 range. We therefore excluded invalid preference scores from 163 participants.5 Footnote 5: However, we have high confidence that these participants did not provide incorrect or rushed responses during the rest of the study because their average accuracy in text-based captchas was similar to the study-wide average. We therefore retained their measurements in other sections. **Abandonment:** Since we did not record how many participants began our main study, we cannot precisely quantify the rate of abandonment. To investigate this further, we performed an additional abandonment-focused study (Section 6), where we observed a 30% abandonment rate. We can therefore assume a similar abandonment rate for our main study. Whilst the impact of this level of abandonment is unclear, it could potentially affect the ecological validity of our results, as the participants who were willing to complete the study may not be representative of all users. **Confounding factors:** There were several differences between our direct and contextualized settings, some of which may be confounding factors when comparing these two groups. For example, participants in the contextualized setting had to do more work, so their attention or focus might have been reduced during captcha solving. Differences in compensation or participants' perceived benefit of completing the task (i.e., creating an account vs. solving a captcha) may have affected motivation or the likelihood of abandoning the task. ## 5 Results & Analysis This section presents the user study results. Unless otherwise indicated, results are based on the full set of participants. ### Solving times This subsection addresses **RQ1:**_How long do human users take to solve different types of captchas?_ Figure 7 shows the distribution of solving times for each captcha type. 
We observed a small number of extreme outliers where the participant likely switched to another task before returning to the study. We therefore filtered out the highest 50 solving times per captcha type, out of \(1,000\) total. For reCAPTCHA, the selection between image- or click-based tasks is made dynamically by Google. Whilst we know that 85% and 71% of participants (easy and hard setting) were shown a click-based captcha, the exact task-to-participant mapping is not revealed to website operators. We therefore assume that the slowest solving times correspond to image-based tasks. After disambiguation, click-based reCAPTCHA had the lowest median solving time at 3.7 seconds. Curiously, there was little difference between easy and difficult settings. The next lowest median solving times were for distorted text captchas. As expected, simple distorted text captchas were solved the fastest. Masked and moving versions had very similar solving times. For hCAPTCHA, there is a clear distinction between easy and difficult settings. The latter consistently served either a harder image-based task or increased the number of rounds. However, for both hCAPTCHA settings, the fastest solving times are similar to those of reCAPTCHA and distorted text. Finally, the game-based and slider-based captchas generally yielded higher median solving times, though some participants still solved these relatively quickly (e.g., \(<10\) seconds). With the exception of reCAPTCHA (click) and distorted text, we observed that solving times for other types have a relatively high variance. Some variance is expected, especially since these results encompass all input modalities across both direct and contextualized settings. However, _relative differences in variances_ indicate that, while some types of captchas are consistently solved quickly, most have a range of solving times across the user population. The full statistical analysis of our solving time results is presented in Appendix C.
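To make the trimming step concrete, the short sketch below computes per-type medians after discarding the 50 largest solving times of each captcha type. It is a minimal Python illustration with a hypothetical record layout; it is not the study's released analysis code.

```python
from collections import defaultdict
from statistics import median

def trimmed_medians(records, n_trim=50):
    """Median solving time per captcha type after discarding the
    n_trim largest observations of that type (outlier filtering)."""
    by_type = defaultdict(list)
    for captcha_type, solving_time in records:  # hypothetical record layout
        by_type[captcha_type].append(solving_time)
    medians = {}
    for captcha_type, times in by_type.items():
        times.sort()
        kept = times[:-n_trim] if len(times) > n_trim else times
        medians[captcha_type] = median(kept)
    return medians

# Toy usage with synthetic values (seconds):
records = [("recaptcha_click", 3.5), ("recaptcha_click", 3.9),
           ("recaptcha_click", 600.0), ("geetest_slider", 12.3)]
print(trimmed_medians(records, n_trim=1))
# {'recaptcha_click': 3.7, 'geetest_slider': 12.3}
```

With `n_trim=50` and roughly \(1,000\) observations per type, this mirrors the 5% upper trim described above.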
2305.04113
Inferring Covariance Structure from Multiple Data Sources via Subspace Factor Analysis
Factor analysis provides a canonical framework for imposing lower-dimensional structure such as sparse covariance in high-dimensional data. High-dimensional data on the same set of variables are often collected under different conditions, for instance in reproducing studies across research groups. In such cases, it is natural to seek to learn the shared versus condition-specific structure. Existing hierarchical extensions of factor analysis have been proposed, but face practical issues including identifiability problems. To address these shortcomings, we propose a class of SUbspace Factor Analysis (SUFA) models, which characterize variation across groups at the level of a lower-dimensional subspace. We prove that the proposed class of SUFA models lead to identifiability of the shared versus group-specific components of the covariance, and study their posterior contraction properties. Taking a Bayesian approach, these contributions are developed alongside efficient posterior computation algorithms. Our sampler fully integrates out latent variables, is easily parallelizable and has complexity that does not depend on sample size. We illustrate the methods through application to integration of multiple gene expression datasets relevant to immunology.
Noirrit Kiran Chandra, David B. Dunson, Jason Xu
2023-05-06T18:35:13Z
http://arxiv.org/abs/2305.04113v3
# Inferring Covariance Structure from Multiple Data Sources via Subspace Factor Analysis ###### Abstract Factor analysis provides a canonical framework for imposing lower-dimensional structure such as sparse covariance in high-dimensional data. High-dimensional data on the same set of variables are often collected under different conditions, for instance in reproducing studies across research groups. In such cases, it is natural to seek to learn the shared versus condition-specific structure. Existing hierarchical extensions of factor analysis have been proposed, but face practical issues including identifiability problems. To address these shortcomings, we propose a class of SUbspace Factor Analysis (SUFA) models, which characterize variation across groups at the level of a lower-dimensional subspace. We prove that the proposed class of SUFA models lead to identifiability of the shared versus group-specific components of the covariance, and study their posterior contraction properties. Taking a Bayesian approach, these contributions are developed alongside efficient posterior computation algorithms. Our sampler fully integrates out latent variables, is easily parallelizable and has complexity that does not depend on sample size. We illustrate the methods through application to integration of multiple gene expression datasets relevant to immunology. _Keywords_: Data-augmented Markov chain Monte Carlo, Data Integration, Gradient-Based Sampling, Latent Variable Models, Multi-Study Factor Analysis. ## 1 Introduction With increasing calls for reproducibility and transparency in science, it has become standard to make datasets widely available to the scientific community. This motivates interest in aggregating different but related datasets. A prominent example arises in gene network analyses where it is of interest to merge datasets from studies that consider a common set of genes. Combining datasets increases sample size in studying covariance structure among the genes. To conduct principled inference, it is important to account for differences across studies, and not simply pool the data. This article focuses on developing statistical methods for inferring common versus study-specific covariance structures, with particular motivation from multi-study applications in the high-dimensional setting. These methods immediately apply to data from a single study featuring hierarchical, multi-group structure as well. Consider the analysis of gene expression levels in immune cells. Integrating data from multiple studies serves to (1) increase statistical precision in making inferences on covariance structure between genes; (2) yield results that are more robust to study-to-study variability and hence more generalizable; and (3) obtain insight into shared versus study-specific contributors to the covariance. We focus on datasets from the Immunological Genome Project (Heng _et al._, 2008), comprising microarray assays as well as bulk RNA sequencing data as shown in Figure S.3 in the supplementary materials. Substantial similarity between the datasets can be observed, albeit with a significant amount of heterogeneity. Such heterogeneity is typical and arises due to differences between the subject populations, and variation in data collection technologies inducing platform-specific effects in the respective datasets. 
Given our interest in the covariance structure of high-dimensional data, it is natural to consider Bayesian sparse factor analysis (FA) models, which achieve state-of-the-art performance in the single-study case (Fan _et al._, 2008; Knowles and Ghahramani, 2011; Trendafilov _et al._, 2017). Even when different studies collect the same variables for closely-related populations, applying factor analysis separately to each dataset can lead to very different inferred factor structures. There are recent approaches extending factor analysis to the multi-study setting--notably, De Vito _et al._ (2019, 2021) proposed a conceptually simple multi-study FA model. This appealing model includes shared and study-specific components via an additive expansion but is known to face identifiability issues, discussed further below. Alternatively, Roy _et al._ (2021) proposed a multiplicative 'perturbed' FA model that focuses on inferring the shared structure while making use of subject-specific perturbations to resolve identifiability issues in post-processing steps. Motivated by the need to surmount the identifiability issues arising in multi-study factor analysis, we propose a novel shared _SUbspace Factor Analysis (SUFA)_ model. In particular, SUFA assumes that there is a common lower-dimensional subspace shared across the multiple studies under consideration. The data from different studies all provide partial information about the shared subspace, providing a mechanism for borrowing of information. Moreover, this subspace is key to yielding identifiability of the common versus study-specific components of the covariance. Related work by Franks and Hoff (2019) focuses on identifying the _best_ shared subspace across groups. In contrast, we focus on learning the shared versus study-specific contributors to covariance structure. Our factor analysis approach allows one to infer latent factors jointly with the structure of their loadings, providing valuable interpretations in applications including gene expression studies (Iacob _et al._, 2016). We focus on identifiability up to the usual rotational ambiguity encountered in factor modeling. Then, following standard practice in the Bayesian factor modeling literature, we rotationally align the factors across MCMC iterations in a post-processing stage. By using a fast algorithm recently introduced by Poworoznek _et al._ (2021), we maintain sparsity in the post-processed samples of the loadings by leveraging a key sparsity-inducing property of the varimax rotation (Kaiser, 1958; Rohe and Zeng, 2020). Sparse loadings have considerable interpretability advantages; for example, a subset of genes that loads onto a particular factor is commonly interpreted as belonging to the same pathway. To provide theoretical support in terms of covariance estimation, we derive favorable posterior contraction rates for subspace factor models in a high-dimensional setting. Our analysis shows that the shared covariance structure can be recovered with added precision for each additional study, even when the marginal distributions of data differ substantially across studies. Although the classical factor analysis literature relies heavily on choosing the number of latent factors correctly, we show that our covariance matrix estimation is robust to this choice. Our model is developed alongside computational contributions to enable efficient posterior inference. 
Latent variable-based Gibbs sampling (Bhattacharya and Dunson, 2011; Sabnis _et al._, 2016) or expectation-maximization (EM) algorithms (Rockova and George, 2016) are the most popular approaches for Bayesian factor models. However, the conditional updating can become a computational bottleneck for massive sample sizes, further exacerbated in multi-study settings. Dai _et al._ (2020) proposed a fast matrix-free approach for exploratory factor analysis but their approach does not apply in Bayesian contexts. We develop a scalable (large \(n\)) Hamiltonian Monte Carlo (HMC, Neal, 2011) sampler that jointly updates parameters while marginalizing out latent factors. This leads to significant improvements in mixing relative to Gibbs sampling. The proposed sampler depends only on the study-specific sample covariance matrices, which can be computed and cached prior to running MCMC. The computational complexity is then essentially invariant to the sample size, and we design a distributed computing framework to parallelize each MCMC iteration. The remainder of the article is organized as follows: Section 2 describes the SUFA model, and Section 2.1 discusses and resolves the potential identifiability issues common to multi-study factor models. Section 2.2 discusses our choice of prior and other important specifications. Section 3 establishes posterior contraction rates in high-dimensional settings. These contributions are made practical in Section 4, where we propose a scalable HMC algorithm for posterior sampling. Section 5.1 compares SUFA with existing approaches via a suite of simulation studies, and the methods are applied in an integrative gene network analysis case study in Section 5.2. We close by discussing our contributions and future directions in Section 6. ## 2 Bayesian Subspace Factor Analysis (SUFA) In this article, we consider the setting where data consist of \(S\) studies each comprising \(d\)-variate observations \(\mathbf{Y}_{s,i}=(Y_{s,i,1},\ldots,Y_{s,i,d})^{\mathrm{T}}\), \(i=1,\ldots,n_{s}\), \(s=1,\ldots,S\) measured on the same set of features with \(\mathbf{Y}_{s,i}\overset{\mathrm{iid}}{\sim}\mathrm{N}(\boldsymbol{\mu}_{s}, \boldsymbol{\Sigma}_{s})\). Without loss of generality, we assume \(\boldsymbol{\mu}_{s}=\mathbf{0}\) following standard practice to center the data. In our motivating application, we seek to jointly learn from two microarray datasets and a bulk RNA-seq dataset: we have \(S=3\) studies each analyzing \(d=474\) genes. In studying the correlation structure between genes, we expect a common fundamental association between features across the studies, along with study-specific variations. Hence, we let \(\boldsymbol{\Sigma}_{s}=\boldsymbol{\Sigma}+\boldsymbol{\Gamma}_{s}\) where \(\boldsymbol{\Sigma}\) is a positive definite matrix quantifying the shared structure, and \(\boldsymbol{\Gamma}_{s}\) accounts for the respective study-specific dependencies. Bayesian factor analysis often yields state-of-the-art performance in single-study high-dimensional covariance estimation (Bhattacharya and Dunson, 2011; Pati _et al._, 2014; Rockova and George, 2016). 
Adopting the usual factor-analytic factorization of the shared \(\boldsymbol{\Sigma}\) as the sum of a low-rank and a diagonal matrix, we let \(\boldsymbol{\Sigma}=\boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\mathrm{T}}+\boldsymbol{\Delta}\), where \(\boldsymbol{\Lambda}\) is a \(d\times q\) factor loading matrix, with \(q\ll d\) the number of latent factors, and \(\mathbf{\Delta}=\text{diag}(\delta_{1}^{2},\ldots,\delta_{d}^{2})\) is a diagonal matrix of residual variances. Evidence in the literature suggests that expression-level dependent error variances tend to produce better results (Kepler _et al._, 2002), so we do not assume \(\mathbf{\Delta}=\sigma^{2}\mathbf{I}_{d}\). To model the study-specific deviations, we let \(\mathbf{\Gamma}_{s}=\mathbf{\Lambda}\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}\mathbf{\Lambda}^{\mathrm{T}}\), where \(\mathbf{A}_{s}\) is a \(q\times q_{s}\) matrix. This leads to the following expression for the covariance specific to study \(s\): \(\mathbf{\Sigma}_{s}=\mathbf{\Lambda}\mathbf{\Lambda}^{\mathrm{T}}+\mathbf{\Lambda}\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}\mathbf{\Lambda}^{\mathrm{T}}+\mathbf{\Delta}.\) This can equivalently be written \[\mathbf{Y}_{s,i}=\mathbf{\Lambda}\mathbf{\eta}_{s,i}+\mathbf{\Lambda}\mathbf{A}_{s}\mathbf{\zeta}_{s,i}+\mathbf{\epsilon}_{s,i},\ \mathbf{\eta}_{s,i}\overset{\text{iid}}{\sim}\text{N}_{q}(\mathbf{0},\mathbf{I}_{q}),\ \mathbf{\zeta}_{s,i}\overset{\text{iid}}{\sim}\text{N}_{q_{s}}(\mathbf{0},\mathbf{I}_{q_{s}}),\ \mathbf{\epsilon}_{s,i}\overset{\text{iid}}{\sim}\text{N}_{d}(\mathbf{0},\mathbf{\Delta}), \tag{1}\] where \(\mathbf{\eta}_{s,i}\) is a \(q\) dimensional latent factor in the shared subspace, \(\mathbf{\zeta}_{s,i}\) are study-specific latent factors of dimension \(q_{s}\), and \(\mathbf{\epsilon}_{s,i}\) are mean zero Gaussian error terms. Because \(\mathbf{\eta}_{s,i}\) are supported on the same subspace, our hierarchical model allows borrowing of information across studies in estimating \(\mathbf{\Sigma}=\mathbf{\Lambda}\mathbf{\Lambda}^{\mathrm{T}}+\mathbf{\Delta}\). This is formalized in Section 3. Model (1) implies the following marginal distributions \[\mathbf{Y}_{s,i}\overset{\text{iid}}{\sim}\text{N}_{d}(\mathbf{0},\mathbf{\Lambda}\mathbf{\Lambda}^{\mathrm{T}}+\mathbf{\Lambda}\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}\mathbf{\Lambda}^{\mathrm{T}}+\mathbf{\Delta})\ \text{for}\ s=1,\ldots,S. \tag{2}\] Typically we take \(q_{s}<q\), as integrative analyses of multiple studies are meaningful when the signals are mostly shared across studies. Imposing that \(q_{s}<q\) allows \(\mathbf{\Lambda}\) to be the dominant term explaining the variation across the studies. The following sections make this intuition rigorous, and show how it ensures identifiability. Related work: Under the same data organization, De Vito _et al._ (2019, 2021) define the multi-study factor analysis (MSFA) model: \[\mathbf{Y}_{s,i}=\mathbf{\Lambda}\mathbf{\eta}_{s,i}+\mathbf{\Phi}_{s}\mathbf{\zeta}_{s,i}+\bm{\epsilon}_{s,i},\ \mathbf{\eta}_{s,i}\overset{\text{iid}}{\sim}\text{N}_{q}(\mathbf{0},\mathbf{I}_{q}),\ \mathbf{\zeta}_{s,i}\overset{\text{iid}}{\sim}\text{N}_{q_{s}}(\mathbf{0},\mathbf{I}_{q_{s}}),\ \mathbf{\epsilon}_{s,i}\overset{\text{iid}}{\sim}\text{N}_{d}(\mathbf{0},\mathbf{\Delta}_{s}), \tag{3}\] where \(\mathbf{\Lambda}\) is the \(d\times q\) shared factor loading matrix and \(\mathbf{\Phi}_{s}\)'s are the \(d\times q_{s}\) study-specific loading matrices, with the remaining notation matching ours above. 
In contrast to (2), the implied marginal distribution is \(\mathbf{Y}_{s,i}\overset{\text{iid}}{\sim}\text{N}_{d}(\mathbf{0},\mathbf{\Lambda} \mathbf{\Lambda}^{\text{T}}+\mathbf{\Phi}_{s}\mathbf{\Phi}_{s}^{\text{T}}+\mathbf{\Delta}_{s})\). While clearly related, this formulation assumes separate arbitrary \(d\times q_{s}\) matrices \(\mathbf{\Phi}_{s}\) and residual error variances \(\mathbf{\Delta}_{s}\) for each study. This may seem more flexible, but gives rise to a critical identifiability issue-- the data can be fitted equally well if the shared \(\mathbf{\Lambda}\) is completely ignored. This has adverse practical implications, especially in high-dimensional applications; Section 2.1 details how our method avoids these pitfalls. Perturbed factor analysis (PFA, Roy _et al._, 2021) takes an altogether different approach with the same aim of learning shared covariance structure: \[\mathbf{Q}_{s}\mathbf{Y}_{s,i}=\mathbf{\Lambda}\boldsymbol{\eta}_{s,i}+ \boldsymbol{\epsilon}_{s,i},\qquad\boldsymbol{\eta}_{s,i}\overset{\mathrm{iid }}{\sim}\mathrm{N}(\mathbf{0},\mathbf{I}_{q}),\qquad\boldsymbol{\epsilon}_{s,i }\overset{\mathrm{iid}}{\sim}\mathrm{N}(\mathbf{0},\mathbf{\Delta}), \tag{4}\] where \(\mathbf{\Lambda}\) is the \(d\times q\) common factor loading matrix, \(\mathbf{Q}_{s}\) is a \(d\times d\) perturbation matrix, and the remaining notation as above. Though it overcomes some of the pitfalls of the MSFA models, the study-specific effects cannot be recovered under PFA, and the introduction of \(\mathbf{Q}_{s}\) makes it difficult to scale beyond a few hundred dimensions. ### Model Identifiablity Guarantees Factor analytic models such as SUFA are prone to two key identifiability issues: **(I)**: **Information Switching:** If there exists a \(q\times q\) non-null symmetric matrix \(\mathbf{C}\) such that \(\mathbf{I}_{q}-\mathbf{C}\succ\mathbf{0}\) and \(\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}+\mathbf{C}=\widetilde{\mathbf{A}}_{s }\widetilde{\mathbf{A}}_{s}^{\mathrm{T}}\) for some \(q\times q_{s}\) matrix \(\widetilde{\mathbf{A}}_{s}\) for all \(s=1,\ldots,S\), then multiple choices of \(\mathbf{\Lambda}\) and \(\mathbf{A}_{s}\) yield the same marginal distribution in (2), leading to an identifiability issue between \(\mathbf{\Lambda}\) and the \(\mathbf{\Lambda}\mathbf{A}_{s}\)'s. **(II)**: **Rotational Ambiguity:** Let \(\widetilde{\mathbf{\Lambda}}=\mathbf{\Lambda}\mathbf{H}\) and \(\widetilde{\mathbf{A}}_{s}=\mathbf{H}^{\mathrm{T}}\mathbf{A}_{s}\mathbf{H}_{s}\) where \(\mathbf{H}\) and \(\mathbf{H}_{s}\)'s are orthogonal matrices of order \(q\times q\) and \(q_{s}\times q_{s}\) respectively. Then substituting \(\mathbf{\Lambda}\) and \(\mathbf{A}_{s}\)'s by \(\widetilde{\mathbf{\Lambda}}\) and \(\widetilde{\mathbf{A}}_{s}\)'s respectively in (2) yields identical marginal distributions. This _information switching_ is a crucial issue if \(\mathbf{\Sigma}=\mathbf{\Lambda}\mathbf{\Lambda}^{\mathrm{T}}+\mathbf{\Delta}\) is to be interpreted as the shared covariance term, since \(\mathbf{\Sigma}\) will not be identifiable. To establish some intuition, we consider an example where \(d=5\), \(S=2\), \(q=3\) with \(q_{1}=q_{2}=2\), \[\mathbf{\Lambda}=\begin{bmatrix}7&5&6\\ 6&6&7\\ 6&9&4\\ 5&5&6\\ 4&6&6\end{bmatrix},\quad\mathbf{A}_{1}=\begin{bmatrix}3&0\\ 0&2\\ 0&0\end{bmatrix}\quad\text{ and }\mathbf{A}_{2}=\begin{bmatrix}0&0\\ 2&0\\ 0&4\end{bmatrix}. 
\tag{5}\] Under the SUFA model, we can write \(\mathbf{Y}_{s,i}=\begin{bmatrix}\mathbf{\Lambda}&\mathbf{\Lambda}\mathbf{A}_{s}\end{bmatrix}\begin{bmatrix}\boldsymbol{\eta}_{s,i}\\ \boldsymbol{\zeta}_{s,i}\end{bmatrix}+\boldsymbol{\epsilon}_{s,i}\). This implies that for each of the two "studies", \(\mathbf{Y}_{s,i}\) is given by \[\mathbf{Y}_{1,i}=\underbrace{\begin{bmatrix}7&5&6&21&10\\ 6&6&7&18&12\\ 6&9&4&18&18\\ 5&5&6&15&10\\ 4&6&6&12&12\end{bmatrix}}_{[\mathbf{\Lambda}\;\;\mathbf{\Lambda}\mathbf{A}_{1}]}\begin{bmatrix}\boldsymbol{\eta}_{1,i}\\ \boldsymbol{\zeta}_{1,i}\end{bmatrix}+\boldsymbol{\epsilon}_{1,i}=\underbrace{\begin{bmatrix}7&5&6&10&21\\ 6&6&7&12&18\\ 6&9&4&18&18\\ 5&5&6&10&15\\ 4&6&6&12&12\end{bmatrix}}_{[\widetilde{\mathbf{\Lambda}}\;\;\widetilde{\mathbf{\Lambda}}\widetilde{\mathbf{A}}_{1}]}\begin{bmatrix}\boldsymbol{\tilde{\eta}}_{1,i}\\ \boldsymbol{\tilde{\zeta}}_{1,i}\end{bmatrix}+\boldsymbol{\epsilon}_{1,i}, \tag{6}\] \[\mathbf{Y}_{2,i}=\underbrace{\begin{bmatrix}7&5&6&10&24\\ 6&6&7&12&28\\ 6&9&4&18&16\\ 5&5&6&10&24\\ 4&6&6&12&24\end{bmatrix}}_{[\mathbf{\Lambda}\;\;\mathbf{\Lambda}\mathbf{A}_{2}]}\begin{bmatrix}\boldsymbol{\eta}_{2,i}\\ \boldsymbol{\zeta}_{2,i}\end{bmatrix}+\boldsymbol{\epsilon}_{2,i}=\underbrace{\begin{bmatrix}7&5&6&10&24\\ 6&6&7&12&28\\ 6&9&4&18&16\\ 5&5&6&10&24\\ 4&6&6&12&24\end{bmatrix}}_{[\widetilde{\mathbf{\Lambda}}\;\;\widetilde{\mathbf{\Lambda}}\widetilde{\mathbf{A}}_{2}]}\begin{bmatrix}\boldsymbol{\tilde{\eta}}_{2,i}\\ \boldsymbol{\tilde{\zeta}}_{2,i}\end{bmatrix}+\boldsymbol{\epsilon}_{2,i}, \tag{7}\] where \(\widetilde{\mathbf{\Lambda}}\) denotes the \(5\times 4\) matrix obtained by appending the column \(2\mathbf{\Lambda}\begin{bmatrix}0&1&0\end{bmatrix}^{\mathrm{T}}\) to \(\mathbf{\Lambda}\). That the above equations can each be written in two equivalent decompositions illustrates the _information switching_ problem: we cannot distinguish whether the column \(\mathbf{\Lambda}\times\begin{bmatrix}0&2&0\end{bmatrix}^{\mathrm{T}}=\begin{bmatrix}10&12&18&10&12\end{bmatrix}^{\mathrm{T}}\) is a part of the shared effect or the study-specific effect. Although the \(\mathbf{A}_{s}\) matrices should take into account the study-specific variations only, the individual effect of one study in this example can be at least "partially explained" by the others. From the perspective of the statistical model, the \(\mathbf{A}_{s}\)'s are no longer study-specific, but absorb a part of the shared variation which should be captured entirely by \(\mathbf{\Lambda}\). In Section 2.1.1 we identify the necessary and sufficient condition causing this issue and propose an _almost sure_ solution. Rotational ambiguity has been well documented in the prior literature, affecting both MSFA and PFA. In addition, MSFA suffers from information switching--briefly, for a \(d\times d\) order symmetric matrix \(\widetilde{\mathbf{C}}\) such that \(\mathbf{\Lambda}\mathbf{\Lambda}^{\mathrm{T}}-\widetilde{\mathbf{C}}\succ\mathbf{0}\), \(\mathbf{\Phi}_{s}\mathbf{\Phi}_{s}^{\mathrm{T}}+\widetilde{\mathbf{C}}=\widetilde{\mathbf{\Phi}}_{s}\widetilde{\mathbf{\Phi}}_{s}^{\mathrm{T}}\) for some \(d\times q_{s}\) order matrix \(\widetilde{\mathbf{\Phi}}_{s}\). To resolve this, De Vito _et al._ (2019) restrict the augmented matrix \(\begin{bmatrix}\mathbf{\Lambda}&\mathbf{\Phi}_{1}&\cdots&\mathbf{\Phi}_{S}\end{bmatrix}\) to be lower-triangular. Doing so incurs the tradeoff of introducing an order dependence (Fruhwirth-Schnatter and Lopes, 2018); when no natural ordering exists, results can be very sensitive to permutations of the variables or studies (Carvalho _et al._, 2008). These structural constraints can also affect mixing and yield inconsistent estimates (Millsap, 2001; Erosheva and Curtis, 2017). On the other hand, Roy _et al._ (2021) set \(\mathbf{Q}_{1}=\mathbf{I}_{d}\) to impose identifiability in PFA, which can again be sensitive to the choice of the "first" study. 
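The switching in this example is easy to verify numerically. The following minimal check (a numpy sketch for illustration, not code from the paper) confirms that the two decompositions of study 1 in (6) imply exactly the same low-rank part of the marginal covariance:

```python
import numpy as np

# Matrices from example (5)
L = np.array([[7, 5, 6], [6, 6, 7], [6, 9, 4], [5, 5, 6], [4, 6, 6]], float)
A1 = np.array([[3, 0], [0, 2], [0, 0]], float)

# Left-hand decomposition in (6): shared L, study-specific part L @ A1
M = np.hstack([L, L @ A1])                    # 5 x 5

# Right-hand decomposition: the column 2 * L[:, 1] is moved into the
# shared loading matrix; 3 * L[:, 0] remains study-1-specific
L_tilde = np.hstack([L, 2 * L[:, [1]]])       # 5 x 4 shared matrix
M_tilde = np.hstack([L_tilde, 3 * L[:, [0]]]) # 5 x 5

# Identical implied covariances: Cov(Y_1) = M M^T + Delta in both cases
assert np.allclose(M @ M.T, M_tilde @ M_tilde.T)
```

Since both factorizations yield identical likelihoods, the data alone cannot attribute the column \(2\mathbf{\Lambda}\begin{bmatrix}0&1&0\end{bmatrix}^{\mathrm{T}}\) to the shared or the study-specific component; the next section shows how this is ruled out almost surely.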
In the following section, we provide _almost sure_ solutions to the identifiability issues under much milder conditions, avoiding any such structural assumptions. #### 2.1.1 Resolving Information Switching We now derive the necessary and sufficient conditions for information switching. **Lemma 1**.: _Assume that the data admits the marginal distribution in (2) for all the studies. Then information switching occurs if and only if there exists \(\mathbf{\Lambda}\) and \(\mathbf{A}_{s}\)s such that \(\mathbf{Y}_{s,i}\stackrel{{\mathrm{iid}}}{{\sim}}\mathrm{N}_{d}(\mathbf{0},\mathbf{\Lambda}\mathbf{\Lambda}^{\mathrm{T}}+\mathbf{\Lambda}\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}\mathbf{\Lambda}^{\mathrm{T}}+\mathbf{\Delta})\) and \(\bigcap_{s=1}^{S}\mathbb{C}\left(\mathbf{A}_{s}\right)\) is non-null, where \(\mathbb{C}\left(\mathbf{A}\right)\) denotes the column space of a matrix \(\mathbf{A}\)._ Intuitively, a non-null \(\bigcap_{s=1}^{S}\mathbb{C}\left(\mathbf{A}_{s}\right)\) implies that the \(\mathbf{A}_{s}\) matrices are no longer study-specific. As we saw in example (5), \(\mathbf{A}_{s}\) can absorb a part of the shared variation which is meant to be captured by \(\mathbf{\Lambda}\). The lemma provides the requisite insight to avoid this, suggesting a very simple condition ensuring that \(\mathbf{\Lambda}\) fully explains the shared covariance. **Theorem 1**.: _If \(\sum_{s=1}^{S}q_{s}\leq q\) then information switching has zero support under any non-degenerate continuous prior on \(\mathbf{A}_{s}\)._ Intuitively, aggregating multiple data views is a fruitful pursuit only when the datasets share enough similarity or structure; it is natural to expect that our subspace factor model is well-posed when there are fewer study-specific latent factors than shared factors. Theorem 1 makes this intuition precise, showing that the potential problem of information switching is resolved _almost surely_--under any continuous prior on \(\mathbf{A}_{s}\)--as long as \(\sum_{s=1}^{S}q_{s}\leq q\). The assumption is therefore a natural one, and \(\mathbf{\Sigma}\) is completely identifiable as a result, allowing us to study and interpret the interaction between variables. To illustrate concretely in the context of the above example, consider setting the shared \(\mathbf{\widetilde{\Lambda}}\) to be a \(5\times 4\) matrix and \(q_{1}=q_{2}=1\), i.e., the right-hand sides of (6) and (7). Now, \(\mathbf{\widetilde{\Lambda}}\) accounts for the variability in the marginal distribution (2), and in turn the parts explained by \(\mathbf{A}_{1}\) and \(\mathbf{A}_{2}\) no longer intersect. Notably, the strategy of restricting the augmented matrix \(\begin{bmatrix}\mathbf{\Lambda}&\mathbf{\Phi}_{1}&\cdots&\mathbf{\Phi}_{S}\end{bmatrix}\) to be lower-triangular by De Vito _et al._ (2019) also requires a similar condition \(q+\sum_{s=1}^{S}q_{s}\leq d\), but is affected by the problem of order-dependence. #### 2.1.2 Resolving Rotational Ambiguity Identifiability with respect to orthogonal rotation (**II**) is not essential for estimating the shared covariance \(\mathbf{\Sigma}\), since \(\mathbf{\Lambda}\mathbf{\Lambda}^{\mathrm{T}}\) remains identifiable. However, rotational ambiguity does create obstacles to inferring interpretable lower-dimensional factors (Russell, 2002; Iacob _et al._, 2016). 
To avoid order dependence arising from structural assumptions, it is common to address the issue via post-processing the samples (Legramanti _et al._, 2020; Roy _et al._, 2021; De Vito _et al._, 2021; Papastamoulis and Ntzoufras, 2022, among others). Interpretability of the factor loadings is greatly enhanced by sparsity--for example, the sparse subset of genes with nonzero loadings onto a common factor can be associated with a common biological pathway. Even when one places a shrinkage prior on the loadings to favor (near) sparsity, this desired structure is potentially destroyed after applying post-processing algorithms to the MCMC samples of the loadings. To avoid this, we adopt the approach of Poworoznek _et al._ (2021). The method first tackles the generic rotational invariance across the MCMC samples using an _orthogonalization step_ leveraging the varimax transformation (Kaiser, 1958) to resolve ambiguity up to switching of the column labels and signs. These ambiguities are both resolved in the next step by _matching_ each MCMC sample to a reference matrix called a _pivot_, aligning samples via a greedy maximization scheme. The post-processed MCMC samples can thus be considered _matched and aligned_ with respect to a common orthogonal transformation for inference downstream. Critically, the varimax rotation in the _orthogonalization step_ implicitly induces sparsity (Rohe and Zeng, 2020). In more detail, the varimax criterion is given by \[\mathbf{H}_{\text{VARIMAX}}=\arg\max_{\mathbf{H}}\bigg{[}\tfrac{1}{d}\sum_{h=1}^{q}\sum_{j=1}^{d}(\mathbf{\Lambda H})_{j,h}^{4}-\sum_{h=1}^{q}\Big{\{}\tfrac{1}{d}\sum_{j=1}^{d}(\mathbf{\Lambda H})_{j,h}^{2}\Big{\}}^{2}\bigg{]},\] and describes the optimal rotation maximizing the sum of the variances of squared loadings. Intuitively, this is achieved if (i) any given variable has a high loading on a single factor but near-zero loadings on the others, and (ii) any given factor is constituted by a few variables with very high loadings but near-zero support from the others. As a result, the summary matrix obtained from the posterior MCMC samples of \(\mathbf{\Lambda}\) is sparse upon applying varimax rotation, while the marginal distribution of the data is unaffected. Section S.4.2 of the supplementary materials contains thorough simulations showing that this strategy accurately recovers \(\mathbf{\Lambda}\) consistently across several realistic scenarios. As \(\mathbf{\Lambda}^{(s)}=\mathbf{\Lambda}\mathbf{A}_{s}\) can be defined to denote the study-specific loading matrices, the above prescription applies similarly to obtaining study-specific loadings from samples of \(\mathbf{\Lambda}^{(s)}\). ### Latent Dimensions and Prior Specification Latent Dimensions: In most practical applications, we do not know the column dimensions of \(\mathbf{\Lambda}\) and \(\mathbf{A}_{s}\), denoted \(q\) and \(q_{s}\) respectively. Although one could choose priors on \(q\) and \(q_{s}\) and implement reversible-jump algorithms (Green, 1995), this often leads to inefficient computation, particularly in large \(d\) cases. Instead, it has become common practice in the single-study literature to use overfitted factor models by first fixing \(q\) at some upper bound and then leveraging appropriate priors to shrink the extra columns (Legramanti _et al._, 2020; Schiavon _et al._, 2022, and related others). This strategy substantially simplifies MCMC implementations. 
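Returning briefly to the varimax rotation used in the post-processing step above: the displayed criterion can be optimized with the classical SVD-based iteration, sketched here in plain numpy. This is a generic textbook implementation offered for illustration, not the Poworoznek _et al._ (2021) code, which additionally performs the pivot-matching step.

```python
import numpy as np

def varimax(Lam, tol=1e-8, max_iter=500):
    """Return (Lam @ H, H), where the orthogonal H (approximately)
    maximizes the varimax criterion of the rotated loadings."""
    d, q = Lam.shape
    H = np.eye(q)
    obj_old = 0.0
    for _ in range(max_iter):
        L = Lam @ H
        # Kaiser's criterion: iterate by projecting the objective's
        # "gradient" matrix back onto the orthogonal group via SVD
        G = Lam.T @ (L**3 - L @ np.diag(np.mean(L**2, axis=0)))
        U, s, Vt = np.linalg.svd(G)
        H = U @ Vt
        if s.sum() < obj_old * (1 + tol):
            break
        obj_old = s.sum()
    return Lam @ H, H
```

Applied to each orthogonalized MCMC draw of \(\mathbf{\Lambda}\) (and of \(\mathbf{\Lambda}^{(s)}\)), the rotation concentrates each variable's loading on few factors, which is what makes the post-processed summaries sparse; the remaining column-permutation and sign ambiguities are what the matching step resolves.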
### Latent Dimensions and Prior Specification

Latent Dimensions: In most practical applications, we do not know the column dimensions of \(\mathbf{\Lambda}\) and \(\mathbf{A}_{s}\), denoted \(q\) and \(q_{s}\) respectively. Although one could choose priors on \(q\) and \(q_{s}\) and implement reversible-jump algorithms (Green, 1995), this will often lead to inefficient computation, particularly in large-\(d\) cases. Instead, it has become common practice in the single-study literature to use overfitted factor models by first fixing \(q\) at some upper bound and then leveraging appropriate priors to shrink the extra columns (Legramanti _et al._, 2020; Schiavon _et al._, 2022, among others). This strategy substantially simplifies MCMC implementations. To choose upper bounds on the number of factors in practice, we employ augmented implicitly restarted Lanczos bidiagonalization (Baglama and Reichel, 2005) to obtain approximate singular values and eigenvectors of the pooled dataset, choosing the smallest \(\widehat{q}\) that explains at least 95% of the variability in the data. After doing so, the simple choice \(\widehat{q}_{s}=\widehat{q}/S\) satisfies the conditions in Theorem 1. We later show in Section 3 that recovering the marginal covariance is asymptotically robust with respect to the choice of \(q\) and \(q_{s}\) under appropriate priors on \(\mathbf{\Lambda}\).
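As an illustration of this choice of upper bounds, the following minimal sketch selects \(\widehat{q}\) from the pooled data; a dense SVD stands in for the implicitly restarted Lanczos bidiagonalization used in practice, and the function name is ours.

```python
import numpy as np

def choose_latent_dims(Y_pooled, S, var_explained=0.95):
    # Y_pooled: n x d matrix of (column-centered) data pooled across studies.
    sv = np.linalg.svd(Y_pooled, compute_uv=False)
    frac = np.cumsum(sv**2) / np.sum(sv**2)
    q_hat = int(np.searchsorted(frac, var_explained)) + 1  # smallest q with >= 95%
    # q_s = floor(q_hat / S) keeps S * q_s <= q_hat, matching Theorem 1
    # (assumes q_hat >= S; otherwise the bound must be adjusted by hand)
    q_s_hat = max(q_hat // S, 1)
    return q_hat, q_s_hat
```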
Prior Specifications: In high-dimensional applications, it is important to reduce the number of parameters in the factor loadings matrices for statistical efficiency in estimating the shared and study-specific contributions to the large covariance matrix. Among a wide variety of appropriate shrinkage priors, we use the Dirichlet-Laplace prior (DL, Bhattacharya _et al._, 2015) for its near-minimax optimal contraction rates in the single-study context (Pati _et al._, 2014) and computational simplicity. We let \(\text{vec}(\mathbf{\Lambda})\sim\text{DL}(a)\) where \(a\) is a suitably chosen hyperparameter; details appear in Section S.1 of the supplementary materials. For the study-specific terms \(\mathbf{A}_{s}\), we let the entries \(a_{s,i,j}\overset{\text{iid}}{\sim}\text{N}(0,b_{\mathbf{\Lambda}})\), a choice that avoids information switching _almost surely_. Regarding the residual variances, we consider \(\log\delta_{j}^{2}\stackrel{{\text{iid}}}{{\sim}}\text{N}(\mu_{ \delta},\sigma_{\delta}^{2})\), since log-normal distributions have modes bounded away from zero and tend to produce more numerically stable estimates compared to commonly used inverse-gamma and half-Cauchy priors (Gelman, 2006).

## 3 Posterior Contraction Rates

We now analyze the posterior contraction rates for recovering the shared and study-specific covariance matrices. Before detailing the assumptions on the true data-generating mechanism and the postulated SUFA model, we fix notational conventions. We let \(\Pi_{n}(\cdot)\) denote the prior and \(\Pi_{n}(\cdot\mid\mathcal{D}_{n})\) the corresponding posterior given data \(\mathcal{D}_{n}=\{\mathbf{Y}_{1,1},\ldots,\mathbf{Y}_{1,n_{1}},\ldots, \mathbf{Y}_{S,1},\ldots,\mathbf{Y}_{S,n_{S}}\}\). Throughout, \(\left\|\mathbf{A}\right\|_{F}\) and \(\left\|\mathbf{A}\right\|_{2}\) denote the Frobenius and spectral norms of a matrix \(\mathbf{A}\), respectively. For real sequences \(\{a_{n}\}\), \(\{b_{n}\}\), \(a_{n}=o(b_{n})\) implies that \(\lim|a_{n}/b_{n}|=0\) and \(a_{n}\succ b_{n}\) implies that \(\liminf a_{n}/b_{n}>0\). A \(d\)-dimensional vector \(\boldsymbol{\theta}\) is said to be \(s\)-sparse if it has only \(s\) nonzero elements, and we denote the set of all \(s\)-sparse vectors in \(\mathbb{R}^{d}\) by \(\ell_{0}\left[s,d\right]\).

Data-generating mechanism: We assume that the data are generated according to \(\mathbf{Y}_{s,1:n_{s}}\stackrel{{\text{iid}}}{{\sim}}\text{N}_{d _{n}}(\mathbf{0},\boldsymbol{\Sigma}_{0sn})\) for each study \(s=1,\ldots,S\). Here, \(\boldsymbol{\Sigma}_{0sn}=\boldsymbol{\Lambda}_{0n}\boldsymbol{\Lambda}_{0n}^ {\text{T}}+\boldsymbol{\Lambda}_{0n}\boldsymbol{\Lambda}_{0sn}\boldsymbol{ \Lambda}_{0sn}^{\text{T}}\boldsymbol{\Lambda}_{0n}^{\text{T}}+\boldsymbol{ \Delta}_{0n}\) with \(\boldsymbol{\Lambda}_{0n}\) a \(d_{n}\times q_{0n}\) sparse matrix, \(\boldsymbol{\Lambda}_{0sn}\) a \(q_{0n}\times q_{0sn}\) matrix with real entries, and \(\boldsymbol{\Delta}_{0n}:=\text{diag}(\delta_{01}^{2},\ldots,\delta_{0d_{n}}^ {2})\). Compared to the setup in prior work by Pati _et al._ (2014), we consider multiple studies and allow heterogeneous residual errors, which provides more realism in modeling the data as well as requiring a more nuanced analysis. Denoting by \(n=\sum_{s=1}^{S}n_{s}\) the combined sample size across studies, we allow the model parameters to increase in dimension with \(n\), as suggested by the subscripts in our notation. Under this setup, we now state the required sparsity conditions on the true parameters in order to recover the shared as well as the study-specific covariance structures in high-dimensional settings.

(C1) For \(s=1,\ldots,S\), \(\liminf_{n\to\infty}\frac{n_{s}}{n}>0\).

(C2) Let \(\{q_{0n}\}\), \(\{q_{0sn}\}\) for \(s=1,\ldots,S\), \(\{s_{n}\}\) and \(\{d_{n}\}\) be increasing sequences of positive integers and \(\{c_{n}\}\) be an increasing sequence of positive real numbers such that \(\sum_{s=1}^{S}q_{0sn}\leq q_{0n}<d_{n}\).

(C3) \(\boldsymbol{\Lambda}_{0n}\) is a \(d_{n}\times q_{0n}\) full rank matrix such that each column of \(\boldsymbol{\Lambda}_{0n}\) belongs to \(\ell_{0}\left[s_{n},d_{n}\right]\), \(\left\|\boldsymbol{\Lambda}_{0n}\right\|_{2}^{2}=o(\frac{c_{n}}{q_{0n}})\) and \(\max_{1\leq s\leq S}\left\|\boldsymbol{\Lambda}_{0sn}\right\|_{2}^{2}=o(q_{0n})\).

(C4) \(\max_{j}\delta_{0j}^{2}=o(c_{n})\) and \(\min_{j}\delta_{0j}^{2}>\delta_{\min}^{2}\) where \(\delta_{\min}^{2}\) is a positive constant.

(C5) \(\frac{d_{n}\max\{(\log c_{n})^{2},\log d_{n}\}}{c_{n}^{2}s_{n}q_{0n}\log(d_{n} q_{0n})}=o(1)\).

In less technical terms, (C1) ensures that none of the studies has a negligible proportion of the data. (C2) and (C3) specify the requisite sparsity conditions. Together, (C3) and (C4) imply that the marginal variances grow in \(o(c_{n})\) while ensuring that they are also bounded away from \(0\). The technical condition (C5) specifies the relative rate of growth of the parameters.

Specifics of the posited SUFA model: In practice, the latent dimensions are unknown, and it is desirable to establish theory for the strategy of overspecifying a model and then shrinking redundant parameters via shrinkage priors. This section identifies conditions under which we may derive guarantees in this overparametrized setting. To this end, let \(\mathbf{Y}_{s,1:n_{s}}\stackrel{{\text{iid}}}{{\sim}}\mathrm{N} _{d_{n}}(\boldsymbol{0},\boldsymbol{\Lambda}_{n}\boldsymbol{\Lambda}_{n}^{ \mathrm{T}}+\boldsymbol{\Lambda}_{n}\boldsymbol{\Lambda}_{sn}\boldsymbol{ \Lambda}_{sn}^{\mathrm{T}}\boldsymbol{\Lambda}_{n}^{\mathrm{T}}+\boldsymbol{ \Delta}_{n})\) where \(\boldsymbol{\Lambda}_{n}\) and \(\boldsymbol{\Lambda}_{sn}\) are \(d_{n}\times q_{n}\) and \(q_{n}\times q_{sn}\) matrices, respectively, and \(\boldsymbol{\Delta}_{n}=\mathrm{diag}(\delta_{1}^{2},\ldots,\delta_{d_{n}}^{2})\). The parameters in the posited (possibly misspecified) model do not have \(0\) subscripts, to distinguish them from their ground-truth counterparts.
We let \(\{q_{n}\}\), \(\{q_{sn}\}\) be increasing sequences of positive integers such that \(q_{0n}\leq q_{n}\) and \(q_{0sn}\leq q_{sn}\), so that the chosen numbers of shared and study-specific latent factors upper bound the respective true (also unknown) numbers, resulting in a misspecified and over-parametrized factor model in the postulation. We additionally assume \(\sum_{s=1}^{S}q_{sn}\leq q_{n}\) to ensure the identifiability condition from Theorem 1. As discussed in Section 2.2, we let \(\text{vec}(\mathbf{\Lambda}_{n})\sim\text{DL}(a_{n})\) with \(a_{n}=1/d_{n}q_{n}\) to impose sparsity on \(\mathbf{\Lambda}_{n}\). Below, we provide sufficient conditions on the overspecified dimensions to ensure consistency of the class of postulated models even when they are misspecified.

(D1) \(q_{n}^{2}=o(c_{n}^{2}s_{n}q_{0n})\) and \(\lim_{n\to\infty}\frac{c_{n}^{12}}{n}\{s_{n}q_{0n}\log(d_{n}q_{n})\}^{3}=0\).

(D2) \(\log(d_{n}q_{n})\min\left\{\frac{c_{n}}{s_{n}q_{0n}^{3}}(d_{n}q_{n})^{c_{n}^{ 2}},s_{n}(d_{n}q_{n})^{\frac{c_{n}^{2}}{q_{n}^{2}}s_{n}q_{0n}},\frac{s_{n}q_{0 n}}{(\log c_{n})^{2}}(d_{n}q_{n})^{\frac{c_{n}^{2}}{d_{n}}s_{n}q_{0n}} \right\}\succ n\).

The first condition in (D1) makes explicit how much overparametrization can be tolerated, while the second specifies the signal-to-noise ratio. The second condition in (D1) and (C5) jointly imply that \(d_{n}\log d_{n}\) grows in \(o(n^{\alpha})\) for some \(0<\alpha<1\). (D2) can be interpreted as a lower bound on the signal strength needed to recover the covariance structures from the data.

Contraction results: Let \(\mathbf{\Theta}_{0n}=\{\mathbf{\Lambda}_{0n},\mathbf{\Delta}_{0n},\mathbf{\Lambda}_{01n}, \ldots,\mathbf{\Lambda}_{0Sn}\}\) be the true data-generating parameters and \(\mathbb{P}_{0n}\) denote the joint distribution of \(\mathcal{D}_{n}\) under \(\mathbf{\Theta}_{0n}\). We define \(\mathbf{\Sigma}_{0n}=\mathbf{\Lambda}_{0n}\mathbf{\Lambda}_{0n}^{\text{T}}+\mathbf{\Delta}_{0n}\) as the true shared covariance structure and \(\mathbf{\Sigma}_{n}=\mathbf{\Lambda}_{n}\mathbf{\Lambda}_{n}^{\text{T}}+\mathbf{\Delta}_{n}\) as the shared covariance in the postulated model. The following result formally characterizes the posterior contraction properties around \(\mathbf{\Sigma}_{0n}\).

**Theorem 2** (Contraction rate).: _For any \(\mathbf{\Theta}_{0n}\) satisfying (C1)-(C5) and priors satisfying (D1)-(D2), we have \(\lim_{n\to\infty}\mathbb{E}_{\mathbb{P}_{0n}}\Pi_{n}\left(\left\|\mathbf{\Sigma}_ {n}-\mathbf{\Sigma}_{0n}\right\|_{2}>M\varepsilon_{n}\mid\mathcal{D}_{n}\right)=0\), where \(\varepsilon_{n}=c_{n}^{6}\sqrt{\frac{\{s_{n}q_{0n}\log(d_{n}q_{n})\}^{3}}{n_{1 }+\cdots+n_{S}}}\) and \(M>0\) is a large enough constant._

Theorem 2 has several important implications for modeling and computation. First, the combined sample size appears in the denominator of the contraction rate \(\varepsilon_{n}\), which is consistent with the intuition that learning the shared covariance structure improves inference by borrowing strength across multiple studies or data views. Because the true values of \(q_{0n}\) and \(q_{0sn}\) are unknown, we make the weak assumption that the specified column dimensions are larger than the ground-truth dimensions. Although the best contraction rate is unsurprisingly achieved when correctly setting \(q_{n}=q_{0n}\), Theorem 2 validates the practice of beginning with overparametrized models, showing that it is still possible to recover \(\mathbf{\Sigma}_{0n}\).
This result provides an important theoretical foundation for the common heuristic strategy of setting the column dimension of \(\mathbf{\Lambda}\) to some upper bound in FA model implementations. Note that the matrices \(\mathbf{\Lambda}_{0n}\mathbf{\Lambda}_{0sn}\mathbf{\Lambda}_{0sn}^{\mathrm{T}}\mathbf{\Lambda}_ {0n}^{\mathrm{T}}\), \(s=1,\ldots,S\), are the study-specific covariance structures. The following result shows that these study-specific terms can also be recovered for all \(s\).

**Corollary 1** (Recovering individual structures).: _Under the same conditions as Theorem 2, \(\lim_{n\to\infty}\mathbb{E}_{\mathbb{P}_{0n}}\Pi_{n}\left(\left\|\mathbf{\Lambda}_ {n}\mathbf{\Lambda}_{sn}\mathbf{\Lambda}_{sn}^{\mathrm{T}}\mathbf{\Lambda}_{n}^{\mathrm{T} }-\mathbf{\Lambda}_{0n}\mathbf{\Lambda}_{0sn}\mathbf{\Lambda}_{0sn}^{\mathrm{T}}\mathbf{\Lambda }_{0n}^{\mathrm{T}}\right\|_{2}>M\varepsilon_{n}\mid\mathcal{D}_{n}\right)=0\)._

## 4 Gradient-based Posterior Computation

Gibbs sampling is the most popular approach for posterior inference in Bayesian factor models. While such an approach is straightforward for our model (1), it is well known that alternately updating latent factors and covariance structure parameters can lead to slow mixing. In addition, instantiating the latent factors \(\mathbf{\eta}_{s,i}\) and \(\mathbf{\zeta}_{s,i}\) quickly increases the computational cost for large samples. It has been observed that marginalizing out auxiliary parameters can often dramatically improve MCMC performance (Robert and Roberts, 2021). In view of this, we develop a Hamiltonian Monte Carlo (HMC, Neal, 2011) within Gibbs sampler that makes use of the marginal posterior, after integrating out all latent factors. The HMC sampler makes smarter proposals by utilizing the gradient of the log-target density. We will see that our proposed algorithm confers substantial gains in real computation times compared to standard Gibbs approaches.

Denote the log-likelihood of the marginal SUFA model (2) by \(\mathcal{L}=K-\frac{1}{2}\sum_{s=1}^{S}\{n_{s}\log|\mathbf{\Sigma}_{s}|+\text{trace}(\mathbf{\Sigma}_{s} ^{-1}\mathbf{W}_{s})\}\), with marginal covariance matrix \(\mathbf{\Sigma}_{s}=\mathbf{\Lambda}(\mathbf{I}_{q}+\mathbf{A}_{s}\mathbf{A}_{s}^{ \text{T}})\mathbf{\Lambda}^{\text{T}}+\mathbf{\Delta}\), sample sum-of-squares matrix \(\mathbf{W}_{s}=\sum_{i=1}^{n_{s}}\mathbf{Y}_{s,i}\mathbf{Y}_{s,i}^{\text{T}}\) for study \(s\), and a constant \(K\). The DL prior on \(\mathbf{\Lambda}\) admits a Gaussian distribution conditionally on the hyperparameters \(\tau\), \(\mathbf{\psi}\) and \(\mathbf{\phi}\) (see (S.1) in the supplementary materials for details). Thus, from the prior specifications outlined in Section 2.2, the posterior density of \(\mathbf{\Theta}:=(\mathbf{\Lambda},\mathbf{\Delta},\mathbf{A}_{1},\ldots,\mathbf{A}_{S})\) given \(\mathbf{\psi},\mathbf{\phi},\tau\) and the observed data satisfies

\[\Pi(\mathbf{\Theta}\mid-)\propto\exp(\mathcal{L})\times\Pi(\mathbf{\Lambda}\mid\tau,\mathbf{ \phi},\mathbf{\psi})\times\Pi(\mathbf{\Delta})\times\prod_{s=1}^{S}\Pi(\mathbf{A}_{s}). \tag{8}\]

To implement HMC, gradients of \(\log\Pi(\mathbf{\Theta}\mid-)\) with respect to \(\mathbf{\Theta}\) are required (see Section S.3 in the supplementary materials for a general discussion of HMC algorithms). In Section A.2 of the Appendix we show that the gradients can be expressed analytically.
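To make the structure of \(\mathcal{L}\) concrete, the sketch below evaluates it (up to the constant \(K\)) from the cached \(\mathbf{W}_{s}\) alone, exploiting the low-rank-plus-diagonal form of \(\mathbf{\Sigma}_{s}\) through the Woodbury identity and the matrix determinant lemma. This is our own illustrative NumPy code, not the paper's implementation.

```python
import numpy as np

def marginal_loglik(Lmbd, A_list, delta2, W_list, n_list):
    # L (up to a constant) = -0.5 * sum_s { n_s log|Sigma_s| + tr(Sigma_s^{-1} W_s) },
    # with Sigma_s = Lmbd (I_q + A_s A_s^T) Lmbd^T + diag(delta2).
    q = Lmbd.shape[1]
    Dinv = 1.0 / delta2                        # diagonal of Delta^{-1}
    total = 0.0
    for A_s, W_s, n_s in zip(A_list, W_list, n_list):
        C_half = np.linalg.cholesky(np.eye(q) + A_s @ A_s.T)
        Lt = Lmbd @ C_half                     # "Lambda tilde", d x q
        DL = Lt * Dinv[:, None]                # Delta^{-1} Lambda tilde
        M = np.eye(q) + Lt.T @ DL              # q x q capacitance matrix
        # matrix determinant lemma: log|Sigma_s| = log|M| + sum(log delta2)
        logdet = np.linalg.slogdet(M)[1] + np.sum(np.log(delta2))
        # Woodbury identity: Sigma_s^{-1} = Delta^{-1} - DL M^{-1} DL^T
        Sinv = np.diag(Dinv) - DL @ np.linalg.solve(M, DL.T)
        total -= 0.5 * (n_s * logdet + np.trace(Sinv @ W_s))
    return total
```

In the actual sampler these per-study terms are computed in parallel, as detailed in Section A.2 of the Appendix.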
Equipped with these expressions, we propose an HMC-within-Gibbs sampler where \(\mathbf{\Theta}\) is updated given \((\tau,\mathbf{\phi},\mathbf{\psi})\) using an HMC step, and then the hyperparameters \((\tau,\mathbf{\phi},\mathbf{\psi})\) are updated conditionally on \(\mathbf{\Lambda}\) using a Gibbs step following Bhattacharya _et al._ (2015). The vanilla HMC algorithm requires the following tuning parameters: a positive real number \(\delta t\), a positive integer \(L\), and a positive definite matrix \(\mathbf{M}\) of the same order as \(\mathbf{\Theta}\). Letting \(\nabla V(\mathbf{\Theta})=-\frac{\partial}{\partial\mathbf{\Theta}}\log\Pi(\mathbf{\Theta} \mid-)\) be the gradient of the negative log-posterior, we outline the HMC-within-Gibbs sampler in Algorithm 1 to obtain \(N\) MCMC samples from the joint posterior distribution of \((\mathbf{\Theta},\tau,\mathbf{\psi},\mathbf{\phi})\).
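Algorithm 1 itself is deferred to the pseudocode; as a rough sketch of the HMC step it performs, a generic leapfrog update with a diagonal mass matrix looks as follows (our own illustrative code, with \(\mathbf{\Theta}\) flattened into a vector).

```python
import numpy as np

def hmc_step(theta, log_post, grad_log_post, dt, L, M_diag, rng):
    # One generic HMC update of theta: momentum refresh, L leapfrog
    # steps of size dt, then a Metropolis-Hastings accept/reject.
    p = rng.normal(size=theta.size) * np.sqrt(M_diag)  # p ~ N(0, M)
    theta_new, p_new = theta.copy(), p.copy()
    p_new += 0.5 * dt * grad_log_post(theta_new)       # half step
    for _ in range(L - 1):
        theta_new += dt * p_new / M_diag
        p_new += dt * grad_log_post(theta_new)
    theta_new += dt * p_new / M_diag
    p_new += 0.5 * dt * grad_log_post(theta_new)       # final half step
    # accept with probability min(1, exp(-Delta H))
    log_accept = (log_post(theta_new) - log_post(theta)
                  - 0.5 * np.sum(p_new**2 / M_diag)
                  + 0.5 * np.sum(p**2 / M_diag))
    return theta_new if np.log(rng.uniform()) < log_accept else theta
```

In Algorithm 1 this update alternates with the Gibbs update of \((\tau,\mathbf{\psi},\mathbf{\phi})\).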
Implementing Algorithm 1 requires computing \(\log\Pi(\mathbf{\Theta}\mid-)\) as well as \(\frac{\partial}{\partial\mathbf{\Theta}}\log\Pi(\mathbf{\Theta}\mid-)\) at each MCMC step. In Section A.2 of the Appendix we show that these quantities can be computed very efficiently by distributing the calculations over parallel processes. Before validating the theory and methods in empirical studies, we highlight two advantages of our sampler. First, it can be adapted to any prior with a (conditionally) differentiable density, including the spike-and-slab (Ishwaran and Rao, 2005), spike-and-slab lasso (Rockova and George, 2016), horseshoe (Carvalho _et al._, 2009), multiplicative gamma (Bhattacharya and Dunson, 2011), generalized double Pareto (Armagan _et al._, 2013), cumulative shrinkage (Legramanti _et al._, 2020), and others. Almost all of these admit a conditionally Gaussian prior on \(\boldsymbol{\Lambda}\), so that adapting Algorithm 1 requires only modifying the Gibbs step. Second, a crucial advantage of Algorithm 1 is its scalability with respect to the sample size. In particular, the log-likelihood \(\mathcal{L}\) and its gradient depend on the data only through the study-specific sample sums of squares \(\mathbf{W}_{s}\), which need to be computed and cached only once.

## 5 Empirical Study

### Simulated Experiments

To assess the proposed methodology, we generate synthetic data according to the true model (3) under three scenarios induced by different \(\boldsymbol{\Lambda}\) matrices. The ground-truth shared covariance structures are displayed in the first column of Figure 2; complete details of the simulation setup and further figures appear in Section S.4.1 of the supplementary materials. For each scenario, we also examine performance under two misspecified settings: (i) _slight misspecification_, where \(\mathbf{\Phi}_{s}=\mathbf{\Lambda}\mathbf{A}_{s}+\mathbf{E}_{s}\) and \(\mathbf{E}_{s}\) is a matrix of randomly generated errors, and (ii) _complete misspecification_, where \(\mathbf{\Phi}_{s}\) is independent of \(\mathbf{\Lambda}\) for each study \(s=1,\ldots,S=5\). We vary \(d=50,200\), and \(450\), and set \(n_{s}\sim\text{Poisson}(50/S)\), \(\text{Poisson}(200/S)\) and \(\text{Poisson}(400/S)\), respectively, for each \(s=1,\ldots,S\). We generate \(q_{s}\sim\max\{1,\text{Poisson}(q/S)\}\) where \(q=10\) for the \(d=50\) case and \(q=20\) otherwise. Finally, we repeat all experiments over 25 trials.

Peer methods: We compare the performance of our proposed SUFA model with B-MSFA (De Vito _et al._, 2021) and PFA (Roy _et al._, 2021). However, PFA did not scale to \(d>50\), and therefore we could only apply it for \(d=50\). Both of these methods use the multiplicative gamma process prior (Bhattacharya and Dunson, 2011) on \(\mathbf{\Lambda}\), and the available code requires the user to input an upper bound on the number of latent factors; we choose the bound following the strategy in Section 2.2.

To assess model fit, we consider the _widely applicable Bayesian information criterion_ (WBIC, Watanabe, 2013), with lower values implying better fit. Boxplots of the WBIC values across independent replicates are shown in the top panel of Figure 1. For \(d=50\), PFA performs slightly better than SUFA only in the completely misspecified case. As the dimension grows, the performance improvements under SUFA become more evident. Identifiability issues with B-MSFA may degrade parameter estimation performance but are not expected to adversely impact goodness-of-fit measures such as WBIC. As SUFA was motivated by improving identifiability, the better model fit is surprising, and may be explained by (i) better borrowing of information as discussed in Section 2.1, (ii) more effective use of the sparsity in \(\mathbf{\Lambda}\) induced through the DL prior, and (iii) more efficient posterior sampling using the proposed HMC-within-Gibbs sampler (itself in part due to improved parameter identifiability).

We compute the Frobenius norms between the simulation truth and the estimated values of the shared covariance matrix \(\mathbf{\Sigma}\). Boxplots of the norms across independent replicates are shown in the bottom panel of Figure 1. In almost every case, SUFA yields the lowest estimation errors.

Figure 1: Comparing our proposed SUFA with B-MSFA and PFA across different simulation scenarios and dimensions: the top panel shows the boxplots of the WBIC values and the bottom panel shows the boxplots of Frobenius norms between the true and estimated shared covariance structure \(\mathbf{\Sigma}\).

In Figure 2, we plot the shared correlation matrices \(\mathbf{R}=\text{diag}(\mathbf{\Sigma})^{-\frac{1}{2}}\mathbf{\Sigma}\text{ diag}(\mathbf{\Sigma})^{-\frac{1}{2}}\) recovered by SUFA. In each setting, as a representative of the 25 independent replicates, we choose the one with the median Frobenius norm from the truth. We use the posterior mean across the MCMC samples as the point estimate. Due to page limits, we only show results for \(d=200\) in Figure 2. For easier visualization of the correlation structure, in Figure 2 we use gray color for correlations having absolute value less than \(0.10\). The heatplots under the "_True_" panel of Figure 2 show the simulation truths of the three different correlation structures; the "_Slight_" and "_Complete_" _misspecification_ panels show the recovered \(\mathbf{R}\) by the SUFA model for the two types of model misspecification under consideration.

Figure 2: The true and estimated shared correlation matrices: horizontal panels correspond to the different factor models considered. In each panel, the true correlation matrix is shown in the leftmost plot; the middle and right plots are the point estimates in the slight and complete misspecification cases, respectively.

The figure clearly indicates that SUFA accurately recovers the shared correlation structures under the misspecified scenarios as well. Additional favorable results on inferring the loadings \(\boldsymbol{\Lambda}\) are included in Section S.4.2 of the supplementary materials. Finally, we compare the total runtimes for \(7,500\) MCMC iterations in Figure 3.
PFA is the slowest due to requiring updates of \(S\) dense \(d\times d\) matrices. However, owing to the distributed-computing implementation of our HMC-within-Gibbs sampler, we observe dramatic improvements in computational efficiency using SUFA. Complete details of the parallelized implementation and a complexity analysis showing that each HMC step is of order \(\mathcal{O}(Lqd^{2})\) appear in Section A.2 of the Appendix.

Figure 3: Comparing execution times (in minutes) of B-MSFA, PFA and SUFA across increasing dimensions.

### Application to Gene Expression Data

We now turn to a case study on gene associations among immune cells, which serve specialized roles in innate and adaptive immune responses that function to eliminate antigens (Gonzalez _et al._, 2018). Understanding gene associations in immune cells is of particular current interest as an essential step in developing cancer therapeutics in immunotherapy (Tan _et al._, 2020). Here, we integrate data from three studies analyzing gene expression. The first is the GSE109125 bulkRNAseq dataset, collected from 103 highly purified immunocyte populations representing all lineages and several differentiation cascades, profiled using the ImmGen pipeline (Yoshida _et al._, 2019). The second study is the microarray dataset GSE15907 (Painter _et al._, 2011; Desch _et al._, 2011), measured on multiple _ex-vivo_ immune lineages, primarily from adult B6 male mice. Finally, we include the GSE37448 (Elpek _et al._, 2014) microarray dataset, also part of the ImmGen project. After standard pre-processing (detailed in Section S.5.2 of the supplementary materials), we work with 474 common genes measured on 156, 146 and 628 cells from the respective datasets.

We apply SUFA and B-MSFA to the integrated datasets, resulting in WBIC values of 424278 and 445916, respectively. This suggests that SUFA provides a better fit, and we hence focus on interpreting the SUFA results. First, we visualize the estimated shared correlation structure between genes in Figure 4(a). We use the uncertainty quantified by the posterior samples to infer whether correlations are zero--following common practice in the literature on Bayesian inference on sparsity patterns in covariance matrices (Ksheera Sagar _et al._, 2021), we encode the off-diagonals of the correlation matrix as zero if the respective 95% posterior credible interval contains zero. This leads to a distinctive sparsity pattern. Next, we focus on identifying important hub genes that have absolute correlation \(\geq 0.25\) with at least 10 other genes. The resulting dependency network is summarized as a circos plot (Gu _et al._, 2014) in Figure 4(b). Our findings are quite consistent with results from the prior literature: we observe strong positive correlation between the genes Ly6a, Ly6c1 and Ly6c2 identified within the murine Ly6 complex on chromosome 15 (Lee _et al._, 2013); similar behavior is observed within the membrane-spanning 4A class of genes, namely Ms4a4c, Ms4a4b and Ms4a6b (Liang _et al._, 2001).

Figure 4: Results on ImmGen data: panel (a) shows the transpose of the shared loading matrix \(\mathbf{\Lambda}\) and the correlation matrix derived from \(\mathbf{\Sigma}\), from left to right; panel (b) is the circos plot of the dependency structure between genes, where a blue (red) connection implies a positive (negative) association, with opacity proportional to the corresponding association strength. Associations are only plotted for absolute correlation \(\geq 0.25\).
The Bcl2a1b and Bcl2a1d genes--two functional isoforms of the B cell leukemia 2 family member (Schenk _et al._, 2017)--also show strong positive correlations. In addition to corroborating qualitative results in the prior literature with a principled statistical analysis, our study also reveals new insights via integrating the data. We find strong positive associations between the genes Bub1, Ccna2, Ccnb2 and Top2a, which corroborates findings from a study by Ashrafi _et al._ (2021) on adult human T cell leukemia. Interestingly, we find this group of genes strongly negatively correlated with those in the Gimap family, which are involved in lymphocyte development and play important roles in immune system homeostasis. On the other hand, the analysis reveals that the Gimap class features positive within-group associations, supporting recent studies that suggest Gimap proteins may interact with each other in roles such as moving cellular cargo along the cytoskeletal network (Limoges _et al._, 2021). Upregulation of the Ccr2 gene has been found to be associated with cancer advancement, metastasis and relapse (Hao _et al._, 2020), whereas Ccr5 inhibitors exhibit negative association with lung metastasis of human breast cancer cell lines (Velasco-Velazquez _et al._, 2012). Coherent with these studies, we find strong positive correlation between Ccr2 and Ccr5. We also observe Ccr2 to be highly positively correlated with Il18rap, supported by genome-wide association studies identifying their susceptibility with coeliac diseases (Amundsen _et al._, 2010). Additionally, we see strong positive association between the structurally related genes Il18rap and Il18r1 within the Il18 receptor complex (Parnet _et al._, 1996). Interestingly, strong negative correlation is observed between the Il18 family of genes and the gene Cd81, which is required for multiple normal physiological functions (Levy, 2014). The joint analysis of multiple datasets using our SUFA framework thus reveals interesting findings adding to the relevant immunology literature, discovering potential relationships that warrant further scientific study.

At a less detailed level, we visualize the top eight columns with the highest sum-of-squares values from the shared loading matrix, and repeat this for the top three columns of each study-specific loading matrix. These appear in Figure 4(c). The columns from the shared loadings strongly indicate the existence of latent factors that govern the variability between correlated genes. Coherent with empirical studies (Wang _et al._, 2014), the strongest signals appear in the bulkRNAseq data despite its having a much smaller sample size than the microarrays. The loadings of Microarray 2 show much stronger signals than those in Microarray 1, possibly due to the larger sample size of the former. These intuitive findings, corroborated by results from the prior literature, illustrate the interpretability of the learned study-specific loadings under SUFA.

## 6 Discussion

This article proposes a new factor analytic approach for covariance learning by integrating multiple related studies in high-dimensional settings. We provide practical solutions and theoretical guarantees for learning the shared covariance structure, resolving some long-standing identifiability issues in the literature. We also quantify the utility of data integration under the proposed model by way of improved learning rates with each additional study in our analysis of the posterior contraction properties.
Finally, these contributions are developed alongside a scalable HMC approach to posterior inference capable of handling massive datasets. In thorough and realistic simulation studies, we show that our proposed method outperforms several existing approaches in multiple respects. Our approach yields new insights when applied to a gene network estimation problem integrating immune cell gene expression datasets.

Leveraging low-rank matrix decompositions, factor models have widespread scope in contemporary data analysis regimes beyond our focus on estimating covariance structure. These extensions include measurement error problems (Sarkar _et al._, 2021), model-based clustering (Chandra _et al._, 2023), and conditional graph estimation (Chandra _et al._, 2021), among many others. Our proposed SUFA approach can be readily adopted in these domains. Integrating data from multiple sources is of interest in many applications involving non-Gaussian data as well. Flexible copula-based factor models (Murray _et al._, 2013; Lu _et al._, 2017) can potentially handle such data. In the presence of multiple studies, our approach can be adapted by introducing additional hierarchies in copula-based factor models. These extensions represent several examples of promising future directions building upon the ideas in this article, extending their scope beyond the problems we study here.

## Supplementary Materials

Complete prior specifications, proofs and details of the theoretical results and technical lemmas, extended simulation results, and details on the gene expression datasets.

## Acknowledgments

The authors are grateful for partial funding support from NIH grants R01-ES027498 and R01-ES028804, NSF grants DMS-2230074 and PIPP-2200047, and ERC Horizon 2020 grant agreement No. 856506.

## Appendix

### Proofs of Main Results

**Proof of Lemma 1.** We first show that information switching, or the existence of a matrix \(\mathbf{C}\) as mentioned in (**I**), implies that \(\bigcap_{s=1}^{S}\mathbb{C}\left(\mathbf{A}_{s}\right)\) is non-null. Since \(\mathbf{C}\) is symmetric, \(\mathbf{C}=\mathbf{H}\mathbf{D}\mathbf{H}^{\mathrm{T}}\) where \(\mathbf{H}=\begin{bmatrix}\mathbf{h}_{1}&\cdots&\mathbf{h}_{q}\end{bmatrix}\) is an orthogonal matrix and \(\mathbf{D}=\mathrm{diag}(d_{1},\ldots,d_{q})\) is a diagonal matrix with real entries. Clearly \(\mathbf{C}=\sum_{i=1}^{q}d_{i}\mathbf{h}_{i}\mathbf{h}_{i}^{\mathrm{T}}\). Letting \(\mathbf{C}^{+}=\sum_{i=1}^{q}d_{i}\mathbb{1}_{d_{i}>0}\mathbf{h}_{i}\mathbf{h}_ {i}^{\mathrm{T}}\) and \(\mathbf{C}^{-}=-\sum_{i=1}^{q}d_{i}\mathbb{1}_{d_{i}<0}\mathbf{h}_{i}\mathbf{h }_{i}^{\mathrm{T}}\), we have \(\mathbf{C}=\mathbf{C}^{+}-\mathbf{C}^{-}\). Let us first consider the case where \(\mathbf{C}^{+}=\mathbf{0}\). Since by assumption \(\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}+\mathbf{C}\) is positive semi-definite (psd), for any non-zero vector \(\mathbf{x}\), \(\mathbf{x}^{\mathrm{T}}(\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}+\mathbf{C} )\mathbf{x}=0\) for all \(s\) implies that \(\mathbf{x}^{\mathrm{T}}\mathbf{C}^{-}\mathbf{x}=0\). Now for any psd matrix \(\mathbf{B}\) and non-zero vector \(\mathbf{x}\), \(\mathbf{x}^{\mathrm{T}}\mathbf{B}\mathbf{x}=0\) implies that \(\mathbf{x}\in\mathbb{N}\left(\mathbf{B}\right)\), where \(\mathbb{N}\left(\mathbf{B}\right)\) denotes the null space of a matrix \(\mathbf{B}\).
Hence \(\mathbb{N}\left(\mathbf{A}_{s}\right)\subset\mathbb{N}\left(\mathbf{C}^{-}\right)\) for all \(s=1,\ldots,S\) implying that \(\mathbb{C}\left(\mathbf{C}^{-}\right)\subseteq\bigcap_{s=1}^{S}\mathbb{C} \left(\mathbf{A}_{s}\right)\). To complete the argument, we consider the case where \(\mathbf{C}^{+}\) is non-null. Note that \(\mathbf{I}_{q}-\mathbf{C}\succ\mathbf{0}\) implies that \(\mathbf{C}^{+}\prec\mathbf{I}_{q}\) and therefore, without loss of generality, we can assume \(\mathbf{C}^{-}=\mathbf{0}\). Henceforth we have \(\widetilde{\mathbf{A}}_{s}\widetilde{\mathbf{A}}_{s}^{\mathrm{T}}=\mathbf{A}_ {s}\mathbf{A}_{s}^{\mathrm{T}}+\mathbf{C}^{+}\). Redefine \(\mathbf{\Phi}:=\mathbf{\Phi}(\mathbf{I}_{q}-\mathbf{C}^{+})^{\frac{1}{2}}\) and \(\mathbf{A}_{s}:=(\mathbf{I}_{q}-\mathbf{C}^{+})^{-\frac{1}{2}}\widetilde{ \mathbf{A}}_{s}\). For any non-zero vector \(\mathbf{x}\), \(\mathbf{x}^{\mathrm{T}}\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}\mathbf{x}=0\) implies that \(\mathbf{x}^{\mathrm{T}}\mathbf{C}^{+}\mathbf{x}=0\). Using similar arguments as in the preceding paragraph, we have \(\mathbb{C}\left(\mathbf{C}^{+}\right)\subseteq\bigcap_{s=1}^{S}\mathbb{C} \left(\mathbf{A}_{s}\right)\). Next we show that a non-null \(\bigcap_{s=1}^{S}\mathbb{C}\left(\mathbf{A}_{s}\right)\) guarantees the existence of a (not necessarily unique) psd matrix \(\mathbf{C}\) such that \(\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}-\mathbf{C}\succeq\mathbf{0}\) for all \(s\) resulting in information switching. Let \(\mathbf{H}=\begin{bmatrix}\mathbf{h}_{1}&\cdots&\mathbf{h}_{r}\end{bmatrix}\) be a \(q\times r\) matrix with orthonormal columns such that the columns form a basis of \(\bigcap_{s=1}^{S}\mathbb{C}\left(\mathbf{A}_{s}\right)\). Let \(\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}=\mathbf{H}_{s}\mathbf{D}_{s} \mathbf{H}_{s}^{\mathrm{T}}\) be the spectral decomposition of \(\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}\). Clearly \(\mathbf{D}_{s}\) are diagonal matrices with non-negative entries. Let \(\varsigma\) be the minimum among all positive entries of \(\mathbf{D}_{s}\) across all \(s=1,\ldots,S\). 
Define \(\mathbf{C}=\varsigma\mathbf{H}\mathbf{H}^{\mathrm{T}}\). Since \(\mathbb{C}\left(\mathbf{H}\right)\subseteq\mathbb{C}\left(\mathbf{A}_{s}\right)\) and every positive entry of \(\mathbf{D}_{s}\) is at least \(\varsigma\), we have \(\mathbf{x}^{\mathrm{T}}\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}\mathbf{x}\geq\varsigma\left\|\mathbf{H}^{\mathrm{T}}\mathbf{x}\right\|^{2}=\mathbf{x}^{\mathrm{T}}\mathbf{C}\mathbf{x}\) for every vector \(\mathbf{x}\) and every \(s=1,\ldots,S\). Hence \(\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}-\mathbf{C}\succeq\mathbf{0}\) for all \(s\) with \(\mathbf{C}\) psd, resulting in information switching. This completes the proof.
**Proof of Theorem 2.** To prove the theorem, we show that, as \(n\to\infty\), the posterior distribution on \(\mathcal{P}_{1,n}\) concentrates around \(\mathbf{\Sigma}_{0n}\) while the remaining mass assigned to \(\mathcal{P}_{2,n}\) diminishes to zero. We define \(B_{n,0}(\mathbf{\Theta}_{0n},\epsilon)=\{\mathbf{\Theta}_{n}\in\mathcal{P}_{n}:\mathbb{KL }\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)\leq n\epsilon^{2}\}\) as the set of probability measures characterized by \(\mathbf{\Theta}_{n}\) within an \(\epsilon\)-radius of \(\mathbb{P}_{0n}\) in KL divergence. We state the following lemma.

**Lemma 2**.: _Let \(e_{n}\) be a semimetric on \(\mathcal{P}_{n}\) such that \(e_{n}(\mathbf{\Theta}_{n},\mathbf{\Theta}_{0n})=\frac{1}{n\tau_{n}^{2}c_{n}^{3}}\big{(} \big{\|}\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\big{\|}_{2}-c_{n}^{2}\epsilon_{n}\big{)}\) where \(\epsilon_{n}\asymp c_{n}n\tau_{n}^{3}\). Then \(\Pi_{n}\left\{\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:e_{n}\left(\mathbf{\Theta}_{n},\mathbf{\Theta}_{0n}\right)>M\tau_{n}\mid\mathcal{D}_{n}\right\}\to 0\) in \(\mathbb{P}_{0n}\)-probability for a sufficiently large \(M\) as long as the following conditions hold for sufficiently large \(j\in\mathbb{N}\):_

(I) _For some \(C>0\), \(\Pi_{n}\left\{B_{n,0}(\mathbf{\Theta}_{0n},\tau_{n})\right\}\geq e^{-Cn\tau_{n}^{2}}\)._

(II) _Define the set \(G_{j,n}=\left\{\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:j\tau_{n}<e_{n}\left(\mathbf{ \Theta}_{n},\mathbf{\Theta}_{0n}\right)\leq 2j\tau_{n}\right\}\). There exist test functions \(\varphi_{n}\) such that for some \(K>0\), \(\lim_{n\to\infty}\mathbb{E}_{\mathbf{\Theta}_{0n}}\varphi_{n}=0\) and \(\sup_{\mathbf{\Theta}_{n}\in G_{j,n}}\mathbb{E}_{\mathbf{\Theta}_{n}}(1-\varphi_{n}) \leq\exp(-Knj^{2}\tau_{n}^{2})\)._

The remaining technical arguments are included in full in the supplementary materials: we prove Lemma 2 in Section S.2.1, verify condition (I) in Theorem 3, and show the existence of a sequence of test functions satisfying condition (II) in Theorem 4 there. Note that \(e_{n}\left(\mathbf{\Theta}_{n},\mathbf{\Theta}_{0n}\right)>M\tau_{n}\Leftrightarrow \left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}>M\varepsilon_{n}\). This implies that \(\Pi_{n}(\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma} _{0n}\right\|_{2}>M\varepsilon_{n}\mid\mathcal{D}_{n})\to 0\) in \(\mathbb{P}_{0n}\)-probability. Subsequently applying the dominated convergence theorem (DCT) establishes that \(\lim_{n\to\infty}\mathbb{E}_{\mathbb{P}_{0n}}\Pi_{n}(\mathbf{\Theta}_{n}\in \mathcal{P}_{1,n}:\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}>M \varepsilon_{n}\mid\mathcal{D}_{n})=0\). It remains to show that the remaining mass assigned to \(\mathcal{P}_{2,n}\) goes to \(0\), which is established in Theorem 5.
**Proof of Corollary 1.** Let \(\mathbf{F}_{sn}=\mathbf{\Lambda}_{n}\mathbf{\Lambda}_{sn}\mathbf{\Lambda}_{sn}^{\mathrm{T}} \mathbf{\Lambda}_{n}^{\mathrm{T}}\) and \(\mathbf{F}_{0sn}=\mathbf{\Lambda}_{0n}\mathbf{\Lambda}_{0sn}\mathbf{\Lambda}_{0sn}^{\mathrm{T}} \mathbf{\Lambda}_{0n}^{\mathrm{T}}\). Using the notations defined in the proof of Theorem 2, we have \(\Pi_{n}(\left\|\mathbf{F}_{sn}-\mathbf{F}_{0sn}\right\|_{2}>M\varepsilon_{n} \mid\mathcal{D}_{n})\leq\Pi_{n}(\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:\left\| \mathbf{F}_{sn}-\mathbf{F}_{0sn}\right\|_{2}>M\varepsilon_{n}\mid\mathcal{D}_ {n})+\Pi_{n}\left(\mathcal{P}_{2,n}\mid\mathcal{D}_{n}\right)\). As Theorem 5 in the supplementary materials shows that \(\Pi_{n}\left(\mathcal{P}_{2,n}\mid\mathcal{D}_{n}\right)\to 0\) in \(\mathbb{P}_{0n}\)-probability, we focus on the first term in the previous display. Note that \(\mathbf{\Sigma}_{0sn}=\mathbf{\Sigma}_{0n}+\mathbf{F}_{0sn}\) and \(\mathbf{\Sigma}_{sn}=\mathbf{\Sigma}_{n}+\mathbf{F}_{sn}\), and therefore \(\Pi_{n}(\mathbf{\Theta} _{n}\in\mathcal{P}_{1,n}:\left\|\mathbf{F}_{sn}-\mathbf{F}_{0sn}\right\|_{2}>M\varepsilon_{n}\mid\mathcal{D}_{n})\leq\Pi_{n}(\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}>\frac{M}{2}\varepsilon_{n}\mid\mathcal{D}_{n})+\Pi_{n}(\mathbf{\Theta}_{n}\in \mathcal{P}_{1,n}:\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{2}>\frac{M }{2}\varepsilon_{n}\mid\mathcal{D}_{n})\). In Theorem 2 we established that the first term in the previous display diminishes to \(0\) in \(\mathbb{P}_{0n}\)-probability; in Theorem 6 of the supplementary materials we show that \(\Pi_{n}(\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma} _{0sn}\right\|_{2}>\frac{M}{2}\varepsilon_{n}\mid\mathcal{D}_{n})\to 0\) in \(\mathbb{P}_{0n}\)-probability. Hence \(\Pi_{n}(\left\|\mathbf{F}_{sn}-\mathbf{F}_{0sn}\right\|_{2}>M\varepsilon_{n} \mid\mathcal{D}_{n})\to 0\) in \(\mathbb{P}_{0n}\)-probability. Subsequently applying DCT, we conclude the proof.

### Sampler Details and Distributed Computation

In this section, instead of \(\mathbf{\Delta}\) we use its one-to-one transformation \(\widetilde{\mathbf{\delta}}=(\widetilde{\delta}_{1},\ldots,\widetilde{\delta}_{d})^{ \mathrm{T}}=\log\mathrm{diag}(\mathbf{\Delta})\). This allows updating the \(\widetilde{\mathbf{\delta}}\) vector in the unrestricted space \(\mathbb{R}^{d}\), resulting in simplified numerical operations. Accordingly, we redefine \(\mathbf{\Theta}:=(\mathbf{\Lambda},\widetilde{\mathbf{\delta}},\mathbf{A}_{1},\ldots,\mathbf{A}_{S})\).

Gradients of the log-posterior: Following Equation (8), we have

\[\log\Pi(\mathbf{\Theta}\mid-)=\mathcal{L}+\log\Pi(\mathbf{\Lambda}\mid\mathbf{\psi},\mathbf{ \phi},\tau)+\log\Pi(\widetilde{\mathbf{\delta}})+\sum_{s=1}^{S}\log\Pi(\mathbf{A}_{s}),\]

where \(\log\Pi(\mathbf{\Lambda}\mid\mathbf{\psi},\mathbf{\phi},\tau)=K_{\mathbf{\Lambda}}-\frac{1}{2 \tau^{2}}\sum_{j=1}^{d}\sum_{h=1}^{q}\frac{\lambda_{j,h}^{2}}{\psi_{j,h} \phi_{j,h}^{2}}\), \(\log\Pi(\widetilde{\mathbf{\delta}})=K_{\widetilde{\mathbf{\delta}}}- \frac{1}{2\sigma_{\delta}^{2}}\sum_{j=1}^{d}(\widetilde{\delta}_{j}-\mu_{ \delta})^{2}\) and \(\log\Pi(\mathbf{A}_{s})=K_{\mathbf{A}_{s}}-\frac{1}{2b_{\mathbf{\Lambda}}} \sum_{j=1}^{q}\sum_{h=1}^{q_{s}}a_{s,j,h}^{2}\).
The constants \(K_{\mathbf{\Lambda}}\), \(K_{\widetilde{\mathbf{\delta}}}\) and \(K_{\mathbf{A}_{s}}\)s are never required as they cancel out in the Metropolis-Hastings ratio. Analytical expressions for the partials of \(\log\Pi(\mathbf{\Theta}\mid-)\) are given below:

\[\frac{\partial}{\partial\mathbf{\Lambda}}\log\Pi(\mathbf{\Theta}\mid-)=- \sum_{s=1}^{S}\mathbf{G}_{s}\mathbf{\Lambda}\mathbf{C}_{s}-\Big{(}\frac{\lambda_{j,h}}{\psi_{j,h}\phi_{j,h}^{2}\tau^{2}}\Big{)}_{d\times q}, \tag{9}\]

\[\frac{\partial}{\partial\widetilde{\mathbf{\delta}}}\log\Pi(\mathbf{\Theta}\mid-)=- \frac{1}{2}\sum_{s=1}^{S}\mathrm{diag}(\mathbf{G}_{s})\odot\exp\Bigl{(} \widetilde{\mathbf{\delta}}\Bigr{)}-\Big{(}\frac{\widetilde{\delta}_{1}-\mu_{ \delta}}{\sigma_{\delta}^{2}},\ldots,\frac{\widetilde{\delta}_{d}-\mu_{ \delta}}{\sigma_{\delta}^{2}}\Big{)}^{\mathrm{T}}, \tag{10}\]

\[\frac{\partial}{\partial\mathbf{A}_{s}}\log\Pi(\mathbf{\Theta}\mid -)=-\mathbf{\Lambda}^{\mathrm{T}}\mathbf{G}_{s}\mathbf{\Lambda}\mathbf{A}_{s}-\frac{1} {b_{\mathbf{\Lambda}}}\mathbf{A}_{s}, \tag{11}\]

where \(\mathbf{G}_{s}=n_{s}\mathbf{\Sigma}_{s}^{-1}-\mathbf{\Sigma}_{s}^{-1}\mathbf{W}_{s} \mathbf{\Sigma}_{s}^{-1}\), \(\mathbf{C}_{s}=\mathbf{I}_{q}+\mathbf{A}_{s}\mathbf{A}_{s}^{\mathrm{T}}\) and \(\odot\) denotes the Hadamard product between two vectors/matrices.
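Mirroring the likelihood sketch in Section 4, a minimal NumPy rendering of (9)-(11) is given below (again our own illustrative code, not the paper's implementation; \(\mathbf{\psi}\) and \(\mathbf{\phi}\) are passed as \(d\times q\) arrays).

```python
import numpy as np

def grad_log_posterior(Lmbd, A_list, delta2, W_list, n_list,
                       psi, phi, tau, mu_delta, sig2_delta, b_Lambda):
    # Gradients (9)-(11) of log Pi(Theta | -), with delta-tilde = log(delta2).
    d, q = Lmbd.shape
    Dinv = 1.0 / delta2
    g_Lmbd = np.zeros((d, q))
    g_dtilde = np.zeros(d)
    g_A = []
    for A_s, W_s, n_s in zip(A_list, W_list, n_list):
        C_s = np.eye(q) + A_s @ A_s.T
        Lt = Lmbd @ np.linalg.cholesky(C_s)
        DL = Lt * Dinv[:, None]
        M = np.eye(q) + Lt.T @ DL
        Sinv = np.diag(Dinv) - DL @ np.linalg.solve(M, DL.T)  # Woodbury
        G_s = n_s * Sinv - Sinv @ W_s @ Sinv
        g_Lmbd -= G_s @ Lmbd @ C_s                  # first term of (9)
        g_dtilde -= 0.5 * np.diag(G_s) * delta2     # first term of (10)
        g_A.append(-Lmbd.T @ G_s @ Lmbd @ A_s - A_s / b_Lambda)  # (11)
    g_Lmbd -= Lmbd / (psi * phi**2 * tau**2)        # DL prior term of (9)
    g_dtilde -= (np.log(delta2) - mu_delta) / sig2_delta  # prior term of (10)
    return g_Lmbd, g_dtilde, g_A
```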
Distributed gradient and likelihood computation: In each HMC step of Algorithm 1, calculating \(\log\Pi(\mathbf{\Theta}\mid-)\) and its gradients \(\frac{\partial}{\partial\mathbf{\Theta}}\log\Pi(\mathbf{\Theta}\mid-)\) is required for each of the \(L\) leapfrog operations. Note that the first terms on the right-hand sides of (9)-(11) are gradients of \(\mathcal{L}\) with respect to the parameters, while the second terms are gradients of the prior densities. Although computing the gradients of \(\mathcal{L}\) is seemingly intensive, they can be obtained very efficiently by dividing the calculations into parallel sub-operations across the different studies. These sub-operations can be distributed over parallel processes in multi-thread computing systems. We elaborate the procedure in Algorithm 2, and illustrate it in a schematic diagram in Figure 5. The figure sketches out a flowchart that avoids repetitive calculations by storing intermediate variables and subsequently reusing them in the following steps. The next lemma establishes its complexity.

**Lemma 3**.: _The runtime complexity of each HMC step is \(\mathcal{O}(Lqd^{2})\) with \(L\) being the number of leapfrog steps._

Proof.: Each HMC step comprises calculating the gradients of the log-posterior followed by evaluating the density at the proposed value. We compute the numerical complexities of each separately.

Gradient computation: As steps 7 and 8 are simply parallelized sums of objects with size at most \(qd\), it suffices to focus attention on steps 2-5. These are parallelized across studies \(s=1,\ldots,S\), so we analyze the cost for each study as follows.

**Step 2:**: Computing the Cholesky factor \(\mathbf{C}_{s}^{\frac{1}{2}}\) is of order \(\mathcal{O}(q^{3})\): although \(\mathbf{C}_{s}\) has a low-rank-plus-diagonal form, the Cholesky decomposition of a \(q\times q\) matrix is \(\mathcal{O}(q^{3})\) in the worst case. Next, the matrix product \(\widetilde{\boldsymbol{\Lambda}}_{s}=\boldsymbol{\Lambda}\mathbf{C}_{s}^{ \frac{1}{2}}\) is \(\mathcal{O}(q^{2}d)\). Finally, owing to the low-rank-plus-diagonal structure, the inversion \(\boldsymbol{\Sigma}_{s}^{-1}\) has complexity \(\mathcal{O}(q^{2}d)\). Therefore the overall complexity of this step is \(\mathcal{O}\{\max(q^{2}d,q^{3})\}=\mathcal{O}(q^{2}d)\) since in our setting \(q\ll d\).

**Step 3:**: Noting that \(\boldsymbol{\Sigma}_{s}^{-1}=(\widetilde{\boldsymbol{\Lambda}}_{s}\widetilde{ \boldsymbol{\Lambda}}_{s}^{\mathrm{T}}+\boldsymbol{\Delta})^{-1}=\boldsymbol{ \Delta}^{-1}-\boldsymbol{\Delta}^{-1}\widetilde{\boldsymbol{\Lambda}}_{s}( \mathbf{I}_{q}+\widetilde{\boldsymbol{\Lambda}}_{s}^{\mathrm{T}}\boldsymbol {\Delta}^{-1}\widetilde{\boldsymbol{\Lambda}}_{s})^{-1}\widetilde{ \boldsymbol{\Lambda}}_{s}^{\mathrm{T}}\boldsymbol{\Delta}^{-1}\) reveals that computing \(\mathbf{G}_{s}\) is \(\mathcal{O}(qd^{2})\).

**Steps 4 and 5:**: Because the factor \(\mathbf{G}_{s}\) is cached from the preceding step, obtaining the derivatives (following the flowchart in Figure 5) only requires simple matrix multiplications, with complexity no more than \(\mathcal{O}(qd^{2})\).

Hence, the dominant complexity of steps 2-5 is \(\mathcal{O}(qd^{2})\), so that the combined computational complexity of all leapfrog steps is \(\mathcal{O}(Lqd^{2})\).

Figure 5: Schematic diagram of the distributed gradient computation.

Log-posterior computation: From equation (8) it can be seen that evaluating \(\mathcal{L}\) at a proposal is the expensive step involving high-dimensional matrix operations, while the prior densities are quite simple to evaluate. Therefore, we focus on the complexity of calculating \(\mathcal{L}\): owing to the structure of \(\mathbf{\Sigma}_{s}\), \(|\mathbf{\Sigma}_{s}|\) can be computed in \(\mathcal{O}(q^{2}d)\). Using the already cached \(\mathbf{\Sigma}_{s}^{-1}\mathbf{W}_{s}\) from step 3, \(\text{trace}(\mathbf{\Sigma}_{s}^{-1}\mathbf{W}_{s})\) can be computed in \(\mathcal{O}(d)\). Both of the aforementioned operations are done in parallel across \(s\), so evaluating the log-posterior has complexity no more than \(\mathcal{O}(q^{2}d)\).

Hence, the runtime complexity of each HMC step is \(\mathcal{O}(Lqd^{2})\).

## References

* Amundsen, S. S. _et al._ (2010). Four novel coeliac disease regions replicated in an association study of a Swedish-Norwegian family cohort. _Genes & Immunity_, **11**, 79-86.
* Armagan, A., Dunson, D. B., and Lee, J. (2013). Generalized double Pareto shrinkage. _Statistica Sinica_, **23**, 119-143.
* Ashrafi, F., Ghezeldasht, S. A., and Ghobadi, M. Z. (2021). Identification of joint gene players implicated in the pathogenesis of HTLV-1 and BLV through a comprehensive system biology analysis. _Microbial Pathogenesis_, **160**, 105153.
* Baglama, J. and Reichel, L. (2005). Augmented implicitly restarted Lanczos bidiagonalization methods. _SIAM Journal on Scientific Computing_, **27**, 19-42.
* Bhattacharya, A. and Dunson, D. B. (2011). Sparse Bayesian infinite factor models. _Biometrika_, **98**, 291-306.
* Bhattacharya, A., Pati, D., Pillai, N. S., and Dunson, D. B. (2015). Dirichlet-Laplace priors for optimal shrinkage. _Journal of the American Statistical Association_, **110**, 1479-1490.
* Carvalho, C. M., Chang, J., Lucas, J. E., Nevins, J. R., Wang, Q., and West, M. (2008). High-dimensional sparse factor modeling: Applications in gene expression genomics. _Journal of the American Statistical Association_, **103**, 1438-1456.
* Carvalho, C. M., Polson, N. G., and Scott, J. G. (2009). Handling sparsity via the horseshoe.
In _Artificial Intelligence and Statistics_, pages 73-80. PMLR.
* Chandra, N. K., Muller, P., and Sarkar, A. (2021). Bayesian precision factor analysis for high-dimensional sparse Gaussian graphical models. _arXiv:2107.11316_.
* Chandra, N. K., Canale, A., and Dunson, D. B. (2023). Escaping the curse of dimensionality in Bayesian model-based clustering. _Journal of Machine Learning Research_. To appear.
* Dai, F., Dutta, S., and Maitra, R. (2020). A matrix-free likelihood method for exploratory factor analysis of high-dimensional Gaussian data. _Journal of Computational and Graphical Statistics_, **29**, 675-680. PMID: 33041614.
* De Vito, R., Bellio, R., Trippa, L., and Parmigiani, G. (2019). Multi-study factor analysis. _Biometrics_, **75**, 337-346.
* De Vito, R., Bellio, R., Trippa, L., and Parmigiani, G. (2021). Bayesian multistudy factor analysis for high-throughput biological data. _The Annals of Applied Statistics_, **15**, 1723-1741.
* Desch, A. N., Randolph, G. J., Murphy, K., _et al._ (2011). CD103+ pulmonary dendritic cells preferentially acquire and present apoptotic cell-associated antigen. _Journal of Experimental Medicine_, **208**, 1789-1797.
* Elpek, K. G., Cremasco, V., Shen, H., _et al._ (2014). The tumor microenvironment shapes lineage, transcriptional, and functional diversity of infiltrating myeloid cells. _Cancer Immunology Research_, **2**, 655-667.
* Erosheva, E. A. and Curtis, S. M. (2017). Dealing with reflection invariance in Bayesian factor analysis. _Psychometrika_, **82**, 295-307.
* Fan, J., Fan, Y., and Lv, J. (2008). High dimensional covariance matrix estimation using a factor model. _Journal of Econometrics_, **147**, 186-197.
* Franks, A. M. and Hoff, P. (2019). Shared subspace models for multi-group covariance estimation. _Journal of Machine Learning Research_, **20**, 1-37.
* Fruhwirth-Schnatter, S. and Lopes, H. F. (2018). Sparse Bayesian factor analysis when the number of factors is unknown. _arXiv preprint arXiv:1804.04231_.
* Gelman, A. (2006). Prior distributions for variance parameters in hierarchical models. _Bayesian Analysis_, **1**, 515-534.
* Gonzalez, H., Hagerling, C., and Werb, Z. (2018). Roles of the immune system in cancer: From tumor initiation to metastatic progression. _Genes & Development_, **32**, 1267-1284.
* Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. _Biometrika_, **82**, 711-732.
* Gu, Z., Gu, L., Eils, R., Schlesner, M., and Brors, B. (2014). _circlize_ implements and enhances circular visualization in R. _Bioinformatics_, **30**, 2811-2812.
* Hao, Q., Vadgama, J. V., and Wang, P. (2020). CCL2/CCR2 signaling in cancer pathogenesis. _Cell Communication and Signaling_, **18**, 1-13.
* Heng, T. S., Painter, M. W., _et al._ (2008). The immunological genome project: networks of gene expression in immune cells. _Nature Immunology_, **9**, 1091-1094.
* Iacob, E., Light, A. R., Donaldson, G. W., _et al._ (2016). Gene expression factor analysis to differentiate pathways linked to fibromyalgia, chronic fatigue syndrome, and depression in a diverse patient sample.
_Arthritis Care & Research_, **68**, 132-140.
* Ishwaran, H. and Rao, J. S. (2005). Spike and slab variable selection: Frequentist and Bayesian strategies. _The Annals of Statistics_, **33**, 730-773.
* Kaiser, H. F. (1958). The varimax criterion for analytic rotation in factor analysis. _Psychometrika_, **23**, 187-200.
* Kepler, T. B., Crosby, L., and Morgan, K. T. (2002). Normalization and analysis of DNA microarray data by self-consistency and local regression. _Genome Biology_, **3**, 1-12.
* Knowles, D. and Ghahramani, Z. (2011). Nonparametric Bayesian sparse factor models with application to gene expression modeling. _The Annals of Applied Statistics_, **5**, 1534-1552.
* Ksheera Sagar, K. N., Banerjee, S., Datta, J., and Bhadra, A. (2021). Precision matrix estimation under the horseshoe-like prior-penalty dual. _arXiv:2104.10750_.
* Lee, P. Y., Wang, J.-X., Parisini, E., Dascher, C. C., and Nigrovic, P. A. (2013). Ly6 family proteins in neutrophil biology. _Journal of Leukocyte Biology_, **94**, 585-594.
* Legramanti, S., Durante, D., and Dunson, D. B. (2020). Bayesian cumulative shrinkage for infinite factorizations. _Biometrika_, **107**, 745-752.
* Levy, S. (2014). Function of the tetraspanin molecule CD81 in B and T cells. _Immunologic Research_, **58**, 179-185.
* Liang, Y., Buckley, T. R., _et al._ (2001). Structural organization of the human MS4A gene cluster on chromosome 11q12. _Immunogenetics_, **53**, 357-368.
* Limoges, M.-A., Cloutier, M., _et al._ (2021). The GIMAP family proteins: An incomplete puzzle. _Frontiers in Immunology_, **12**, 2046.
* Lu, M.-J., Chen, C. Y.-H., and Hardle, W. K. (2017). Copula-based factor model for credit risk analysis. _Review of Quantitative Finance and Accounting_, **49**, 949-971.
* Millsap, R. E. (2001). When trivial constraints are not trivial: The choice of uniqueness constraints in confirmatory factor analysis. _Structural Equation Modeling: A Multidisciplinary Journal_, **8**, 1-17.
* Murray, J. S., Dunson, D. B., Carin, L., and Lucas, J. E. (2013). Bayesian Gaussian copula factor models for mixed data. _Journal of the American Statistical Association_, **108**, 656-665.
* Neal, R. (2011). MCMC using Hamiltonian dynamics. In _Handbook of Markov Chain Monte Carlo_ (S. Brooks, A. Gelman, G. Jones, and X.-L. Meng, eds.), Chapter 5.
* Painter, M. W., Davis, S., Hardy, R. R., _et al._ (2011). Transcriptomes of the B and T lineages compared by multiplatform microarray profiling. _The Journal of Immunology_, **186**, 3047-3057.
* Papastamoulis, P. and Ntzoufras, I. (2022). On the identifiability of Bayesian factor analytic models. _Statistics and Computing_, **32**, 23.
* Parnet, P., Garka, K. E., _et al._ (1996). IL-1Rrp is a novel receptor-like molecule similar to the type I Interleukin-1 receptor and its homologues T1/ST2 and IL-1R AcP. _Journal of Biological Chemistry_, **271**, 3967-3970.
* Pati, D., Bhattacharya, A., Pillai, N. S., and Dunson, D. (2014). Posterior contraction in sparse Bayesian factor models for massive covariance matrices. _The Annals of Statistics_, **42**, 1102-1130.
* Poworoznek et al. (2021) Poworoznek, E., Ferrari, F., and Dunson, D. (2021). Efficiently resolving rotational ambiguity in Bayesian matrix sampling with matching. _arXiv:2107.13783_.
* Robert and Roberts (2021) Robert, C. P. and Roberts, G. (2021). Rao-Blackwellisation in the Markov chain Monte Carlo era. _International Statistical Review_, **89**, 237-249.
* Ročková and George (2016) Ročková, V. and George, E. I. (2016). Fast Bayesian factor analysis via automatic rotations to sparsity. _Journal of the American Statistical Association_, **111**, 1608-1622.
* Rohe and Zeng (2020) Rohe, K. and Zeng, M. (2020). Vintage factor analysis with varimax performs statistical inference. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_. To appear.
* Roy et al. (2021) Roy, A., Lavine, I., Herring, A. H., and Dunson, D. B. (2021). Perturbed factor analysis: Accounting for group differences in exposure profiles. _The Annals of Applied Statistics_, **15**, 1386-1404.
* Russell (2002) Russell, D. W. (2002). In search of underlying dimensions: The use (and abuse) of factor analysis in personality and social psychology bulletin. _Personality and Social Psychology Bulletin_, **28**, 1629-1646.
* Sabnis et al. (2016) Sabnis, G., Pati, D., Engelhardt, B., and Pillai, N. (2016). A divide and conquer strategy for high dimensional Bayesian factor models. _arXiv:1612.02875_.
* Sarkar et al. (2021) Sarkar, A., Pati, D., Mallick, B. K., and Carroll, R. J. (2021). Bayesian copula density deconvolution for zero-inflated data in nutritional epidemiology. _Journal of the American Statistical Association_, **116**, 1075-1087.
* Schenk et al. (2017) Schenk, R. L., Tuzlak, S., _et al._ (2017). Characterisation of mice lacking all functional isoforms of the pro-survival BCL-2 family member A1 reveals minor defects in the haematopoietic compartment. _Cell Death & Differentiation_, **24**, 534-545.
* Schiavon et al. (2022) Schiavon, L., Canale, A., and Dunson, D. B. (2022). Generalized infinite factorization models. _Biometrika_, **109**, 817-835.
* Tan et al. (2020) Tan, S., Li, D., and Zhu, X. (2020). Cancer immunotherapy: Pros, cons and beyond. _Biomedicine & Pharmacotherapy_, **124**, 109821.
* Trendafilov et al. (2017) Trendafilov, N. T., Fontanella, S., and Adachi, K. (2017). Sparse exploratory factor analysis. _Psychometrika_, **82**, 778-794.
* Velasco-Velazquez et al. (2012) Velasco-Velazquez, M., Jiao, X., _et al._ (2012). CCR5 antagonist blocks metastasis of basal breast cancer cells. _Cancer Research_, **72**, 3839-3850.
* Vershynin (2012) Vershynin, R. (2012). Introduction to the non-asymptotic analysis of random matrices. In _Compressed Sensing: Theory and Applications_ (eds Y. C. Eldar and G. Kutyniok), pages 210-268. Cambridge University Press.
* Wang et al. (2014) Wang, C., Gong, B., _et al._ (2014). The concordance between RNA-seq and microarray data depends on chemical treatment and transcript abundance. _Nature Biotechnology_, **32**, 926-932.
* Watanabe (2013) Watanabe, S. (2013). A widely applicable Bayesian information criterion. _Journal of Machine Learning Research_, **14**, 867-897.
* Yoshida et al. (2019) Yoshida, H., Lareau, C. A., _et al._ (2019). The cis-regulatory atlas of the mouse immune system. _Cell_, **176**, 897-912.

Supplementary Materials for **Inferring Covariance Structure from Multiple Data Sources via Subspace Factor Analysis**

Noirrit Kiran Chandra\({}^{\dagger}\), David B.
Dunson\({}^{\ddagger}\), Jason Xu\({}^{\ddagger,*}\)

\({}^{\dagger}\)Department of Mathematical Sciences, The University of Texas at Dallas, Richardson, TX. Email: [email protected]

\({}^{\ddagger}\)Department of Statistical Science, Duke University, Durham, NC

\({}^{*}\)Department of Biostatistics and Bioinformatics, Duke University, Durham, NC

Supplementary materials provide further details on Hamiltonian Monte Carlo, complete prior specifications, proofs and details of the theoretical results and technical lemmas, extended simulation results, and details on the gene expression datasets.

## S.1 Details on Prior Specifications

**Specifics of the Dirichlet-Laplace (DL) prior:** On a \(d\)-dimensional vector \(\boldsymbol{\theta}\), the DL prior with parameter \(a\), denoted by \(\mathrm{DL}(a)\), can be specified hierarchically as \[\theta_{j}\mid\boldsymbol{\psi},\boldsymbol{\phi},\tau\stackrel{{\mathrm{ind}}}{{\sim}}\mathrm{N}(0,\psi_{j}\phi_{j}^{2}\tau^{2}),\ \psi_{j}\stackrel{{\mathrm{iid}}}{{\sim}}\mathrm{Exp}\left(\tfrac{1}{2}\right),\ \boldsymbol{\phi}\sim\mathrm{Dir}(a,\ldots,a),\ \tau\sim\mathrm{Ga}\left(da,\tfrac{1}{2}\right),\] (S.1) where \(\theta_{j}\) is the \(j^{th}\) element of \(\boldsymbol{\theta}\), \(\tau\in\mathbb{R}\), \(\boldsymbol{\psi},\boldsymbol{\phi}\in\mathbb{R}^{d}\), \(\mathrm{Exp}(a)\) is an exponential distribution with mean \(1/a\), \(\mathrm{Dir}(a_{1},\ldots,a_{d})\) is the \(d\)-dimensional Dirichlet distribution, and \(\mathrm{Ga}(a,b)\) is the gamma distribution with mean \(a/b\) and variance \(a/b^{2}\).

**Choice of hyperparameters:** Following the suggestions of Bhattacharya _et al._ (2015), we set \(a=\tfrac{1}{2}\) as the DL hyperparameter. Recall that we have assumed \(a_{s,j,h}\stackrel{{\mathrm{iid}}}{{\sim}}\mathrm{N}(0,b_{\mathbf{A}})\) and \(\log\delta_{j}^{2}\stackrel{{\mathrm{iid}}}{{\sim}}\mathrm{N}(\mu_{\delta},\sigma_{\delta}^{2})\), where \(\mathbf{A}_{s}=((a_{s,j,h}))\) and \(\boldsymbol{\Delta}=\mathrm{diag}(\delta_{1}^{2},\ldots,\delta_{d}^{2})\). To specify weakly informative priors, we set \(b_{\mathbf{A}}=1\) and choose \(\mu_{\delta},\sigma_{\delta}^{2}\) such that \(\mathbb{E}(\delta_{j}^{2})=1\) and \(\mathrm{var}(\delta_{j}^{2})=10\) a priori for all \(j=1,\ldots,d\).

## S.2 Proofs of Theoretical Results

**Notations:** For two sequences \(a_{n},b_{n}\geq 0\), \(a_{n}\precsim b_{n}\) implies that \(a_{n}\leq Cb_{n}\) for some constant \(C>0\); \(a_{n}\asymp b_{n}\) implies that \(0<\liminf|a_{n}/b_{n}|\leq\limsup|a_{n}/b_{n}|<\infty\). \(|\mathbf{A}|\) denotes the determinant of a square matrix \(\mathbf{A}\). For a set \(S\), \(|S|\) denotes its cardinality. Let \(\|\mathbf{x}\|\) be the Euclidean norm of a vector \(\mathbf{x}\). We define \(\boldsymbol{\Theta}_{n}=\{\boldsymbol{\Lambda}_{n},\boldsymbol{\Delta}_{n},\mathbf{A}_{1n},\ldots,\mathbf{A}_{Sn}\}\) as the set of all parameters and \(\boldsymbol{\Theta}_{0n}=\{\boldsymbol{\Lambda}_{0n},\boldsymbol{\Delta}_{0n},\mathbf{A}_{01n},\ldots,\mathbf{A}_{0Sn}\}\) as the true data-generating values, and let \(\mathbb{P}_{n}\) and \(\mathbb{P}_{0n}\) denote the joint distributions of \(\mathcal{D}_{n}=\{\boldsymbol{Y}_{1,1},\ldots,\boldsymbol{Y}_{1,n_{1}},\ldots,\boldsymbol{Y}_{S,1},\ldots,\boldsymbol{Y}_{S,n_{S}}\}\) under \(\boldsymbol{\Theta}_{n}\) and the true value \(\boldsymbol{\Theta}_{0n}\), respectively.
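As a small illustration of the prior specification in Section S.1, the DL hierarchy in (S.1) can be simulated directly. The following is a minimal sketch in Python/NumPy; it is not part of the original development, and the function name and seed are our own illustrative choices. It uses the mean parameterizations stated above (\(\mathrm{Exp}(1/2)\) has mean 2, \(\mathrm{Ga}(da,1/2)\) has mean \(2da\)).

```python
import numpy as np

def sample_dl_prior(d, a, seed=None):
    """Draw one d-dimensional vector theta ~ DL(a) via the hierarchy in (S.1)."""
    rng = np.random.default_rng(seed)
    psi = rng.exponential(scale=2.0, size=d)   # psi_j ~ Exp(1/2), mean 2
    phi = rng.dirichlet(np.full(d, a))         # phi ~ Dir(a, ..., a)
    tau = rng.gamma(shape=d * a, scale=2.0)    # tau ~ Ga(da, 1/2), mean 2da
    # theta_j | psi, phi, tau ~ N(0, psi_j * phi_j^2 * tau^2)
    return rng.normal(0.0, np.sqrt(psi) * phi * tau)

theta = sample_dl_prior(d=200, a=0.5, seed=1)  # e.g., one draw with a = 1/2
```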
We define \(\mathbb{KL}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)\) to be the Kullback-Leibler (KL) divergence between \(\mathbb{P}_{0n}\) and \(\mathbb{P}_{n}\). For brevity of notation, we reuse \(C\), \(\widetilde{C}\), \(C^{\prime\prime}\), etc. in the proofs to denote constants whose values may not be the same throughout the same proof. Nevertheless, we are careful to make sure that these quantities are indeed constants. We define the following quantities extensively used in the proofs:

\[\widetilde{\boldsymbol{\Lambda}}_{0n}=\begin{bmatrix}\boldsymbol{\Lambda}_{0n}&\boldsymbol{0}_{d_{n}\times(q_{n}-q_{0n})}\end{bmatrix}:\ \boldsymbol{\Lambda}_{0n}\text{ padded with }\boldsymbol{0}\text{ columns to have the same order as }\boldsymbol{\Lambda}_{n},\]

\[\widetilde{\mathbf{A}}_{0sn}=\begin{bmatrix}\mathbf{A}_{0sn}&\boldsymbol{0}_{q_{0n}\times(q_{sn}-q_{0sn})}\end{bmatrix}:\ \mathbf{A}_{0sn}\text{ padded with }\boldsymbol{0}\text{ columns to have the same order as }\mathbf{A}_{sn},\]

\[\widetilde{\boldsymbol{\Lambda}}_{0sn}=\widetilde{\boldsymbol{\Lambda}}_{0n}\widetilde{\mathbf{A}}_{0sn}:\ \text{product of the padded matrices, having the same order as }\boldsymbol{\Lambda}_{n}\mathbf{A}_{sn}.\]

**Theorem 3** (KL support).: \(\Pi_{n}\left\{B_{n,0}(\mathbb{P}_{0n},\tau_{n})\right\}\geq e^{-Cn\tau_{n}^{2}}\) _for \(n\tau_{n}^{2}\asymp c_{n}^{2}s_{n}q_{0n}\log(d_{n}q_{n})\)._

Proof.: Since the priors on \(\mathbf{\Lambda}_{n}\), \(\mathbf{A}_{sn}\) and \(\mathbf{\Delta}_{n}\) are independent, from Lemma 9, we have \[\Pi_{n}\left\{B_{n,0}(\mathbb{P}_{0n},\tau_{n})\right\}\geq\Pi_{n}\left\{\sum_{s=1}^{S}\frac{n_{s}\|\mathbf{\Sigma}_{0sn}-\mathbf{\Sigma}_{sn}\|_{F}^{2}}{n\delta_{\min}^{2}s_{\min}\left(\mathbf{\Sigma}_{sn}\right)}\leq\tau_{n}^{2}\right\}\] \[\geq\Pi_{n}\left(\left\|\mathbf{\Lambda}_{n}-\widetilde{\mathbf{\Lambda}}_{0n}\right\|_{F}<\frac{C\tau_{n}}{q_{0n}\sqrt{c_{n}}}\right)\times\Pi_{n}\left(\max_{1\leq s\leq S}\left\|\mathbf{A}_{sn}-\widetilde{\mathbf{A}}_{0sn}\right\|_{F}<\frac{C\tau_{n}}{c_{n}\sqrt{q_{0n}}}\right)\] \[\times\Pi_{n}\left\{\|\mathbf{\Delta}_{0n}-\mathbf{\Delta}_{n}\|_{F}\leq C\tau_{n},s_{\min}\left(\mathbf{\Delta}_{n}\right)\geq\nu\right\}.\] (S.2) We handle the \(\mathbf{A}_{s}\), \(\mathbf{\Lambda}\) and \(\mathbf{\Delta}\) parts in (S.2) separately. We then conclude the proof by showing that each part individually exceeds \(e^{-Cn\tau_{n}^{2}}\) for some constant \(C>0\).

**The \(\mathbf{A}_{sn}\) term in (S.2):** Note that the priors on \(\mathbf{A}_{sn}\) are independent. Also from (C), \(\left\|\widetilde{\mathbf{A}}_{0sn}\right\|_{F}<\sqrt{q_{0sn}}\|\mathbf{A}_{0sn}\|_{2}=o(\sqrt{q_{0sn}q_{0n}})\). Further using Lemma 7 and the conditions \(\sum_{s=1}^{S}q_{0sn}\leq q_{0n}\) and \(\sum_{s=1}^{S}q_{sn}\leq q_{n}\), we obtain \[\Pi_{n}\left(\max_{1\leq s\leq S}\left\|\mathbf{A}_{sn}-\widetilde{\mathbf{A}}_{0sn}\right\|_{F}\leq\frac{C\tau_{n}}{c_{n}\sqrt{q_{0n}}}\right)\geq e^{-C^{\prime}\sum_{s=1}^{S}\max\{q_{n}q_{sn}\log\frac{c_{n}\sqrt{q_{0n}}}{\tau_{n}},\,q_{n}q_{sn}\log(q_{n}q_{sn}),\,q_{0n}q_{0sn}\}}\geq e^{-C^{\prime\prime}\max(q_{n}^{2}\log\frac{c_{n}\sqrt{q_{0n}}}{\tau_{n}},\,q_{n}^{2}\log q_{n},\,q_{0n}^{2})}.\] (S.3) Using (D1) we get \(\max(q_{n}^{2}\log q_{n},q_{0n}^{2})=o(n\tau_{n}^{2})\), and (D2) implies that \(q_{n}^{2}\log\frac{c_{n}\sqrt{q_{0n}}}{\tau_{n}}=o\{c_{n}^{2}s_{n}q_{0n}\log(d_{n}q_{n})\}\).
Thus the quantity in (S.3) exceeds \(e^{-Cn\tau_{n}^{2}}\) for some \(C>0\).

**The \(\mathbf{\Lambda}_{n}\) term in (S.2):** Using Pati _et al._ (2014, Lemma 7.1) we obtain \[\Pi_{n}\left(\left\|\mathbf{\Lambda}_{n}-\widetilde{\mathbf{\Lambda}}_{0n}\right\|_{F}<\frac{C\tau_{n}}{q_{0n}\sqrt{c_{n}}}\right)\geq e^{-C\max\left\{\|\mathbf{\Lambda}_{0n}\|_{F}^{2},\,s_{n}q_{0n}\log\frac{s_{n}q_{0n}}{C\tau_{n}/(q_{0n}\sqrt{c_{n}})},\,\log(d_{n}q_{n})\right\}}.\] (S.4) Note that from (C3), \(\left\|\mathbf{\Lambda}_{0n}\right\|_{F}^{2}\leq q_{0n}\|\mathbf{\Lambda}_{0n}\|_{2}^{2}=o(c_{n})\) and hence \(\max\left\{\|\mathbf{\Lambda}_{0n}\|_{F}^{2},\log(d_{n}q_{n})\right\}=o(n\tau_{n}^{2})\). Additionally using (D2), we obtain that \(s_{n}q_{0n}\log\frac{s_{n}q_{0n}}{C\tau_{n}/(q_{0n}\sqrt{c_{n}})}\asymp\frac{1}{2}s_{n}q_{0n}\log\frac{ns_{n}q_{0n}^{2}}{c_{n}\log(d_{n}q_{n})}=o\{c_{n}^{2}s_{n}q_{0n}\log(d_{n}q_{n})\}\). Hence the RHS of (S.4) exceeds \(e^{-Cn\tau_{n}^{2}}\) for some \(C>0\).

**The \(\mathbf{\Delta}_{n}\) term in (S.2):** Note that for a differentiable function \(f(\cdot)\) in a compact interval \([a,b]\), using the mean value theorem we have \(f(w)-f(y)=f^{\prime}(x)(w-y)\) where \(a<w<x<y<b\), implying that \(|f(w)-f(y)|\leq(b-a)\times\sup_{x\in(a,b)}|f^{\prime}(x)|\). Thus we have \(\nu\leq\min_{j}\delta_{jn}^{2}\leq\max_{j}\delta_{jn}^{2}\leq c_{n}+M\Rightarrow\left|\delta_{jn}^{2}-\delta_{0jn}^{2}\right|\leq(c_{n}+M)\log(c_{n}+M)\big{|}\log\delta_{jn}^{2}-\log\delta_{0jn}^{2}\big{|}\leq Cc_{n}\log c_{n}\big{|}\log\delta_{jn}^{2}-\log\delta_{0jn}^{2}\big{|}\), where \(C,M>0\) are large enough positive constants. Hence \[\Pi_{n}\left\{\|\mathbf{\Delta}_{n}-\mathbf{\Delta}_{0n}\|_{F}\leq C\tau_{n},s_{\min}\left(\mathbf{\Delta}_{n}\right)\geq\nu\right\}\] \[\geq\Pi_{n}\left(\left\|\mathbf{\Delta}_{n}-\mathbf{\Delta}_{0n}\right\|_{F}\leq C\tau_{n},\min_{1\leq j\leq d_{n}}\delta_{jn}^{2}\geq\nu,\max_{1\leq j\leq d_{n}}\delta_{jn}^{2}\leq c_{n}+M\right)\] \[\geq\Pi_{n}\left(\left\|\widetilde{\mathbf{\delta}}_{n}-\widetilde{\mathbf{\delta}}_{0n}\right\|\leq\frac{C^{\prime}\tau_{n}}{c_{n}\log c_{n}},\min_{1\leq j\leq d_{n}}\delta_{jn}^{2}\geq\nu,\max_{1\leq j\leq d_{n}}\delta_{jn}^{2}\leq c_{n}+M\right),\] (S.5) where \(\widetilde{\mathbf{\delta}}_{n}=(\log\delta_{1n}^{2},\ldots,\log\delta_{d_{n}n}^{2})^{\rm T}\) and \(\widetilde{\mathbf{\delta}}_{0n}=(\log\delta_{01n}^{2},\ldots,\log\delta_{0d_{n}n}^{2})^{\rm T}\). Recall that \(\log\delta_{jn}^{2}\stackrel{{\rm iid}}{{\sim}}{\rm N}(\mu_{\delta},\sigma_{\delta}^{2})\). Adding a constant to the elements of \(\widetilde{\mathbf{\delta}}_{0n}\) does not violate any assumptions and therefore, without loss of generality, the parameter \(\mu_{\delta}\) can be assumed to be zero. Note that \(\left\|\widetilde{\mathbf{\delta}}_{n}-\widetilde{\mathbf{\delta}}_{0n}\right\|\leq\frac{C^{\prime}\tau_{n}}{c_{n}\log c_{n}}\Rightarrow\max_{j}\left|\log\delta_{jn}^{2}-\log\delta_{0jn}^{2}\right|\leq\frac{C^{\prime}\tau_{n}}{c_{n}\log c_{n}}\). Since \(\tau_{n}=o(1)\), the last display in conjunction with (C4) further implies that \(\max_{j}\delta_{jn}^{2}<\max_{j}\delta_{0jn}^{2}+C^{\prime\prime}\tau_{n}<c_{n}+M\) and \(\min_{j}\delta_{jn}^{2}>\min_{j}\delta_{0jn}^{2}-C^{\prime\prime}\tau_{n}>\nu\).
Hence the set \(\left\{\widetilde{\mathbf{\delta}}_{n}:\left\|\widetilde{\mathbf{\delta}}_{n}-\widetilde{\mathbf{\delta}}_{0n}\right\|\leq\frac{C^{\prime}\tau_{n}}{c_{n}\log c_{n}}\right\}\) is a subset of \(\left\{\min_{1\leq j\leq d_{n}}\delta_{jn}^{2}\geq\nu,\max_{1\leq j\leq d_{n}}\delta_{jn}^{2}\leq c_{n}+M\right\}\). Therefore, using Lemma 7, \[\Pi_{n}\left\{\left\|\mathbf{\Delta}_{n}-\mathbf{\Delta}_{0n}\right\|_{F}\leq C\tau_{n},s_{\min}\left(\mathbf{\Delta}_{n}\right)\geq\nu\right\}\geq\Pi_{n}\left(\left\|\widetilde{\mathbf{\delta}}_{n}-\widetilde{\mathbf{\delta}}_{0n}\right\|\leq\frac{C^{\prime}\tau_{n}}{c_{n}\log c_{n}}\right)\] \[\geq\exp\left\{-C\max\left(d_{n}\log\frac{\sqrt{\sigma_{\delta}}c_{n}\log c_{n}}{\tau_{n}},d_{n}\log d_{n},\frac{1}{\sigma_{\delta}^{2}}\left\|\widetilde{\mathbf{\delta}}_{0n}\right\|^{2}\right)\right\}.\] (S.6) Using (C3) we have \(\left\|\widetilde{\mathbf{\delta}}_{0n}\right\|^{2}=o\{d_{n}(\log c_{n})^{2}\}\). Hence \(\max\left(d_{n}\log d_{n},\frac{1}{\sigma_{\delta}^{2}}\left\|\widetilde{\mathbf{\delta}}_{0n}\right\|^{2}\right)=o\{c_{n}^{2}s_{n}q_{0n}\log(d_{n}q_{n})\}\) using (C5). Additionally, from (D2), \(d_{n}\log\frac{n\sigma_{\delta}(\log c_{n})^{2}}{s_{n}q_{0n}\log(d_{n}q_{n})}=o\{c_{n}^{2}s_{n}q_{0n}\log(d_{n}q_{n})\}\). Thus the RHS of (S.6) exceeds \(e^{-Cn\tau_{n}^{2}}\) for some \(C>0\). As we have shown that each of the product terms in (S.2) individually exceeds \(e^{-Cn\tau_{n}^{2}}\), we conclude the proof.

**Theorem 4** (Test function).: _Recalling the definitions from the proof of Theorem 2, we let the set \(G_{j,n}=\{\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:j\tau_{n}\leq e_{n}(\mathbf{\Theta}_{n},\mathbf{\Theta}_{0n})\leq 2j\tau_{n}\}\) denote an annulus of inner and outer radii \(n\tau_{n}^{2}c_{n}^{3}(c_{n}^{2}\epsilon_{n}+j\tau_{n})\) and \(n\tau_{n}^{2}c_{n}^{3}(c_{n}^{2}\epsilon_{n}+2j\tau_{n})\), respectively, in spectral norm around \(\mathbf{\Sigma}_{0n}\) for positive integer \(j=o\left[\frac{\sqrt{n}}{c_{n}^{3}}\left\{s_{n}q_{0n}\log(d_{n}q_{n})\right\}^{-\frac{3}{2}}\right]\). Based on observed data \(\mathcal{D}_{n}\) consider the following hypothesis testing problem \(H_{0}:\mathbf{\Theta}_{n}=\mathbf{\Theta}_{0n}\) versus \(H_{1}:\mathbf{\Theta}_{n}\in G_{j,n}\)._

_Define \(\mathbf{\Lambda}_{0sn}=\mathbf{\Lambda}_{0n}\left[\mathbf{I}_{q_{0n}}\quad\mathbf{A}_{0sn}\right]\), \(\widetilde{q}_{0,sn}=q_{0n}+q_{0sn}\), \(\mathbf{\Delta}_{\eta,0sn}=\mathbf{\Lambda}_{0sn}^{\rm T}\mathbf{\Delta}_{0n}^{-1}\mathbf{\Lambda}_{0sn}\), \(\mathbf{\Sigma}_{\eta,0sn}=\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{\Delta}_{\eta,0sn}\right)^{-\frac{1}{2}}\mathbf{\Delta}_{\eta,0sn}\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{\Delta}_{\eta,0sn}\right)^{-\frac{1}{2}}\) and \(\mathbf{\Upsilon}_{s,in}=(\mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{\Delta}_{\eta,0sn})^{-1}\mathbf{\Lambda}_{0sn}^{\rm T}\mathbf{\Delta}_{0n}^{-1}\mathbf{Y}_{s,i}\). Further define \(\mathbf{\Xi}_{sn}=\mathbf{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}\left(\sum_{i=1}^{n_{s}}\mathbf{\Upsilon}_{s,in}\mathbf{\Upsilon}_{s,in}^{\rm T}\right)\mathbf{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}\) and the indicator functions \(\varphi_{sn}=\mathbb{1}\left(\left\|\frac{1}{n_{s}}\mathbf{\Xi}_{sn}-\mathbf{I}_{\widetilde{q}_{0,sn}}\right\|_{2}>\epsilon_{n}\right)\) for \(s=1,\ldots,S\). We define the test function \(\varphi_{n}=1-\prod_{s=1}^{S}(1-\varphi_{sn})\)._
_Then for some absolute constant \(K>0\)_ \[\mathbb{E}_{H_{0}}(\varphi_{n})\to 0;\qquad\sup_{\mathbf{\Theta}_{n}\in G_{j,n}}\mathbb{E}_{\mathbf{\Theta}_{n}}(1-\varphi_{n})\leq\exp\left(-Knj^{2}\tau_{n}^{2}\right).\]

Proof.: **Type-I error:** Note that under \(H_{0}\) \[\operatorname{var}_{H_{0}}(\boldsymbol{\Upsilon}_{s,in})=\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\boldsymbol{\Delta}_{\eta,0sn}\right)^{-1}\boldsymbol{\Lambda}_{0sn}^{\mathrm{T}}\boldsymbol{\Delta}_{0n}^{-1}(\boldsymbol{\Lambda}_{0sn}\boldsymbol{\Lambda}_{0sn}^{\mathrm{T}}+\boldsymbol{\Delta}_{0n})\boldsymbol{\Delta}_{0n}^{-1}\boldsymbol{\Lambda}_{0sn}\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\boldsymbol{\Delta}_{\eta,0sn}\right)^{-1}\] \[=\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\boldsymbol{\Delta}_{\eta,0sn}\right)^{-1}\left(\boldsymbol{\Lambda}_{0sn}^{\mathrm{T}}\boldsymbol{\Delta}_{0n}^{-1}\boldsymbol{\Lambda}_{0sn}\boldsymbol{\Lambda}_{0sn}^{\mathrm{T}}\boldsymbol{\Delta}_{0n}^{-1}\boldsymbol{\Lambda}_{0sn}+\boldsymbol{\Lambda}_{0sn}^{\mathrm{T}}\boldsymbol{\Delta}_{0n}^{-1}\boldsymbol{\Lambda}_{0sn}\right)\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\boldsymbol{\Delta}_{\eta,0sn}\right)^{-1}\] \[=\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\boldsymbol{\Delta}_{\eta,0sn}\right)^{-1}\left(\boldsymbol{\Delta}_{\eta,0sn}^{2}+\boldsymbol{\Delta}_{\eta,0sn}\right)\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\boldsymbol{\Delta}_{\eta,0sn}\right)^{-1}\] \[=\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\boldsymbol{\Delta}_{\eta,0sn}\right)^{-\frac{1}{2}}\boldsymbol{\Delta}_{\eta,0sn}\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\boldsymbol{\Delta}_{\eta,0sn}\right)^{-\frac{1}{2}}=\boldsymbol{\Sigma}_{\eta,0sn},\] (S.7) implying that \(\boldsymbol{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}\boldsymbol{\Upsilon}_{s,in}\overset{\mathrm{iid}}{\sim}\mathrm{N}_{\widetilde{q}_{0,sn}}(\mathbf{0},\mathbf{I}_{\widetilde{q}_{0,sn}})\Rightarrow\frac{1}{n_{s}}\mathbb{E}_{H_{0}}(\boldsymbol{\Xi}_{sn})=\mathbf{I}_{\widetilde{q}_{0,sn}}\) for all \(s=1,\ldots,S\). From Vershynin (2012, Corollary 5.50), \(\mathbb{E}_{H_{0}}(\varphi_{sn})\leq 2\exp\left(-\widetilde{C}\widetilde{q}_{0,sn}t_{n}^{2}\right)\) for a universal constant \(\widetilde{C}>0\) and any sequence \(t_{n}\) satisfying \(\widetilde{q}_{0,sn}t_{n}^{2}\leq n_{s}\epsilon_{n}^{2}\). From (C1), (C2) and (D1), \(\widetilde{q}_{0,sn}\rightarrow\infty\) and \(n_{s}\epsilon_{n}^{2}/\widetilde{q}_{0,sn}>c_{n}^{2}q_{0n}s_{n}\log(d_{n}q_{n})\rightarrow\infty\) as \(n\rightarrow\infty\). Hence \(t_{n}\) can be constructed such that \(t_{n}\succ 1\), so that \(\lim_{n\rightarrow\infty}\mathbb{E}_{H_{0}}(\varphi_{sn})=0\). Since the \(\varphi_{sn}\)s are independent across \(s\), \(\lim_{n\rightarrow\infty}\mathbb{E}_{H_{0}}(\varphi_{n})=1-\prod_{s=1}^{S}\lim_{n\rightarrow\infty}\{1-\mathbb{E}_{H_{0}}(\varphi_{sn})\}=0\).

**Type-II error:** For data generating parameters \(\mathbf{\Theta}_{n}\in G_{j,n}\), we define \(\mathbf{\Sigma}_{\eta,sn}=\mathrm{cov}(\boldsymbol{\Upsilon}_{s,in})\).
Then \[\boldsymbol{\Sigma}_{\eta,sn}=\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+ \boldsymbol{\Delta}_{\eta,0sn}\right)^{-1}\boldsymbol{\Lambda}_{0sn}^{ \mathrm{T}}\boldsymbol{\Delta}_{0n}^{-1}(\boldsymbol{\Lambda}_{sn}\boldsymbol {\Lambda}_{sn}^{\mathrm{T}}+\boldsymbol{\Delta}_{n})\boldsymbol{\Delta}_{0n}^ {-1}\boldsymbol{\Lambda}_{0sn}\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+ \boldsymbol{\Delta}_{\eta,0sn}\right)^{-1}.\] Hence for \(i=1,\ldots,n_{s}\), \[1-\varphi_{sn}=1\left\{\left\|\frac{1}{n_{s}}\boldsymbol{\Sigma}_{ \eta,0sn}^{-\frac{1}{2}}\left(\sum_{i=1}^{n_{s}}\boldsymbol{\Upsilon}_{s,in} \boldsymbol{\Upsilon}_{s,in}^{\mathrm{T}}\right)\boldsymbol{\Sigma}_{\eta,0sn}^ {-\frac{1}{2}}-\mathbf{I}_{\widetilde{q}_{0,sn}}\right\|_{2}\leq\epsilon_{n}\right\}\] \[\leq 1\left\{\left\|\frac{1}{n_{s}}\boldsymbol{\Sigma}_{\eta,0sn}^{- \frac{1}{2}}\left(\sum_{i=1}^{n_{s}}\boldsymbol{\Upsilon}_{s,in}\boldsymbol{ \Upsilon}_{s,in}^{\mathrm{T}}-\boldsymbol{\Sigma}_{\eta,sn}\right)\boldsymbol{ \Sigma}_{\eta,0sn}^{-\frac{1}{2}}\right\|_{2}\geq\left\|\boldsymbol{\Sigma}_{ \eta,0sn}^{-\frac{1}{2}}\boldsymbol{\Sigma}_{\eta,sn}\boldsymbol{\Sigma}_{\eta,0 sn}^{-\frac{1}{2}}-\mathbf{I}_{\widetilde{q}_{0,sn}}\right\|_{2}- \epsilon_{n}\right\}\] \[\leq 1\left(\left\|\boldsymbol{\Sigma}_{\eta,0sn}^{-1}\boldsymbol{ \Sigma}_{\eta,sn}\right\|_{2}\left\|\frac{1}{n_{s}}\sum_{i=1}^{n_{s}} \mathbf{z}_{s,i}\mathbf{z}_{s,i}^{\mathrm{T}}-\mathbf{I}_{\widetilde{q}_{0,sn}} \right\|_{2}\geq\left\|\boldsymbol{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}\boldsymbol{ \Sigma}_{\eta,sn}\boldsymbol{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}-\mathbf{I}_{ \widetilde{q}_{0,sn}}\right\|_{2}-\epsilon_{n}\right).\] (S.8) Note that, \[\boldsymbol{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}\boldsymbol{\Sigma}_{ \eta,sn}\boldsymbol{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}-\mathbf{I}_{\widetilde{q}_{0,sn} }=\boldsymbol{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}(\boldsymbol{\Sigma}_{\eta,sn}- \boldsymbol{\Sigma}_{\eta,0sn})\boldsymbol{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}\text { and }\] \[\boldsymbol{\Sigma}_{\eta,sn}-\boldsymbol{\Sigma}_{\eta,0sn}=\left( \mathbf{I}_{\widetilde{q}_{0,sn}}+\boldsymbol{\Delta}_{\eta,0sn}\right)^{-1} \boldsymbol{\Lambda}_{0sn}^{\mathrm{T}}\boldsymbol{\Delta}_{0n}^{-1}(\boldsymbol{ \Sigma}_{sn}-\boldsymbol{\Sigma}_{0sn})\boldsymbol{\Delta}_{0n}^{-1}\boldsymbol{ \Lambda}_{0sn}\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\boldsymbol{\Delta}_{\eta,0sn} \right)^{-1}\] \[\boldsymbol{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}=\left(\mathbf{I}_{ \widetilde{q}_{0,sn}}+\boldsymbol{\Delta}_{\eta,0sn}\right)^{\frac{1}{2}} \left\{\boldsymbol{\Lambda}_{0sn}^{\mathrm{T}}\boldsymbol{\Delta}_{0n}^{-1}( \boldsymbol{\Lambda}_{0sn}\boldsymbol{\Lambda}_{0sn}^{\mathrm{T}}+\boldsymbol{ \Delta}_{\eta,0sn})\boldsymbol{\Delta}_{0n}^{-1}\boldsymbol{\Lambda}_{0sn} \right\}^{-\frac{1}{2}}\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\boldsymbol{ \Delta}_{\eta,0sn}\right)^{\frac{1}{2}}.\] We obtain the last identity in the above display from (S.7). 
Therefore, \[\mathbf{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}\mathbf{\Sigma}_{\eta,sn}\mathbf{\Sigma }_{\eta,0sn}^{-\frac{1}{2}}-\mathbf{I}_{\widetilde{q}_{0,sn}}=\left(\mathbf{I}_ {\widetilde{q}_{0,sn}}+\mathbf{\Delta}_{\eta,0sn}\right)^{\frac{1}{2}}\left\{\mathbf{ \Lambda}_{0sn}^{\mathrm{T}}\mathbf{\Delta}_{0n}^{-1}(\mathbf{\Lambda}_{0sn}\mathbf{ \Lambda}_{0sn}^{\mathrm{T}}+\mathbf{\Delta}_{0n})\mathbf{\Delta}_{0n}^{-1}\mathbf{\Lambda }_{0sn}\right\}^{-\frac{1}{2}}\\ \times\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{\Delta}_{\eta,0 sn}\right)^{-\frac{1}{2}}\mathbf{\Lambda}_{0sn}^{\mathrm{T}}\mathbf{\Delta}_{0n}^{-1}(\mathbf{ \Sigma}_{sn}-\mathbf{\Sigma}_{0sn})\mathbf{\Delta}_{0n}^{-1}\mathbf{\Lambda}_{0sn}\left( \mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{\Delta}_{\eta,0sn}\right)^{-\frac{1}{2}} \\ \times\left\{\mathbf{\Lambda}_{0sn}^{\mathrm{T}}\mathbf{\Delta}_{0n}^{-1}( \mathbf{\Lambda}_{0sn}\mathbf{\Lambda}_{0sn}^{\mathrm{T}}+\mathbf{\Delta}_{0n})\mathbf{ \Delta}_{0n}^{-1}\mathbf{\Lambda}_{0sn}\right\}^{-\frac{1}{2}}\left(\mathbf{I}_{ \widetilde{q}_{0,sn}}+\mathbf{\Delta}_{\eta,0sn}\right)^{\frac{1}{2}}.\] (S.9) Lemma 6 implies that \[\left\|\mathbf{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}\mathbf{\Sigma}_{\eta,sn} \mathbf{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}-\mathbf{I}_{\widetilde{q}_{0,sn}} \right\|_{2}\geq\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{2}\times s _{\min}^{2}\left[\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{\Delta}_{\eta,0 sn}\right)^{\frac{1}{2}}\times\\ \left\{\mathbf{\Lambda}_{0sn}^{\mathrm{T}}\mathbf{\Delta}_{0n}^{-1}(\mathbf{ \Lambda}_{0sn}\mathbf{\Lambda}_{0sn}^{\mathrm{T}}+\mathbf{\Delta}_{0n})\mathbf{\Delta}_{0 n}^{-1}\mathbf{\Lambda}_{0sn}\right\}^{-\frac{1}{2}}\times\left(\mathbf{I}_{ \widetilde{q}_{0,sn}}+\mathbf{\Delta}_{\eta,0sn}\right)^{-\frac{1}{2}}\mathbf{\Lambda }_{0sn}^{\mathrm{T}}\mathbf{\Delta}_{0n}^{-1}\right].\] (S.10) Let us define \(\mathbf{\chi}_{0sn}=\mathbf{\Delta}_{0n}^{-\frac{1}{2}}\mathbf{\Lambda}_{0sn}\). 
Then \(\mathbf{\Delta}_{\eta,0sn}=\mathbf{\chi}_{0sn}^{\mathrm{T}}\mathbf{\chi}_{0sn}\) and the term inside the \(s_{\min}^{2}\) in (S.10) simplifies to \[\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{\chi}_{0sn}^{\mathrm{ T}}\mathbf{\chi}_{0sn}\right)^{\frac{1}{2}}\times\left\{\mathbf{\chi}_{0sn}^{\mathrm{T}} \mathbf{(\chi}_{0sn}\mathbf{\chi}_{0sn}^{\mathrm{T}}+\mathbf{I}_{d_{n}})\mathbf{\chi}_{0sn }\right\}^{-\frac{1}{2}}\\ \times\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{\chi}_{0sn}^{ \mathrm{T}}\mathbf{\chi}_{0sn}\right)^{-\frac{1}{2}}\mathbf{\chi}_{0sn}^{\mathrm{T}} \mathbf{\Delta}_{0n}^{-\frac{1}{2}}.\] (S.11) Letting \(\chi_{0s,rn},r=1,\ldots,\widetilde{q}_{0,sn}\) denote the eigenvalues of \(\mathbf{\chi}_{0sn}^{\mathrm{T}}\mathbf{\chi}_{0sn}\) and using Lemma 6, we have \[s_{\min}^{2}\left[\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{ \chi}_{0sn}^{\mathrm{T}}\mathbf{\chi}_{0sn}\right)^{\frac{1}{2}}\times\left\{\mathbf{ \chi}_{0sn}^{\mathrm{T}}(\mathbf{\chi}_{0sn}\mathbf{\chi}_{0sn}^{\mathrm{T}}+\mathbf{I }_{d_{n}})\mathbf{\chi}_{0sn}\right\}^{-\frac{1}{2}}\\ \times\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{\chi}_{0sn}^{ \mathrm{T}}\mathbf{\chi}_{0sn}\right)^{-\frac{1}{2}}\mathbf{\chi}_{0sn}^{\mathrm{T}} \mathbf{\Delta}_{0n}^{-\frac{1}{2}}\right]\geq s_{\min}\left(\mathbf{\Delta}_{0n}^{-1} \right)\times\\ \left[\min_{r=1,\ldots,\widetilde{q}_{0,sn}}\left\{(1+\chi_{0s, rn})^{\frac{1}{2}}\left(\chi_{0s,rn}(1+\chi_{0s,rn})\right)^{-\frac{1}{2}}(1+\chi_{0s, rn})^{-\frac{1}{2}}\chi_{0s,rn}^{\frac{1}{2}}\right\}\right]^{2}\\ \geq\frac{1}{\left\|\mathbf{\Delta}_{0n}\right\|_{2}}\times\min_{r=1, \ldots,\widetilde{q}_{0,sn}}\frac{1}{1+\chi_{0s,rn}}\geq\frac{1}{\left\|\mathbf{ \Delta}_{0n}\right\|_{2}}\times\frac{1}{1+\left\|\mathbf{\Lambda}_{0sn}^{\mathrm{T}} \mathbf{\Delta}_{0n}^{-1}\mathbf{\Lambda}_{0sn}\right\|_{2}}.\] (S.12) From (C3)-(C4) \(\left\|\mathbf{\Delta}_{0n}\right\|_{2}=o(c_{n})\), \(\left\|\mathbf{\Lambda}_{0sn}\right\|_{2}^{2}=o(c_{n})\). 
Now combining (S.10)-(S.12), we get \[\left\|\mathbf{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}\mathbf{\Sigma}_{\eta,sn}\mathbf{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}-\mathbf{I}_{\widetilde{q}_{0,sn}}\right\|_{2}\geq\frac{\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{2}}{\left\|\mathbf{\Delta}_{0n}\right\|_{2}(1+\left\|\mathbf{\Lambda}_{0sn}^{\mathrm{T}}\mathbf{\Delta}_{0n}^{-1}\mathbf{\Lambda}_{0sn}\right\|_{2})}\geq\frac{\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}}{\left\|\mathbf{\Delta}_{0n}\right\|_{2}\left(1+\frac{\left\|\mathbf{\Lambda}_{0sn}\right\|_{2}^{2}}{\delta_{\min}^{2}}\right)}\geq\frac{\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}}{c_{n}^{2}}.\] Similarly, we have \[\left\|\mathbf{\Sigma}_{\eta,0sn}^{-1}\mathbf{\Sigma}_{\eta,sn}\right\|_{2}\leq\left\|\mathbf{\Sigma}_{sn}\right\|_{2}\times\left\|\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{\Delta}_{\eta,0sn}\right)^{\frac{1}{2}}\times\left\{\mathbf{\Lambda}_{0sn}^{\mathrm{T}}\mathbf{\Delta}_{0n}^{-1}(\mathbf{\Lambda}_{0sn}\mathbf{\Lambda}_{0sn}^{\mathrm{T}}+\mathbf{\Delta}_{0n})\mathbf{\Delta}_{0n}^{-1}\mathbf{\Lambda}_{0sn}\right\}^{-\frac{1}{2}}\times\left(\mathbf{I}_{\widetilde{q}_{0,sn}}+\mathbf{\Delta}_{\eta,0sn}\right)^{-\frac{1}{2}}\mathbf{\Lambda}_{0sn}^{\mathrm{T}}\mathbf{\Delta}_{0n}^{-1}\right\|_{2}^{2}\] \[\leq\left\|\mathbf{\Sigma}_{sn}\right\|_{2}\times\left[\max_{1\leq r\leq\widetilde{q}_{0,sn}}\left\{(1+\chi_{0s,rn})^{\frac{1}{2}}\left(\chi_{0s,rn}(1+\chi_{0s,rn})\right)^{-\frac{1}{2}}(1+\chi_{0s,rn})^{-\frac{1}{2}}\chi_{0s,rn}^{\frac{1}{2}}\right\}\right]^{2}\] (S.13) \[\leq\left\|\mathbf{\Sigma}_{n}\right\|_{2}+\left\|\mathbf{\Lambda}_{n}\mathbf{A}_{sn}\mathbf{A}_{sn}^{\mathrm{T}}\mathbf{\Lambda}_{n}^{\mathrm{T}}\right\|_{2}\leq\left\|\mathbf{\Sigma}_{n}\right\|_{2}\left(1+\left\|\mathbf{A}_{sn}\right\|_{2}^{2}\right).\] (S.14) Note that \(\left\|\mathbf{\Sigma}_{n}\right\|_{2}\leq\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}+c_{n}\). Combining (S.8), (S.12) and (S.14) we get \[1-\varphi_{sn}\leq\mathbb{1}\left\{\left\|\mathbf{\Sigma}_{n}\right\|_{2}\left(1+\left\|\mathbf{A}_{sn}\right\|_{2}^{2}\right)\left\|\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\mathbf{z}_{s,i}\mathbf{z}_{s,i}^{\mathrm{T}}-\mathbf{I}_{\widetilde{q}_{0,sn}}\right\|_{2}\geq\frac{\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}}{c_{n}^{2}}-\epsilon_{n}\right\}\] \[\leq\mathbb{1}\left\{\left\|\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\mathbf{z}_{s,i}\mathbf{z}_{s,i}^{\mathrm{T}}-\mathbf{I}_{\widetilde{q}_{0,sn}}\right\|_{2}\geq\frac{\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}-\epsilon_{n}c_{n}^{2}}{c_{n}^{2}\left(\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}+c_{n}\right)\left(1+\left\|\mathbf{A}_{sn}\right\|_{2}^{2}\right)}\right\},\] (S.15) where \(\mathbf{z}_{s,1:n_{s}}\overset{\mathrm{iid}}{\sim}\mathrm{N}_{\widetilde{q}_{0,sn}}(\mathbf{0},\mathbf{I}_{\widetilde{q}_{0,sn}})\). Using Lemma 10, we see that the RHS of (S.15) is bounded below by \(\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{1}{\sqrt{n_{s}}}\) for some absolute constant \(\widetilde{C}>0\) if \(\mathbf{\Theta}_{n}\in G_{j,n}\).
Further applying Vershynin (2012, Equation 5.26), for all \(\mathbf{\Theta}_{n}\in G_{j,n}\) we get \[\mathbb{E}_{\mathbf{\Theta}_{n}}(1-\varphi_{sn})\leq\Pr\left\{\left\|\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\mathbf{z}_{s,i}\mathbf{z}_{s,i}^{\mathrm{T}}-\mathbf{I}_{\widetilde{q}_{0,sn}}\right\|_{2}\geq\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{1}{\sqrt{n_{s}}}\right\}\leq e^{-Kj^{2}n_{s}\tau_{n}^{2}}.\] Finally, \(\sup_{\mathbf{\Theta}_{n}\in G_{j,n}}\mathbb{E}_{\mathbf{\Theta}_{n}}(1-\varphi_{n})=\sup_{\mathbf{\Theta}_{n}\in G_{j,n}}\prod_{s=1}^{S}\mathbb{E}_{\mathbf{\Theta}_{n}}(1-\varphi_{sn})\leq e^{-K\left(\sum_{s=1}^{S}n_{s}\right)j^{2}\tau_{n}^{2}}\). Hence the proof.

**Theorem 5** (Remaining probability mass).: \(\lim_{n\rightarrow\infty}\mathbb{E}_{\mathbb{P}_{0n}}\Pi_{n}\left(\mathcal{P}_{2,n}\mid\mathcal{D}_{n}\right)=0\)_._

Proof.: For densities \(\mathbf{p}_{0n}\) and \(\mathbf{p}_{n}\) corresponding to \(\mathbb{P}_{0n}\) and \(\mathbb{P}_{n}\), respectively, we define the average KL variation \(\mathbb{V}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)=\int\left\{\log\frac{\mathbf{p}_{0n}}{\mathbf{p}_{n}}-\mathbb{KL}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)\right\}^{2}\mathrm{d}\mathbf{p}_{0n}\) and the set \(B_{n,2}(\mathbb{P}_{0n},\tau)=\left\{\mathbb{P}_{n}:\mathbb{KL}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)\leq n\tau^{2},\ \mathbb{V}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)\leq n\tau^{2}\right\}\). From Banerjee and Ghosal (2015, Theorem 3.1) we find that the KL variation between \(\mathrm{N}_{d_{n}}(\mathbf{0},\mathbf{\Sigma}_{0sn})\) and \(\mathrm{N}_{d_{n}}(\mathbf{0},\mathbf{\Sigma}_{sn})\) is \(\frac{1}{2}\sum_{j=1}^{d_{n}}(1-\psi_{j})^{2}\) where the \(\psi_{j}\)'s are eigenvalues of \(\mathbf{\Sigma}_{sn}^{-\frac{1}{2}}\mathbf{\Sigma}_{0sn}\mathbf{\Sigma}_{sn}^{-\frac{1}{2}}\). Hence \(\mathbb{V}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)=\sum_{s=1}^{S}\frac{n_{s}}{2}\left\|\mathbf{\Sigma}_{sn}^{-\frac{1}{2}}(\mathbf{\Sigma}_{0sn}-\mathbf{\Sigma}_{sn})\mathbf{\Sigma}_{sn}^{-\frac{1}{2}}\right\|_{F}^{2}\leq\sum_{s=1}^{S}\frac{n_{s}\left\|\mathbf{\Sigma}_{0sn}-\mathbf{\Sigma}_{sn}\right\|_{F}^{2}}{2s_{\min}^{2}(\mathbf{\Sigma}_{sn})}\). Therefore, the same three conditions in Lemma 9 imply that \(2\mathbb{V}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)\leq n\tau_{n}^{2}\) and accordingly \(\Pi_{n}\left\{B_{n,2}(\mathbb{P}_{0n},\tau_{n})\right\}>e^{-\widetilde{C}n\tau_{n}^{2}}\) for some absolute constant \(\widetilde{C}>0\). Note that the elements of \(\mathbf{A}_{sn}\) are iid \(\mathrm{N}(0,b_{\mathbf{A}})\) across all \(s\). Thus using Vershynin (2012, Theorem 5.39) we get that \(\Pi_{n}(\left\|\mathbf{A}_{sn}\right\|_{2}\geq C\sqrt{n\tau_{n}^{2}})\leq e^{-C^{\prime}n\tau_{n}^{2}}\) with \(C^{\prime}\) depending only on the constants \(b_{\mathbf{A}}\) and \(C\). Therefore \(\Pi_{n}(\mathcal{P}_{2,n})\leq\sum_{s=1}^{S}\Pi_{n}(\left\|\mathbf{A}_{sn}\right\|_{2}\geq C\sqrt{n\tau_{n}^{2}})\leq e^{-C^{\prime\prime}n\tau_{n}^{2}}\) where \(C^{\prime\prime}\) is a constant that depends only on the constants \(b_{\mathbf{A}}\), \(S\) and \(C\). Hence, \(\frac{\Pi_{n}(\mathcal{P}_{2,n})}{\Pi_{n}\left\{B_{n,2}(\mathbb{P}_{0n},\tau_{n})\right\}}\leq e^{-2(C^{\prime\prime}-\widetilde{C})n\tau_{n}^{2}}=o(e^{-2n\tau_{n}^{2}})\). The last display holds if the constant \(C\) in the definition of \(\mathcal{P}_{2,n}\) is large enough; that constant, however, is independent of \(n\).
Using Ghosal and van der Vaart (2017, Theorem 8.20) and a subsequent application of the DCT, we conclude the proof.

**Theorem 6**.: _Under the same conditions as Theorem 2, \(\Pi_{n}(\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{2}>M\varepsilon_{n}\mid\mathcal{D}_{n})\to 0\) in \(\mathbb{P}_{0n}\)-probability._

Proof.: Recall \(\epsilon_{n}\asymp c_{n}n\tau_{n}^{3}\) and \(n\tau_{n}^{2}=c_{n}^{2}s_{n}q_{0n}\log(d_{n}q_{n})\) defined in the proof of Theorem 2. Using these we now state a variation of Lemma 2 stated earlier.

**Lemma 4**.: _Let \(\widetilde{e}_{n}^{(s)}\) be a semimetric on \(\mathcal{P}_{n}\) such that \(\widetilde{e}_{n}^{(s)}(\mathbf{\Theta}_{n},\mathbf{\Theta}_{0n})=\frac{1}{c_{n}^{3}}(\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{2}-c_{n}^{2}\epsilon_{n}-\frac{c_{n}^{3}\widetilde{q}_{0,sn}}{\sqrt{n_{s}}})\). Then \(\Pi_{n}\left\{\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:\widetilde{e}_{n}^{(s)}\left(\mathbf{\Theta}_{n},\mathbf{\Theta}_{0n}\right)>C\tau_{n}\mid\mathcal{D}_{n}\right\}\to 0\) in \(\mathbb{P}_{0n}\)-probability for a sufficiently large \(M\) and a constant \(C>0\), if for every sufficiently large \(j\in\mathbb{N}\) the following conditions hold:_

(I′) _For some \(C>0\), \(\Pi_{n}\left\{B_{n,0}(\mathbf{\Theta}_{0n},\tau_{n})\right\}\geq e^{-Cn\tau_{n}^{2}}\)._

(II′) _Define the set \(\widetilde{G}_{j,n}=\left\{\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:j\tau_{n}<\widetilde{e}_{n}^{(s)}\left(\mathbf{\Theta}_{n},\mathbf{\Theta}_{0n}\right)\leq 2j\tau_{n}\right\}\). There exist tests \(\varphi_{sn}\) such that for some \(K>0\), \(\lim_{n\to\infty}\mathbb{E}_{\mathbf{\Theta}_{0n}}\varphi_{sn}=0\) and \(\sup_{\mathbf{\Theta}_{n}\in\widetilde{G}_{j,n}}\mathbb{E}_{\mathbf{\Theta}_{n}}(1-\varphi_{sn})\leq\exp(-Knj^{2}\tau_{n}^{2})\)._

The proof of the above lemma follows the same line of arguments as Lemma 2. Additionally, condition (I′) is identical to (I) from Lemma 2. We show the existence of a sequence of test functions satisfying condition (II′) in Lemma 5. This implies that \[\Pi_{n}\{\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:\widetilde{e}_{n}^{(s)}\left(\mathbf{\Theta}_{n},\mathbf{\Theta}_{0n}\right)>C\tau_{n}\mid\mathcal{D}_{n}\}\to 0\text{ in $\mathbb{P}_{0n}$-probability}.\] (S.16) Note that \(\widetilde{e}_{n}^{(s)}\left(\mathbf{\Theta}_{n},\mathbf{\Theta}_{0n}\right)>C\tau_{n}\Leftrightarrow\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{2}>c_{n}^{2}\epsilon_{n}+\frac{c_{n}^{3}\widetilde{q}_{0,sn}}{\sqrt{n_{s}}}+Cc_{n}^{3}\tau_{n}\). From the respective definitions of \(\epsilon_{n}\) and \(\varepsilon_{n}\), \(c_{n}^{2}\epsilon_{n}\asymp\varepsilon_{n}\), and from (D1), \(\varepsilon_{n}\succ\max\left(\frac{c_{n}^{3}\widetilde{q}_{0,sn}}{\sqrt{n_{s}}},c_{n}^{3}\tau_{n}\right)\). Thus (S.16) implies that for some large enough \(M\), \(\Pi_{n}(\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{2}>M\varepsilon_{n}\mid\mathcal{D}_{n})\to 0\) in \(\mathbb{P}_{0n}\)-probability.

**Lemma 5**.: _Recalling the definitions from Lemma 4, we let the set \(\widetilde{G}_{j,n}=\{\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}:j\tau_{n}\leq\widetilde{e}_{n}^{(s)}(\mathbf{\Theta}_{n},\mathbf{\Theta}_{0n})\leq 2j\tau_{n}\}\) for positive integer \(j=o\left[\sqrt{n}\left\{c_{n}^{2}s_{n}q_{0n}\log(d_{n}q_{n})\right\}^{-\frac{1}{2}}\right]\)._
_Based on observed data \(\mathcal{D}_{n}\) consider the following hypothesis testing problem \(H_{0}:\mathbf{\Theta}_{n}=\mathbf{\Theta}_{0n}\) versus \(H_{1}:\mathbf{\Theta}_{n}\in\widetilde{G}_{j,n}\). We use \(\varphi_{sn}=\mathbb{1}\left(\left\|\frac{1}{n_{s}}\mathbf{\Xi}_{sn}-\mathbf{I}_{\widetilde{q}_{0,sn}}\right\|_{2}>\epsilon_{n}\right)\) from Theorem 4 as the test function. Then for some absolute constant \(K>0\)_ \[\mathbb{E}_{H_{0}}(\varphi_{sn})\to 0;\qquad\sup_{\mathbf{\Theta}_{n}\in\widetilde{G}_{j,n}}\mathbb{E}_{\mathbf{\Theta}_{n}}(1-\varphi_{sn})\leq\exp\left(-Knj^{2}\tau_{n}^{2}\right).\]

Proof.: **Type-I error:** Please refer to the corresponding section in the proof of Theorem 4, where it has already been established that \(\lim_{n\to\infty}\mathbb{E}_{H_{0}}(\varphi_{sn})=0\).

**Type-II error:** From eqn (S.13), \[\left\|\boldsymbol{\Sigma}_{\eta,0sn}^{-1}\boldsymbol{\Sigma}_{\eta,sn}\right\|_{2}\leq\left\|\boldsymbol{\Sigma}_{sn}\right\|_{2}\leq\left\|\boldsymbol{\Sigma}_{sn}-\boldsymbol{\Sigma}_{0sn}\right\|_{2}+\left\|\boldsymbol{\Sigma}_{0sn}\right\|_{2}<\left\|\boldsymbol{\Sigma}_{sn}-\boldsymbol{\Sigma}_{0sn}\right\|_{2}+c_{n}.\] (S.17) We obtain the last inequality in the previous display using (C3) and (C4). Using (S.12), \[\left\|\boldsymbol{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}\boldsymbol{\Sigma}_{\eta,sn}\boldsymbol{\Sigma}_{\eta,0sn}^{-\frac{1}{2}}-\mathbf{I}_{\widetilde{q}_{0,sn}}\right\|_{2}\geq\frac{\left\|\boldsymbol{\Sigma}_{sn}-\boldsymbol{\Sigma}_{0sn}\right\|_{2}}{\left\|\boldsymbol{\Delta}_{0n}\right\|_{2}(1+\left\|\boldsymbol{\Lambda}_{0sn}^{\mathrm{T}}\boldsymbol{\Delta}_{0n}^{-1}\boldsymbol{\Lambda}_{0sn}\right\|_{2})}\geq\frac{\left\|\boldsymbol{\Sigma}_{sn}-\boldsymbol{\Sigma}_{0sn}\right\|_{2}}{c_{n}^{2}}.\] (S.18) Combining (S.8), (S.17) and (S.18) we get \[1-\varphi_{sn}\leq\mathbb{1}\left\{\left\|\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\mathbf{z}_{s,i}\mathbf{z}_{s,i}^{\mathrm{T}}-\mathbf{I}_{\widetilde{q}_{0,sn}}\right\|_{2}\geq\frac{\left\|\boldsymbol{\Sigma}_{sn}-\boldsymbol{\Sigma}_{0sn}\right\|_{2}-\epsilon_{n}c_{n}^{2}}{c_{n}^{2}\left(\left\|\boldsymbol{\Sigma}_{sn}-\boldsymbol{\Sigma}_{0sn}\right\|_{2}+c_{n}\right)}\right\}.\] (S.19) Using \(j=o\left[\sqrt{n}\left\{c_{n}^{2}s_{n}q_{0n}\log(d_{n}q_{n})\right\}^{-\frac{1}{2}}\right]\) and similar arguments as in the proof of Lemma 10, it can be straightforwardly seen that \(\frac{\left\|\boldsymbol{\Sigma}_{sn}-\boldsymbol{\Sigma}_{0sn}\right\|_{2}-\epsilon_{n}c_{n}^{2}}{c_{n}^{2}\left(\left\|\boldsymbol{\Sigma}_{sn}-\boldsymbol{\Sigma}_{0sn}\right\|_{2}+c_{n}\right)}>\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{1}{\sqrt{n_{s}}}\) for some absolute constant \(\widetilde{C}>0\) if \(\mathbf{\Theta}_{n}\in\widetilde{G}_{j,n}\). Further applying Vershynin (2012, Equation 5.26), for all \(\mathbf{\Theta}_{n}\in\widetilde{G}_{j,n}\) we get \[\mathbb{E}_{\mathbf{\Theta}_{n}}(1-\varphi_{sn})\leq\Pr\left\{\left\|\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\mathbf{z}_{s,i}\mathbf{z}_{s,i}^{\mathrm{T}}-\mathbf{I}_{\widetilde{q}_{0,sn}}\right\|_{2}\geq\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{1}{\sqrt{n_{s}}}\right\}\leq e^{-Kj^{2}n_{s}\tau_{n}^{2}}.\] Using (C1) we conclude \(\sup_{\mathbf{\Theta}_{n}\in\widetilde{G}_{j,n}}\mathbb{E}_{\mathbf{\Theta}_{n}}(1-\varphi_{sn})\leq e^{-K^{\prime}j^{2}n\tau_{n}^{2}}\) for some \(K^{\prime}>0\).
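Both Type-I error arguments above ultimately rest on the fact that, under \(H_{0}\), \(\frac{1}{n_{s}}\mathbf{\Xi}_{sn}\) is the empirical covariance of iid \(\mathrm{N}(\mathbf{0},\mathbf{I}_{\widetilde{q}_{0,sn}})\) vectors, whose spectral deviation from the identity is of order \(\sqrt{\widetilde{q}_{0,sn}/n_{s}}\) (Vershynin, 2012). A quick numerical illustration of this concentration (our own Python/NumPy sketch, not part of the original development):

```python
import numpy as np

rng = np.random.default_rng(0)
for q, n in [(20, 2000), (50, 5000), (100, 20000)]:
    z = rng.standard_normal((n, q))                   # z_i ~ N(0, I_q), iid
    dev = np.linalg.norm(z.T @ z / n - np.eye(q), 2)  # spectral-norm deviation
    print(f"q={q:3d}  n={n:6d}  deviation={dev:.3f}  sqrt(q/n)={np.sqrt(q/n):.3f}")
```

The printed deviations track \(\sqrt{q/n}\) up to a modest constant, which is exactly the scaling the test thresholds \(\epsilon_{n}\) exploit.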
### Technical Lemmas and Complete Proofs

**Proof of Lemma 2.** Lemma 2 closely resembles Theorem 8.22 from Ghosal and van der Vaart (2017). In the discussion following equation (8.22) in the book, the authors note that for the iid case, simpler theorems are obtained by using an absolute lower bound on the prior mass and by replacing the local entropy by the global entropy. In particular, (I) implies Theorem 8.19(i) in the book. Similarly, (II) is the same condition in Theorem 8.20, and so the result follows.

**Lemma 6**.: _For any two matrices \(\mathbf{A}\) and \(\mathbf{B}\),_

1. \(s_{\min}\left(\mathbf{A}\right)\left\|\mathbf{B}\right\|_{F}\leq\left\|\mathbf{A}\mathbf{B}\right\|_{F}\leq\left\|\mathbf{A}\right\|_{2}\left\|\mathbf{B}\right\|_{F}\)_._
2. \(s_{\min}\left(\mathbf{A}\right)\left\|\mathbf{B}\right\|_{2}\leq\left\|\mathbf{A}\mathbf{B}\right\|_{2}\leq\left\|\mathbf{A}\right\|_{2}\left\|\mathbf{B}\right\|_{2}\)_._
3. \(s_{\min}\left(\mathbf{A}\right)s_{\min}\left(\mathbf{B}\right)\leq s_{\min}\left(\mathbf{A}\mathbf{B}\right)\leq\left\|\mathbf{A}\right\|_{2}s_{\min}\left(\mathbf{B}\right)\)_._

Proof.: We borrow Lemma 6 from Pati _et al._ (2014) (Lemma 1.1 in their supplement).

**Lemma 7**.: _Let \(\mathbf{x}=(x_{1},\ldots,x_{n})^{\mathrm{T}}\) be a random vector with \(x_{i}\stackrel{{\mathrm{iid}}}{{\sim}}\mathrm{N}(0,\sigma^{2})\) and \(\mathbf{x}_{0}=(x_{01},\ldots,x_{0n})^{\mathrm{T}}\) be a fixed vector. Then for some absolute constant \(C>0\), \(\Pr(\left\|\mathbf{x}-\mathbf{x}_{0}\right\|\leq\tau)\geq e^{-C\max\{n\log\frac{\sqrt{\sigma}}{\tau},n\log n,\frac{1}{\sigma^{2}}(\left\|\mathbf{x}_{0}\right\|+\tau)^{2}\}}\)._

Proof.: Let us first define \(v_{n}(r)\) to be the \(n\)-dimensional Euclidean ball of radius \(r\) centered at zero and let \(|v_{n}(r)|\) denote its volume. For the sake of brevity, denote \(v_{n}=|v_{n}(1)|\), so that \(|v_{n}(r)|=r^{n}v_{n}\). Using Castillo and van der Vaart (2012, Lemma 5.3), \(v_{n}\asymp(2\pi e)^{n/2}n^{-(n+1)/2}\). Note that \(\Pr(\left\|\mathbf{x}-\mathbf{x}_{0}\right\|\leq\tau)\geq|v_{n}(\tau)|\inf_{\mathbf{z}:\left\|\mathbf{z}-\mathbf{x}_{0}\right\|\leq\tau}\mathrm{N}_{n}(\mathbf{z};\mathbf{0},\sigma^{2}\mathbf{I}_{n})\), where \(\mathrm{N}_{n}(\mathbf{z};\boldsymbol{\mu},\boldsymbol{\Sigma})\) denotes the density of an \(n\)-dimensional multivariate normal distribution with mean \(\boldsymbol{\mu}\) and covariance matrix \(\boldsymbol{\Sigma}\) at the point \(\mathbf{z}\). Now the greatest distance between the origin of the \(n\)-dimensional Euclidean space and the \(n\)-dimensional sphere of radius \(\tau\) centered at \(\mathbf{x}_{0}\) is bounded above by \(\left\|\mathbf{x}_{0}\right\|+\tau\). Since the density of \(\mathrm{N}_{n}(\mathbf{0},\sigma^{2}\mathbf{I}_{n})\) at a point \(\mathbf{z}\) monotonically decreases with \(\left\|\mathbf{z}\right\|\), we have \(\inf_{\mathbf{z}:\left\|\mathbf{z}-\mathbf{x}_{0}\right\|\leq\tau}\mathrm{N}_{n}(\mathbf{z};\mathbf{0},\sigma^{2}\mathbf{I}_{n})\geq\frac{1}{(\sqrt{2\pi}\sigma)^{n}}e^{-\frac{1}{2\sigma^{2}}(\left\|\mathbf{x}_{0}\right\|+\tau)^{2}}\).
Hence for absolute constants \(C,C^{\prime},C^{\prime\prime}>0\) \[\Pr(\left\|\mathbf{x}-\mathbf{x}_{0}\right\|\leq\tau)\geq|v_{n}( \tau)|\inf_{\mathbf{z}:\left\|\mathbf{z}-\mathbf{x}_{0}\right\|\leq\tau} \mathrm{N}_{n}(\mathbf{z};\mathbf{0},\sigma^{2}\mathbf{I}_{n})\] \[\geq C\tau^{n}(2\pi e)^{n/2}n^{-(n+1)/2}\times\frac{1}{(\sqrt{2\pi} \sigma)^{n}}e^{-\frac{1}{2\sigma^{2}}(\left\|\mathbf{x}_{0}\right\|+\tau)^{2 }}\geq e^{n\log\frac{\tau}{\sqrt{\sigma}}-C^{\prime}n(\log n+C^{\prime\prime}) -\frac{1}{2\sigma^{2}}(\left\|\mathbf{x}_{0}\right\|+\tau)^{2}}\] which concludes the proof. **Lemma 8**.: \[2\mathbb{KL}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)=\sum_{s=1}^{S }n_{s}\left\{\mathrm{trace}(\boldsymbol{\Sigma}_{sn}^{-1}\boldsymbol{\Sigma}_ {0sn}-\mathbf{I}_{d_{n}})-\log\left|\boldsymbol{\Sigma}_{sn}^{-1}\boldsymbol{ \Sigma}_{0sn}\right|\right\}\leq\sum_{s=1}^{S}\frac{n_{s}\|\boldsymbol{\Sigma}_ {sn}-\boldsymbol{\Sigma}_{0sn}\|_{F}^{2}}{\delta_{\min}^{2}s_{\min}\left( \boldsymbol{\Sigma}_{sn}\right)}.\] Proof.: Recall that \(\mathbb{P}_{n}\) and \(\mathbb{P}_{0n}\) denote the joint distributions of \(\mathcal{D}_{n}\) under \(\boldsymbol{\Theta}_{n}\) and the true value \(\boldsymbol{\Theta}_{0n}\), respectively. Let \(f_{\mathbb{P}}(\mathbf{Y}_{s,i})\) denote the marginal distribution of \(\mathbf{Y}_{s,i}\) under a measure \(\mathbb{P}\). Since \(f_{\mathbb{P}_{n}}(\mathbf{Y}_{s,i})\equiv\mathrm{N}_{d_{n}}(\mathbf{0},\mathbf{ \Sigma}_{sn})\) and \(f_{\mathbb{P}_{0n}}(\mathbf{Y}_{s,i})\equiv\mathrm{N}_{d_{n}}(\mathbf{0}, \mathbf{\Sigma}_{0sn})\) for all \(i\), and the \(\mathbf{Y}_{s,i}\)s are independent, we have \[\mathbb{KL}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right) =\sum_{s=1}^{S}\sum_{i=1}^{n_{s}}\mathbb{KL}\left(f_{\mathbb{P}_{0 n}}(\mathbf{Y}_{s,i})\parallel f_{\mathbb{P}_{n}}(\mathbf{Y}_{s,i})\right)\] \[=\sum_{s=1}^{S}\frac{n_{s}}{2}\left\{\mathrm{trace}(\mathbf{ \Sigma}_{sn}^{-1}\mathbf{\Sigma}_{0sn}-\mathbf{I}_{d_{n}})-\log\left|\mathbf{ \Sigma}_{sn}^{-1}\mathbf{\Sigma}_{0sn}\right|\right\}.\] (S.20) Let \(\mathbf{H}=\mathbf{\Sigma}_{sn}^{-\frac{1}{2}}\mathbf{\Sigma}_{0sn}\mathbf{ \Sigma}_{sn}^{-\frac{1}{2}}\) and \(\mathbb{KL}_{s}=-\log\left|\mathbf{\Sigma}_{sn}^{-1}\mathbf{\Sigma}_{0sn} \right|+\mathrm{trace}(\mathbf{\Sigma}_{0sn}\mathbf{\Sigma}_{sn}^{-1}- \mathbf{I}_{d_{n}})\). Letting \(\psi_{1},\ldots,\psi_{d_{n}}\) be the eigenvalues of \(\mathbf{H}\), we note that \(\mathbb{KL}_{s}=\sum_{j=1}^{d_{n}}\left\{(\psi_{j}-1)-\log\psi_{j}\right\}\). Observe that for any \(x>-1\), \(\log(1+x)\geq\frac{x}{1+x}\); additionally \(\psi_{j}>0\) for all \(j=1,\ldots,d_{n}\) since these are eigenvalues of the positive definite matrix \(\mathbf{H}\). Hence \(\log\psi_{j}\geq 1-\frac{1}{\psi_{j}}\) implying that \((\psi_{j}-1)-\log\psi_{j}\leq\frac{(\psi_{j}-1)^{2}}{\psi_{j}}\). Therefore, \[\mathbb{KL}_{s}\leq\sum_{j=1}^{d_{n}}\frac{(\psi_{j}-1)^{2}}{\psi_{j}}=\left\| (\mathbf{H}-\mathbf{I}_{d_{n}})^{2}\mathbf{H}^{-1}\right\|_{F}=\left\|( \mathbf{\Sigma}_{0sn}-\mathbf{\Sigma}_{sn})^{2}\mathbf{\Sigma}_{sn}^{-1} \mathbf{H}^{-1}\mathbf{\Sigma}_{sn}^{-1}\right\|_{F}.\] (S.21) Now \(\mathbf{\Sigma}_{sn}\mathbf{H}\mathbf{\Sigma}_{sn}=\mathbf{\Sigma}_{sn}^{ \frac{1}{2}}\mathbf{\Sigma}_{0sn}\mathbf{\Sigma}_{sn}^{\frac{1}{2}}\). 
Hence, from (S.21) and using Lemma 6 we arrive at \[\mathbb{KL}_{s}\leq\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{F}^{2}\left\|(\mathbf{\Sigma}_{sn}^{\frac{1}{2}}\mathbf{\Sigma}_{0sn}\mathbf{\Sigma}_{sn}^{\frac{1}{2}})^{-1}\right\|_{2}\leq\frac{\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{F}^{2}}{s_{\min}\left(\mathbf{\Sigma}_{0sn}\right)s_{\min}\left(\mathbf{\Sigma}_{sn}\right)}\leq\frac{\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{F}^{2}}{\delta_{\min}^{2}s_{\min}\left(\mathbf{\Sigma}_{sn}\right)}.\] Combining the above display with (S.20) we conclude the proof.

**Lemma 9**.: _Assume that \(\left\|\mathbf{\Lambda}_{n}-\mathbf{\widetilde{\Lambda}}_{0n}\right\|_{F}<\frac{C\tau_{n}}{q_{0n}\sqrt{c_{n}}}\), \(\max_{s}\left\|\mathbf{A}_{sn}-\widetilde{\mathbf{A}}_{0sn}\right\|_{F}<\frac{C\tau_{n}}{c_{n}\sqrt{q_{0n}}}\), \(\left\|\mathbf{\Delta}_{n}-\mathbf{\Delta}_{0n}\right\|_{F}\leq C\tau_{n}\) and \(s_{\min}\left(\mathbf{\Delta}_{n}\right)>\nu\), where \(C=\nu\delta_{\min}^{2}/5\) and \(0<\nu<\delta_{\min}^{2}\). Then \(2\mathbb{KL}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)\leq n\tau_{n}^{2}\)._

Proof.: Using Lemma 8, \(\mathbb{KL}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)\leq\sum_{s=1}^{S}\frac{n_{s}\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{F}^{2}}{2\delta_{\min}^{2}s_{\min}\left(\mathbf{\Sigma}_{sn}\right)}\). Now \(\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{F}\leq\left\|\mathbf{\Lambda}_{n}\mathbf{\Lambda}_{n}^{\mathrm{T}}-\mathbf{\Lambda}_{0n}\mathbf{\Lambda}_{0n}^{\mathrm{T}}\right\|_{F}+\left\|\mathbf{\Lambda}_{sn}\mathbf{\Lambda}_{sn}^{\mathrm{T}}-\mathbf{\Lambda}_{0sn}\mathbf{\Lambda}_{0sn}^{\mathrm{T}}\right\|_{F}+\left\|\mathbf{\Delta}_{0n}-\mathbf{\Delta}_{n}\right\|_{F}\), where \(\mathbf{\Lambda}_{sn}:=\mathbf{\Lambda}_{n}\mathbf{A}_{sn}\) and \(\mathbf{\Lambda}_{0sn}:=\mathbf{\Lambda}_{0n}\mathbf{A}_{0sn}\). Recalling the notations defined in the beginning of Section S.2, we have \[\left\|\mathbf{\Lambda}_{sn}\mathbf{\Lambda}_{sn}^{\mathrm{T}}-\mathbf{\Lambda}_{0sn}\mathbf{\Lambda}_{0sn}^{\mathrm{T}}\right\|_{F}=\left\|\mathbf{\Lambda}_{sn}\mathbf{\Lambda}_{sn}^{\mathrm{T}}-\mathbf{\widetilde{\Lambda}}_{0sn}\mathbf{\widetilde{\Lambda}}_{0sn}^{\mathrm{T}}\right\|_{F}=\left\|(\mathbf{\Lambda}_{sn}-\mathbf{\widetilde{\Lambda}}_{0sn})\mathbf{\Lambda}_{sn}^{\mathrm{T}}+\mathbf{\widetilde{\Lambda}}_{0sn}(\mathbf{\Lambda}_{sn}-\mathbf{\widetilde{\Lambda}}_{0sn})^{\mathrm{T}}\right\|_{F}\leq\left\|\mathbf{\Lambda}_{sn}-\mathbf{\widetilde{\Lambda}}_{0sn}\right\|_{F}\times\left(\left\|\mathbf{\Lambda}_{sn}\right\|_{2}+\left\|\mathbf{\widetilde{\Lambda}}_{0sn}\right\|_{2}\right).\] (S.22) Here \(\mathbf{\Lambda}_{sn}=\mathbf{\Lambda}_{n}\mathbf{A}_{sn}\) and \(\widetilde{\mathbf{\Lambda}}_{0sn}=\widetilde{\mathbf{\Lambda}}_{0n}\widetilde{\mathbf{A}}_{0sn}\).
Then using Lemma 6, \[\left\|\mathbf{\Lambda}_{sn}-\widetilde{\mathbf{\Lambda}}_{0sn}\right\|_{F}=\left\|\mathbf{\Lambda}_{n}\mathbf{A}_{sn}-\widetilde{\mathbf{\Lambda}}_{0n}\widetilde{\mathbf{A}}_{0sn}\right\|_{F}=\left\|\mathbf{\Lambda}_{n}(\mathbf{A}_{sn}-\widetilde{\mathbf{A}}_{0sn})+(\mathbf{\Lambda}_{n}-\widetilde{\mathbf{\Lambda}}_{0n})\widetilde{\mathbf{A}}_{0sn}\right\|_{F}\leq\|\mathbf{\Lambda}_{n}\|_{2}\left\|\mathbf{A}_{sn}-\widetilde{\mathbf{A}}_{0sn}\right\|_{F}+\left\|\mathbf{\Lambda}_{n}-\widetilde{\mathbf{\Lambda}}_{0n}\right\|_{F}\left\|\widetilde{\mathbf{A}}_{0sn}\right\|_{2}.\] (S.23) Note that \(\left\|\mathbf{\Lambda}_{n}-\widetilde{\mathbf{\Lambda}}_{0n}\right\|_{F}<\frac{C\tau_{n}}{q_{0n}\sqrt{c_{n}}}\Rightarrow\left\|\mathbf{\Lambda}_{n}\right\|_{2}\leq\left\|\mathbf{\Lambda}_{n}\right\|_{F}\leq\left\|\widetilde{\mathbf{\Lambda}}_{0n}\right\|_{F}+\frac{C\tau_{n}}{q_{0n}\sqrt{c_{n}}}<\sqrt{c_{n}}\). The last display holds since (C3) \(\Rightarrow\left\|\mathbf{\Lambda}_{0n}\right\|_{2}=o(\sqrt{c_{n}/q_{0n}})\Rightarrow\left\|\mathbf{\Lambda}_{0n}\right\|_{F}=o(\sqrt{c_{n}})\). Combining the condition \(\max_{1\leq s\leq S}\left\|\mathbf{A}_{sn}-\widetilde{\mathbf{A}}_{0sn}\right\|_{F}\leq\frac{C\tau_{n}}{c_{n}\sqrt{q_{0n}}}\) and (S.23) we get \[\max_{1\leq s\leq S}\left\|\mathbf{\Lambda}_{sn}-\widetilde{\mathbf{\Lambda}}_{0sn}\right\|_{F}\leq\sqrt{c_{n}}\times\frac{C\tau_{n}}{c_{n}\sqrt{q_{0n}}}+\frac{C\tau_{n}}{q_{0n}\sqrt{c_{n}}}\times\sqrt{q_{0n}}<\frac{2C\tau_{n}}{\sqrt{c_{n}q_{0n}}}.\] (S.24) Now \(\left\|\mathbf{\Lambda}_{sn}-\widetilde{\mathbf{\Lambda}}_{0sn}\right\|_{F}<\frac{C\tau_{n}}{\sqrt{c_{n}q_{0n}}}\Rightarrow\left\|\mathbf{\Lambda}_{sn}\right\|_{2}\leq\left\|\mathbf{\Lambda}_{sn}\right\|_{F}<\left\|\widetilde{\mathbf{\Lambda}}_{0sn}\right\|_{F}+\frac{C\tau_{n}}{\sqrt{c_{n}q_{0n}}}\). Note that \(\left\|\widetilde{\mathbf{\Lambda}}_{0sn}\right\|_{F}\leq\sqrt{q_{0sn}}\times\left\|\widetilde{\mathbf{\Lambda}}_{0sn}\right\|_{2}\leq\sqrt{q_{0sn}}\times\left\|\mathbf{\Lambda}_{0n}\right\|_{2}\|\mathbf{A}_{0sn}\|_{2}<\sqrt{c_{n}q_{0n}}\), implying that \(\left\|\mathbf{\Lambda}_{sn}\right\|_{2}+\left\|\widetilde{\mathbf{\Lambda}}_{0sn}\right\|_{2}<2\sqrt{c_{n}q_{0n}}\). Combining the last display with (S.22) and (S.24) we obtain \(\left\|\mathbf{\Lambda}_{sn}\mathbf{\Lambda}_{sn}^{\mathrm{T}}-\mathbf{\Lambda}_{0sn}\mathbf{\Lambda}_{0sn}^{\mathrm{T}}\right\|_{F}\leq 2C\tau_{n}\). Similarly \(\left\|\mathbf{\Lambda}_{n}\mathbf{\Lambda}_{n}^{\mathrm{T}}-\mathbf{\Lambda}_{0n}\mathbf{\Lambda}_{0n}^{\mathrm{T}}\right\|_{F}\leq\left\|\mathbf{\Lambda}_{n}-\widetilde{\mathbf{\Lambda}}_{0n}\right\|_{F}(\left\|\mathbf{\Lambda}_{n}\right\|_{F}+\left\|\widetilde{\mathbf{\Lambda}}_{0n}\right\|_{F})\leq 2C\tau_{n}\). Using the additional conditions \(\left\|\mathbf{\Delta}_{n}-\mathbf{\Delta}_{0n}\right\|_{F}\leq C\tau_{n}\) and \(s_{\min}\left(\mathbf{\Delta}_{n}\right)>\nu\), we get \(\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{F}\leq 5C\tau_{n}\). Furthermore \(s_{\min}\left(\mathbf{\Sigma}_{sn}\right)\geq s_{\min}\left(\mathbf{\Delta}_{n}\right)\). Hence, \(2\mathbb{KL}\left(\mathbb{P}_{0n}\parallel\mathbb{P}_{n}\right)\leq\sum_{s=1}^{S}\frac{n_{s}\left\|\mathbf{\Sigma}_{sn}-\mathbf{\Sigma}_{0sn}\right\|_{F}^{2}}{\delta_{\min}^{2}s_{\min}\left(\mathbf{\Sigma}_{sn}\right)}\leq n\tau_{n}^{2}\).

**Lemma 10**.: _Recall the notations in Theorem 4._
_If \(\mathbf{\Theta}_{n}\in G_{j,n}\), then \(\frac{\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}-\epsilon_{n}c_{n}^{2}}{c_{n}^{2}\left(\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}+c_{n}\right)\left(1+\left\|\mathbf{A}_{sn}\right\|_{2}^{2}\right)}>\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{c_{n}^{2}}{\sqrt{n_{s}}}\) for some absolute constant \(\widetilde{C}>0\)._

Proof.: For all \(\mathbf{\Theta}_{n}\in\mathcal{P}_{1,n}\), \(1+\left\|\mathbf{A}_{sn}\right\|_{2}^{2}\leq Cn\tau_{n}^{2}\) for some \(C>0\) for all \(s=1,\ldots,S\). For brevity of notation, we define \(\omega=Cn\tau_{n}^{2}\), \(x=\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}\) and \(a=\epsilon_{n}c_{n}^{2}\). Hence observe that for \(\mathbf{\Theta}_{n}\in G_{j,n}\) \[\frac{\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}-\epsilon_{n}c_{n}^{2}}{c_{n}^{2}(\left\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}_{0n}\right\|_{2}+c_{n})\left(1+\left\|\mathbf{A}_{sn}\right\|_{2}^{2}\right)}>\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{1}{\sqrt{n_{s}}}\] (S.25) \[\Leftrightarrow\frac{x-a}{(x+c_{n})\omega}>\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{c_{n}^{2}}{\sqrt{n_{s}}}\Leftarrow x>\frac{a+\omega\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{c_{n}^{3}}{\sqrt{n_{s}}}}{1-\omega\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{c_{n}^{2}}{\sqrt{n_{s}}}}.\] The specifics of the postulated model from Section 3 imply \(\widetilde{q}_{0,sn}\leq q_{n}+q_{sn}=o(n\tau_{n}^{2})\). As \(j=o\left[\frac{\sqrt{n}}{c_{n}^{5}}\left\{s_{n}q_{0n}\log(d_{n}q_{n})\right\}^{-\frac{3}{2}}\right]\), \(\omega\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{c_{n}^{2}}{\sqrt{n_{s}}}\prec\frac{c_{n}^{5}}{\sqrt{n}}\{s_{n}q_{0n}\log(d_{n}q_{n})\}^{\frac{3}{2}}=o(1)\), which can be seen from (D1). Therefore, \(\frac{a+\omega\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{c_{n}^{3}}{\sqrt{n_{s}}}}{1-\omega\left(\widetilde{C}\sqrt{\widetilde{q}_{0,sn}}+j\sqrt{n}\tau_{n}\right)\frac{c_{n}^{2}}{\sqrt{n_{s}}}}<a+j\times Cc_{n}^{3}\sqrt{\frac{(n\tau_{n}^{2})^{3}}{n_{s}}}\) for some absolute constant \(C>0\) and large enough \(n\). Hence, (S.25) holds if \(x>a+j\times Cc_{n}^{3}\sqrt{\frac{(n\tau_{n}^{2})^{3}}{n_{s}}}\), or equivalently if \(j\tau_{n}\leq e_{n}(\boldsymbol{\Theta}_{n},\boldsymbol{\Theta}_{0n})\), establishing the result.

## S.3 Hamiltonian Monte Carlo Algorithm

Let \(\Pi(\boldsymbol{\Theta})\) be the target density, where \(\Pi(\cdot)\) is differentiable with respect to \(\boldsymbol{\Theta}\in\mathbb{R}^{D}\). In HMC, a dynamical system is considered in which auxiliary "momentum" variables \(\mathbf{p}\in\mathbb{R}^{D}\) are introduced and the uncertain parameters \(\boldsymbol{\Theta}\) in the target distribution are treated as the variables for the displacement. The total energy (Hamiltonian function) of the dynamical system is defined by \(H(\boldsymbol{\Theta},\mathbf{p})=V(\boldsymbol{\Theta})+K(\mathbf{p})/2\), where its potential energy \(V(\boldsymbol{\Theta})=-\log\Pi(\boldsymbol{\Theta})\) and its kinetic energy \(K(\mathbf{p})=\mathbf{p}^{\mathrm{T}}\mathbf{M}^{-1}\mathbf{p}\) depends only on \(\mathbf{p}\) and some chosen positive definite "mass" matrix \(\mathbf{M}\in\mathbb{R}^{D\times D}\).
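Before stating the update equations, the scheme may be easiest to see in code. The following minimal sketch (Python/NumPy; the function name `hmc_step`, the user-supplied `V`/`grad_V`, and the diagonal mass matrix are our own illustrative choices, not part of the original text) implements one HMC transition via the leapfrog discretization and Metropolis correction summarized in (S.26)-(S.28) and Algorithm 3 below.

```python
import numpy as np

def hmc_step(theta, V, grad_V, dt, L, M_diag, rng):
    """One HMC transition: L leapfrog steps of Hamilton's dynamics,
    then a Metropolis accept/reject step. Here H = V + 0.5 * p' M^{-1} p,
    with V the potential energy -log Pi and M a diagonal mass matrix."""
    p = rng.normal(0.0, np.sqrt(M_diag))            # momentum draw p ~ N(0, M)
    H0 = V(theta) + 0.5 * np.sum(p**2 / M_diag)     # initial total energy
    th, g = theta.copy(), grad_V(theta)
    for _ in range(L):                              # leapfrog integration
        p -= 0.5 * dt * g                           # half-step for momentum
        th += dt * p / M_diag                       # full-step for position
        g = grad_V(th)
        p -= 0.5 * dt * g                           # half-step for momentum
    H1 = V(th) + 0.5 * np.sum(p**2 / M_diag)
    # accept the proposal with probability min{1, exp(H0 - H1)}
    return th if np.log(rng.uniform()) < H0 - H1 else theta
```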
The Hamiltonian dynamics preserve the distribution \(e^{-H(\boldsymbol{\Theta},\mathbf{p})}\), whose marginal distribution of \(\boldsymbol{\Theta}\) is \(\Pi(\cdot)\). Using Hamilton's equations, the evolution of \(\boldsymbol{\Theta}\), \(\mathbf{p}\) through "time" \(t\) is given by
\[\frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}=-\frac{\partial H}{\partial\boldsymbol{\Theta}}=-\nabla V(\boldsymbol{\Theta}),\qquad\frac{\mathrm{d}\boldsymbol{\Theta}}{\mathrm{d}t}=\frac{\partial H}{\partial\mathbf{p}}=\mathbf{M}^{-1}\mathbf{p}.\] (S.26)
If we start with \(\boldsymbol{\Theta}(0)\) distributed according to \(\Pi\) and draw a sample \(\mathbf{p}(0)\) from \(\mathrm{N}(\mathbf{0},\mathbf{M})\), the final value \(\boldsymbol{\Theta}(t)\) of the exact dynamics is again a sample from \(\Pi\). The leapfrog algorithm (Duane _et al._, 1987) is popularly used to approximately solve the differential equations in (S.26). For time step \(\delta t\), we have
\[\boldsymbol{\Theta}(t+\delta t)=\boldsymbol{\Theta}(t)+\delta t\mathbf{M}^{-1}\left[\mathbf{p}(t)-\frac{\delta t}{2}\nabla V\left\{\boldsymbol{\Theta}(t)\right\}\right],\] (S.27)
\[\mathbf{p}(t+\delta t)=\mathbf{p}(t)-\frac{\delta t}{2}\left[\nabla V\left\{\boldsymbol{\Theta}(t)\right\}+\nabla V\left\{\boldsymbol{\Theta}(t+\delta t)\right\}\right].\] (S.28)
The complete HMC sampler is summarized below for some choice of \(\mathbf{M}\), \(\delta t\), and \(L\) leapfrog steps; running it yields the MCMC samples \(\boldsymbol{\Theta}_{1},\ldots,\boldsymbol{\Theta}_{N}\).
```
1 Initialize \(\boldsymbol{\Theta}_{0}\) and simulate \(\mathbf{p}_{0}\sim\mathrm{N}(\mathbf{0},\mathbf{M})\).
2 for \(i=1,\ldots,N\) do
3 In iteration \(i\), let the most recent sample be \((\boldsymbol{\Theta}_{i-1},\mathbf{p}_{i-1})\), then do the following to simulate a new sample \((\boldsymbol{\Theta}_{i},\mathbf{p}_{i})\):
  1. Randomly draw a new momentum vector \(\mathbf{p}^{\prime}\) from \(\mathrm{N}(\mathbf{0},\mathbf{M})\).
  2. Initiate the leapfrog algorithm with \(\left\{\boldsymbol{\Theta}(0),\mathbf{p}(0)\right\}=(\boldsymbol{\Theta}_{i-1},\mathbf{p}^{\prime})\) and run the recursions (S.27)-(S.28) for \(L\) time steps to obtain a new candidate sample \((\boldsymbol{\Theta}^{\prime\prime},\mathbf{p}^{\prime\prime})=(\boldsymbol{\Theta}(L\delta t),\mathbf{p}(L\delta t))\).
  3. Set \((\boldsymbol{\Theta}_{i},\mathbf{p}_{i})=(\boldsymbol{\Theta}^{\prime\prime},\mathbf{p}^{\prime\prime})\) with probability \(\min\Big{\{}1,e^{H(\boldsymbol{\Theta}_{i-1},\mathbf{p}^{\prime})-H(\boldsymbol{\Theta}^{\prime\prime},\mathbf{p}^{\prime\prime})}\Big{\}}\); otherwise set \((\boldsymbol{\Theta}_{i},\mathbf{p}_{i})=(\boldsymbol{\Theta}_{i-1},\mathbf{p}^{\prime})\).
4 end for
```
**Algorithm 3** Hamiltonian Monte Carlo
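For concreteness, the following is a minimal Python sketch of Algorithm 3. It is not the implementation used for the analyses in this supplement; the target potential `V`, its gradient `grad_V`, and the tuning parameters `dt`, `L`, and `M` are user-supplied placeholders.

```python
import numpy as np

def hmc_sample(V, grad_V, theta0, n_samples=1000, dt=0.05, L=20, M=None, seed=0):
    """Minimal HMC sampler following Algorithm 3 with the leapfrog steps (S.27)-(S.28)."""
    rng = np.random.default_rng(seed)
    D = theta0.size
    M = np.eye(D) if M is None else M
    M_inv = np.linalg.inv(M)
    M_chol = np.linalg.cholesky(M)
    # Total energy H(theta, p) = V(theta) + p^T M^{-1} p / 2.
    H = lambda theta, p: V(theta) + 0.5 * p @ M_inv @ p
    theta, samples = theta0.astype(float).copy(), []
    for _ in range(n_samples):
        p = M_chol @ rng.standard_normal(D)          # step 1: fresh momentum ~ N(0, M)
        th_new, p_new = theta.copy(), p.copy()
        for _ in range(L):                           # step 2: L leapfrog steps
            p_half = p_new - 0.5 * dt * grad_V(th_new)
            th_new = th_new + dt * (M_inv @ p_half)
            p_new = p_half - 0.5 * dt * grad_V(th_new)
        # step 3: Metropolis correction, accept with probability min{1, exp(H_old - H_new)}.
        if np.log(rng.uniform()) < H(theta, p) - H(th_new, p_new):
            theta = th_new
        samples.append(theta.copy())
    return np.array(samples)

# Example usage on a toy target: a standard normal, where V(theta) = ||theta||^2 / 2.
draws = hmc_sample(lambda th: 0.5 * th @ th, lambda th: th, np.zeros(2))
print(draws.mean(axis=0), draws.var(axis=0))  # roughly [0, 0] and [1, 1]
```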
## S.4 Extended Simulation Studies

### S.4.1 True Simulation Settings

**Details on shared \(\boldsymbol{\Lambda}\):**

**Scenario 1:**: We fill a consecutive 25% of the elements in each column of \(\boldsymbol{\Lambda}\) with independent samples from \(\mathrm{Unif}(-2,2)\) and set the rest as zero. We choose the starting position of the consecutive non-zero elements randomly. To avoid having rows with all zero elements, we fill 5 randomly chosen elements from each null row with iid samples from \(\mathrm{Unif}(-2,2)\). A typical example for a \(200\times 20\)\(\boldsymbol{\Lambda}\) is shown in the leftmost panel of Figure S.1.

**Scenario 2:**: We separately fill the first and second consecutive \(d/2\) values in each column of \(\boldsymbol{\Lambda}\). For each half we follow the same strategy as scenario 1 and eliminate entirely null rows using the aforementioned scheme. A typical example for a \(200\times 20\)\(\boldsymbol{\Lambda}\) is shown in the middle panel of Figure S.1. From the correlation structures shown under the "_True_" panel of Figure 2 in the main paper, it can be seen that the above two scenarios induce very different correlation structures in the marginal distribution.

**Scenario 3:**: We randomly choose and fill 25% of elements in each column of \(\boldsymbol{\Lambda}\) with independent samples from \(\mathrm{Unif}(-2,2)\) and set the rest as zero. We eliminate null rows using the same strategy as scenario 1. A typical example for a \(200\times 20\)\(\boldsymbol{\Lambda}\) is shown in the rightmost panel of Figure S.1.

**Details on study-specific loadings:**

**Slight misspecification:**: We set the study-specific loading \(\boldsymbol{\Phi}_{s}=\boldsymbol{\Lambda}\mathbf{A}_{s}+\mathbf{E}_{s}\), where we generate \(\mathbf{A}_{s}=((a_{s,j,h}))\) as \(a_{s,j,h}\stackrel{{\mathrm{iid}}}{{\sim}}\mathrm{N}(0,0.25^{2})\) and \(\mathbf{E}_{s}=((e_{s,j,h}))\) as \(e_{s,j,h}\stackrel{{\mathrm{iid}}}{{\sim}}\mathrm{N}(0,0.10^{2})\).

**Complete misspecification:**: We generate the study-specific loading \(\mathbf{\Phi}_{s}=((\phi_{s,j,h}))\) as \(\phi_{s,j,h}\overset{\text{iid}}{\sim}\text{N}(0,0.65^{2})\).

**Details on residual variance \(\mathbf{\Delta}\):** We set the diagonals of \(\mathbf{\Delta}\) as \(0.50\).

### S.4.2 Recovering Shared Loading Matrix

In addition to the discussion in Section 5.1 in the main paper, here we include an extended simulation study showcasing recovery of the shared factor loading matrix \(\mathbf{\Lambda}\). We apply the post-processing strategy discussed in Section 2.1.2 on the MCMC samples to align them with respect to a common orthogonal rotation and then use the mean of the post-processed MCMC samples as a point estimate. The uncertainty quantification via the posterior samples guides inference on whether entries of \(\mathbf{\Lambda}=((\lambda_{j,h}))\) are \(0\) or not; following common practice in Bayesian inferences on sparsity patterns in matrices (Ksheera Sagar _et al._, 2021), we set \(\lambda_{j,h}=0\) if its \(95\%\) posterior credible interval includes \(0\).

**Comparison with alternate approaches:** Although the post-processing scheme aligns the MCMC samples of \(\mathbf{\Lambda}\) with respect to a common orthogonal rotation, it is not immediately obvious whether the estimate \(\widehat{\mathbf{\Lambda}}\) is well-aligned with the simulation truth. To assess this, we regress the columns of the true \(\mathbf{\Lambda}\) on \(\widehat{\mathbf{\Lambda}}\) and quantify accuracy in terms of the coefficient of determination \(R^{2}\). To this end, we regress \(\boldsymbol{\lambda}_{j}\) on \(\widehat{\mathbf{\Lambda}}\) for all \(j=1,\ldots,q\), where \(\boldsymbol{\lambda}_{j}\) denotes the \(j\)th column of the ground truth. Considering \(\widehat{\mathbf{\Lambda}}\) as the predictor matrix in this fashion, we expect \(R^{2}\) to be close to 1 if the true \(\mathbf{\Lambda}\) is well-learned up to orthogonal rotations, since the coefficient of determination is invariant under (non-singular) matrix multiplication of the predictors. We report the median of the \(R^{2}\) values obtained from the \(q\) fitted regressions, and show the boxplots of the median \(R^{2}\) values from 25 independent replicates in Figure S.2, across all simulation settings together with estimates under peer methods.
Note that the post-processing strategy in Section 2.1.2 does not depend on any prior specification or knowledge of \(\mathbf{\Lambda}\); we use the same approach on samples from peer methods to obtain point estimates of \(\mathbf{\Lambda}\). In Figure S.2, we see that even in the completely misspecified case, the performance of our proposed SUFA is comparable with the other approaches, and evidently becomes better in higher dimensions. This is consistent with the observations reported in Section 5.1 in the main paper. The better performance of SUFA compared to B-MSFA is most likely due to the elimination of the information-switching issue, allowing improved learning of the shared structure.

## S.5 Supplementary to Section 5.2

### S.5.1 Pre-processing the Data

The normalized dataset has more than \(21,000\) gene expressions from \(628\) and \(146\) immune cells in the two microarray datasets, respectively, and more than \(49,000\) genes from \(156\) cells in the bulk RNAseq dataset. We applied a \(\log_{2}\) transformation to the data, filtered the top \(5\%\) of genes with the highest variances from each of the datasets using the genefilter R package (Gentleman _et al._, 2020), and considered the intersection of the filtered genes in our analysis, resulting in a \(d=474\) dimensional problem. Since different cell types exhibit very different gene expression profiles, we centered the gene expressions separately within each cell type.
2306.14888
Percolation in lattice $k$-neighbor graphs
We define a random graph obtained via connecting each point of $\mathbb{Z}^d$ independently to a fixed number $1 \leq k \leq 2d$ of its nearest neighbors via a directed edge. We call this graph the directed $k$-neighbor graph. Two natural associated undirected graphs are the undirected and the bidirectional $k$-neighbor graph, where we connect two vertices by an undirected edge whenever there is a directed edge in the directed $k$-neighbor graph between them in at least one, respectively precisely two, directions. In these graphs we study the question of percolation, i.e., the existence of an infinite self-avoiding path. Using different kinds of proof techniques for different classes of cases, we show that for $k=1$ even the undirected $k$-neighbor graph never percolates, but the directed one percolates whenever $k \geq d+1$, $k \geq 3$ and $d \geq 5$, or $k \geq 4$ and $d=4$. We also show that the undirected $2$-neighbor graph percolates for $d=2$, the undirected $3$-neighbor graph percolates for $d=3$, and we provide some positive and negative percolation results regarding the bidirectional graph as well. A heuristic argument for high dimensions indicates that this class of models is a natural discrete analogue of the $k$-nearest-neighbor graphs studied in continuum percolation, and our results support this interpretation.
Benedikt Jahnel, Jonas Köppl, Bas Lodewijks, András Tóbiás
2023-06-26T17:55:17Z
http://arxiv.org/abs/2306.14888v2
# Percolation in lattice \(k\)-neighbor graphs

###### Abstract.

We define a random graph obtained via connecting each point of \(\mathbb{Z}^{d}\) independently to a fixed number \(1\leq k\leq 2d\) of its nearest neighbors via a directed edge. We call this graph the _directed \(k\)-neighbor graph_. Two natural associated undirected graphs are the _undirected_ and the _bidirectional_ \(k\)-neighbor graph, where we connect two vertices by an undirected edge whenever there is a directed edge in the directed \(k\)-neighbor graph between them in at least one, respectively precisely two, directions. In these graphs we study the question of percolation, i.e., the existence of an infinite self-avoiding path. Using different kinds of proof techniques for different classes of cases, we show that for \(k=1\) even the undirected \(k\)-neighbor graph never percolates, but the directed one percolates whenever \(k\geq d+1\), \(k\geq 3\) and \(d\geq 5\), or \(k\geq 4\) and \(d=4\). We also show that the undirected \(2\)-neighbor graph percolates for \(d=2\), the undirected \(3\)-neighbor graph percolates for \(d=3\), and we provide some positive and negative percolation results regarding the bidirectional graph as well. A heuristic argument for high dimensions indicates that this class of models is a natural discrete analogue of the \(k\)-nearest-neighbor graphs studied in continuum percolation, and our results support this interpretation.

_Keywords and phrases._ Lattice \(k\)-neighbor graphs, directed \(k\)-neighbor graph, undirected \(k\)-neighbor graph, bidirectional \(k\)-neighbor graph, \(1\)-dependent percolation, oriented percolation, negatively correlated percolation models, connective constant, planar duality, coexistence of phases.

_MSC 2020._ 60K35, 82B43

## 1. Introduction

In recent years, the study of \(k\)-nearest-neighbor type models in continuum percolation has attracted significant attention. Haggstrom and Meester [12] introduced the concept of the _undirected_ \(k\)-nearest-neighbor graph, where the vertex set consists of a homogeneous Poisson point process in \(\mathbb{R}^{d}\). In this graph, two points are connected by an edge if at least one of them belongs to the \(k\) nearest neighbors of the other. A _cluster_ refers to a maximal connected component in this graph, and the graph is said to _percolate_ if it contains an infinite (or unbounded) cluster. Since the percolation probability does not depend on the intensity of the Poisson point process, the only parameters affecting the percolation behavior are \(k\) and \(d\). Haggstrom and Meester showed that for \(k=1\), the model does not percolate in any dimension, while for large enough \(d\), percolation occurs when \(k=2\). Moreover, they established that for any \(d\geq 2\) there exists a \(k\in\mathbb{N}\) such that the graph percolates. This initial study of continuum \(k\)-nearest-neighbor graphs was later extended by Balister and Bollobas [1], who investigated three possible senses of percolation in the model where each vertex is connected to its \(k\) nearest neighbors by a _directed_ edge: _in-percolation_ (resp. _out-percolation_), which occurs by definition whenever some point of the point process exhibits an infinite incoming (resp. outgoing) path ending (resp. starting) at it, and _strong percolation_, which means that there exists an infinite strongly connected component in the graph, i.e., a component in which from any point there exists a directed path to any other point.
Additionally, they introduced another undirected graph, called the _bidirectional_ \(k\)-nearest-neighbor graph, where one connects two vertices whenever they are mutually among the \(k\) nearest neighbors of each other. For the two-dimensional case, they verified percolation in the undirected graph for \(k\geq 11\), in the directed graph in all the three senses (in-, out- and strong percolation) for \(k\geq 13\) and in the bidirectional graph for \(k\geq 15\). Recently, Jahnel and Tobias [14] showed that in the bidirectional graph there is no percolation for \(k=2\) in any dimension, even if the underlying point process is not a Poisson point process but an arbitrary deletion-tolerant and stationary point process (satisfying some basic nondegeneracy conditions). Their proof exploits the simplicity of the structure of the bidirectional graph for \(k=2\), which has degrees bounded by \(2\). Proving absence of percolation for \(k=3,4\) seems to be out of reach at the moment even in the Poisson case, and it is also not entirely clear whether these assertions hold in all dimensions or only for \(d=2\).

In this manuscript, we introduce and analyze a discrete counterpart of the continuum \(k\)-nearest-neighbor model, aiming to gain a deeper understanding of its underlying structure and fundamental properties. By taking a step back and considering this discrete version we hope to shed some light on its behavior in a more controlled setting. For the directed \(k\)-neighbor graph (\(k\)-DnG), defined on the lattice \(\mathbb{Z}^{d}\), each vertex is connected precisely to \(k\) of its \(2d\) nearest neighbors in the \(\ell_{1}\)-metric, and these connections are chosen independently and uniformly for each vertex. By connecting nearest-neighbor pairs with an undirected edge if there is at least one directed edge between them in the \(k\)-DnG (respectively with a bidirectional edge if there are two directed edges between them), we obtain the undirected (\(k\)-UnG) respectively bidirectional (\(k\)-BnG) \(k\)-neighbor graph. At least for high dimensions \(d\) this discrete model can be expected to behave similarly to its continuum counterpart. Indeed, if we consider the continuum nearest-neighbor percolation model with \(k=2\), where the intensity of the underlying Poisson point process is such that the expected number of points in a unit ball equals one, and let \(Y_{i}\) denote the position of the \(i\)-th nearest neighbor for \(i\in\{1,2\}\), then Haggstrom and Meester show that \(|Y_{i}|\) converges in probability to one as \(d\) tends to infinity, as well as that the conditional distribution of \(Y_{i}\), given \(|Y_{i}|=r\), is uniform on the sphere \(\{x\in\mathbb{R}^{d}:|x|=r\}\) [10, Lemma 3.2]. Moreover, the volume of the intersection of two spheres with radii \(r_{1},r_{2}\in(0.9,1.1)\), whose centers are at least \(0.9\) units apart, is negligible compared to the volume of either sphere [10, Lemma 3.3]. As a result, for large dimensions the continuum nearest-neighbor graph has connections at distance around \(1\), which are established almost independently among different pairs of vertices. As such, an approximation of this graph on \(\mathbb{Z}^{d}\) should yield similar behavior when \(d\) is large. Let us note that the \(k\)-BnG can be viewed as a \(1\)-dependent Bernoulli bond percolation model, where each edge is "open" (included in the \(k\)-BnG) with probability \(p=k^{2}/(4d^{2})\).
In other words, for any given edge, it is open with probability \(p\), and the events of edges being open are independent when the edges are pairwise non-adjacent. Similarly, the \(k\)-UnG follows the same pattern with \(p=(1-k/(4d))k/d\). It is worth noting that these lattice \(k\)-neighbor graphs exhibit intriguing negative correlation properties, setting them apart from classical models such as the random cluster model [1]: the presence of an edge in the models under consideration here actually decreases the likelihood of neighboring edges being present. Throughout this paper, our main focus revolves around investigating percolation phenomena in all three variations of the model: \(k\)-DnG, \(k\)-UnG, and \(k\)-BnG. Specifically, we explore the presence and absence of percolation in each variant, unraveling the intricate behavior of these models.

### Organization of the manuscript

The remainder of this article is organized as follows. We first collect the necessary notation and formulate our main results in Section 2. In Section 3 we then discuss our findings and mention some related conjectures and open questions. The proofs of our main results can all be found in Section 4.

## 2. Setting and main results

Consider the \(d\)-dimensional hypercubic lattice \(\mathbb{Z}^{d}\). We are interested in the percolation behavior of the _\(k\)-neighbor graph_ (\(k\)-nG) in which each vertex independently chooses uniformly at random \(k\leq 2d\) of its \(2d\) nearest neighbors. We distinguish three different types of \(k\)-nGs:
1. The _directed \(k\)-nG_ (\(k\)-DnG) in which we open a directed edge from the vertex to each of the \(k\) chosen neighbors.
2. The _undirected \(k\)-nG_ (\(k\)-UnG) in which we open an undirected edge from the vertex to each of the \(k\) chosen neighbors.
3. The _bidirectional \(k\)-nG_ (\(k\)-BnG) in which we open an undirected edge between two vertices whenever both choose the other one as one of their \(k\) neighbors.
We are interested in the behavior of the percolation probabilities
\[\theta^{\mathrm{D}}(k,d)=\mathbb{P}(o\rightsquigarrow\infty\text{ in the $k$-DnG on $\mathbb{Z}^{d}$}), \tag{2.1}\]
respectively \(\theta^{\mathrm{U}}(k,d)\) and \(\theta^{\mathrm{B}}(k,d)\), where \(o\rightsquigarrow\infty\) represents the event that there exists a path along open edges from the origin to infinity. Let us note that the percolation probabilities are clearly non-decreasing functions in \(k\). The dependence on the dimension \(d\) is more subtle and - in contrast to many other percolation models - not monotone. While the outdegree in the \(k\)-DnG is almost-surely equal to \(k\), the degree of a vertex in the \(k\)-BnG is a binomial random variable with parameters \(k\) and \(k/2d\), i.e., the expected degree is given by \(k^{2}/2d\), which is less than \(k\) unless \(k=2d\). In case of the \(k\)-UnG, the degree is distributed according to \(k+D\), where \(D\) is a binomial random variable with parameters \(2d-k\) and \(k/2d\), and hence the expected degree is given by \(k(4d-k)/(2d)\). A small simulation sketch of all three graph variants is given below.
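To make the three definitions concrete, the following short Python sketch (ours, not part of the paper) samples the \(k\)-DnG on a finite box and derives the associated \(k\)-UnG and \(k\)-BnG; ignoring choices of vertices outside the box is a truncation of this sketch, not a feature of the model.

```python
import random
from collections import deque
from itertools import product

def sample_neighbor_graphs(n, d, k, seed=0):
    """Sample the k-DnG on the box {-n,...,n}^d and derive the k-UnG and k-BnG from it."""
    rng = random.Random(seed)
    units = [tuple(s if i == j else 0 for j in range(d)) for i in range(d) for s in (1, -1)]
    box = list(product(range(-n, n + 1), repeat=d))
    dng = {}
    for x in box:
        # Each vertex independently picks k of its 2d nearest neighbors uniformly at random.
        dng[x] = {tuple(a + b for a, b in zip(x, e)) for e in rng.sample(units, k)}
    ung = {x: set(dng[x]) for x in box}   # at least one directed edge
    bng = {x: set() for x in box}         # precisely two directed edges
    for x in box:
        for y in dng[x]:
            if y in dng:
                ung[y].add(x)
                if x in dng[y]:
                    bng[x].add(y)
                    bng[y].add(x)
    return dng, ung, bng

def origin_reaches_boundary(adj, n, d):
    """BFS from the origin along (out-)edges; True if the boundary of the box is reached."""
    o = (0,) * d
    seen, queue = {o}, deque([o])
    while queue:
        x = queue.popleft()
        if max(abs(c) for c in x) == n:
            return True
        for y in adj[x]:
            if y in adj and y not in seen:
                seen.add(y)
                queue.append(y)
    return False

# Crude Monte Carlo estimate of P(o -> boundary of the box) in the 2-DnG on Z^2.
hits = sum(origin_reaches_boundary(sample_neighbor_graphs(20, 2, 2, seed=s)[0], 20, 2)
           for s in range(100))
print(f"estimated crossing probability: {hits / 100:.2f}")
```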
### Results for the directed k-neighbor graph

As a first step, we study the cases where the occurrence of percolation is deterministic. On the one hand, choosing only one neighbor is never enough for percolation, and on the other hand, choosing a sufficiently large number of neighbors leads to the almost-sure existence of a path connecting the origin to infinity.

**Proposition 2.1**.: 
1. _For all_ \(d\geq 1\) _it holds that_ \(\theta^{\mathrm{D}}(1,d)=0\)_._
2. _Whenever_ \(k\geq d+1\) _we have_ \(\theta^{\mathrm{D}}(k,d)=1\)_._

The proof can be found in Section 4.1. Let us now turn our attention towards the intermediate supercritical percolation phase. Here the behavior is more subtle and we have to restrict ourselves to sufficiently high dimensions \(d\).

**Theorem 2.2**.: _If \(d\geq 4\) and \(k\geq 4\) or if \(d\geq 5\) and \(k=3\) we have \(\theta^{\mathrm{D}}(k,d)>0\)._

The proof can be found in Section 4.1 and mainly relies on a variation of the technique established in [1] plus precise estimates on the probability that two independent simple random walks meet each other and then take a common step. Last but not least, we come to the question of monotonicity. From Proposition 2.1 and Theorem 2.2 it is already clear that for fixed \(k>1\), the percolation probability \(\theta^{\mathrm{D}}(k,d)\) is not non-decreasing in the dimension. But we can deduce the following monotonicity along diagonals of the parameter space.

**Theorem 2.3**.: _For all \(k,d\geq 1\) we have that \(\theta^{\mathrm{D}}(k+1,d+1)\geq\theta^{\mathrm{D}}(k,d)\)._

The proof of Theorem 2.3 can be found in Section 4.1 and is based on a coupling argument. We will see that it is essential that _both_ \(k\) and \(d\) increase by one to make the coupling work. For example, issues arise when trying to deduce a similar coupling between the settings \(k=d=2\) and \(k=2,d=3\). Though we expect that percolation occurs in both settings, a similar coupling would give rise to a two-dimensional model where some vertices have only out-degree one. In such a case, it seems that the additional edges that appear in the 2-DnG in two dimensions are pivotal in the sense that they enable percolation to occur, whereas it may not occur without them. Still, we expect that if we restrict ourselves to parameters \(k\leq d\) a similar relation between the \(d\)-dimensional and \((d+1)\)-dimensional settings should exist. We state this in Conjecture 3.2 below.

### Results for the undirected k-neighbor graph

To state our results and proofs for the \(k\)-UnG we first introduce the following notation. Let us denote by \(c(d)\) the _connective constant_ of \(\mathbb{Z}^{d}\), see e.g., [1], which is defined by
\[c(d):=\lim_{n\to\infty}c_{n}(d)^{1/n},\]
where \(c_{n}(d)\) is the number of self-avoiding paths of length \(n\) in the \(d\)-dimensional hypercubic lattice that start at the origin. Via a quick subadditivity argument one can show that the limit \(c(d)\) actually exists and satisfies \(d<c(d)<2d-1\). For small \(n\), the counts \(c_{n}(d)\) can also be computed exactly by exhaustive enumeration, as sketched below.
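The following Python sketch (ours, for illustration only) counts self-avoiding walks on \(\mathbb{Z}^{d}\) by depth-first search; the printed values \(c_{n}(2)^{1/n}\) approach \(c(2)\approx 2.638\) only slowly from above.

```python
def count_saws(n, d=2):
    """Count self-avoiding walks of length n on Z^d starting at the origin, i.e., c_n(d)."""
    steps = [tuple(s if i == j else 0 for j in range(d)) for i in range(d) for s in (1, -1)]

    def extend(visited, last, remaining):
        if remaining == 0:
            return 1
        total = 0
        for e in steps:
            nxt = tuple(a + b for a, b in zip(last, e))
            if nxt not in visited:
                visited.add(nxt)
                total += extend(visited, nxt, remaining - 1)
                visited.remove(nxt)
        return total

    origin = (0,) * d
    return extend({origin}, origin, n)

for n in range(1, 11):
    c_n = count_saws(n)
    print(n, c_n, round(c_n ** (1 / n), 4))  # c_n(2) = 4, 12, 36, 100, 284, ...
```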
It is clear that \(\theta^{\mathrm{U}}(k,d)\geq\theta^{\mathrm{D}}(k,d)\) for any \(k,d\in\mathbb{N}\) because any directed edge in the \(k\)-DnG corresponds to an undirected edge in the \(k\)-UnG (and at most two directed edges can correspond to the same undirected edge). This way, Theorem 2.2 and Part ii of Proposition 2.1 also hold with \(\theta^{\mathrm{D}}\) replaced by \(\theta^{\mathrm{U}}\) everywhere. Moreover, we can actually now also deal with lower dimensions.

**Theorem 2.4**.: _We have \(\theta^{\mathrm{U}}(2,2)>0\) and \(\theta^{\mathrm{U}}(3,3)>0\)._

However, even in the undirected sense, \(k=1\) is still not sufficient for percolation in any dimension \(d\in\mathbb{N}\).

**Proposition 2.5**.: _We have that \(\theta^{\mathrm{U}}(1,d)=0\) for all \(d\geq 1\)._

### Results for the bidirectional \(k\)-neighbor graph

Clearly we have
\[\theta^{\mathrm{B}}(k,d)\leq\theta^{\mathrm{D}}(k,d),\]
so we already know that if each vertex chooses a single neighbor, the bidirectional model will not percolate. A stronger result holds for this model, however. The following lemma presents an upper bound for \(k\) in terms of \(d\) for which we can verify that the bidirectional model does not percolate.

**Lemma 2.6**.: _For all \(k\) such that \(k(k-1)<2d(2d-1)/c(d)\) we have that \(\theta^{\mathrm{B}}(k,d)=0\)._

Let us note that this result can be interpreted in two ways. First, for given \(d\), absence of percolation is guaranteed for sufficiently small \(k\); for example \(\theta^{\mathrm{B}}(2,d)=0\) for any \(d\geq 2\) (while it is clear that \(\theta^{\mathrm{B}}(2,1)=1\)), \(\theta^{\mathrm{B}}(3,d)=0\) for any \(d\geq 3\) and \(\theta^{\mathrm{B}}(4,d)=0\) for any \(d\geq 6\), as can be seen from the lower bounds on \(c(d)\) by [13]. However, also for fixed \(k\), since \(2d(2d-1)/c(d)>2d\), for sufficiently large \(d\) there is no percolation. This is due to the fact that in high dimensions, it is unlikely for two neighboring vertices to pick the same connecting edge. This behavior is rather different from the case of the \(k\)-DnG and the \(k\)-UnG, where there is percolation for all \(k\geq 3\) in all sufficiently high dimensions. The approach of verifying percolation restricted to a two-dimensional plane, which we used in order to derive \(\theta^{\mathrm{U}}(3,3)>0\) in the proof of Theorem 2.4, is also applicable in the bidirectional case, as the following proposition shows.

**Proposition 2.7**.: _We have \(\theta^{\mathrm{B}}(k,d)>0\) whenever_
\[k>d\sqrt{4\Big{(}1-1/c(2)\Big{)}}. \tag{2.2}\]

An application of the upper bound \(c(2)\leq 2.679192495\) from [14] for the connective constant of \(\mathbb{Z}^{2}\) immediately yields the following corollary.

**Corollary 2.8**.: _We have \(\theta^{\rm B}(k,d)>0\) whenever_
\[k>d\sqrt{4\big{(}1-1/2.679192495\big{)}}\approx 1.583355d. \tag{2.3}\]

This corollary allows us to verify percolation, e.g., for the \((2d-1)\)-BnG for \(d\geq 3\), for the \((2d-2)\)-BnG for \(d\geq 5\), for the \((2d-3)\)-BnG for \(d\geq 8\), and for the \((2d-4)\)-BnG for \(d\geq 10\). (Thus, the smallest-dimensional positive percolation result that we obtain is that \(\theta^{\rm B}(5,3)>0\).) Note that, compared to \(d\), we always need rather large \(k\) to percolate. In high dimensions we can improve the ratio slightly by using that the \(k\)-BnG model (just as the \(k\)-UnG model) in any dimension \(d\) is a \(1\)-dependent Bernoulli bond percolation model where each edge is open with probability \(p=k^{2}/(4d^{2})\). By using the results from [1] we obtain the following improved asymptotic result.

**Proposition 2.9**.: _For any \(\alpha>2\sqrt{0.5847}\approx 1.5293\), we have \(\theta^{\rm B}(\alpha d,d)>0\) for all \(d\) sufficiently large._

Of course, the \(k\)-UnG model is also a \(1\)-dependent bond-percolation model, but the same approach unfortunately does not yield any new results in this case. Indeed, here each edge is open with probability \(p=k(4d-k)/(4d^{2})\). Thus, for \(k\leq d\) it always holds that \(p\leq 3/4<0.8457\), while for \(k>d\) we already know that \(\theta^{\mathrm{D}}(k,d)=1\), and for \(d\geq 4\) and \(k\geq 4\) that \(\theta^{\mathrm{U}}(k,d)>0\) by Theorem 2.2.

### Summary

To close this section off, let us summarize our results in Table 1.

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \(d\,\backslash\,k\) & 1 & 2 & 3 & 4 & 5 & 6 & \(\geq 7\) \\ \hline 1 & no/no/no & yes/yes/yes & - & - & - & - & - \\ \hline 2 & no/no/no & no/open/yes & open/yes/yes & yes/yes/yes & - & - & - \\ \hline 3 & no/no/no & no/open/open & no/open/yes & open/yes/yes & yes/yes/yes & yes/yes/yes & - \\ \hline 4 & no/no/no & no/open/open & no/open/open & open/yes/yes & open/yes/yes & open/yes/yes & yes/yes/yes \\ \hline \(\geq 5\) & no/no/no & no/open/open & no/yes/yes & open/yes/yes & open/yes/yes & open/yes/yes & open/yes/yes \\ \hline \end{tabular}
\end{table}
Table 1. Percolation results and open cases for the BnG/DnG/UnG. For given \(k,d\), ‘yes’ means that the given graph percolates, ‘no’ means that it does not, while ‘open’ means that the given case is (at least partially) open.

## 3. Outlook and open problems
Although we managed to prove the occurrence or absence of percolation in a wide range of cases for the \(k\)-\(\square\)nG model (\(\square\in\{{\rm D},{\rm U},{\rm B}\}\)), there are still many cases where the techniques used did not provide any conclusive results, especially in low dimensions. In this section we comment on a few of those that we deem to be interesting and also discuss some further conjectures on the behavior of discrete \(k\)-neighbor graphs.

### Directed \(k\)-neighbor graphs

For the \(k\)-DnG the most pressing open question is whether it percolates for \(k=2\) in \(d=2\) and, consequently, whether it percolates in every dimension \(d\). According to simulations this seems to be the case, see Figure 1, and, moreover, the proportion of vertices we can reach from the origin is actually quite high. This strong numerical evidence and the heuristic that it is easier for the DnG to percolate in higher dimensions suggest the following conjecture.

**Conjecture 3.1**.: _The \(2\)-DnG percolates in all dimensions \(d\in\mathbb{N}\)._

Note that the complement of the \(k\)-DnG in \(d\) dimensions with respect to \(\mathbb{Z}^{d}\), i.e., the graph with vertex set \(\mathbb{Z}^{d}\) and the edge set formed by all nearest-neighbor edges of \(\mathbb{Z}^{d}\) not contained in the \(k\)-DnG, is in distribution a \((2d-k)\)-DnG. So if Conjecture 3.1 holds, then this would be an example of phase coexistence in a self-complementary (directed) graph in two dimensions. With regards to the monotonicity statement Theorem 2.3 we expect a corresponding statement also to hold along the diagonal.

**Conjecture 3.2**.: _For all \(k,d\geq 2\) such that \(k\leq d\), we have that \(\theta^{\mathrm{D}}(k,d+1)\geq\theta^{\mathrm{D}}(k,d)\)._

One possible way to prove Conjecture 3.1 would then be to verify percolation in \(d=2\) and to prove Conjecture 3.2, but at least for higher dimensions \(d\) there might be simpler arguments making use of intersection properties of random walks to show that \(\theta^{\mathrm{D}}(2,d)>0\). Of course this would also imply that we have \(\theta^{\mathrm{U}}(k,d)>0\) for all \(k,d\geq 2\).

### Bidirectional \(k\)-neighbor graphs

For the bidirectional model the most intriguing question seems to be: what is the smallest \(k=k(d)\) such that \(\theta^{\mathrm{B}}(k(d),d)>0\) holds for all \(d\) (or at least for all \(d\) sufficiently large)? Heuristically, one would expect that we do not percolate when the expected degree of a vertex is less than \(2\), because the chance of backtracking, i.e., of not finding a new edge, is then too large.
However, as soon as we have an expected degree of at least \(2\), one could imagine that this is sufficient to percolate, since we usually gain at least one new edge in each step while walking along a path. At least in low dimensions (which are still accessible for numerical computations) the numerical tests seem to support the following conjecture, see Figures 2 and 3.

Figure 1. Samples of the \(2\)-DnG in boxes with side-length \(n\). The colored vertices are the ones contained in the connected component of the origin.

Figure 2. Samples of the \(3\)-BnG in boxes with side-length \(n\). The colored vertices are the ones contained in the connected component of the origin.

**Conjecture 3.3**.: _The \(k\)-BnG percolates in dimension \(d\) if and only if \(k\geq 2\sqrt{d}\)._

In particular this would imply that the \(d\)-BnG does _not_ percolate for dimensions \(d=2,3\) but percolates for \(d\geq 4\), and that the \(3\)-BnG percolates in \(d=2\).

### XOR percolation

Let us briefly consider another variant of lattice \(k\)-neighbor graphs, namely the _exclusively unidirectional \(k\)-nG_ (\(k\)-XnG), in which we open an undirected edge between two vertices whenever exactly one of them chooses the other one as one of its \(k\) chosen neighbors. The letter X refers to the "XOR" (exclusive "or") in the edge-drawing rule. Although the parameter \(k\) of the \(k\)-XnG can range between \(1\) and \(2d\) (just as for the \(k\)-BnG, \(k\)-DnG and \(k\)-UnG), it actually suffices to consider \(1\leq k\leq d\), thanks to the following lemma.

**Lemma 3.4**.: _For any \(1\leq k\leq 2d-1\), the \(k\)-XnG equals the \((2d-k)\)-XnG in distribution._

Proof.: The statement follows from the fact that the \(k\)-XnG contains precisely the undirected nearest-neighbor edges of \(\mathbb{Z}^{d}\) that are included in the \(k\)-DnG in one direction and in the complementary \((2d-k)\)-DnG in the other direction, and the same holds for the \((2d-k)\)-XnG.

Figure 3. Probability to reach the boundary of boxes \(S_{n}\) of varying side-length \(n\) in the \(d\)-BnG for \(d\in\{2,3,4,5\}\) (Sample size: \(10000\)).

Of course, Lemma 3.4 also holds for \(k=2d\) if we define the \(0\)-XnG as \(\mathbb{Z}^{d}\) with no edges, from which it is clear that \(\theta^{\mathrm{X}}(2d,d)=0\) for all \(d\), where we write \(\theta^{\mathrm{X}}\) for the percolation probability of the XnG analogously to (2.1). Hence, we can limit our analysis to the cases \(1\leq k\leq d\). It is also easy to see that, since the \(k\)-XnG is a subgraph of the \(k\)-UnG, we have \(\theta^{\mathrm{X}}(1,d)=0\) for all \(d\geq 1\), and thus by Lemma 3.4, \(\theta^{\mathrm{X}}(2d-1,d)=0\) for all \(d\). Moreover, the \(2\)-XnG is also not percolating in one dimension, so the first non-trivial case is \(k=d=2\), see Figure 4 for some simulations. Since the probability of an edge being open in the \(k\)-XnG is given by \(k(2d-k)/(2d^{2})\), which is maximized at \(k=d\) with maximum value \(1/2\), the \(k\)-XnG seems to be critical in dimension \(d=k\). Moreover, for \(k\geq 2\), and \(e_{1},e_{2}\) edges that share a common vertex, we have that
\[\mathbb{P}(e_{1},e_{2}\text{ open})=\frac{(2d-k)(k(4k-1)(2d-k)-k^{2})}{8d^{3}(2d-1)}\leq\frac{k^{2}(2d-k)^{2}}{4d^{4}}=\mathbb{P}(e_{1}\text{ open})^{2}, \tag{3.1}\]
with equality if and only if \(k=d\). Thus the model features again negative correlations that decrease in \(k\in\{2,\ldots,d\}\) and in particular, for \(k=d\), we even have independence; a short exact-enumeration check of (3.1) is given below.
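As a sanity check, the following Python snippet (ours, not from the paper) verifies (3.1) by exact enumeration over the \(k\)-subsets chosen by the common vertex, using that each of the two outer endpoints independently picks a fixed neighbor with probability \(k/(2d)\).

```python
from fractions import Fraction
from itertools import combinations

def xng_pair_probability(d, k):
    """P(e1, e2 open) in the k-XnG for edges e1 = (u, v), e2 = (v, w) sharing vertex v."""
    p_pick = Fraction(k, 2 * d)  # probability that a fixed vertex picks a fixed neighbor
    total, count = Fraction(0), 0
    for chosen in combinations(range(2 * d), k):  # v's k-subset; labels: u = 0, w = 1
        # An edge is open iff exactly one endpoint picks it, so condition on v's choices.
        p_e1 = 1 - p_pick if 0 in chosen else p_pick
        p_e2 = 1 - p_pick if 1 in chosen else p_pick
        total += p_e1 * p_e2
        count += 1
    return total / count

for d, k in [(2, 2), (3, 2), (3, 3), (4, 3)]:
    joint = xng_pair_probability(d, k)
    p_open = Fraction(k * (2 * d - k), 2 * d * d)
    print(d, k, joint, p_open ** 2, joint <= p_open ** 2)  # equality exactly when k = d
```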
These negative correlations, together with the fact that, for fixed \(d\), the probability of an edge being open in the \(k\)-XnG is increasing in \(k\in\{1,\ldots,d\}\), allow us to formulate the following conjecture.

**Conjecture 3.5**.: _For fixed \(d\), \(k\mapsto\theta^{\mathrm{X}}(k,d)\) is strictly monotone increasing in \(k\in\{2,\ldots,d\}\)._

Despite the above heuristic justification, it is not clear to us how to verify this conjecture. In particular, we are not aware of a coupling between the \(k\)-XnG and the \(l\)-XnG for \(1\leq k<l\leq d\), while the \(k\)-DnG (respectively \(k\)-UnG, \(k\)-BnG) is a subgraph of the \(l\)-DnG (resp. \(l\)-UnG, \(l\)-BnG) whenever \(1\leq k\leq l\leq 2d\). Let us finally mention that, in view of the first-moment method for the proof of existence of subcritical regimes, see for example Lemma 2.6, we would need to establish \(\mathbb{P}(e_{1}\text{ open}|e_{2}\text{ open})c(d)<1\), which is unfortunately not true for any \(2\leq k\leq d\). Also, none of the methods which we have used to prove existence of supercritical percolation regimes seem to apply to the \(k\)-XnG, so determining whether it actually percolates for certain choices of \(d\) and \(k\) remains a goal for our future research. Focusing on the case \(k=2\), we have performed simulations that suggest absence of percolation for \(d=2\), see Figures 4(a) and 4(b). Based on this and Figure 4(c) we at least formulate the following conjecture.

**Conjecture 3.6**.: _The \(2\)-XnG percolates for all \(d\geq 3\) but does not percolate for \(d=2\)._

Figure 4. Samples of the \(2\)-XnG in boxes with side-length \(n\). The colored vertices are the ones contained in the connected component of the origin.

### In-percolation and strong percolation

As we already mentioned in the introduction, our notion of directed percolation (2.1) corresponds to out-percolation in [1]. In this paper, we did not develop any specific proof techniques for strong or in-percolation. Nevertheless, it is clear that for fixed \(d\) and \(k\), percolation in the \(k\)-BnG implies strong percolation, while strong percolation implies both in- and out-percolation in the \(k\)-DnG; moreover, in- or out-percolation in the \(k\)-DnG implies percolation in the \(k\)-UnG, analogously to the continuum case (where these implications were mentioned in [1]). Hence, our positive percolation results for the \(k\)-BnG yield ones for strong percolation, and our negative (out-)percolation results for the \(k\)-DnG imply the absence of strong percolation. We have seen that for \(d\) large, out-percolation in the \(k\)-DnG occurs already for \(k=3\), while for the \(k\)-BnG percolation definitely requires \(k=\Omega(\sqrt{d})\) and perhaps even \(k>d\). It is an interesting open question whether strong percolation is closer to directed (out-)percolation than to bidirectional percolation in this respect.

**Conjecture 3.7**.: _For all \(k,d\in\mathbb{N}\), in-percolation occurs if and only if out-percolation does._

This should possibly follow from some general mass-transport type argument, but it is not known either in the continuum case. Without such a result, it seems that in-percolation is in general more difficult to deal with than out-percolation, due to increased combinatorial complexity. E.g., showing that the 1-DnG does not (out-)percolate was relatively straightforward (cf. the proof of Proposition 2.1), but proving the lack of in-percolation in the same graph (cf. the proof of Proposition 2.5) already presented more challenges.
Further, in the two-dimensional Poisson case [1, Proof of Theorem 2], the authors showed directly that out-percolation occurs for \(k=13\), but for in-percolation with the same \(k\) the same method did not work. They derived that their arguments for out-percolation actually imply strong and therefore also in-percolation.

Figure 5. Numerical experiments for the XnG in dimensions \(d=2,3\).

## 4. Proofs

### Proofs for the \(k\)-DnG

We start by treating the cases in which the occurrence of an infinite cluster that contains the origin is a deterministic event.

Proof of Proposition 2.1.: _Ad (i):_ If there exists a path starting from the origin and reaching an \(\ell_{1}\)-distance \(n\) from the origin, then this path is unique and must visit at least \(n\) distinct vertices; in particular, none of its first \(n\) steps may immediately backtrack. Since every newly visited vertex chooses its unique out-neighbor uniformly at random, such a path exists with probability at most \((1-1/(2d))^{n-1}\), which converges to zero as \(n\) tends to infinity.

_Ad (ii):_ We will use a growth argument and show that the maximal distance to the origin is strictly increasing between generations. To make this precise, denote by \(G_{n}\) the new vertices discovered in the \(n\)-th step, where we start with \(G_{0}=\{0\}\) and at every step \(n\leadsto n+1\), each vertex in \(G_{n}\) chooses \(k\) of its neighbors uniformly at random as successors and \(G_{n+1}\) is then the set of all potential successors that have not been discovered before for any \(m\leq n\). For \(x\in\mathbb{Z}^{d}\), we let \(\|x\|_{1}\) denote the \(\ell_{1}\)-distance between \(x\) and the origin. We show by induction that
\[\forall n\in\mathbb{N}:\quad\max_{x\in G_{n}}\|x\|_{1}<\max_{x\in G_{n+1}}\|x\|_{1}. \tag{4.1}\]
This then implies that there exists an infinitely long directed path starting at the origin. Indeed, for \(n=0\) the inequality in (4.1) is clear. For the induction step, let \(x\in G_{n}\) be a vertex (possibly non-unique) that achieves the maximum \(\ell_{1}\)-distance. Then \(x\) has \(d\) neighbors \(y\) with \(\|x\|_{1}<\|y\|_{1}\). If \(k\geq 2d-d+1=d+1\), then we necessarily have to choose one of these \(y\) as a potential vertex for the new generation. This \(y\) cannot have been in any of the previous generations by the induction hypothesis. Therefore we have \(y\in G_{n+1}\) and (4.1) follows.

Proof of Theorem 2.2.: We follow [13, Section 2]. Let \(\mathcal{R}_{n}\) denote the set of directed paths from the origin to level \(n\) (that is, all vertices at \(\ell_{1}\)-distance \(n\)) with strictly increasing \(\ell_{1}\)-distance in the first quadrant, and let \(N_{n}\) be the number of open paths in \(\mathcal{R}_{n}\). Then \(W_{n}=(2/k)^{n}N_{n}\) is a martingale with respect to \((\mathcal{F}_{n})_{n\geq 0}\), where \(\mathcal{F}_{n}\) denotes the sigma-algebra generated by the neighbor choices of all vertices at \(\ell_{1}\)-distance at most \(n-1\) from the origin; note that the paths in \(\mathcal{R}_{n}\) are exactly the first \(n\) steps of walks \(T\) with \(T_{0}=o\) and \(T_{m+1}=T_{m}+e_{i}\) for some \(i\in\{1,\ldots,d\}\).
Indeed, the expected number of outgoing (i.e., with respect to \(\ell_{1}\)-distance increasing) edges at a fixed vertex is given by the expectation of a hypergeometric random variable,
\[\sum_{\ell=0}^{k}\ell\binom{d}{\ell}\binom{d}{k-\ell}/\binom{2d}{k}=\frac{k}{2},\]
and using independence between the choices of distinct vertices we see that
\[\mathbb{E}[N_{n+1}|\mathcal{F}_{n}]=kN_{n}/2.\]
Since \(W_{n}\) is non-negative with \(\mathbb{E}[W_{1}]=1\), we can apply the martingale convergence theorem to ensure the existence of a random variable \(W\) with
\[W_{n}\to W\qquad\text{almost surely}.\]
Using the second-moment method it now suffices to show that \(\limsup_{n\uparrow\infty}\mathbb{E}[W_{n}^{2}]<\infty\), since by the Paley-Zygmund inequality we then have that \(\mathbb{P}(W>0)>0\), which implies \(\theta^{\text{D}}(k,d)>0\). For this, note that
\[\mathbb{E}[N_{n}^{2}]=\sum_{s,t\in\mathcal{R}_{n}}\mathbb{P}(s,t\text{ open})=\sum_{s,t\in\mathcal{R}_{n}}p^{K(s,t)}q^{L(s,t)}p^{2(n-K(s,t)-L(s,t))},\]
where \(p=k/(2d)\) denotes the probability that a given edge is open, \(q=k(k-1)/(2d(2d-1))\) denotes the probability that two different edges that emerge from the same vertex are open, \(K(s,t)\) is the number of joint edges of the two paths \(s\) and \(t\), and \(L(s,t)\) is the number of vertices \(x\) visited by both \(s\) and \(t\) such that \(s\) and \(t\) do not have a joint edge directly after \(x\). Now note that \(q<p^{2}\), which implies that
\[\mathbb{E}[N_{n}^{2}]\leq\sum_{s,t\in\mathcal{R}_{n}}p^{2n-K(s,t)}.\]
The right-hand side is precisely the same as the right-hand side of the display below [13, Equation (2.7)]. Thus, a verbatim application of the arguments of [13, Section 2] implies that
\[\lim_{n\to\infty}\mathbb{E}W_{n}^{2}<\infty\]
holds whenever
\[p=k/(2d)>\varrho(d),\]
where for two independent (simple, symmetric, nearest neighbor) random walks \(\widetilde{S}=(\widetilde{S}_{n})_{n\in\mathbb{N}_{0}}\) and \(\widetilde{S}^{\prime}=(\widetilde{S}^{\prime}_{n})_{n\in\mathbb{N}_{0}}\) on the first quadrant of \(\mathbb{Z}^{d}\) started from the origin, we define
\[\varrho(d)=\mathbb{P}(\exists m\geq 0\colon\widetilde{S}_{m}=\widetilde{S}^{\prime}_{m}\text{ and }\widetilde{S}_{m+1}=\widetilde{S}^{\prime}_{m+1}).\]
Let further \(\tau_{d}=\inf\{m\geq 0\colon\widetilde{S}_{m}=\widetilde{S}^{\prime}_{m}\text{ and }\widetilde{S}_{m+1}=\widetilde{S}^{\prime}_{m+1}\}\). According to [13, p. 155] we have for all \(d\) that
\[\mathbb{P}(\tau_{d}=0)=d^{-1},\qquad\mathbb{P}(\tau_{d}=1)=0,\qquad\mathbb{P}(\tau_{d}=2)=d^{-3}-d^{-4}, \tag{4.2}\]
and for \(3\leq k\leq d\),
\[\mathbb{P}(\tau_{d}=k)\leq d^{-k}k!,\]
while for \(j\geq 1\) and \(k>jd\),
\[\mathbb{P}(\tau_{d}=k)\leq d^{-1}(2\pi d)^{1/2}\Big{(}\mathrm{e}^{-1/13}/\sqrt{2\pi}\Big{)}^{d}j^{-\frac{d-1}{2}}, \tag{4.3}\]
so that
\[\begin{split}\varrho(d)=\mathbb{P}(\tau_{d}<\infty)&\leq d^{-1}+d^{-3}-d^{-4}+\sum_{k=3}^{d}d^{-k}k!+(2\pi d)^{1/2}\Big{(}\frac{\mathrm{e}^{-1/13}}{\sqrt{2\pi}}\Big{)}^{d}\sum_{j=1}^{\infty}j^{-\frac{d-1}{2}}\\ &=d^{-1}+d^{-3}-d^{-4}+\sum_{k=3}^{d}d^{-k}k!+(2\pi d)^{1/2}\Big{(}\frac{\mathrm{e}^{-1/13}}{\sqrt{2\pi}}\Big{)}^{d}\zeta\Big{(}\frac{d-1}{2}\Big{)},\end{split} \tag{4.4}\]
where \(\zeta\) denotes the Riemann zeta function. Let \(U(d)\) denote the right-hand side of the inequality. To conclude that we do indeed percolate it suffices to verify that \(U(d)\) is less than \(k/(2d)\). In low dimensions one can compute the right-hand side numerically; Table 2 shows the smallest values of \(k\) such that \(U(d)\) is less than \(k/(2d)\), for \(d=4,5,6,7\).

\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \(d=4\) & \(d=5\) & \(d=6\) & \(d=7\) \\ \hline \(U(d)\), the r.h.s. of (4.4) & 0.693093 & 0.394622 & 0.268615 & 0.199707 \\ \hline Smallest \(k\) such that \(U(d)\) is less than \(k/(2d)\) & 6 & 4 & 4 & 3 \\ \hline \end{tabular}
\end{table}
Table 2. Values of the right-hand side of (4.4) and the smallest \(k\) such that this right-hand side is below \(k/(2d)\), for \(4\leq d\leq 7\).
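The entries of Table 2 are straightforward to reproduce; the following Python sketch (ours, for illustration only) evaluates \(U(d)\), approximating the Riemann zeta function by a truncated series with an Euler-Maclaurin tail correction.

```python
import math

def zeta(s, n_terms=200_000):
    """Riemann zeta via partial sum plus the tail terms N^{1-s}/(s-1) + N^{-s}/2."""
    n = n_terms
    return sum(j ** -s for j in range(1, n)) + n ** (1 - s) / (s - 1) + 0.5 * n ** -s

def U(d):
    """Right-hand side of (4.4), an upper bound on rho(d)."""
    head = 1 / d + 1 / d**3 - 1 / d**4
    mid = sum(math.factorial(k) / d**k for k in range(3, d + 1))
    tail = math.sqrt(2 * math.pi * d) * (math.exp(-1 / 13) / math.sqrt(2 * math.pi)) ** d
    return head + mid + tail * zeta((d - 1) / 2)

for d in range(4, 8):
    u = U(d)
    k_min = next(k for k in range(1, 2 * d + 1) if k / (2 * d) > u)
    print(f"d={d}: U(d)={u:.6f}, smallest k with k/(2d) > U(d): {k_min}")
```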
So it remains to verify the claim for \(d\geq 7\) and \(k\geq 3\). For these standard but tedious calculations we refer to the proof of Lemma 4.1 which is given below in full detail. Before we start the technical calculations, let us briefly note that by (4.2) we directly have that \(\varrho(d)>1/d\). Therefore, \(\varrho(d)<k/(2d)\) never holds for \(k=2\), whence for \(k=2\) the approach of the proof of Theorem 2.2 is not applicable. For \(d\leq 3\) (where the cases \(k=2,d=2\), \(k=2,d=3\), and the case \(k=3,d=3\) are open), the issue is that the sum \(\sum_{j=1}^{\infty}j^{-\frac{d-1}{2}}\) (cf. (4.4)) does not converge. Indeed, this technique, like many others in statistical physics, only works when we are in at least four dimensions. Finally, for \(d=4\) and \(k=3\) one could hope that a combination of the proof techniques of the theorem and some explicit computations can work, but for this, one would need a sufficiently tight upper bound on \(\mathbb{P}(\tau_{4}=k)\) up to \(k\approx 12\), which exceeds our available computing capacity.

**Lemma 4.1**.: _For any \(d\geq 7\) and \(k\geq 3\), we have_
\[\frac{1}{d}+\frac{1}{d^{3}}-\frac{1}{d^{4}}+\sum_{k=3}^{d}d^{-k}k!+(2\pi d)^{1/2}\Big{(}\frac{\mathrm{e}^{-1/13}}{\sqrt{2\pi}}\Big{)}^{d}\zeta\Big{(}\frac{d-1}{2}\Big{)}-\frac{k}{2d}<0, \tag{4.5}\]
_where \(\zeta(\cdot)\) is the Riemann zeta function._

Proof.: Recall that we have already seen that
\[R(k,d):=\frac{1}{d}+\frac{1}{d^{3}}-\frac{1}{d^{4}}+\sum_{k=3}^{d}d^{-k}k!+(2\pi d)^{1/2}\Big{(}\frac{\mathrm{e}^{-1/13}}{\sqrt{2\pi}}\Big{)}^{d}\zeta\Big{(}\frac{d-1}{2}\Big{)}-\frac{k}{2d}<0\]
holds for any \(k\geq 3\) if \(d=7\). Our goal is to prove the statement of Lemma 4.1, i.e., that the same holds for any \(d\geq 7\) and \(k\geq 3\). We check numerically that it also holds for \(8\leq d\leq 11\) and \(k=3\) (and thus also for \(k\geq 4\) for the same values of \(d\)): indeed, we have \(R(3,8)=-0.0292277\), \(R(3,9)=-0.0350912\), \(R(3,10)=-0.0367514\) and \(R(3,11)=-0.0364418\). Now, it is easy to show that for \(d\geq 11\), we have
\[R(k,d+1)\leq\frac{d}{d+1}R(k,d)+(d+1)^{-(d+1)}(d+1)! \tag{4.6}\]
Indeed, it holds that
\[\begin{split} R(k,d+1)&-(d+1)^{-(d+1)}(d+1)!\\ &=\frac{1}{d+1}-\frac{k}{2(d+1)}+\frac{1}{(d+1)^{3}}-\frac{1}{(d+1)^{4}}+\sum_{k=3}^{d}\frac{k!}{(d+1)^{k}}+(2\pi(d+1))^{1/2}\Big{(}\frac{\mathrm{e}^{-1/13}}{\sqrt{2\pi}}\Big{)}^{d+1}\zeta\Big{(}\frac{d}{2}\Big{)}\\ &\leq\frac{d}{d+1}\Big{(}\frac{1}{d}-\frac{k}{2d}\Big{)}+\frac{1}{(d+1)^{3}}-\frac{1}{(d+1)^{4}}+\frac{d}{d+1}\bigg{[}\sum_{k=3}^{d}d^{-k}k!+(2\pi d)^{1/2}\Big{(}\frac{\mathrm{e}^{-1/13}}{\sqrt{2\pi}}\Big{)}^{d}\zeta\Big{(}\frac{d-1}{2}\Big{)}\bigg{]}\\ &=\frac{d}{d+1}R(k,d)+\Big{(}\frac{1}{(d+1)^{3}}-\frac{1}{(d+1)^{4}}\Big{)}-\frac{d}{d+1}\Big{(}\frac{1}{d^{3}}-\frac{1}{d^{4}}\Big{)},\end{split}\]
where for the inequality we bounded the sum and the zeta term, using that the Riemann zeta function is monotone decreasing on \((1,\infty)\) and that for \(d\geq 11\),
\[\sqrt{\frac{(d+1)}{d}}\frac{\mathrm{e}^{-1/13}}{\sqrt{2\pi}}\leq\sqrt{\frac{12}{11}}\frac{\mathrm{e}^{-1/13}}{\sqrt{2\pi}}\approx 0.385831,\]
while \(d/(d+1)>11/12\), which is strictly larger. Since the last two terms in the previous display are together negative (as verified in the proof of (4.7) below), (4.6) follows.
Next, let us show that
\[(d+1)^{-(d+1)}(d+1)!\leq\frac{d}{d+1}\Big{(}\frac{1}{d^{3}}-\frac{1}{d^{4}}\Big{)}-\Big{(}\frac{1}{(d+1)^{3}}-\frac{1}{(d+1)^{4}}\Big{)} \tag{4.7}\]
holds for \(d\geq 11\). Note that for such \(d\), we have
\[(d+1)^{-(d+1)}(d+1)!\leq\frac{12!}{(d+1)^{12}}\leq\frac{12!}{12^{8}}\frac{1}{(d+1)^{4}}\approx\frac{1.114}{(d+1)^{4}},\]
and
\[\frac{d}{d+1}\Big{(}\frac{1}{d^{3}}-\frac{1}{d^{4}}\Big{)}-\Big{(}\frac{1}{(d+1)^{3}}-\frac{1}{(d+1)^{4}}\Big{)}=\frac{(d-1)(d+1)^{3}-d^{4}}{d^{3}(d+1)^{4}}=\frac{2d^{3}-2d-1}{d^{3}(d+1)^{4}}>\frac{1.9}{(d+1)^{4}}.\]
Hence, (4.7) holds for \(d\geq 11\), and combining it with the display in the proof of (4.6) yields, for such \(d\) and any \(k\geq 3\), \(R(k,d+1)\leq\frac{d}{d+1}R(k,d)\). Since \(R(k,11)<0\) for all \(k\geq 3\), the lemma now follows by induction over \(d\).

Proof of Theorem 2.3.: We use a randomized coupling approach. The coupling is designed to have the following two key properties. First, using the additional dimension, the edge distribution of a node in \(\mathbb{Z}^{d}\) is the image measure of a mapping from the nodes "above and below" that node and hence iid over the nodes in \(\mathbb{Z}^{d}\). Second, any connected component in \(\mathbb{Z}^{d}\) is the image of a connected component in \(\mathbb{Z}^{d+1}\). To make this precise, let us write \(x=(x_{1},\ldots,x_{d+1})\in\mathbb{Z}^{d+1}\) and \(x^{\prime}=\pi(x)=(x_{1},\ldots,x_{d})\in\mathbb{Z}^{d}\) for the projection of \(x\) to its first \(d\) coordinates. By \(\omega_{x}\) we denote the \((d+1)\)-dimensional \((k+1)\)-neighbor directed edge configuration of \(x\in\mathbb{Z}^{d+1}\) and by \(\omega^{\prime}_{x^{\prime}}=\pi(\omega_{x})\) the associated edge configuration of \(x^{\prime}\) in \(\mathbb{Z}^{d}\). In words, we forget all edges facing up or down and only keep all remaining edges, which we consider as part of the edge set of \(\mathbb{Z}^{d}\). We further write \(\bar{x}=(x^{\prime},n)_{n\in\mathbb{Z}}\) for the vector of nodes _above and below_ \(x^{\prime}\in\mathbb{Z}^{d}\) in the \((d+1)\)-st dimension and \(\bar{\omega}_{\bar{x}}\) for the associated vector of \((k+1)\)-neighbor configurations of the nodes \((x^{\prime},n)\in\mathbb{Z}^{d+1}\) in \(\bar{x}\). The idea now is to define probability kernels \(\nu_{x^{\prime}}(\cdot|\bar{\omega}_{\bar{x}})\), step-by-step with respect to \(x^{\prime}\), in such a way that the joint kernel has two properties:
1. Integrating the joint kernel with respect to \(\omega\) gives the distribution of the \(d\)-dimensional directed \(k\)-DnG.
2. If the connected component of the origin in a \((d+1)\)-dimensional configuration \(\omega\) is finite, then the joint kernel only puts positive mass on finite components of the origin in \(d\) dimensions as well.
To define this precisely, we need to consider \(k\)-subsets of nearest neighbors of nodes \(x^{\prime}\in\mathbb{Z}^{d}\), each equipped with an additional coordinate, i.e., we write
\[D_{x^{\prime}}=\{\{(a_{1},n_{1}),\ldots,(a_{k},n_{k})\}\colon n_{1},\ldots,n_{k}\in\mathbb{Z},\{a_{1},\ldots,a_{k}\}\subset N^{d}(x^{\prime})\text{ with }a_{i}\neq a_{j}\ \forall i\neq j\},\]
where \(N^{d}(x^{\prime})\subset\mathbb{Z}^{d}\) denotes the set of nearest neighbors of \(x^{\prime}\) in \(\mathbb{Z}^{d}\). Then, we start with the origin \(o\in\mathbb{Z}^{d}\) and define for \(\bar{\omega}_{\bar{o}}\) the probability kernel \(\nu_{o,0}(\cdot|\bar{\omega}_{\bar{o}})\) as a probability measure on \(D_{o}\) as follows.
1. If \(|\pi(\omega_{(o,0)})|\geq k\), pick \(k\) neighbors \(a_{o,1},\ldots,a_{o,k}\) uniformly at random from the available \(|\pi(\omega_{(o,0)})|\) connected neighbors in \(\mathbb{Z}^{d}\). Write the outcome as \(\{(a_{o,1},0),\ldots,(a_{o,k},0)\}\).
2. If \(|\pi(\omega_{(o,0)})|<k\), we keep the \(k-1\) available connected neighbors in \(\mathbb{Z}^{d}\) as \(a_{o,1},\ldots,a_{o,k-1}\). Then, randomly choose a direction up or down (there must exist a directed edge facing up as well as down) and follow these steps:
   1. If \(|\pi(\omega_{(o,\pm 1)})\setminus\pi(\omega_{(o,0)})|\geq 1\), pick the missing neighbor \(a_{o,k}\) uniformly at random from the available connected neighbors \(\pi(\omega_{(o,\pm 1)})\setminus\pi(\omega_{(o,0)})\). Write for the outcome \(\{(a_{o,1},0),\ldots,(a_{o,k-1},0),(a_{o,k},\pm 1)\}\).
   2. If \(|\pi(\omega_{(o,\pm 1)})\setminus\pi(\omega_{(o,0)})|=0\), follow the existing arrow in the same direction as in Step 2 and repeat from (a) with \(\omega_{(o,\pm 1)}\) replaced by \(\omega_{(o,\pm 2)}\).
Almost surely, the construction terminates, and, due to the complete symmetry in the construction, \(\nu_{o,0}\) reproduces the uniform \(k\)-nearest-neighbor distribution for the origin. That is, for
\[E_{x^{\prime}}^{k,d}=\{\{a_{1},\ldots,a_{k}\}\subset N^{d}(x^{\prime})\}\]
and \(F\colon D_{x^{\prime}}\to E_{x^{\prime}}^{k,d}\), \(\{(a_{1},n_{1}),\ldots,(a_{k},n_{k})\}\mapsto\{a_{1},\ldots,a_{k}\}\) and \(U_{x}^{k,d}\) the uniform distribution on \(E_{x}^{k,d}\), we have that, for all \(\omega_{o}^{\prime}\in E_{o}^{k,d}\),
\[\int\Big{(}\bigotimes_{n\in\mathbb{Z}}U_{(o,n)}^{k+1,d+1}\Big{)}(\mathrm{d}\bar{\omega}_{\bar{o}})\nu_{o,0}(F^{-1}(\omega_{o}^{\prime})|\bar{\omega}_{\bar{o}})=U_{o}^{k,d}(\omega_{o}^{\prime}).\]
Let us describe the subsequent steps in words. Under \(\nu_{o,0}(\cdot|\bar{\omega}_{\bar{o}})\) we are equipped with \(k\) neighbors of the origin in \(\mathbb{Z}^{d}\) that all also carry the information at which level in the additional \((d+1)\)-st coordinate they were discovered. Starting with \(\hat{a}_{o,1}=(a_{o,1},n_{1})\), the first discovered neighbor, we can sample its neighbors, using the same algorithm, based on the information provided by \(\bar{a}_{o,1}\), now started at level \(n_{1}\) (and not at level \(0\)). The same can be done for all other neighbors \(\hat{a}_{o,i}=(a_{o,i},n_{i})\). In this fashion, we can step-by-step explore the connected component of the origin in \(\mathbb{Z}^{d}\), where in any step, whenever we discover a new vertex \(y\in\mathbb{Z}^{d}\), we only use information from the associated vector \(\bar{y}\), i.e.,
\[\nu(\mathrm{d}\omega^{\prime}|\omega)=\int\nu_{(o,0)}(\mathrm{d}(\hat{a}_{o,1},\ldots,\hat{a}_{o,k})|\bar{\omega}_{\bar{o}})\times\int\nu_{(a_{o,1},n_{1})}(\mathrm{d}(\hat{a}_{a_{o,1},1},\ldots,\hat{a}_{a_{o,1},k})|\bar{\omega}_{\bar{a}_{o,1}})\cdots\int\nu_{(a_{o,k},n_{k})}(\mathrm{d}(\hat{a}_{a_{o,k},1},\ldots,\hat{a}_{a_{o,k},k})|\bar{\omega}_{\bar{a}_{o,k}})\times\cdots\mathbbm{1}\{F(\hat{a}_{o,1},\ldots,\hat{a}_{o,k},\hat{a}_{a_{o,1},1},\ldots,\hat{a}_{a_{o,1},k},\ldots)\in\mathrm{d}\omega^{\prime}\}.\]
Now, the key point is the following. Imagine that the algorithm presents us with a configuration of open edges in \(\mathbb{Z}^{d}\) such that the connected component of the origin reaches the boundary of a centered box with side-length \(n\).
Then, necessarily, any configuration in \(\mathbb{Z}^{d+1}\) that is used as an input for the algorithm must also have the property that the origin is connected to the boundary of a box of side-length \(n\), now of course in \(\mathbb{Z}^{d+1}\). Hence, denoting by \(A_{n}^{d}\) the event that the origin is connected to the boundary of the centered box of side-length \(n\) in \(\mathbb{Z}^{d}\) and by \(\mathbb{P}_{k,d}\) the distribution of the \(k\)-DnG in dimension \(d\), we have that
\[\mathbb{P}_{k,d}(A_{n}^{d})=\mathbb{E}_{k+1,d+1}[\nu(A_{n}^{d}|\cdot)]\leq\mathbb{P}_{k+1,d+1}(A_{n}^{d+1}),\]
which gives the result.

### Proofs for the \(k\)-UnG

Proof of Theorem 2.4.: We prove the positive percolation probability in the 2-UnG via a dual approach. That is, let \(\mathbb{Z}^{\prime 2}:=\{x+(1/2,1/2):x\in\mathbb{Z}^{2}\}\) be the two-dimensional dual lattice of \(\mathbb{Z}^{2}\). Each edge \(e^{\prime}\) of \(\mathbb{Z}^{\prime 2}\) has exactly one edge \(e\) of \(\mathbb{Z}^{2}\) that crosses \(e^{\prime}\), and we say the edge \(e^{\prime}\) is open if and only if the edge \(e\) is open. For the 2-UnG model in two dimensions it thus follows that for any such edge \(e^{\prime}\) of \(\mathbb{Z}^{\prime 2}\),
\[\mathbb{P}(e^{\prime}\text{ is closed})=1-\mathbb{P}(e\text{ is open})=1-3/4=1/4. \tag{4.8}\]
Moreover, the negative correlations between edges that are present in the 2-UnG model also appear in its dual. Namely, whenever for edges \(e^{\prime}_{1},e^{\prime}_{2}\) of \(\mathbb{Z}^{\prime 2}\) their unique associated edges \(e_{1},e_{2}\) of \(\mathbb{Z}^{2}\) are neighbors, then
\[\mathbb{P}(e^{\prime}_{1},e^{\prime}_{2}\text{ closed})=1/24\leq 1/16=\mathbb{P}(e^{\prime}_{1}\text{ closed})^{2}. \tag{4.9}\]
Otherwise, the status of the two dual edges is independent. Both (4.8) and (4.9) are elementary to confirm by exact enumeration, see the sketch below.
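For the reader's convenience, here is a short Python check (ours, not part of the paper) of (4.8) and (4.9); it enumerates the \(k\)-subsets chosen by a single vertex and uses independence of the choices of the at most three vertices involved. With \(d=3\), \(k=3\) it likewise confirms the in-plane computation \(1/20\leq 1/16\) used below for the 3-UnG.

```python
from fractions import Fraction
from itertools import combinations

def dual_closed_probs(d, k):
    """P(one dual edge closed) and P(two adjacent dual edges closed) for the k-UnG on Z^d.

    A primal edge is closed iff neither endpoint picks it. We take e1 = (u, v), e2 = (v, w)
    and label each vertex's 2d neighbors so that, for u and w, label 0 is v, while for the
    shared vertex v, labels 0 and 1 are u and w.
    """
    sets = list(combinations(range(2 * d), k))
    n = Fraction(len(sets))
    p_avoid_one = Fraction(sum(1 for s in sets if 0 not in s)) / n
    p_avoid_two = Fraction(sum(1 for s in sets if 0 not in s and 1 not in s)) / n
    single = p_avoid_one ** 2                     # u and v choose independently
    joint = p_avoid_one * p_avoid_two * p_avoid_one
    return single, joint

print(dual_closed_probs(2, 2))  # (1/4, 1/24), matching (4.8) and (4.9)
print(dual_closed_probs(3, 3))  # (1/4, 1/20), the in-plane computation for the 3-UnG
```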
Then, the number of wet vertices is finite almost surely as well, and there exists a closed circuit in the dual that surrounds all the wet sites. By the same argument as in (4.11), \[\mathbb{P}(D_{m}\not\to\infty)\leq\sum_{n\geq 4m}\sum_{\text{ circuits }\gamma:\ |\gamma|=n}\mathbb{P}(\gamma\text{ closed in }\mathbb{Z}^{\prime 2})\leq\sum_{n\geq 4m}n(c(2)+o(1))^{n-1}4^{-n}, \tag{4.12}\] where we can start the outer sum from \(4m\) since a circuit that surrounds all wet vertices must surround \(D_{m}\) and hence have length \(4m\) at least. As a result, since the sum is finite and hence its tail tends to zero as \(m\to\infty\), choosing \(m\) large enough yields that \(\mathbb{P}(D_{m}\not\to\infty)<1\), thus \(\mathbb{P}(D_{m}\to\infty)>0\) and hence \(\mathbb{P}(i\to\infty)>0\) for some \(i\in D_{m}\). By translation invariance, it then follows that \(\theta^{\text{U}}(2,2)>0\), as desired.

To prove that \(\theta^{\mathrm{U}}(3,3)>0\), it suffices to show that with positive probability \(o\) is connected to infinity within a fixed two-dimensional plane \(S\) of \(\mathbb{Z}^{3}\) containing \(o\). This can be done via the dual approach employed in the first part of the proof. In the restriction of the 3-UnG to \(S\), each edge of the 3-UnG is open with probability \(3/4\). Thus, for any edge \(e^{\prime}\) of the dual lattice corresponding to \(S\), we have \(\mathbb{P}(e^{\prime}\text{ closed})=1/4\). Then, for any two dual edges for which their associated non-duals are not nearest neighbors, their status is independent, and otherwise, \[\mathbb{P}(e^{\prime}_{1},e^{\prime}_{2}\text{ closed})=\binom{5}{3}\binom{5}{3}\binom{4}{3}\Big{/}\binom{6}{3}^{3}=\frac{1}{20}\leq\frac{1}{16}=\mathbb{P}(e^{\prime}_{1}\text{ closed})^{2},\] whence the proof can be finished analogously to the one for the 2-UnG.

Proof of Proposition 2.5.: Assume for a contradiction that we do have \(\theta^{\mathrm{U}}(1,d)>0\) for some \(d\in\mathbb{N}\), and let \(A\) then denote the event of positive probability that there exists an infinite path starting from \(o\) consisting of edges of the 1-UnG. Let \(o=X_{0},X_{1},X_{2},\ldots\) be any infinite path of the 1-UnG starting from \(o\). Now, if \(X_{0},X_{1},X_{2},\ldots\) is the sequence of vertices of such an infinite path (on the event \(A\)), we put \[K=\inf\{k\geq 0\colon(X_{k},X_{k+1})\text{ is not an edge of the 1-DnG}\}.\] We know from the proof of Part \((i)\) in Proposition 2.1 that \(K<\infty\) almost surely. Next, let us define \[L=\inf\{k>K\colon(X_{k},X_{k+1})\text{ is an edge of the 1-DnG}\}.\] Now we claim that \(L=\infty\) on the event \(\{K<\infty\}\) (where we put \(K=\infty\) on the event \(A^{c}\)). Indeed, for all \(k=K,K+1,\ldots,L-1\), \((X_{k+1},X_{k})\) is an edge of the 1-DnG because it follows from the definitions of \(K\) and \(L\) that \((X_{k},X_{k+1})\) is not an edge of the 1-DnG, but \((X_{k},X_{k+1})\) is an edge of the 1-UnG, and this is only possible if \((X_{k+1},X_{k})\) is an edge of the 1-DnG. But now, if \(L<\infty\), then \((X_{L},X_{L-1})\) is an edge of the 1-DnG, so that \((X_{L},X_{L+1})\) cannot be an edge of the 1-DnG since there is only one edge going out of \(X_{L}\). This implies the claim. Let us denote the \(\ell_{1}\)-sphere of radius \(n\in\mathbb{N}_{0}\) by \(S_{n}^{(1)}\) (with \(S_{0}^{(1)}=\{o\}\)). Note that we even have that \(\mathbb{P}(\kappa<\infty\,|\,A)=1\), where \[\kappa=\inf\big{\{}k\geq 0\colon o\not\to S_{k}^{(1)}\text{ in the 1-DnG}\big{\}}.\] Since \(o\in S_{0}^{(1)}\) and \(o\) is always connected to one of its nearest neighbors by a directed edge, we have \(\mathbb{P}(\kappa\geq 2)=1\). 
Given that \(\kappa\) is an \(\mathbb{N}_{0}\cup\{\infty\}\)-valued random variable with \(\mathbb{P}(\kappa<\infty\,|\,A)=1\), we can find \(k_{0}\in\mathbb{N}\setminus\{1\}\) and \(\varepsilon>0\) such that \(\mathbb{P}(\kappa=k_{0}|A)\geq\varepsilon\). Now, by our previous observation, since from \(o\) to \(S_{\kappa-1}^{(1)}\) there always exists a directed path in the 1-DnG, on the event \(\{\kappa=k_{0}\}\cap A\), there must exist a self-avoiding directed path of length \(n\) in the 1-DnG ending at some vertex of \(S_{k_{0}-1}^{(1)}\) for all \(n\in\mathbb{N}_{0}\). Thus, writing \(c_{n,k_{0}-1}(d)\) for the number of self-avoiding paths in the \(d\)-dimensional lattice of length \(n\) starting from (or ending at) \(S_{k_{0}-1}^{(1)}\), it is clear that \[\lim_{n\to\infty}c_{n,k_{0}-1}(d)^{1/n}=\lim_{n\to\infty}c_{n,1}(d)^{1/n}=c(d)\leq 2d-1,\] where \(c(d)\) is the connective constant of \(\mathbb{Z}^{d}\). Now, any self-avoiding path of length \(n\) ending at \(S_{k_{0}-1}^{(1)}\) is open with probability \((2d)^{-n}\). Hence, \[\mathbb{P}(\exists\text{ self-avoiding path of length $n$ starting from $S_{k_{0}-1}^{(1)}$ included in 1-DnG})\leq\Big{(}\frac{2d-1}{2d}\Big{)}^{n},\] which decays exponentially fast in \(n\). This contradicts the assertion that the left-hand side is bounded from below by \(\mathbb{P}(A)\varepsilon>0\) independently of \(n\). Therefore, \(\mathbb{P}(A)=0\), as desired.

### Proofs for the \(k\)-BnG

Proof of Lemma 2.6.: Note first that the system is negatively correlated in the sense that for all directed edges \(\ell_{1}=(x_{1},x_{2})\neq\ell_{2}=(y_{1},y_{2})\) we have that \[\mathbb{P}(\ell_{1}\text{ open and }\ell_{2}\text{ open})\leq\mathbb{P}(\ell_{1}\text{ open})^{2}.\] Indeed, note that if \(x_{1}\neq y_{1}\), then the inequality is an equality by independence. However, if \(x_{1}=y_{1}\) then \[\mathbb{P}(\ell_{1}\text{ open}\,|\,\ell_{2}\text{ open})=(k-1)/(2d-1)\leq k/(2d)=\mathbb{P}(\ell_{1}\text{ open}).\] Note that the negative correlation carries over also to the bidirectional (and also the undirected) case since these models are built from the directed one. Let us denote by \(\ell_{n}\) the straight line starting at the origin up to the node \((ne_{1})\). Then, using the first moment method and negative correlations, we have that \[\mathbb{P}(o\rightsquigarrow_{\text{B}}\partial B_{n})\leq\sum_{s\in\mathcal{R}_{n}}\mathbb{P}(s\text{ is open})\leq c(d)^{n}\mathbb{P}(\ell_{n}\text{ is open}).\] Finally, for a suitable constant \(C>0\), we have that \(\mathbb{P}(\ell_{n}\text{ is open})\leq Cp^{n}\) where \(p=k(k-1)/(2d(2d-1))\) is the probability that the origin chooses precisely two prescribed edges. This leads to the criterion for absence of percolation \(c(d)p<1\).

Proof of Proposition 2.7.: As in the proof of the assertion \(\theta^{\text{U}}(3,3)>0\) of Theorem 2.4, it suffices to verify that \(o\rightsquigarrow\infty\) in the restriction of the \(k\)-BnG to a fixed two-dimensional plane \(S\) including \(o\) with positive probability. Using the duality approach of the proof of Theorem 2.4, dual edges of the two-dimensional lattice of \(S\) are open with probability \((k/(2d))^{2}\). Further, the indicators of two dual edges being closed, i.e., the corresponding edges of the \(k\)-BnG restricted to \(S\) being open, are non-positively correlated. Indeed, we saw already in the proof of Lemma 2.6 that the indicators of two edges of the \(k\)-BnG being open are either independent or strictly negatively correlated. 
Now, if the indicators of two events are independent or strictly negatively correlated, then so are the indicators of the complements of the two events. We see from the proof of Theorem 2.4 that given these non-positive correlations, it suffices to choose \(k\) and \(d\) in such a way that a dual edge is closed with probability at most \(1/c(2)\). This holds whenever \[\big{(}k/(2d)\big{)}^{2}>1-1/c(2),\qquad\text{or, equivalently,}\qquad k>d\sqrt{4(1-1/c(2))}, \tag{4.13}\] as asserted.

Proof of Proposition 2.9.: Note that the \(k\)-BnG model (just as the \(k\)-UnG model) in any dimension \(d\) is a \(1\)-dependent bond percolation model, where in the \(k\)-BnG each edge is open with probability \(p=k^{2}/(4d^{2})\). Let us write \[p_{\text{max}}(\mathbb{Z}^{d})=\sup\{p\in(0,1)\colon\text{some $1$-dependent bond percolation model in which each edge is open with probability $p$ does not percolate}\}.\] According to [1], we have \(\lim_{d\to\infty}p_{\text{max}}(\mathbb{Z}^{d})\leq 0.5847\), which implies the statement. Let us remark that another result by [1] is that \(p_{\text{max}}(\mathbb{Z}^{2})\leq 0.8457\). This together with the assertion that \(p_{\text{max}}(\mathbb{Z}^{d})\leq p_{\text{max}}(\mathbb{Z}^{d-1})\) yields that for \(\alpha>2\sqrt{0.8457}\) we have \(\theta^{\text{B}}(\alpha d,d)>0\). However, this statement is weaker than Proposition 2.7 because \(2\sqrt{0.8457}\approx 1.8392>\sqrt{4(1-1/2.679192495)}\).

## Acknowledgements

The authors thank D. Mitsche, G. Pete, and B. Rath for interesting discussions and comments. BJ and JK acknowledge the financial support of the Leibniz Association within the Leibniz Junior Research Group on _Probabilistic Methods for Dynamic Communication Networks_ as part of the Leibniz Competition. BL is supported by the grant GrHyDy ANR-20-CE40-0002. AT was partially supported by the ERC Consolidator Grant 772466 "NOISE".
2304.14029
Phaseless auxiliary field quantum Monte Carlo with projector-augmented wave method for solids
We implement the phaseless auxiliary field quantum Monte Carlo method using the plane-wave based projector augmented wave method and explore the accuracy and the feasibility of applying our implementation to solids. We use a singular value decomposition to compress the two-body Hamiltonian and thus reduce the computational cost. Consistent correlation energies from the primitive-cell sampling and the corresponding supercell calculations numerically verify our implementation. We calculate the equation of state for diamond and the correlation energies for a range of prototypical solid materials. A down-sampling technique along with natural orbitals accelerates the convergence with respect to the number of orbitals and crystal momentum points. We illustrate the competitiveness of our implementation in accuracy and computational cost for dense crystal momentum point meshes compared to a well-established quantum-chemistry approach, the coupled-cluster ansatz including singles, doubles and perturbative triple particle-hole excitation operators.
Amir Taheridehkordi, Martin Schlipf, Zoran Sukurma, Moritz Humer, Andreas Grüneis, Georg Kresse
2023-04-27T08:51:42Z
http://arxiv.org/abs/2304.14029v1
# Phaseless auxiliary field quantum Monte Carlo with projector-augmented wave method for solids

###### Abstract

We implement the phaseless auxiliary field quantum Monte Carlo method using the plane-wave based projector augmented wave method and explore the accuracy and the feasibility of applying our implementation to solids. We use a singular value decomposition to compress the two-body Hamiltonian and thus reduce the computational cost. Consistent correlation energies from the primitive-cell sampling and the corresponding supercell calculations numerically verify our implementation. We calculate the equation of state for diamond and the correlation energies for a range of prototypical solid materials. A down-sampling technique along with natural orbitals accelerates the convergence with respect to the number of orbitals and crystal momentum points. We illustrate the competitiveness of our implementation in accuracy and computational cost for dense crystal momentum point meshes compared to a well-established quantum-chemistry approach, the coupled-cluster ansatz including singles, doubles and perturbative triple particle-hole excitation operators.

## I Introduction

One of the most challenging tasks in solid-state physics is to solve the many-electron Schrödinger equation. For this purpose, effective one-electron methods based on density functional theory [1; 2] (DFT) are particularly successful. Introducing an exchange-correlation density functional, DFT replaces the electron-electron interaction with an effective one-electron potential. Thus, the interacting many-electron problem is reduced to a set of one-electron equations, which can be solved self-consistently in a representation defined by the employed basis set. For solids, plane waves are the most popular basis functions because they exhibit a favorable scaling with system size and are independent of atom positions and species. Moreover, their convergence is systematically controlled by the plane-wave energy cutoff. Projector augmented waves (PAWs) are an efficient way to describe the all-electron DFT orbitals.[3; 4] In this method, one constructs the all-electron orbital by replacing a certain fraction of a pseudo-orbital with a localized function in the vicinity of the ions. The specific fraction depends on the overlap of the pseudo-orbital with so-called _projectors_. The important point is that the PAW method yields essentially all-electron precision, while the plane-wave energy cutoff corresponds to the smoother pseudo-orbital. The high precision of the PAW method has been demonstrated for density-functional-theory methods for both small molecules [5] and solids.[6] Recently, the evaluations were also extended to many-body calculations for small molecules,[7] demonstrating that PAW potentials can reach chemical accuracy (\(<1\) kcal/mol).

The main issue of DFT is the choice of the exchange-correlation functional. Since calculating the exact exchange-correlation functional has the same complexity as solving the Schrödinger equation, in practice, one needs to approximate the functional. These approximations lead to inaccurate results especially in strongly-correlated systems.[8] Other weaknesses of DFT include the description of thermochemical properties [9; 10] and van der Waals interactions.[11] Alternatively, one can find the ground state of the Schrödinger equation explicitly. Quantum-chemical wavefunction-based methods are limited to small-sized systems because of the adverse scaling of their computational cost. 
For example, the computational cost of the coupled-cluster ansatz using single, double and perturbative triple particle-hole excitation operators [12; 13; 14; 15] (CCSD(T)) scales with the seventh power of the system size and configuration interaction methods [16; 17; 18] scale exponentially. Although this adverse scaling can be reduced by adopting local correlation methods, it is not guaranteed that local correlation methods entirely avoid uncontrolled errors. For example, recent work on large molecules yielded conflicting results for localized coupled-cluster methods and diffusion Monte Carlo (DMC) methods; the reason for this inconsistency is not yet known.[19; 20; 21; 22] Furthermore, for densely packed 3D solids, it is not clear whether local methods can accelerate the calculations in the same way as they do for more open low-dimensional structures. Quantum Monte Carlo [23; 24; 25] (QMC) methods also overcome this adverse scaling, with a cubic-to-quartic increase of the computational cost with system size. However, DMC [26; 27; 28] requires local potentials and, hence, has not yet been combined with the PAW method, since the PAW method relies on non-local projectors. Even when non-local pseudopotentials are used in DMC, approximations must be made.[29; 30; 31] Variational Monte Carlo [32] typically cannot reach chemical accuracy.[33] Full-configuration interaction QMC [34; 35; 36] (FCIQMC) and semi-stochastic heat-bath configuration interaction [37; 38] as a variant of selected configuration interaction methods [39; 40] are very accurate but still contain terms scaling weakly exponentially with system size.

In the auxiliary-field quantum Monte Carlo (AFQMC) method,[41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55] one represents the ground-state wavefunction as an ensemble of Slater determinants. AFQMC utilizes two key ideas: First, similar to other projective QMC approaches, it constructs the ground-state wavefunction by repeated application of an infinitesimal imaginary-time propagator to a trial wavefunction. Second, the Hubbard-Stratonovich transformation[56; 57] reduces the interacting many-body problem to a high-dimensional integral of one-body operators using auxiliary fields. In real materials, one needs to carefully control the phase of the ensemble of walkers to ensure numeric stability. This so-called _phase problem_ is a generalization of the sign problem[58] and is mitigated by the phaseless approximation in AFQMC (ph-AFQMC).[59] ph-AFQMC has been applied to a wide range of systems from the Hubbard model[60; 61] to atoms and molecular systems.[62; 63; 64] Using Gaussian-type orbitals, ph-AFQMC was compared to DMC in solids[65] and applied to solid NiO.[66] Zhang _et al._ implemented ph-AFQMC for plane waves using norm-conserving pseudopotentials.[59] Furthermore, ph-AFQMC has been used to study the pressure-induced transition in silicon[47] and to compute benchmark charge densities for solids.[67] Algorithmic improvements reduce the computational cost with down-sampling and frozen orbitals[68; 69] and generalize ph-AFQMC for optimized norm-conserving pseudopotentials.[70] All these successes motivate us to combine ph-AFQMC with the PAW method to expand the limits of _ab initio_ calculations.

In this work, we implement ph-AFQMC using the plane-wave based PAW method. Our aim is to explore whether ph-AFQMC for solids is feasible and what kind of accuracy can be expected for simple prototypical solids. 
To achieve this goal, we compare ph-AFQMC with widely-used quantum-chemistry methods: second-order Møller-Plesset perturbation theory[71; 72; 73] (MP2) and coupled-cluster calculations at the level of CCSD and CCSD(T). The numerical setup is intentionally reduced so that one can compare all these methods. Therefore, the results in this work cannot be compared directly with the experiment. We utilize a singular value decomposition (SVD)[74] to reduce the computational cost. In general, we find that ph-AFQMC yields slightly larger absolute correlation energies than CCSD(T). However, the correlation-energy difference between ph-AFQMC and CCSD(T) is one order of magnitude smaller than the one between ph-AFQMC and MP2. Calculations are performed for both primitive cells and supercells, with excellent agreement between them, validating the implementation for primitive cells using \(\mathbf{k}\) points. In addition, we demonstrate down-sampling strategies to converge the results with respect to the employed \(\mathbf{k}\) points and the number of unoccupied bands. These strategies facilitate computing the diamond correlation energy approximately in the complete basis-set limit. We clearly show that correlation energies of similar accuracy as CCSD(T) can be achieved. Finally, we compare the computational cost of the code to CCSD(T) as a function of the number of \(\mathbf{k}\) points. The present implementation is in Python[75] and far from being fully optimized. Still, the better scaling of ph-AFQMC leads to faster execution times than using a Fortran[76] CCSD(T) code[14; 77] for dense \(\mathbf{k}\)-point grids.

The remainder of this paper is structured as follows: In Sec. II, we introduce the ph-AFQMC method and show how the application of an SVD reduces the computational cost. In Sec. III, we summarize our results for lattice constants and correlation energies. We compare ph-AFQMC with other quantum-chemistry methods and specifically show that it is competitive in accuracy with CCSD(T). Finally, we conclude in Sec. IV.

## II Method

This section describes the required tasks to obtain the ph-AFQMC ground-state energy starting from the Born-Oppenheimer Hamiltonian. First, we reduce the computational cost by applying an SVD. Second, the Hamiltonian is written in a mean-field subtracted form to reduce the variance of the ground-state energy. Third, we discuss the update of the Slater determinants and how the ground-state energy is calculated. These steps incur the largest computational cost. Finally, we lay out the general procedure used to obtain the numeric results.

### Hamiltonian

We split the electronic Born-Oppenheimer Hamiltonian \[\hat{H}=\hat{H}^{\prime}_{1}+\hat{H}^{\prime}_{2}, \tag{1}\] into a single-particle operator \[\hat{H}^{\prime}_{1}=\sum_{\mathbf{k}}\sum_{pq}h_{pq}(\mathbf{k})\hat{a}^{\dagger}_{p\mathbf{k}}\hat{a}_{q\mathbf{k}}, \tag{2}\] and a two-particle operator \[\hat{H}^{\prime}_{2}=\frac{1}{2}\sum_{\mathbf{q}\mathbf{G}}\hat{L}^{\prime}_{\mathbf{q}\mathbf{G}}\hat{L}^{\prime\dagger}_{\mathbf{q}\mathbf{G}}. \tag{3}\] In Eq. (2), \(\hat{a}_{p\mathbf{k}}\) (\(\hat{a}^{\dagger}_{p\mathbf{k}}\)) is the fermionic annihilation (creation) operator, \(p\) and \(q\) are band indices and \(\mathbf{k}\) is the crystal momentum. In Eq. (3), \(\mathbf{q}\) and \(\mathbf{G}\) are the transferred crystal momentum and a reciprocal lattice vector, respectively. 
For more details we refer the reader to Appendix A, where we describe the contributions to the matrix elements \(h_{pq}\) and how to rewrite the two-body part in terms of single-particle operators \(\hat{L}^{\prime}_{\mathbf{q}\mathbf{G}}\) as expressed in Eq. (3). We neglect the spin indices for the sake of simplicity.

### Application of SVD

For each \(\mathbf{q}\), we represent the \(\hat{L}^{\prime}_{\mathbf{q}\mathbf{G}}\) operators as a matrix \(L^{\prime}_{\mathbf{q}}\) (see Appendix A and Appendix B). To reduce the computational cost, we reduce the size of these matrices by an SVD \[L^{\prime}_{\mathbf{q}}=U\Sigma V^{\dagger}. \tag{4}\] \(\Sigma\) is a diagonal matrix containing \(N_{s}\) nonzero singular values. \(U\) and \(V\) are semi-unitary matrices, i.e., \(U^{\dagger}U=V^{\dagger}V=\mathbb{I}\), where \(\mathbb{I}\) is the identity matrix. The dimensions of \(U\) and \(V^{\dagger}\) are \(N_{\text{b}}^{2}\times N_{\text{s}}\) and \(N_{\text{s}}\times N_{\text{G}}\), respectively, where \(N_{\text{b}}\) is the number of orbitals. The number of singular values \(N_{\text{s}}\leq\min(N_{\text{b}}^{2},N_{\text{G}})\) is reduced to \(n_{\text{qg}}\leq N_{\text{s}}\) by neglecting values smaller than a particular threshold. Empirically, we find that \(n_{\text{qg}}\) is an order of magnitude smaller than \(N_{\text{G}}\). In passing we note that a similar approach was used for reducing the computational cost of integral calculations in periodic coupled-cluster theory with a plane-wave basis.[78] We then approximate the two-body part of the Hamiltonian by \[\hat{H}_{2}^{\prime}=\frac{1}{2}\sum_{\mathbf{q}}\sum_{g=1}^{n_{\text{qg}}}\hat{\mathcal{L}}_{\text{qg}}^{\prime}\hat{\mathcal{L}}_{\text{qg}}^{\prime\dagger}, \tag{5}\] where in the matrix representation we have \[\hat{\mathcal{L}}_{\mathbf{q}}^{\prime}=U^{\prime}\Sigma^{\prime}. \tag{6}\] Here, the dimensions of \(U^{\prime}\) and \(\Sigma^{\prime}\) are \(N_{\text{b}}^{2}\times n_{\text{qg}}\) and \(n_{\text{qg}}\times n_{\text{qg}}\), respectively. Table 1 summarizes the matrix dimensions in Eqs. (4) and (6). Introducing \[\begin{split}\hat{\mathcal{L}}_{\text{qg}}^{(\text{e})}&=\frac{1}{2}\bigg{[}\hat{\mathcal{L}}_{\text{qg}}^{\prime}+\hat{\mathcal{L}}_{\text{qg}}^{\prime\dagger}\bigg{]},\\ \hat{\mathcal{L}}_{\text{qg}}^{(\text{o})}&=\frac{i}{2}\bigg{[}\hat{\mathcal{L}}_{\text{qg}}^{\prime}-\hat{\mathcal{L}}_{\text{qg}}^{\prime\dagger}\bigg{]},\end{split} \tag{7}\] the two-body part of the Hamiltonian is expressed in a quadratic form \[\hat{H}_{2}^{\prime}=\frac{1}{2}\sum_{\mathbf{q}}\sum_{g=1}^{n_{\text{qg}}}\bigg{[}\hat{\mathcal{L}}_{\text{qg}}^{(\text{e})2}+\hat{\mathcal{L}}_{\text{qg}}^{(\text{o})2}\bigg{]}, \tag{8}\] or in a more compact form \[\hat{H}_{2}^{\prime}=\frac{1}{2}\sum_{\mathbf{q}}\sum_{g=1}^{2n_{\text{qg}}}\hat{\mathcal{L}}_{\text{qg}}^{2}. \tag{9}\] Here, we also defined: \[\hat{\mathcal{L}}_{\text{qg}}=\begin{cases}\hat{\mathcal{L}}_{\text{qg}}^{(\text{e})}&\text{for }1\leq g\leq n_{\text{qg}},\\ \hat{\mathcal{L}}_{\text{qg}^{\prime}}^{(\text{o})}&\text{for }n_{\text{qg}}+1\leq g\leq 2n_{\text{qg}},\end{cases} \tag{10}\] where \(g^{\prime}=g-n_{\text{qg}}\).

\begin{table} \begin{tabular}{c c c c} & in Eq. (4) & & in Eq. (6) \\ matrix & dimension & matrix & dimension \\ \hline \(U\) & \(N_{\text{b}}^{2}\times N_{\text{s}}\) & \(U^{\prime}\) & \(N_{\text{b}}^{2}\times n_{\text{qg}}\) \\ \(\Sigma\) & \(N_{\text{s}}\times N_{\text{s}}\) & \(\Sigma^{\prime}\) & \(n_{\text{qg}}\times n_{\text{qg}}\) \\ \(V^{\dagger}\) & \(N_{\text{s}}\times N_{\text{G}}\) & & \\ \(L^{\prime}_{\mathbf{q}}\) & \(N_{\text{b}}^{2}\times N_{\text{G}}\) & \(\mathcal{L}^{\prime}_{\mathbf{q}}\) & \(N_{\text{b}}^{2}\times n_{\text{qg}}\) \\ \end{tabular} \end{table} Table 1: Dimension of matrices introduced in Eqs. (4) and (6). Applying an SVD and truncating the singular values reduces one dimension of \(L^{\prime}_{\mathbf{q}}\) from the number of plane waves \(N_{\text{G}}\) to the number of truncated singular values \(n_{\text{qg}}\leq N_{\text{s}}\leq\min(N_{\text{b}}^{2},N_{\text{G}})\) where \(N_{\text{s}}\) is the number of singular values and \(N_{\text{b}}\) is the number of orbitals.
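The compression described above amounts to a truncated SVD of each \(L^{\prime}_{\mathbf{q}}\) matrix, with the shapes listed in Table 1. The following minimal NumPy sketch illustrates the idea; the function name, threshold value, and toy data are our own illustrative choices and not part of the actual implementation.

```python
import numpy as np

def compress_two_body(L_q, threshold=1e-6):
    """Truncated SVD of one L'_q matrix of shape (N_b**2, N_G).

    Returns the compressed factor U' Sigma' of shape (N_b**2, n_qg),
    keeping only singular values above `threshold`, cf. Eqs. (4) and (6).
    """
    U, s, Vh = np.linalg.svd(L_q, full_matrices=False)
    n_qg = int(np.sum(s > threshold))   # number of retained singular values
    return U[:, :n_qg] * s[:n_qg]       # scales the columns of U by Sigma'

# toy data: a low-rank factor with rapidly decaying singular values
rng = np.random.default_rng(0)
N_b2, N_G = 64, 400
L_q = (rng.standard_normal((N_b2, 30)) / np.arange(1, 31)**2) @ \
    rng.standard_normal((30, N_G))

curly_L_q = compress_two_body(L_q)
# the two-body matrix L'_q L'_q^dagger is reproduced by the compressed factor
assert np.allclose(L_q @ L_q.conj().T, curly_L_q @ curly_L_q.conj().T)
print(L_q.shape, "->", curly_L_q.shape)
```

Only the product \(L^{\prime}_{\mathbf{q}}L^{\prime\dagger}_{\mathbf{q}}\) enters the two-body Hamiltonian, which is why dropping small singular values is a controlled approximation.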
### Mean-field subtraction

The \(\hat{\mathcal{L}}_{\text{qg}}\) operators induce density fluctuations. Since the reference is the vacuum state, these fluctuations may be too strong and lead to large variances in the energy estimates or even hamper the convergence to the ground state. Hence, shifting the reference to the mean field improves the numeric stability. The modified operators read \[\hat{\mathfrak{L}}_{\text{qg}}=\hat{\mathcal{L}}_{\text{qg}}-\bar{\mathcal{L}}_{g}\,\delta_{\mathbf{q},0}, \tag{11}\] with \[\bar{\mathcal{L}}_{g}=\frac{\langle\Psi_{\text{T}}|\hat{\mathcal{L}}_{\mathbf{0}g}|\Psi_{\text{T}}\rangle}{\langle\Psi_{\text{T}}|\Psi_{\text{T}}\rangle}, \tag{12}\] where the trial wavefunction \(|\Psi_{\text{T}}\rangle\) approximates the ground-state wavefunction with a single Slater determinant. Finally, the Hamiltonian is expressed as \[\hat{H}=\hat{H}_{1}+\hat{H}_{2}, \tag{13}\] where \[\begin{split}\hat{H}_{1}&=\hat{H}_{1}^{\prime}+\sum_{g}\bar{\mathcal{L}}_{g}\big{(}\hat{\mathcal{L}}_{\text{0g}}-\frac{1}{2}\bar{\mathcal{L}}_{g}\big{)},\\ \hat{H}_{2}&=\frac{1}{2}\sum_{\mathbf{q}}\sum_{g}\hat{\mathfrak{L}}_{\text{qg}}^{2}.\end{split} \tag{14}\]

### Update procedure

Using a Hubbard-Stratonovich transformation,[57] one can express the time-evolution for a single time step \(\tau\) as (see Appendix C for more details) \[\hat{B}(\mathbf{x})=\exp\bigg{(}-\tau\hat{H}_{1}+i\sqrt{\tau}\sum_{\mathbf{q}}\sum_{g}x_{\text{qg}}\hat{\Delta}_{\text{qg}}\bigg{)}, \tag{15}\] where \(\mathbf{x}\) is a random vector whose entries \(x_{\text{qg}}\) are normally distributed. Thouless' theorem[80; 79] shows that \(\hat{B}\) transforms a Slater determinant into another one. Therefore, if one initializes an ensemble of \(N_{\text{w}}\) random walkers as single Slater determinants \(|\Psi_{\text{I}}\rangle\), they remain single Slater determinants for all time steps \(k\). The initial wavefunction \(|\Psi_{\text{I}}\rangle\) is set to the Hartree-Fock (HF) wavefunction[81; 82; 83] denoted by \(|\Psi_{\text{T}}\rangle\). In addition, we assign to each walker a complex weight \(W_{k}^{w}\text{e}^{i\theta_{k}^{w}}\), initialized to one. At the \(k\)-th MC step the walkers' states are updated as \[|\Psi_{k+1}^{w}\rangle=\hat{B}(\mathbf{x}_{k}^{w})|\Psi_{k}^{w}\rangle. \tag{16}\] The update criterion for the complex weights is \[W_{k+1}^{w}\text{e}^{i\theta_{k+1}^{w}}=\frac{\langle\Psi_{\text{T}}|\Psi_{k+1}^{w}\rangle}{\langle\Psi_{\text{T}}|\Psi_{k}^{w}\rangle}W_{k}^{w}\text{e}^{i\theta_{k}^{w}}. \tag{17}\] The update scheme formulated by Eqs. (16) and (17) is called free-projection AFQMC. This scheme is theoretically exact but numerically unstable due to the fermionic phase problem.[59]
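For a single walker, the action of \(\hat{B}(\mathbf{x})\) reduces to a matrix exponential acting on the matrix of occupied orbital coefficients. A minimal sketch of the free-projection step of Eq. (16) is shown below; the function and array names are our own, and the one-body matrices are assumed to be given in a common orbital basis.

```python
import numpy as np
from scipy.linalg import expm

def apply_propagator(phi, h1, frak_L, x, tau):
    """One free-projection step, Eq. (16): phi -> B(x) phi.

    phi    : (n_orb, n_occ) orbital-coefficient matrix of one walker
    h1     : (n_orb, n_orb) one-body Hamiltonian matrix
    frak_L : (n_fields, n_orb, n_orb) matrices of the one-body operators
             entering the exponent of Eq. (15)
    x      : (n_fields,) auxiliary fields
    """
    A = -tau * h1 + 1j * np.sqrt(tau) * np.einsum('g,gpq->pq', x, frak_L)
    return expm(A) @ phi

def overlap(psi_T, phi):
    """Overlap <Psi_T|phi> of two Slater determinants."""
    return np.linalg.det(psi_T.conj().T @ phi)
```

An optimized code would typically expand the matrix exponential to the required order in \(\tau\) rather than evaluating it exactly.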
The issue is suppressed by using the phaseless approximation.[59] In this scheme, the walkers' states and weights are updated as[46] \[\begin{split}&|\Psi_{k+1}^{w}\rangle=\hat{B}(\mathbf{x}_{k}^{w}-\mathbf{f}_{k}^{w})|\Psi_{k}^{w}\rangle,\\ & W_{k+1}^{w}=W_{k}^{w}\bigg{|}\frac{\langle\Psi_{\mathrm{T}}|\Psi_{k+1}^{w}\rangle}{\langle\Psi_{\mathrm{T}}|\Psi_{k}^{w}\rangle}I_{k}^{w}\bigg{|}\max(0,\cos(\Delta\theta)),\end{split} \tag{18}\] where \(\Delta\theta\) is the phase difference of \(\langle\Psi_{\mathrm{T}}|\Psi_{k+1}^{w}\rangle\) and \(\langle\Psi_{\mathrm{T}}|\Psi_{k}^{w}\rangle\) at each time step, i.e., \[\Delta\theta=\mathrm{Arg}\bigg{(}\frac{\langle\Psi_{\mathrm{T}}|\Psi_{k+1}^{w}\rangle}{\langle\Psi_{\mathrm{T}}|\Psi_{k}^{w}\rangle}\bigg{)}, \tag{19}\] and the importance sampling factor is defined as \[I_{k}^{w}\equiv I(\mathbf{x}_{k}^{w},\mathbf{f}_{k}^{w},\Psi_{k}^{w})=\exp\big{[}(\mathbf{x}_{k}^{w}-\tfrac{1}{2}\mathbf{f}_{k}^{w})\cdot\mathbf{f}_{k}^{w}\big{]}. \tag{20}\] The shift \(\mathbf{f}_{k}^{w}\) imposed on the auxiliary-field vector \(\mathbf{x}_{k}^{w}\) is called the force bias. Choosing the entries of the force-bias vector as \[f_{\mathbf{q}g,k}^{w}=-i\sqrt{\tau}\frac{\langle\Psi_{\mathrm{T}}|\hat{\Delta}_{\mathbf{q}g}|\Psi_{k}^{w}\rangle}{\langle\Psi_{\mathrm{T}}|\Psi_{k}^{w}\rangle} \tag{21}\] minimizes the fluctuations in the importance function to first order in \(\sqrt{\tau}\).[46] See Appendix D for the matrix representation of the force bias.
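The weight update of Eqs. (18)-(20) can be summarized in a few lines. The sketch below builds on `apply_propagator` and `overlap` from the previous sketch; all names are our own illustrative choices, and the force bias is evaluated with the standard Slater-determinant density matrix.

```python
import numpy as np

def force_bias(phi, psi_T, frak_L, tau):
    """Force bias, Eq. (21): f_g = -i sqrt(tau) <Psi_T|L_g|phi>/<Psi_T|phi>."""
    # one-body density matrix G_{qp} = [phi (psi_T^dag phi)^{-1} psi_T^dag]_{qp}
    G = phi @ np.linalg.inv(psi_T.conj().T @ phi) @ psi_T.conj().T
    return -1j * np.sqrt(tau) * np.einsum('gpq,qp->g', frak_L, G)

def phaseless_step(phi, W, psi_T, h1, frak_L, tau, rng):
    """One ph-AFQMC step for a single walker, Eqs. (18)-(20)."""
    x = rng.standard_normal(len(frak_L))
    f = force_bias(phi, psi_T, frak_L, tau)
    phi_new = apply_propagator(phi, h1, frak_L, x - f, tau)

    ratio = overlap(psi_T, phi_new) / overlap(psi_T, phi)
    I = np.exp(np.dot(x - 0.5 * f, f))                # importance factor, Eq. (20)
    dtheta = np.angle(ratio)                          # phase difference, Eq. (19)
    W_new = W * abs(ratio * I) * max(0.0, np.cos(dtheta))   # Eq. (18)
    return phi_new, W_new
```

The `max(0, cos)` projection is what removes the phase problem at the cost of the phaseless bias.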
As the ph-AFQMC simulation proceeds, the weights of some of the walkers become very large and statistically more important. The comb procedure[84; 85] increases efficiency by splitting these walkers into multiple independent ones. Some walkers with small weight are killed to keep the total number of walkers constant. The bias imposed by the population control can be removed by a standard linear extrapolation or by using a large enough number of walkers in the simulation.[52; 86]

### Measurement of ground-state energy

We measure the ground-state energy in the ph-AFQMC simulation by accumulating the local energies over the MC steps \(k\) and the walkers \(w\), \[E_{0}=\frac{\sum_{kw}W_{k}^{w}E_{\mathrm{loc}}(\Psi_{k}^{w})}{\sum_{kw}W_{k}^{w}}. \tag{22}\] The local energy \[E_{\mathrm{loc}}(\Psi_{k}^{w})=\frac{\langle\Psi_{\mathrm{T}}|\hat{H}|\Psi_{k}^{w}\rangle}{\langle\Psi_{\mathrm{T}}|\Psi_{k}^{w}\rangle}=E_{1}(\Psi_{k}^{w})+E_{\mathrm{H}}(\Psi_{k}^{w})+E_{\mathrm{X}}(\Psi_{k}^{w}) \tag{23}\] consists of a single-particle contribution \(E_{1}(\Psi_{k}^{w})\) and the two-particle contributions of Hartree \(E_{\mathrm{H}}(\Psi_{k}^{w})\) and exchange \(E_{\mathrm{X}}(\Psi_{k}^{w})\). We present the matrix representations of the one- and two-body parts of the local energy in Appendix E.

### Computational details

We used the Vienna Ab initio Simulation Package[3; 4] (VASP) to compute the matrix representation of \(\hat{H}_{1}^{\prime}\) in Eq. (2) and \(\hat{L}_{\mathbf{q}\mathbf{G}}^{\prime}\) in Eq. (3). VASP represents the pseudo-orbitals with plane waves but computes the matrix elements with all-electron precision using the PAW method. For more details about the VASP interface we refer the reader to Appendix B. We employed the PAW potentials listed in Table 2. For Li, we considered the semi-core \(1s\) states as valence states. Table 3 summarizes the experimental lattice constants, the crystal structures and the plane-wave energy cutoffs of the solids explored in Sec. III.

We computed the MP2, CCSD, and CCSD(T) energies with the same setup in VASP to ensure comparability. The probe-charge Ewald method treats the Coulomb kernel singularity in the reciprocal space.[91] In this method, one subtracts an auxiliary function, which has the same singularities as the Coulomb kernel, to regularize the exchange integral.[92] In practice, VASP calculates the correction by placing a probe-charge and compensating homogeneous background into a supercell (determined by the primitive cell and the employed \(\mathbf{k}\)-point grid) and calculates this energy using an Ewald summation.[5] This introduces a constant shift in the input matrices, which we call singularity correction. The singularity correction must be taken into account in the calculation of total energies but does not affect energy differences. For instance, the HF energy and the ph-AFQMC energies shift by a constant value proportional to the number of electrons and the singularity correction. This means that we do not need to apply the correction in the ph-AFQMC calculation, as long as we determine only the _correlation_ energy in the ph-AFQMC calculation.
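In other words, writing the singularity correction as a constant \(c\) added to the diagonal of the one-body matrix, both total energies shift identically and the shift drops out of the correlation energy (a schematic relation, with \(N_{\mathrm{e}}\) the number of electrons):

\[h_{pq}(\mathbf{k})\to h_{pq}(\mathbf{k})+c\,\delta_{pq}\quad\Longrightarrow\quad E_{0}\to E_{0}+cN_{\mathrm{e}},\quad E_{\mathrm{HF}}\to E_{\mathrm{HF}}+cN_{\mathrm{e}},\quad\text{so that}\quad E_{\mathrm{c}}=E_{0}-E_{\mathrm{HF}}\ \text{is unchanged}.\]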
\begin{table} \begin{tabular}{l l l l} atom & valence & projector radii \(r_{\mathrm{cut}}\) & \(r_{\mathrm{core}}\) \\ \hline Li & \(1s^{2}2s^{1}\) & \(2\times 1.2s\), \(1.3s\), \(2\times 1.5s\), \(1.5d\) & 1.5 \\ B & \(2s^{2}2p^{1}\) & \(2\times 1.5s\), \(2\times 1.7p\), \(1.7d\) & 1.7 \\ C & \(2s^{2}2p^{2}\) & \(2\times 1.2s\), \(2\times 1.5p\), \(1.5d\) & 1.5 \\ N & \(2s^{2}2p^{3}\) & \(2\times 1.3s\), \(2\times 1.5p\), \(1.5d\) & 1.5 \\ F & \(2s^{2}2p^{5}\) & \(2\times 1.2s\), \(2\times 1.4s\), \(1.4d\) & 1.4 \\ Ne & \(2s^{2}2p^{6}\) & \(3\times 1.4s\), \(3\times 1.5p\), \(1.5d\), \(1.6d\) & 1.6 \\ Al & \(3s^{2}3p^{1}\) & \(2\times 1.9s\), \(2\times 1.9s\), \(2\times 1.9s\), \(2\times 1.9d\), \(2.0f\) & 2.0 \\ Si & \(3s^{2}3p^{2}\) & \(2\times 1.9s\), \(2\times 1.9s\), \(2\times 1.9s\), \(1.9f\) & 1.9 \\ P & \(3s^{2}3p^{3}\) & \(2\times 1.9s\), \(2\times 1.9s\), \(2\times 2.0d\), \(2.0f\) & 2.0 \\ Ar & \(3s^{2}3p^{6}\) & \(1.4s\), \(1.9s\), \(2\times 1.9s\), \(2\times 1.9s\), \(2\times 1.9s\), \(1.9f\) & 1.9 \\ \end{tabular} \end{table} Table 2: List of PAW potentials used in the present work and the corresponding valence electrons. For the projector radii \(r_{\mathrm{cut}}\), the subscript describes their angular momentum and a prefactor their multiplicity. Within \(r_{\mathrm{core}}\) the pseudopotential replaces the all-electron potential. The values of \(r_{\mathrm{cut}}\) and \(r_{\mathrm{core}}\) are in atomic units. In VASP, the PAWs are labeled Li_sv_GW for Li and \(X\)_GW for the other elements \(X\).
We also performed ph-AFQMC calculations with and without imposing the singularity correction and confirmed that both approaches yield identical correlation energies within statistical errors. Furthermore, as a sanity test, we reconstructed the Fock matrix from the input matrices and ascertained that they yield consistent eigenvalues and MP2 correlation energies.

## III Results

In this section, we present our ph-AFQMC results. We explore the errors associated with the finite time step and the population control (Sec. III.1). Comparing the correlation energy, we validate the implementation with a \(\mathbf{k}\)-point mesh and the equivalent supercell (Sec. III.2). For the equation of state of diamond, ph-AFQMC agrees almost perfectly with CCSD(T), much improving on CCSD and MP2 (Sec. III.3). This trend holds for the correlation energy of many more materials (Sec. III.4). Finally, we employ a down-sampling technique in order to recover the correlation energy of a denser \(\mathbf{k}\)-point mesh at large basis sets (Sec. III.5). All energies are reported in eV per unit cell.

### Population-control and time-step errors

We investigate the time-step error and the population-control bias to determine appropriate choices for the time step and the walker population size, ascertaining that these systematic errors are smaller than the statistical error. For small walker populations, the population control biases the correlation energy towards smaller absolute values.[46] Fig. 1 shows the correlation energy \(E_{\text{c}}\) of diamond employing a \(2\times 2\times 2\) \(\mathbf{k}\)-point mesh. The error in the correlation energy increases proportionally to the inverse of the population size, consistent with what Ref. [46] reports for molecules. This dependence allows us to extrapolate to the infinite population size to correct for this bias.[52; 86] The linearly extrapolated value agrees with the result for 2048 walkers. We find that 256 walkers are sufficient to calculate the correlation energy to a precision of 1 meV.

Fig. 2 shows how the relative error \(\delta E_{0}\) of the total energy per unit cell scales with the number of \(\mathbf{k}\) points \(N_{\mathbf{k}}\). Here, we defined \(\delta E_{0}=\frac{\Delta E_{0}}{E_{0}}\), where \(\Delta E_{0}\) is the standard error of the mean. We conclude that utilizing a denser \(\mathbf{k}\)-point mesh requires a smaller number of MC steps because each walker samples more of the Brillouin zone in each MC step. For a fixed number of walkers, we observe a linear relationship between the relative error and \(\frac{1}{\sqrt{N_{\mathbf{k}}}}\). Therefore, denser \(\mathbf{k}\)-point meshes achieve the same relative error \(\delta E_{0}\) with a factor of \(N_{\mathbf{k}}\) fewer MC steps, if the number of walkers is kept fixed. This observation is very beneficial, and suggests that the statistical errors at different \(\mathbf{k}\) points are largely uncorrelated and thus average out. It also means that we can either decrease the number of walkers or the sampling period by \(N_{\mathbf{k}}\) when the number of \(\mathbf{k}\) points increases.

Figure 1: ph-AFQMC correlation energy \(E_{\text{c}}\) vs. inverse population size for diamond using a \(\Gamma\)-centered \(2\times 2\times 2\) \(\mathbf{k}\)-point mesh with 8 electrons in 8 HF orbitals per \(\mathbf{k}\) point. The population control biases \(E_{\text{c}}\) when small populations are used. 256 walkers reproduce \(E_{\text{c}}\) of the infinite population size limit to a precision of 1 meV.
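The linear extrapolation in the inverse population size can be performed with a simple least-squares fit. The sketch below uses synthetic data generated from an assumed linear bias model; the numbers are illustrative placeholders, not values taken from Fig. 1.

```python
import numpy as np

# synthetic correlation energies following the assumed bias model
# E_c(N_w) = E_inf + a / N_w, plus small noise (illustrative values only)
rng = np.random.default_rng(1)
N_w = np.array([64, 128, 256, 512, 1024, 2048])
E_inf, a = -1.50, 2.0                                  # eV
E_c = E_inf + a / N_w + 1e-4 * rng.standard_normal(N_w.size)

# linear fit of E_c vs 1/N_w; the intercept estimates the infinite-population limit
slope, intercept = np.polyfit(1.0 / N_w, E_c, 1)
print(f"extrapolated E_c(N_w -> inf) = {intercept:.4f} eV")
```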
ph-AFQMC is only accurate up to linear order in the time step \(\tau\). The Hubbard-Stratonovich transformation, the Trotter factorization [93], and evaluating the matrix exponential introduce time-step errors.[46] Fig. 3 illustrates that time steps \(\tau\leq 0.55\) keV\({}^{-1}\) yield equivalent correlation energies within the statistical fluctuations. Larger time steps systematically decrease the absolute value of the correlation energy. In this work, we set the time step to \(\tau=0.25\) keV\({}^{-1}\) and expect a negligible (smaller than the statistical error) time-step error without extrapolation.

### Supercell vs. primitive-cell calculation

In this subsection, we validate our ph-AFQMC implementation with a \(\mathbf{k}\)-point mesh in the primitive cell against the corresponding supercell. Sampling the Brillouin zone with a \(2\times 2\times 2\) \(\mathbf{k}\)-point mesh is equivalent to a \(\Gamma\)-point calculation of a supercell enlarged by a factor of two along each axis. The primitive-cell calculation takes advantage of translational symmetry. Therefore, the number of numerical operations is roughly a factor \(N_{\mathbf{k}}\) smaller in the primitive cell than in the corresponding supercell. We note in passing that these savings are not always observed in actual calculations, since the matrix dimensions are significantly smaller in the primitive cell, resulting in some performance loss.

For a direct comparison, it is important that the primitive cell and the supercell use the same orbitals. The orbitals are ordered by their eigenvalues \(\epsilon_{n\mathbf{k}}\) at each \(\mathbf{k}\) point. In the primitive cell, \(\epsilon_{n+1\mathbf{k}}<\epsilon_{n\mathbf{k}^{\prime}}\) is possible, whereas in the supercell the former orbital would be included before the latter. To address this, we include more bands in the primitive cell such that all eigenvalues of the supercell are reproduced. Then, we zero all contributions in the primitive cell corresponding to orbitals not present in the supercell. In Table 4, we verify that our ph-AFQMC implementation computes consistent correlation energies, within statistical fluctuations, for five different crystals. For all systems, we used 8 HF orbitals per \(\mathbf{k}\) point in the primitive cell. Compared to ph-AFQMC, CCSD(T) yields systematically about 10 meV more positive correlation energies, a point we will return to in Sec. III.4.

### Equation of state

In this subsection, we benchmark the accuracy of the ph-AFQMC total energies against popular deterministic quantum-chemistry methods for diamond, concentrating in particular on relative energies, such as those produced by changes of the volume. Fig. 4 shows the equation of state of diamond for a range of lattice constants. 
The calculations use a \(\Gamma\)-centered \(3\times 3\times 3\) \(\mathbf{k}\)-point mesh with 8 HF orbitals per \(\mathbf{k}\) point. Compared to ph-AFQMC, the total energies of MP2 and CCSD are 273 meV and 99 meV higher, respectively. In contrast, the CCSD(T) energies are less than 10 meV higher than the ph-AFQMC energies. Furthermore, the optimized lattice constant of 3.578 Å in MP2 is considerably smaller than the 3.596 Å obtained with ph-AFQMC. CCSD and CCSD(T) yield lattice constants of 3.596 Å and 3.597 Å, respectively, almost identical to the ph-AFQMC result.

\begin{table} \begin{tabular}{c c c c c} \hline \hline crystal & space group & \(E_{\text{prim}}^{\text{AF}}\) (eV) & \(E_{\text{sc}}^{\text{AF}}\) (eV) & \(E_{\text{sc}}^{\text{CC}}\) (eV) \\ \hline Ne & Fm3m & \(-0.2131(12)\) & \(-0.2127(12)\) & \(-0.2107\) \\ C & Fd3m & \(-1.4945(101)\) & \(-1.4951(54)\) & \(-1.4830\) \\ BN & F43m & \(-1.1913(153)\) & \(-1.1988(52)\) & \(-1.1861\) \\ LiF & Fm3m & \(-0.5257(18)\) & \(-0.5223(37)\) & \(-0.5134\) \\ SiC & F43m & \(-1.0323(76)\) & \(-1.0310(70)\) & \(-1.0252\) \\ \hline \hline \end{tabular} \end{table} Table 4: ph-AFQMC correlation energies of the primitive cell (\(E_{\text{prim}}^{\text{AF}}\)) and of the corresponding supercell (\(E_{\text{sc}}^{\text{AF}}\)) agree within statistical fluctuations. The absolute value of the ph-AFQMC correlation energy is systematically larger than the one obtained with CCSD(T) (\(E_{\text{sc}}^{\text{CC}}\)).

Figure 3: ph-AFQMC correlation energy \(E_{\text{c}}\) vs. time step \(\tau\) for diamond using a \(\Gamma\)-centered \(2\times 2\times 2\) \(\mathbf{k}\)-point mesh with 8 electrons in 8 HF orbitals per \(\mathbf{k}\) point. For all \(\tau\leq 0.55\) keV\({}^{-1}\), the calculated \(E_{\text{c}}\) agree within statistical errors.

Figure 4: Equation of state for diamond: MP2 (blue) and CCSD (green) yield larger energies than CCSD(T) (red) and ph-AFQMC (black). The optimized lattice constants (dashed vertical lines) are very similar except for the MP2 method. CCSD and ph-AFQMC vertical lines, marking the equilibrium lattice constants, are visually indistinguishable.

### Comparison with coupled-cluster methods for total energies

This subsection illustrates what accuracy of the correlation energy one can expect from ph-AFQMC compared to coupled-cluster methods. We compute correlation energies for several prototypical semiconductors and insulators with experimental band gaps ranging from 1.4 eV (Si) to 21.7 eV (Ne) to cover different bonding situations. A \(\Gamma\)-centered \(2\times 2\times 2\) \(\mathbf{k}\)-point mesh allows for a reasonable run time for all systems. For LiF and LiCl, we use 10 electrons and 9 HF orbitals per \(\mathbf{k}\) point. All other materials are calculated with 8 valence electrons and 8 HF orbitals per \(\mathbf{k}\) point. We compare the correlation energy of the different methods in Fig. 5. With the exception of the noble-gas crystals, CCSD underestimates the absolute value of the correlation energy by 40 meV to 95 meV compared to ph-AFQMC. In contrast, ph-AFQMC and CCSD(T) agree within 25 meV for the correlation energy of all materials, which is within chemical accuracy. Sukurma _et al._[94] recently proposed a modified version of the phaseless approximation (ph*-AFQMC) to suppress the over-correlation issues of ph-AFQMC. In this approach, the complex nature of the walker weights is retained and if \(|\theta_{k}^{w}|\geq\frac{\pi}{2}\) the walker is explicitly killed. 
Furthermore, the real part of the importance weight instead of the absolute value is used in the update procedure. For a more detailed description of ph*-AFQMC we refer the reader to Ref. [94]. Fig. 5 demonstrates that ph*-AFQMC yields either consistent or slightly less negative correlation energies compared to the original ph-AFQMC method for the explored solids, with the exception of LiF. This is consistent with what Ref. [94] reports for the HEAT-set molecules.

Generally, compared to CCSD(T), ph-AFQMC yields more negative correlation energies, with the exception of silicon. As to why the ph-AFQMC values are more negative than the CCSD(T) values, we need to speculate. Generally, for weakly correlated systems, coupled-cluster methods converge from above to the exact correlation energy.[95] In other words, CCSD(T) has a slight tendency to under-correlate compared to coupled-cluster methods that include triple, quadruple and pentuple excitation operators.[95] However, likewise, ph-AFQMC can also yield too negative correlation energies.[94] Only when strong double excitations are relevant does ph-AFQMC generally tend to under-correlate. Since both methods are non-variational, it is hard to tell which value is more accurate. The under-correlation for Si, together with its relatively small single-particle band gap, however, suggests that the slight under-correlation for Si is related to ph-AFQMC underestimating energy contributions from double excitations. Since the ph*-AFQMC correlation energies are also more negative than the CCSD(T) ones, we tend to believe that the ph*-AFQMC values are closer to the ground truth. Anyhow, to resolve this issue, obviously more accurate reference methods are required.

Booth _et al._[96] employed FCIQMC with the initiator approximation (\(i\)-FCIQMC) on the same set of materials to investigate the accuracy of standard quantum-chemistry methods. However, recent work suggests that various approximations such as the number of initiators used in this seminal work could have led to an underestimation of the correlation energies.[97] Underestimated absolute correlation energies were also found for benzene, where \(i\)-FCIQMC results were 1.5 kcal/mol above the most accurate total energy estimates.[98, 99] Hence, although our results are within chemical accuracy of the reference values, fully converged CI calculations with consistent basis sets and pseudopotentials are required to resolve this ambiguity. In this context it is also worth noting that Lee _et al._[100] carried out a comparative study using ph-AFQMC for the uniform electron gas. Their findings show that ph-AFQMC correlation energies are in good agreement with \(i\)-FCIQMC at densities corresponding to \(r_{S}=1.0\) and \(r_{S}=2.0\), whereas at \(r_{S}=5.0\) the \(i\)-FCIQMC correlation energies are more negative than those obtained with ph-AFQMC.[100, 101] CCSDT theory yields energies in good agreement with \(i\)-FCIQMC at high densities but underestimates the absolute correlation energies at densities corresponding to \(r_{S}=2.0\) and \(r_{S}=5.0\).[100, 102] Although CCSD(T) and CCSDT can differ, this is not expected for the investigated relatively small system sizes with about 14 electrons. Therefore, the uniform electron gas findings also support our hypothesis that ph*-AFQMC is closer to the ground truth for the investigated solids in this work. 
Figure 5: Relative correlation energies obtained from CCSD (green), ph-AFQMC (black) and ph*-AFQMC (yellow) with respect to the CCSD(T) values for a range of crystals employing a \(\Gamma\)-centered \(2\times 2\times 2\) \(\mathbf{k}\)-point mesh. The reference values are the CCSD(T) correlation energies, i.e., \(\Delta E=E_{\mathrm{c}}-E_{\mathrm{c}}^{\mathrm{CCSD(T)}}\).

ph-AFQMC exhibits a better scaling with system size compared to CCSD(T). To illustrate this, we benchmark the scaling of our ph-AFQMC Python code against the VASP CCSD(T) code developed in Fortran. In Fig. 6, we explore the scaling for various regular \(\Gamma\)-centered \(\mathbf{k}\)-point meshes of diamond with 8 orbitals per \(\mathbf{k}\) point. The CCSD(T) computational cost is slightly larger than for ph-AFQMC at \(N_{\mathbf{k}}=64\), which is the largest system size we have investigated in this work. Empirically, ph-AFQMC shows a roughly cubic scaling with \(N_{\mathbf{k}}\), while here CCSD(T) approximately scales as \(N_{\mathbf{k}}^{5.5}\).

### Down-sampling of the correlation energy

The preceding subsections illustrated benchmark calculations for the PAW method with coarse \(\mathbf{k}\)-point meshes and few HF orbitals. In this subsection, we present a strategy to perform predictive ph-AFQMC calculations. This strategy is outlined and tested for diamond. We test two commonly used techniques to converge to the complete basis-set (CBS) limit. First, natural orbitals [103, 104, 105, 106] reduce the number of virtual orbitals required to converge the correlation energy. We use natural orbitals obtained by the random phase approximation [7, 107]. Second, because differences converge faster than absolute energies, _down-sampling_ via energy differences [108] allows us to estimate the CBS correlation energy at significantly reduced cost.

Specifically, we intend to extrapolate the correlation energy to infinitely-dense \(\mathbf{k}\)-point meshes. Here, we use \(n_{\mathbf{k}}\times n_{\mathbf{k}}\times n_{\mathbf{k}}\) \(\mathbf{k}\)-point meshes for a linear fit to zero volume per \(\mathbf{k}\) point based on \(n_{\mathbf{k}}=3\) and \(n_{\mathbf{k}}=4\). To obtain the down-sampled correlation energy at a denser \(\mathbf{k}\)-point mesh, we compute the energy difference when increasing \(n_{\mathbf{k}}\) at a fixed number of orbitals per \(\mathbf{k}\) point \(n_{\mathrm{b}}\) \[\Delta_{\mathrm{c}}(n_{\mathbf{k}},n_{\mathrm{b}})=E_{\mathrm{c}}(n_{\mathbf{k}}+1,n_{\mathrm{b}})-E_{\mathrm{c}}(n_{\mathbf{k}},n_{\mathrm{b}}). \tag{24}\] We converge the correlation energy at the coarser \(\mathbf{k}\)-point mesh with respect to the number of orbitals per \(\mathbf{k}\) point \(n_{\mathrm{b}}\). Adding the difference yields an approximation for the correlation energy at the dense mesh \[E_{\mathrm{c}}(n_{\mathbf{k}}+1)\approx E_{\mathrm{c}}(n_{\mathbf{k}}+1,n_{\mathrm{b}})=E_{\mathrm{c}}(n_{\mathbf{k}})+\Delta_{\mathrm{c}}(n_{\mathbf{k}},n_{\mathrm{b}}). \tag{25}\] One can then iterate this procedure with a reduced \(n_{\mathrm{b}}\) to obtain approximations for even denser meshes. 
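The bookkeeping of Eqs. (24) and (25) is easily automated. A small sketch follows; the function name is our own, and the energy values are placeholders standing in for converged calculations.

```python
def downsample_step(E_coarse_conv, E_fine_nb, E_coarse_nb):
    """One down-sampling step, Eqs. (24)-(25):
    E_c(n_k+1) ~ E_c(n_k) + [E_c(n_k+1, n_b) - E_c(n_k, n_b)]."""
    return E_coarse_conv + (E_fine_nb - E_coarse_nb)

# iterate with decreasing n_b (placeholder numbers in eV)
E_c2 = -8.66                                                        # converged at the 2x2x2 mesh
E_c3 = downsample_step(E_c2, E_fine_nb=-8.95, E_coarse_nb=-8.64)    # difference at n_b = 16
E_c4 = downsample_step(E_c3, E_fine_nb=-9.04, E_coarse_nb=-8.95)    # difference at n_b = 8
print(E_c3, E_c4)
```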
To verify this procedure, we evaluate the correlation energy of MP2 with the down-sampling technique \[\begin{split} E_{\mathrm{c}}(n_{\mathbf{k}}=3)&\approx E_{\mathrm{c}}(n_{\mathbf{k}}=2)+\Delta_{\mathrm{c}}(n_{\mathbf{k}}=2,n_{\mathrm{b}}=16),\\ E_{\mathrm{c}}(n_{\mathbf{k}}=4)&\approx E_{\mathrm{c}}(n_{\mathbf{k}}=3)+\Delta_{\mathrm{c}}(n_{\mathbf{k}}=3,n_{\mathrm{b}}=8).\end{split} \tag{26}\] We compute the correlation energy \(E_{\mathrm{c}}(n_{\mathbf{k}}=2)\) with 64 natural orbitals. Fig. 7a compares these energies to the CBS where all calculations use \(n_{\mathrm{b}}=64\); the two sets of data points are visually indistinguishable. Extrapolating to the infinitely dense \(\mathbf{k}\)-point mesh yields a correlation energy of \(-8.987\) eV for the CBS and of \(-8.986\) eV for the down-sampled energies.

We employ the down-sampling technique for the correlation energy of MP2, CCSD, CCSD(T), and ph-AFQMC. Fig. 7b shows the resulting extrapolations. For MP2, the extrapolation from \(n_{\mathbf{k}}=3\) to \(n_{\mathbf{k}}=4\) shows a different slope and a large deviation for \(n_{\mathbf{k}}=2\) compared to the other methods. In contrast, the ph-AFQMC correlation energy would only change by 9 meV if one used \(n_{\mathbf{k}}=2\) and \(n_{\mathbf{k}}=3\) for the extrapolation. CCSD(T) and ph-AFQMC agree within statistical errors for denser meshes. The extrapolated correlation energy is \(-9.116\) eV for CCSD(T) and \(-9.106(30)\) eV for ph-AFQMC.

Figure 6: System size scaling of ph-AFQMC (black) and CCSD(T) (red) for various \(\Gamma\)-centered \(\mathbf{k}\)-point meshes of diamond. We employed 8 HF orbitals per \(\mathbf{k}\) point. Both ph-AFQMC and CCSD(T) calculations were performed on a single-socket AMD EPYC 7713 with one MPI process.

Figure 7: Diamond correlation energy \(E_{\mathrm{c}}\) sampled with an \(n_{\mathbf{k}}\times n_{\mathbf{k}}\times n_{\mathbf{k}}\) \(\mathbf{k}\)-point mesh. The solid lines extrapolate to \(n_{\mathbf{k}}\rightarrow\infty\) using \(n_{\mathbf{k}}=3\) and \(n_{\mathbf{k}}=4\). (a) Comparison of the down-sampled correlation energy (blue) with the one obtained with the complete basis set for MP2 (orange). The extrapolated lines are visually indistinguishable. (b) The down-sampled correlation energies for MP2 (blue), CCSD (green), CCSD(T) (red) and ph-AFQMC (black).

There are two important conclusions we can draw from this test, and these are also more clearly borne out in Table 5. First, the difference in the correlation energy between ph-AFQMC and CCSD(T) does not increase as the number of virtual orbitals increases. On the contrary, it seems to decrease, although we might need better statistical accuracy and tests for more materials to be certain. This observation is fully in line with the observations we recently made for small molecules.[94] This also means that the potential errors of both methods relate to low-energy excitations, and tests using few states are already very meaningful. Second, increasing the number of \(\mathbf{k}\) points does not change the difference between ph-AFQMC and CCSD(T) appreciably (note that our error bars for the densest \(\mathbf{k}\)-point mesh are fairly sizable). Hence, the excellent agreement of CCSD(T) and ph-AFQMC prevails even in the thermodynamic limit.

## IV Conclusion

ph-AFQMC is potentially a great method to obtain very accurate reference results for solid-state systems. 
The present work merely tries to establish that ph-AFQMC is competitive with CCSD(T) and capable of yielding very accurate results for solids with quite different characteristics. As already emphasized in the introduction, one advantage of ph-AFQMC is that it is fully compatible with other quantum-chemistry methods, such as MP2, CCSD or CCSD(T). This means that one can directly compare predicted energies with these and other well-established quantum-chemistry methods. It is clear and well understood that this also entails significant disadvantages, such as a slow convergence with respect to the number of virtual states (unoccupied orbitals) included in the calculations. Certainly DMC is superior in this respect, but validation of DMC against other quantum-chemistry methods is notoriously difficult. For instance, recent conflicting results for localized coupled-cluster and DMC calculations for large weakly bonded molecules are very difficult to disentangle, as absolute energies cannot be compared between the methods.[19]

We have demonstrated that in the thermodynamic limit, that is, for many \(\mathbf{k}\) points and virtual bands, ph-AFQMC is superior to a quite efficient Fortran implementation of CCSD(T). This is remarkable insofar as our own implementation is not yet fully optimized and uses Python. Specifically, CCSD(T) possesses a disadvantageous scaling with both the number of \(\mathbf{k}\) points and the number of virtual orbitals. In practice we found that our ph-AFQMC code scales quadratically with respect to the number of unoccupied orbitals, and cubically with respect to the number of \(\mathbf{k}\) points. This means it will always outpace a CCSD(T) code for sufficiently large systems.

The second important observation is that for the solids investigated here, CCSD(T) and ph-AFQMC agree within chemical accuracy for absolute energies. Although we have done the comparison initially for few \(\mathbf{k}\) points and few bands, our tests for diamond clearly show that the differences will not increase as the number of \(\mathbf{k}\) points or bands increases. The excellent agreement also means that one can potentially mix and match ph-AFQMC and CCSD(T) in actual investigations and rely on advantages of one or the other method for specific sub-problems.

Last but not least, we have made one somewhat disconcerting observation: our ph-AFQMC correlation energies are generally slightly more negative than the corresponding CCSD(T) energies (but again well within chemical accuracy). It is well understood that CCSD(T) often converges from above, so potentially better agreement would be obtained when quadruple and pentuple excitation operators are included in the coupled-cluster calculations. However, ph-AFQMC, being non-variational, also sometimes overcorrelates; so maybe the CCSD(T) values are more accurate after all. The available data are clearly not sufficient to draw a final conclusion. Specifically, the reported FCIQMC values hint towards even smaller absolute correlation energies than CCSD(T). This we believe to be unlikely: for instance, for small molecules, CCSD(T) certainly underestimates the correlation energy consistently.[94; 95] Why should this be any different for simple prototypical insulators and semiconductors? So reference-type calculations for solids using few \(\mathbf{k}\) points and bands would be very helpful to solve this small but important "riddle".

###### Acknowledgements.

Funding by the Austrian Science Foundation (FWF) within the project P 33440 is gratefully acknowledged. 
Parts of the presented computational results have been obtained using the Vienna Scientific Cluster (VSC).

## Author Declarations

### Conflict of Interest

The authors have no conflicts to disclose.

### Author Contributions

Amir Taheridehkordi: Investigation, Methodology, Software, Writing - original draft. Martin Schlipf: Software, Writing - review & editing. Zoran Sukurma: Methodology, Writing - review & editing. Moritz Humer: Writing - review & editing. Andreas Grüneis: Writing - review & editing. Georg Kresse: Project administration, Writing - review & editing.

\begin{table} \begin{tabular}{l c c c} \(n_{\mathbf{k}}\) & \(n_{\mathrm{b}}\) & \(E_{\mathrm{c}}^{\mathrm{CC}}\) (eV) & \(E_{\mathrm{c}}^{\mathrm{AF}}\) (eV) \\ \hline 2 & 64 & -8.661 & -8.693(10) \\ 3 & 16 & -8.966 & -8.977(13) \\ 4 & 8 & -9.052 & -9.052(16) \\ \end{tabular} \end{table} Table 5: CCSD(T) and ph-AFQMC down-sampled correlation energies of the primitive cell for various \(n_{\mathbf{k}}\) and \(n_{\mathrm{b}}\).

## Data Availability

The data that support the findings of this study are available within the article. The Python code is available from the first author upon reasonable request.

## Appendix A Hamiltonian

Hartree units are used in the appendix and throughout the manuscript. The electronic Born-Oppenheimer Hamiltonian consists of one-body and two-body parts [16; 109] \[\hat{H}=\hat{H}_{1}+\hat{H}_{2}. \tag{11}\] The one-body part is defined as \[\hat{H}_{1}=\sum_{pq}t_{pq}\hat{a}_{p}^{\dagger}\hat{a}_{q}, \tag{12}\] where \(\hat{a}_{p}\) (\(\hat{a}_{p}^{\dagger}\)) are fermionic annihilation (creation) operators associated with an orthonormal basis \(\phi_{p}\). The matrix elements are \[t_{pq}=\int\mathrm{d}\mathbf{r}\,\phi_{p}^{*}(\mathbf{r})\bigg(-\frac{1}{2}\nabla^{2}-\sum_{a}\frac{Z_{a}}{|\mathbf{r}-\mathbf{R}_{a}|}\bigg)\phi_{q}(\mathbf{r}), \tag{13}\] where \(\mathbf{R}_{a}\) and \(Z_{a}\) denote the position and atomic number of the nucleus with label \(a\), respectively. The two-body part of the Hamiltonian (11) describes the electron-electron interaction \[\hat{H}_{2}=\frac{1}{2}\sum_{pqrs}\langle pq|rs\rangle\hat{a}_{p}^{\dagger}\hat{a}_{q}^{\dagger}\hat{a}_{s}\hat{a}_{r}, \tag{14}\] introducing an abbreviation for the Coulomb integral \[\langle pq|rs\rangle=\int\mathrm{d}\mathbf{r}\,\mathrm{d}\mathbf{r}^{\prime}\,\phi_{p}^{*}(\mathbf{r})\phi_{q}^{*}(\mathbf{r}^{\prime})\frac{1}{|\mathbf{r}-\mathbf{r}^{\prime}|}\phi_{r}(\mathbf{r})\phi_{s}(\mathbf{r}^{\prime}). \tag{15}\] Next, we introduce a \(\mathbf{k}\)-point mesh to sample the Brillouin zone. The one-body part is diagonal in the Bloch vector \(\mathbf{k}\) \[\hat{H}_{1}=\sum_{\mathbf{k}}\sum_{pq}t_{pq}(\mathbf{k})\hat{a}_{p\mathbf{k}}^{\dagger}\hat{a}_{q\mathbf{k}}. \tag{16}\]
The two-body part has to fulfill momentum conservation [55] \[\hat{H}_{2}=\frac{1}{2}\sum_{\mathbf{k}_{p}+\mathbf{k}_{q}=\mathbf{k}_{r}+\mathbf{k}_{s}}\sum_{pqrs}\langle p\mathbf{k}_{p},q\mathbf{k}_{q}|r\mathbf{k}_{r},s\mathbf{k}_{s}\rangle\\ \times\hat{a}_{p\mathbf{k}_{p}}^{\dagger}\hat{a}_{q\mathbf{k}_{q}}^{\dagger}\hat{a}_{s\mathbf{k}_{s}}\hat{a}_{r\mathbf{k}_{r}}, \tag{17}\] which we rewrite introducing the transferred momentum \(\mathbf{q}=\mathbf{k}_{p}-\mathbf{k}_{r}\) \[\hat{H}_{2}=\frac{1}{2}\sum_{\mathbf{q},\mathbf{k}_{r},\mathbf{k}_{s}}\sum_{pqrs}\langle p\,\mathbf{k}_{r}+\mathbf{q},q\,\mathbf{k}_{s}-\mathbf{q}|r\mathbf{k}_{r},s\mathbf{k}_{s}\rangle\\ \times\hat{a}_{p\,\mathbf{k}_{r}+\mathbf{q}}^{\dagger}\hat{a}_{q\,\mathbf{k}_{s}-\mathbf{q}}^{\dagger}\hat{a}_{s\mathbf{k}_{s}}\hat{a}_{r\mathbf{k}_{r}}. \tag{18}\] Using a plane-wave basis set, we express the electron-repulsion integrals in reciprocal space as \[\langle p\,\mathbf{k}_{r}+\mathbf{q},q\,\mathbf{k}_{s}-\mathbf{q}|r\mathbf{k}_{r},s\mathbf{k}_{s}\rangle=\sum_{\mathbf{G}}\frac{4\pi}{|\mathbf{G}-\mathbf{q}|^{2}}\\ \times\rho_{pr\mathbf{k}_{r}}(\mathbf{q},\mathbf{G})\,\rho_{sq\,\mathbf{k}_{s}-\mathbf{q}}^{*}(\mathbf{q},\mathbf{G}). \tag{19}\] Here, we introduced the two-orbital density \(\rho\) \[\rho_{pr\mathbf{k}_{r}}(\mathbf{q},\mathbf{G})=\frac{1}{\sqrt{\Omega}}\int\mathrm{d}\mathbf{r}\,e^{i(\mathbf{G}-\mathbf{q})\cdot\mathbf{r}}\phi_{p\,\mathbf{k}_{r}+\mathbf{q}}^{*}(\mathbf{r})\phi_{r\mathbf{k}_{r}}(\mathbf{r}), \tag{20}\] and \(\Omega\) is the volume of the system. The summation over plane waves in Eq. (19) is truncated by an energy cutoff \(G^{2}/2=E_{\mathrm{cut}}\). Finally, we commute \(\hat{a}_{r\mathbf{k}_{r}}\) to the left in Eq. (18) using the anti-commutation relation between the fermionic operators, \(\{\hat{a}_{r}^{\dagger},\hat{a}_{p}\}=\hat{a}_{r}^{\dagger}\hat{a}_{p}+\hat{a}_{p}\hat{a}_{r}^{\dagger}=\delta_{pr}\). This yields the modified one- and two-body operators shown in Eqs. (2) and (3), respectively. The updated one-body matrix elements are \[h_{pq}(\mathbf{k})=t_{pq}(\mathbf{k})-\frac{1}{2}\sum_{\mathbf{k}_{r}}\sum_{r}\langle p\mathbf{k},r\mathbf{k}_{r}|r\mathbf{k}_{r},q\mathbf{k}\rangle. \tag{21}\] We introduce the operators \[\hat{L}_{\mathbf{q}\mathbf{G}}^{\prime}=\frac{\sqrt{4\pi}}{|\mathbf{G}-\mathbf{q}|}\sum_{\mathbf{k}_{r}}\sum_{pr}\rho_{pr\mathbf{k}_{r}}(\mathbf{q},\mathbf{G})\,\hat{a}_{p\,\mathbf{k}_{r}+\mathbf{q}}^{\dagger}\hat{a}_{r\mathbf{k}_{r}} \tag{22}\] to write the two-body operator in terms of one-body operators: \[\hat{H}_{2}^{\prime}=\frac{1}{2}\sum_{\mathbf{q}\mathbf{G}}\hat{L}_{\mathbf{q}\mathbf{G}}^{\prime}\hat{L}_{\mathbf{q}\mathbf{G}}^{\prime\dagger}. \tag{23}\]

## Appendix B VASP interface

For the ph-AFQMC code, we require the one-body matrix \(t_{pq}(\mathbf{k})\) in Eq. (16) and the two-body tensor \[L_{pr\mathbf{k},\mathbf{q}\mathbf{G}}^{\prime}=\frac{\sqrt{4\pi}}{|\mathbf{G}-\mathbf{q}|}\rho_{pr\mathbf{k}}(\mathbf{q},\mathbf{G}) \tag{24}\] in Eq. (22). We compute these using VASP.[3; 4] Typically, we use a smaller energy cutoff for the two-body tensor than for the one-body matrix; this can lead to small inconsistencies in the energy of the HF ground state. To address this, we precalculate the fully self-consistent one-body HF-Hamiltonian matrix \(t_{pq}^{\mathrm{HF}}(\mathbf{k})\) within the VASP code. This term includes the kinetic energy, the ion-electron potential, the Hartree potential and the Fock exchange operator. Details on the VASP implementation are given in Ref. [5]. For the canonical HF orbitals, the matrix is diagonal and equal to the eigenvalues \(t_{pq}^{\mathrm{HF}}(\mathbf{k})=\varepsilon_{p\mathbf{k}}\delta_{pq}\).
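To make the interface concrete, the following minimal sketch shows how the exported quantities could be combined in Python. The file names and array shapes are hypothetical; for brevity, a single \(\mathbf{k}\) point and a single momentum transfer \(\mathbf{q}\) are assumed, so the composite indices reduce to plain band indices.

```python
import numpy as np

# Hypothetical file names; VASP exports t_pq(k) and L' in NPY format
# (see the end of this appendix).
t = np.load("t_pq.npy")   # (nb, nb): one-body matrix elements t_pq
L = np.load("L_G.npy")    # (nG, nb, nb): L'[G][p, r] for a fixed q

# Electron-repulsion integrals from the factorization, cf. Eqs. (19) and (24):
# <p q | r s> = sum_G  L[G, p, r] * conj(L[G, q, s])
eri = np.einsum("gpr,gqs->pqrs", L, L.conj())
```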
For the ph-AFQMC calculation, we require only the kinetic energy term and the ion-electron potential. We obtain this term by removing the Hartree (\(J_{pq}(\mathbf{k})\)) and Fock-exchange (\(K_{pq}(\mathbf{k})\)) contributions, calculated from the two-body tensors (with the lower energy cutoff, which is equal to the plane-wave energy cutoff), from \(t_{pq}^{\mathrm{HF}}(\mathbf{k})\), which yields the matrix elements \[t_{pq}(\mathbf{k})=t_{pq}^{\mathrm{HF}}(\mathbf{k})-J_{pq}(\mathbf{k})+K_{pq}(\mathbf{k}), \tag{25}\] with \[\begin{split} J_{pq}(\mathbf{k})&=2\sum_{\mathbf{k}^{\prime}\mathbf{G}}\sum_{i}\big(L^{\prime}_{\mathbf{0}\mathbf{G}}\big)_{p\mathbf{k},q\mathbf{k}}\big(L^{\prime}_{\mathbf{0}\mathbf{G}}\big)^{*}_{i\mathbf{k}^{\prime},i\mathbf{k}^{\prime}},\\ K_{pq}(\mathbf{k})&=\sum_{\mathbf{k}^{\prime}\mathbf{G}}\sum_{i}\big(L^{\prime}_{\mathbf{q}\mathbf{G}}\big)_{p\mathbf{k},i\mathbf{k}^{\prime}}\big(L^{\prime}_{\mathbf{q}\mathbf{G}}\big)^{*}_{q\mathbf{k},i\mathbf{k}^{\prime}},\end{split} \tag{23}\] where the index \(i\) goes over the occupied states and the momentum transfer in \(K_{pq}\) is fixed by \(\mathbf{q}=\mathbf{k}-\mathbf{k}^{\prime}\). This strategy minimizes truncation errors that would occur if we just exported the kinetic energy and the electron-ion matrix elements from the VASP code. For instance, a self-consistent HF calculation using the one-body Hamiltonian \(t_{pq}(\mathbf{k})\) and the two-body tensors \(L^{\prime}_{\mathbf{q}\mathbf{G}}\) yields exactly the same eigenvalues as the preceding VASP calculation.

For the two-orbital tensor defined above, we loop \(\mathbf{q}\) and \(\mathbf{k}_{p}\) over the mesh sampling the Brillouin zone. Each pair corresponds to a \(\mathbf{k}_{r}=\mathbf{k}_{p}-\mathbf{q}\) conserving the momentum. \(\mathbf{k}_{r}\) may lie outside of the first Brillouin zone; folding it back with a reciprocal lattice vector \(\delta\mathbf{G}\) introduces a phase \(\varphi=\exp[i\,\delta\mathbf{G}\cdot\mathbf{r}]\). The orbitals \(\phi_{\mathbf{k}_{r}}\) consist of a smooth pseudo-orbital \(\tilde{\phi}_{\mathbf{k}_{r}}\) augmented by the difference of atomic orbitals \(\phi^{1}_{\nu}\) and their pseudized counterparts \(\tilde{\phi}^{1}_{\nu}\). Projectors \(p^{1}_{\nu}\) determine the replaced fraction of the pseudo-orbital \[|\phi_{\mathbf{k}_{r}}\rangle=|\tilde{\phi}_{\mathbf{k}_{r}}\rangle+\sum_{\nu}\big(|\phi^{1}_{\nu}\rangle-|\tilde{\phi}^{1}_{\nu}\rangle\big)\langle p^{1}_{\nu}|\tilde{\phi}_{\mathbf{k}_{r}}\rangle. \tag{24}\] The superscript 1 indicates _one-center_ quantities that are only nonzero within the PAW sphere of one atom. Non-local operators contain coupling between the pseudo-orbital and the one-center terms.[3] Therefore, the two-orbital density given by Eq. (20) consists of a pseudo and a one-center contribution \[\rho_{pr\mathbf{k}_{r}}(\mathbf{q},\mathbf{G})=\tilde{\rho}_{pr\mathbf{k}_{r}}(\mathbf{q},\mathbf{G})+\rho^{1}_{pr\mathbf{k}_{r}}(\mathbf{q},\mathbf{G}). \tag{25}\] Representing the one-center term would require a very dense real-space grid.
To avoid this, one introduces a compensation density \(\hat{\rho}\) that restores the multipoles of the all-electron density on the plane-wave grid.[4; 5] As a result, the one-center density has no Coulomb interaction outside the PAW sphere. We also neglect the contributions inside the PAW sphere in the present work: \[\rho_{pr\mathbf{k}_{r}}(\mathbf{q},\mathbf{G})\approx\tilde{\rho}_{pr\mathbf{k}_{r}}(\mathbf{q},\mathbf{G})+\hat{\rho}_{pr\mathbf{k}_{r}}(\mathbf{q},\mathbf{G}). \tag{26}\] To make up for the neglect of the terms in the PAW spheres, we use shape restoration. Shape restoration allows one to accurately restore the all-electron density distribution inside the PAW spheres even on a coarse plane-wave grid.[110; 111] It is routinely used in VASP for calculations using, e.g., the random phase approximation, and is sufficiently accurate to obtain highly reliable correlation energy differences.[7] Here, we set \(\text{LMAXFOCKAE}=4\) to force an accurate treatment of the charge augmentation up to the angular momentum quantum number of 4. Finally, we combine the two-orbital density with the square root of the Coulomb kernel and correct the phase \(\varphi\) if necessary. The one-body matrix \(t_{pq}(\mathbf{k})\) and the two-body tensor \(L^{\prime}_{pr\mathbf{k},\mathbf{q}\mathbf{G}}\) are each exported in NPY format to facilitate easy processing in Python.

## Appendix C Time evolution

To determine the ground-state wavefunction \(|\Phi_{0}\rangle\) of a system governed by a Hamiltonian \(\hat{H}\), the imaginary-time propagator is applied to the initial wavefunction \(|\Psi_{\mathrm{I}}\rangle\) in the infinite-time limit[9] \[|\Phi_{0}\rangle\propto\lim_{\beta\rightarrow\infty}e^{-\beta\hat{H}}|\Psi_{\mathrm{I}}\rangle. \tag{27}\] In practice, one approaches the ground-state wavefunction by repeated application of the imaginary-time propagator for a small time step \(\tau=\frac{\beta}{n}\) \[\lim_{\beta\rightarrow\infty}e^{-\beta\hat{H}}|\Psi_{\mathrm{I}}\rangle=\lim_{n\rightarrow\infty}\Big[\mathrm{e}^{-\tau\hat{H}}\Big]^{n}|\Psi_{\mathrm{I}}\rangle. \tag{28}\] The Hubbard-Stratonovich transformation[56; 57] translates the two-body part to an integral of one-body operators \[\mathrm{e}^{-\tau\hat{H}}=\int\mathrm{d}\mathbf{x}\,p(\mathbf{x})\hat{B}(\mathbf{x})+O(\tau^{2}). \tag{29}\] \(p(\mathbf{x})\) is the normal distribution function \[p(\mathbf{x})=(2\pi)^{-N_{g}/2}\mathrm{e}^{-\frac{1}{2}|\mathbf{x}|^{2}}, \tag{30}\] where \(N_{g}\) is the number of components of \(\mathbf{x}\). The propagation operator \(\hat{B}\) is defined in Eq. (15) in the main text.

## Appendix D Force bias

The force bias is given by \[J^{w}_{\mathbf{q}g,k}=-i\sqrt{\tau}\frac{\langle\Psi_{\mathrm{T}}|\hat{\Delta}_{\mathbf{q}g}|\Psi^{w}_{k}\rangle}{\langle\Psi_{\mathrm{T}}|\Psi^{w}_{k}\rangle}. \tag{31}\] A creation-annihilation operator pair is \[\frac{\langle\Psi_{\mathrm{T}}|\hat{a}^{\dagger}_{P}\hat{a}_{R}|\Psi^{w}_{k}\rangle}{\langle\Psi_{\mathrm{T}}|\Psi^{w}_{k}\rangle}=\Big[\Psi^{w}_{k}\big(\Psi^{\dagger}_{\mathrm{T}}\Psi^{w}_{k}\big)^{-1}\Psi^{\dagger}_{\mathrm{T}}\Big]_{RP} \tag{32}\] in matrix representation. \(P\) and \(R\) are composite indices for band and momentum, i.e., \(R\equiv(r\mathbf{k}_{r})\). We compute the biorthogonalized orbitals in every step \[\Theta^{w}_{k}=\Psi^{w}_{k}\big(\Psi^{\dagger}_{\mathrm{T}}\Psi^{w}_{k}\big)^{-1}. \tag{33}\]
The action of the \(\hat{\Delta}\) operator onto the trial wavefunction is precomputed \[\alpha_{\mathbf{q}g}=\Psi^{\dagger}_{\mathrm{T}}\Delta_{\mathbf{q}g}. \tag{34}\] We convolute these two matrices to obtain the force bias for a non-spin-polarized system \[f^{w}_{\mathbf{q}g,k}=-2i\sqrt{\tau}\,\mathrm{Tr}[\Theta^{w}_{k}\alpha_{\mathbf{q}g}]. \tag{35}\]

## Appendix E Local energy

The local energy is \[E_{\mathrm{loc}}(\Psi^{w}_{k})=\frac{\langle\Psi_{\mathrm{T}}|\hat{H}_{1}+\hat{H}_{2}|\Psi^{w}_{k}\rangle}{\langle\Psi_{\mathrm{T}}|\Psi^{w}_{k}\rangle}. \tag{10}\] Similar to the force bias, we use Eq. (32) to evaluate the one-body part in the matrix representation. For a non-spin-polarized system we obtain \[E_{1}(\Psi^{w}_{k})=2\,\mathrm{Tr}[\Psi^{\dagger}_{\mathrm{T}}H_{1}\Theta^{w}_{k}]. \tag{11}\] For the two-body part we consider the generalized Wick's theorem: [112; 113] \[\frac{\langle\Psi_{\mathrm{T}}|\hat{a}^{\dagger}_{P}\hat{a}^{\dagger}_{Q}\hat{a}_{S}\hat{a}_{R}|\Psi^{w}\rangle}{\langle\Psi_{\mathrm{T}}|\Psi^{w}\rangle}=\mathcal{G}^{w}_{PR}\mathcal{G}^{w}_{QS}-\mathcal{G}^{w}_{PS}\mathcal{G}^{w}_{QR}. \tag{12}\] Here, \(\mathcal{G}\) is the Green's function \[\mathcal{G}^{w}_{PR}\equiv\mathcal{G}_{PR}(\Psi^{w})=\frac{\langle\Psi_{\mathrm{T}}|\hat{a}^{\dagger}_{P}\hat{a}_{R}|\Psi^{w}\rangle}{\langle\Psi_{\mathrm{T}}|\Psi^{w}\rangle}, \tag{13}\] which we know how to compute from Eq. (32). Thus, the two-body part of the local energy is split into Hartree- and exchange-like terms: \[E_{2}(\Psi^{w}_{k})=E_{\mathrm{H}}(\Psi^{w}_{k})+E_{\mathrm{X}}(\Psi^{w}_{k}). \tag{14}\] For a non-spin-polarized system: \[\begin{split} E_{\mathrm{H}}(\Psi^{w}_{k})&=2\sum_{\mathbf{q}}\sum_{g=1}^{N_{g}}\mathrm{Tr}[\Theta^{w}_{k}\alpha_{\mathbf{q}g}]\,\mathrm{Tr}[\Theta^{w}_{k}\beta_{\mathbf{q}g}],\\ E_{\mathrm{X}}(\Psi^{w}_{k})&=-\sum_{\mathbf{q}}\sum_{g=1}^{N_{g}}\mathrm{Tr}[(\Theta^{w}_{k}\alpha_{\mathbf{q}g})(\Theta^{w}_{k}\beta_{\mathbf{q}g})],\end{split} \tag{15}\] where for each specific \(g\) we have \[\beta_{\mathbf{q}g}=\Psi^{\dagger}_{\mathrm{T}}\varsigma^{\dagger}_{\mathbf{q}g}, \tag{16}\] and \(\alpha_{\mathbf{q}g}\) is given by Eq. (34).
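The matrix algebra of Eqs. (32)-(35) is compact enough to sketch directly. The following toy NumPy snippet, with hypothetical sizes and random matrices in place of real orbitals, computes the biorthogonalized orbitals \(\Theta\), the walker overlap, and the force bias for a non-spin-polarized system:

```python
import numpy as np

def force_bias_and_overlap(psi_T, psi_w, alpha, tau):
    """Minimal sketch of Eqs. (32)-(35) for one walker.
    psi_T, psi_w: (M, N) trial/walker orbital matrices (M basis states,
    N electrons); alpha: (n_fields, N, M) precomputed Psi_T^dag Delta_qg."""
    S = psi_T.conj().T @ psi_w                # overlap matrix Psi_T^dag Psi_w
    theta = psi_w @ np.linalg.inv(S)          # Theta = Psi_w (Psi_T^dag Psi_w)^-1
    # f_qg = -2 i sqrt(tau) Tr[Theta alpha_qg], Eq. (35)
    f = -2j * np.sqrt(tau) * np.einsum("gnm,mn->g", alpha, theta)
    return f, np.linalg.det(S)

# toy example with random matrices (hypothetical sizes)
rng = np.random.default_rng(1)
M, N, n_fields = 6, 2, 4
psi_T = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
psi_w = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
alpha = rng.standard_normal((n_fields, N, M)) + 0j
f, overlap = force_bias_and_overlap(psi_T, psi_w, alpha, tau=0.01)
```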
2306.02032
ADMM-based Detector for Large-scale MIMO Code-domain NOMA Systems
Large-scale multi-input multi-output (MIMO) code domain non-orthogonal multiple access (CD-NOMA) techniques are one of the potential candidates to address the next-generation wireless needs such as massive connectivity, and high reliability. This work focuses on two primary CD-NOMA techniques: sparse-code multiple access (SCMA) and dense-code multiple access (DCMA). One of the primary challenges in implementing MIMO-CD-NOMA systems is designing the optimal detector with affordable computation cost and complexity. This paper proposes an iterative linear detector based on the alternating direction method of multipliers (ADMM). First, the maximum likelihood (ML) detection problem is converted into a sharing optimization problem. The set constraint in the ML detection problem is relaxed into the box constraint sharing problem. An alternative variable is introduced via the penalty term, which compensates for the loss incurred by the constraint relaxation. The system models, i.e., the relation between the input signal and the received signal, are reformulated so that the proposed sharing optimization problem can be readily applied. The ADMM is a robust algorithm to solve the sharing problem in a distributed manner. The proposed detector leverages the distributive nature to reduce per-iteration cost and time. An ADMM-based linear detector is designed for three MIMO-CD-NOMA systems: single input multi output CD-NOMA (SIMO-CD-NOMA), spatial multiplexing CD-NOMA (SMX-CD-NOMA), and spatial modulated CD-NOMA (SM-CD-NOMA). The impact of various system parameters and ADMM parameters on computational complexity and symbol error rate (SER) has been thoroughly examined through extensive Monte Carlo simulations.
Vinjamoori Vikas, Kuntal Deka, Sanjeev Sharma, A. Rajesh
2023-06-03T07:22:35Z
http://arxiv.org/abs/2306.02032v1
# ADMM-based Detector for Large-scale MIMO Code-domain NOMA Systems

###### Abstract

Large-scale multi-input multi-output (MIMO) code domain non-orthogonal multiple access (CD-NOMA) techniques are one of the potential candidates to address the next-generation wireless needs such as massive connectivity, and high reliability. This work focuses on two primary CD-NOMA techniques: sparse-code multiple access (SCMA) and dense-code multiple access (DCMA). One of the primary challenges in implementing MIMO-CD-NOMA systems is designing the optimal detector with affordable computation cost and complexity. This paper proposes an iterative linear detector based on the alternating direction method of multipliers (ADMM). First, the maximum likelihood (ML) detection problem is converted into a sharing optimization problem. The set constraint in the ML detection problem is relaxed into the box constraint sharing problem. An alternative variable is introduced via the penalty term, which compensates for the loss incurred by the constraint relaxation. The system models, i.e., the relation between the input signal and the received signal, are reformulated so that the proposed sharing optimization problem can be readily applied. The ADMM is a robust algorithm to solve the sharing problem in a distributed manner. The proposed detector leverages the distributive nature to reduce per-iteration cost and time. An ADMM-based linear detector is designed for three MIMO-CD-NOMA systems: single input multi output CD-NOMA (SIMO-CD-NOMA), spatial multiplexing CD-NOMA (SMX-CD-NOMA), and spatial modulated CD-NOMA (SM-CD-NOMA). The impact of various system parameters and ADMM parameters on computational complexity and symbol error rate (SER) has been thoroughly examined through extensive Monte Carlo simulations.

Code domain-NOMA (CD-NOMA), dense code multiple access (DCMA), sparse code multiple access (SCMA), alternating direction method of multipliers (ADMM), single input multi output (SIMO), spatial multiplexing MIMO (SMX-MIMO), spatial modulation MIMO (SM-MIMO).

## I Introduction

### _Motivation_

The existing literature extensively studies two types of code domain non-orthogonal multiple access (CD-NOMA) systems. The first type is sparse coded CD-NOMA systems, which utilize low-density spreading sequence design (e.g., low-density signatures (LDS)) or sparse codebook design (e.g., sparse code multiple access (SCMA)) [1]. The second type is densely coded CD-NOMA systems, which utilize dense spreading sequence design (e.g., overloaded code-division multiple access (CDMA)) or dense codebook design (e.g., dense code multiple access (DCMA)) [2]. Throughout the paper, dense codebook-based NOMA is referred to as DCMA, and dense spreading sequence-based NOMA is referred to as overloaded CDMA. SCMA and DCMA are critical enabling technologies for improving spectral efficiency and massive connectivity by overloading the data of multiple user equipments (UEs) onto a single resource element (RE). These techniques exploit the coding gain of the codebooks' multi-dimensional constellations (MDCs). Spreading sequence-based NOMA (LDS, overloaded CDMA) does not benefit from the coding gain of the MDC. Thus, codebook-based NOMA techniques offer substantial error rate performance gains over spreading sequence-based NOMA. Multi-input multi-output (MIMO) technology is another crucial enabler in improving spectral efficiency and system reliability [3].
CD-NOMA systems combined with a MIMO system (MIMO-CD-NOMA) offer even greater improvements in spectral efficiency, reliability, and massive connectivity. This paper considers three types of uplink (UL) MIMO-CD-NOMA systems: (1) single-input multi-output (SIMO) aided CD-NOMA, where each UE has a single antenna [4]; (2) spatial multiplexing CD-NOMA (SMX-CD-NOMA), where each UE is equipped with multiple transmit antennas [5]; and (3) spatial modulated CD-NOMA (SM-CD-NOMA), where each UE activates a single transmit antenna out of the multiple transmit antennas at the UE [6]. The primary challenge in implementing MIMO-CD-NOMA systems lies in designing a multiuser signal detector that offers excellent performance while maintaining affordable complexity.

The message passing algorithm (MPA) is usually used in SCMA detection [7, 8]. However, the MPA for MIMO-SCMA signal detection [9, 10] suffers from high computational complexity. MPA detection over the MIMO-SCMA system uses an extended factor graph due to the additional antennas at the UEs and the BS [11]. The MPA exhibits exponential complexity with the codebook size (\(M\)), the number of UEs (\(J\)), and the number of transmit antennas. Thus, MPA becomes impractical for highly overloaded SCMA systems over large-scale MIMO systems. Further, MPA often faces convergence issues when the factor graph contains cycles. To overcome the limited diversity issues (due to sparsity) in SCMA, researchers have recently started designing DCMA systems [2]. However, MPA demands sparsity for the accurate detection of codewords, and DCMA codewords are not sparse. Due to the enormous number of short cycles in the DCMA factor graph, MPA is no longer a suitable detector for DCMA systems. Thus, an alternate detection algorithm must be developed to achieve the full diversity offered by DCMA with minimal detection complexity. The generalized sphere decoder (GSD) is used as a detector for a spread sequence-based DCMA system by considering a single antenna at both the base station (BS) and the UE [2]. The GSD works on the principles of tree search. Its complexity depends on the number of tree nodes and the floating point operations (FLOPs) at each node. The number of tree nodes increases multi-fold in codebook-based DCMA systems. Moreover, the complexity of GSD increases rapidly for large-scale MIMO DCMA systems. So, it is not an efficient detector for large-scale MIMO CD-NOMA systems.

Designing a low-complexity multiuser detector (MUD) for large-scale MIMO-aided CD-NOMA systems with higher overloading factors (\(\lambda\)) and modulation orders/codebook sizes (\(M\)) is becoming essential. The existing research mainly studies nonlinear detectors (MPA and sphere decoder) for CD-NOMA systems [7, 12]. On the other hand, minimum mean square error (MMSE) and zero forcing (ZF) are commonly used linear detectors in standalone MIMO systems [13]. These detectors are simple to implement, and they exhibit low computational complexity compared to nonlinear detectors. However, these detectors show poor symbol error rate (SER) performance over CD-NOMA systems with high \(\lambda\) values. In addition, they fail to detect the CD-NOMA signals as \(M\) increases. Thus, designing an efficient detector with low complexity is mandatory to fully unfurl the benefits of MIMO CD-NOMA systems with a practically viable approach. Further, the detector must perform better than the existing linear detectors for large-scale CD-NOMA parameters (\(\lambda\), \(M\)).
### _Related prior works_

Recently, the ADMM has been widely used to solve convex and non-convex problems in a distributed manner [14]. Glowinski and Marrocco first proposed the ADMM in the mid-1970s. Later on, Boyd _et al._ rigorously discussed various complex optimization problems that can be solved using ADMM in their tutorial [14]. The ADMM is formed by the composition of dual ascent and the method of multipliers. The dual decomposition property of dual ascent enables the ADMM to be solved in a distributed manner. ADMM has superior convergence properties and often converges without requiring strict convexity. A comprehensive treatment of ADMM and its advancements is given in [15]. The ADMM is extensively applied to solve the linear programming (LP) problem in the decoding of low-density parity-check (LDPC) codes [16, 17]. The authors there have also shown that the ADMM significantly reduces the complexity of the decoder compared to belief propagation (BP) methods. The MUD problem of UL grant-free NOMA is solved by using ADMM [18]. The ADMM-based infinity norm (ADMIN) iterative linear detector is proposed for massive MIMO systems [19]. The authors have also proposed a VLSI architecture for the ADMIN detector. The results show that ADMIN outperforms all linear detectors in terms of SER performance while incurring a lower hardware cost than the nonlinear BP-based detectors. Similarly, an ADMM-based QAM signal detector for massive MIMO systems is proposed in [20]. A distributed penalty sharing (DPS) ADMM method is designed to convert the maximum-likelihood (ML) detection problem into a sharing optimization problem. This method shows a good performance-complexity trade-off for massive MIMO systems. The authors in [19, 20] show that the ADMM-based detector performs MMSE equalization in the first iteration, and the performance of the ADMM improves over MMSE as the iterations progress.

### _Contributions:_

This paper proposes an ADMM-based iterative linear detector that solves the MUD problem of large-scale MIMO-aided CD-NOMA systems. It exhibits the best trade-off between SER performance and computational complexity for SCMA and DCMA systems. We have formulated the highly computationally complex ML detection problem of large-scale MIMO-CD-NOMA systems as a non-convex distributed optimization problem. To the best of the authors' knowledge, this is the first attempt to convert the ML detection problem of CD-NOMA into a sharing problem. Further, the ADMM approach is applied to solve the proposed sharing problem using distributed optimization. The proposed ADMM-based detector is guaranteed to improve the error rate performance over linear detectors with lower computational complexity than nonlinear detectors. The main contributions of this work are listed below: * The ML MUD problem is nondeterministic polynomial-time hard (NP-hard) and can be solved using exhaustive search [21]. However, the complexity of the exhaustive search increases exponentially as the number of UEs and the codebook size increase. The ML problem is transformed into a non-convex sharing optimization problem to overcome this challenge. An efficient distributed optimization method is then applied to solve the proposed sharing problem. * The CD-NOMA system models, i.e., the relationships between the input signal and the received signal, are reformulated so that the sharing ADMM algorithm can be readily applied for detection.
For SIMO-CD-NOMA, the signal received by the multiple antennas at the BS is considered as a single observation vector for detection. For SMX-CD-NOMA and SM-CD-NOMA, resource-wise processing of the received signal is carried out during ADMM detection. * A low-complexity iterative linear detector is designed via the ADMM approach to solve the sharing optimization problem for large-scale MIMO-CD-NOMA systems. ADMM solves the sharing optimization problem through a distributed optimization framework. Indeed, the distributive nature of sharing ADMM allows parallel processing in multiuser detection, reducing the per-iteration computational time. * The proposed ADMM-based detector is applied to large-scale MIMO (SIMO, SMX-MIMO, and SM-MIMO) aided CD-NOMA systems. Thus, the proposed single ADMM-based MUD is capable of solving the detection problem of various CD-NOMA systems. * The complexity of the proposed ADMM-based detector is independent of the CD-NOMA system's codebook size/modulation order (\(M\)), unlike the conventional MPA detector. Thus, it can be applied to large-size codebooks. The impact of the different MIMO-CD-NOMA system parameters is also thoroughly examined. The ADMM-based detector exhibits polynomial complexity with all other parameters, such as the numbers of antennas, UEs, and REs. * The error rate performance and receiver complexity of the proposed ADMM-based detector are thoroughly compared with those of all other conventional detectors. Comprehensive Monte Carlo simulations indicate that the ADMM-based detector substantially reduces the computational complexity while maintaining an error performance comparable to the MPA in SCMA detection. For spread sequence-based DCMA, the ADMM gives superior performance to the GSD.

### _Organization:_

The paper is organized as follows. The preliminaries of the existing CD-NOMA system models and the sharing ADMM problem are discussed in Section II. The proposed system models and ADMM-based detection for UL MIMO-CD-NOMA systems are provided in Section III. Section IV presents a detailed computational complexity analysis. The simulation results for different MIMO-CD-NOMA systems are discussed in Section V. Section VI concludes the paper and outlines future research directions.

_Notations:_ Lower case, bold lower case, and bold upper case letters denote scalars, vectors, and matrices, respectively. \((\cdot)^{T}\) and \((\cdot)^{H}\) denote transpose and Hermitian transpose, respectively. \(\|\cdot\|\) denotes the Euclidean norm of a vector. \(\prod_{[-\alpha,\alpha]}(\cdot)\) denotes the Euclidean projection onto the interval \([-\alpha,\alpha]\). \(\langle\cdot,\cdot\rangle\) denotes the inner product, and \(\text{Re}(\cdot)\) denotes the real part of a complex variable. \(\mathbb{R}^{n}\) and \(\mathbb{C}^{n}\) denote \(n\)-dimensional real and complex vector spaces, respectively. \(\mathcal{CN}(0,\sigma^{2})\) denotes the complex Gaussian distribution with zero mean and variance \(\sigma^{2}\).

## II Preliminaries

This section mainly discusses the existing CD-NOMA system models and the steps to solve the sharing optimization problem via the ADMM approach. The CD-NOMA techniques can be broadly divided into sparse-coded and dense-coded NOMA. Low-density signature (LDS) [22] and SCMA [23] belong to the first group, whereas overloaded CDMA [2] and DCMA [2] belong to the second group. LDS and overloaded CDMA are sequence-based CD-NOMA techniques where mapping and spreading are performed separately. LDS utilizes sparse sequences, while overloaded CDMA employs dense sequences.
These techniques suffer from error performance loss due to limited coding gain. On the other hand, SCMA and DCMA are codebook-based CD-NOMA techniques. In these systems, the data of each UE is mapped to a multi-dimensional codeword. SCMA utilizes sparse codewords, while DCMA applies dense codewords. These techniques offer error performance benefits through the use of MDCs [24].

### _SCMA system model_

SCMA is a sparse codebook-based NOMA technique. Each UE's data is sent in the form of sparse codewords. Consider a CD-NOMA system having \(J\) UEs accessing \(K\) REs, where \(J>K\) ensures the overloading nature of SCMA. Each UE in the SCMA system has access to \(d_{\rm v}<K\) active REs among the \(K\) REs. Thus, \(d_{\rm v}\) non-zero elements are present in the \(K\)-dimensional sparse codewords. The SCMA encoder maps each UE bitstream to a sparse codeword \({\bf x}^{K\times 1}\) in a pre-designed SCMA codebook \({\cal X}^{K\times M}\) of the respective UE. The sparse nature of SCMA codewords enforces \(d_{\rm f}<J\) overlapping UEs on each RE. The diversity order of SCMA is essentially limited to \(d_{\rm v}\), i.e., far less than the maximum possible diversity order \(K\). For example, the codebook structure for a 150% overloaded SCMA system (\(d_{\rm v}=2\) and \(d_{\rm f}=3\)) is shown in Fig. 1(a). The received signal vector is given by \[{\bf r}=\sum_{j=1}^{J}{\rm diag}({\bf h}_{j}){\bf x}_{j}+{\bf w}, \tag{1}\] where * \({\bf w}\sim{\cal CN}(0,\sigma^{2}{\bf I}_{K})\) denotes the additive white Gaussian noise (AWGN) at the receiver. * \({\bf h}_{j}\sim{\cal CN}(0,{\bf I}_{K})\) denotes the Rayleigh fading channel vector for the \(j\)th UE. * \({\rm diag}({\bf h})\), with \({\bf h}=[h[1],\ldots,h[k],\ldots,h[K]]^{T}\), denotes the diagonal matrix with \(h[k]\) as the \(k\)th diagonal element. * \({\bf x}_{j}\) is the \(K\)-dimensional codeword of the \(j\)th UE.

### _DCMA system model_

The error performance of SCMA suffers from limited diversity order. This limitation of SCMA can be overcome by using dense codebooks in the DCMA system [2]. Each UE in the DCMA system has access to all \(K\) REs. No zero entries are present in the \(K\)-dimensional dense codewords. Therefore, the DCMA system exploits the full diversity of multi-dimensional codewords. For example, the codebook structure of a 150% overloaded DCMA system is shown in Fig. 1(b). The received signal vector for the DCMA system is similar to that of the SCMA system, as described in (1). In addition to the codebook-based DCMA system, a spread sequence-based DCMA system can be designed with non-orthogonal spreading sequences. The idea is similar to overloaded CDMA systems: the number of UEs is larger than the length of the spreading sequence. Uni-modular spreading sequences have been used in this paper to achieve the full diversity of the dense sequences [2].

### _Sharing ADMM problem_

This subsection introduces the fundamental concepts of solving sharing problems using ADMM within a distributed optimization framework. The ADMM was proposed to address both convex and non-convex sharing optimization problems [14]. The following discussion delves into the details and ideas behind the sharing ADMM problem. The ADMM is formed by combining the superior properties of dual ascent and the method of multipliers. This combination ensures the robustness of ADMM [14].
The generic sharing problem is \[\min\quad\sum_{i=1}^{N}f_{i}(\mathbf{x}_{i})+g\left(\sum_{i=1}^{N}\mathbf{x}_{i}\right), \tag{2}\] where \(\mathbf{x}_{i}\in\mathbb{R}^{n},i=1,\ldots,N\); the associated local cost function \(f_{i}(\mathbf{x}_{i})\) (\(f_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\)) of subsystem \(i\) is handled by processor \(i\), and \(g\) (\(g:\mathbb{R}^{n}\rightarrow\mathbb{R}\)) is the shared objective, whose argument is the sum of the \(N\) variables. Each variable \(\mathbf{x}_{i}\) is involved in minimizing the individual cost \(f_{i}(\mathbf{x}_{i})\), as well as the shared objective \(g\left(\sum_{i=1}^{N}\mathbf{x}_{i}\right)\). The sharing problem can be converted into an ADMM problem by introducing an alternative variable \(\mathbf{z}_{i}\in\mathbb{R}^{n}\). Thus, the cost function is minimized over \(\mathbf{x}_{i}\) and \(\mathbf{z}_{i}\) alternately. The problem (2) is converted to the following problem: \[\begin{split}\min&\quad\sum_{i=1}^{N}f_{i}(\mathbf{x}_{i})+g\left(\sum_{i=1}^{N}\mathbf{z}_{i}\right)\\ \text{s.t.}&\quad\mathbf{x}_{i}-\mathbf{z}_{i}=0,\quad i=1,\ldots,N.\end{split} \tag{3}\] The augmented Lagrangian function for (3) can be written as \[\mathcal{L}\left(\{\mathbf{x}_{i},\mathbf{z}_{i},\mathbf{y}_{i}\}_{i=1}^{N}\right)=\sum_{i=1}^{N}f_{i}(\mathbf{x}_{i})+g\left(\sum_{i=1}^{N}\mathbf{z}_{i}\right)+\sum_{i=1}^{N}\langle\mathbf{x}_{i}-\mathbf{z}_{i},\mathbf{y}_{i}\rangle+\frac{\rho}{2}\sum_{i=1}^{N}\|\mathbf{x}_{i}-\mathbf{z}_{i}\|_{2}^{2}, \tag{4}\] where \(\mathbf{x}_{i},\mathbf{z}_{i}\) are called primal variables, \(\mathbf{y}_{i}\in\mathbb{R}^{n}\) is the Lagrangian variable, and \(\rho>0\) is called the penalty parameter.

Fig. 1: CD-NOMA codebook structure for \(J=6\) and \(K=4\).

The function in (4) can be minimized by the ADMM steps [14] \[\mathbf{x}_{i}^{t+1}:=\operatorname*{argmin}_{\mathbf{x}_{i}}\left(f_{i}(\mathbf{x}_{i})+\frac{\rho}{2}\|\mathbf{x}_{i}-\mathbf{z}_{i}^{t}+\mathbf{u}_{i}^{t}\|_{2}^{2}\right) \tag{5}\] \[\mathbf{z}_{i}^{t+1}:=\operatorname*{argmin}_{\mathbf{z}_{i}}\;g\left(\sum_{i=1}^{N}\mathbf{z}_{i}\right)+\frac{\rho}{2}\sum_{i=1}^{N}\|\mathbf{z}_{i}-\mathbf{u}_{i}^{t}-\mathbf{x}_{i}^{t+1}\|_{2}^{2} \tag{6}\] \[\mathbf{u}_{i}^{t+1}:=\mathbf{u}_{i}^{t}+\mathbf{x}_{i}^{t+1}-\mathbf{z}_{i}^{t+1} \tag{7}\] where \(\mathbf{u}_{i}\) is called a dual variable (scaled version of the Lagrangian variable, \(\mathbf{u}_{i}=\frac{\mathbf{y}_{i}}{\rho}\)). The problems (5) and (7) can be solved in parallel for \(i=1,\ldots,N\). Let \(\mathbf{q}_{i}=\mathbf{u}_{i}^{t}+\mathbf{x}_{i}^{t+1}\). Then (6) can be rewritten as \[\begin{split}\min&\quad g\left(\sum_{i=1}^{N}\mathbf{z}_{i}\right)+\frac{\rho}{2}\sum_{i=1}^{N}\|\mathbf{z}_{i}-\mathbf{q}_{i}\|_{2}^{2}\\ \text{s.t.}&\quad\bar{\mathbf{z}}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{z}_{i}\end{split} \tag{8}\] with fixed variable \(\bar{\mathbf{z}}\in\mathbb{R}^{n}\). The problem (8) has the following solution \[\mathbf{z}_{i}=\mathbf{q}_{i}+\bar{\mathbf{z}}-\bar{\mathbf{q}},\quad\text{where}\quad\bar{\mathbf{q}}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{q}_{i}=\bar{\mathbf{u}}^{t}+\bar{\mathbf{x}}^{t+1},\quad\bar{\mathbf{u}}^{t}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{u}_{i}^{t},\;\;\bar{\mathbf{x}}^{t+1}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_{i}^{t+1}. \tag{9}\] From (8) and (9), the \(\bar{\mathbf{z}}\)-update step can be simplified to the following unconstrained problem: \[\min\;g(N\bar{\mathbf{z}})+\frac{N\rho}{2}\|\bar{\mathbf{z}}-\bar{\mathbf{q}}\|_{2}^{2}. \tag{10}\]
Substituting (9) for \(\mathbf{z}_{i}^{t+1}\) into (7), the \(\mathbf{u}\)-update step is \[\mathbf{u}_{i}^{t+1}=\bar{\mathbf{u}}^{t}+\bar{\mathbf{x}}^{t+1}-\bar{\mathbf{z}}^{t+1}. \tag{11}\] The dual variables \(\{\mathbf{u}_{i}^{t}\}_{i=1}^{N}\) are equal and can be replaced by a single variable \(\mathbf{u}^{t}\). The ADMM steps are simplified as follows: \[\mathbf{x}_{i}^{t+1}:=\operatorname*{argmin}_{\mathbf{x}_{i}}\left(f_{i}(\mathbf{x}_{i})+\frac{\rho}{2}\|\mathbf{x}_{i}-\mathbf{x}_{i}^{t}+\bar{\mathbf{x}}^{t}-\bar{\mathbf{z}}^{t}+\mathbf{u}^{t}\|_{2}^{2}\right) \tag{12}\] \[\bar{\mathbf{z}}^{t+1}:=\operatorname*{argmin}_{\bar{\mathbf{z}}}\left(g(N\bar{\mathbf{z}})+\frac{N\rho}{2}\|\bar{\mathbf{z}}-\mathbf{u}^{t}-\bar{\mathbf{x}}^{t+1}\|_{2}^{2}\right) \tag{13}\] \[\mathbf{u}^{t+1}:=\mathbf{u}^{t}+\bar{\mathbf{x}}^{t+1}-\bar{\mathbf{z}}^{t+1} \tag{14}\] The original sharing problem (2) is decomposed into a three-step iterative optimization problem. The problem (12) can be solved in parallel for \(i=1,\ldots,N\). The step (13) solves the shared objective, and (14) is the dual variable update step.

## III ADMM-based detection for UL CD-NOMA systems

This section focuses on reformulating the relation between the input and received signals of three multi-antenna CD-NOMA system models. These models are 1. SIMO-CD-NOMA, 2. spatial multiplexing CD-NOMA (SMX-CD-NOMA), and 3. spatial modulated CD-NOMA (SM-CD-NOMA). The reformulated models are readily applied to the sharing ADMM-based detection process. These models, along with the derivation of the ADMM steps, are described in the following.

### _SIMO CD-NOMA_

Consider a UL scenario with the BS equipped with \(N_{\rm r}\) antennas and each UE having a single antenna. The idea is to exploit the spatial diversity gain offered by multiple receive antennas at the BS [25]. The error rate performance of the CD-NOMA system with \(N_{\rm r}\) antennas at the receiver is significantly improved [4] due to the higher diversity gain. The \(K\times 1\) observation vector at the \(n_{\rm r}\)th BS antenna can be expressed as \[{\bf r}^{(n_{\rm r})}=\sum_{j=1}^{J}\text{diag}\left({\bf h}_{j}^{(n_{\rm r})}\right){\bf x}_{j}+{\bf w}^{(n_{\rm r})},\ \ n_{\rm r}=1,\ldots,N_{\rm r},\] where * \({\bf w}^{(n_{\rm r})}\sim\mathcal{CN}(0,\sigma^{2}{\bf I}_{K})\) is a \(K\times 1\) vector, assumed to be independent and identically distributed (i.i.d.), denoting the AWGN at the \(n_{\rm r}\)th BS antenna. * \({\bf h}_{j}^{(n_{\rm r})}\sim\mathcal{CN}(0,{\bf I}_{K})\) is a \(K\times 1\) i.i.d. complex Gaussian random vector denoting the Rayleigh fading channel coefficients between the \(j\)th UE and the \(n_{\rm r}\)th BS antenna; \({\bf h}_{j}^{(n_{\rm r})}=\left[h[1]\ldots h[k]\ldots h[K]\right]^{T}\), and \(\text{diag}\left({\bf h}_{j}^{(n_{\rm r})}\right)\) denotes the diagonal matrix with \(h[k]\) being the \(k\)th diagonal element. * \({\bf x}_{j}=\left[x_{j}[1]\ldots x_{j}[k]\ldots x_{j}[K]\right]^{T}\) is a codeword from the \(j\)th UE's codebook \(\mathcal{X}_{j}^{K\times M}\). The overall \(KN_{\rm r}\times 1\) observation vector at the BS for the SIMO CD-NOMA system is given by \[{\bf r}={\bf H}{\bf x}_{\rm mu}+{\bf w}, \tag{15}\] where * \({\bf r}=[{\bf r}^{(1)^{T}}\ldots{\bf r}^{(n_{\rm r})^{T}}\ldots{\bf r}^{(N_{\rm r})^{T}}]^{T}\). * \({\bf w}\sim\mathcal{CN}(0,\sigma^{2}{\bf I}_{KN_{\rm r}})\) is the \(N_{\rm r}K\times 1\) AWGN vector at the BS. * \({\bf x}_{\rm mu}\) is the \(JN_{\rm e}\times 1\) multi-user concatenated transmitted signal.
\({\bf x}_{\rm mu}=[x_{1}[1]\ldots x_{1}[N_{\rm e}]\ldots x_{j}[1]\ldots x_{j}[N_{\rm e}]\ldots x_{J}[1]\ldots x_{J}[N_{\rm e}]]^{T}\), where \(N_{\rm e}\) is the number of nonzero elements in a codeword. We have \(N_{\rm e}=K\) and \(N_{\rm e}=d_{\rm v}\) for DCMA and SCMA, respectively. The transmitted signal \({\bf x}_{\rm mu}\) can be rewritten to facilitate the sharing-based detection problem in Section III-D, i.e., \({\bf x}_{\rm mu}=\sum_{j=1}^{J}{\bf x}_{0j}\). The variable \({\bf x}_{0j}=[0\ldots 0\ x_{j}[1]\ x_{j}[2]\ldots x_{j}[N_{\rm e}]\ 0\ldots 0]^{T}\) represents the \(j\)th UE codeword. * The channel matrix for the DCMA system, of size \(KN_{\text{r}}\times JK\), is given by \[\mathbf{H}=\begin{bmatrix}\text{diag}(\mathbf{h}_{1}^{(1)})&\ldots&\text{diag}(\mathbf{h}_{j}^{(1)})&\ldots&\text{diag}(\mathbf{h}_{J}^{(1)})\\ \text{diag}(\mathbf{h}_{1}^{(2)})&\ldots&\text{diag}(\mathbf{h}_{j}^{(2)})&\ldots&\text{diag}(\mathbf{h}_{J}^{(2)})\\ \vdots&&\vdots&&\vdots\\ \text{diag}(\mathbf{h}_{1}^{(N_{\text{r}})})&\ldots&\text{diag}(\mathbf{h}_{j}^{(N_{\text{r}})})&\ldots&\text{diag}(\mathbf{h}_{J}^{(N_{\text{r}})})\end{bmatrix}.\] * The channel matrix for the SCMA system, of size \(KN_{\text{r}}\times Jd_{\text{v}}\), is given by \[\mathbf{H}=\begin{bmatrix}\overline{\text{diag}(\mathbf{h}_{1}^{(1)})}&\ldots&\overline{\text{diag}(\mathbf{h}_{j}^{(1)})}&\ldots&\overline{\text{diag}(\mathbf{h}_{J}^{(1)})}\\ \overline{\text{diag}(\mathbf{h}_{1}^{(2)})}&\ldots&\overline{\text{diag}(\mathbf{h}_{j}^{(2)})}&\ldots&\overline{\text{diag}(\mathbf{h}_{J}^{(2)})}\\ \vdots&&\vdots&&\vdots\\ \overline{\text{diag}(\mathbf{h}_{1}^{(N_{\text{r}})})}&\ldots&\overline{\text{diag}(\mathbf{h}_{j}^{(N_{\text{r}})})}&\ldots&\overline{\text{diag}(\mathbf{h}_{J}^{(N_{\text{r}})})}\end{bmatrix},\] where \(\overline{\text{diag}(\mathbf{h}_{j}^{(n_{\text{r}})})}\) is the \(K\times d_{\text{v}}\) matrix obtained after removing the columns in \(\text{diag}(\mathbf{h}_{j}^{(n_{\text{r}})})\) corresponding to the zero elements of the \(j\)th UE codeword \(\mathbf{x}_{j}\) (inactive REs of the \(j\)th UE).

**Example 1**.: _Consider the factor graph matrix for the 150% overloaded SCMA system with \(K=4,J=6,d_{\text{f}}=3,d_{\text{v}}=2\):_ \[\mathbf{F}=\begin{bmatrix}1&0&1&0&1&0\\ 0&1&1&0&0&1\\ 1&0&0&1&0&1\\ 0&1&0&1&1&0\end{bmatrix}. \tag{16}\] _For the \(1\)st UE and the \(n_{\text{r}}\)th receiving antenna, the channel matrix is given below:_ \[\overline{\text{diag}(\mathbf{h}_{1}^{(n_{\text{r}})})}=\begin{bmatrix}h_{1}^{(n_{\text{r}})}[1]&0\\ 0&0\\ 0&h_{1}^{(n_{\text{r}})}[3]\\ 0&0\end{bmatrix}.\] _The transmitted signal from all UEs is given by_ \[\mathbf{x}_{\text{mu}}=\big[\underbrace{x_{1}[1]\;x_{1}[3]}_{\text{UE 1}}\;\underbrace{x_{2}[2]\;x_{2}[4]}_{\text{UE 2}}\;\underbrace{x_{3}[1]\;x_{3}[2]}_{\text{UE 3}}\;\underbrace{x_{4}[3]\;x_{4}[4]}_{\text{UE 4}}\;\underbrace{x_{5}[1]\;x_{5}[4]}_{\text{UE 5}}\;\underbrace{x_{6}[2]\;x_{6}[3]}_{\text{UE 6}}\big]^{T}. \tag{17}\] _The 3rd UE codeword \(\mathbf{x}_{03}\) in \(\mathbf{x}_{\text{mu}}\) is given by_ \[\mathbf{x}_{03}=\big[\underbrace{0\;0}_{\text{UE 1}}\;\underbrace{0\;0}_{\text{UE 2}}\;\underbrace{x_{3}[1]\;x_{3}[2]}_{\text{UE 3}}\;\underbrace{0\;0}_{\text{UE 4}}\;\underbrace{0\;0}_{\text{UE 5}}\;\underbrace{0\;0}_{\text{UE 6}}\big]^{T}. \tag{18}\] _The variables \(\{\mathbf{x}_{0j}\}_{j=1}^{J}\) are similarly represented as \(\mathbf{x}_{03}\).
Note that the transmitted signal \(\mathbf{x}_{\text{mu}}\) can be written as \(\mathbf{x}_{\text{mu}}=\sum_{j=1}^{6}\mathbf{x}_{0j}\)._

### _SMX-CD-NOMA_

The spectral efficiency of the UL CD-NOMA system is further enhanced by placing multiple antennas at the transmitter [5]. Multiple antennas at both the transmitter and the receiver of the CD-NOMA system are considered in this section. Multiple antennas at the transmitter exploit the multiplexing gain offered by spatially multiplexing several data streams onto the MIMO channel. Consider a scenario where each UE is equipped with \(N_{\rm t}\) transmitting antennas, and the BS is equipped with \(N_{\rm r}\) receiving antennas. The total input data at each UE, \(N_{\rm t}\log_{2}(M)\) bits, are divided into \(N_{\rm t}\) parallel data streams. Each data stream is fed to the CD-NOMA encoder, as shown in Fig. 2.

Fig. 2: SMX-CD-NOMA system model in the UL.

Note that in the SMX-CD-NOMA system, all the UE antennas transmit data simultaneously. The main challenge of SMX-CD-NOMA in practical implementation is the computational complexity at both the transmitter and the receiver. Further, detecting the data transmitted from the multiple antennas of multiple UEs is a complex operation. The proposed method performs resource-wise processing at the BS via the ADMM algorithm, as shown in Fig. 3. The received signal at the BS over the \(k\)th RE is given by \[{\bf r}_{k}={\bf H}_{k}{\bf x}_{\rm smx,k}+{\bf w}_{k}, \tag{19}\] where * \({\bf r}_{k}=[r_{k}^{1}\ldots r_{k}^{n_{\rm r}}\ldots r_{k}^{N_{\rm r}}]^{T}\) is the \(N_{\rm r}\times 1\) observation vector. * \({\bf w}_{k}\sim\mathcal{CN}(0,\sigma^{2}{\bf I}_{N_{\rm r}})\) is the \(N_{\rm r}\times 1\) AWGN vector at the BS over the \(k\)th RE. * \({\bf x}_{\rm smx,k}\) is the \(N_{\rm u}N_{\rm t}\times 1\) transmitted vector on the \(k\)th RE: \({\bf x}_{\rm smx,k}=[{\bf x}_{1,k}^{T}\ldots{\bf x}_{j,k}^{T}\ldots{\bf x}_{N_{\rm u},k}^{T}]^{T}\), where each \({\bf x}_{j,k}=[x_{j,k,1}\ldots x_{j,k,n_{\rm t}}\ldots x_{j,k,N_{\rm t}}]^{T}\) is the \(N_{\rm t}\times 1\) vector corresponding to the \(j\)th UE. Each \(x_{j,k,n_{\rm t}}\) is the symbol transmitted from the \(n_{\rm t}\)th antenna of the \(j\)th UE over the \(k\)th RE, where \({\bf x}_{j}^{n_{\rm t}}\) is the codeword of the \(j\)th UE transmitted from the \(n_{\rm t}\)th antenna. The set \(\zeta_{k}\) represents the set of UEs overlapping on the \(k\)th RE, given by \[\zeta_{k}=\{j:{\bf x}_{j}[k]\neq 0;1\leq j\leq J\},\ \ {\rm and}\ \ |\zeta_{k}|=N_{\rm u},\] where \(N_{\rm u}=d_{\rm f}\) and \(N_{\rm u}=J\) for SCMA and DCMA, respectively. The transmitted signal \({\bf x}_{\rm smx,k}\) can be rewritten to formulate the sharing-based detection problem in Section III-D, i.e., \({\bf x}_{\rm smx,k}=\sum_{j=1}^{N_{\rm u}}{\bf x}_{0j,k}\). The variable \({\bf x}_{0j,k}=[0\ldots 0\ \ x_{j,k,1}\ \ x_{j,k,2}\ldots x_{j,k,N_{\rm t}}\ 0\ldots 0]^{T}\) is the \(N_{\rm u}N_{\rm t}\times 1\) vector representing the \(j\)th overlapped UE on the \(k\)th RE. The nonzero elements in \({\bf x}_{0j,k}\) are the symbols transmitted from the \(N_{\rm t}\) antennas. * \({\bf H}_{k}\) is the \(N_{\rm r}\times N_{\rm u}N_{\rm t}\) matrix given by \[{\bf H}_{k}=[{\bf H}_{1,k}\ldots{\bf H}_{j,k}\ldots{\bf H}_{N_{\rm u},k}],\] where \({\bf H}_{j,k}\) represents the \(j\)th UE MIMO channel matrix of size \(N_{\rm r}\times N_{\rm t}\).

**Example 2**.: _Consider \(N_{\rm t}=2\) and the factor graph matrix given in (16).
The transmitted signal over the first RE in the SCMA system is_ \[{\bf x}_{\rm smx,1}=\begin{bmatrix}{\bf x}_{1,1}^{T}&{\bf x}_{3,1}^{T}&{\bf x}_{5,1}^{T}\end{bmatrix}^{T}. \tag{20}\] _For DCMA, due to the dense structure of the codebooks, all \(J\) UEs overlap on each RE [2]. The transmitted signal over the first RE is given by_ \[{\bf x}_{\rm smx,1}=\begin{bmatrix}{\bf x}_{1,1}^{T}&{\bf x}_{2,1}^{T}&\ldots&{\bf x}_{5,1}^{T}&{\bf x}_{6,1}^{T}\end{bmatrix}^{T}, \tag{21}\] _where each \({\bf x}_{j,1}\) is a \(2\times 1\) (\(N_{\rm t}\times 1\)) vector given by_ \[{\bf x}_{j,1}=\begin{bmatrix}x_{j,1,1}&x_{j,1,2}\end{bmatrix}^{T}.\] _The transmitted signal over each remaining RE (\({\bf x}_{\rm smx,k},k=2,\ldots,K\)) has a similar representation as in (20) and (21) for SCMA and DCMA, respectively._

Fig. 3: Processing at the \(k\)th RE of the SMX-CD-NOMA system.

_For SCMA, \(\mathbf{x}_{03,1}\) represents the 3rd UE's symbols, as given by_ \[\mathbf{x}_{03,1}=\big[\underbrace{0\;0}_{\text{UE 1}}\;\underbrace{x_{3,1,1}\;x_{3,1,2}}_{\text{UE 3}}\;\underbrace{0\;0}_{\text{UE 5}}\big]^{T}. \tag{22}\] _For DCMA, \(\mathbf{x}_{03,1}\) can be written as_ \[\mathbf{x}_{03,1}=\big[\underbrace{0\;0}_{\text{UE 1}}\;\underbrace{0\;0}_{\text{UE 2}}\;\underbrace{x_{3,1,1}\;x_{3,1,2}}_{\text{UE 3}}\;\underbrace{0\;0}_{\text{UE 4}}\;\underbrace{0\;0}_{\text{UE 5}}\;\underbrace{0\;0}_{\text{UE 6}}\big]^{T}. \tag{23}\] _The variables \(\{\mathbf{x}_{0j,k}\}_{k=1}^{K}\) are similarly represented as \(\mathbf{x}_{03,1}\). The transmitted signal \(\mathbf{x}_{{\rm smx},k}\) can be written as \(\mathbf{x}_{{\rm smx},k}=\sum_{j=1}^{3}\mathbf{x}_{0j,k}\) and \(\mathbf{x}_{{\rm smx},k}=\sum_{j=1}^{6}\mathbf{x}_{0j,k}\) for SCMA and DCMA, respectively._

### _SM-CD-NOMA_

Due to the requirement of multiple radio frequency (RF) chains, the SMX-CD-NOMA system is not affordable for various applications. Further, inter-channel interference is the main limitation of this system. Spatial modulation (SM) MIMO systems are a promising technology to overcome the limitations of SMX MIMO systems. SM for single-UE communications is well studied in the literature [26]. Due to the demands of future wireless networks, it is important to study and analyze SM in multiuser scenarios. In [27], the authors studied and analyzed SM sparse CDMA. SM-aided CD-NOMA (SM-CD-NOMA) is guaranteed to improve the spectral efficiency of CD-NOMA with feasible complexity at both the transmitter and the receiver [6, 28]. Fig. 4 shows the system model for the SM-CD-NOMA system. Each UE is equipped with \(N_{\rm t}\) transmitting antennas, of which only one antenna is active at any time, as shown in Fig. 4. The active antenna index of the \(j\)th UE is denoted by \(n_{\rm a}^{j}\). All other antennas remain silent in that particular time slot. Thus, the active antenna index acts as a spatial modulation symbol to transmit extra information bits. The information bit stream \(\mathbf{b}_{j}\) of the \(j\)th UE is split into two parts [\(\mathbf{b}_{ja}\), \(\mathbf{b}_{jc}\)] having \(\log_{2}(N_{\rm t})\) and \(\log_{2}(M)\) bits, respectively. Each UE transmits an overall \(\log_{2}(N_{\rm t})+\log_{2}(M)\) bits from the active antenna.
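As an illustration of this bit split, the following minimal Python sketch maps one UE's bits to an antenna index and a codeword. The codebook used here is a flat placeholder rather than one of the designed CD-NOMA codebooks, and the function name is purely illustrative:

```python
import numpy as np

# Minimal sketch of the SM-CD-NOMA bit mapping for one UE: the first
# log2(Nt) bits select the active antenna, the remaining log2(M) bits
# select the codeword.
def sm_map(bits, codebook, n_t):
    n_ant_bits = int(np.log2(n_t))
    n_a = int("".join(map(str, bits[:n_ant_bits])), 2)   # antenna index
    m = int("".join(map(str, bits[n_ant_bits:])), 2)     # codeword index
    K = codebook.shape[0]
    x = np.zeros((n_t, K), dtype=complex)                # one row per antenna
    x[n_a] = codebook[:, m]                              # only antenna n_a transmits
    return n_a, x

codebook = np.ones((4, 4), dtype=complex)    # placeholder K = 4, M = 4 codebook
n_a, x = sm_map([1, 0, 1], codebook, n_t=2)  # 1 antenna bit + 2 codeword bits
```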
The observation vector over the \(k\)th RE at the BS is an \(N_{\rm r}\times 1\) vector given by \[\mathbf{r}_{k}=\mathbf{H}_{k}\mathbf{x}_{{\rm sm},k}+\mathbf{w}_{k}, \tag{24}\] where * \(\mathbf{r}_{k}\), \(\mathbf{w}_{k}\), and \(\mathbf{H}_{k}\) have similar forms as in SMX-CD-NOMA for both the DCMA and SCMA systems. * \(\mathbf{x}_{{\rm sm},k}\) is the \(N_{\rm u}N_{\rm t}\times 1\) vector \(\mathbf{x}_{{\rm sm},k}=[\mathbf{x}_{1,k}^{T}\ldots\mathbf{x}_{j,k}^{T}\ldots\mathbf{x}_{N_{\rm u},k}^{T}]^{T}\), and each \(\mathbf{x}_{j,k}=[0\ldots x_{j,k,n_{\rm a}^{j}}\ldots 0]^{T}\) is the \(N_{\rm t}\times 1\) vector corresponding to the \(j\)th UE. The nonzero element \(x_{j,k,n_{\rm a}^{j}}\) is the symbol transmitted from the \(n_{\rm a}^{j}\)th active antenna. The transmitted signal \(\mathbf{x}_{{\rm sm},k}\) can be rewritten to formulate the sharing-based detection problem in Section III-D, i.e., \(\mathbf{x}_{{\rm sm},k}=\sum_{j=1}^{N_{\rm u}}\mathbf{x}_{0j,k}^{n_{\rm a}^{j}}\). The variable \(\mathbf{x}_{0j,k}^{n_{\rm a}^{j}}=[0\dots 0\dots\ x_{j,k,n_{\rm a}^{j}}\ 0\dots\ 0\dots 0]^{T}\) represents the \(j\)th overlapped UE on the \(k\)th RE. The nonzero element in \(\mathbf{x}_{0j,k}^{n_{\rm a}^{j}}\) is the symbol transmitted from the \(n_{\rm a}^{j}\)th active antenna. Fig. 5 depicts the \(k\)th RE processing at the BS through the extended factor graph. The thick and dotted lines correspond to the active and inactive antennas, respectively.

**Example 3**.: _Consider \(N_{\rm t}=2\) and the factor graph matrix given in (16). Suppose the active antenna indices of the 6 UEs are {2,1,2,2,1,1}. The structure of the transmit codewords from the \(J\) UEs for SM-CD-NOMA is as follows [28]:_ \[\mathbf{x}_{\rm tx}=\begin{bmatrix}\mathbf{0}_{K\times 1}&\mathbf{x}_{2_{K\times 1}}^{1}&\mathbf{0}_{K\times 1}&\mathbf{0}_{K\times 1}&\mathbf{x}_{5_{K\times 1}}^{1}&\mathbf{x}_{6_{K\times 1}}^{1}\\ \mathbf{x}_{1_{K\times 1}}^{2}&\mathbf{0}_{K\times 1}&\mathbf{x}_{3_{K\times 1}}^{2}&\mathbf{x}_{4_{K\times 1}}^{2}&\mathbf{0}_{K\times 1}&\mathbf{0}_{K\times 1}\end{bmatrix}_{N_{\rm t}K\times J=8\times 6}. \tag{25}\] _Each \(\mathbf{x}_{j_{K\times 1}}^{n_{\rm a}^{j}}\) in \(\mathbf{x}_{\rm tx}\) represents the transmitted codeword from the \(j\)th UE, and each column of \(\mathbf{x}_{\rm tx}\) carries information about the active transmit antenna (\(n_{\rm a}^{j}\)) as well as the codeword. In (25), \(\mathbf{0}_{K\times 1}\) indicates zero power transmitted by a deactivated antenna. To facilitate the resource-wise processing at the receiver via ADMM, the transmitted signal over the first RE in the SCMA system is modeled as_ \[\mathbf{x}_{{\rm sm},1}=\begin{bmatrix}\mathbf{x}_{1,1}^{T}&\mathbf{x}_{3,1}^{T}&\mathbf{x}_{5,1}^{T}\end{bmatrix}^{T}. \tag{26}\]

Fig. 4: SM-CD-NOMA system model in the UL.

Fig. 5: Processing at the \(k\)th RE of the SM-CD-NOMA system.
The transmitted signals over the remaining REs (\(\mathbf{x}_{\mathrm{sm,k}},k=2,\ldots,K\) ) have similar representations as (26) and (27) for SCMA and DCMA, respectively. For SCMA, \(\mathbf{x}_{03,1}^{2}\) represents the symbol transmitted from 2nd active antenna of 3rd UE on 1st RE. \(\mathbf{x}_{03,1}^{2}\) can be written as_ \[\mathbf{x}_{03,1}^{2}=\begin{bmatrix}0&0&\underbrace{0&x_{3,1,2}} &\underbrace{0&0\\ \mathrm{UE\,1}&0&\end{bmatrix}^{T}. \tag{28}\] _For DCMA, \(\mathbf{x}_{03,1}^{2}\) can be written as_ \[\mathbf{x}_{03,1}^{2}=\begin{bmatrix}0&0&\underbrace{0&0}& \underbrace{0&x_{3,1,2}}&\underbrace{0&0\\ \mathrm{UE\,1}&\end{bmatrix}^{T}. \tag{29}\] _The variables \(\{\mathbf{x}_{0j}^{n_{\mathrm{f}}^{j}}\}_{k=1}^{K}\) are similarly represented as \(\mathbf{x}_{03,1}^{2}\). The transmitted signal \(\mathbf{x}_{\mathrm{sm,}k}\) can be written as, \(\mathbf{x}_{\mathrm{sm,}k}=\sum_{j=1}^{3}\mathbf{x}_{0j,k}^{n_{\mathrm{f}}^{j}}\) and \(\mathbf{x}_{\mathrm{sm,}k}=\sum_{j=1}^{6}\mathbf{x}_{0j,k}^{n_{\mathrm{f}}^{j}}\) for SCMA and DCMA, respectively \(\square\)_ Observe from equations (15), (19), and (24) that the system models of three MIMO-CD-NOMA systems are similar to those of the conventional MIMO system model. This model is given by \[\mathbf{r}=\mathbf{H}\mathbf{x}+\mathbf{w},\;\,\mathbf{x}\in\mathcal{X}^{N_{ \mathrm{r}}}, \tag{30}\] where \(\mathcal{X}\) is the signal constellation, \(\mathbf{r}\) is the \(N_{\mathrm{r}}\times 1\) observation vector, \(\mathbf{H}\) is \(N_{\mathrm{r}}\times N_{\mathrm{t}}\) channel matrix (\(N_{\mathrm{r}}>N_{\mathrm{t}}\)), \(\mathbf{x}\) is \(N_{\mathrm{t}}\times 1\) transmitted vector and \(\mathbf{w}\) is i.i.d. AWGN vector with each component being distributed as \(\mathcal{CN}(0,\sigma^{2})\). The ML detection problem can be formulated as \[\text{min}_{\mathbf{x}\in\mathcal{X}^{N_{\mathrm{t}}}}\|\mathbf{r}-\mathbf{H} \mathbf{x}\|^{2}.\] The ML detection problem for SIMO CD-NOMA system in (15) is given by \[\min_{\mathbf{x}_{\mathrm{smu}}\in\mathcal{X}_{\mathrm{sm}}^{JN_{\mathrm{e}}} }\;\;\;\|\mathbf{r}-\mathbf{H}\mathbf{x}_{\mathrm{sm}}\|^{2}, \tag{31}\] where \(\mathcal{X}_{\mathrm{mu}}^{JN_{\mathrm{e}}}\) denotes the multi-user signal constellation and it consists of \(J\)-UEs concatenated codewords. However, solving ML detection problems, in general, is NP-hard. The exhaustive search method can be used to solve the ML detection problem. However, the exhaustive search is exponentially complex as per \(\mathcal{O}(M^{J})\)[13] and thus it is not feasible. The ML detection problem (31) can be solved with a polynomial complexity using a distributed optimization framework. The above problem needs to be converted into a sharing problem that can be solved using distributed optimization methods. In the next section, we apply an efficient method based on the ADMM algorithm to solve the sharing-based detection problem in a distributed manner. ### _Large-scale UL multi-antenna CD-NOMA detection via ADMM_ This section discusses the design of ADMM-based MUD for two CD-NOMA techniques, i.e., SCMA and DCMA. The ADMM-based detector is applied to three different MIMO systems, namely, SIMO-CD-NOMA, spatial multiplexed CD-NOMA (SMX-CD-NOMA), and spatially modulated CD-NOMA (SM-CD-NOMA). The implications of the proposed method are discussed in subsequent sections. #### Iii-D1 SIMO CD-NOMA The ML detection problem in (31) can be converted into a sharing problem. 
### _Large-scale UL multi-antenna CD-NOMA detection via ADMM_

This section discusses the design of ADMM-based MUD for two CD-NOMA techniques, i.e., SCMA and DCMA. The ADMM-based detector is applied to three different MIMO systems, namely, SIMO-CD-NOMA, spatial multiplexed CD-NOMA (SMX-CD-NOMA), and spatially modulated CD-NOMA (SM-CD-NOMA). The implications of the proposed method are discussed in subsequent sections.

#### III-D1 SIMO CD-NOMA

The ML detection problem in (31) can be converted into a sharing problem. Further, this problem can be solved in a distributed manner via the ADMM approach, as discussed in Section II-C. ADMM allows parallel processing in MUD problems, guaranteeing minimal computational time at the receiver. Recall from Section III-A that the transmitted signal \(\mathbf{x}_{\mathrm{mu}}\) is given by \(\mathbf{x}_{\mathrm{mu}}=\sum_{j=1}^{J}\mathbf{x}_{0j}\). The problem (31) can be rewritten as \[\min_{\mathbf{x}_{0j}}\;\|\mathbf{r}-\mathbf{H}\left(\sum_{j=1}^{J}\mathbf{x}_{0j}\right)\|^{2}\quad\text{s.t.}\;\;\mathbf{x}_{0j}\in\mathcal{X}_{\mathrm{mu}}^{JN_{\mathrm{e}}},\quad j=1,\ldots,J. \tag{32}\] Here, the channel matrix is \(\mathbf{H}\in\mathbb{C}^{N_{\mathrm{r}}K\times JN_{\mathrm{e}}}\), and we consider \(N_{\mathrm{r}}K>JN_{\mathrm{e}}\). The real and imaginary parts of each entry in \(\mathbf{x}_{0j}\) are restricted by set constraints. The set constraint must be converted into an interval constraint to convert (32) into a sharing problem. Box constraint relaxation (BCR) is used to relax the set constraints of each entry in the codeword [20]. Hence, the real and imaginary parts of each entry in the \(j\)th UE codeword belong to \([-\alpha_{j},\alpha_{j}]\) and \([-\beta_{j},\beta_{j}]\), respectively. After relaxation, each element in the \(j\)th UE codebook is defined as \(\tilde{\mathcal{X}}_{j}=\{x_{j}=x_{jR}+ix_{jI}\,|\,x_{jR}\in[-\alpha_{j},\alpha_{j}],x_{jI}\in[-\beta_{j},\beta_{j}]\}\), where \(\alpha_{j}=\max|\Re(\mathcal{X}_{j})|\) and \(\beta_{j}=\max|\Im(\mathcal{X}_{j})|\). The highly complex MIMO-CD-NOMA ML detection problem in (32) is now ready to be converted into a non-convex distributed optimization problem. However, the constraint relaxation degrades the detection performance due to the losses introduced by the interval constraints. These losses can be compensated by adding a set of quadratic penalty functions \(\sum_{j=1}^{J}\frac{\gamma_{j}}{2}\|\mathbf{x}_{0j}\|_{2}^{2}\) to the objective function, where \(\gamma_{j}\geq 0\) is a penalty parameter. The penalty functions are selected so that each variable \(\mathbf{x}_{0j}\) in the penalty term minimizes the individual penalty as well as the shared objective. The added penalty term drives the solution to be as sparse as possible; only sparse vectors with a specific number of non-zero entries (\(d_{\mathrm{v}}\) for SCMA and \(K\) for DCMA) minimize the shared objective function. The sharing ADMM problem can be written as \[\min_{\mathbf{x}_{0j}}\;\|\mathbf{r}-\mathbf{H}\left(\sum_{j=1}^{J}\mathbf{x}_{0j}\right)\|^{2}+\sum_{j=1}^{J}\frac{\gamma_{j}}{2}\|\mathbf{x}_{0j}\|_{2}^{2}\quad\text{s.t.}\;\;\mathbf{x}_{0j}\in\tilde{\mathcal{X}}_{\mathrm{mu}}^{JN_{\mathrm{e}}},\quad j=1,\ldots,J. \tag{33}\] The problem in (33) is similar to the sharing ADMM problem in (2).
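The BCR step above reduces, at run time, to computing the per-UE box bounds \(\alpha_{j},\beta_{j}\) once and then projecting entrywise. A minimal sketch follows (the codebook values are placeholders, not one of the designs from [32] or [2]):

```python
import numpy as np

# Box-constraint relaxation (BCR) sketch: replace the per-UE set constraint
# by the smallest axis-aligned box covering the codebook, with
# alpha_j = max |Re(X_j)| and beta_j = max |Im(X_j)|.
codebook_j = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
alpha_j = np.max(np.abs(codebook_j.real))
beta_j = np.max(np.abs(codebook_j.imag))

def project_box(v, alpha, beta):
    """Entrywise projection onto [-alpha, alpha] x [-beta, beta]."""
    return (np.clip(v.real, -alpha, alpha)
            + 1j * np.clip(v.imag, -beta, beta))
```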
Proceeding similarly as in Section II-C, (33) can be written as \[\min_{\mathbf{x}_{0j},\mathbf{z}_{0j}}\;\|\mathbf{r}-\mathbf{H}\left(\sum_{j=1}^{J}\mathbf{x}_{0j}\right)\|^{2}+\sum_{j=1}^{J}\frac{\gamma_{j}}{2}\|\mathbf{z}_{0j}\|_{2}^{2}\quad\text{s.t.}\;\;\mathbf{x}_{0j}=\mathbf{z}_{0j},\;\;\mathbf{x}_{0j}\in\tilde{\mathcal{X}}_{\text{mu}}^{JN_{\text{e}}},\;\;\forall j=1,\ldots,J. \tag{34}\] For the formulation of the ADMM steps, the augmented Lagrangian function for the problem (34) is considered as shown below: \[\mathcal{L}\left(\{\mathbf{x}_{0j},\mathbf{z}_{0j},\mathbf{y}_{j}\}_{j=1}^{J}\right)=\|\mathbf{r}-\mathbf{H}\left(\sum_{j=1}^{J}\mathbf{x}_{0j}\right)\|_{2}^{2}+\sum_{j=1}^{J}\frac{\gamma_{j}}{2}\|\mathbf{z}_{0j}\|_{2}^{2}+\sum_{j=1}^{J}\text{Re}\langle\mathbf{x}_{0j}-\mathbf{z}_{0j},\mathbf{y}_{j}\rangle+\frac{\rho}{2}\sum_{j=1}^{J}\|\mathbf{x}_{0j}-\mathbf{z}_{0j}\|_{2}^{2} \tag{35}\] where \(\mathbf{y}_{j}\in\mathbb{C}^{JN_{\text{e}}}\) is the Lagrangian multiplier of the \(j\)th UE. Letting \(\mathbf{u}_{j}=\frac{\mathbf{y}_{j}}{\rho}\), the scaled form of the ADMM steps is as follows [14]: \[\mathbf{z}_{0j}^{t+1}:=\operatorname*{argmin}_{\mathbf{z}_{0j}\in\tilde{\mathcal{X}}_{\text{mu}}^{JN_{\text{e}}}}\frac{\gamma_{j}}{2}\|\mathbf{z}_{0j}\|_{2}^{2}+\frac{\rho}{2}\|\mathbf{z}_{0j}-\mathbf{x}_{0j}^{t}+\mathbf{u}_{j}^{t}\|_{2}^{2} \tag{36}\] \[\mathbf{x}_{0j}^{t+1}:=\operatorname*{argmin}_{\mathbf{x}_{0j}}\|\mathbf{r}-\mathbf{H}\left(\sum_{j=1}^{J}\mathbf{x}_{0j}\right)\|^{2}+\frac{\rho}{2}\sum_{j=1}^{J}\|\mathbf{z}_{0j}^{t+1}-\mathbf{x}_{0j}+\mathbf{u}_{j}^{t}\|_{2}^{2} \tag{37}\] \[\mathbf{u}_{j}^{t+1}:=\mathbf{u}_{j}^{t}+(\mathbf{z}_{0j}^{t+1}-\mathbf{x}_{0j}^{t+1}) \tag{38}\]

```
1:Input: Noise variance (\(N_{0}\)), average codebook energy (\(E_{\text{s}}\)); initialize \(\{\mathbf{z}_{0j}\}_{j=1}^{J},\bar{\mathbf{x}}_{0},\mathbf{u},\bar{\mathbf{z}}_{0}\) with zero vectors.
2:Output:\(\bar{\mathbf{x}}_{0}^{(T)}\), where \(T\) is the maximum number of iterations.
3:From (39) and (40), obtain the optimal solutions (42) and (43), respectively.
4:Preprocessing
5:\(\rho=N_{0}/E_{\text{s}}\)
6:\(\mathbf{G}=(\mathbf{H}^{H}\mathbf{H}J+\rho\mathbf{I})^{-1}\)
7:\(\mathbf{M}=\mathbf{H}^{H}\mathbf{r}\)
8:for\(t=1,2,\ldots T\)do
9:Step:1 Update \(\{\mathbf{z}_{0j}^{t+1}\}_{j=1}^{J}\) in parallel via (42) and compute \(\bar{\mathbf{z}}_{0}^{t+1}\).
10:Step:2
11: Update \(\bar{\mathbf{x}}_{0}^{t+1}\) via (43).
12: Update \(\mathbf{u}^{t+1}\) via (41)
13:endfor
14:Compute \(\hat{\mathbf{x}}_{\text{mu}}=J\bar{\mathbf{x}}_{0}^{(T)}\).
15:After extracting each UE codeword from \(\hat{\mathbf{x}}_{\text{mu}}\), apply the MED rule according to (44).
```
**Algorithm 1** SIMO CD-NOMA detection via sharing ADMM

The variables \(\mathbf{z}_{0j}\) and \(\mathbf{u}_{j}\) can be updated independently in parallel for each \(j=1,\ldots,J\).
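To make the flow of Algorithm 1 concrete, the following Python sketch implements it end-to-end using the closed-form updates (42) and (43) derived next and the dual update (41). It is a minimal illustration under our own interface choices (dense numpy arrays, per-UE box bounds as inputs), not the authors' released code.

```python
import numpy as np

def sharing_admm(H, r, alphas, betas, gammas, rho, T=30):
    """Sketch of Algorithm 1 (SIMO CD-NOMA detection via sharing ADMM).

    H      : (NrK x JNe) stacked channel matrix, with NrK > JNe
    r      : (NrK,)  observation vector
    alphas, betas : per-UE box bounds from the BCR step
    gammas : per-UE quadratic-penalty weights
    rho    : augmented-Lagrangian parameter (= N0/Es in Algorithm 1)
    Returns the estimated multi-user codeword x_mu_hat = J * x0_bar.
    """
    J, n = len(alphas), H.shape[1]
    z = np.zeros((J, n), dtype=complex)        # z_{0j}
    x_bar = np.zeros(n, dtype=complex)         # \bar{x}_0
    u = np.zeros(n, dtype=complex)             # scaled dual variable
    # Preprocessing (lines 5-7): computed once, before the iterations.
    G = np.linalg.inv(H.conj().T @ H * J + rho * np.eye(n))
    Hr = H.conj().T @ r
    for _ in range(T):
        z_bar = z.mean(axis=0)                 # \bar{z}_0^t
        # Step 1: update all z_{0j} in parallel via (42).
        for j in range(J):
            v = rho / (rho + gammas[j]) * (z[j] + x_bar - u - z_bar)
            z[j] = (np.clip(v.real, -alphas[j], alphas[j])
                    + 1j * np.clip(v.imag, -betas[j], betas[j]))
        z_bar = z.mean(axis=0)                 # \bar{z}_0^{t+1}
        # Step 2: averaged x-update via (43), then dual update via (41).
        x_bar = G @ (Hr + rho * (z_bar + u))
        u = u + z_bar - x_bar
    return J * x_bar
```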
Following a similar sequence of steps as discussed in Section II-C, the steps in (36), (37), and (38) are simplified as \[\mathbf{z}_{0j}^{t+1}:=\operatorname*{argmin}_{\mathbf{z}_{0j}\in\tilde{\mathcal{X}}_{\text{mu}}^{JN_{\text{e}}}}\frac{\gamma_{j}}{2}\|\mathbf{z}_{0j}\|_{2}^{2}+\frac{\rho}{2}\|\mathbf{z}_{0j}-\mathbf{z}_{0j}^{t}-\bar{\mathbf{x}}_{0}^{t}+\mathbf{u}^{t}+\bar{\mathbf{z}}_{0}^{t}\|_{2}^{2} \tag{39}\] \[\bar{\mathbf{x}}_{0}^{t+1}:=\operatorname*{argmin}_{\bar{\mathbf{x}}_{0}}\|\mathbf{r}-J\mathbf{H}\bar{\mathbf{x}}_{0}\|_{2}^{2}+\frac{\rho J}{2}\|\bar{\mathbf{x}}_{0}-\bar{\mathbf{z}}_{0}^{t+1}-\mathbf{u}^{t}\|_{2}^{2} \tag{40}\] \[\mathbf{u}^{t+1}:=\mathbf{u}^{t}+\bar{\mathbf{z}}_{0}^{t+1}-\bar{\mathbf{x}}_{0}^{t+1} \tag{41}\] where \(\bar{\mathbf{x}}_{0}=\frac{1}{J}\sum_{j=1}^{J}\mathbf{x}_{0j}\) and \(\bar{\mathbf{z}}_{0}=\frac{1}{J}\sum_{j=1}^{J}\mathbf{z}_{0j}\). The solutions obtained by solving (39) and (40) are as follows \[\mathbf{z}_{0j}^{t+1}=\prod_{[-\alpha_{j},\alpha_{j}]\times[-\beta_{j},\beta_{j}]}\frac{\rho}{(\rho+\gamma_{j})}(\mathbf{z}_{0j}^{t}+\bar{\mathbf{x}}_{0}^{t}-\mathbf{u}^{t}-\bar{\mathbf{z}}_{0}^{t}),\quad\forall j=1,\ldots,J. \tag{42}\] \[\bar{\mathbf{x}}_{0}^{t+1}=(\mathbf{H}^{H}\mathbf{H}J+\rho\mathbf{I})^{-1}(\mathbf{H}^{H}\mathbf{r}+\rho(\bar{\mathbf{z}}_{0}^{t+1}+\mathbf{u}^{t})), \tag{43}\] where \(\prod_{[-\alpha_{j},\alpha_{j}]\times[-\beta_{j},\beta_{j}]}(\cdot)\) denotes the projection of the real part of each entry of the vector onto \([-\alpha_{j},\alpha_{j}]\) and of the imaginary part of each entry onto \([-\beta_{j},\beta_{j}]\). The step in (42) can be solved in parallel for \(j=1,\ldots,J\). \(\mathbf{I}\) is an identity matrix of size \(Jd_{\text{v}}\times Jd_{\text{v}}\) and \(JK\times JK\) for SCMA and DCMA, respectively. From the definition of \(\bar{\mathbf{x}}_{0}\) given above, the estimated multi-user codeword is given by \(\hat{\mathbf{x}}_{\text{mu}}=J\bar{\mathbf{x}}_{0}\). Each UE's concatenated sparse codeword \(\hat{\mathbf{x}}_{0j}\) can be extracted from \(\hat{\mathbf{x}}_{\text{mu}}\). Let \(\tilde{\mathbf{x}}_{0j}\) be the \(j\)th UE codeword after removing the zeros from \(\hat{\mathbf{x}}_{0j}\). The minimum Euclidean distance (MED) rule is applied to detect the transmitted codeword index corresponding to each UE, \[\hat{p}_{j}=\operatorname*{argmin}_{p}\|\tilde{\mathbf{x}}_{0j}-\mathbf{x}_{j,p}\|,\quad j=1,\ldots,J \tag{44}\] where \(\mathbf{x}_{j,p}\) represents the \(p\)th codeword of the \(j\)th UE codebook \(\mathbf{x}_{j,p}\in\mathcal{X}_{j}^{K\times M}\). Note that for SCMA, the zeros in \(\mathbf{x}_{j,p}\) are removed to find \(\hat{p}_{j}\). **Algorithm** 1 details the steps for detection.
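The MED step (44) (and its per-antenna variant (52) below) amounts to a nearest-codeword search over \(M\) candidates per UE. A minimal sketch follows; the `support_j` argument (the indices of UE \(j\)'s nonzero entries in \(\hat{\mathbf{x}}_{\mathrm{mu}}\)) is our own device for encoding the factor-graph structure, not notation from the paper.

```python
import numpy as np

def med_decode(x_mu_hat, support_j, codebook_j):
    """MED rule (44): extract UE j's entries, drop structural zeros,
    and return the index of the nearest codeword.

    codebook_j : (d_v x M) array, column p = p-th codeword (zeros removed).
    """
    x_j = x_mu_hat[support_j]                        # \tilde{x}_{0j}
    dists = np.linalg.norm(codebook_j - x_j[:, None], axis=0)
    return int(np.argmin(dists))                     # \hat{p}_j
```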
#### III-D2 SMX-CD-NOMA

The detector of SMX-CD-NOMA needs to detect the signals transmitted from the \(N_{\text{t}}\) antennas of \(J\) UEs. Here, the detection is more complex than in the SIMO case. Resource-wise processing is adopted to simplify the ADMM-based detection problem. Recall from Section III-B that the SMX codeword is given by \(\mathbf{x}_{\text{smx},k}=\sum_{j=1}^{N_{\text{u}}}\mathbf{x}_{0j,k}\). The ADMM processing first estimates the resource-wise spatially multiplexed transmitted vector \(\hat{\mathbf{x}}_{\text{smx},k}\), for \(k=1,\ldots,K\). Then, the transmitted codeword indices are detected via the MED rule. The ML detection problem is converted into a sharing problem as follows \[\min_{\mathbf{x}_{0j,k}}\;\|\mathbf{r}_{k}-\mathbf{H}_{k}\Big(\sum_{j=1}^{N_{\text{u}}}\mathbf{x}_{0j,k}\Big)\|_{2}^{2}+\sum_{j=1}^{N_{\text{u}}}\frac{\gamma_{j}}{2}\|\mathbf{x}_{0j,k}\|_{2}^{2}\quad\text{s.t.}\;\;\mathbf{x}_{0j,k}\in\tilde{\mathcal{X}}_{j,\text{smx}}^{N_{\text{u}}N_{\text{t}}} \tag{45}\] where \(\gamma_{j}\geq 0\) is the penalty parameter. By introducing the alternative variable \(\mathbf{z}_{0j,k}=\mathbf{x}_{0j,k}\), the ADMM steps for (45) are as follows: \[\mathbf{z}_{0j,k}^{t+1}:=\operatorname*{argmin}_{\mathbf{z}_{0j,k}\in\tilde{\mathcal{X}}_{j,\text{smx}}^{N_{\text{u}}N_{\text{t}}}}\frac{\gamma_{j}}{2}\|\mathbf{z}_{0j,k}\|_{2}^{2}+\frac{\rho}{2}\|\mathbf{z}_{0j,k}-\mathbf{x}_{0j,k}^{t}+\mathbf{u}_{j}^{t}\|_{2}^{2} \tag{46}\] \[\mathbf{x}_{0j,k}^{t+1}:=\operatorname*{argmin}_{\mathbf{x}_{0j,k}}\|\mathbf{r}_{k}-\mathbf{H}_{k}\Big(\sum_{j=1}^{N_{\text{u}}}\mathbf{x}_{0j,k}\Big)\|_{2}^{2}+\frac{\rho}{2}\sum_{j=1}^{N_{\text{u}}}\|\mathbf{z}_{0j,k}^{t+1}-\mathbf{x}_{0j,k}+\mathbf{u}_{j}^{t}\|_{2}^{2} \tag{47}\] \[\mathbf{u}_{j}^{t+1}:=\mathbf{u}_{j}^{t}+\left(\mathbf{z}_{0j,k}^{t+1}-\mathbf{x}_{0j,k}^{t+1}\right). \tag{48}\] The solutions for (46) and (47) are as follows: \[\mathbf{z}_{0j,k}^{t+1}=\prod_{[-\alpha_{j},\alpha_{j}]\times[-\beta_{j},\beta_{j}]}\frac{\rho}{(\rho+\gamma_{j})}(\mathbf{z}_{0j,k}^{t}+\bar{\mathbf{x}}_{0,k}^{t}-\mathbf{u}^{t}-\bar{\mathbf{z}}_{0,k}^{t}) \tag{49}\] \[\bar{\mathbf{x}}_{0,k}^{t+1}=(\mathbf{H}_{k}^{H}\mathbf{H}_{k}N_{\text{u}}+\rho\mathbf{I})^{-1}(\mathbf{H}_{k}^{H}\mathbf{r}_{k}+\rho(\bar{\mathbf{z}}_{0,k}^{t+1}+\mathbf{u}^{t})) \tag{50}\] where \(\mathbf{I}\) is an identity matrix of size \(N_{\text{u}}N_{\text{t}}\times N_{\text{u}}N_{\text{t}}\), and the definitions of \(\bar{\mathbf{x}}_{0,k}\) and \(\bar{\mathbf{z}}_{0,k}\) are given below \[\bar{\mathbf{x}}_{0,k}=\frac{1}{N_{\text{u}}}\sum_{j=1}^{N_{\text{u}}}\mathbf{x}_{0j,k},\qquad\bar{\mathbf{z}}_{0,k}=\frac{1}{N_{\text{u}}}\sum_{j=1}^{N_{\text{u}}}\mathbf{z}_{0j,k}. \tag{51}\] From the definition of \(\bar{\mathbf{x}}_{0,k}\), the estimated codeword corresponding to the \(k\)th RE is given by \(\hat{\mathbf{x}}_{\text{smx},k}=N_{\text{u}}\bar{\mathbf{x}}_{0,k}\). The sparse vector \(\hat{\mathbf{x}}_{0j,k}\) corresponding to the \(j\)th UE can be extracted from \(\hat{\mathbf{x}}_{\text{smx},k}\). Let \(\tilde{\mathbf{x}}_{j,k}\) be the \(N_{\text{t}}\times 1\) vector formed after removing the zeros from the sparse vector \(\hat{\mathbf{x}}_{0j,k}\). Let \(\hat{\mathbf{x}}_{j}^{n_{\text{t}}}\) be the estimated codeword of the \(j\)th UE corresponding to the \(n_{\text{t}}\)th transmit antenna; it is formed after the resource-wise processing is finished. The MED rule is applied to detect the transmitted codeword index corresponding to each UE and transmit antenna as follows: \[\tilde{p}_{j}^{n_{\text{t}}}=\operatorname*{argmin}_{p}\|\hat{\mathbf{x}}_{j}^{n_{\text{t}}}-\mathbf{x}_{j,p}\|,\quad j=1,\ldots,J,\;\;n_{\text{t}}=1,\ldots,N_{\text{t}}, \tag{52}\] where \(\mathbf{x}_{j,p}\) represents the \(p\)th codeword of the \(j\)th UE codebook \(\mathcal{X}_{j}^{K\times M}\). The detailed ADMM-based detection procedure is given in **Algorithm** 2.

#### III-D3 SM-CD-NOMA

This section discusses the formulation of the ADMM-based detection problem for the SM-CD-NOMA system. The modulation happens in both the signal domain and the spatial domain.
The SM-CD-NOMA system transmits information on the codeword and the active antenna indices simultaneously, and the receiver needs to estimate these two quantities. The variable \(\mathbf{x}_{\text{sm},k}\) from Section III-A is given by \(\mathbf{x}_{\text{sm},k}=\sum_{j=1}^{N_{\text{u}}}\mathbf{x}_{0j,k}^{n_{\text{a}}^{j}}\). The proposed method estimates the resource-wise spatially modulated transmitted vector \(\hat{\mathbf{x}}_{\text{sm},k}\) for \(k=1,\ldots,K\) via ADMM processing. Then, the \(L1\)-norm rule is applied to detect the antenna index, and the MED rule is applied to detect the transmitted codeword index. The sharing problem of the SM-CD-NOMA system is similar to the problem in (45). Following the steps (46), (47), and (48), the sharing ADMM problem of SM-CD-NOMA can be solved. The solutions obtained for the sharing ADMM problem are similar to (49) and (50) and are repeated here for clarity: \[\mathbf{z}_{0j,\text{sm},k}^{t+1}=\prod_{[-\alpha_{j},\alpha_{j}]\times[-\beta_{j},\beta_{j}]}\frac{\rho}{(\rho+\gamma_{j})}(\mathbf{z}_{0j,\text{sm},k}^{t}+\bar{\mathbf{x}}_{0,\text{sm},k}^{t}-\mathbf{u}^{t}-\bar{\mathbf{z}}_{0,\text{sm},k}^{t}) \tag{53}\] \[\bar{\mathbf{x}}_{0,\text{sm},k}^{t+1}=(\mathbf{H}_{k}^{H}\mathbf{H}_{k}N_{\text{u}}+\rho\mathbf{I})^{-1}(\mathbf{H}_{k}^{H}\mathbf{r}_{k}+\rho(\bar{\mathbf{z}}_{0,\text{sm},k}^{t+1}+\mathbf{u}^{t})). \tag{54}\] The definitions of \(\bar{\mathbf{x}}_{0,\text{sm},k}\) and \(\bar{\mathbf{z}}_{0,\text{sm},k}\) are similar to (51). The estimated SM codeword corresponding to the \(k\)th RE is \(\hat{\mathbf{x}}_{\text{sm},k}=N_{\text{u}}\bar{\mathbf{x}}_{0,\text{sm},k}\). After the resource-wise processing, the structure of the transmitted codewords of the \(J\) UEs can be retrieved. For **Example** 3, the detected codewords for the \(6\) UEs are given by \[\hat{\mathbf{x}}_{\text{tx}}=\begin{bmatrix}\hat{\mathbf{x}}_{1}^{1}&\hat{\mathbf{x}}_{2}^{1}&\hat{\mathbf{x}}_{3}^{1}&\hat{\mathbf{x}}_{4}^{1}&\hat{\mathbf{x}}_{5}^{1}&\hat{\mathbf{x}}_{6}^{1}\\ \hat{\mathbf{x}}_{1}^{2}&\hat{\mathbf{x}}_{2}^{2}&\hat{\mathbf{x}}_{3}^{2}&\hat{\mathbf{x}}_{4}^{2}&\hat{\mathbf{x}}_{5}^{2}&\hat{\mathbf{x}}_{6}^{2}\end{bmatrix}_{N_{\text{t}}K\times J=8\times 6}\] where \(\hat{\mathbf{x}}_{j}^{n_{\text{t}}}\) indicates the estimated codeword of the \(j\)th UE over the \(n_{\text{t}}\)th transmit antenna. In the presence of noise, the detected active transmit antenna index is given by \[\hat{n}_{\text{a}}^{j}=\operatorname*{argmax}_{n_{\text{t}}}(\|\hat{\mathbf{x}}_{j}^{n_{\text{t}}}\|_{1}),\quad\forall j \tag{55}\] where \(\|(\cdot)\|_{1}\) denotes the \(L1\)-norm of \((\cdot)\). Let \(\hat{\mathbf{x}}_{j}^{\hat{n}_{\text{a}}^{j}}\) be the estimated codeword of the \(j\)th UE transmitted from the \(\hat{n}_{\text{a}}^{j}\)th active antenna. The MED rule is applied to detect the codeword index corresponding to each UE \[\hat{p}_{j}=\operatorname*{argmin}_{p}\|\hat{\mathbf{x}}_{j}^{\hat{n}_{\text{a}}^{j}}-\mathbf{x}_{j,p}\|,\quad\forall j. \tag{56}\] Similar steps to those of **Algorithm** 2 are followed in the detection of the SM-CD-NOMA system.
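A minimal sketch of the two SM decision rules (55) and (56); the input `x_hat_j` (UE \(j\)'s per-antenna estimates after resource-wise processing) is an assumed interface of our own, with 0-indexed antennas where the paper's notation is 1-indexed.

```python
import numpy as np

def detect_sm_ue(x_hat_j, codebook_j):
    """SM detection for one UE: L1-norm antenna rule (55), then MED (56).

    x_hat_j    : (N_t x K) array, row n = estimate on transmit antenna n
    codebook_j : (K x M) array, column p = p-th codeword of UE j
    Returns (antenna index, codeword index), both 0-indexed.
    """
    n_a_hat = int(np.argmax(np.sum(np.abs(x_hat_j), axis=1)))   # (55)
    x_row = x_hat_j[n_a_hat]                                    # K-length estimate
    p_hat = int(np.argmin(np.linalg.norm(codebook_j - x_row[:, None], axis=0)))  # (56)
    return n_a_hat, p_hat
```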
## IV Computational Complexity

This section analyses the computational complexity of the detectors for the SIMO-CD-NOMA system. The detection algorithm's computational complexity determines its practical viability, and the number of FLOPs (mainly complex multiplications) is a useful metric to analyze the complexity of a detector [12]. The number of calculations in the proposed CD-NOMA detection via ADMM consists of two parts. Part-1 comprises the iteration-independent (pre-processing) steps, described in lines 6 and 7 of **Algorithm** 1, and Part-2 comprises the iteration-dependent steps, lines 9 to 12. The calculations in Part-1 are performed only once, i.e., before the ADMM iterations. In the SIMO-CD-NOMA system, Part-1 contains three steps of calculations, namely \(\mathbf{H}^{H}\mathbf{H}\), \((\mathbf{H}^{H}\mathbf{H}J+\rho\mathbf{I})^{-1}\), and \(\mathbf{H}^{H}\mathbf{r}\). The size of \(\mathbf{H}\) is \(N_{\mathrm{r}}K\times Jd_{\mathrm{v}}\) and \(N_{\mathrm{r}}K\times JK\) for SCMA and DCMA, respectively. The numbers of FLOPs required to perform these three steps in the SCMA system are \((N_{\mathrm{r}}K)(Jd_{\mathrm{v}})^{2}\), \((Jd_{\mathrm{v}})^{3}\), and \((N_{\mathrm{r}}K)(Jd_{\mathrm{v}})\). The numbers of FLOPs required to perform the same steps in the DCMA system are \((N_{\mathrm{r}}K)(JK)^{2}\), \((JK)^{3}\), and \((N_{\mathrm{r}}K)(JK)\). The calculations in Part-2 need to be repeated in every iteration and contain mainly two steps. For the SCMA system, these steps involve the scalar multiplication of a \(Jd_{\mathrm{v}}\times 1\) vector for \(J\) UEs in parallel in (42), and the multiplication of a \(Jd_{\mathrm{v}}\times Jd_{\mathrm{v}}\) matrix with a \(Jd_{\mathrm{v}}\times 1\) vector plus a scalar multiplication with a \(Jd_{\mathrm{v}}\times 1\) vector in (43) — in total, approximately \(J^{2}d_{\mathrm{v}}+(Jd_{\mathrm{v}})^{2}+Jd_{\mathrm{v}}\) FLOPs per iteration. For the DCMA system, \((J^{2}K+(JK)^{2}+JK)\) FLOPs are required per iteration. The approximate total computational costs to implement the ADMM-based detector over the SIMO-SCMA and SIMO-DCMA systems are \((N_{\mathrm{r}}K)(Jd_{\mathrm{v}})^{2}+(Jd_{\mathrm{v}})^{3}+(N_{\mathrm{r}}K)(Jd_{\mathrm{v}})+T(J^{2}d_{\mathrm{v}}+(Jd_{\mathrm{v}})^{2}+Jd_{\mathrm{v}})\) and \((N_{\mathrm{r}}K)(JK)^{2}+(JK)^{3}+(N_{\mathrm{r}}K)(JK)+T(J^{2}K+(JK)^{2}+JK)\), respectively.
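The totals above can be tallied directly; the following short sketch (our own arithmetic check of the closed-form counts, not a measured FLOP trace) evaluates them for the parameters used later in Fig. 6.

```python
# Closed-form FLOP totals from Section IV, evaluated at the Fig. 6 parameters
# (Nr = 4, K = 4, J = 6, dv = 2); T is the number of ADMM iterations.
Nr, K, J, dv, T = 4, 4, 6, 2, 30

def admm_flops_scma(Nr, K, J, dv, T):
    pre = (Nr * K) * (J * dv) ** 2 + (J * dv) ** 3 + (Nr * K) * (J * dv)
    per_iter = J ** 2 * dv + (J * dv) ** 2 + J * dv
    return pre + T * per_iter

def admm_flops_dcma(Nr, K, J, T):
    pre = (Nr * K) * (J * K) ** 2 + (J * K) ** 3 + (Nr * K) * (J * K)
    per_iter = J ** 2 * K + (J * K) ** 2 + J * K
    return pre + T * per_iter

print(admm_flops_scma(Nr, K, J, dv, T))   # SIMO-SCMA total
print(admm_flops_dcma(Nr, K, J, T))       # SIMO-DCMA total
```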
TABLE I compares the total complexity of the sharing-ADMM-based detection with other known detection schemes; in TABLE I, NA stands for 'Not applicable'. MPA is widely used for SCMA systems, and its complexity is exponential in \(M\) and \(d_{\mathrm{f}}\), as mentioned in TABLE I [12]. The computational burden grows further for large-scale SCMA systems and, due to this exponential complexity, MPA is not feasible in large-scale MIMO systems. Additionally, the lack of sparsity in DCMA makes MPA impractical for detection. A single tree search (STS) based soft-in soft-out (SISO) GSD [29] is applied to detect the signals of a spreading-sequence-based DCMA system, also known as an overloaded CDMA system [2]. GSD performs key pre-processing steps to convert the rank-deficient system into a full-rank one [30]. These steps include \(\mathbf{H}^{H}\mathbf{H}\) in \(\mathbf{Q}=\mathbf{H}^{H}\mathbf{H}+\lambda\mathbf{I}_{J}\), the Cholesky decomposition \(\mathbf{Q}=\mathbf{D}^{H}\mathbf{D}\), and \((\mathbf{H}\mathbf{D}^{-1})^{H}\mathbf{r}\), which require \((N_{\mathrm{r}}K)J^{2}\), \(J^{3}/3\), and \((J^{3}+(N_{\mathrm{r}}K)J^{2}+JN_{\mathrm{r}}K)\) FLOPs, respectively. The rank-deficient linear system is thus converted into a full-rank one, and standard sphere decoding (SD) can be readily applied. The SD performs pre-processing steps including the **QR** decomposition and \(\mathbf{Q}^{H}\mathbf{r}\), which require \(J^{3}\) and \(2J^{2}\) FLOPs, respectively. The major complexity of the SD lies in the tree search algorithm [31]. The expected complexity of SD is \(E_{\text{FLOPS}}=\sum_{j=1}^{J}f_{p}(j)N_{j}\), where \(N_{j}\) is the average number of nodes visited in level-\(j\) of the tree and \(f_{p}(j)\) is the number of FLOPs needed in level-\(j\). As given in [31], \(f_{p}(j)=2j+11\), and \(N_{j}\) is roughly cubic in the number \(J\) of unknowns to be solved.

\begin{table}
\begin{tabular}{|l|l|l|l|l|}
\hline
Detector & \multicolumn{2}{c|}{Spreading sequence based} & \multicolumn{2}{c|}{Codebook based} \\
\hline
 & LDS & DCMA & SCMA & DCMA \\
\hline \hline
MPA & NA & NA & \(\left(Kd_{M}^{2}M^{4}N_{\text{r}}+Nd_{\text{r}}Md_{\text{v}}\right)T\) & NA \\
\hline
GSD & NA & \((N_{\text{r}}K)J^{2}+J^{3}/3+(J^{3}+(N_{\text{r}}K)J^{2}+JN_{\text{r}}K)+J^{3}+2J^{2}+E_{\text{FLOPS}}\) & NA & NA \\
\hline
MMSE & NA & \((N_{\text{r}}K)J^{2}+J^{3}+(N_{\text{r}}K)J\) & \((N_{\text{r}}K)(Jd_{\text{v}})^{2}+(Jd_{\text{v}})^{3}+(N_{\text{r}}K)(Jd_{\text{v}})\) & \((N_{\text{r}}K)(JK)^{2}+(JK)^{3}+(N_{\text{r}}K)(JK)\) \\
\hline
Sharing ADMM & NA & \((N_{\text{r}}K)J^{2}+J^{3}+(N_{\text{r}}K)J+T(2J^{2}+J)\) & \((N_{\text{r}}K)(Jd_{\text{v}})^{2}+(Jd_{\text{v}})^{3}+(N_{\text{r}}K)(Jd_{\text{v}})+T(J^{2}d_{\text{v}}+(Jd_{\text{v}})^{2}+Jd_{\text{v}})\) & \((N_{\text{r}}K)(JK)^{2}+(JK)^{3}+(N_{\text{r}}K)(JK)+T(J^{2}K+(JK)^{2}+JK)\) \\
\hline
\end{tabular}
\end{table} TABLE I: Computational complexity of different detectors for SIMO CD-NOMA systems.

Fig. 6 depicts the computational complexity comparison of various detection schemes for different CD-NOMA systems with \(N_{\text{r}}=4\), \(K=4\), \(J=6\), \(d_{\text{v}}=2\), and \(M=4\). It can be observed that the complexity of the ADMM-based detector is almost equal to that of the low-complexity MMSE detector.

Fig. 6: Computational complexity comparison of various detectors.

## V Simulations and Discussions

This section presents wide-ranging simulation results for the various MIMO-CD-NOMA systems, including SER performance and the selection of the ADMM parameters \(\left(T,\{\gamma_{j}\}_{j=1}^{J}\right)\). The simulations are performed for the MIMO-CD-NOMA systems described in Section III with varying parameters \(\lambda\), \(M\), \(N_{\text{r}}\), and \(N_{\text{t}}\). Further, the performance of the ADMM-based detector is compared with that of the MPA detector with ten iterations (\(T=10\)) in the case of the SCMA system. The detection of the DCMA and overloaded CDMA systems is carried out using the MMSE and GSD detectors, respectively. The codebooks designed in [32] and [2] are considered for SCMA and DCMA, respectively. TABLE II further details the simulation parameters. The sample simulation codes for this work are available at [https://github.com/vikas2020-del/ADMM-based-detector-for-NOMA](https://github.com/vikas2020-del/ADMM-based-detector-for-NOMA).
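For orientation, a minimal harness of the Monte Carlo methodology behind the SER curves in this section (our own skeleton, independent of the linked simulation code); `detect`, `draw_channel`, and `draw_tx` are assumed callables supplied by the user, e.g., wrappers around the sharing-ADMM sketch above.

```python
import numpy as np

def estimate_ser(detect, draw_channel, draw_tx, snr_db, trials=10_000):
    """Monte Carlo SER estimate: draw channel and transmit codewords,
    add AWGN at the requested SNR, detect, and count symbol errors."""
    errs = tot = 0
    for _ in range(trials):
        H = draw_channel()
        p_true, x = draw_tx()                  # per-UE indices + stacked codeword
        sigma2 = np.mean(np.abs(x) ** 2) * 10 ** (-snr_db / 10)
        w = np.sqrt(sigma2 / 2) * (np.random.standard_normal(H.shape[0])
                                   + 1j * np.random.standard_normal(H.shape[0]))
        p_hat = detect(H, H @ x + w)
        errs += np.count_nonzero(np.asarray(p_hat) != np.asarray(p_true))
        tot += len(p_true)
    return errs / tot
```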
### _SER performance of SIMO CD-NOMA system_

Fig. 7(a) and Fig. 7(b) show the SER performance of the SIMO-SCMA system for \(150\,\%\) and \(200\,\%\) overloading, respectively. Each curve represents the SER performance for a specific number of receive antennas at the BS and codebook size \(M\). Fig. 7(a) and Fig. 7(b) show that the SER performance improves as \(N_{\text{r}}\) increases. Thus, the proposed ADMM-based detector is able to exploit the diversity gain provided by the SIMO-CD-NOMA system. However, for \(M=8\), the SER performance is slightly worse than for \(M=4\) due to the inevitable decrease in the minimum Euclidean and minimum product distances [32]. Fig. 7(a) and Fig. 7(b) also compare the SER performance of the MMSE and ADMM-based detectors. For a small-scale SCMA system (\(M=4\), \(\lambda=150\,\%\)), the performance of the MMSE detector is close to that of the ADMM-based detector. However, for \(\lambda=200\,\%\), the MMSE performance is inferior to ADMM. Observe from (15) that as the number \(J\) of UEs increases, the number \(Jd_{\text{v}}\) of unknowns approaches the number \(KN_{\text{r}}\) of observations, and the MMSE performance degrades when the ratio of \(KN_{\text{r}}\) to \(Jd_{\text{v}}\) approaches unity. ADMM provides approximately a 2 dB SNR gain at \(\text{SER}=10^{-3}\) over the MMSE detector for the 200 % overloaded SCMA system, as observed from Fig. 7(b).

\begin{table}
\begin{tabular}{|c|c|}
\hline Parameter & Value \\
\hline \hline Modulation order (\(M\)) & \(4,8,16\) \\
\hline Number of UEs (\(J\)) & \(6,10\) \\
\hline Number of resources (\(K\)) & \(4,5\) \\
\hline Number of transmit antennas (\(N_{\text{t}}\)) & \(2,4,8\) \\
\hline Number of receive antennas (\(N_{\text{r}}\)) & \(4,8,32,64\) \\
\hline Overloading factor (\(\lambda\)) & \(150\,\%,200\,\%\) \\
\hline Detectors & ADMM, MPA, MMSE, GSD \\
\hline Number of iterations (\(T\)) & 30 (ADMM), 10 (MPA) \\
\hline \end{tabular}
\end{table} TABLE II: Simulation parameters.

Fig. 7: SER performance of the SIMO-SCMA system. Solid and dotted lines denote the ADMM and MMSE performance, respectively.

Fig. 8(a) and Fig. 8(b) show the SER performance of \(M=4\) and \(M=16\) overloaded SIMO-DCMA systems, respectively. Fig. 8(a) compares the SER performance of the MMSE and ADMM-based detectors. ADMM provides approximately a 2 dB SNR gain at \(\text{SER}=10^{-3}\) over the MMSE detector for the 200 % overloaded DCMA system, as observed from Fig. 8(a). Therefore, ADMM is a more suitable detector than MMSE for 200 % overloaded CD-NOMA systems. Furthermore, simulation attempts indicate that MMSE cannot detect the CD-NOMA signals for \(M=8\) and \(M=16\) size codebooks; therefore, these plots are not shown in Fig. 7 and Fig. 8.

Fig. 8: SER performance of the SIMO-DCMA system for 150 % and 200 % overloading. Solid and dotted lines denote the ADMM and MMSE performance, respectively.

### _SER performance of SMX CD-NOMA systems_

Fig. 9(a) and Fig. 9(b) show the SER performance of the SMX-SCMA system for \(150\,\%\) and \(200\,\%\) overloading, respectively. Observe from Fig. 9(a) and Fig. 9(b) that the higher the value of \(N_{\text{r}}\), the better the SER performance. As the value of \(N_{\text{t}}\) increases, \(N_{\text{r}}\) is increased to maintain \(N_{\text{r}}>d_{\text{f}}N_{\text{t}}\) in (19). The improvement in the SER performance is largest for the \(N_{\text{r}}=64\) case due to more observations at the BS. This improvement comes at the cost of a marginal computational complexity increment (TABLE I) in ADMM detection. Fig. 9(c) depicts the SER performance of SMX-DCMA systems. For SCMA, \(N_{\text{u}}=d_{\text{f}}\) UEs overlap on each RE. On the other hand, for DCMA \(N_{\text{u}}=J\) (\(J>d_{\text{f}}\)) UEs overlap on each RE, as shown in Fig. 1(b). This significant overlap increases the number of unknowns in a DCMA system compared to an SCMA one, as observed in (19).
To maintain the number of observations higher than the number of unknowns (\(N_{\text{r}}>JN_{\text{t}}\)), DCMA requires a larger \(N_{\text{r}}\) at the BS compared to SCMA for a fixed \(N_{\text{t}}\). As a result, the computational complexity of DCMA detection increases significantly. The tree-search-based SD algorithms are highly complex in codebook-based DCMA, as explained in Section IV. Further, the ADMM-based detector shows complexity similar to that of MMSE, as shown in Fig. 6. Therefore, compared with MMSE, the ADMM-based detector performs well with almost the same complexity.

Fig. 9: SER vs. SNR performance of the SMX-CD-NOMA system by varying \(N_{\text{t}},N_{\text{r}},M\) using the ADMM-based detector.

### _SER performance of SM CD-NOMA systems_

Fig. 10(a) and Fig. 10(b) show the SER performance of the SM-SCMA system for \(150\,\%\) and \(200\,\%\) overloading, respectively. The performance of the ADMM-based detector mainly depends on \(N_{\text{t}}\), \(N_{\text{r}}\), and \(M\), as we have seen in the previous subsections. The \(M=4\) size codebook exhibits superior performance over \(M=8\) due to its better distance properties (Euclidean distance and product distance). In SM-CD-NOMA systems, each UE's codeword contains the active antenna and codeword index information of that particular UE, as given in (25). Fig. 10(a) and Fig. 10(b) depict the efficiency of ADMM in detecting the information in both the signal domain and the space domain. Note that the codewords in (25) contain zero-column vectors to indicate zero power transmission from the inactive antenna. As a result, the number of observations goes below the number of unknowns (\(N_{\text{r}}<N_{\text{u}}N_{\text{t}}\)), which leads to a performance loss for the ADMM-based detector. However, increasing the number of observations (\(N_{\text{r}}\)) at the BS can compensate for this loss. Therefore, the performance of the ADMM-based detector can be maintained with a marginal increment in the computational complexity, as given in TABLE I. The observations mentioned above for the SMX-DCMA and SM-SCMA systems apply to SM-DCMA systems as well, as shown in Fig. 10(c). Further, \(N_{\rm r}=32,\lambda=200\,\%\) provides significantly better performance than \(N_{\rm r}=16,\lambda=150\,\%\), as the former has more observations (\(N_{\rm r}\)) than the latter. Thus, the ADMM-based detector is particularly appropriate for highly overloaded DCMA systems equipped with large-scale MIMO parameters.

### _SER performance: ADMM vs. MPA_

Fig. 11(a) shows the SER performance comparison between the ADMM and MPA detectors for the SIMO-SCMA system. Existing research shows that MPA is a near-optimal detector with high computational complexity [7]. MPA gives a maximum SNR gain of around 2.5 dB over ADMM at \(\text{SER}=10^{-2}\) for \(N_{\rm r}=4\) and \(M=4\), as shown in Fig. 11(a). As the modulation order \(M\) increases, the performance of ADMM becomes close to that of MPA, as observed for \(N_{\rm r}=4,M=8\) in Fig. 11(a). Therefore, for \(M=8\), the codebook distance properties influence the MPA and ADMM-based detectors similarly. Further, for \(N_{\rm r}=8,M=8\), the ADMM-based detector shows approximately a 2 dB SNR gain at \(\text{SER}=10^{-3}\) over the MPA detector.

Fig. 10: SER vs. SNR performance of the SM-MIMO CD-NOMA system using the proposed ADMM-based detector.
The increase in the number of observations significantly improves the ADMM performance compared to MPA for large-size codebooks (i.e., with large \(M\)).

### _SER performance: ADMM vs. GSD_

Fig. 11(b) shows the SER performance comparison between the ADMM and GSD detectors for the SIMO overloaded CDMA system. Observe from Fig. 11(b) that, in the low-SNR region, GSD outperforms ADMM with a maximum of around 2 dB SNR gain at \(\text{SER}=10^{-4}\). As the SNR increases, the GSD performance degrades compared to ADMM. Unlike GSD, as \(M\) increases, the ADMM performance is enhanced by doubling the number of receive antennas, as shown in Fig. 11(b). Note that the ADMM performance improves as the number of observations increases.

Fig. 11: Comparison of SER performance using the proposed ADMM and the conventional MPA and GSD detectors.

### _SER performance: Imperfect channel state information (CSI)_

All the above simulations assume perfect CSI at the receiver. In practical scenarios, obtaining perfect CSI at the receiver is not feasible due to channel estimation errors (CEEs). Thus, the proposed detector's performance in the presence of CEEs is an important metric of its practical feasibility. An imperfect channel can be modeled as [33] \[\hat{\mathbf{H}}=\mathbf{H}+e~{}\mathbf{\Omega} \tag{57}\] where \(\mathbf{\Omega}\) is the CEE and is considered to be uncorrelated with \(\mathbf{H}\). The entries of \(\mathbf{\Omega}\) are i.i.d. complex Gaussian random variables with zero mean and unit variance, i.e., \(\mathcal{CN}(0,1)\). The quantity \(e\) in (57) determines the variance of the CEE. Fig. 12 depicts the performance of the proposed ADMM-based detector for imperfect CSI with \(e=0\,\%\), \(e=5\,\%\), \(e=10\,\%\), and \(e=20\,\%\). The simulations for the SCMA and DCMA systems are shown in Fig. 12(a) and Fig. 12(b), respectively. Observe that the impact of the CEE on the ADMM-based detector's performance is minimal. Therefore, the ADMM-based detector can be applied in imperfect CSI scenarios.

Fig. 12: SER performance of the CD-NOMA system with imperfect CSI for \(e=0~{}\%,e=5~{}\%,e=10~{}\%,e=20~{}\%\).

### _Convergence of ADMM and selection of parameters_

The convergence analysis of the ADMM-based detector in the DCMA system is performed via Monte Carlo simulations. Fig. 13(a) shows the impact of the number of ADMM iterations on the SER performance. The proposed detector exploits the iterative nature of ADMM to converge: the SER performance improves as the number of iterations (\(T\)) increases, and the improvement becomes marginal after a certain number of iterations. For all considered \(\frac{E_{\mathrm{b}}}{N_{0}}\) values, ADMM converges after 15 iterations, as shown in Fig. 13(a). The penalty parameters \(\{\gamma_{j}\}_{j=1}^{J}\) and \(\rho\) are selected to maximize the SER performance. The augmented Lagrangian parameter \(\rho\) is selected as the reciprocal of the SNR [19]. The impact of \(\gamma\) on the SER performance of the ADMM-based detector is analyzed through Monte Carlo simulations, and the results are given in Fig. 13(b). For low values of \(\gamma\), the ADMM performance degrades, and for large values, the variation in SER becomes inconsequential. When \(\gamma\) is close to zero, the impact of the penalty terms is nullified, and the added penalty functions lose their importance in minimizing the objective function. Observe from Fig. 13(b) that ADMM performs better in the range \(\{\gamma_{j}\}_{j=1}^{J}\in[50,100]\).

Fig. 13: Impact of \(T\) and \(\gamma\) on the ADMM-based detector’s SER performance.
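For completeness, a short sketch of two remaining simulation ingredients: the imperfect-CSI model (57) and the \(\rho=1/\mathrm{SNR}\) rule for the augmented Lagrangian parameter (shapes, seeding, and function names are our own choices).

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_csi(H, e):
    """Return H_hat = H + e * Omega with Omega i.i.d. CN(0, 1), as in (57)."""
    omega = (rng.standard_normal(H.shape)
             + 1j * rng.standard_normal(H.shape)) / np.sqrt(2)
    return H + e * omega

def rho_from_snr(snr_db):
    """rho is chosen as the reciprocal of the (linear) SNR, i.e., N0/Es."""
    return 10 ** (-snr_db / 10)

# e = 0.05, 0.10, 0.20 reproduce the 5%, 10%, 20% CEE settings of Fig. 12.
```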
## VI Conclusions and Future Scope

This paper proposed new system models and a low-complexity iterative linear detector for large-scale MIMO-CD-NOMA systems. The optimal ML detection problem is converted into a sharing problem, which is efficiently solved via the ADMM algorithm within a distributed optimization framework. The proposed ADMM-based detector enables the detection of large-scale MIMO-CD-NOMA systems with high overloading factors (\(\lambda\)) and modulation orders (\(M\)) while maintaining low complexity. By leveraging the proposed ADMM detector, CD-NOMA systems achieve significantly increased connectivity. Further, the impact of the ADMM parameters, such as the number of iterations and the penalty parameters, is analyzed. Exhaustive simulation results are presented to validate the effectiveness of the proposed ADMM-based detector in comparison with conventional detectors such as MPA, GSD, and MMSE. In addition, the results demonstrate that the ADMM-based detector offers excellent performance with low complexity across various CD-NOMA system variants. This paper has explored the concept of ADMM detection specifically for uncoded CD-NOMA systems. Designing a soft-decision detector using the ADMM algorithm for coded CD-NOMA systems is an interesting direction for future work.
2305.14300
Distributed CONGEST Algorithms against Mobile Adversaries
In their seminal PODC 1991 paper, Ostrovsky and Yung introduced the study of distributed computation in the presence of mobile adversaries which can dynamically appear throughout the network. Over the years, this setting has been studied mostly under the assumption that the communication graph is fully-connected. Resilient CONGEST algorithms for general graphs, on the other hand, are currently known only for the classical static setting, i.e., where the set of corrupted edges (or nodes) is fixed throughout the entire computation. We fill this gap by providing round-efficient simulations that translate given CONGEST algorithms into equivalent algorithms that are resilient against $f$-mobile edge adversaries. Our main results are: -Perfect-Security with Mobile Eavesdroppers: A translation of any $r$-round $f$-static-secure algorithm into an equivalent $\Theta(f)$-mobile-secure algorithm with $\Theta(r)$ rounds. We also show that the $f$-static-secure algorithms of [Hitron, Parter and Yogev, DISC 2022 & ITCS 2023] can be modified into $f$-mobile-secure algorithms with the same number of rounds. -Resilience with Mobile Byzantine Adversaries: An $f$-mobile-byzantine simulation which is based on a decomposition of the graph into low-diameter edge-disjoint spanning trees. This provides us with near-optimal CONGEST compilers for expander graphs. It also leads to near-optimal compilers in the congested-clique model against $\Theta(n)$-mobile adversaries. For general $(2f+1)$ edge-connected graphs with $f$-mobile adversary, we almost match the bounds known for the $f$-static setting, when provided a trusted pre-processing phase. Our results are based on a collection of tools from interactive coding [Gelles, Found. Trends Theor. Comput. Sci. 2017], linear sketches and low-congestion graph decomposition. The introduced toolkit might have further applications for resilient computation.
Orr Fischer, Merav Parter
2023-05-23T17:42:29Z
http://arxiv.org/abs/2305.14300v1
# Distributed CONGEST Algorithms against Mobile Adversaries

###### Abstract

In their seminal PODC 1991 paper, Ostrovsky and Yung introduced the study of distributed computation in the presence of mobile adversaries which can dynamically appear throughout the network, analogous to the spread of a virus. Over the years, this setting has been studied mostly under the assumption that the communication graph is fully-connected. Resilient CONGEST algorithms for _general_ graphs, on the other hand, are currently known only for the classical _static_ setting, i.e., where the set of corrupted edges (or nodes) is fixed throughout the entire computation. We fill this missing gap by providing round-efficient simulations that translate given CONGEST algorithms into equivalent algorithms that are resilient against \(f\)-mobile edge adversaries, i.e., where the adversary controls a (possibly distinct) subset of \(f\) edges \(F_{i}\) in each round \(i\). Our main results are:

* **Perfect-Security with Mobile Eavesdroppers.** A translation of any \(r\)-round \(f\)-_static_-secure algorithm into an equivalent \(\Theta(f)\)-_mobile_-secure algorithm with \(\Theta(r)\) rounds. We also show that the \(f\)-static-secure algorithms of [10] can be modified into \(f\)-_mobile_-secure algorithms with the _same_ number of rounds.
* **Resilience with Mobile Byzantine Adversaries.** An \(f\)-mobile-byzantine simulation which is based on a decomposition of the graph into low-diameter edge-disjoint spanning trees. This provides us with near-optimal CONGEST compilers for expander graphs. It also leads to near-optimal compilers in the congested-clique model against \(\Theta(n)\)-mobile adversaries. For general \((2f+1)\) edge-connected graphs with \(f\)-mobile adversary, we almost match the bounds known for the \(f\)-static setting, when provided a trusted pre-processing phase.

Our results are based on a collection of tools borrowed from the area of interactive coding [Gelles, Found. Trends Theor. Comput. Sci. 2017], linear sketches and low-congestion graph decomposition. The introduced toolkit might have further applications for resilient computation.

###### Contents

* 1 Introduction
* 1.1 New Results
* 1.2 Technical Overview
* 1.2.1 Perfect-Security with Mobile Adversaries
* 1.2.2 Resilience with Mobile Byzantine Adversaries
* 1.3 Preliminaries
* 1.4 Model, Security and Resilience Notions
* 1.5 Useful Tools
* 2 Security with Mobile Eavesdropper Adversaries
* 3 Resilience with Mobile Byzantine Adversaries
* 3.1 Tools
* 3.2 \(f\)-Resilient Compilation with Round Overhead \(\tilde{O}(\mathsf{D}_{\mathsf{TP}})\)
* 3.2.1 Sub-Procedure for Safe Broadcast
* 3.2.2 \(f\)-Resilient Compiler
* 3.3 Applications
* 4 Resilience to Bounded Round-Error Corruption Rate
* 4.1 Algorithm Description
* 4.2 Analysis
* 4.3 Applications
* 5 Mobile Resilience using Fault-Tolerant Cycle Covers
* A Translation of Fault-Free Algorithms into \(f\)-Mobile-Secure Algorithms
* A.1 Secure Unicast with Mobile Adversaries
* A.2 \(f\)-Mobile-Secure Broadcast
* A.3 Congestion-Sensitive Compiler with \(f\)-Mobile Security
* B Proof of Lemma 4.2
* C Distributed Computation of a Low Depth Tree Packing

## 1 Introduction

Following our increased dependence on distributed infrastructures, protecting the correctness and the privacy of users' information in the presence of faults has become an imperative mission in distributed network design.
The inherent vulnerability of these systems seems inevitable, as in distributed algorithms the output of one node is used in the computation of another. Modern network instantiations, e.g., the Blockchain, call for new kinds of distributed algorithms. The study of _resilient_ and _secure_ distributed computation has evolved along two lines of research. The line on resilient byzantine computation was initiated by the work of Pease et al. [61] and Lamport et al. [50, 61] on the _byzantine agreement problem_. The second line, which focuses on _information-theoretic security_, dates back to the work of Yao [75], and has been extensively addressed by the Cryptographic community under the Multi-Party Computation (MPC) model [5]. While earlier work assumed _static_ adversaries (in which the set of corruptions is fixed), the arguably more realistic _mobile_ (or dynamic) faulty setting has attracted a lot of attention as well, in both of these communities. In this mobile setting, faults might be introduced in a dynamic and adaptive manner, similarly to the spread of a computer virus. A key limitation of many of these existing algorithms, however, is their restriction to fully-connected communication graphs. A recent line of works [59, 58, 57, 39, 40, 41, 42] mitigated this gap by providing resilient and secure algorithms, for any graph topology, in the \(\mathsf{CONGEST}\) model of distributed computing [64]. These algorithms have been limited, so far, to _static_ adversaries that control a fixed number of edges (or nodes) in the graph1. The primary objective of this paper is to provide a new algorithmic approach for handling _mobile adversaries_ while keeping the round overhead as close as possible to the static counterparts. We focus on the following fundamental question that has been addressed so far mainly in the complete graph setting:

Footnote 1: Many of these works can handle adaptive adversaries, a stronger variant of the static setting, which allows the adversary to place the total of \(f\) corruptions in an adaptive manner.

**Question 1.1**.: What is the cost (in terms of the number of \(\mathsf{CONGEST}\) rounds) for providing resilience against _mobile_ vs. _static_ adversaries in general distributed networks?

In terms of feasibility, earlier work, e.g., by Srinathan et al. [71], has demonstrated that the graph connectivity requirements are the same for both static and mobile adversaries. Our main contribution is in providing new algorithms for mobile adversaries that almost match the state-of-the-art results for their static counterparts. An additional benefit of our approach is that in some cases it leads to improved bounds (and new results) already for the static setting.

**Line 1: Resilient Computation, in Complete Graphs.** In the classical (static) byzantine setting, an all-powerful adversary controls a fixed subset of edges (or nodes) by sending malicious messages through these edges. Time-efficient and communication-efficient algorithms have been devised for various distributed tasks that can tolerate up to a _constant_ fraction of corrupted edges and nodes in complete network topologies. Examples include: broadcast and consensus [21, 23, 28, 11, 72, 67, 7, 68, 6, 27, 32, 29, 49, 63, 47, 24, 53, 44, 19, 48], gossiping [8, 2, 15], and agreement [23, 61, 10, 18, 32]. Mobile byzantine (node) faults have been addressed by Garay [31] in the context of the byzantine agreement problem.
Tight bounds for this problem, in terms of the allowed number of faults per round, have been provided by Bonnet et al. [9]. See Yung [76] for an overview of mobile adversaries.

**Line 2: Secure Computation, in Complete Graphs.** The notion of information-theoretic security is among the most fundamental and long-studied concepts in the area of secure MPC. This line starts with the earlier work of Yao [75] for \(n=2\) and of Goldreich, Micali and Wigderson [37] for general \(n\), and culminates in the well-known Ben-Or, Goldwasser and Wigderson (BGW) protocol [5] that provides information-theoretic security against semi-honest adversaries controlling almost half of the parties. Inspired by the mobility of computer viruses and swarms, Ostrovsky and Yung [55] initiated the study of mobile adversarial settings, where corruptions are introduced, and removed, in a dynamic manner, throughout the course of execution. The extensive line of _mobile_ secure algorithms has developed into the well-established topic of _proactive security_ [12, 71, 3, 25, 26]. As in the static setting, most of these algorithms are designed for complete networks, and relatively little is known on the complexity of such computations in general graphs.

**Line 3: Resilient and Secure Computation for Any Graph, Static Adversaries.** Throughout, an algorithm is denoted as \(f\)-static-secure (resp., \(f\)-static-resilient) if it guarantees information-theoretic security (resp., correctness) in the presence of (static) adversaries controlling at most \(f\) edges in the graph2. It is well-known that handling \(f\)-static eavesdroppers requires an edge-connectivity of \(f+1\). In contrast, \(f\)-static byzantine adversaries require an edge-connectivity of \(2f+1\) [21, 22, 62]. In a sequence of works, Parter and Yogev [59, 58, 57] introduced a graph-theoretic paradigm for round-efficient CONGEST algorithms that are \(f\)-static-secure and \(f\)-static-resilient, for sufficiently connected graphs. Their approach is based on providing low-congestion _reliable_ paths between every pair of neighboring nodes in the graph. This yields a general compilation of any fault-free CONGEST algorithm into an equivalent \(f\)-secure (or resilient) algorithm. The round overhead depends on the length of the \(\Theta(f)\) edge-disjoint paths between neighbors, which might be bounded by \(O(\min\{n,(D/f)^{\Theta(f)}\})\), where \(D\) is the graph diameter [40].

Footnote 2: The precise definitions of the adversarial settings are elaborated later on.

In a sequence of two very recent works, Hitron, Parter and Yogev [41, 42] bypassed this \(D^{f}\) barrier for the adversarial setting of eavesdroppers. By employing the secure unicast algorithm of Jain [45], they provide \(f\)-static-secure broadcast algorithms [41] with round complexity3 of \(\widetilde{O}(D+\sqrt{fn})\), for \(D\)-diameter \(n\)-node graphs with edge-connectivity of \(\Theta(f)\). [42] provided \(f\)-static-secure compilers for low-congestion CONGEST algorithms, along with near-optimal \(f\)-static-secure algorithms for Minimum-Spanning-Tree (MST).

Footnote 3: As usual, \(\widetilde{O}()\) hides factors poly-logarithmic in \(n\).

**New: CONGEST Algorithms with Mobile Adversaries.** We provide \(f\)-mobile-secure (resp., \(f\)-mobile-resilient) algorithms whose privacy and resilience guarantees hold in the presence of a mobile adversary that controls distinct subsets of at most \(f\) edges \(F_{i}\) in each round \(i\).
Srinathan et al. [71] show that the connectivity requirements are the same for both static and mobile adversaries (either eavesdroppers or byzantine). Providing round-efficient algorithms for these dynamic and adaptive adversarial behaviors calls for a new algorithmic paradigm that borrows useful techniques from streaming algorithms, graph decomposition and _interactive coding_. While mobile secure algorithms can be provided quite readily, our major efforts go into mobile resilient algorithms against mobile byzantine adversaries. This is based on a completely different approach than that taken in the prior (static) work of [40, 39]. We note that special attention in the literature has been devoted to the unicast problem (a.k.a. the _Secure Message Transmission_ problem) [30, 71, 60]. In our general compilation scheme for a given \(m\)-edge graph, it is required to solve \(m\) unicast instances, i.e., one for every \((u,v)\in E\).

**Related Setting: Interactive Coding.** While there is no general machinery for providing mobile resilience in the CONGEST model, the closest setting to ours is that of interactive coding, in which the adversary is allowed to corrupt a bounded _fraction_ of the _total_ communication bits (i.e., a bounded _communication-error-rate_). Rajagopalan and Schulman [65] provided a network analog of Shannon's coding theorem against stochastic noise. Computationally efficient protocols for this setting were subsequently provided by Gelles, Moitra and Sahai [34]. Hoza and Schulman [43] provided the first network protocols against adversarial noise that also fit the bandwidth limitation of the CONGEST model4. Censor-Hillel, Gelles and Haeupler [13] presented the first _fully_ distributed interactive coding scheme in which the topology of the communication network is not assumed to be known in advance, as it is in prior works in this setting. See [33] for an excellent review.

Footnote 4: In fact, their algorithms send one bit of information on each of the graph edges in a given round.

Our \(f\)-mobile setting is, in some sense, incomparable to that of interactive coding. Assume an \(n\)-node graph with \(m=\Theta(n^{2})\) edges. Then, in the case where the protocol sends \(O(n)\) messages per round, our adversary is _stronger_, as the interactive coding adversary is limited to an error rate of \(O(1/n)\) and therefore cannot corrupt even a single edge in each and every round. On the other hand, if the \(r\)-round protocol sends \(\Omega(m)\) messages per round, the interactive coding setting allows for \(\Omega(m/n)\) corruptions per round, while our \(f\)-mobile setting allows for a total of \(fr\) corruptions.

### 1.1 New Results

We present a new algorithmic framework for distributed computation in the presence of mobile edge adversaries, in which the set of corrupted edges changes dynamically and adaptively throughout the execution. We investigate two main adversarial settings: (i) mobile eavesdroppers, where the key objective is _security_ of information, and (ii) mobile byzantine adversaries, where we strive to maintain the correctness of the computation.

**Security against Mobile Eavesdroppers.** We present a general simulation result that translates any given static-secure algorithm into a mobile-secure algorithm while keeping the same asymptotic bound on the number of controlled edges and round complexity. We show:

**Theorem 1.2**.: _Let \(\mathcal{A}\) be an \(r\)-round \(f\)-static-secure algorithm for \(r\leq\operatorname{poly}(n)\)._
_Then for any positive integer \(t\), there exists an equivalent \(r^{\prime}\)-round \(f^{\prime}\)-mobile-secure algorithm \(\mathcal{A}^{\prime}\) such that: \(r^{\prime}=2r+t\) and \(f^{\prime}=\lfloor(f\cdot(t+1))/(r+t)\rfloor\). Moreover, an equivalent protocol exists for any \(t\geq 2fr\), \(r^{\prime}=2r+t\) and \(f^{\prime}=f\). Consequently, any \(r\)-round \(f\)-static-secure algorithm \(\mathcal{A}\) can be turned into an \(r^{\prime}\)-round \(f^{\prime}\)-mobile-secure algorithm with: (i) \(r^{\prime}=O(r)\) and \(f^{\prime}=\Theta(f)\), or (ii) \(r^{\prime}=O(fr)\) and \(f^{\prime}=f\)._

To avoid the extra \(f\) factor in the round overhead (when insisting on \(f^{\prime}=f\)), we also provide a white-box modification of the existing \(f\)-static-secure algorithms of [41] and [42]. A notable tool introduced in [42] is a general _congestion-sensitive_ compiler whose performance is optimized for (fault-free) distributed algorithms with low congestion. A distributed algorithm is said to have \(\operatorname{cong}\)-_congestion_, for an integer \(\operatorname{cong}\), if the maximum number of messages that the algorithm sends over any given edge in the graph throughout its entire execution is bounded by \(\operatorname{cong}\). We show:

**Theorem 1.3** (Congestion-Sensitive Compiler with Perfect Mobile Security).: _For every \((2f+3)(1+o(1))\) edge-connected \(D\)-diameter \(n\)-vertex graph \(G\), any \(r\)-round \(\operatorname{cong}\)-congestion algorithm \(\mathcal{A}\) for \(G\) in the fault-free setting can be compiled into an equivalent \(f\)-mobile-secure algorithm \(\mathcal{A}^{\prime}\) that runs in \(\widetilde{O}(r+D+f\cdot\sqrt{\operatorname{cong}\cdot n}+f\cdot\operatorname{cong})\) CONGEST rounds. The correctness of the simulation holds w.h.p.5_

Footnote 5: As usual, w.h.p. refers to a success guarantee of \(1-1/n^{c}\) for any desired constant \(c\geq 1\).

This matches the \(f\)-static, statistically-secure compilers of [42]. Our compilers have the benefit of achieving perfect security. This is obtained by replacing the implicit balls-into-bins ingredient of [42] with bounded-independence hash functions. To prove Theorem 1.3, we also provide matching bounds for the mobile variants of the secure broadcast and unicast problems studied by [41].

**Resilience against Mobile Byzantine (Edge) Adversaries.** An \(f\)-mobile byzantine adversary can maliciously corrupt the messages exchanged over at most \(f\) edges \(F_{i}\) in each round \(i\). We first provide a brute-force extension of the cycle-cover-based solution of [40] to the mobile setting.

**Theorem 1.4** (\(f\)-Mobile-Resilient Compilers for General Graphs).: _Given any \(n\)-node \(D\)-diameter graph \(G\) with edge-connectivity \(2f+1\), any \(r\)-round algorithm \(\mathcal{A}\) for \(G\) can be compiled into an equivalent \(r^{\prime}\)-round algorithm \(\mathcal{A}^{\prime}\) that is \(f\)-mobile-resilient, with \(r^{\prime}=D^{\Theta(f)}\cdot\log n\). This holds provided that either (i) all nodes know the graph topology (a.k.a., the supported-CONGEST model), or (ii) there is a fault-free preprocessing step of \(D^{\Theta(f)}\) rounds._

This extends the \(1\)-mobile-resilient compilation of [58]. It also matches the state-of-the-art of [40] for the \(f\)-static setting. While [58] also requires a fault-free preprocessing, [40] does not. To handle \(f=\Omega(\log n)\) faults, our key technical contribution is in providing a new compilation scheme which is based on _low-diameter tree packing_.
For a graph with edge connectivity \(k\), the _tree-packing-diameter_ of the graph is measured by the minimum diameter \(\mathsf{D_{TP}}\) such that one can decompose the graph into \(\Omega(k/\log n)\) near6 edge-disjoint spanning trees (a.k.a. tree packing) of diameter at most \(\mathsf{D_{TP}}\). We show that given a \(\mathsf{D_{TP}}\)-diameter tree-packing for \(k=\Theta(f\log n)\), any (fault-free) algorithm can become \(f\)-mobile-resilient with a round overhead of \(\widetilde{O}(\mathsf{D_{TP}})\).

Footnote 6: In this context, near edge-disjoint means that each edge \(e\in E\) appears in at most \(\tilde{O}(1)\) many trees in the packing.

**Theorem 1.5**.: _Given a \(\mathsf{D_{TP}}\)-diameter tree-packing with \(k=\Theta(f\log n)\) trees, any \(r\)-round algorithm \(\mathcal{A}\) can be compiled into an \(r^{\prime}\)-round \(f\)-mobile-resilient algorithm \(\mathcal{A}^{\prime}\) where \(r^{\prime}=\widetilde{O}(\mathsf{D_{TP}})\)._

**Useful Applications.** Theorem 1.5 leads to several applications of interest, most notably a general \(\Theta(n)\)-mobile compiler in the classical CONGESTED CLIQUE model [52], where the underlying communication graph is a clique.

**Theorem 1.6** (Mobile-Resilient Compilers in the Congested Clique).: _Any \(r\)-round algorithm in the CONGESTED CLIQUE model can be compiled against \(\Theta(n)\)-mobile adversaries using \(\widetilde{O}(r)\) rounds._

This theorem requires no preprocessing step, as the clique configuration trivially defines a tree packing of diameter \(2\). Our second application is for expander graphs, for which we compute, in the \(f\)-mobile setting, a (weaker) variant of tree packing, which provides the following:

**Theorem 1.7** (Mobile-Resilient Compilers for Expander Graphs).: _Assume \(G\) is a \(\phi\)-expander with minimum degree \(k=\widetilde{\Omega}(1/\phi^{2})\). Then any \(r\)-round algorithm \(\mathcal{A}\) can be compiled into an \(f\)-mobile-resilient algorithm \(\mathcal{A}^{\prime}\) for \(f=\widetilde{O}(k\phi)\) that runs in \(\widetilde{O}(r/\phi)\) CONGEST rounds._

Finally, Theorem 1.5 also provides compilers for general graphs, in which the round overhead depends (up to poly-log factors) on the (instance-)optimal length of \(k\) edge-disjoint paths between neighboring pairs in the given graph. This is in contrast to prior work (e.g., [40]), which competes with the worst-case bound on this length.

**Even Stronger Adversaries: Resilience with Bounded Round-Error-Rate.** Finally, we extend our \(f\)-mobile compilation scheme to the stronger setting in which the adversary is allowed to corrupt a total of \(fr\) edges in an \(r\)-round algorithm, that is, at most \(f\) edges per round on _average_. By using the rewind-if-error technique from interactive coding [69], we match the round overhead provided for the \(f\)-mobile setting. This also provides stronger formulations of Theorems 1.6 and 1.7. For example, for the CONGESTED CLIQUE model, one can compile an \(r\)-round algorithm in \(\widetilde{O}(r)\) rounds, while tolerating a total of \(\widetilde{\Theta}(r\cdot n)\) corruptions.

### 1.2 Technical Overview

#### 1.2.1 Perfect-Security with Mobile Adversaries

Simulating a given \(f\)-static-secure \(r\)-round algorithm \(\mathcal{A}\) securely in the \(f\)-mobile setting is based on the following observation: Assume that all but a subset of \(f\) edges, denoted as \(F^{*}\), hold \(r\) secret random messages that are hidden from the adversary.
That is, assume that for every \((u,v)\in E\setminus F^{*}\), \(u\) and \(v\) hold \(R_{1}(u,v),\ldots,R_{r}(u,v)\) random messages, which the adversary does not know. Then, one can simulate \(\mathcal{A}\) in a round-by-round manner, where in round \(i\), each \(u\) sends \(m_{i}(u,v)\oplus R_{i}(u,v)\) to each neighbor \(v\), where \(m_{i}(u,v)\) is the message that \(u\) sends to \(v\) in round \(i\) of Alg. \(\mathcal{A}\). We then claim that the resulting compiled algorithm, \(\mathcal{A}^{\prime}\), is \(f\)-mobile secure. Observe that all the messages of \(\mathcal{A}^{\prime}\) exchanged over the edges of \(E\setminus F^{*}\) are distributed uniformly at random, in the eyes of the adversary. We then use the \(f\)-static security guarantees of \(\mathcal{A}\) to show that the information exchanged over \(F^{*}\), where \(|F^{*}|\leq f\), leaks no information either. We therefore conclude that our key task is to provide all but at most \(f\) neighboring pairs with a sufficiently large pool of secret keys, in the presence of the \(f\)-mobile adversary. This task is captured by the neat formulation of the _Bit-Extraction_ problem introduced by Chor et al. [16]. In this problem, it is desired to extract random bits from several bits, where a bounded number of these bits are controlled by an adversary and the rest are uniformly distributed. To improve upon the extra \(f\) factor overhead (when insisting on \(f\)-mobility, see Thm. 1.2), we show that a white-box combination of the Bit-Extraction procedure of Chor et al. [16] with the framework of [41, 42] yields \(f\)-mobile algorithms with the same asymptotic round complexity. As an appetizer, we provide the following very simple, yet at first glance surprising, observation which serves as the basis for adapting [41, 42] to the \(f\)-mobile setting. **Key Observation: Mobile-Secure Unicast is Easy.** At the heart of the algorithms of [41, 42] lies a (static) secure unicast procedure of Jain [45]. This procedure allows a given pair of nodes \(s,t\) to exchange a secret message in \(O(D)\) rounds, provided that the set of edges \(F\) controlled by the _static_ adversary does not disconnect \(s\) and \(t\). A remarkable property of this algorithm is its _lightness_: exactly one message is exchanged along each of the graph edges (throughout the algorithm). This leads to a very simple mobile compilation: Let all neighbors \(u,v\) exchange a random message \(R(u,v)\) within a single round. Then, simulate Jain's algorithm in a round-by-round manner where the messages are encrypted with the \(\{R(u,v)\}\) keys. It is then easy to prove perfect security provided that the following minimal condition holds. Let \(F_{i}\) be the edges controlled by the adversary in round \(i\). Then, security holds provided only that \(F_{1}\) does not disconnect \(s\) and \(t\), even if \(F_{i}=E\) for every \(i\geq 2\). This exercise illustrates that mobile security is _easy_ when the given static-secure algorithm has low _congestion_. Thm. 1.2 allows one to handle the general case of _arbitrary_ congestion. #### 1.2.2 Resilience with Mobile Byzantine Adversaries We now turn to consider the considerably more challenging task of providing resilience in the presence of an \(f\)-mobile byzantine adversary. Unlike the mobile security setting, our simulation translates any fault-free algorithm into an \(f\)-mobile-resilient algorithm. This also provides an alternative approach for the \(f\)-static setting, in the regime where \(f=\Omega(\log n)\). 
The main application of our technique is a general compiler in the CONGESTED CLIQUE model, which can handle \(\Theta(n)\) mobile byzantine faults (in every round!) while paying only a poly-logarithmic overhead in the number of rounds. To illustrate our ideas, we take a gradual approach in terms of the delta w.r.t. prior work. **Handling \(f=O(1)\) Mobile Faults with Fault-Tolerant (FT) Cycle Covers.** Patra et al. [60] presented an \(f\)-mobile-resilient algorithm that allows a node pair \(s,t\) to exchange a message \(m\). Their algorithm is based on sending \(m\), in a pipeline manner, along \(2f+1\) edge-disjoint \(s\)-\(t\) paths, for a sufficient number of rounds (a toy simulation of the underlying counting argument is sketched below). Note that the length of these paths can be bounded by \(D^{\Theta(f)}\) where \(D\) is the diameter of the graph, see e.g., [56]. To simulate a fault-free algorithm \(\mathcal{A}\) in the \(f\)-mobile-byzantine setting, it is desired to employ the solution of [60] for all neighboring pairs \(u,v\). A naive application leads to a super-linear round complexity, in the worst case, as a single edge might appear on the \(u\)-\(v\) path collection of potentially \(\Omega(n)\) many \(u,v\) pairs. This _congestion_ barrier is mitigated by the notion of fault-tolerant (FT) cycle covers [58, 40]. Informally, a \(k\)-FT cycle cover is a collection of cycles such that each edge \((u,v)\) is _covered_ by \((k-1)\) cycles that are edge-disjoint except for the common edge \((u,v)\). [58] and [40] showed that any \(k\) edge-connected \(D\)-diameter graph admits a \(k\)-FT cycle cover such that: (i) the largest cycle length is \(D^{\Theta(k)}\) and (ii) the largest (edge) overlap between the cycles is \(D^{\Theta(k)}\). Employing the algorithm of [60] for each neighboring pair \(u,v\) on top of a \(k\)-FT cycle cover for \(k=2f+1\) allows us to compile any round of a given fault-free algorithm within \(D^{\Theta(f)}\) rounds. The key limitation of this technique is in handling a larger number of faults. It is easy to show (by an averaging argument) that for any graph, any \(k\)-FT cycle cover induces a cycle overlap of \(\Omega(k)\). Therefore, providing \(\Theta(n)\)-mobile CONGESTED CLIQUE compilers with \(\widetilde{O}(1)\) overhead calls for a new approach. **Handling \(f=\Omega(\log n)\) Mobile Faults with Low-Depth Tree Packing.** The notion of low-diameter tree packing, introduced by Chuzhoy, Parter and Tan [17], decomposes the graph into multiple near edge-disjoint trees of bounded depth. A graph \(G\) is \((k,\mathsf{D_{TP}})\)-connected if for every pair \(u,v\) there is a collection of \(k\) edge-disjoint paths of length at most \(\mathsf{D_{TP}}\). [17] presented a centralized construction that decomposes every \((k,\mathsf{D_{TP}})\)-connected graph into \(\Omega(k/\log n)\) near edge-disjoint spanning trees of depth \(O(\mathsf{D_{TP}}\cdot\log n)\). Our key result provides an \(f\)-mobile-resilient compilation of any fault-free algorithm while paying a round overhead of \(\widetilde{O}(\mathsf{D_{TP}})\), given a distributed knowledge7 of a \(\mathsf{D_{TP}}\)-diameter tree packing with \(k=\widetilde{O}(f)\). We first explain a strategy for obtaining a round overhead of \(\widetilde{O}(\mathsf{D_{TP}}+f)\). Footnote 7: By distributed knowledge we mean that each node knows its parent in each of the trees. 
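To make the redundancy argument behind these path- and tree-based compilers concrete, the following toy Python simulation (an illustrative sketch of ours, not the actual protocol of [60]; it abstracts away path delays, pipelining, and the cycle-cover machinery) shows why repeating a message over \(2f+1\) edge-disjoint paths defeats an \(f\)-mobile adversary: in each round at most \(f\) of the \(2f+1\) arriving copies can be corrupted, so the correct value always retains a strict majority among all received copies.

```python
import random
from collections import Counter

def majority_over_disjoint_paths(message: int, f: int, rounds: int, seed: int = 0) -> int:
    """Toy model: 2f+1 edge-disjoint s-t paths each deliver one copy of
    `message` per round; an f-mobile adversary corrupts the copies on at most
    f paths per round (possibly different paths in different rounds). The
    receiver outputs the majority value over all received copies."""
    rng = random.Random(seed)
    k = 2 * f + 1                                   # number of edge-disjoint paths
    received = Counter()
    for _ in range(rounds):
        corrupted_paths = set(rng.sample(range(k), f))  # the adversary "moves"
        for path in range(k):
            copy = rng.randrange(10**9) if path in corrupted_paths else message
            received[copy] += 1
    # Correct copies: (f+1)*rounds; corrupted copies: at most f*rounds, so the
    # true message always wins a strict majority.
    return received.most_common(1)[0][0]

if __name__ == "__main__":
    assert majority_over_disjoint_paths(message=42, f=3, rounds=10) == 42
```

The same counting argument drives the FT-cycle-cover compilation: every family of \(2f+1\) routes that are pairwise edge-disjoint guarantees a correct majority, regardless of which \(f\) edges the adversary controls in any particular round.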
**Compilation with a Round Overhead of \(\widetilde{O}(\mathsf{D_{TP}}+f)\).** It is instructive to first explain the simulation in the \(f\)-static setting, when given a collection of \(k=\Omega(f\log n)\) nearly edge-disjoint spanning trees of depth \(\widetilde{O}(\mathsf{D_{TP}})\). We root all trees at some root node \(v_{r}\). Consider round \(i\) of the given fault-free algorithm \(\mathcal{A}\), and let \(m_{i}(u,v)\) be the message sent by \(u\) to \(v\) in that round, for every (directed) edge \((u,v)\in E\). At the start of the simulated round \(i\), we let all nodes exchange the \(\{m_{i}(u,v)\}_{(u,v)\in E}\) messages, as in Alg. \(\mathcal{A}\). Let \(m^{\prime}_{i}(u,v)\) be the message received by \(v\) from \(u\) in that round. As the adversary corrupts at most \(f\) bidirectional edges, it might be that \(m^{\prime}_{i}(u,v)\neq m_{i}(u,v)\) for at most \(2f\) ordered pairs \((u,v)\). We call a message \(m_{i}(u,v)\) a _mismatch_ if \(m^{\prime}_{i}(u,v)\neq m_{i}(u,v)\); hence we have at most \(2f\) mismatches that we need to "correct". We introduce a message-correction procedure which is based on the powerful tool of _sparse recovery sketches_, commonly employed in the context of the _turnstile streaming_ model [20]. In that setting, we are given a stream of elements which arrive with some (positive or negative) frequency, and our goal at the end of the stream is to output all elements with non-zero frequency. Detecting \(s\) elements can be done with a memory of \(\widetilde{O}(s)\) bits. Consider a (turnstile) stream \(S\) formed by adding each of the _sent_ messages \(m_{i}(u,v)\) with frequency \(1\), and each of the received messages \(m^{\prime}_{i}(u,v)\) with frequency \((-1)\). Since all the messages such that \(m_{i}(u,v)=m^{\prime}_{i}(u,v)\) cancel out, we are left with only the sent and received copies of the mismatches. We utilize the mergeability property of the sparse recovery sketches, and aggregate local sketch information on each of the trees, in parallel. Initially, each node \(v\) locally computes a sketch \(\sigma(v)\) of its incoming and outgoing messages8. By aggregating these \(\sigma(v)\) values over the trees, the root \(v_{r}\) obtains the final sparse recovery sketch, and detects the mismatches. Since the majority of the trees do not have a corrupted edge (in the \(f\)-static setting), the sketch returned by the majority of the trees contains the correct list of mismatches, and we can broadcast this list through the trees to have all nodes correct their received messages. Since the sparse recovery sketches are implemented with a sparsity parameter of \(s=\Theta(f)\), this computation can be implemented in \(O(f+\mathsf{D_{TP}})\) rounds using a pipelining argument. Footnote 8: I.e., it computes a stream in which \(m_{i}(v,u)\) is added with frequency \(1\), and \(m^{\prime}_{i}(u,v)\) with frequency \(-1\), for every \(u\in N(v)\). If implemented naively in a mobile setting, the adversary may alter the result of \(f\) trees _per_ round, and eventually corrupt at least a single edge in each of the given \(\Theta(f\log n)\) trees9. This is the critical point where _interactive coding_ comes to the rescue. We use the compiler of [65, 43], denoted hereafter by _RS-compiler_, to compile the sketch aggregation procedure in each of the trees. The RS-compiler is designed for a setting in which the adversary can maliciously corrupt an \(O(1/m)\) fraction of the total communication, where \(m\) is the number of graph edges. 
In our context, this compiler is applied to a tree subgraph, hence tolerating an \(O(1/n)\) fraction of corrupted messages with a round overhead of \(O(1)\). Since the \(f\)-mobile adversary may only corrupt \(O(f\log n)\) many trees in any given round, for most of the RS-compiled protocols, the total fraction of corrupted communication is \(o(1/n)\). Consequently, the majority of the RS-compiled protocols are _successful_. Footnote 9: It can corrupt \(f\) edges in each round, and each edge appears on \(O(\log n)\) many trees. On the conceptual level, the RS-compilers allow us to utilize the collection of \(\Omega(f\log n)\) near edge-disjoint trees in an \(f\)-mobile setting in an _almost_ analogous manner to the \(f\)-static setting. We cannot guarantee that a majority of the trees are fault-free (as we could, in the static case), but we can still guarantee that a majority of the RS-compiled algorithms over these trees end successfully. This comes at the cost of increasing the edge-connectivity requirement by a constant factor which depends on the hidden constants of the RS-compilers. **Omitting the Dependency on \(f\).** The improved bound is obtained by replacing the sparse recovery sketches by \(\ell_{0}\)-sampling sketches, which have only \(\widetilde{O}(1)\) bits. The basic intuition for this procedure is the following: Given that we have \(\Omega(f\log n)\) many spanning trees, if each tree propagates \(O(\log n)\) uniformly random real mismatches (obtained by independent \(\ell_{0}\)-sampling sketches), then the root observes all real mismatches w.h.p. Note, however, that some of these observed mismatches might be _fake_, as the adversary might control some trees and introduce mismatches that are not obtained from the \(\ell_{0}\)-sketches. To overcome this, we set a minimal threshold \(\Delta\), and make the root node \(v_{r}\) ignore observed mismatches that are sampled by fewer than \(\Delta\) trees. The threshold \(\Delta\) should be set with care: high enough to filter out fake mismatches, but also sufficiently low to detect _many_ real mismatches. As we cannot expect to capture all \(2f\) (real) mismatches at once while producing no new mismatches, we have \(\ell=O(\log f)\) repetitions. At the beginning of each phase \(j\in[\ell]\), each node \(v\) recomputes its local \(\ell_{0}\)-sketches, based on the current estimate \(m^{\prime}_{i,j}(u,v)\) of its received messages, for each \(u\in N(v)\). Our goal is to reduce the number of mismatches by a constant factor in each phase, hence eventually correcting all mismatches within \(O(\log f)\) phases. For phase \(j\), we define a threshold \(\Delta_{j}=\widetilde{O}(2^{j})\), and the root node \(v_{r}\) only considers the mismatches that received support from at least \(\Delta_{j}\) trees, and ignores the rest. These highly-supported mismatches are downcast from \(v_{r}\) to all the nodes, on each of the trees, in parallel. Assume, for now, that all the nodes correctly received this information from \(v_{r}\). Then, one can show by induction that the number of unfixed (real) mismatches drops by a constant factor per phase. As the number of real mismatches decreases, each real mismatch will be sampled more times by the good trees, which allows us to increase the support threshold \(\Delta_{j}\) accordingly. The remaining caveat is the assumption on correctly receiving the root's information. 
This might not hold, in general, as the adversary may introduce \(f\) incorrect mismatches in each phase when downcasting the sketch information from the root. To overcome this last hurdle, we combine error-correcting codes with the RS-compilers. The root \(v_{r}\) encodes the \(O(f)\) detected mismatches by a codeword \(w\). It splits \(w\) into \(\Theta(f)\) shares and broadcasts each share on some tree using the RS-compiler. This guarantees that a large fraction of the trees broadcast their shares correctly, and each node can locally recover the list of observed mismatches. **Handling Adversaries with Bounded Round-Error-Rates.** In Theorem 4.1, we consider a stronger setting in which the adversary can corrupt "on average" \(f\) messages in each round. In particular, the adversary might corrupt a large number of messages in given rounds. The compiler is based on the _rewind-if-error_ technique [69], originally introduced for the two-party setting. On a high level, in this paradigm parties keep verifying whether or not errors have occurred so far in the protocol execution. If there is no indication of errors, the parties continue to simulate the next round. Otherwise, they _rewind_ by omitting the possibly incorrect last messages, and repeat. The key challenge is in detecting the errors and simultaneously deciding whether or not to rewind. We provide a network extension of this paradigm that is somewhat different from the approach taken in prior works, e.g., in [43]. Recall that the classical interactive coding setting allows a bounded communication-error-rate, while we account for a bounded round-error-rate. At each given point of our compilation, each node \(u\) simulates a round \(t_{u}\) of Alg. \(\mathcal{A}\), where possibly \(t_{u}\neq t_{v}\) for distinct nodes \(u,v\). Once errors are detected, only the nodes with the largest \(t_{u}\) values apply a rewind step. The analysis is based on defining a potential function which provides global progress guarantees for the entire network, over time. ### Preliminaries For an integer \(a\), we denote by \([a]=\{0,1,\ldots,a-1\}\). For a matrix \(M\), let \(M_{ij}\) denote its value at index \((i,j)\). **Definition 1** (Vandermonde Matrix).: _Given a field \(\mathbb{F}\), a \(k\times n\) matrix \(A\) is called a Vandermonde matrix if there exist \(k\) distinct non-zero field elements \(\alpha_{1},\ldots,\alpha_{k}\in\mathbb{F}\) such that \(A_{ij}=\alpha_{i}^{j-1}\), where the multiplication is defined by the multiplication operator of the field._ **Error-Correcting Codes.** We recall the definition of error-correcting codes and the standard Reed-Solomon code construction. We use the notion of _Hamming_ distance from coding theory and then define error-correcting codes with their various parameters. **Definition 2** (Distance).: _Let \(\Sigma\) be a finite set and \(\ell\in\mathbb{N}\); then the distance between \(x,y\in\Sigma^{\ell}\) is defined by \(\operatorname{Hamm}(x,y)=|\{i\in[\ell]\mid\ x_{i}\neq y_{i}\}|\)._ **Definition 3** (Error Correcting Code).: _Let \(\Sigma\) be a finite set. For every \(k\in\mathbb{N}\), a subset \(C\subseteq\Sigma^{k}\) is said to be an error correcting code with block length \(k\), message length \(\ell\), and relative distance \(\delta\) if \(|C|\geq|\Sigma|^{\ell}\) and for every distinct \(x,y\in C\), \(\operatorname{Hamm}(x,y)\geq\delta\cdot k\). We then denote \(\operatorname{Hamm}(C)=\delta\). 
Moreover, we say that \(C\) is an \([\ell,k,\delta]_{q}\) code to mean that \(C\) is a code defined over an alphabet of size \(q\) and is of message length \(\ell\), block length \(k\), and relative distance \(\delta\). The elements of \(C\) are denoted as codewords._ **Theorem 1.8** (Reed-Solomon Codes [66]).: _For every prime power \(q=p^{m}\), message length \(\ell\), and block length \(k\) with \(\ell\leq k\leq q\), there exists an \([\ell,k,\delta_{C}]_{q}\) code with \(\delta_{C}=(k-\ell+1)/k\)._ (A toy instantiation of this code is sketched below.) **Graph Notations.** For a graph \(G=(V,E)\) and \(u\in V\), let \(N(u)\) denote the neighbors of \(u\) in \(G\). For a given \(G\)-subgraph family \(\mathcal{G}=\{G_{1},\ldots,G_{k}\}\), let \(\operatorname{load}(e)=|\{G_{i}\in\mathcal{G}\ \mid\ e\in G_{i}\}|\) for every \(e\in E(G)\) and \(\operatorname{load}(\mathcal{G})=\max_{e\in E(G)}\operatorname{load}(e)\). We say that a subgraph family \(\mathcal{G}\) is _known in a distributed manner_ if each \(u\in V(G_{i})\) knows an ID of \(G_{i}\) and its incident edges in \(G_{i}\), for every \(G_{i}\in\mathcal{G}\). For a rooted spanning tree \(T\) and a node \(v\in V\), we denote by \(\operatorname{Children}(v,T)\subseteq V\) the set of child nodes of \(v\) in \(T\). For a vertex set \(A\subseteq V\), we denote by \(E(A)\subseteq E\) the set of edges incident to \(A\) in \(G\). For vertex sets \(A,B\subseteq V\), we denote by \(E(A,B)\subseteq E\) the set of edges in \(G\) with one endpoint in \(A\) and one endpoint in \(B\). We say that a graph \(G\) is a \(\phi\)-_expander_ if for any set \(S\subseteq V\) it holds that \(|E(S,V\setminus S)|/\min(|E(S)|,|E(V\setminus S)|)\geq\phi\). This is also known as the graph \(G\) having _conductance_ \(\geq\phi\). ### Model, Security and Resilience Notions **The Adversarial Communication Model.** Throughout, we consider the adversarial \(\mathsf{CONGEST}\) model introduced by [40, 39]. The synchronous communication follows the standard \(B\)-\(\mathsf{CONGEST}\) model, where initially each node knows the IDs of its neighbors in the graph \(G\). This is usually referred to as the KT1 setting [1]. In each round, nodes can exchange \(B\)-bit messages on all graph edges, for \(B=O(\log n)\). Some of our results hold in the \(\mathsf{CONGESTED}\) \(\mathsf{CLIQUE}\) model [52], in which _each_ pair of nodes (even non-neighboring) can exchange \(O(\log n)\) bits in every round. We study two main (edge) adversarial settings, namely, eavesdroppers and byzantine adversaries. All adversarial settings considered in this paper assume an _all-powerful_ adversary that controls subsets of _edges_ whose identity is not known to the nodes. The adversary is allowed to know the topology of the graph \(G\) and the algorithm description run by the nodes. It is oblivious, however, to the randomness of the nodes. In the case of _active_ adversaries (e.g., byzantine), the adversary is allowed to send \(B\)-bit messages on each of the edges it controls (in each direction), in every round of the computation. In the static setting, the adversary controls a fixed set of at most \(f\) edges, while in the mobile setting, it is allowed to control a distinct set of \(f\) edges in each round. In the context of (passive) eavesdroppers, we aim at providing perfect-security guarantees. For byzantine adversaries, we strive for correctness. It would be interesting to extend our techniques to provide both correctness and security guarantees against byzantine adversaries. 
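As a toy instantiation of Definition 3 and Theorem 1.8, the following self-contained Python sketch encodes a message with a Reed-Solomon code over a small prime field and decodes it by brute-force nearest-codeword search. The field size, code parameters, and exhaustive decoder are illustrative assumptions of ours; the constructions in this paper take \(q=2^{O(\log n)}\) and do not rely on this particular implementation.

```python
from itertools import product

Q = 13  # toy prime field F_q; the paper's constructions take q = 2^{O(log n)}

def rs_encode(msg, k):
    """Reed-Solomon encoding: view msg = (m_0, ..., m_{l-1}) as the coefficients
    of a degree-(l-1) polynomial and output its evaluations at the k distinct
    nonzero field points 1, ..., k."""
    return [sum(m * pow(a, j, Q) for j, m in enumerate(msg)) % Q
            for a in range(1, k + 1)]

def hamming(x, y):
    return sum(xi != yi for xi, yi in zip(x, y))

def nearest_codeword_decode(word, l, k):
    """Brute-force nearest-codeword decoding (fine for a toy; real RS decoders
    run in polynomial time)."""
    return min(product(range(Q), repeat=l),
               key=lambda m: hamming(rs_encode(m, k), word))

if __name__ == "__main__":
    l, k = 2, 9                        # relative distance (k - l + 1)/k = 8/9
    codeword = rs_encode((5, 7), k)
    corrupted = list(codeword)
    for i in range(3):                 # up to floor((k - l) / 2) = 3 errors
        corrupted[i] = (corrupted[i] + 1) % Q
    assert nearest_codeword_decode(corrupted, l, k) == (5, 7)
```

With \(\ell=2\) and \(k=9\), the relative distance is \(8/9\), so up to three corrupted coordinates are uniquely decodable; this mirrors the distance argument used later in the proof of Lemma 3.6.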
We next formally define the desired security and resilience guarantees under these adversarial settings, respectively. **Perfect Security with Eavesdroppers.** In the _static_ adversarial setting, a (computationally unbounded) eavesdropper adversary controls a fixed set of edges \(F^{*}\) in the graph. The nodes do not know the identity of \(F^{*}\), but rather only a bound on \(|F^{*}|\). In the _mobile_ eavesdropper setting, the adversary is allowed to control a distinct subset of edges \(F_{i}\) in each round \(i\). Let \(\mathcal{A}\) be a randomized algorithm running on a graph \(G\). Denote the input domain of the algorithm \(\mathcal{A}\) by \(\mathcal{X}\). We say that an eavesdropper is _listening_ over an edge \((u,v)\) in round \(i\) of Alg. \(\mathcal{A}\) if the eavesdropper observes the message that \(u\) sent to \(v\) and the message that \(v\) sent to \(u\) in round \(i\). For a subset of edges \(F^{*}\subseteq E\) and input \(x\in\mathcal{X}\), let \(\operatorname{View}_{G,A}(F^{*},x)\) be a random variable vector indicating the messages of the edges of \(F^{*}\) throughout the execution of \(\mathcal{A}\) given input \(x\). Algorithm \(\mathcal{A}\) is said to be _\(f\)-static-secure_ against an eavesdropper adversary if for every choice of \(F^{*}\) with \(|F^{*}|\leq f\), and every possible configuration of input values \(x_{1},x_{2}\in\mathcal{X}\), the following two distributions are equivalent: \(\operatorname{View}_{G,A}(F^{*},x_{1})\equiv\operatorname{View}_{G,A}(F^{*},x_{2})\). This notion is known as _perfect security_. For an \(r\)-round algorithm \(\mathcal{A}\), input \(x\in\mathcal{X}\) and a collection of \(r\) subsets of edges \(F_{1},\ldots,F_{r}\subseteq E\), let \(\operatorname{View}_{G,A}((F_{1},\ldots,F_{r}),x)\) be a random variable vector for the messages exchanged over each edge \(e\in F_{i}\) at round \(i\) given the input \(x\), for all \(1\leq i\leq r\). Alg. \(\mathcal{A}\) is _\(f\)-mobile-secure_ in a graph \(G\) if for any \(F_{1},\ldots,F_{r}\subseteq E\) of size \(|F_{i}|\leq f\) and for any inputs \(x_{1},x_{2}\in\mathcal{X}\) it holds that \(\operatorname{View}_{G,A}((F_{1},\ldots,F_{r}),x_{1})\equiv\operatorname{View}_{G,A}((F_{1},\ldots,F_{r}),x_{2})\). **Resilience with Byzantine Adversaries.** The graph edges are controlled by a computationally unbounded _byzantine_ adversary. Unlike the eavesdropper setting, the adversary is allowed to see the messages sent through _all_ graph edges in each round, but can manipulate the messages exchanged over a bounded subset of controlled edges. An _\(f\)-static_ byzantine adversary can manipulate the messages sent through a fixed \(F^{*}\subseteq E\) where \(|F^{*}|\leq f\). An _\(f\)-mobile_ byzantine adversary can manipulate at most \(f\) edges \(F^{*}_{i}\) in _each_ round \(i\), where possibly \(F^{*}_{i}\neq F^{*}_{j}\) for \(i\neq j\). We say that an algorithm is _\(f\)-static_ (resp., _mobile_) _resilient_ if its correctness holds in the presence of an \(f\)-static (resp., mobile) byzantine adversary. In the stronger setting of an _\(f\)-round-error-rate_ adversary, the adversary is allowed to corrupt at most \(f\) edges per round, on _average_. That is, for an \(r\)-round algorithm the adversary is allowed to corrupt a total of \(f\cdot r\) edges. **Distributed Scheduling.** The congestion of an algorithm \(\mathcal{A}\) is defined by the worst-case upper bound on the number of messages exchanged through a given graph edge when simulating \(\mathcal{A}\). 
Throughout, we make extensive use of the following random delay approach of [51], adapted to the \(\mathsf{CONGEST}\) model. **Theorem 1.9** ([35, Theorem 1.3]).: _Let \(G\) be a graph and let \(\mathcal{A}_{1},\ldots,\mathcal{A}_{m}\) be \(m\) distributed algorithms, where each algorithm takes at most dilation rounds, and where for each edge of \(G\), at most \(\operatorname{cong}\) messages need to go through it, in total over all these algorithms. Then, there is a randomized distributed algorithm that w.h.p. runs all the algorithms in \(\widetilde{O}(\operatorname{cong}+\operatorname{dilation})\) rounds._ ### Useful Tools **Lemma 1.10** (Chernoff Bound).: _Let \(X_{1},\ldots,X_{n}\) be i.i.d. random variables over the values \(\{0,1\}\). Let \(X=\sum_{i}^{n}X_{i}\) and \(\mu=E(X)\). Then for any \(0<\delta<1\), \(\Pr(X\leq(1-\delta)\mu)\leq e^{-\mu\delta^{2}/2}\)._ **Families of bounded-independence hash functions.** Some of our algorithms are based on generating \(c\)-wise independent random variables from a short random seed. For that purpose, we use the concept of families of bounded-independence hash functions: **Definition 4**.: _For \(N,L,c\in\mathbb{N}\) such that \(c\leq N\), a family of functions \(\mathcal{H}=\{h:[N]\to[L]\}\) is \(c\)-wise independent if for all distinct \(x_{1},\ldots,x_{c}\in[N]\), the random variables \(h(x_{1}),\ldots,h(x_{c})\) are independent and uniformly distributed in \([L]\) when \(h\) is chosen uniformly at random from \(\mathcal{H}\)._ **Lemma 1.11**.: _[Corollary 3.34 in [73]] For every \(a\), \(b\), \(c\), there is a family of \(c\)-wise independent hash functions \(\mathcal{H}=\{h:\{0,1\}^{a}\to\{0,1\}^{b}\}\) such that choosing a random function from \(\mathcal{H}\) takes \(c\cdot\max\{a,b\}\) random bits, and evaluating a function from \(\mathcal{H}\) takes \(\operatorname{poly}(a,b,c)\) computation._ **Roadmap.** The paper is split into two parts: security against an eavesdropper adversary, and resilience towards a byzantine adversary. In the first part, we prove Theorem 1.2 in Section 2, and Theorem 1.3 in Appendix A. In the second part, we prove Theorem 1.5, Theorem 1.6 and Theorem 1.7 in Section 3. Results for the round-error-rate setting of a byzantine adversary are proven in Section 4. Theorem 1.4 is proven in Section 5. ## 2 Security with Mobile Eavesdropper Adversaries In this section we prove Theorem 1.2 by providing a round-efficient simulation that converts an \(r\)-round \(f\)-static-secure algorithm into an \(r^{\prime}\)-round \(f^{\prime}\)-mobile-secure algorithm. The main lemma optimizes the ratios \(r^{\prime}/r\) and \(f^{\prime}/f\) by reducing to the Bit-Extraction problem introduced by Chor et al. [16]. **The Bit-Extraction Problem and \(t\)-Resilient Functions.** Let \(n,m,t\) be arbitrary integers. In [16], Chor et al. considered the adversarial situation where, for a given vector \(x\in\{0,1\}^{n}\), the adversary knows \(t\) entries in \(x\) while the remaining \(n-t\) entries are uniformly distributed in \(\{0,1\}^{n-t}\). It is then required to output \(m\) uniform random bits that are completely hidden from the adversary. The question is how large \(m\) can be as a function of \(n\) and \(t\). **Definition 5**.: _Let \(f:\{0,1\}^{n\cdot k}\to\{0,1\}^{m\cdot k}\) be a function and \(\{y_{1},\ldots,y_{n}\}\) be a set of random variables assuming values in \(\{0,1\}^{k}\). 
The function \(f\) is said to be \(k\)-unbiased with respect to \(T\subset\{1,2,\ldots,n\}\) if the random variable \(f(y_{1},y_{2},\ldots,y_{n})\) is uniformly random on \(\{0,1\}^{m\cdot k}\) when \(\{y_{i}\ \mid\ i\notin T\}\) is a set of independent uniformly random variables on \(\{0,1\}^{k}\) and \(\{y_{i}\ \mid\ i\in T\}\) is a set of constant random variables. A function \(f:\{0,1\}^{n\cdot k}\to\{0,1\}^{m\cdot k}\) is \((t,k)\)-resilient if for every \(T\subseteq\{1,2,\ldots,n\}\) of cardinality \(t\), \(f\) is \(k\)-unbiased w.r.t. \(T\)._ Let \(B_{k}(n,t)\) be the maximum \(m\) such that there exists a \((t,k)\)-resilient function \(f:\{0,1\}^{nk}\to\{0,1\}^{mk}\). In the (block) Bit-Extraction Problem, given \(n,t\), it is required to determine \(B_{k}(n,t)\). **Theorem 2.1** ([16]).: _For \(n\leq 2^{k}-1\) it holds that \(B_{k}(n,t)=n-t\). Moreover, the following explicit function \(f\) obtains this bound: let \(M\) be an arbitrary \(n\times(n-t)\) Vandermonde matrix over the finite field \(\mathbb{F}_{2^{k}}\). Then for any random variables \(x_{1},\ldots,x_{n}\in\mathbb{F}_{2^{k}}\) such that at least \(n-t\) of the \(x_{i}\)'s are uniform random variables on \(\mathbb{F}_{2^{k}}\) and the rest are constants, the values \(y_{1},\ldots,y_{n-t}\in\mathbb{F}_{2^{k}}\), defined as \(y_{i}=\sum_{j=1}^{n}M_{ji}\cdot x_{j}\), where operations are made over the field \(\mathbb{F}_{2^{k}}\), are independent uniform random variables on \(\mathbb{F}_{2^{k}}\)._ For further applications of Vandermonde matrices in static byzantine settings, see [4]. **The Static \(\to\) Mobile Simulation.** Assume we are given an \(r\)-round protocol \(\mathcal{A}\) which is \(f\)-static-secure, with round complexity \(r\in\operatorname{poly}(n)\), and an integer parameter \(t\). Our goal is to construct an \(r^{\prime}=2r+t\) round algorithm which is \(f^{\prime}=\Theta((f\cdot t)/(r+t))\)-mobile-secure. Let \(\mathbb{F}_{q}\) be a finite field of size \(q=2^{O(\log n)}\) and let \(M\) be an arbitrary \((r+t)\times r\) Vandermonde matrix over the field \(\mathbb{F}_{q}\). Assume all messages in \(\mathcal{A}\) are encoded as elements of \(\mathbb{F}_{q}\). Algorithm \(\mathcal{A}^{\prime}\) has two phases: the first phase consists of \(\ell=r+t\) rounds, and the second has \(r\) rounds. In the first phase, in every round \(j=1,\ldots,r+t\), for each ordered neighboring pair \((u,v)\in E\), \(u\) sends to \(v\) a uniform random number \(R_{j}(u,v)\in\mathbb{F}_{q}\). At the end of this phase, each node \(u\) locally computes, for every \(1\leq i\leq r\), the values \(K_{i}(u,v)\) and \(K_{i}(v,u)\) for every neighbor \(v\), defined as \(K_{i}(u,v)=\sum_{j=1}^{r+t}M_{ji}\cdot R_{j}(u,v)\).10 The second phase simulates Alg. \(\mathcal{A}\) in a round-by-round fashion, where the \(i\)-round messages of \(\mathcal{A}\) are encrypted using the keys \(\{K_{i}(u,v)\}_{(u,v)\in E}\): In each round \(i=1,\ldots,r\), each node \(u\) sends to each neighboring node \(v\) the message \(m^{\prime}_{i}(u,v)=m_{i}(u,v)+K_{i}(u,v)\), where \(m_{i}(u,v)\) is the message \(u\) sends \(v\) in the \(i\)'th round of \(\mathcal{A}\). Locally, each \(v\) decodes every received \(m^{\prime}_{i}(u,v)\) by applying \(m_{i}(u,v)=m^{\prime}_{i}(u,v)-K_{i}(u,v)\). Consequently, it is easy to see that each node \(u\) obtains the exact same messages as in Alg. \(\mathcal{A}\). Footnote 10: all \(+,\times\) operations on field elements are done over the field \(\mathbb{F}_{q}\) throughout the section. 
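For concreteness, the following Python sketch simulates both phases for a single ordered pair \((u,v)\): phase one draws the \(\ell=r+t\) random field elements \(R_{j}(u,v)\), the Vandermonde matrix compresses them into the \(r\) keys \(K_{i}(u,v)\) as in Theorem 2.1, and phase two encrypts each round-\(i\) message by adding \(K_{i}(u,v)\). It is an illustrative toy only: we work over a prime field rather than \(\mathbb{F}_{2^{k}}\), and all parameter values are arbitrary choices of ours.

```python
import random

Q = 2**31 - 1  # toy prime field F_q; the paper takes q = 2^{O(log n)}

def vandermonde(rows, cols):
    """M[j][i] = alpha_j^i for the distinct nonzero field elements
    alpha_j = j + 1, j = 0, ..., rows - 1 (Definition 1, 0-indexed)."""
    return [[pow(j + 1, i, Q) for i in range(cols)] for j in range(rows)]

def derive_keys(R, r):
    """Phase-1 post-processing: K_i = sum_j M_{ji} * R_j over F_q. By
    Theorem 2.1, if at most t of the l = r + t values R_j are known to the
    adversary, the r keys are uniform and independent from its viewpoint."""
    M = vandermonde(len(R), r)
    return [sum(M[j][i] * R[j] for j in range(len(R))) % Q for i in range(r)]

if __name__ == "__main__":
    rng = random.Random(7)
    r, t = 5, 3
    R = [rng.randrange(Q) for _ in range(r + t)]   # phase 1: l = r + t exchanged keys
    K = derive_keys(R, r)
    msgs = [rng.randrange(Q) for _ in range(r)]    # the messages m_1, ..., m_r of Alg. A
    cipher = [(m + k) % Q for m, k in zip(msgs, K)]          # phase 2: m'_i = m_i + K_i
    assert [(c - k) % Q for c, k in zip(cipher, K)] == msgs  # receiver recovers m_i
```

Running the sketch confirms that the receiver recovers all \(r\) messages exactly; the security argument, given in the proof below, is that whenever at most \(t\) of the \(\ell\) exchanged values leak, the derived keys remain uniform, so the second-phase ciphertexts reveal nothing.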
Proof of Theorem 1.2.: As correctness and running time follow immediately, we focus on showing that Algorithm \(\mathcal{A}^{\prime}\) is \(f^{\prime}\)-mobile-secure. Consider a simulation of \(\mathcal{A}^{\prime}\), and for every round \(i\in\{1,\ldots,r^{\prime}\}\), let \(F_{i}\) be the set of edges that the adversary eavesdrops in that round, where \(|F_{i}|\leq f^{\prime}\). We partition the edges of \(G\) into two classes \(E_{\text{good}}\) and \(E_{\text{bad}}\), depending on the total number of rounds in which a given edge has been eavesdropped by the adversary. The input parameter \(t\) serves as a threshold that determines the partitioning, as follows. For every edge \(e\), let \(R(e)=|\{i\in\{1,\ldots,\ell\}\ \mid\ e\in F_{i}\}|\) be the number of rounds \(i\in\{1,\ldots,\ell\}\) in which \(e\in F_{i}\). An edge \(e\) is called _good_ if \(R(e)\leq t\) and _bad_ otherwise. The set \(E_{\text{good}}\) (resp., \(E_{\text{bad}}\)) consists of all good (resp., bad) edges, i.e., \(E_{\text{good}}=\{e\in E\ \mid\ R(e)\leq t\}\) and \(E_{\text{bad}}=E(G)\setminus E_{\text{good}}\). By an averaging argument, \(|E_{\text{bad}}|\leq(f^{\prime}\cdot\ell)/(t+1)\), and since \(|E_{\text{bad}}|\) is an integer, \(|E_{\text{bad}}|\leq\lfloor(f^{\prime}\cdot\ell)/(t+1)\rfloor\leq f\) (using \(f^{\prime}\leq f\cdot(t+1)/\ell\)). For the special case of \(t\geq 2fr\) and \(f^{\prime}=f\), observe that the condition \(t\geq 2rf\) is equivalent to \(t\geq(t+r)/(1+1/(2f))=\ell/(1+1/(2f))\), which implies \(f\ell/(t+1)<f+1/2\) and hence \(\lfloor f\ell/(t+1)\rfloor\leq f\). Therefore, \(|E_{\text{bad}}|\leq\lfloor(f^{\prime}\cdot\ell)/(t+1)\rfloor=\lfloor f\ell/(t+1)\rfloor\leq f\). **Observation 2.2**.: _(i) For every \(e=(u,v)\in E_{\text{good}}\), it holds that \(\{K_{i}(u,v)\}_{i\in\{1,\ldots,r\}}\) are distributed uniformly at random in \(\mathbb{F}_{q}\). (ii) \(|E_{\text{bad}}|\leq f\)._ Proof.: (i) follows by a direct application of Theorem 2.1 and (ii) follows by a simple counting argument. The eavesdropper controls at most \(f^{\prime}\) edges in each round, and therefore in the first phase of \(\ell\) rounds, it controls at most \(f^{\prime}\ell\) edges. Therefore at most \((f^{\prime}\cdot\ell)/(t+1)\) edges are controlled for at least \(t+1\) rounds. Let \(\mathcal{X}\) be the input domain of \(\mathcal{A}\). Denote by \(\pi_{\mathcal{B}(x),i}(F)\) the messages sent over the edges in \(F\) at round \(i\) of algorithm \(\mathcal{B}\) with input \(x\). Throughout, we treat the messages in algorithms \(\mathcal{A}\) and \(\mathcal{A}^{\prime}\) as field elements in \(\mathbb{F}_{q}\). Assume by contradiction that \(\mathcal{A}^{\prime}\) is not \(f^{\prime}\)-mobile-secure. Then there exist some \(F_{1},\ldots,F_{r^{\prime}}\subseteq E\) of size at most \(f^{\prime}\), and inputs \(x_{1},x_{2}\in\mathcal{X}\) for which \[\operatorname{View}_{G,\mathcal{A}^{\prime}}((F_{1},\ldots,F_{r^{\prime}}),X=x_{1})\not\equiv\operatorname{View}_{G,\mathcal{A}^{\prime}}((F_{1},\ldots,F_{r^{\prime}}),X=x_{2}). \tag{1}\] Let \(P_{i}=F_{i}\cup E_{\rm bad}\) and \(P=(P_{1},\ldots,P_{r^{\prime}})\). 
It therefore also holds that: \[{\rm View}_{G,\mathcal{A}^{\prime}}((P_{1},\ldots,P_{r^{\prime}}),X=x_{1})\not\equiv{\rm View}_{G,\mathcal{A}^{\prime}}((P_{1},\ldots,P_{r^{\prime}}),X=x_{2}).\] Hence, in particular, there exist \(\alpha_{1},\ldots,\alpha_{r^{\prime}}\in(\mathbb{F}_{q})^{\leq 2f}\) such that: \[{\rm Pr}\left(\pi_{\mathcal{A}^{\prime}(x_{1}),1}(P_{1})=\alpha_{1},\ldots,\pi_{\mathcal{A}^{\prime}(x_{1}),r^{\prime}}(P_{r^{\prime}})=\alpha_{r^{\prime}}\right)\neq{\rm Pr}\left(\pi_{\mathcal{A}^{\prime}(x_{2}),1}(P_{1})=\alpha_{1},\ldots,\pi_{\mathcal{A}^{\prime}(x_{2}),r^{\prime}}(P_{r^{\prime}})=\alpha_{r^{\prime}}\right). \tag{2}\] That is, in our notation, \(\alpha_{i}\) is a vector of \(|P_{i}|\leq f^{\prime}+f\leq 2f\) field elements in \(\mathbb{F}_{q}\), where the \(j^{th}\) entry, namely \(\alpha_{i,j}\), specifies the message sent over the \(j^{th}\) edge in \(P_{i}\) in the \(i^{th}\) round. We also define two \(r^{\prime}\)-tuples of edge subsets \(P^{\rm good}=(F_{1}\cap E_{\rm good},\ldots,F_{r^{\prime}}\cap E_{\rm good})\) and \(P^{\rm bad}=(E_{\rm bad},\ldots,E_{\rm bad})\). For any \(r^{\prime}\)-tuple of edge subsets \(W=(W_{1}\subseteq P_{1},\ldots,W_{r^{\prime}}\subseteq P_{r^{\prime}})\), algorithm \(\mathcal{B}\), input \(x\in\mathcal{X}\) and indices \(1\leq i\leq j\leq r^{\prime}\), let \(Y_{i,j}(\mathcal{B}(x),W)\) denote the event where: \[\pi_{\mathcal{B}(x),i}(W_{i})=\alpha_{i}(W_{i}),\ldots,\pi_{\mathcal{B}(x),j}(W_{j})=\alpha_{j}(W_{j}),\] where \(\alpha_{j}(W_{j})\) denotes the sub-vector of \(\alpha_{j}\) restricted to the coordinates of \(W_{j}\). Since the communication in the first \(\ell\) rounds depends only on the private randomness of the nodes, and does not depend on the input \(X\) of Alg. \(\mathcal{A}\), for any such \(r^{\prime}\)-tuple \(W\), it holds that: \[{\rm Pr}\left(Y_{1,\ell}(\mathcal{A}^{\prime}(x_{1}),W)\right)={\rm Pr}\left(Y_{1,\ell}(\mathcal{A}^{\prime}(x_{2}),W)\right). \tag{3}\] By Theorem 2.1, all keys of the good edges \(\{K_{i}(u,v)\}_{(u,v)\in E_{\rm good},\,i\in\{1,\ldots,r\}}\) are i.i.d. uniformly distributed in \(\mathbb{F}_{q}\), conditioned on the transcript of the messages exchanged over11 \(P_{j}\) in round \(j\), for every \(j\in\{1,\ldots,\ell\}\). The security provided by the one-time pad (OTP) guarantees that all the messages exchanged over \(P^{\rm good}\) in the second phase are distributed i.i.d. in \(\mathbb{F}_{q}\), even when conditioned on the observed transcript. Therefore, Footnote 11: Note that this holds despite the fact that \(|P_{i}|\) might be larger than \(f^{\prime}\), as we only added \(E_{\rm bad}\) to \(F_{i}\). \[{\rm Pr}\left(Y_{\ell+1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P^{\rm good})\mid Y_{1,\ell}(\mathcal{A}^{\prime}(x_{1}),P^{\rm good})\right)={\rm Pr}\left(Y_{\ell+1,r^{\prime}}(\mathcal{A}^{\prime}(x_{2}),P^{\rm good})\mid Y_{1,\ell}(\mathcal{A}^{\prime}(x_{2}),P^{\rm good})\right).\] By combining with Equation (3), we get: \[{\rm Pr}\left(Y_{\ell+1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P^{\rm good})\right)={\rm Pr}\left(Y_{\ell+1,r^{\prime}}(\mathcal{A}^{\prime}(x_{2}),P^{\rm good})\right). \tag{4}\] Moreover, Theorem 2.1 implies that the transcript observed over \(P^{\rm good}\) in the second phase is independent of the transcript observed over \(P^{\rm bad}\) in the second phase, when conditioned on the view of the adversary in the first phase. 
Therefore, for \(P=(P_{1},\ldots,P_{r^{\prime}})\), \[{\rm Pr}\left(Y_{\ell+1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P)\mid Y_{1,\ell}(\mathcal{A}^{\prime}(x_{1}),P)\right)\] \[={\rm Pr}\left(Y_{\ell+1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P^{\rm bad})\mid Y_{1,\ell}(\mathcal{A}^{\prime}(x_{1}),P^{\rm bad})\right)\cdot{\rm Pr}\left(Y_{\ell+1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P^{\rm good})\mid Y_{1,\ell}(\mathcal{A}^{\prime}(x_{1}),P^{\rm good})\right). \tag{5}\] **Claim 1**.: \({\rm Pr}\left(Y_{\ell+1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P^{\rm bad})\mid Y_{1,\ell}(\mathcal{A}^{\prime}(x_{1}),P^{\rm bad})\right)={\rm Pr}\left(Y_{\ell+1,r^{\prime}}(\mathcal{A}^{\prime}(x_{2}),P^{\rm bad})\mid Y_{1,\ell}(\mathcal{A}^{\prime}(x_{2}),P^{\rm bad})\right)\). Proof.: Since \(\mathcal{A}\) is \(f\)-static-secure and \(|E_{\rm bad}|\leq f\), we have: \[{\rm Pr}\left(Y_{1,r}(\mathcal{A}(x_{1}),P^{\rm bad})\right)={\rm Pr}\left(Y_{1,r}(\mathcal{A}(x_{2}),P^{\rm bad})\right). \tag{6}\] By the second phase of Alg. \(\mathcal{A}^{\prime}\), for any edge \((u,v)\in E(G)\) and any \(x\in\mathcal{X}\), we have \(\pi_{\mathcal{A}^{\prime}(x),\ell+i}((u,v))=\pi_{\mathcal{A}(x),i}((u,v))+K_{i}(u,v)\). Therefore, \[\Pr\left(Y_{\ell+1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P^{\mathrm{bad}})\mid Y_{1,\ell}(\mathcal{A}^{\prime}(x_{1}),P^{\mathrm{bad}})\right)=\Pr\left(Y_{1,r}(\mathcal{A}(x_{1}),P^{\mathrm{bad}})\right)=\Pr\left(Y_{1,r}(\mathcal{A}(x_{2}),P^{\mathrm{bad}})\right)\] \[=\Pr\left(Y_{\ell+1,r^{\prime}}(\mathcal{A}^{\prime}(x_{2}),P^{\mathrm{bad}})\mid Y_{1,\ell}(\mathcal{A}^{\prime}(x_{2}),P^{\mathrm{bad}})\right),\] where the second equality holds due to Equation (6). We next show the following claim, which provides a contradiction to our assumption and establishes the security guarantees of Alg. \(\mathcal{A}^{\prime}\). **Claim 2**.: \(\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P)\right)=\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{2}),P)\right)\). Proof.: Combining Cl. 1 with Equality (3), we obtain \[\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P^{\mathrm{bad}})\right)=\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{2}),P^{\mathrm{bad}})\right). \tag{7}\] Therefore, \[\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P)\right)=\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P^{\mathrm{good}})\right)\cdot\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P^{\mathrm{bad}})\right)\] \[=\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{2}),P^{\mathrm{good}})\right)\cdot\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{1}),P^{\mathrm{bad}})\right)\] \[=\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{2}),P^{\mathrm{good}})\right)\cdot\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{2}),P^{\mathrm{bad}})\right)\] \[=\Pr\left(Y_{1,r^{\prime}}(\mathcal{A}^{\prime}(x_{2}),P)\right),\] where the first equality follows from Equation (5), the second equality from Equation (4), and the third equality from Equation (7). The claim follows. ## 3 Resilience with Mobile Byzantine Adversaries Our goal in this section is to provide \(f\)-mobile-resilient distributed algorithms for graphs with edge-connectivity of \(\Omega(f\log n)\). Unlike our results in Sec. 
2, the \(f\)-mobile-resilient algorithms are not obtained by translating an \(f\)-static-resilient algorithm into an \(f^{\prime}\)-mobile algorithm, but rather by translating any given (non-faulty) distributed \(r\)-round \(\mathsf{CONGEST}\) algorithm \(\mathcal{A}\) into an equivalent \(r^{\prime}\)-round \(f\)-mobile-resilient algorithm \(\mathcal{A}^{\prime}\). ### Tools Our simulation is based on the following three main tools. **Tool 1: Tree-Packing of \((k,\mathsf{D_{TP}})\)-Connected Graphs.** We need the following definition, introduced by [17]. A graph \(G=(V,E)\) is \((k,\mathsf{D_{TP}})\)-_connected_ iff for every pair \(u,v\in V\), there are \(k\) edge-disjoint paths connecting \(u\) and \(v\) such that the length of each path is bounded by \(\mathsf{D_{TP}}\). Observe that \(\mathsf{D_{TP}}\) might be considerably larger than the graph diameter. A tree-packing is a decomposition of the graph edges into near edge-disjoint spanning trees. In the distributed setting, an important parameter of a tree-packing is the depth of the trees, which determines the round complexity of our procedures. **Definition 6** (Low-Diameter Tree Packing).: _For a given graph \(G=(V,E)\), a \((k,\mathsf{D_{TP}},\eta)\) tree packing \(\mathcal{T}=\{T_{1},\ldots,T_{k}\}\) consists of a collection of \(k\) spanning trees in \(G\) such that (i) the diameter of each tree \(T_{i}\) is at most \(\mathsf{D_{TP}}\) and (ii) each \(G\)-edge appears on at most \(\eta\) many trees in \(\mathcal{T}\) (i.e., the load of \(\mathcal{T}\) is at most \(\eta\)). When \(\eta=O(\log n)\), we may omit it and simply write \((k,\mathsf{D_{TP}})\) tree packing._ Chuzhoy et al. [17] presented an efficient randomized centralized algorithm for computing a \((k,\mathsf{D_{TP}}\cdot\log n)\) tree-packing for any given \((k,\mathsf{D_{TP}})\)-connected graph \(G\). **Theorem 3.1** (Theorem 4 [17]).: _There is an efficient randomized centralized algorithm that, given a \((k,\mathsf{D_{TP}})\)-connected \(n\)-node graph \(G\), computes a collection \(\mathcal{T}=\{T_{1},\ldots,T_{k}\}\) of \(k\) spanning trees of \(G\), such that, w.h.p., for each \(1\leq\ell\leq k\), the tree \(T_{\ell}\subset G\) has diameter \(O(\mathsf{D_{TP}}\log n)\) and the load of the trees is \(O(\log n)\)._ It is well-known [54] that any \(k\)-edge connected graph contains a \((\lfloor k/2\rfloor,n)\)-tree packing. Moreover, Karger sampling [46] obtains a \((\widetilde{\Omega}(k),n)\) tree packing (i.e., by randomly partitioning the edges into \(\widetilde{\Omega}(k)\) graphs). The main advantage of Thm. 3.1 is in providing low-diameter trees. In Appendix C, we show a distributed algorithm for computing the tree packing of Theorem 3.1 in \(\widetilde{O}(k\cdot\mathsf{D_{TP}^{2}})\) rounds, in the fault-free setting, using the techniques of [35]12. The algorithm presented in Appendix C is useful when handling general graphs. For the latter, the trees are computed as part of a trusted preprocessing step. Footnote 12: This claim is implicitly shown in [35], but since their packing is optimized under different parameters, we provide the complete analysis for completeness. The main applications of this paper, namely, compilers for expander graphs and for the CONGESTED CLIQUE model, do _not_ require such trusted pre-processing steps and rather compute the tree packing itself in the byzantine setting13. For the sake of the applications of Sec. 
3.3, we consider a weaker notion of tree-packing, which allows at most a \(0.1\) fraction of the subgraphs to be of arbitrary structure. Footnote 13: This is trivial in the CONGESTED CLIQUE model, but more challenging for expander graphs in the CONGEST model. **Definition 7**.: _[Weak Tree-Packing] A collection of \(k\) \(G\)-subgraphs \(\mathcal{T}=\{T_{1},\ldots,T_{k}\}\) is a weak \((k,\mathsf{D_{TP}},\eta)\) tree packing if (i) at least \(0.9k\) of the subgraphs in \(\mathcal{T}\) are spanning trees of diameter at most \(\mathsf{D_{TP}}\), rooted at a common root \(v_{r}\); and (ii) the load of \(\mathcal{T}\) is at most \(\eta\)._ **Tool 2: Compilers against Bounded Adversarial Rate.** We use known compilers that can successfully simulate a given fault-free algorithm in the \(1\)-CONGEST model, as long as the adversary corrupts a bounded _fraction_ of the total _communication_. The latter is referred to as the _communication-error-rate_. Specifically, we use the compilers of Rajagopalan and Schulman [65], which we denote hereafter by _RS-compilers_. While in their original work [65] the RS-compilers proved useful against stochastic noisy channels, Hoza and Schulman [43] later observed that they are in fact resilient against _adversarial_ corruptions. **Theorem 3.2** (Slight restatement of Proposition 1 of [43] (see also [65])).: _Given a network on \(m\) edges on which a \(1\)-CONGEST protocol \(\Pi\) runs in \(r\) rounds (in the fault-free setting), there is an \(r^{\prime}\)-round \(1\)-CONGEST protocol \(\Pi_{\mathsf{RS}}\), where \(r^{\prime}\in[r,t_{\mathsf{RS}}\cdot r]\) for some constant \(t_{\mathsf{RS}}\geq 1\), which simulates \(\Pi\) by sending \(1\)-bit messages over all graph edges in every round with the following guarantee: if the adversary corrupts at most a \(1/(c_{\mathsf{RS}}\cdot m)\)-fraction of the total communication of the protocol, for a constant \(c_{\mathsf{RS}}>1\), the output distributions of \(\Pi\) and \(\Pi_{\mathsf{RS}}\) are the same._ Our compilation scheme is based on applying the RS-compiler to a collection of \(k\) algorithms, each running over a distinct tree in a given \((k,\mathsf{D_{TP}})\) tree-packing \(\mathcal{T}\). To save on the round complexity (i.e., avoiding a multiplicative dependency on \(k\)), we provide the following scheduling lemma which allows us to run these \(k\) algorithms in parallel, exploiting the fact that the trees have a bounded level of overlap. The guarantee is that the vast majority of these algorithms end correctly in the presence of an \(f\)-mobile byzantine adversary (or, more strongly, in the presence of round-error rate \(f\)). Throughout, we assume that all nodes hold an upper bound on the runtime of the individual RS-algorithms, as well as an upper bound on the overlap of the trees. **Lemma 3.3** (\(f\)-Mobile (Almost) Resilient Scheduling of RS-Compiled Algorithms).: _Let \(\mathcal{G}=\{G_{1},\ldots,G_{s}\}\) be an ordered \(G\)-subgraph family of load \(\eta\). Let \(\mathcal{A}_{1},\ldots,\mathcal{A}_{s}\) be a collection of RS-algorithms, such that each \(\mathcal{A}_{j}\) sends messages over all \(G_{j}\)-edges in each of its \(r_{j}\) rounds, where \(r_{j}\in[r,t_{\mathsf{RS}}\cdot r]\) for some fixed \(r\) known to all nodes. 
Then, assuming a round-error rate of \(f\), all \(s\) algorithms can be run in parallel within at most \(t_{\mathsf{RS}}\cdot r\cdot\eta\) rounds, such that the following holds: all but at most \(t_{\mathsf{RS}}\cdot c_{\mathsf{RS}}\cdot f\cdot\eta\) algorithms end correctly (where \(c_{\mathsf{RS}},t_{\mathsf{RS}}\) are the constants from Thm. 3.2)._ Proof.: Scheduler \(\mathsf{RSScheduler}\) runs in phases of \(\eta\) rounds. In phase \(i\), it exchanges all messages of the \(i^{th}\) round of all \(s\) algorithms \(\{\mathcal{A}_{j}\}_{j|r_{j}\geq i}\), as follows. For every directed edge \(e=(u,v)\), let \(j_{1}(e),\ldots,j_{q_{e}}(e)\) be the indices \(j\) such that \(e\in G_{j}\) and \(r_{j}\geq i\). By definition, we have that \(q_{e}\leq\eta\) for every \(e\). Then, in the \(a^{th}\) round of phase \(i\), for every \(a\in\{1,\ldots,q_{e}\}\), \(u\) sends \(v\) the message \(m_{j_{a}(e),i}(u,v)\), where \(m_{j,i}(u,v)\) denotes the message that \(u\) sends to \(v\) in round \(i\) of Alg. \(\mathcal{A}_{j}\). (Since \(\mathcal{A}_{j}\) sends messages over all edges in \(G_{j}\) in every round, this message is well-defined.) This completes the description of the scheduler. We say that algorithm \(\mathcal{A}_{j}\) ends _correctly_ if the communication-error-rate among the messages of \(\mathcal{A}_{j}\), in the scheduled algorithm \(\mathsf{RSScheduler}\), is below the RS-threshold, namely, \(1/(c_{\mathsf{RS}}|E(G_{j})|)\). By Theorem 3.2, we get that in such a case, the RS-compilation is indeed successful. We next turn to show that all but \(t_{\mathsf{RS}}\cdot c_{\mathsf{RS}}\cdot f\cdot\eta\) algorithms end correctly. The total communication of each algorithm \(\mathcal{A}_{j}\) is \(r_{j}|E(G_{j})|\). Therefore, if the total number of corrupted \(\mathcal{A}_{j}\)-messages in Alg. \(\mathsf{RSScheduler}\) is below \(r/c_{\mathsf{RS}}\leq r_{j}|E(G_{j})|/(c_{\mathsf{RS}}|E(G_{j})|)\), then \(\mathcal{A}_{j}\) is correct. As the round complexity of \(\mathsf{RSScheduler}\) is at most \(t_{\mathsf{RS}}\cdot r\cdot\eta\), the adversary can corrupt at most \(f\cdot t_{\mathsf{RS}}\cdot r\cdot\eta\) messages. By an averaging argument, all but at most \(t_{\mathsf{RS}}\cdot c_{\mathsf{RS}}\cdot f\cdot\eta\) of the algorithms have at most \(r/c_{\mathsf{RS}}\) corrupted messages associated with them, hence experiencing a communication-error-rate below the RS-threshold. We conclude that all but \(t_{\mathsf{RS}}\cdot c_{\mathsf{RS}}\cdot f\cdot\eta\) many algorithms end correctly. **Tool 3: Sketches for \(\ell_{0}\)-Sampling.** Sketching is a powerful compression tool which allows one to store aggregate information of a data stream using small space. To introduce the concept of sketching, consider the following setting. For some size parameter \(n\in\mathbb{N}\), let \(U=\{1,\ldots,\operatorname{poly}(n)\}\) be a universe of elements, and let \(\sigma\) be a multi-set of \(O(\operatorname{poly}(n))\) tuples \(\{(e_{i},f_{i})\}_{i\in[O(\operatorname{poly}(n))]}\), where \(e_{i}\in U\) is an element in the universe and \(f_{i}\in\{-\operatorname{poly}(n),\ldots,\operatorname{poly}(n)\}\) is an integer referred to as the change in frequency. The _frequency_ of an element \(e\in U\) in the multi-set \(\sigma\) is defined as the sum of its changes of frequency, i.e., \(f(e)=\sum_{i:e_{i}=e}f_{i}\). Denote by \(N(\sigma)\) the set of elements in \(U\) with non-zero frequency in \(\sigma\), i.e., \(N(\sigma)=\{e\in U\mid f(e)\neq 0\}\). 
Informally, the act of \(\ell_{0}\)-sampling a multi-set \(\sigma\) is choosing a uniform element from \(N(\sigma)\), or in other words, choosing a uniformly random non-zero-frequency element from \(\sigma\). In the context of this paper, an \(\ell_{0}\)-sampling sketch \(\tau\) of \(\sigma\) is a randomized string of size \(\operatorname{polylog}(n)\), which can be constructed using a multi-set \(\sigma\) and randomness \(R\) of \(\operatorname{polylog}(n)\) bits, and on which two operations are defined: Query and Merge. A Query operation receives as input a sketch \(\tau\) of a multi-set \(\sigma\), and outputs a random element in \(N(\sigma)\) w.h.p., where each element in \(N(\sigma)\) is chosen with probability \(\frac{1}{|N(\sigma)|}\pm\frac{1}{\operatorname{poly}(n)}\) (the randomness is taken entirely over the selection of \(R\)). The Merge operation receives as input two sketches \(\tau_{1},\tau_{2}\) constructed using the same randomness \(R\) on the multi-sets \(\sigma_{1},\sigma_{2}\) respectively, and its output is a sketch \(\tau^{\prime}\) (of \(\operatorname{polylog}(n)\) bits) which is equal to a sketch obtained using randomness \(R\) and the concatenated multi-set \(\sigma_{1}\cup\sigma_{2}\). **Theorem 3.4** (\(\ell_{0}\)-sampler, rephrasing of [20] Corollary 1).: _There exists an algorithm \(L\), which given a size parameter \(n\in\mathbb{N}\), a multi-set \(\sigma\) of size \(\operatorname{poly}(n)\) and \(R\), a string of \(O(\log^{4}n)\) random bits, outputs a randomized string \(L(n,\sigma,R)\) called an \(\ell_{0}\)-sketch, which is encoded using \(O(\log^{4}n)\) bits. The following operations are defined on the sketch:_
* \(\operatorname{Query}\)_: a deterministic procedure in which given a (randomized) sketch_ \(\tau=L(n,\sigma,R)\) _outputs an element_ \(e\in N(\sigma)\)_, where any given_ \(e\in N(\sigma)\) _is sampled with probability at least_ \(1/|N(\sigma)|-1/n^{c}\) _and probability at most_ \(1/|N(\sigma)|+1/n^{c}\) _for any constant_ \(c>1\)_, w.h.p. (randomness taken over_ \(R\)_)._
* \(\operatorname{Merge}\)_: a deterministic procedure in which given two sketches_ \(\tau_{1}=L(n,\sigma_{1},R),\tau_{2}=L(n,\sigma_{2},R)\) _created using the same randomness_ \(R\) _on multi-sets_ \(\sigma_{1},\sigma_{2}\) _respectively, outputs the sketch_ \(\tau^{\prime}=L(n,\sigma_{1}\cup\sigma_{2},R)\)_._
In our context, the universe \(U\) is the set \(U=\{0,1\}^{O(\log n)}\), i.e., the collection of all possible messages of size \(O(\log n)\). A multi-set \(\sigma\) contains each string with either zero, positive, or negative frequency. Intuitively, we use the sketches to find, for some fixed round \(i\) of Alg. \(\mathcal{A}\), the (original) messages of round \(i\) that got corrupted by the adversary, by computing a sketch whose non-zero-frequency elements are exactly the faulty messages and their corrections. To obtain this, we make sure that each sent message of round \(i\) of Alg. \(\mathcal{A}\) is added into the set with positive frequency, and each received message of round \(i\) with negative frequency. Messages that were sent correctly across an edge are then cancelled out; messages that got corrupted, on the other hand, are not. We note that the Merge operation does not modify the universe set, or the encoding size of the sketch (which is always of size \(\widetilde{O}(1)\) bits). 
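To see why this sent(\(+1\))/received(\(-1\)) bookkeeping isolates exactly the corrupted messages, consider the following uncompressed Python sketch of the stream (illustrative only: it stores frequencies explicitly, whereas the actual \(\ell_{0}\)-sampling and sparse-recovery sketches realize the same Merge/Query interface in \(\operatorname{polylog}(n)\) bits, per Theorem 3.4):

```python
import random
from collections import Counter

def local_stream(sent, received):
    """Node-local multi-set: each sent message with frequency +1 and each
    received message with frequency -1. A message is the tuple
    (sender, receiver, payload), mirroring the id(u) o id(v) tagging."""
    stream = Counter()
    for msg in sent:
        stream[msg] += 1
    for msg in received:
        stream[msg] -= 1
    return stream

def merge(streams):
    """Merging corresponds to adding frequencies (the Merge operation)."""
    total = Counter()
    for s in streams:
        total.update(s)
    return total

if __name__ == "__main__":
    rng = random.Random(1)
    nodes = range(4)
    sent = {(u, v): rng.randrange(100) for u in nodes for v in nodes if u != v}
    received = dict(sent)
    received[(2, 3)] += 1  # the adversary corrupts the copy sent over edge (2, 3)
    per_node = [local_stream(
        [(u, v, m) for (u, v), m in sent.items() if u == w],
        [(u, v, m) for (u, v), m in received.items() if v == w]) for w in nodes]
    mismatches = {e for e, freq in merge(per_node).items() if freq != 0}
    # All matching sent/received pairs cancel; only the corrupted message and
    # its received counterpart survive with non-zero frequency.
    assert mismatches == {(2, 3, sent[(2, 3)]), (2, 3, received[(2, 3)])}
```

In the compiler, each node computes such a local stream for its own sent and received messages, and the per-tree aggregation toward the root implements exactly this Merge.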
### \(f\)-Resilient Compilation with Round Overhead \(\widetilde{O}(\mathsf{D_{TP}})\) Throughout, we consider \((k,\mathsf{D_{TP}})\)-connected graphs for \(k=d\cdot f\log n\), for some constant \(d\) to be specified later. It is assumed that a weak \((k,\mathsf{D_{TP}},\eta)\) tree packing of \(G\) is known in a distributed manner, where each node knows its parent in each of the trees. We show the following stronger statement of Thm. 1.5. **Theorem 3.5**.: _Given a distributed knowledge of a weak \((k,\mathsf{D_{TP}},\eta)\) tree packing \(\mathcal{T}\) for a graph \(G\), then for any \(r\)-round algorithm \(\mathcal{A}\) over \(G\), there is an equivalent \(r^{\prime}\)-round algorithm \(\mathcal{A}^{\prime}\) for \(G\) that is \(f\)-mobile resilient, where \(r^{\prime}=\widetilde{O}(r\cdot\mathsf{D_{TP}})\) and \(f=\Theta(k/\eta)\)._ **Immediate Corollary: Compilers in the Congested-Clique Model.** An \(n\)-node clique over vertices \(V=\{v_{1},\ldots,v_{n}\}\) contains a \((k,\mathsf{D_{TP}},\eta)\) tree packing \(\mathcal{T}=\{T_{1},\ldots,T_{k}\}\) for \(k=n\) and \(\mathsf{D_{TP}}=\eta=2\). Specifically, for every \(i\in\{1,\ldots,n\}\), let \(T_{i}=(V,E_{i})\), where \(E_{i}=\{(v_{i},v_{j})\mid v_{j}\in V\setminus\{v_{i}\}\}\), be the star centered at \(v_{i}\). It is easy to see that the diameter and the load are exactly \(2\). Theorem 1.6 is then immediate by Theorem 3.5. #### 3.2.1 Sub-Procedure for Safe Broadcast Before presenting the proof of Theorem 3.5, we present a useful sub-procedure, called \(\mathsf{ECCSafeBroadcast}\), which allows us to broadcast messages in an \(f\)-mobile-resilient manner by using error-correcting codes. We assume the network has a distributed knowledge of a weak \((k,\mathsf{D_{TP}},\eta)\) tree packing \(\mathcal{T}\) (see Def. 7), and that the root node \(v_{r}\) of the packing holds a broadcast message \(M\) of size \(O(k\log n)\) bits. Our Safe Broadcast procedure allows every node \(v\in V\) to output the message \(M\) within \(O((\mathsf{D_{TP}}+\log n)\eta)\) rounds. We represent the broadcast message \(M\) as a list \([\alpha_{1},\ldots,\alpha_{\ell}]\in[q]^{\ell}\), where \(q=2^{p}\) for some positive integer \(p\) with \(q\geq k\), and such that \(\ell\) is an integer satisfying \(k\geq c^{\prime\prime}\ell\) for a sufficiently large \(c^{\prime\prime}>0\). Let \(C\) be an \([\ell,k,\delta_{C}]_{q}\)-code for \(\delta_{C}=(k-\ell+1)/k\), known as the Reed-Solomon code (see Theorem 1.8). The root encodes the broadcast message \([\alpha_{1},\ldots,\alpha_{\ell}]\) into a codeword \(C([\alpha_{1},\ldots,\alpha_{\ell}])=[\alpha^{\prime}_{1},\ldots,\alpha^{\prime}_{k}]\). Next, the algorithm runs \(k\) RS-compiled \(\mathsf{D_{TP}}\)-hop broadcast algorithms, in parallel. That is, for every \(T_{j}\in\mathcal{T}\), let \(\Pi(T_{j})\) be a \(\mathsf{D_{TP}}\)-hop broadcast algorithm in which the message \(\alpha^{\prime}_{j}\), starting from the root \(v_{r}\), propagates over \(T_{j}\) for \(\mathsf{D_{TP}}\) hops (hence taking \(O(\mathsf{D_{TP}}+\log q)\) rounds). Let \(\Pi_{RS}(T_{j})\) be the RS-compilation of that algorithm, denoted hereafter as the RS-broadcast algorithm. All \(k\) RS-broadcast algorithms are implemented in parallel by using the scheduling scheme of Lemma 3.3. Let \(\alpha^{\prime}_{j}(u)\) be the value that a node \(u\) receives from \(T_{j}\) (or \(\alpha^{\prime}_{j}(u)=0\) if it received no value) at the end of this execution. 
To determine the broadcast message \([\alpha_{1},\ldots,\alpha_{\ell}]\), each \(u\) first computes the closest codeword \(\alpha(u)\) to \(\alpha^{\prime}(u)=[\alpha^{\prime}_{1}(u),\ldots,\alpha^{\prime}_{k}(u)]\). Its output is then given by \(\widetilde{\alpha}(u)=[\widetilde{\alpha}_{1}(u),\ldots,\widetilde{\alpha}_{\ell}(u)]=C^{-1}(\alpha(u))\). This completes the description of this procedure.

**Lemma 3.6**.: _Consider the execution of Alg. \(\mathsf{ECCSafeBroadcast}\) in the presence of an \(f\)-mobile byzantine adversary with a given broadcast message \([\alpha_{1},\ldots,\alpha_{\ell}]\in[q]^{\ell}\), and a distributed knowledge of a weak \((k,\mathsf{D_{TP}},\eta)\) tree packing for \(k\geq\max\{c^{\prime\prime}\cdot\ell,c^{*}\eta f\}\) for large constants \(c^{\prime\prime},c^{*}\). Then, \(\widetilde{\alpha}(u)=[\alpha_{1},\ldots,\alpha_{\ell}]\) for every node \(u\). In addition, the round complexity is \(O((\mathsf{D_{TP}}+\log q)\eta)\) \(1\)-\(\mathsf{CONGEST}\) rounds._

Proof.: Let \(\mathcal{T}^{\prime}\subseteq\mathcal{T}\) be the collection of depth-\(\mathsf{D_{TP}}\) spanning trees rooted at the common root \(v_{r}\). By the definition of a weak tree packing, we have that \(|\mathcal{T}^{\prime}|\geq 0.9k\). Broadcasting a message of \(O(\log q)\) bits for \(\mathsf{D_{TP}}\) hops takes \(O(\mathsf{D_{TP}}+\log q)\) rounds using \(1\)-bit messages. Therefore, a single application of the RS-broadcast algorithm \(\Pi_{RS}(T_{j})\) over some subgraph \(T_{j}\in\mathcal{T}\) takes \(O(\mathsf{D_{TP}}+\log q)\) rounds. By Lemma 3.3, all \(k\) algorithms \(\{\Pi_{RS}(T_{j})\}_{j=1}^{k}\) can be performed in \(O((\mathsf{D_{TP}}+\log q)\eta)\) rounds, such that all but \(c\cdot f\cdot\eta\) end correctly, for some constant \(c\). Therefore, at least \(|\mathcal{T}^{\prime}|-c\cdot f\cdot\eta\geq(1-1/c^{\prime})k\) of these algorithms are valid, by taking \(k\geq c^{*}\eta f\) for a sufficiently large \(c^{*}\). This implies that \(\alpha^{\prime}_{j}(u)=\alpha^{\prime}_{j}\) for at least \((1-1/c^{\prime})k\) indices \(j\) (this holds whenever \(T_{j}\in\mathcal{T}^{\prime}\) and the RS-compiled algorithm \(\Pi_{RS}(T_{j})\) is valid). In other words,

\[\frac{\operatorname{Hamm}(\alpha^{\prime}(u),C(\alpha_{1},\ldots,\alpha_{\ell}))}{k}\leq\frac{1}{c^{\prime}}.\]

On the other hand, since \(k\geq c^{\prime\prime}\ell\), the relative distance of the code \(C\) can be bounded by:

\[\delta_{C}=\frac{k-\ell+1}{k}\geq 1-\frac{\ell}{k}\geq 1-1/c^{\prime\prime}.\]

Therefore, for any given point \(x\in\mathbb{F}_{q}^{k}\), there is at most one codeword at relative distance less than \(\delta_{C}/2\) from \(x\). Since for sufficiently large \(c^{\prime},c^{\prime\prime}\) it holds that \((1-\frac{1}{c^{\prime\prime}})/2\geq\frac{1}{c^{\prime}}\), we get that:

\[\frac{\mathrm{Hamm}(\alpha^{\prime}(u),C(\alpha_{1},\ldots,\alpha_{\ell}))}{k}<\frac{\delta_{C}}{2}.\]

We conclude that the decoding of every node is correct, i.e., that \(\widetilde{\alpha}(u)=[\alpha_{1},\ldots,\alpha_{\ell}]\) for every \(u\).

#### 3.2.2 \(f\)-Resilient Compiler

Let \(\mathcal{A}\) be a given \(r\)-round distributed algorithm for \(G\), and for every \(i\in\{1,\ldots,r\}\) and an edge \((u,v)\in E\), let \(m_{i}(u,v)\) be the message sent from \(u\) to \(v\) in round \(i\) of Alg. \(\mathcal{A}\). Throughout, we assume w.l.o.g.
that the last \(O(\log n)\) bits of each message \(m_{i}(u,v)\) specify the ID of the message, defined by the sender identifier (\(\mathrm{id}(u)\)) appended by the receiver identifier (\(\mathrm{id}(v)\)), i.e., \(\mathrm{id}(m_{i}(u,v))=\mathrm{id}(u)\circ\mathrm{id}(v)\).14 Recall that every node \(u\in V\) holds \(\mathrm{poly}(n)\) private random coins that are unknown to the adversary. Given this randomness, the simulation of \(\mathcal{A}\) is deterministic.

Footnote 14: This requires the KT1 assumption, i.e. that nodes know the identifiers of their neighbors.

Our goal is to simulate \(\mathcal{A}\) in a round-by-round manner, such that the resulting algorithm \(\mathcal{A}^{\prime}\) provides the same output distribution as that of \(\mathcal{A}\) in the presence of an \(f\)-mobile byzantine adversary. Every round \(i\) of Alg. \(\mathcal{A}\) is simulated by a phase of \(\widetilde{O}(\mathsf{D_{TP}})\) rounds. At the end of the phase, each node \(v\) can determine the set of incoming messages \(\{m_{i}(u,v)\}_{u\in N(v)}\) as in the fault-free simulation of Alg. \(\mathcal{A}\). For the remainder of the section, we fix a round \(i\geq 1\) and assume that each node already holds the correct list of received messages \(\{m_{j}(u,v)\}_{u\in N(v)}\) for every \(j\leq i-1\). Hence, all nodes can simulate their state in a fault-free simulation of the first \(i-1\) rounds of Alg. \(\mathcal{A}\).

#### Simulation of the \(i^{th}\) Round

We are now ready to describe the reliable simulation of round \(i\). On a high level, in the first round, we let all nodes exchange their \(i^{th}\)-round messages as in (the fault-free) Alg. \(\mathcal{A}\). Then, the bulk of the simulation is devoted to a message correction procedure, which allows all nodes to hold the correct received messages as in the fault-free setting, despite the corruptions of the mobile adversary.

**Step (1): Single Round \(i^{th}\) Message Exchange.** The phase starts by letting each node \(u\) send the \(i^{th}\)-round messages of Alg. \(\mathcal{A}\), namely, \(\{m_{i}(u,v)\}_{v\in N(u)}\). For every directed edge \((u,v)\in E\), let \(m_{i}^{\prime}(u,v)\) be the message _received_ by \(v\) from \(u\) in this round. Since the adversary might corrupt at most \(f\) edges in this round, it might be the case that \(m_{i}^{\prime}(u,v)\neq m_{i}(u,v)\) for at most \(2f\) ordered pairs. Note that since the identifiers of the messages are known to every node, we can assume that the last bits of the messages \(m_{i}^{\prime}(u,v),m_{i}(u,v)\) specify the message-ID15 given by \(\mathrm{id}(u)\circ\mathrm{id}(v)\). We denote the event where \(m_{i}^{\prime}(u,v)\neq m_{i}(u,v)\) as a _mismatch_.

Footnote 15: I.e., the identifier of the message consists of the sender-ID (namely, \(\mathrm{id}(u)\)) concatenated by the receiver-ID (namely, \(\mathrm{id}(v)\)).

**Step (2): Upcast of Mismatches using \(\ell_{0}\)-Samplers.** Our algorithm works in iterations \(j\in\{1,\ldots,z\}\) for \(z=\Theta(\log f)\). At the beginning of every iteration \(j\), each node \(v\) holds for each neighbor \(u\) a variable \(m_{i,j-1}^{\prime}(u,v)\), which represents its estimate16 for the message received from \(u\) in round \(i\) of Alg. \(\mathcal{A}\). Initially, \(m^{\prime}_{i,0}(u,v)=m^{\prime}_{i}(u,v)\).
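As a tiny illustration of the message-ID convention fixed above, the following sketch (with made-up field widths) appends \(\mathrm{id}(u)\circ\mathrm{id}(v)\) to a payload so the message-ID can be read off the last bits of any (possibly corrupted) message.

```python
ID_BITS = 16  # stand-in for the O(log n)-bit identifiers

def tag(payload, sender, receiver):
    # Append id(u) followed by id(v) as the least-significant bits.
    return (payload << (2 * ID_BITS)) | (sender << ID_BITS) | receiver

def untag(message):
    receiver = message & ((1 << ID_BITS) - 1)
    sender = (message >> ID_BITS) & ((1 << ID_BITS) - 1)
    return message >> (2 * ID_BITS), sender, receiver

m = tag(payload=0b1011, sender=7, receiver=12)
assert untag(m) == (0b1011, 7, 12)
```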
Next, every node \(v\) locally defines two multi-sets corresponding to its outgoing messages and its \((j-1)\)-estimated incoming messages: \(\mathrm{Out}_{i}(v)=\{m_{i}(v,u_{1}),\ldots,m_{i}(v,u_{\deg(v)})\}\) and \(\mathrm{In}_{i,j}(v)=\{m^{\prime}_{i,j-1}(u_{1},v),\ldots,m^{\prime}_{i,j-1}(u_{\deg(v)},v)\}\). Let \(S_{i,j}(v)\) be a multi-set of tuples consisting of every element in \(\mathrm{Out}_{i}(v)\) with frequency \(1\), and every element in \(\mathrm{In}_{i,j}(v)\) with frequency \(-1\). In other words, \(S_{i,j}(v)=\{(m,1)\}_{m\in\mathrm{Out}_{i}(v)}\cup\{(m,-1)\}_{m\in\mathrm{In}_{i,j}(v)}\). Let \(S_{i,j}=S_{i,j}(v_{1})\cup\cdots\cup S_{i,j}(v_{n})\) be the multi-set formed by taking the union over all \(n\) multi-sets. Each subgraph \(T\in\mathcal{T}\) runs an RS-compiled sub-procedure, \(L0_{RS}(T,S_{i,j})\), which is defined by applying the RS-compiler of Theorem 3.2 to the following (fault-free) \(L0(T,S_{i,j})\) procedure, which is well defined when \(T\) is a spanning tree. In the case where \(T\) is an arbitrary subgraph, the execution of \(L0(T,S_{i,j})\), which is restricted to \(\widetilde{O}(\mathsf{D_{TP}})\) rounds, may result in an arbitrary outcome.

**Procedure \(L0(T,S_{i,j})\).** The node \(v_{r}\) first broadcasts \(\widetilde{O}(1)\) random bits \(R_{i,j}(T)=R_{i,j,1}(T),\ldots,R_{i,j,t}(T)\) over the edges of \(T\), where \(t=\Theta(\log n)\).17 Then, each node \(v\) initializes \(t\) mutually independent \(\ell_{0}\)-sampler sketches on the multi-set \(S_{i,j}(v)\) with randomness \(R_{i,j,h}(T)\) for \(h=1,\ldots,t\) using Theorem 3.4. Let \([\tau_{1}(v),\ldots,\tau_{t}(v)]\) be the \(\ell_{0}\)-sampling sketches obtained for \(S_{i,j}(v)\) with the randomness \(R_{i,j}(T)\). The combined sketches \(L(n,S_{i,j},R_{i,j,1}),\ldots,L(n,S_{i,j},R_{i,j,t})\) are then computed in a bottom-up manner on \(T\) from the leaves to the root \(v_{r}\), in the following manner: first, each leaf node \(v\) sends its sketches, \(\tau_{v,1}=L(n,S_{i,j}(v),R_{i,j,1}),\ldots,\tau_{v,t}=L(n,S_{i,j}(v),R_{i,j,t})\), to its parent in \(T\) in \(\widetilde{O}(1)\) rounds. Any non-leaf node \(v\) waits until it receives \(t\) sketches \(\tau_{u,1},\ldots,\tau_{u,t}\) from each child \(u\in\mathrm{Children}(v,T)\). For each \(h\in\{1,\ldots t\}\), it merges all the sketches \(\{\tau_{u,h}\}_{u\in\mathrm{Children}(v,T)}\) together with \(L(n,S_{i,j}(v),R_{i,j,h})\) (using the merge operation described in Theorem 3.4), and propagates the resulting \(t\) sketches to its parent. Finally, the root \(v_{r}\) obtains \(t\) sketches from each of its children, computes the combined \(t\) sketches \(L(n,S_{i,j},R_{i,j,1}),\ldots,L(n,S_{i,j},R_{i,j,t})\), and (locally) applies the Query operation of Theorem 3.4 on each combined sketch to sample a list of values \(A_{i,j}(T)=[a_{1}(T),\ldots,a_{t}(T)]\), where \(a_{h}(T)=\mathrm{Query}(L(n,S_{i,j},R_{i,j,h}))\) for \(h\in\{1,\ldots,t\}\).

Footnote 17: In the case where \(T\) is _not_ a spanning tree, \(v_{r}\) might have no neighbors in \(T\). Nevertheless, the correctness will be based on the \(0.9k\) spanning trees in \(\mathcal{T}\).

The round complexity of Procedure \(L0(T,S_{i,j})\) is restricted to \(\widetilde{O}(\mathsf{D_{TP}})\) rounds. This concludes the description of \(L0(T,S_{i,j})\) and hence also of its RS-compilation \(L0_{RS}(T,S_{i,j})\). Our algorithm implements the collection of the \(k\) RS-compiled algorithms \(\{L0_{RS}(T,S_{i,j})\}_{T\in\mathcal{T}}\), in _parallel_, by employing the RS-scheduler of Lemma 3.3.
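The cancellation bookkeeping behind \(S_{i,j}\) can be seen in a few lines. In the following sketch an exact Python Counter stands in for the \(\widetilde{O}(1)\)-bit \(\ell_{0}\) sketches (merging becomes Counter addition, which mirrors the bottom-up combine on each tree), so only the \(\pm 1\) accounting is faithful, not the space bound; the sample messages are made up.

```python
from collections import Counter

def local_multiset(out_msgs, in_msgs):
    # Out_i(v) contributes +1 per sent message,
    # In_{i,j}(v) contributes -1 per (estimated) received message.
    s = Counter()
    for m in out_msgs:
        s[m] += 1
    for m in in_msgs:
        s[m] -= 1
    return s

def merge(a, b):
    # Counter addition mirrors the Merge operation on sketches.
    c = Counter(a)
    c.update(b)
    return c

# Two nodes u, v; the message "u->v:hello" is corrupted into "u->v:hELLo".
s_u = local_multiset(out_msgs=["u->v:hello"], in_msgs=[])
s_v = local_multiset(out_msgs=[], in_msgs=["u->v:hELLo"])
s = merge(s_u, s_v)
print({m: f for m, f in s.items() if f != 0})
# {'u->v:hello': 1, 'u->v:hELLo': -1}: the corrupted message and its
# original survive; correctly delivered messages cancel to zero.
```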
**Detecting Dominating Mismatches.** A _positive_ element \(a\in A_{i,j}(T)\) is denoted as an _observed-mismatch_. For every observed-mismatch \(a\in\bigcup_{T\in\mathcal{T}}A_{i,j}(T)\), denote its _support_ in iteration \(j\) by \(\mathrm{supp}_{i,j}(a)=|\{(\ell,T)\,|\,a=a_{\ell}(T),T\in\mathcal{T},\ell\in\{1,\ldots,t\}\}|\). The root \(v_{r}\) then selects a sub-list \(\mathsf{DM}_{i,j}\) of _dominating_ observed mismatches, i.e., mismatches that have a sufficiently large _support_ in \(\bigcup_{T}A_{i,j}(T)\), based on the given threshold \(\Delta_{j}\). Specifically, for a sufficiently large constant \(c^{\prime\prime}\), let:

\[\Delta_{j}=0.2c^{\prime\prime}2^{j}\eta\cdot t\text{ and }\mathsf{DM}_{i,j}=\{a\in\bigcup_{T}A_{i,j}(T)\,|\,a>0,\mathrm{supp}_{i,j}(a)\geq\Delta_{j}\}. \tag{8}\]

The remainder of the \(i^{th}\) phase is devoted to (resiliently) broadcasting the list \(\mathsf{DM}_{i,j}\).

**Step (3): Downcast of Dominating Mismatches.** To broadcast \(\mathsf{DM}_{i,j}\), Alg. ECCSafeBroadcast is applied with parameters \(q=2^{p}\) for \(p=\lceil\max(\log k,\log^{5}n)\rceil\) and \(\ell=|\mathsf{DM}_{i,j}|\). Upon receiving \(\mathsf{DM}_{i,j}\), each node \(v\) computes its \(j\)-estimate \(m^{\prime}_{i,j}(u,v)\) as follows. If there exists a message \(m\in\mathsf{DM}_{i,j}\) with \(\mathrm{id}(m)=\mathrm{id}(u)\circ\mathrm{id}(v)\), then \(v\) updates its estimate for the received message by setting \(m^{\prime}_{i,j}(u,v)=m\). Otherwise, the estimate is unchanged and \(m^{\prime}_{i,j}(u,v)\gets m^{\prime}_{i,j-1}(u,v)\). This completes the description of the \(j^{th}\) iteration. Within \(z=O(\log f)\) iterations, each node \(v\) sets \(\widetilde{m}_{i}(u,v)=m^{\prime}_{i,z}(u,v)\) for every neighbor \(u\). In the analysis we show that, w.h.p., \(\widetilde{m}_{i}(u,v)=m_{i}(u,v)\) for all \((u,v)\in E\).

**Algorithm** ImprovedMobileByzantineSim (\(i^{th}\) **Phase):**

**Input:** Weak \((k,\mathsf{D_{TP}},\eta)\) tree packing \(\mathcal{T}=\{T_{1}\ldots,T_{k}\}\).

**Output:** Each node \(v\) outputs \(\{\widetilde{m}_{i}(u,v)\}_{u\in N(v)}\), estimates of its received messages in round \(i\) of Alg. \(\mathcal{A}\).

1. Exchange \(m_{i}(u,v)\) over each edge \((u,v)\in E\).
2. Let \(\{m^{\prime}_{i,0}(u,v)\}_{(u,v)\in E}\) be the received messages at the receiver endpoints \(\{v\}\).
3. For \(j=1,\ldots,z=O(\log f)\) do:
   * Employ protocol \(L0_{RS}(T,S_{i,j})\) over each \(T\in\mathcal{T}\), in parallel, using Lemma 3.3.
   * Set \(\mathsf{DM}_{i,j}\) as in Eq. (8).
   * Broadcast \(\mathsf{DM}_{i,j}\) by applying Alg. ECCSafeBroadcast.
   * For every \(v\in V\) and \(u\in N(v)\):
     * If \(\exists m\in\mathsf{DM}_{i,j}\) with \(\mathrm{id}(m)=\mathrm{id}(u)\circ\mathrm{id}(v)\): \(m^{\prime}_{i,j}(u,v)\gets m\).
     * Otherwise, \(m^{\prime}_{i,j}(u,v)\gets m^{\prime}_{i,j-1}(u,v)\).
4. For every \(v\in V\) and \(u\in N(v)\): Set \(\widetilde{m}_{i}(u,v)=m^{\prime}_{i,z}(u,v)\).

**Analysis.** We focus on the correctness of the \(i^{th}\) round and omit the index \(i\) from the indices of the variables when the context is apparent. Let \(\mathcal{T}^{\prime}\subseteq\mathcal{T}\) be the collection of spanning trees of depth \(O(\mathsf{D_{TP}})\), rooted at \(v_{r}\). We assume that \(k\geq c^{\prime}\cdot c^{\prime\prime}\eta f\), where \(c^{\prime\prime}\) is the constant of Lemma 3.3 and \(c^{\prime}>c^{\prime\prime}\). A tree \(T_{q}\) is denoted as \(j\)-_good_ if (a) \(T_{q}\in\mathcal{T}^{\prime}\) and (b) \(L0_{RS}(T_{q},S_{i,j})\) ended correctly when using the scheduler of Lemma 3.3.
Let \(\mathcal{T}^{(j)}_{\mathrm{good}}\subseteq\mathcal{T}^{\prime}\) be the collection of good trees. By Lemma 3.3, it holds that:

\[|\mathcal{T}^{(j)}_{\mathrm{good}}|\geq\left(0.9-\frac{c^{\prime\prime}f\eta}{k}\right)|\mathcal{T}|\geq\left(0.9-\frac{1}{c^{\prime}}\right)|\mathcal{T}|. \tag{9}\]

Finally, we say that a sent message \(m_{i}(u,v)\) is a \(j\)-_mismatch_ if \(m_{i}(u,v)\neq m^{\prime}_{i,j}(u,v)\). Let \(B_{j}\) denote the number of \(j\)-mismatches. The following lemma follows by the Chernoff bound:

**Lemma 3.7**.: _For every \(j\in[z]\), if \(B_{j-1}\leq 2f/2^{j-1}\), then for any \((j-1)\)-mismatch \(m\), \(\mathrm{supp}_{i,j}(m)\geq\Delta_{j}\), w.h.p._

Proof.: Let \(m\) be a \((j-1)\)-mismatch. Since there are at most \(2f/2^{j-1}\) mismatches, there are at most \(4f/2^{j-1}\) non-zero entries in \(S_{i,j}\). By Theorem 3.4, in each sketch of a \(j\)-good tree \(T\), the root detects a given \((j-1)\)-mismatch with probability at least \(2^{j-1}/4f-\epsilon(n)\) for \(\epsilon(n)=O(1/\operatorname{poly}(n))\). Therefore, for a sufficiently large \(c^{\prime}\), by Equations (8,9) and the fact that we use \(t=\Theta(\log n)\) independent \(\ell_{0}\) sketches, we get:

\[\mathbb{E}\left(\operatorname{supp}_{i,j}(m)\right)\geq\left(0.9-\frac{1}{c^{\prime}}\right)\left(\frac{2^{j-1}}{4f}-\epsilon(n)\right)k\cdot t\geq 0.8\left(\frac{2^{j-1}}{4f}\right)kt\geq 0.1c^{\prime\prime}\cdot c^{\prime}2^{j}\eta\cdot t=\frac{c^{\prime}}{2}\Delta_{j}\ .\]

Moreover, the observed mismatches sampled by the good trees are mutually independent of each other. Given a \((j-1)\)-mismatch \(m\), the probability that it is sampled by fewer than \(\Delta_{j}\) many \(j\)-good trees can be bounded by a Chernoff bound (Lemma 1.10), as follows:

\[\Pr(\operatorname{supp}_{i,j}(m)\leq\Delta_{j})\leq\Pr\left(\operatorname{supp}_{i,j}(m)\leq\frac{2}{c^{\prime}}\cdot\mathbb{E}\left(\operatorname{supp}_{i,j}(m)\right)\right)\leq e^{-(1-2/c^{\prime})^{2}\cdot\frac{c^{\prime}}{4}\Delta_{j}}\leq\frac{1}{\operatorname{poly}(n)},\]

where the last inequality holds for a sufficiently large \(c^{\prime}\) (which can be chosen sufficiently large compared to \(c^{\prime\prime}\)).

**Lemma 3.8**.: _For every \(j\in[z]\), \(B_{j}\leq 2f/2^{j}\) w.h.p._

Proof.: We prove it by induction on \(j\). For \(j=0\), the number of \(j\)-mismatches at the start of the protocol is indeed at most \(2f\), since the adversary corrupts at most \(2f\) messages in the first round of Phase \(i\). Assume that the claim holds up to \(j-1\) and consider \(j\geq 1\). We say that an observed-mismatch \(a\) has \(j\)-_high support_ if \(\operatorname{supp}_{i,j}(a)\geq\Delta_{j}\), and has \(j\)-_low support_ otherwise. An observed-mismatch \(a\) is _competing_ with \(m_{i}(u,v)\) if \(\operatorname{id}(a)=\operatorname{id}(u)\circ\operatorname{id}(v)\) but \(a\neq m_{i}(u,v)\). Note that assuming all sketches on \(j\)-good trees are successful (which indeed holds w.h.p.), then all _competing mismatches_ with any message \(m_{i}(u,v)\) must be sampled by \(j\)-_bad_ trees, since they are not real \((j-1)\)-mismatches. A necessary condition for \(m_{i}(u,v)\) to be a \(j\)-mismatch is either that (a) there is an observed-mismatch with \(j\)-high support that is competing with \(m_{i}(u,v)\), or (b) it is a \((j-1)\)-mismatch and has \(j\)-low support. By Lemma 3.7, all \((j-1)\)-mismatches have \(j\)-high support w.h.p., and therefore, there are no \(j\)-mismatches due to condition (b).
Since the number of \(j\)-bad trees is at most \(c^{\prime\prime}f\eta+0.1k\) (see Eq. (9)), at most \(\frac{(c^{\prime\prime}\eta f+0.1k)t}{\Delta_{j}}\leq 2f/2^{j+1}\) competing observed-mismatches have a \(j\)-high support. Therefore, \(B_{j}\leq 2f/2^{j}\) w.h.p.

The proof of Theorem 3.5 follows by noting that in the last iteration \(z=O(\log f)\), it holds that \(B_{z}=0\), w.h.p. In particular, at the last iteration \(z\), each estimated message is the correct message, i.e. \(\widetilde{m}_{i}(u,v)=m_{i}(u,v)\).

### Applications

**Compilers for \((k,\mathsf{D_{TP}})\)-connected Graphs.** By Thm. 3.1, for every \((k,\mathsf{D_{TP}})\)-connected \(n\)-node graph \(G\), one can compute a \((k,O(\mathsf{D_{TP}}\cdot\log n))\) tree packing with load \(\eta=O(\log n)\). These trees can be computed in \(\widetilde{O}(k\cdot\mathsf{D_{TP}})\) (fault-free) \(\mathsf{CONGEST}\) rounds. By Theorem 3.5, we have:

**Corollary 3.9** (Mobile-Resilient Compilers for \((k,\mathsf{D_{TP}})\)-Connected Graphs).: _Given a \((k,\mathsf{D_{TP}})\)-connected \(n\)-vertex graph \(G\) for \(k=\Omega(\log n)\), one can compile any \(r\)-round algorithm \(\mathcal{A}\) into an equivalent \(r^{\prime}\)-round algorithm \(\mathcal{A}^{\prime}\) that is \(f\)-mobile resilient, where \(f=\Omega(k/\log n)\) and \(r^{\prime}=\widetilde{O}(r\cdot\mathsf{D_{TP}})\), provided that either (i) all nodes know the graph topology (a.k.a. the supported-CONGEST model), or (ii) there is a fault-free preprocessing step of \(\widetilde{O}(k\cdot\mathsf{D_{TP}})\) rounds._

#### Compilers for Expander Graphs, Proof of Theorem 1.7

By Theorem 3.5, it is sufficient to show that for every \(\phi\)-expander graph with minimum degree \(\widetilde{\Omega}(f/\phi^{2})\), there is an \(f\)-mobile-resilient algorithm for computing a weak \((k,\mathsf{D_{TP}},\eta)\) tree packing with \(k=\Omega(f)\), \(\mathsf{D_{TP}}=O(\log n/\phi)\) and \(\eta=2\). Specifically, we show:

**Lemma 3.10** (Weak Tree-Packing in Expander Graphs).: _For every \(\phi\)-expander \(n\)-node graph with minimum degree \(\widetilde{\Omega}(f/\phi^{2})\), there is an \(f\)-mobile-resilient algorithm for computing a weak \((k,\mathsf{D_{TP}},\eta)\) tree packing with \(k=\Theta(f\log n/\phi)\), \(\mathsf{D_{TP}}=O(\log n/\phi)\) and \(\eta=2\). The round complexity is \(O(\log n/\phi)\)._

**The Algorithm.** In the first round, for every edge \((u,v)\in E\) with \(\mathrm{id}(u)>\mathrm{id}(v)\), let \(u\) choose a color \(c(u,v)\) sampled uniformly at random in \([k]\) and send this color to \(v\), who sets the color of the edge \((u,v)\) to the received value \(c(v,u)\). For every \(i\in[k]\), define the directed subgraph:

\[G_{i}=\{(u,v)\ \mid\ \mathrm{id}(u)>\mathrm{id}(v),c(u,v)=i\}\cup\{(v,u)\ \mid\ \mathrm{id}(u)>\mathrm{id}(v),c(v,u)=i\}.\]

Let \(N_{i}(u)\) be the outgoing neighbors of \(u\) in \(G_{i}\). Each node \(u\) proceeds to perform the following BFS-like procedure in each \(G_{i}\): Set a variable \(I_{i,0}(u)=\mathrm{id}(u)\) and \(\mathrm{parent}_{i}(u)=\bot\) for each color \(i\in[k]\). For \(\ell=1,\ldots,z=O(\log n/\phi)\) rounds, in parallel for all colors \([k]\), each node \(u\) sends to each (outgoing) neighbor \(v\in N_{i}(u)\) the value \(I_{i,\ell-1}(u)\). Let \(L_{i,\ell}\) be the set of messages \(u\) receives from neighbors \(\{v\in N(u)\mid c(u,v)=i\}\).
Each node \(u\) sets \(I_{i,\ell}(u)=\max_{\mathrm{id}(v)\in L_{i,\ell}\cup\{\mathrm{id}(u)\}}(\mathrm{id}(v))\), and if this value strictly increases, then it sets \(\mathrm{parent}_{i}(u)\) to be the neighbor from which this value arrived, oriented towards the other endpoint. In the final round, each node \(u\) sends in parallel for each \(i\) a message to \(\mathrm{parent}_{i}(u)\), to orient the edge \(\mathrm{parent}_{i}(u)\) towards itself. Each node that receives an edge-orientation request locally sets the orientation of that edge towards itself.

**Analysis, Proof of Lemma 3.10.** For every round \(j\), let \(F_{j}\) be the set of (undirected) edges controlled by the mobile byzantine adversary in that round. A color \(i\in[k]\) is denoted as _good_ if the adversary did not control any of the edges of \(G_{i}\) during the first phase (i.e., no edge in \(\bigcup_{j=1}^{L}F_{j}\) appears in \(G_{i}\), in any orientation). Otherwise, the color \(i\) is defined as _bad_. For every \(i\in[k]\), let \(G^{\prime}_{i}\subseteq G_{i}\) denote the output marked (directed) subgraph obtained at the end of the algorithm. We use the following result of Wulff-Nilsen [74]:

**Theorem 3.11** ([74], Lemma 20).: _Given \(c>0,\gamma\geq 1\) and \(\alpha\leq 1\), let \(G=(V,E)\) be an \(n\)-node multigraph with degree at least \(\gamma\alpha\). Let \(G^{\prime}=(V,E^{\prime})\) be the multigraph obtained from \(G\) by sampling each edge independently with probability \(p=\min\{1,(12c+24)\ln n/(\alpha^{2}\gamma)\}\). Then, with probability \(1-O(1/n^{c})\), for every cut \((S,V\setminus S)\) in \(G\), it holds that:_

* _if_ \(\phi_{G}(S)\geq\alpha\) _then_ \(\phi_{G^{\prime}}(S)\) _deviates from_ \(\phi_{G}(S)\) _by a factor of at most_ \(4\)_, and_
* _if_ \(\phi_{G}(S)<\alpha\) _then_ \(\phi_{G^{\prime}}(S)<6\alpha\)_._

**Lemma 3.12** (Section 19.1 from [70], Fact 32 [38]).: _The diameter of every \(n\)-node \(\phi\)-expander is at most \(O(\log n/\phi)\)._

**Lemma 3.13**.: _For each good color \(i\in[k]\), the subgraph \(G_{i}\) is an \(\Omega(\phi)\)-expander with diameter at most \(d\log n/\phi\), for some constant \(d\), w.h.p._

Proof.: For each good color \(i\in[k]\), we have that \(G_{i}=G[p]\) where each edge in \(G\) is sampled independently with probability \(p=1/k=O(\frac{\phi\log n}{f})\). Note that since the adversary does not control any of the \(G_{i}\) edges, all edges in \(G_{i}\) are bidirectional. By Theorem 3.11 for \(\alpha=\phi\) and \(\gamma=\widetilde{O}(f/\phi^{3})\) chosen such that \(p=\min\{1,\ln n/(\alpha^{2}\gamma)\}\), w.h.p. a subgraph obtained by taking each edge in \(G\) independently with probability \(p\) has conductance at least \(\phi/4\). By the union bound, all graphs \(G_{i}\) have conductance \(\geq\phi/4\), w.h.p. Therefore, by Lemma 3.12 the graph \(G_{i}\) has diameter at most \(d\cdot\log n/\phi\) for some constant \(d\).
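A centralized simulation of the construction above can make the color-sampling and parent-pointer propagation concrete; the toy graph, names, and parameters below are ours, and a real execution is distributed and subject to the adversary.

```python
import random

def build_tree_packing(nodes, edges, k, z, seed=0):
    rng = random.Random(seed)
    color = {e: rng.randrange(k) for e in edges}   # c(u,v), chosen by one endpoint
    adj = {i: {v: [] for v in nodes} for i in range(k)}
    for (u, v), i in color.items():
        adj[i][u].append(v)
        adj[i][v].append(u)                        # G_i, both orientations
    packing = []
    for i in range(k):
        I = {v: v for v in nodes}                  # I_{i,0}(u) = id(u)
        parent = {v: None for v in nodes}
        for _ in range(z):                         # z = O(log n / phi) rounds
            nxt = dict(I)
            for u in nodes:
                for v in adj[i][u]:
                    if I[v] > nxt[u]:              # larger ID arrives from v
                        nxt[u], parent[u] = I[v], v
            I = nxt
        packing.append(parent)                     # parent_i(.), i.e. G'_i
    return packing

# On a good color class that stays connected, the parent pointers form a
# spanning tree of depth <= z oriented towards the max-ID node.
nodes = list(range(6))
edges = [(u, v) for u in nodes for v in nodes if u < v]  # a small clique
print(build_tree_packing(nodes, edges, k=4, z=4)[0])
```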
Let \(V_{i}(j)\) be the nodes of distance \(j\) from \(v_{r}\) in \(G_{i}\), and let \(V_{i}(-1)=\{\bot\}\). We show by induction on \(j\geq 0\) that for any \(u\in V_{i}(j)\), it holds that \(I_{i,j}(u)=\operatorname{id}(v_{r})\) and \(\operatorname{parent}_{i}(u)\in V_{i}(j-1)\). Clearly, \(I_{i,0}(v_{r})=\operatorname{id}(v_{r})\) and, as \(\operatorname{parent}_{i}(v_{r})=\bot\), the claim holds for \(j=0\). Assume this is the case for \(j-1\). We note that the nodes of \(V\setminus\left(\bigcup_{a=0}^{j-1}V_{i}(a)\right)\) have not seen the value \(\operatorname{id}(v_{r})\) before round \(j\), as they are of distance at least \(j\) from \(v_{r}\) in \(G_{i}\). By induction, all \(u\in V_{i}(j-1)\) set \(I_{i,j-1}(u)=\operatorname{id}(v_{r})\); therefore any node \(u\in V_{i}(j)\) receives a message from a node \(v\in V_{i}(j-1)\) with the value \(\operatorname{id}(v_{r})\) and sets its parent to be in \(V_{i}(j-1)\). The induction holds and the claim follows.

Set the duration of the first phase to \(L=3d\log n/\phi\), where \(d\) is the constant from Lemma 3.13, and let \(k=20fL\).

**Lemma 3.15**.: _The collection of subgraphs \(\{G_{i}^{\prime}\}_{i\in[k]}\) is a weak \((k,\mathsf{D_{TP}},\eta)\) tree packing with \(\mathsf{D_{TP}}=O(\log n/\phi)\) and \(\eta=2\)._

Proof.: As the adversary can control at most \(f\) undirected edges in each round, it corrupts at most \(2fL\) directed edges. Since each directed edge \((u,v)\) belongs to a unique subgraph \(G_{i}\), in total there are at most \(2fL\) bad colors. Since \(k=20fL\), there are at least \(0.9k\) good subgraphs. By Lemma 3.14, for every good color \(i\), the subgraph \(G_{i}^{\prime}\) corresponds to a spanning tree of depth at most \(\mathsf{D_{TP}}\) and rooted at \(v_{r}\). Since each directed edge is committed to a single color, we get that \(\eta\leq 2\) (even among the bad subgraphs).

## 4 Resilience to Bounded Round-Error Corruption Rate

In this section, we consider a stronger adversary, which is allowed to corrupt \(f\) edges per round "on average", allowing, e.g., for specific rounds with an unbounded number of faults. We assume that all nodes terminate exactly at some time \(r^{\prime}\) for some given \(r^{\prime}\), and the adversary may corrupt a total of \(r^{\prime}\cdot f\) communication transmissions over the duration of the protocol. Our goal is, given a protocol \(\mathcal{A}\), to construct a protocol \(\mathcal{A}^{\prime}\) with the same output as \(\mathcal{A}\) which is resilient to this adversary.

**Theorem 4.1**.: _Given a distributed knowledge of a weak \((k,\mathsf{D_{TP}},\eta)\) tree packing \(\mathcal{T}\) for graph \(G\), then for any \(r\)-round algorithm \(\mathcal{A}\) over \(G\), there is an equivalent \(r^{\prime}\)-round algorithm \(\mathcal{A}^{\prime}\) for \(G\) that is resilient to round-error rate \(f\), where \(r^{\prime}=\widetilde{O}(r\cdot\mathsf{D_{TP}})\) and \(f=\Theta(\frac{k}{\eta\log n})\)._

Similarly to Section 3.2, we would like to simulate \(\mathcal{A}\) in a round-by-round manner, but in this setting we cannot correct all mismatches in every round, since the adversary may invest a "large budget" of faults in it. We use an approach referred to as the "rewind-if-error" paradigm for interactive coding. For a comprehensive exposition of the rewind-if-error paradigm, and of interactive coding in general, we refer to the excellent survey of [33].
Intuitively, our approach is as follows: If we have a round-error rate of \(f\), the number of rounds in which there can be more than \(\alpha f\) faulty messages is at most an \(O(1/\alpha)\)-fraction of the rounds. This means that if we apply the algorithm of Section 3 on each simulated round such that it is resilient to \(\alpha f\) total faults, in all but an \(O(1/\alpha)\)-fraction of the rounds there are no mismatches after a correction phase in the _entire network_. Rounds with more faults may cause errors; to remedy this, we combine the approach of Section 3 with a global variant of the rewind-if-error scheme, in which if _any_ error in the past transcript is detected in _any_ part of the network, the entire network "rewinds" a step (by having nodes delete the last symbol of the transcript). This in turn may cause several issues, such as causing some nodes to have more rounds in their transcript than others, which is why rewinding needs to be done with some care.

### Algorithm Description

We assume that the algorithm \(\mathcal{A}\) for the fault-free setting is for the \(1\)-CONGEST model. This can be assumed w.l.o.g. by paying an \(O(\log n)\) factor in its round complexity. Throughout the section, a symbol is a value in \(\{0,1,\bot\}\). Let \(\mathcal{H}_{k}=\left\{h_{i}:\{0,1\}^{k}\to\{0,1\}^{\Theta(\log n)}\right\}\) be the pairwise-independent hash function family of Lemma 1.11. Our algorithm has \(r^{\prime}=5r\) iterations (called _global-rounds_), each consisting of three phases: a round-initialization phase, a message-correcting phase, and a rewind-if-error phase. We design each phase to take \(O(t)\) rounds for some integer \(t=\widetilde{\Theta}(\mathsf{D_{TP}})\), and to be resilient (in a sense) to a total of \(O(ft)\) faults (i.e. a round-error rate of \(O(f)\)). Each node \(v\) maintains for each edge \((u,v)\) variables \(\widetilde{\pi}_{i}(u,v)\) (resp., \(\pi_{i}(v,u)\)), which informally contain an "estimated" transcript of all the messages received from \(u\) at \(v\) (resp., sent from \(v\) to \(u\)) in Alg. \(\mathcal{A}\) up to round \(|\widetilde{\pi}_{i}(u,v)|\) (resp., \(|\pi_{i}(v,u)|\)). Both variables are initially empty, i.e., \(\widetilde{\pi}_{1}(u,v)=\pi_{1}(v,u)=\emptyset\). Throughout the algorithm, the following invariant is maintained:

**Invariant 1**.: _For any global-round \(i\) and for any node \(v\), there exists a value \(\gamma(v,i)\) such that for every neighbor \(u\in N(v)\), \(|\pi_{i}(v,u)|=\gamma(v,i)\) and \(|\widetilde{\pi}_{i}(u,v)|=\gamma(v,i)\)._

Clearly the invariant holds vacuously for \(i=1\). We next describe the \(i^{th}\) global-round. Note that it might be the case that in the \(i^{th}\) global-round, node \(u\) simulates some round \(j_{u}\) of Alg. \(\mathcal{A}\) where \(j_{u}\leq i\), and possibly \(j_{u}\neq j_{v}\) for neighboring nodes \(u,v\).

Round-Initialization Phase: At the start of a global-round \(i\), each node \(u\) chooses for each neighbor \(v\) a random string of size \(O(\log n)\), denoted by \(R_{i}(u,v)\). Using \(R_{i}(u,v)\), \(u\) chooses a random hash \(h_{R_{i}(u,v)}\in\mathcal{H}\).
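Before continuing with the phase description, here is a toy instance of such a pairwise-independent family standing in for \(\mathcal{H}\): \(h_{a,b}(x)=((a\cdot x+b)\bmod P)\bmod M\), applied to the transcript packed into an integer. This is a standard textbook construction under our own parameter choices, not necessarily the one of Lemma 1.11.

```python
import random

P = (1 << 61) - 1   # prime exceeding any packed transcript block
M = 1 << 32         # Theta(log n)-bit output range

def choose_hash(randomness):
    # The O(log n)-bit string R_i(u,v) seeds the choice of (a, b),
    # so both endpoints derive the same hash function.
    rng = random.Random(randomness)
    a, b = rng.randrange(1, P), rng.randrange(P)
    return lambda x: ((a * x + b) % P) % M

def pack(transcript):
    # Injectively encode a binary transcript as an integer (the real
    # alphabet also has a bot symbol; binary keeps the toy short).
    return int("1" + transcript, 2)

h = choose_hash(randomness=42)
print(h(pack("0110")), h(pack("0111")))
# Equal transcripts always collide; unequal ones collide with
# probability O(1/M) over the choice of the seed.
```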
For \(2t\) rounds, it repeatedly sends to every neighbor \(v\) the message

\[M_{i}(u,v)=(m_{i}(u,v),R_{i}(u,v),h_{R_{i}(u,v)}(\pi_{i}(u,v)),|\pi_{i}(u,v)|),\]

where \(m_{i}(u,v)\) is the message \(u\) sends to \(v\) according to \(\mathcal{A}\) in the next round it simulates, assuming its incoming transcript is given by:

\[(\widetilde{\pi}_{i}(u_{1},u),\ldots,\widetilde{\pi}_{i}(u_{\deg(u)},u)).\]

We note that this is well defined due to Invariant 1. If the node \(u\) has terminated according to \(\mathcal{A}\) (i.e., based on its incoming transcript \(\{\widetilde{\pi}_{i}(w,u)\}_{w\in N(u)}\)), it sends the special symbol \(m_{i}(u,v)=\bot\) instead. In these \(2t\) rounds, each node \(v\) receives from each \(u\) the messages \(\widetilde{M}_{i}(u,v,1),\ldots,\widetilde{M}_{i}(u,v,2t)\). Following this, each \(v\) locally sets for each neighbor \(u\) a variable \(\widetilde{M}_{i}(u,v)=\mathsf{MAJ}(\widetilde{M}_{i}(u,v,1),\ldots,\widetilde{M}_{i}(u,v,2t))\), where the \(\mathsf{MAJ}(\cdot)\) function returns the majority value, or \(0\) if no majority exists.

Message-Correcting Phase: For an integer parameter \(d\), in a \(d\)-Message Correction Procedure, each vertex \(u\) holds as input a message for each neighbor \(M(u,v)\) (its _outgoing values_) and a received message \(\widetilde{M}(u,v)\) (its _incoming values_). Define the number of _mismatches_ in the input instance by \(|\{(u,v)\in E(G)\:|\:M(u,v)\neq\widetilde{M}(u,v)\}|\). Then, the message correction procedure _corrects_ all the mismatches under the promise that the number of mismatches in the input instance is at most \(d\) (and that the error-rate is at most \(f\)). By correcting mismatches, we mean that each node \(v\) obtains the correct received values \(M(u,v)\) from each neighbor \(u\). We prove in Appendix B the existence of the following procedure (which essentially mimics Step (2) of Subsection 3):

**Lemma 4.2**.: _Let \(t=\widetilde{\Omega}(\mathsf{D_{TP}})\) be an integer known to all nodes. Given a distributed knowledge of a weak \((k,\mathsf{D_{TP}},\eta)\) tree packing \(\mathcal{T}\) for graph \(G\), the \(d\)-message correction procedure is a \(\Theta(t)\)-round procedure that is resilient to a round-error rate of at most \(d\), assuming that \(d=O(\frac{k}{\eta\log n})\). The round complexity of the algorithm does not depend on \(d\)._

Specifically, the algorithm applies the \(d\)-Message Correction Procedure of Lemma 4.2 with \(d=\alpha f\) for a sufficiently large \(\alpha\geq 1\), and with the lists \(\{M_{i}(u,v)\}_{v\in N(u)}\) (resp., \(\{\widetilde{M}_{i}(v,u)\}_{v\in N(u)}\)) as the outgoing (resp., incoming) values for each \(u\). At the end of the correction, each node \(v\) obtains for each neighbor \(u\) a value \(M^{\prime}_{i}(u,v)=(m^{\prime}_{i}(u,v),R^{\prime}_{i}(u,v),x^{\prime}_{i}(u,v),\ell^{\prime}_{i}(u,v))\). If the promise for the correction procedure holds (i.e., the number of mismatches is at most \(d\)), then \(M^{\prime}_{i}(u,v)=M_{i}(u,v)\) for every \((u,v)\in E\).

Rewind-If-Error Phase: Every node \(v\) locally checks for every neighbor \(u\) whether \(|\widetilde{\pi}_{i}(u,v)|=\ell^{\prime}_{i}(u,v)\) and \(h_{R^{\prime}_{i}(u,v)}(\widetilde{\pi}_{i}(u,v))=x^{\prime}_{i}(u,v)\). If these conditions hold for all neighbors, then \(v\) sets \(\mathrm{GoodState}_{i}(v)=1\), and otherwise \(\mathrm{GoodState}_{i}(v)=0\). The network then computes \(\mathrm{GoodState}_{i}=\min_{v}\mathrm{GoodState}_{i}(v)\) and \(\ell_{i}=\max_{v}\gamma(v,i)\) in the following manner.
For every \(T_{j}\in\mathcal{T}\), we define the following protocol \(\Pi_{j}\): In tree \(T_{j}\), perform an upcast of the variables \(\mathrm{GoodState}_{i}(v)\) and \(|\widetilde{\pi}_{i}(u,v)|\), taking the minimum and maximum during the upcast, respectively. Then, in a downcast from the root \(v_{r}\) to the leaves, these values are propagated downwards. Finally, all nodes wait until a total of \(t\) rounds elapse. Let \(\Pi_{RS,j}\) be the RS-compiled procedure using Theorem 3.2. We schedule all \(k\) algorithms \(\Pi_{RS,1},\ldots,\Pi_{RS,k}\), in parallel, using Lemma 3.3. Let \(\ell^{\prime}_{i}(v),\mathrm{GoodState}^{\prime}_{i}(v)\) be the values that node \(v\) receives in the majority of trees (or zero if there is no majority). If \(\mathrm{GoodState}^{\prime}_{i}(v)=1\), then \(v\) extends its received and sent transcripts by:

\[\widetilde{\pi}_{i+1}(u,v)=\widetilde{\pi}_{i}(u,v)\circ m^{\prime}_{i}(u,v)\text{ and }\pi_{i+1}(v,u)=\pi_{i}(v,u)\circ m_{i}(v,u),\forall u\in N(v)\.\]

If \(\mathrm{GoodState}^{\prime}_{i}(v)=0\) and \(\gamma(v,i)=\ell^{\prime}_{i}(v)\), then \(v\) sets:

\[\widetilde{\pi}_{i+1}(u,v)=\mathrm{DeleteLast}(\widetilde{\pi}_{i}(u,v))\text{ and }\pi_{i+1}(v,u)=\mathrm{DeleteLast}(\pi_{i}(v,u)),\forall u\in N(v),\]

where \(\mathrm{DeleteLast}\) is a function that removes the last symbol of its input string. In the remaining case, where \(\mathrm{GoodState}^{\prime}_{i}(v)=0\) and \(\gamma(v,i)\leq\ell^{\prime}_{i}(v)-1\), the \((i+1)^{th}\) transcripts are unchanged, i.e., \(\widetilde{\pi}_{i+1}(u,v)=\widetilde{\pi}_{i}(u,v)\) and \(\pi_{i+1}(v,u)=\pi_{i}(v,u)\) for every \(u\in N(v)\). This concludes the description of a global-round. After the final global-round, each node \(v\) outputs a value according to what it would output in \(\mathcal{A}\) if its incoming transcript from each neighbor \(u\) was \(\widetilde{\pi}_{r^{\prime}}(u,v)\), together with \(v\)'s inputs.

### Analysis

Let \(t^{\prime}=\widetilde{O}(\mathsf{D_{TP}})\) be an upper bound on the round complexity of each of the phases, and set \(\alpha=15t^{\prime}/t\). We note that the round complexity of Lemma 4.2 is unaffected by \(\alpha\), so this is well defined. We say a global-round is _bad_ if during its execution the adversary corrupts at least \(\alpha ft\) messages (i.e., the error-rate for that global-round is \(\Omega(f)\)). Otherwise, the global-round is _good_. Let \(\Gamma(u,v)\) denote the transcript of the protocol \(\mathcal{A}\) on \((u,v)\) in a fault-free network, padded by a suffix of \(\mathrm{poly}(n)\) many \(\perp\) symbols18.

Footnote 18: Intuitively, it's important to add the \(\perp\) symbols to the end of the estimated transcripts and to \(\Gamma\), in order to allow the potential function, defined later, to grow beyond \(r\). Otherwise, if not handled carefully, a single adversarial rewind in the last global-round could have caused an error.

First, we prove Invariant 1, i.e., that at any time \(i\), and for any node \(v\), there exists a value \(\gamma(v,i)\) such that for every neighbor \(u\in N(v)\), \(|\pi_{i}(v,u)|=\gamma(v,i)\) and \(|\widetilde{\pi}_{i}(u,v)|=\gamma(v,i)\).

Proof of Invariant 1.: Let \(v\) be a node. We prove the invariant by induction on \(i\). For \(i=1\) the claim is trivial, since for any neighbor \(u\), \(\widetilde{\pi}_{i}(u,v)=\emptyset\) and \(\pi_{i}(v,u)=\emptyset\). Assume this holds for \(i-1\), where \(i\geq 2\).
By the induction assumption, there is some value \(\gamma(v,i-1)\) such that \(|\pi_{i-1}(v,u)|=\gamma(v,i-1)\) and \(|\widetilde{\pi}_{i-1}(u,v)|=\gamma(v,i-1)\). Recall that for any neighbor \(u\), the variables \(\widetilde{\pi}_{i}(u,v)\) and \(\pi_{i}(v,u)\) of \(v\) are only set in the Rewind-If-Error phase. If \(\mathrm{GoodState}^{\prime}_{i-1}(v)=1\) then \(v\) adds a single symbol to \(\widetilde{\pi}_{i}(u,v)\) and \(\pi_{i}(v,u)\) for each neighbor \(u\) compared to the previous global-round, and the induction claim follows with \(\gamma(v,i)=\gamma(v,i-1)+1\). If \(\mathrm{GoodState}^{\prime}_{i-1}(v)=0\) then, if \(\gamma(v,i-1)=\ell^{\prime}_{i-1}(v)\), \(v\) removes a symbol from all \(\widetilde{\pi}_{i}(u,v)\) and \(\pi_{i}(v,u)\) compared to the previous global-round. Otherwise, \(|\widetilde{\pi}_{i}(u,v)|\) and \(|\pi_{i}(v,u)|\) remain unchanged compared to the previous global-round.

**Lemma 4.3**.: _At most \(r\) global-rounds are bad._

Proof.: Since there are in total at most \(3t^{\prime}r^{\prime}\) rounds, the total number of corrupted messages is at most \(15t^{\prime}\cdot r\cdot f=\alpha\cdot t\cdot r\cdot f\). Since a bad global-round is a global-round with at least \(\alpha ft\) total faults, the number of bad global-rounds is at most \(r\).

For strings \(a,b\), let \(\mathrm{prefix}(a,b)\) be the maximum index \(j\) for which \(a,b\) agree on the first \(j\) symbols. Let \(g(u,v,i)=2\cdot\mathrm{prefix}(\widetilde{\pi}_{i}(u,v),\Gamma(u,v))\) and \(g(i)=\min_{(u,v)\in E}g(u,v,i)\). To analyze the progress of the algorithm, define the potential function \(\Phi(i)\) by:

\[\Phi(i)=\min_{(u,v)\in E}\left(2\cdot\mathrm{prefix}(\widetilde{\pi}_{i}(u,v),\Gamma(u,v))\right)-\max_{(u,v)\in E}|\widetilde{\pi}_{i}(u,v)|. \tag{10}\]

In other words, \(\Phi(i)=g(i)-\ell_{i}\). In the following we bound the potential \(\Phi(i)\) in an inductive manner, depending on whether \(i\) is a good or a bad global-round.

**Lemma 4.4**.: _If \(i\) is a bad global-round, then \(\Phi(i+1)\geq\Phi(i)-3\)._

Proof.: In any global-round \(i\), the strings \(\widetilde{\pi}_{i}(u,v)\) and \(\widetilde{\pi}_{i+1}(u,v)\) differ by at most one symbol in the suffix (either (a) the last symbol is removed, (b) a new last symbol is added, or (c) the variable remains the same), for any \((u,v)\in E\). Therefore, \(g(i+1)\geq g(i)-2\), and \(\ell_{i+1}\leq\ell_{i}+1\). It follows that \(\Phi(i+1)\geq\Phi(i)-3\).

To bound the potential increase for good global-rounds, we need the following auxiliary claims.

**Lemma 4.5**.: _If \(i\) is a good global-round, then each node \(v\) receives from every neighbor \(u\) the correct value \(M_{i}(u,v)\) in the Message-Correcting phase (i.e., \(M_{i}^{\prime}(u,v)=M_{i}(u,v)\) for every \(u\in N(v)\)), and the correct values of \(\mathrm{GoodState}_{i}\) and \(\ell_{i}\) in the Rewind-If-Error phase._

Proof.: Since \(i\) is a good global-round, there are at most \(\alpha ft\) corrupted messages throughout its execution. Recall that for the message \(M_{i}(u,v)\) to be received incorrectly by \(v\), there must be at least \(t\) faults on the edge \((u,v)\) in this phase. Therefore, in the Round-Initialization phase, there are at most \(\alpha\cdot f\) adjacent pairs \(u,v\) such that \(M_{i}(u,v)\) is not correctly decoded by \(v\). Given that after the Round-Initialization phase there are at most \(\alpha\cdot f\) adjacent pairs \(u,v\) such that \(\widetilde{M}_{i}(u,v)\neq M_{i}(u,v)\), the promise on the input of Lemma 4.2 holds.
By Lemma 4.2, the Message-Correcting Phase has round complexity \(O(t)\) and resilience to \(\Theta(\frac{k}{\eta\log n})\) corrupted messages, which for the assumed \(k\) exceeds \(\alpha ft\) total faults. Therefore, it succeeds assuming at most \(\alpha\cdot f\cdot t\) corrupted messages. The Rewind-If-Error phase consists of a single application of the scheduler of Lemma 3.3 on protocols with round complexity \(t\), and some local computation. Therefore, at least \(k-\alpha ft\) of the protocols succeed. By the assumption on \(k\), \(k-\alpha ft\geq k/2+1\), and therefore the majority values of \(\mathrm{GoodState}_{i}\) and \(\ell_{i}\) received by any node \(v\) are the correct ones.

**Lemma 4.6**.: _Let \(v\) be a node, \(i\geq 1\) be an integer, and \(0\leq j\leq\gamma(v,i)-1\). If for all neighbors \(u\in N(v)\) it holds that \(\mathrm{prefix}(\widetilde{\pi}_{i}(u,v),\Gamma(u,v))\geq j\), then for any neighbor \(w\in N(v)\) it holds that \(\mathrm{prefix}(\pi_{i}(v,w),\Gamma(v,w))\geq j+1\)._

Proof.: Let \(i^{*}\leq i-1\) be the last global-round before \(i\) in which \(\gamma(v,i^{*})=j\). By the choice of \(i^{*}\), for any index \(i^{*}<i^{\prime}\leq i\) it holds that \(\gamma(v,i^{\prime})\geq j+1\), and in particular, the first \(j\) symbols remain the same between iterations \(i^{*}\) and \(i\), i.e.

\[\mathrm{prefix}(\widetilde{\pi}_{i^{*}}(u,v),\widetilde{\pi}_{i}(u,v))\geq j\;. \tag{11}\]

By Equation (11) and the assumption, it holds that \(\mathrm{prefix}(\widetilde{\pi}_{i^{*}}(u,v),\Gamma(u,v))\geq j\) for all neighbors \(u\in N(v)\). In addition, by the choice of \(i^{*}\), \(\gamma(v,i^{*}+1)=\gamma(v,i^{*})+1\). Therefore, in that round \(v\) sets, for every \(w\in N(v)\), the value \(\pi_{i^{*}+1}(v,w)\) such that \(\mathrm{prefix}(\pi_{i^{*}+1}(v,w),\Gamma(v,w))\geq j+1\). By Equation (11) it follows that \(\mathrm{prefix}(\pi_{i}(v,w),\Gamma(v,w))\geq j+1\).

**Lemma 4.7**.: _Assume that there exists \(\gamma_{i}\) such that for any \(u,v\), it holds that \(|\widetilde{\pi}_{i}(u,v)|=|\pi_{i}(u,v)|=\gamma_{i}\). Then the following two conditions are equivalent:_
1. _For all adjacent nodes_ \(u,v\)_, it holds that_ \(\pi_{i}(u,v)=\widetilde{\pi}_{i}(u,v)\)_._
2. _For all adjacent nodes_ \(u,v\)_, it holds that_ \(\operatorname{prefix}(\widetilde{\pi}_{i}(u,v),\Gamma(u,v))=|\widetilde{\pi}_{i}(u,v)|\)_._

Proof.: First, we show that (1) implies (2). Assume that for all adjacent nodes \(u,v\), it holds that \(\pi_{i}(u,v)=\widetilde{\pi}_{i}(u,v)\). We prove by induction on \(j\leq\gamma_{i}\) that \(\operatorname{prefix}(\widetilde{\pi}_{i}(u,v),\Gamma(u,v))\geq j\), for all \((u,v)\in E\). For \(j=0\) the claim is trivial. Assume the claim holds up to \(j-1\) and consider \(j\geq 1\). By the induction assumption, for any node \(u\), the first \(j-1\) symbols of \(\widetilde{\pi}_{i}(w,u)\) are the same as those of \(\Gamma(w,u)\); therefore, by Lemma 4.6, the \(j\)'th entry in \(\pi_{i}(u,v)\) is also consistent with \(\Gamma(u,v)\). Moreover, since \(\pi_{i}(u,v)=\widetilde{\pi}_{i}(u,v)\), the \(j\)'th entry in \(\widetilde{\pi}_{i}(u,v)\) is also consistent with \(\Gamma(u,v)\). This concludes the induction. Next, we show that (2) implies (1).
Assume that for all adjacent nodes \(u,v\) it holds that

\[\operatorname{prefix}(\widetilde{\pi}_{i}(u,v),\Gamma(u,v))=|\widetilde{\pi}_{i}(u,v)|.\]

By Lemma 4.6, since \(\operatorname{prefix}(\widetilde{\pi}_{i}(u,v),\Gamma(u,v))\geq\gamma_{i}-1\) for all neighbors \(u\in N(v)\), it follows that \(\operatorname{prefix}(\pi_{i}(v,w),\Gamma(v,w))\geq\gamma_{i}\) for any neighbor \(w\in N(v)\). Since, by assumption, \(\operatorname{prefix}(\widetilde{\pi}_{i}(v,w),\Gamma(v,w))\geq\gamma_{i}\), it holds that \(\operatorname{prefix}(\widetilde{\pi}_{i}(v,w),\pi_{i}(v,w))\geq\gamma_{i}\). The claim follows.

**Lemma 4.8**.: _If \(i\) is a good global-round, then \(g(i+1)\geq g(i)\)._

Proof.: By Lemma 4.5, each node \(v\) receives at the end of the global-round the correct value \(M_{i}(u,v)\). Let \((u,v)\in E\) be a pair of nodes minimizing \(g(u,v,i)\). We distinguish between the following cases:

**Case 1: \(|\widetilde{\pi}_{i}(u,v)|\leq\ell_{i}-1\) or \(\operatorname{prefix}(\widetilde{\pi}_{i}(u,v),\Gamma(u,v))<|\widetilde{\pi}_{i}(u,v)|\).** In both cases, it holds that \(\operatorname{prefix}(\widetilde{\pi}_{i+1}(u,v),\Gamma(u,v))=\operatorname{prefix}(\widetilde{\pi}_{i}(u,v),\Gamma(u,v))\). To see this, note that only the last symbol of a transcript of length \(\ell_{i}\) is deleted.

**Case 2: \(|\widetilde{\pi}_{i}(u,v)|=\ell_{i}\), and \(\operatorname{prefix}(\widetilde{\pi}_{i}(u,v),\Gamma(u,v))=|\widetilde{\pi}_{i}(u,v)|\).** Since the pair \(u,v\) minimizes \(g(\cdot,\cdot,i)\), for all neighboring pairs \(u^{\prime},v^{\prime}\) it holds that \(g(u^{\prime},v^{\prime},i)\geq g(u,v,i)=2\ell_{i}\). Consequently, \(\operatorname{prefix}(\widetilde{\pi}_{i}(u^{\prime},v^{\prime}),\Gamma(u^{\prime},v^{\prime}))=|\widetilde{\pi}_{i}(u^{\prime},v^{\prime})|\), and \(|\widetilde{\pi}_{i}(u^{\prime},v^{\prime})|=|\widetilde{\pi}_{i}(u,v)|\). By Invariant 1 it holds that \(\gamma(u^{\prime},i)=\gamma(v^{\prime},i)\) for any \(u^{\prime},v^{\prime}\), and by Lemma 4.7, it follows that \(\operatorname{GoodState}_{i}=1\), and no symbol is deleted in any transcript, i.e. \(g(i+1)\geq g(i)\).

**Lemma 4.9**.: _If \(i\) is a good global-round, then \(\Phi(i+1)\geq\Phi(i)+1\) w.h.p._

Proof.: By Lemma 4.5, each node \(v\) receives at the end of the global-round the correct value \(M_{i}(u,v)\). If for some two nodes \(v_{1},v_{2}\), it holds that \(\gamma(v_{1},i)\neq\gamma(v_{2},i)\), then every \(v\) such that \(\gamma(v,i)=\ell_{i}\) deletes the last symbol, meaning \(\ell_{i+1}=\ell_{i}-1\). Moreover, by Lemma 4.8, the value \(g(i)\) does not decrease, and the claim follows. Otherwise, there exists some \(\gamma_{i}\) such that \(\gamma(v,i)=\gamma_{i}\) for all \(v\). We split into two cases: first, assume that there exist adjacent nodes \(u,v\) such that \(\pi_{i}(u,v)\neq\widetilde{\pi}_{i}(u,v)\). Since \(\mathcal{H}\) is a pairwise-independent hash function family, \(h_{R_{i}(u,v)}(\pi_{i}(u,v))\neq h_{R_{i}(u,v)}(\widetilde{\pi}_{i}(u,v))\) w.h.p. over the randomness \(R_{i}(u,v)\) used in the "Rewind-If-Error" Phase.19 Therefore, \(\mathrm{GoodState}_{i}=0\), and by Lemma 4.5 all nodes in the network receive this value, meaning each node \(v\) deletes for each neighbor \(u\) the last symbol of \(\widetilde{\pi}_{i}(u,v)\), implying that \(\ell_{i+1}=\ell_{i}-1\). On the other hand, by Lemma 4.8, \(g(i)\) does not decrease. This implies that in this case \(\Phi(i+1)\geq\Phi(i)+1\). It remains to consider the case where \(\gamma(v,i)=\gamma_{i}\) and \(\pi_{i}(u,v)=\widetilde{\pi}_{i}(u,v)\) for all \((u,v)\in E\).
By Lemma 4.7 applied to round \(i\), combined with Lemma 4.5, each node \(v\) receives from each \(u\) the value \(m_{i}(u,v)\), which is the next message according to \(\Gamma\). Therefore, \(g(i+1)=g(i)+2\), while \(\ell_{i+1}\leq\ell_{i}+1\), meaning \(\Phi(i+1)\geq\Phi(i)+1\).

**Lemma 4.10**.: _\(\mathrm{prefix}(\widetilde{\pi}_{r^{\prime}}(u,v),\Gamma(u,v))\geq r\) for every \((u,v)\in E\). Consequently, the protocol \(\mathcal{A}^{\prime}\) has the same output as \(\mathcal{A}\)._

Proof.: By Lemma 4.9, in a good global-round, the potential function increases by at least one. By Lemma 4.4, in a bad global-round, it decreases by at most three. By Lemma 4.3, there are at most \(r\) bad global-rounds, and at least \(r^{\prime}-r=4r\) good global-rounds. Therefore, at the end of all the global-rounds, \(\Phi(r^{\prime})\geq 4r-3r=r\). Since \(\Phi(r^{\prime})\geq r\), by Eq. (10), it holds that \(\mathrm{prefix}(\widetilde{\pi}_{r^{\prime}}(u,v),\Gamma(u,v))\geq\Phi(r^{\prime})\geq r\) for every \((u,v)\in E\). Recall that the output of \(v\) in \(\mathcal{A}^{\prime}\) is determined by \(\{\widetilde{\pi}_{r^{\prime}}(u,v)\}_{u\in N(v)}\) and \(v\)'s input. Since the first \(r\) symbols of the estimated incoming transcript are equal to the incoming transcript in \(\mathcal{A}\), the output is also the same.

We are now ready to complete the proof of Thm. 4.1.

Proof of Theorem 4.1.: The correctness follows by Lemma 4.10. The round complexity of the first phase is \(\widetilde{O}(\mathsf{D_{TP}})\) rounds. By Lemma 4.2, the correction phase also takes \(\widetilde{O}(\mathsf{D_{TP}})\) rounds. Each algorithm \(\Pi_{j}\) of the last phase takes \(\widetilde{O}(\mathsf{D_{TP}})\) rounds, and therefore \(\Pi_{RS,j}\) also takes \(\widetilde{O}(\mathsf{D_{TP}})\) rounds. The final round complexity then follows by the RS-scheduling of Lemma 3.3.

### Applications

**Congested Clique Model.** We provide a variant of Theorem 1.6 for the bounded error-rate setting.

**Theorem 4.11** (Mobile-Resilient Compilers in the Congested Clique).: _For any algorithm \(\mathcal{A}\) that runs in \(r\) congested-clique rounds, there is an equivalent algorithm \(\mathcal{A}^{\prime}\) resilient to a round-error rate of \(f\) for \(f=\Theta(n/\log n)\), which runs in \(\widetilde{O}(r)\) CONGESTED CLIQUE rounds._

Proof.: Similarly to Theorem 1.6, the proof follows by noting that an \(n\)-node clique over vertices \(V=\{v_{1},\ldots,v_{n}\}\) contains a \((k,\mathsf{D_{TP}},\eta)\) tree packing \(\mathcal{T}=\{T_{1},\ldots,T_{k}\}\) for \(k=n\) and \(\mathsf{D_{TP}}=\eta=2\). Specifically, for every \(i\in\{1,\ldots,n\}\), let \(T_{i}=(V,E_{i})\) where \(E_{i}=\{(v_{i},v_{j})\mid v_{j}\in V\}\), be the star centered at \(v_{i}\). It is easy to see that the diameter and the load are exactly \(2\). The claim follows immediately from Theorem 4.1.

**Expander Graphs.** Next we show an analog of Theorem 1.7 in the bounded error-rate setting.

**Theorem 4.12** (Mobile-Resilient Compilers for Expander Graphs).: _Assume \(G\) is a \(\phi\)-expander with minimum degree \(k=\widetilde{\Omega}(1/\phi^{2})\)._
_Then, for any algorithm \(\mathcal{A}\) that runs in \(r\) CONGEST rounds, there is an equivalent algorithm \(\mathcal{A}^{\prime}\) which is resilient to a round-error rate of \(f\), for \(f=\widetilde{O}(k\phi^{2})\), and runs in \(\widetilde{O}(r/\phi)\) CONGEST rounds._

Let \(c^{\prime\prime}\) be the constant of Lemma 3.12, and let \(f=\frac{k\phi^{2}}{c\log^{c^{\prime}}n}\) for sufficiently large constants \(c,c^{\prime}\), in particular with regard to \(c^{\prime\prime}\). We assume \(\mathcal{A}\) is an \(r\)-round algorithm, and let \(r^{\prime}\) be the round complexity of an \(r\)-round algorithm compiled by Theorem 4.1 against a round-error rate of at most \((4c^{\prime\prime}+2)f\). We slightly adapt the tree-packing algorithm used in Theorem 1.7 so that the computation of the weak tree packing is resilient to many faults. This is done simply by repeating each round of this computation for \(O(r^{\prime}\cdot\phi/\log n)\) rounds, and then running the algorithm of Theorem 4.1 so that it is resilient to a round-error rate of \((4c^{\prime\prime}+2)f\).

**The Algorithm.** We define a _padded-round_ as a round of communication where each node \(u\) sends to each neighbor \(v\) a message \(m(u,v)\) repeatedly for \(s=r^{\prime}\cdot\phi/\log n\) rounds. The received message of \(v\) from \(u\) is defined as the majority value seen by \(v\) during these \(s\) rounds, or \(0\) if a majority does not exist. The following protocol is defined by a series of padded-rounds: In the first padded-round, for every edge \((u,v)\in E\) with \(\operatorname{id}(u)>\operatorname{id}(v)\), let \(u\) choose a color \(c(u,v)\) sampled uniformly at random in \([k]\) and send this color to \(v\). Let \(c(v,u)\) be the received message of \(v\) from \(u\) in this padded-round; \(v\) sets the color of the edge \((u,v)\) to the received value \(c(v,u)\). For every \(i\in[k]\), define the directed subgraph:

\[G_{i}=\{(u,v)\ \mid\ \operatorname{id}(u)>\operatorname{id}(v),c(u,v)=i\}\cup\{(v,u)\ \mid\ \operatorname{id}(u)>\operatorname{id}(v),c(v,u)=i\}.\]

Let \(N_{i}(u)\) be the outgoing neighbors of \(u\) in \(G_{i}\). Each node \(u\) proceeds to perform the following padded-round BFS procedure in each \(G_{i}\): Set a variable \(I_{i,0}(u)=\operatorname{id}(u)\) and \(\operatorname{parent}_{i}(u)=\bot\) for each color \(i\in[k]\). For \(\ell=1,\ldots,z=4c^{\prime\prime}\log n/\phi\) padded-rounds, in parallel for all colors \([k]\), each node \(u\) sends to each (outgoing) neighbor \(v\in N_{i}(u)\) the value \(I_{i,\ell-1}(u)\). Let \(L_{i,\ell}\) be the set of messages \(u\) receives from neighbors \(\{v\in N(u)\mid c(u,v)=i\}\). Each node \(u\) sets \(I_{i,\ell}(u)=\max_{\operatorname{id}(v)\in L_{i,\ell}\cup\{\operatorname{id}(u)\}}(\operatorname{id}(v))\), and if this value strictly increases, it sets \(\operatorname{parent}_{i}(u)\) to be the neighbor from which this value arrived, oriented towards the other endpoint. In the final padded-round, each node \(u\) sends in parallel for each \(i\) a message to \(\operatorname{parent}_{i}(u)\), to orient the edge \(\operatorname{parent}_{i}(u)\) towards itself. Each node that receives an edge-orientation request locally sets the orientation of that edge towards itself.

**Analysis.** A color \(i\in[k]\) is denoted as _good_ if the adversary did not control any edge of \(G_{i}\) for more than \(s/2\) rounds during the tree computation phase. Otherwise, the color \(i\) is defined as _bad_.
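A minimal sketch of the padded-round primitive defined above: a message is retransmitted \(s\) times and the receiver takes the strict majority (or \(0\) if none exists), so the adversary must invest more than \(s/2\) faults on an edge to flip a single padded-round. The sample data is made up.

```python
from collections import Counter

def padded_receive(received_copies, s):
    # Majority over the s retransmissions; 0 if no strict majority.
    value, count = Counter(received_copies).most_common(1)[0]
    return value if count > s // 2 else 0

s = 9
copies = ["c=3"] * 6 + ["c=7"] * 3        # adversary corrupts 3 of 9 rounds
print(padded_receive(copies, s))          # -> "c=3"
```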
In particular, in any good color, the adversary did not change the outcome of any of the BFS procedures, and the guarantees of Lemmas 3.13 and 3.14 hold for the altered algorithm and the altered notion of a good color, as formalized in the following lemma:

**Lemma 4.13**.: _For each good color \(i\in[k]\), the output directed subgraph \(G_{i}^{\prime}\) defined by \(\{\operatorname{parent}_{i}(u)\}_{u\in V}\) is a directed spanning tree of depth \(4c^{\prime\prime}\log n/\phi\) oriented towards the node with the largest ID in the network, \(v_{r}=\operatorname{argmax}_{v}\operatorname{id}(v)\)._

Proof.: For each good color \(i\in[k]\), we have that \(G_{i}=G[p]\) where each edge in \(G\) is sampled independently with probability \(p=1/k=O(\frac{\phi\log n}{f})\). Note that since the adversary does not affect the padded-received message of any of the \(G_{i}\) edges, all edges in \(G_{i}\) are bidirectional. By Theorem 3.11 for \(\alpha=\phi\) and \(\gamma=\widetilde{O}(f/\phi^{3})\) chosen such that \(p=\min\{1,\ln n/(\alpha^{2}\gamma)\}\), w.h.p. a subgraph obtained by taking each edge in \(G\) independently with probability \(p\) has conductance at least \(\phi/4\). By the union bound, all graphs \(G_{i}\) have conductance \(\geq\phi/4\), w.h.p. Therefore, by Lemma 3.12 the graph \(G_{i}\) has diameter at most \(4c^{\prime\prime}\cdot\log n/\phi\). Since \(i\) is good, the adversary did not control any of the edges of \(G_{i}\) throughout the entire algorithm. Since \(v_{r}=\operatorname{argmax}_{v}\operatorname{id}(v)\), and all the messages \(v_{r}\) receives are real IDs of nodes, it never changes the value \(\operatorname{parent}_{i}(v_{r})=\bot\). Let \(V_{i}(j)\) be the nodes of distance \(j\) from \(v_{r}\) in \(G_{i}\), and let \(V_{i}(-1)=\{\bot\}\). We show by induction on \(j\geq 0\) that for any \(u\in V_{i}(j)\), it holds that \(I_{i,j}(u)=\operatorname{id}(v_{r})\) and \(\operatorname{parent}_{i}(u)\in V_{i}(j-1)\). Clearly, \(I_{i,0}(v_{r})=\operatorname{id}(v_{r})\) and, as \(\operatorname{parent}_{i}(v_{r})=\bot\), the claim holds for \(j=0\). Assume this is the case for \(j-1\). We note that the nodes of \(V\setminus\left(\bigcup_{a=0}^{j-1}V_{i}(a)\right)\) have not seen the value \(\operatorname{id}(v_{r})\) before round \(j\), as they are of distance at least \(j\) from \(v_{r}\) in \(G_{i}\). By induction, all \(u\in V_{i}(j-1)\) set \(I_{i,j-1}(u)=\operatorname{id}(v_{r})\); therefore any node \(u\in V_{i}(j)\) receives a message from a node \(v\in V_{i}(j-1)\) with the value \(\operatorname{id}(v_{r})\) and sets its parent to be in \(V_{i}(j-1)\). The induction holds and the claim follows.

We conclude with the following claim:

**Lemma 4.14**.: _The collection of subgraphs \(\{G^{\prime}_{i}\}_{i\in[k]}\) is a weak \((k,\mathsf{D_{TP}},\eta)\) tree packing with \(\mathsf{D_{TP}}=O(\log n/\phi)\) and \(\eta=2\)._

Proof.: The total number of corrupted messages is at most \((4c^{\prime\prime}+2)r^{\prime}f\). For a color to be bad, there have to be at least \(r^{\prime}(\phi/\log n)/2\) corrupted messages associated with it. Since we assume \(f=\widetilde{O}(k\phi^{2})=\widetilde{O}(r^{\prime}\phi)\), there are at most \(\frac{(4c^{\prime\prime}+2)r^{\prime}f}{r^{\prime}(\phi/\log n)/2}=\frac{(4c^{\prime\prime}+2)\cdot f}{(\phi/\log n)/2}\leq k/10\) bad colors, where the last inequality follows from choosing large enough constants \(c,c^{\prime}\) for \(f\). Therefore, there are at least \(0.9k\) good colors. By Lemma 4.13, for every good color \(i\), the subgraph \(G^{\prime}_{i}\) corresponds to a spanning tree of depth at most \(\mathsf{D_{TP}}\) rooted at \(v_{r}\).
Since each directed edge is committed to a single color, we get that \(\eta\leq 2\) (even among the bad subgraphs). 

Proof of Theorem 4.12.: The round complexity of this procedure is at most \((4c^{\prime\prime}+2)r^{\prime}\). By Lemma 4.14, the first phase of the algorithm computes a weak-\((k,\mathsf{D_{TP}},\eta)\) tree packing. In the second phase, we run the compilation of \(\mathcal{A}\) given by 4.13, with round-error rate \((4c^{\prime\prime}+2)f\). Since the round complexity of the second phase is \(r^{\prime}\) and since there are at most \((4c^{\prime\prime}+2)r^{\prime}f\) corrupted messages in total, we are guaranteed that the output is the same as in \(\mathcal{A}\). 

## 5 Mobile Resilience using Fault-Tolerant Cycle Covers

In this section we present \(f\)-mobile-resilient algorithms. We use the same techniques as in [60] for the unicast case, and extend this approach to obtain a general compiler using cycle covers. We also note that this approach is a direct generalization of the compiler of [59] against a \(1\)-mobile byzantine adversary.

**Definition 8** (Low-Congestion FT Cycle-Covers).: _For a given graph \(G=(V,E)\), an \(f\)-FT \((\operatorname{cong},\operatorname{dilation})\)-Cycle Cover is a collection of paths20 \(\mathcal{P}=\bigcup_{e\in E}\mathcal{P}(e)\) where each \(\mathcal{P}(e=(u,v))\) consists of \(f\) edge-disjoint \(u\)-\(v\) paths, with the following properties: (i) \(\max_{P\in\mathcal{P}}|P|\leq\operatorname{dilation}\) and (ii) each \(e\in E\) appears on at most \(\operatorname{cong}\) paths in \(\mathcal{P}\) (i.e., \(\operatorname{load}(\mathcal{P})\leq\operatorname{cong}\))._

Footnote 20: For our purposes, it is instructive to view it as a collection of \(f\) edge-disjoint paths between each neighboring pair \(u,v\). This is equivalent to \((f-1)\) cycles covering the edge \((u,v)\).

**Theorem 5.1** (Existence of Low-Congestion FT-Cycle Covers, [40]).: _Every \(f\)-edge connected graph \(G=(V,E)\) admits an \(f\)-FT \((\operatorname{cong},\operatorname{dilation})\)-Cycle Cover with \(\operatorname{cong},\operatorname{dilation}=D^{O(f)}\log(n)\). Moreover, in the fault-free setting these cycles can be computed in \(\operatorname{cong}+\operatorname{dilation}\) rounds._

For an \(f\)-FT cycle cover \(\mathcal{P}\), we define the _path-conflict graph_ to be a graph \(H=(E,E_{H})\) with a vertex \(v_{e}\) for every edge \(e\in E\), and where \(\{v_{e_{1}},v_{e_{2}}\}\in E_{H}\) if and only if there are paths \(P_{1}\in P(e_{1})\) and \(P_{2}\in P(e_{2})\) that share at least one edge.

**Lemma 5.2**.: _Let \(\mathcal{P}\) be an \(f\)-FT \((\operatorname{cong},\operatorname{dilation})\)-Cycle Cover. There exists a coloring of the edges of \(E\), \(\operatorname{Col}:E\to[f\cdot\operatorname{dilation}\cdot\operatorname{cong}+1]\), such that for two distinct edges \(e_{1},e_{2}\in E\), if \(\operatorname{Col}(e_{1})=\operatorname{Col}(e_{2})\), then any two paths \(P_{1}\in P(e_{1})\) and \(P_{2}\in P(e_{2})\) are edge-disjoint._

Proof.: We note that in the _path-conflict graph_ \(H\), the degree of each vertex \(v_{e}\) is at most \(f\cdot\operatorname{dilation}\cdot\operatorname{cong}\), since for each \(v_{e}\) there are \(f\) paths of length at most \(\operatorname{dilation}\), and each edge on these paths appears on at most \(\operatorname{cong}\) other paths in \(\mathcal{P}\). 
Therefore, there exists a coloring \(\operatorname{Col}\) of the vertices with \((f\cdot\operatorname{dilation}\cdot\operatorname{cong}+1)\) colors so that no two adjacent vertices have the same color. In the graph \(G\), coloring each edge \(e\in E\) by the color \(\operatorname{Col}(v_{e})\), we obtain a coloring such that if two edges \(e_{1},e_{2}\) share the same color, then all paths in \(P(e_{1})\) and \(P(e_{2})\) are pairwise edge-disjoint. 

We call an edge coloring \(\operatorname{Col}\) with the above properties of Lemma 5.2 a _good cycle coloring_. We note that in a fault-free network, a good cycle coloring can be computed for an \(f\)-FT \((\operatorname{cong},\operatorname{dilation})\)-Cycle Cover by simulating a \((\Delta+1)\)-coloring in the _path-conflict graph_ \(H\) (as defined in Lemma 5.2). Next, we show that \(G\) can simulate a round in \(H\) using \(O(\operatorname{dilation}\cdot\operatorname{cong}^{2})\) rounds.

**Lemma 5.3**.: _Let \(\mathcal{P}\) be an \(f\)-FT \((\operatorname{cong},\operatorname{dilation})\)-Cycle Cover. A \(\mathsf{CONGEST}\) round of its path-conflict graph can be simulated in \(O(\operatorname{dilation}\cdot\operatorname{cong}^{2})\) \(\mathsf{CONGEST}\) rounds in \(G\) under a fault-free assumption._

Proof.: In the first step, for each edge \((u,v)\), the vertex \(u\) sends the identifier \(\operatorname{id}(u)\circ\operatorname{id}(v)\) through all paths in \(P(u,v)\). This can be done in \(O(\operatorname{dilation}\cdot\operatorname{cong})\) rounds, by a standard pipelining argument. Following this step, the endpoints of each edge \(e\) know the entire list \(W_{e}\) of edge identifiers \((u,v)\) such that \(e\in P(u,v)\). In the second step, each edge \(e\in E\) sends back through each of these paths the entire list \(W_{e}\). Since \(|W_{e}|\leq\operatorname{cong}\) for any \(e\), this can be done in \(O(\operatorname{cong}^{2}\cdot\operatorname{dilation})\) rounds using a standard pipelining argument. Given this, each node \(u\) knows for each incident edge \((u,v)\) the neighbors of \((u,v)\) in the _path-conflict graph_. To simulate a \(\mathsf{CONGEST}\) round in this graph, each node transmits in reverse the communication it received in the second step, except that for each edge identifier \((u^{\prime},v^{\prime})\) it received, it also sends the message that \((u,v)\) would send to \((u^{\prime},v^{\prime})\) in the simulated \(\mathsf{CONGEST}\) round on \(H\). The claim follows. 

**Corollary 5.4**.: _Let \(\mathcal{P}\) be an \(f\)-FT \((\operatorname{cong},\operatorname{dilation})\)-Cycle Cover. A good coloring can be found in \(\widetilde{O}(\operatorname{cong}^{2}\cdot\operatorname{dilation})\) rounds of \(G\)._

Proof.: By Lemma 5.3, we can simulate the deterministic \((\Delta+1)\)-coloring algorithm of [36], which runs in \(\widetilde{O}(1)\) \(\mathsf{CONGEST}\) rounds. If a vertex \(v_{e}\in H\) is assigned the color \(c(e)\), the edge \(e\) colors itself with \(c(e)\). By definition of the _path-conflict graph_, the vertices of two edges \(e_{1},e_{2}\) are adjacent in \(H\) if and only if some of their paths share an edge. Therefore, a vertex coloring of \(H\) in which no two adjacent vertices share the same color implies a good coloring on the edges of \(G\) w.r.t. the path cover \(\mathcal{P}\). 
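To make the coloring argument concrete, the following is a minimal centralized sketch in Python (a simplification for illustration; the paper's algorithm computes this distributedly on \(G\)). It builds the path-conflict graph \(H\) over the edges of \(G\) and greedily colors it, which by the degree bound of Lemma 5.2 uses at most \(f\cdot\operatorname{dilation}\cdot\operatorname{cong}+1\) colors and yields a good cycle coloring.

```python
from itertools import combinations

def good_cycle_coloring(paths):
    """paths: dict mapping each edge e of G to its path collection P(e),
    where every path is encoded as a list of edges (an illustrative encoding)."""
    edges = list(paths)
    # Path-conflict graph H: v_{e1} ~ v_{e2} iff some P1 in P(e1) and
    # some P2 in P(e2) share at least one edge.
    edge_sets = {e: {g for p in paths[e] for g in p} for e in edges}
    conflict = {e: set() for e in edges}
    for e1, e2 in combinations(edges, 2):
        if edge_sets[e1] & edge_sets[e2]:
            conflict[e1].add(e2)
            conflict[e2].add(e1)
    # Greedy (Delta+1)-coloring of H: each vertex takes the smallest color
    # not used by its already-colored neighbors.
    color = {}
    for e in edges:
        used = {color[g] for g in conflict[e] if g in color}
        color[e] = next(c for c in range(len(conflict[e]) + 2) if c not in used)
    return color
```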
**Theorem 5.5** (\(f\)-Mobile Resilient Simulation via FT Cycle-Covers).: _Assume that a \(k\)-FT \((\operatorname{cong},\operatorname{dilation})\)-Cycle Cover is known in a distributed manner, that a good cycle coloring of the edges is known, and let \(\mathcal{A}\) be an \(r\)-round algorithm in the fault-free setting. Then, there is an equivalent \(f\)-mobile-resilient algorithm \(\mathcal{A}^{\prime}\) with round complexity \(r^{\prime}\), for \(f\leq k/c\) for a sufficiently large constant \(c\) and \(r^{\prime}=\operatorname{dilation}\cdot\operatorname{cong}\cdot r\)._

By combining Theorems 5.1 and 5.5, we obtain Theorem 1.4. The rest of the section is devoted to proving Thm. 5.5.

Simulation of the \(i^{th}\) Round of Alg. \(\mathcal{A}\).Assume the network has simulated rounds \(1,\ldots,i-1\), and each node \(v\) is given as input its outgoing messages \(m_{i}(v,u_{1}),\ldots,m_{i}(v,u_{\deg(v)})\) to its neighbors; our goal is for every node \(v\) to output \(m_{i}(u_{1},v),\ldots,m_{i}(u_{\deg(v)},v)\). The network runs the following protocol. For any \(j\), let \(E_{j}=\{e\mid\operatorname{Col}(e)=j-1\}\). The network performs iterations \(j=1,\ldots,f\cdot\operatorname{dilation}\cdot\operatorname{cong}+1\). In each iteration, for \(t=1,\ldots,(2f\cdot\operatorname{dilation}+\operatorname{dilation}+1)\), in parallel for each \((u,v)\in E_{j}\), node \(u\) sends \(m_{i}(u,v)\) repeatedly over each path \(P\in P(u,v)\), and whenever a node on a path in \(P(u,v)\) receives a message, it propagates the message forward to the next edge in the path. Let \(m_{i}(u,v,\ell,t)\) be the message \(v\) receives over the last edge of \(P_{\ell}\) exactly \(t\) rounds after the start of iteration \(\operatorname{Col}(u,v)\). Let \(M_{i}(u,v)=\{m_{i}(u,v,\ell,t)\mid\ell\leq k\wedge\operatorname{dilation}\leq t\leq 2f\cdot\operatorname{dilation}+\operatorname{dilation}+1\}\) be the multi-set of messages \(v\) receives between rounds \(\operatorname{dilation}\) and \(2f\cdot\operatorname{dilation}+\operatorname{dilation}+1\). For each neighbor \(u\), the node \(v\) outputs the majority value \(\mathsf{MAJ}(M_{i}(u,v))\).

**Lemma 5.6**.: _Let \((u,v)\in E_{j}\). Then \(\mathsf{MAJ}(M_{i}(u,v))=m_{i}(u,v)\)._

Proof.: The number of rounds in iteration \(j\) is \(2f\operatorname{dilation}+\operatorname{dilation}+1\). Therefore, there are at most \((2f\operatorname{dilation}+\operatorname{dilation}+1)f\) faults during the iteration. Let \(L_{i}(u,v)\) be the number of messages in \(M_{i}(u,v)\) that are not equal to \(m_{i}(u,v)\). Since any fault changes the value of at most one \(m_{i}(u,v,\ell,t)\in M_{i}(u,v)\), we have \(L_{i}(u,v)\leq(2f\operatorname{dilation}+\operatorname{dilation}+1)f\). On the other hand, since \(k\geq 2f+1\), the number of messages \(m_{i}(u,v,\ell,t)\) that \(v\) receives is \((2f\operatorname{dilation}+\operatorname{dilation}+1-\operatorname{dilation})\cdot k\geq(2f\operatorname{dilation}+1)\cdot(2f+1)=2f(2f\operatorname{dilation}+\operatorname{dilation}+1)+1\geq 2L_{i}(u,v)+1\). Therefore, at least \(L_{i}(u,v)+1\) of the messages are equal to \(m_{i}(u,v)\), meaning \(\mathsf{MAJ}(M_{i}(u,v))=m_{i}(u,v)\). 

Theorem 5.5 follows, since in each round \(i\) and for each pair of adjacent nodes \(u,v\), node \(v\) successfully recovers \(m_{i}(u,v)\). 
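The counting argument of Lemma 5.6 is easy to verify concretely. Below is a toy sketch of the decoding step at a node \(v\) (an illustration only, not the protocol itself): \(v\) collects the multiset \(M_{i}(u,v)\) of message copies received over the edge-disjoint paths and outputs the majority value, and since the mobile adversary corrupts strictly fewer than half of the copies, the majority equals the original message.

```python
from collections import Counter

def majority_decode(copies):
    """Return the most frequent value in the received multiset M_i(u, v)."""
    value, _ = Counter(copies).most_common(1)[0]
    return value

# Hypothetical numbers: f = 1, dilation = 3, k = 2f + 1 = 3 paths. Then v
# receives (2f*dilation + 1) * k = 21 copies, of which at most
# (2f*dilation + dilation + 1) * f = 10 can be corrupted, a strict minority.
received = ["m"] * 11 + ["corrupted"] * 10
assert majority_decode(received) == "m"
```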
This project is funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 949083), and by the Israeli Science Foundation (ISF), grant No. 2084/18.
2308.11284
LEAP: Efficient and Automated Test Method for NLP Software
The widespread adoption of DNNs in NLP software has highlighted the need for robustness. Researchers proposed various automatic testing techniques for adversarial test cases. However, existing methods suffer from two limitations: weak error-discovering capabilities, with success rates ranging from 0% to 24.6% for BERT-based NLP software, and time inefficiency, taking 177.8s to 205.28s per test case, making them challenging for time-constrained scenarios. To address these issues, this paper proposes LEAP, an automated test method that uses LEvy flight-based Adaptive Particle swarm optimization integrated with textual features to generate adversarial test cases. Specifically, we adopt Levy flight for population initialization to increase the diversity of generated test cases. We also design an inertial weight adaptive update operator to improve the efficiency of LEAP's global optimization of high-dimensional text examples and a mutation operator based on the greedy strategy to reduce the search time. We conducted a series of experiments to validate LEAP's ability to test NLP software and found that the average success rate of LEAP in generating adversarial test cases is 79.1%, which is 6.1% higher than the next best approach (PSOattack). While ensuring high success rates, LEAP significantly reduces time overhead by up to 147.6s compared to other heuristic-based methods. Additionally, the experimental results demonstrate that LEAP can generate more transferable test cases and significantly enhance the robustness of DNN-based systems.
Mingxuan Xiao, Yan Xiao, Hai Dong, Shunhui Ji, Pengcheng Zhang
2023-08-22T08:51:10Z
http://arxiv.org/abs/2308.11284v1
# LEAP: Efficient and Automated Test Method for NLP Software

###### Abstract

The widespread adoption of DNNs in NLP software has highlighted the need for robustness. Researchers proposed various automatic testing techniques for adversarial test cases. However, existing methods suffer from two limitations: weak error-discovering capabilities, with success rates ranging from 0% to 24.6% for BERT-based NLP software, and time inefficiency, taking 177.8s to 205.28s per test case, making them challenging for time-constrained scenarios. To address these issues, this paper proposes LEAP, an automated test method that uses LEvy flight-based Adaptive Particle swarm optimization integrated with textual features to generate adversarial test cases. Specifically, we adopt Levy flight for population initialization to increase the diversity of generated test cases. We also design an inertial weight adaptive update operator to improve the efficiency of LEAP's global optimization of high-dimensional text examples and a mutation operator based on the greedy strategy to reduce the search time. We conducted a series of experiments to validate LEAP's ability to test NLP software and found that the average success rate of LEAP in generating adversarial test cases is 79.1%, which is 6.1% higher than the next best approach (PSO\({}_{attack}\)). While ensuring high success rates, LEAP significantly reduces time overhead by up to 147.6s compared to other heuristic-based methods. Additionally, the experimental results demonstrate that LEAP can generate more transferable test cases and significantly enhance the robustness of DNN-based systems.

NLP Software Testing, Particle Swarm Optimization

## I Introduction

In the field of NLP, Deep Neural Networks (DNNs) (\(e.g.\), ELMo [1], BERT [2], GPT [3], T5 [4]) have been developing rapidly in recent years. These networks are capable of extracting semantic, structural, and other information from text and have been widely integrated as new software components in safety-critical systems like market monitoring [5], code review [6], and intelligence analysis [7]. Such systems are referred to as _DNN-based systems_. To address issues caused by malicious inputs, the software engineering (SE) community has proposed various techniques, including test coverage [8, 9, 10], fuzz testing [11, 12, 13], and automated user interface testing [14, 15, 16]. However, unlike software development methods that follow lifecycle frameworks [17, 18], DNN-based systems do not require developers to design the system's rules. Instead, they rely on DNNs learning from large amounts of data to make decisions, which makes it challenging to ensure the robustness of DNN-based systems using traditional software testing methods. Moreover, recent studies [19, 20] have shown that DNN-based systems have significant robustness pitfalls due to the uninterpretability of such systems and the complexity of training data, as demonstrated by the following scenario. As shown in Fig. 1, the military intelligence analysis system is crucial to military information construction. It must classify a vast amount of text quickly to enhance intelligence analysis effectiveness and reduce command information loop cycles. However, when minor perturbations are added to the original intelligence, the system incorrectly classifies the text label as "Replenishment Method" instead of "Battlefield Situation." 
This error can result in valuable information being overlooked in the intelligence database, leading to missed fighting opportunities. Therefore, generating as many adversarial texts as possible as test cases is crucial to improving military intelligence analysis capabilities and advancing subsequent strategic deployments.

Fig. 1: Subtle perturbed text (red) misleads military intelligence analysis systems to judge text labels from “Battlefield Situation” to “Replenishment Method”.

Since it is difficult to manually write numerous test cases for the DNN under test, which we refer to as the _victim model_, in this paper, inspired by fuzz testing, we explore the potential of generating adversarial test cases [19] in a heuristic manner to deceive DNNs' decision-making. This approach facilitates efficient detection of defects and vulnerabilities in NLP software. We summarize the challenges faced by existing work as follows:

(1) _Enhancing the ability to detect errors in DNN-based systems is the most urgent issue_. The testing process builds confidence in the system's quality by identifying and resolving defects. However, existing white-box and greedy strategy-based testing methods [21, 22, 23] generate adversarial test cases based on a fixed perturbation paradigm, resulting in a low success rate of 0.4% to 15.2% on the commonly used AG's News dataset [24] for toxic text detection tasks. Although heuristic testing methods [25, 26] generate more successful test cases with a success rate of up to 70.5% by iterating multiple times in an ample perturbation space, there is still room for improvement. Fig. 1 illustrates the perturbation strategies of two existing works, including synonym replacement [25] and character deletion [27], which may generate syntactic errors when the replaced word has different part-of-speech tags or meanings. Such perturbations can be easily detected by syntactic-checking tools in software systems, leaving the generated test cases incapable of revealing errors in the system. A low success rate produces numerous invalid test cases, making it difficult for testing methods to work on small datasets.

(2) _The existing methods take too much time to generate test cases._ Take the military software testing scenario in Fig. 1 as an example: the rapid change of the battlefield situation requires the test methods to generate test cases quickly [28]. Once the time limit for testing the victim model is exceeded, the test cases generated by the test method are useless for improving the robustness of the victim model, even if they can mislead the system's decisions. Although current heuristic testing methods [25, 26] can generate more successful test cases, the average time to generate a test case for text sequences of length up to 250 is 58.53s (IMDB [29]) and 177.81s (AG's News [24]), making them impractical for time- and query-constrained scenarios.

To this end, we propose LEAP, an automated black-box testing method that employs PSO [30] to search for adversarial test cases in NLP discriminative models. To increase the diversity of the population and improve the attack success rate of the test cases, LEAP first generates the initial population using Levy flight and Brownian motion based on synonyms for each word, prepared using WordNet [31]. Next, as stated in existing work [32], the exponentially increasing perturbation space and complex search process require the search algorithm to have nonlinear search capability. 
Inspired by Shi et al.'s work [33], we design a new adaptive inertia weight update strategy for LEAP to optimize the search path in an exponentially growing text space, which makes the search process more efficient. If LEAP fails to find any successful adversarial test case after a round of updating particles, a greedy mutation is performed to accelerate convergence. In this paper, we investigate the ability of LEAP to generate adversarial test cases for three victim models on three datasets, including the classical LSTM model [34] and two popular pre-trained models, BERT [2] and DistilBERT [35], with metrics including attack success rate [36], change rate [36], and perplexity score [37]. We compared LEAP against different types of baselines, including gradient-based (i.e., A2T), greedy-based (i.e., Checklist and PRUTHI), and heuristic-based (i.e., PSO\({}_{attack}\) and IGA) methods. Our results show that LEAP-generated test cases have the highest attack success rates, with an average value of 79.1% against 73.0% for the next best approach (PSO\({}_{attack}\)). Furthermore, LEAP reduces time overhead by 2.14s to 147.57s compared to other heuristic-based methods. It can thus efficiently detect defects in the system. In addition, we conducted a transferability test, adversarial training, and an ablation study to further evaluate the performance of LEAP. We also assessed the naturalness of LEAP's test cases and found that it generates less modified and more natural test cases in most cases, as evidenced by the lower perplexity scores [37]. The contributions of this paper include the following:

* We propose a new automated testing method, LEAP, which uses Levy flight [38] along with Brownian motion to reasonably extend the perturbation range and improve the quality of adversarial test cases. During the iterative search in the perturbation space, LEAP utilizes the proposed adaptive algorithm and greedy mutation for planning the search path to reduce the time overhead and query count. Our implementation and all raw data are open-source.

Footnote 1: [https://github.com/lumos-xiao/LEAP](https://github.com/lumos-xiao/LEAP)

* We conducted extensive experiments comparing LEAP with state-of-the-art automated testing methods for DNN-based NLP models. LEAP generated test cases with higher attack success rates while consuming less time.

* We evaluated the effectiveness of adversarial test cases in improving the robustness of DNN-based systems. The experimental results show that adversarial training using LEAP's test cases can substantially (9.5% to 13.2%) enhance the robustness of most victim models.

## II Background

### _Problem Definition_

As a fundamental aspect of testing techniques for DNN-based NLP systems, the test data of a test case comprises a perturbed text sequence, and the expected result is the predicted label of the original text. LEAP performs automated testing on DNNs embedded in NLP software to generate adversarial examples as the perturbed sequences of the test cases. The notion of adversarial testing was introduced by Szegedy et al. [19]. In this test method, a tester adds subtle perturbations \(\varepsilon\), which are difficult for humans to perceive, to the original data \(x\) consumed by a machine learning model (i.e., the victim model) \(f\). This results in an adversarial example that can cause the victim model to produce erroneous results that differ from the original output \(f(x)\). 
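To make this black-box success criterion concrete before the formalization in Equation 1 below, here is a minimal sketch in Python; the `victim_predict` callable, the whitespace tokenization, and the change-rate budget are illustrative assumptions rather than LEAP's actual interface.

```python
def is_successful_test_case(t_ori, t_adv, victim_predict, max_change_rate=0.25):
    """A candidate T_adv succeeds iff the victim's label flips while the
    word-level change rate stays within the quality constraint C."""
    ori_words, adv_words = t_ori.split(), t_adv.split()
    # Word-substitution perturbations preserve length, so positions align.
    changed = sum(a != b for a, b in zip(ori_words, adv_words))
    c_rate = changed / max(len(ori_words), 1)
    return c_rate <= max_change_rate and victim_predict(t_adv) != victim_predict(t_ori)
```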
This paper focuses on generating test cases using black-box adversarial test methods, which only manipulate the inputs to the model. LEAP uses the requirements of non-target adversarial testing as the objective function to find more test cases and test the DNNs more adequately. Specifically, given an original text segment \(T_{ori}\) in the dataset and the corresponding adversarial test case \(T_{adv}\), the optimization problem of LEAP can be defined as

\[\underset{T_{adv}\in C(T_{ori})}{\arg\min}\|T_{ori},T_{adv}\| \tag{1}\]

\[\text{s.t. }F\left(T_{ori}\right)\neq F\left(T_{adv}\right)\]

where \(\|a,b\|\) denotes the difference between two text segments \(a\) and \(b\), such as change rate, embedding distance, etc.; \(F\) denotes the victim model; and \(C\) denotes LEAP's constraints on the quality of the adversarial test cases, here including the stop word filter [39] and the maximum change rate limit, because an excessive change rate affects the semantics and naturalness of the generated cases.

### _Particle Swarm Optimization_

PSO is a population-based collaborative search algorithm developed by Kennedy and Eberhart [30] in 1995. It simulates the foraging behavior of a flock of birds, where each individual is called a particle. It has been successfully applied in many fields, such as economic management [40], information science [41], engineering technology [42], and emotional binary classification in NLP [43]. In the original PSO, the particles represent candidate solutions of the optimization problem in the search space. The fitness value of a particle is evaluated according to its position, usually in terms of the objective function of the optimization problem, and the particle velocity is a vector indicating the direction and distance it will move. The PSO process is described as follows: (1) Initialization. A random population of particles is generated, and the initialization involves randomly generating each particle's position and velocity vector. (2) Evolutionary iteration. Each particle searches the entire solution space by updating its velocity and position according to its own optimal position \(lBest\) so far and the optimal position \(gBest\) of the population. When the particle population position is updated, each particle's optimal position and the population's optimal position are also updated. (3) Iteration termination. When the iteration termination condition is met, the algorithm stops searching, and the last optimal position searched is the optimal solution. In the evolutionary iteration, the update equation for the velocity \(v_{d}^{n}\) of the \(n\)-th particle in the \(d\)-th dimension is

\[v_{d}^{n}=wv_{d}^{n}+c_{1}*r_{1}*(lBest_{d}^{n}-x_{d}^{n})+c_{2}*r_{2}*(gBest_{d}-x_{d}^{n}) \tag{2}\]

The position update equation of the particle is

\[x_{d}^{n}=x_{d}^{n}+v_{d}^{n} \tag{3}\]

where \(x_{d}^{n}\) denotes the \(d\)-th dimension of the \(n\)-th particle in the current population; \(w\) is the inertia weight; \(c_{1}\) and \(c_{2}\) are learning factors; and \(r_{1}\) and \(r_{2}\) are random numbers uniformly distributed in the range [0,1]. The setting of the control parameters tremendously influences the performance of PSO [33]. 
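For reference, and as a baseline for the modifications LEAP makes for discrete text (Section III), the following is a minimal sketch of the classical continuous PSO update of Equations 2 and 3; the fitness function, bounds, and parameter values are illustrative assumptions. The roles of the individual parameters are discussed next.

```python
import numpy as np

def pso(fitness, dim=2, pop_size=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Maximize `fitness` over R^dim with vanilla PSO."""
    x = np.random.uniform(-1.0, 1.0, (pop_size, dim))   # particle positions
    v = np.zeros((pop_size, dim))                       # particle velocities
    lbest = x.copy()                                    # per-particle best positions
    lbest_fit = np.array([fitness(p) for p in x])
    gbest = lbest[np.argmax(lbest_fit)].copy()          # population best position
    for _ in range(iters):
        r1, r2 = np.random.rand(pop_size, 1), np.random.rand(pop_size, 1)
        v = w * v + c1 * r1 * (lbest - x) + c2 * r2 * (gbest - x)   # Eq. 2
        x = x + v                                                    # Eq. 3
        fit = np.array([fitness(p) for p in x])
        improved = fit > lbest_fit
        lbest[improved], lbest_fit[improved] = x[improved], fit[improved]
        gbest = lbest[np.argmax(lbest_fit)].copy()
    return gbest

# Usage: pso(lambda p: -np.sum(p ** 2)) converges near the maximizer at the origin.
```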
The parameters \(c_{1}\) and \(r_{1}\) indicate the degree to which a particle is influenced by \(lBest\), i.e., how the particle assesses its own information sharing and cooperation with other particles in the current population; \(c_{2}\) and \(r_{2}\) indicate the degree to which a particle is influenced by \(gBest\), i.e., how the particle assesses the information sharing and cooperation of other particles. The inertia weight determines how much of the particle's current velocity is retained [44]. For the iteration termination, there are two general termination conditions: (1) the current iteration number \(t\) reaches the preset maximum iteration number; or (2) there are individuals in the population that satisfy the accuracy requirements of the optimization problem.

## III Design of LEAP

Fig. 2 overviews the proposed LEAP, which aims to generate adversarial test cases using actual examples from the test dataset. (1) It begins by counting all the words in the dataset and using a synonym lexicon called WordNet [31] to find synonyms for each word. (2) It then selects an original text sequence from the dataset and replaces a word with its synonym to obtain the initial position. The initial velocity (3) is obtained through a modified Levy flight. The initial position and velocity together determine the initial population of particles. Next, LEAP (4) performs an iterative search, using the confidence score of the victim model as the fitness function. It then (5) adaptively updates the velocity and position of the particles. LEAP also (6) performs greedy mutation based on the change rate and fitness score. If the best individual in the population (7) satisfies a termination condition, i.e., it successfully changes the prediction of the original text or the maximum number of iterations is reached, the output is an adversarial test case; otherwise, the iteration continues.

Fig. 2: Overview of LEAP.

The description of LEAP is divided into two parts: 1) establishing the transformation space and 2) searching test cases based on PSO.

### _Establishing the transformation space_

To heuristically search for adversarial test cases, LEAP first defines the search space. Given that the original text \(T_{ori}\)={\(w_{1}\), \(w_{2}\),..., \(w_{n}\)} contains \(n\) words, LEAP generates potential test cases \(T^{\prime}_{ori}\) by replacing a word \(w_{i}\) in \(T_{ori}\) with its synonym \(w^{\prime}_{i}\), and the multiple \(T^{\prime}_{ori}\)s for each original text \(T_{ori}\) together constitute the search space of the test dataset. LEAP focuses on generating semantically correct test cases and therefore uses WordNet to construct a synonym vocabulary for each word in the dataset. WordNet is a broad-coverage English lexical-semantic network where nouns, verbs, adjectives, and adverbs are organized into a network of related words, with each set of synonyms representing a basic semantic concept and various relations connecting these sets. The process of generating a synonym vocabulary in LEAP using WordNet is superior to other methods, such as those using word embeddings [25], language models [39], and HowNet [26], for the following reasons:

* The word embedding method can find many candidate words by changing the embedding distance threshold to ensure diversity in the search space. However, it also introduces low-quality substitutions, such as lexical errors. 
* The method using language models to build the search space produces fluent sentences because these models (especially pre-trained models [2, 3]) are obtained from large text datasets and contain contextual semantic knowledge. However, they are prone to syntactic errors because linguistic features such as syntax and semantics are ignored.

* HowNet [45] is an extensive dictionary that uses "sememes" to describe words and semantics. Different from WordNet, it only considers synonymy and sentiment polarity among semantic relations, ignoring related words such as antonyms. The search space established using HowNet is too small, reducing the population diversity of PSO and thereby affecting the algorithm's ability to find higher-quality test cases.

We thus use WordNet to generate a synonym vocabulary for LEAP. The output of WordNet is a list of candidate words that are synonyms of each word \(w_{i}\) in the original text \(T_{ori}\).

### _Searching test cases based on PSO_

The respective synonym vocabularies for the words in the original text form the search space of LEAP, which approaches automated testing as a combinatorial optimization problem and uses our improved PSO to find adversarial test cases that satisfy the objective function and constraints within the search space. We improve PSO because the original algorithm is only suitable for continuous search spaces, whereas the perturbation space of the NLP test case generation task is discrete; inspired by [26], LEAP therefore updates particle positions probabilistically via the scalar shift discussed in Section III-B2. In addition, it improves PSO using Levy flight and adaptive methods to generate higher-quality adversarial test cases with less time overhead. Algorithm 1 outlines the search process, which we detail next.

```
Input:  T_ori: original text;  max_iters: max iterations;
        pop_size: number of particles in the population.
Output: T_adv: adversarial test case.
 1: T_pop <- Levy-Initialization(T_ori) via Eq. 7;
 2: if an adversarial test case T_adv exists in T_pop then
 3:     return T_adv
 4: end if
 5: gBest = max{T_pop};
 6: lBest = copy{T_pop};
 7: while not exceeding max_iters do
 8:     Adaptively set inertia weight w via Eq. 8;
 9:     for n in pop_size do
10:         Update the velocity and position of particle n;
11:     end for
12:     Evaluate current population;
13:     Greedy-Mutation based on change rate via Eq. 11;
14:     for n in pop_size do
15:         if fit(pop_n) > fit(lBest_n) then
16:             lBest_n = pop_n;
17:         end if
18:     end for
19:     if fit(max{lBest}) > fit(gBest) then
20:         gBest = max{lBest};
21:     end if
22:     Evaluate current population;
23: end while
24: return T_adv <- gBest
```

**Algorithm 1** Search Process in LEAP

#### Iii-B1 Initialization

To initialize the position of a particle, LEAP evaluates the fitness of each sequence obtained by replacing a single word of \(T_{ori}\) with one of its synonyms, and uses these fitness values as probabilities associated with each new sequence. Based on the probabilities, it randomly selects a word in \(T_{ori}\) and uses the best neighbor of this word to replace it. The replaced text is the initial position of the particle. In [26], which uses PSO to search for adversarial test cases, the velocity of the particles is initialized using Brownian motion [46], which focuses on local search. However, the search space for the NLP test case generation task increases exponentially as the number of words in the input case increases, and this search process is prone to getting stuck in local optima. To address this issue, LEAP uses Levy flight to initialize the velocity of the particles (Lines 1-6), as sketched in the snippets below. 
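First, a minimal sketch of the WordNet-based candidate lookup used to build the transformation space of Section III-A. It relies on NLTK's WordNet interface; the filtering shown here (lowercase comparison, underscore handling) is an illustrative assumption, since LEAP's released code may additionally filter candidates, e.g., by part of speech.

```python
from nltk.corpus import wordnet  # requires a one-time nltk.download("wordnet")

def synonym_candidates(word):
    """Collect WordNet synonyms of `word` for its substitution list."""
    candidates = set()
    for synset in wordnet.synsets(word):
        for lemma in synset.lemmas():
            name = lemma.name().replace("_", " ")
            if name.lower() != word.lower():
                candidates.add(name)
    return sorted(candidates)

# e.g., synonym_candidates("quick") may return ['agile', 'fast', 'nimble', ...]
```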
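Second, a minimal sketch of the Levy-flight velocity initialization, anticipating the Mantegna step simulation of Equations 4-6 and the assignment rule of Equation 7 in the next subsection; \(\beta\) and the velocity bounds are illustrative assumptions.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5):
    """One Levy-distributed step via the Mantegna algorithm (Eqs. 4-6)."""
    sigma_mu = (gamma(1 + beta) * sin(pi * beta / 2)
                / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = np.random.normal(0.0, sigma_mu)
    v = np.random.normal(0.0, 1.0)
    return mu / abs(v) ** (1 / beta)

def init_velocity(v_min=-1.0, v_max=1.0):
    """Eq. 7: use the Levy step unless the Brownian-style draw is larger."""
    s = levy_step()
    brownian = np.random.uniform(v_min, v_max)  # small, locally oscillating step
    return s if s > brownian else brownian
```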
Levy flight is a random walk proposed by the French mathematician Paul Pierre Levy in the 1930s [38], in which the steps follow the Levy distribution and can move in multidimensional space with isotropic random directions. Fig. 3 illustrates the difference between Levy flight and Brownian motion. Within 500 steps, the step length of Brownian motion mainly bounces around the current point in a small area, while Levy flight has a wandering characteristic that combines short walks and long jumps. This means that Levy flight has a higher probability of taking long steps than normal random walks. In the context of NLP, this can be useful for exploring a larger potential search space, which can improve the chances of finding effective adversarial test cases. Specifically, Levy flight allows the population to explore a wider range of the input space, leading to more diverse populations. A diverse population increases the chances of finding effective adversarial test cases and helps to avoid local optima. The step size of the Levy flight is determined by the Levy distribution, which is complex to sample from directly. It is thus usually simulated using the Mantegna algorithm [47], with a step size \(s\) calculated by:

\[s=\frac{\mu}{|v|^{1/\beta}} \tag{4}\]

where \(\mu\sim N\left(0,\sigma_{\mu}^{2}\right)\), \(v\sim N\left(0,\sigma_{v}^{2}\right)\), \(\beta\) usually takes the value 1.5, and

\[\sigma_{\mu}=\left\{\frac{\Gamma(1+\beta)\sin\left(\frac{\pi\beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right)\beta\,2^{\frac{\beta-1}{2}}}\right\}^{1/\beta} \tag{5}\]

\[\sigma_{v}=1 \tag{6}\]

LEAP randomly generates the Brownian motion's step size, and each particle's initial velocity is obtained by combining Levy flight and Brownian motion. The assignment formula is:

\[v_{\text{init}}\,=\left\{\begin{array}{ll}\text{levy}\left(\beta,\sigma_{v}\right)&,\text{levy}\left(\beta,\sigma_{v}\right)>\text{rand}\left(v_{\text{min}},v_{\text{max}}\right)\\ \text{rand}\left(v_{\text{min}},v_{\text{max}}\right),&\text{others}\end{array}\right. \tag{7}\]

It is observed that the step size of Brownian motion exceeds that of Levy flight almost only when both values are small. The minor oscillation feature of Brownian motion gives it better local search capability, so the value generated by Brownian motion is used in this case. The remaining cases use the step size generated by the Levy flight to enhance the global search capability of LEAP and thus generate better-quality adversarial test cases.

#### Iii-B2 Adaptive update particles

If no test case in the initial population of LEAP tests successfully, the population is iterated, with the velocity of the particles being adaptively updated first and the particles then being shifted according to the velocity (Lines 8-11). Balancing global and local search by adjusting the step size is vital for the success and efficiency of the iterative search in heuristic algorithms. PSO uses inertia weights to balance global and local search capabilities, with larger weights contributing to global search and smaller weights contributing to local search. Changing the inertia weights allows for dynamic adjustment of the search capability. The existing method [26] uses a linearly decreasing inertia weight to dynamically adjust the search process so that PSO has more global search capability at the beginning and more local search capability near the end of the run. 
However, the search space increases exponentially with the number of replaced words, which means that the search process of LEAP is non-linear and requires tremendous time overhead. Besides, the method of linearly decreasing inertia weights has a linear transition of search capability from global to local search, making it prone to falling into saddle points of the high-dimensional text space later in the search. Therefore, the inertia weights should be nonlinear and change dynamically to provide a better dynamic balance between global and local search capabilities and achieve better performance. LEAP uses a new adaptive inertia weight update method, as shown in Equation 8, where \(\omega_{\min}\) and \(\omega_{\max}\) are hyperparameters:

\[\omega_{n}^{i}=\begin{cases}\omega_{\min}+\frac{\left(fit_{n}^{i}-fit_{\min}^{i}\right)\left(\omega_{\max}-\omega_{\min}\right)}{fit_{\max}^{i}-fit_{\min}^{i}},&fit_{n}^{i}<fit_{\text{mean}}^{i}\\ \text{levy}\left(\beta,\sigma_{v}\right)\in\left(\omega_{\text{mean}}\,,\omega_{\max}\right),&\text{others}\end{cases} \tag{8}\]

If the fitness score of the \(n\)-th particle in the \(i\)-th generation is less than the average of all fitness scores, the particle can be considered far from the optimum or stuck in a local search, and its inertia weight is adaptively adjusted based on the fitness score. Otherwise, the inertia weight is a value generated by Levy flight, ensuring that the search process has a certain randomness and explores a larger perturbation space.

Fig. 3: Comparison of Levy flight and Brownian motion. x represents the number of steps performed and y represents the step length. Obviously, the search area covered by Levy flight (in range [-40, 40]) is much broader than Brownian motion (in range [-1, 1]).

After obtaining the new inertia weights, the velocity is updated according to Equation 9:

\[v_{d}^{n}=\omega^{n}v_{d}^{n}+v_{\max}\left(1-\omega^{n}\right)\left[I\left(lBest^{i},x_{n}^{i}\right)+I\left(gBest,x_{n}^{i}\right)\right] \tag{9}\]

where, in order to search the discrete perturbation space,

\[I(a,b)=\left\{\begin{array}{l}1,a=b\\ -1,a\neq b\end{array}\right. \tag{10}\]

The update of the position is similarly divided into two steps. In the first step, a new move probability \(P_{1}\) is introduced by which a particle determines whether to move to its individual best position; in the second step, each particle determines whether to move to the global best position with another move probability \(P_{2}\). The change of each position dimension depends on \(softmax(v_{d}^{n})\). \(P_{1}\) and \(P_{2}\) are hyperparameters that change with the iteration to improve the search efficiency by adjusting the balance between local and global search.

#### Iii-B3 Greedy mutation

In biology, genetic mutations result in structural and functional differences among individuals within a population. To simulate this process and ensure population diversity, LEAP introduces a mutation operator to the original PSO algorithm (Line 13). To prevent excessive modification of the text, LEAP generates mutation probabilities based on the change rate (_C-rate_) of the current particle with respect to the original text, as shown in Equation 11:

\[p_{\text{mutation}}\ =1-\gamma\cdot\textit{C-rate} \tag{11}\]

Randomness is ensured by comparing the mutation probability with a random number in the range [0,1). 
If the generated random value is less than \(p_{\text{mutation}}\), greedy mutation is performed on the particle: the words in the text sequence are replaced one by one to find the perturbed position that yields the greatest improvement in the fitness score, and the original particle is then replaced by the perturbed text. Next, LEAP updates \(gBest\) and \(lBest\) by fitness score, and \(gBest\) is output as an adversarial test case when the iteration terminates.

## IV Experiment Setup

We have conducted a series of experiments on three text classification datasets and three deep learning models to validate the performance of LEAP in generating test cases. We have made LEAP and all raw data publicly available. All experiments were conducted on an Ubuntu 18.04.5 LTS server with an NVIDIA RTX A4000, a 12-core 2.20GHz Intel(R) Xeon(R) Gold 5320 processor, and 32GB of physical memory. We conducted three repetitions of each experiment and averaged the results for each metric. Similar to many well-acknowledged studies [25, 26, 48, 49], the victim models were tested on a set of 1,000 randomly selected examples in each experiment. We therefore believe that this experimental scale is sufficient to cover different input data types and ensure the representativeness and credibility of the experimental results.

### _Hyperparameters_

LEAP is a heuristic testing method based on PSO, and the selection of hyperparameters significantly influences its performance. Among them, the population size (\(pop\_size\)) determines the coverage of the discrete text space, and the maximum number of iterations (\(max\_iters\)) affects the computational cost required for the search process. The inertia weight (\(\omega\)) and acceleration coefficients (\(P_{1}\), \(P_{2}\)) jointly determine the breadth and depth of the search; overly large or small values of these hyperparameters may cause LEAP to get trapped in local optima. By parameter tuning, we set the number of individuals in the particle swarm to 60, the maximum number of iterations to 20, and the hyperparameters \(\omega_{\min}\), \(\omega_{\max}\), \(P_{1}\), \(P_{2}\), and \(\gamma\) to 0.2, 0.8, 0.8, 0.2, and 1, respectively.

### _Datasets_

_IMDB_ [29]. A binary sentiment classification dataset containing 50,000 positive and negative movie reviews collected from online sources. The average length of each sequence is 215.63 words. It is divided into two parts, namely 25,000 training reviews and 25,000 test reviews. These movie reviews are characterized by their strong polarity.

Footnote 2: [https://s3.amazonaws.com/fast-ai-nlp/imdb.tgz](https://s3.amazonaws.com/fast-ai-nlp/imdb.tgz)

_AG's News_ [24]. This dataset collects 496,835 news articles from more than 2,000 news sources in the 4 categories of the AG's News Corpus (World, Sports, Business, and Science/Technology), in the title and description fields. We concatenate the title and description fields of each news article and use the dataset organized by Kaggle, in which each category contains 30,000 training examples and 1,900 test examples. Each example contains an average of 43 words.

Footnote 3: [https://s3.amazonaws.com/fast-ai-nlp/ag_news_csv.gz](https://s3.amazonaws.com/fast-ai-nlp/ag_news_csv.gz)

Footnote 4: [https://www.kaggle.com/amananandrai/ag-news-classification-dataset](https://www.kaggle.com/amananandrai/ag-news-classification-dataset)

_Poem Sentiment (POEM)_ [50]. 
This dataset contains 3,085,117 lines of poetry from hundreds of Project Gutenberg books, which can be used for tasks such as sentiment classification. Each line has a corresponding Gutenberg ID (1,191 unique values) from Project Gutenberg. These text segments are divided into four categories, with an average length of 8 words per segment.

Footnote 5: [https://github.com/google-research-datasets/poem-sentiment](https://github.com/google-research-datasets/poem-sentiment)

### _Victim models_

To evaluate the test performance of LEAP on different DNN-based systems, we choose BERT [2] and its distilled variant DistilBERT [35], thus covering the pre-trained NLP models most commonly used by researchers. We also report experimental results on an LSTM for text classification [34], which was widely used as a classical deep learning model with excellent performance before the advent of pre-trained models. By parameter tuning, the number of hidden-layer neurons of the bidirectional LSTM (TextBiRNN) was set to 150, the dropout ratio was set to 0.1, and the maximum sequence length was set to 250. All these models have been pre-trained on BookCorpus [51], a dataset consisting of 11,038 unpublished books, and on English Wikipedia (excluding lists, tables, and titles). We also finetuned the bert-base-uncased and distilbert-base-uncased models published by Hugging Face for each dataset.

Footnote 6: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased)

Footnote 7: [https://huggingface.co/distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)

### _Baselines_

We investigated the recent works in terms of testing framework [36, 52], degree of automation [53, 54], and application scenario [27, 39, 48, 55]. Among these, we selected the testing framework TextAttack [36], which does not require manual intervention, and conducted experiments in the context of soft-label black-box testing. To compare LEAP with different fully automated testing methods, we implemented four popular black-box testing methods and one state-of-the-art white-box testing method. Specifically, these methods are:

1) _IGA_ proposed by Wang et al. [25]: the fitness function consists of confidence and alienation rate. Using single-point crossover, the texts of the two parents are randomly cut and merged into a new text segment. Allowing words that have been replaced before to be replaced again avoids, to a certain extent, falling into local optima.

2) _PSO\({}_{attack}\)_ proposed by Zang et al. [26]: a word-level automated testing method with two innovations: it reduces the search space by designing a word substitution method based on sememes, and it searches for adversarial test cases with a search algorithm based on particle swarm optimization.

3) _CheckList_ proposed by Ribeiro et al. [21]: inspired by principles of behavioral testing in software engineering, CheckList guides users in what to test by providing a list of linguistic capabilities. To break down potential capability failures into specific behaviors, CheckList introduces different test types and then implements multiple abstractions to generate adversarial test cases.

4) _PRUTHI_ proposed by Pruthi et al. [22]: explores adversaries which perturb sentences with four types of character-level edits: (1) Swap: swapping two adjacent internal characters of a word. (2) Drop: removing an internal character of a word. 
(3) Keyboard: substituting an internal character with an adjacent character on the QWERTY keyboard. (4) Add: inserting a new character internally in a word.

5) _A2T_ proposed by Yoo et al. [23]: the components of this method are designed to generate adversarial test cases at lower computational cost, accelerated by making two key choices when constructing the test: (1) a DistilBERT semantic textual similarity constraint, and (2) a cheaper gradient-based word importance ranking white-box method.

### _Evaluation measures_

We choose five evaluation indicators for the experiments:

1) _Success rate_ (_S-rate_) [36] of generated adversarial test cases among all targeted text segments. In this experiment, its formula can be expressed as follows:

\[\text{S-rate}=\frac{N_{adv}}{N} \tag{12}\]

where \(N_{adv}\) is the number of adversarial test cases that test victim models successfully, and \(N\) is the total number of input examples (\(N\) = 1,000 in our experiment) for the current test method.

2) _Change rate_ (_C-rate_) [36], which represents the average proportion of changed words in the original text. C-rate can be expressed as:

\[\text{C-rate}\ =\frac{1}{N_{adv}}\sum_{k=1}^{N_{adv}}\frac{\mathrm{diff}\,T_{k}}{\mathrm{len}\left(T_{k}\right)} \tag{13}\]

where \(\mathrm{diff}\,T_{k}\) represents the number of words replaced in the input text \(T_{k}\) and \(\mathrm{len}(*)\) represents the sequence length. C-rate is an indicator designed to measure the difference in content between the generated test cases and the original examples.

3) _Perplexity_ (_PPL_) [37], an indicator used to assess the fluency of textual test cases. Perplexity is defined as the exponentiated average negative log-likelihood of a sequence. If we have a tokenized sequence \(X\)=(\(x_{0}\),\(x_{1}\),...,\(x_{t}\)), then the perplexity of \(X\) is

\[\mathrm{PPL}(X)=\exp\left\{-\frac{1}{t}\sum_{i}^{t}\log p_{\theta}\left(x_{i}\mid x_{<i}\right)\right\} \tag{14}\]

where \(\log p_{\theta}\left(x_{i}\mid x_{<i}\right)\) is the log-likelihood of the \(i\)-th token conditioned on the preceding tokens \(x_{<i}\) according to the language model [56]. Intuitively, given the language model used to compute PPL, the more fluent the test case, the lower its perplexity.

4) _Time overhead_ (_T-O_) [36], which refers to the average time it takes for a test method to generate a successful test case.

5) _Query number_ (_Q-N_) [36], which indicates the average number of times a population-based method needs to query the victim model when generating a test case. The query number and the time overhead together reflect the efficiency of the testing method.

We use C-rate and PPL to quantitatively measure the naturalness of adversarial test cases and their similarity to the original ones, as both are easier to reproduce than human evaluation. Regarding time overhead and query number, we compare LEAP with IGA and PSO\({}_{attack}\), which are also heuristic testing methods, considering that non-heuristic test methods [21, 22, 23] generate test cases much faster due to their different search strategies. However, the experimental results in Section V show that the quality of test cases generated by such methods is much inferior to that of heuristic methods.

### _Definition of robustness_

IEEE [57] defines robustness in software engineering as the "degree to which a system, product or component performs specified functions under specified conditions for a specified period of time". 
Similar to [58], we define robustness as follows: denote the input as \(x\) and the relevant gold label for the main task as \(y\), and assume that a model \(f\) is trained on \((x,y)\sim\mathcal{D}\). Given an adversarial test case \((x^{\prime},y^{\prime})\sim\mathcal{D}^{\prime}\neq\mathcal{D}\), we can measure the robustness of the model by the prediction results of \(f\) on \((x^{\prime},y^{\prime})\). Compared to the raw prediction accuracy on \(\mathcal{D}\), the less the model's prediction accuracy on \(\mathcal{D}^{\prime}\) drops, the fewer test cases the model misclassifies, and the more robust it is.

## V Experiment Results and Analysis

In this section, we present five research questions and discuss the experimental results.

**RQ1: How is the quality of the test cases generated by LEAP for different victim models and datasets?**

To evaluate LEAP, we compare its success rate, change rate, and perplexity with those of the baselines. Table I shows the comparison results on different datasets and victim models. Compared to all the baselines, LEAP achieves higher success rates for each dataset and victim model, especially on the multi-class, long-sequence dataset AG's News. When generating test cases for BERT, LEAP achieves a success rate of 81.2% compared to the baseline success rates of 69.6%, 70.0%, 0.4%, 9.6%, and 9.2%, which implies that LEAP can test more thoroughly against DNN-based systems with robust performance. In terms of change rate, LEAP achieves optimal results in only a few cases, with PRUTHI and CheckList often performing better because these two methods place strict restrictions on the modification of the original text and therefore sacrifice too much performance in success rate. In the experiments on Distil-BERT finetuned by IMDB, the change rates of CheckList, PRUTHI, and LEAP are 61.16%, 3.4%, and 11.5%, but the corresponding success rates are 1.6%, 18.8%, and 91%, respectively, which is a significant disparity. In addition, LEAP's PPL scores are the lowest in most cases, indicating that LEAP can generate more fluent and natural test cases. Even though LEAP's PPL is not the lowest in a few cases, it guarantees a sufficiently high success rate. For example, when testing a bidirectional LSTM trained by Poem Sentiment, LEAP outperforms PRUTHI by 52.2% in terms of success rate, while PRUTHI achieves a slightly better PPL score than LEAP. In addition, Table I shows that the three heuristics, PSO\({}_{attack}\), IGA, and LEAP, always have the highest success rates. Besides, Table II presents test cases generated from the same testing sequence by the three methods on a BERT finetuned by AG's News. It can be seen that the test case generated by LEAP not only deceives the victim model with high confidence but also makes minor and more natural changes to the original text. On the other hand, although LEAP and PSO\({}_{attack}\) both choose PSO for the iterative search, the adversarial test case generated by LEAP shows better text quality regarding the change rate and PPL score.

**Answer to RQ1:** LEAP generates higher-quality test cases for structurally different victim models and datasets with different characteristics, and it performs exceptionally well in terms of success rates.

**RQ2: Can LEAP generate test cases more efficiently?**

Apart from the quality of test cases, the efficiency of the testing method, including time overhead and query number, is also our main concern. 
Fig. 4 shows the time overhead of generating test cases for the long-text datasets IMDB and AG's News. As we can see, for all victim models, LEAP has less time overhead per successfully generated test case. On average, LEAP is 2.14s to 147.57s faster than the best baseline per generated test case. When testing BERT finetuned by AG's News, the time overheads of IGA, PSO\({}_{attack}\), and LEAP are 205.28s/it, 177.81s/it, and 70.17s/it, which indicates that LEAP is more efficient. The vast majority of the testing process is spent on querying the victim models [59], so the reduction in time overhead also indicates that LEAP issues fewer queries. We show these results in the repository due to limited space.

Footnote 8: [https://github.com/lumos-xiao/LEAP](https://github.com/lumos-xiao/LEAP)

Fig. 4: Results of the time overhead for testing different victim models. The lower the values are, the more efficient the method is.

**Answer to RQ2:** In terms of testing efficiency, LEAP can generate successful test cases with less time overhead and fewer queries, thus saving more testing time.

**RQ3: How transferable are the test cases generated by LEAP?**

Figure 5 shows the transferability comparison of LEAP with the baselines, where we selected one well-performing baseline each from the heuristic and non-heuristic test methods (i.e., IGA and PRUTHI, respectively). Fig. 5(a) shows the success rates of transferring the test cases made for testing Distil-BERT to BERT, and vice versa in Fig. 5(b). We find that, for the victim models finetuned on the three different types of datasets, the test cases generated by LEAP exhibit the highest transferability, i.e., the transferred test cases have a higher success rate [60]. Taking the IMDB dataset as an example, the success rates of test cases generated by PRUTHI, IGA, and LEAP on BERT are 13.4%, 90.8%, and 92.2%, respectively. The success rates after transferring to Distil-BERT are 8.4%, 35.6%, and 65.6%, and LEAP still maintains the highest success rate.

**Answer to RQ3:** Test cases generated by LEAP have higher transferability, which means that LEAP is able to uncover more defects in DNN-based systems even without access to their internal DNN models.

**RQ4: Do the test cases generated by LEAP contribute to enhancing the robustness of the victim model?**

For this research question, to simulate the low-resource scenario, we mixed the adversarial test cases generated from 10% of the original training set with the original training set, following the experimental setting of [61]. We used IMDB, which has the longest text length in our experimental datasets (i.e., 215 words per example), as the original dataset, resulting in three adversarial training sets as shown in Table III. The success rates of all the methods on the different adversarial training datasets decreased, and the success rates on the victim models finetuned with IMDB\({}_{LEAP}\) are 3.9%, 77.59%, and 80.4%, respectively, with the most significant decreases. This implies that the test cases generated by LEAP improve the model's robustness more than the other baselines, since a lower success rate demonstrates that the victim model correctly classifies more adversarial test cases. Notably, LEAP still manages to obtain the highest success rate regardless of which adversarial training set the tested victim model was finetuned on, which further illustrates the excellent performance of LEAP in mining the defects of DNN-based systems. We use the change rate to measure the quality of test cases. 
The adversarially trained victim models, especially those finetuned using IMDB\({}_{LEAP}\), force the test methods to increase the perturbation of the original text to generate successful test cases.

Fig. 5: The success rates of transferred adversarial test cases on the three datasets (want \(\uparrow\)).

As shown in Table III, when testing the victim models finetuned by IMDB\({}_{PRU}\) and IMDB\({}_{IGA}\), the change rate of PRUTHI becomes lower instead. We believe this is because PRUTHI increases the perturbation on the original text but generates mostly failed test cases, which leads to an excessive decrease in the success rate compared to the one on the original training set. In addition, we observed that the models finetuned on the adversarial training sets significantly increased the time overhead of the test methods, with the models finetuned using IMDB\({}_{LEAP}\) increasing it the most. This also indicates that testing tools have the most difficulty finding successful adversarial test cases for the model finetuned with LEAP-generated test cases, which in turn indicates that LEAP can improve the robustness of victim models.

**Answer to RQ4:** The training set with test cases generated by LEAP significantly reduces the success rate, case quality, and efficiency of the test methods. Therefore, the adversarial test cases generated by LEAP are efficacious for improving the robustness of the victim model.

**RQ5: Does each of the method components proposed in this paper improve the quality of the generated test cases and the testing efficiency of LEAP?**

We finetuned BERT as the victim model on the three datasets, ablating each component of LEAP that differs from the most similar existing work, PSO\({}_{attack}\), to investigate its effectiveness. Table IV shows the experimental results using 1,000 test examples. Since the test set of POEM only contains 104 examples, we sampled the test set 10 times using different random seeds, with 100 examples each time. On the IMDB dataset, LEAP only improves the success rate by 0.9% compared to PSO\({}_{attack}\). However, LEAP is nearly twice as fast as PSO\({}_{attack}\) in terms of time overhead. On AG's News, LEAP shows significant improvement in all the metrics. In particular, the use of Levy flight for population initialization and the adaptive update operator increases the success rate by 11.6%, reduces the change rate by 8.66%, and decreases the time overhead by 107.64s. On POEM, the use of the greedy mutation operator reduces the success rate of LEAP by 0.3%, because the introduction of the greedy strategy increases the risk of the search algorithm falling into local optima in high-dimensional text data. However, it effectively reduces the change rate by 0.8% and the time overhead by 0.41s. Overall, despite the datasets being from different domains with different textual features, LEAP's improved strategy achieves better test results than PSO\({}_{attack}\).

**Answer to RQ5:** Compared to the most similar existing work, LEAP's components are effective in generating high-quality test cases more efficiently.

## VI Threats to Validity

Our experimental results demonstrate LEAP's effectiveness. However, we also acknowledge some threats to validity.

**Internal validity**. The main threat comes from the setting of hyperparameters in the experiments, such as the population size and the maximum number of iterations. 
**Answer to RQ5:** Compared to the most similar existing work, LEAP's components are effective in generating high-quality test cases more efficiently.

## VI Threats to Validity

Our experimental results demonstrate LEAP's effectiveness. However, we also acknowledge some threats to validity.

**Internal validity**. The main threat comes from the setting of hyperparameters in the experiments, such as the population size and the maximum number of iterations. To mitigate this threat, we use the same hyperparameters for all experiments on each dataset, and choose the same hyperparameters as the existing method PSO\({}_{attack}\) wherever possible to show the validity of our method.

**External validity**. Our experiments focused on testing DNNs in an English environment, which may threaten the generality of LEAP for other languages. However, applying LEAP to DNNs in other languages requires only minor input adjustments. We also mitigate this threat by evaluating our approach on three kinds of datasets and three types of NLP models. This makes us confident that LEAP will work across a variety of NLP applications.

## VII Related Work

**Testing AI Software.** The development of Artificial Intelligence (AI) software has been gaining momentum in recent years, with a growing need for effective testing strategies to ensure their reliability and performance. Automated testing techniques have been widely used by software professionals due to their efficiency, cost-effectiveness, and reusability. In the field of Computer Vision (CV), a large number of automated testing techniques have been proposed [62, 63, 64]. The primary difference between NLP and CV software is that the feature space of text data is discrete, and any modifications to the original example are more likely to result in errors in semantics and sentence fluency, which can be easily detected [20]. Morris et al. [36] decomposed the testing process into four components: goal function, constraint list, transformation, and search method, and unified them within the Python framework TextAttack. Tan et al. [52] demonstrated the incorporation of adversarial attacks as reliability tests into the reliability testing framework DOCTOR, presenting a method to enhance accountability in existing efforts. Overall, there are three main types of DNN-based automated testing methods for AI software: 1) white-box testing methods [23, 65] based on internal information such as DNN gradients; 2) greedy methods [21, 22] that modify the text at each specific index to minimize the original DNN prediction; and 3) heuristic methods [25, 26] that heuristically search for the optimal option among potential test cases.

**Testing NLP Software.** In the field of NLP, Ribeiro et al. [53] utilized large-scale language models and human feedback to generate adaptive unit tests for victim models. Wu et al. [54] developed Errudite, an interactive tool that utilizes domain-specific language to facilitate precise error grouping and analysis. In contrast, our paper focuses on automating the testing of NLP software, taking into account time and cost constraints. Based on the minimum perturbation units used in applications, related works are divided into three aspects:

**1) Sentence-level methods.** Sentence-level testing methods are more flexible in terms of perturbation, and the modified sentence can be inserted in any part of the text as long as the semantics and syntax are correct. They are executed by adding an ordered sequence of words of a certain length. Sentence-level methods are widely used in Q&A [66, 67] and machine understanding [68, 69] systems, but have received less attention in text classification [70]. Since a sentence-level method modifies an entire sentence, with a substantial impact on the semantics of the paragraph [71], the naturalness of the generated test cases is particularly affected. Even if the test is successful, it is often incomprehensible to humans.
In contrast, our method only modifies individual words of the original text, with controlled modification restrictions, thus ensuring better naturalness.

**2) Char-level methods.** Char-level methods aim to modify a few characters within a word to generate test cases that cause DNNs to make incorrect decisions [72, 73]. Given that the modifications typically involve spelling errors, Li et al. [27] generated adversarial test cases by inserting, swapping, and deleting specific characters, combined with the Jacobian matrix of the victim model. Since character-level methods are prone to produce misspelled words [74], today's spell-checking tools can easily detect such perturbations. In contrast, LEAP plans the perturbation space by utilizing a lexical network to produce a synonym dictionary, and the potential perturbations are all actual words, so there is no problem with misspellings.

**3) Word-level methods.** Word-level methods perturb text by inserting, deleting, and replacing whole words, which is significantly better than the other methods in naturalness and transferability, and has therefore gained the most attention [75, 76]. Li et al. [39] utilized pre-trained language models as masked models to generate substitute words, taking contextual information into account. Jin et al. [48] employed word importance ranking and cosine similarity between word vectors for synonym replacement. Ye et al. [55] formulated the hard-label scenario as an optimization problem based on gradient perturbation metrics in word embedding space, generating test cases with smaller query budgets and higher semantic similarity. LEAP is a word-level testing method that uses PSO to determine the words to be replaced and redesigns the internal operators of PSO around the characteristics of NLP test cases. This allows our method to guarantee the same high success rate as other heuristic testing methods while requiring less time overhead and fewer queries.

## VIII Conclusion

In this paper, we propose LEAP, a black-box testing method for DNN-based NLP systems that efficiently generates adversarial test cases. To address the problems of low data utilization and high time overhead in current testing methods, we design new components for discrete text data, including initializing populations using Levy flight, adaptively updating particles, and employing a greedy mutation approach. We evaluate the performance of LEAP using three datasets, three advanced deep learning models, and five baselines. The experimental results demonstrate that the average success rate of adversarial test cases generated by LEAP is 79.1%, surpassing the other baselines, and that the time overhead is reduced by 2.14s-147.57s compared to other heuristic-based methods. We also investigate the value of adversarial test cases generated by LEAP in enhancing the robustness of victim models. For future work, we plan to enhance the scalability of LEAP to encompass a broader range of NLP downstream tasks and accommodate more complex perturbation scenarios, including character-level or sentence-level modifications. To achieve this, we will explore modular granularity settings and adaptive search algorithms as potential solutions.

## Acknowledgment

This work is supported by the National Natural Science Foundation of China under Grants 62272145 and U21B2016.
2304.08400
ATHEENA: A Toolflow for Hardware Early-Exit Network Automation
The continued need for improvements in accuracy, throughput, and efficiency of Deep Neural Networks has resulted in a multitude of methods that make the most of custom architectures on FPGAs. These include the creation of hand-crafted networks and the use of quantization and pruning to reduce extraneous network parameters. However, with the potential of static solutions already well exploited, we propose to shift the focus to using the varying difficulty of individual data samples to further improve efficiency and reduce average compute for classification. Input-dependent computation allows for the network to make runtime decisions to finish a task early if the result meets a confidence threshold. Early-Exit network architectures have become an increasingly popular way to implement such behaviour in software. We create: A Toolflow for Hardware Early-Exit Network Automation (ATHEENA), an automated FPGA toolflow that leverages the probability of samples exiting early from such networks to scale the resources allocated to different sections of the network. The toolflow uses the data-flow model of fpgaConvNet, extended to support Early-Exit networks as well as Design Space Exploration to optimize the generated streaming architecture hardware with the goal of increasing throughput/reducing area while maintaining accuracy. Experimental results on three different networks demonstrate a throughput increase of $2.00\times$ to $2.78\times$ compared to an optimized baseline network implementation with no early exits. Additionally, the toolflow can achieve a throughput matching the same baseline with as low as $46\%$ of the resources the baseline requires.
Benjamin Biggs, Christos-Savvas Bouganis, George A. Constantinides
2023-04-17T16:06:58Z
http://arxiv.org/abs/2304.08400v1
# ATHEENA: A Toolflow for Hardware Early-Exit Network Automation

###### Abstract

The continued need for improvements in accuracy, throughput, and efficiency of Deep Neural Networks has resulted in a multitude of methods that make the most of custom architectures on FPGAs. These include the creation of hand-crafted networks and the use of quantization and pruning to reduce extraneous network parameters. However, with the potential of static solutions already well exploited, we propose to shift the focus to using the varying difficulty of individual data samples to further improve efficiency and reduce average compute for classification. Input-dependent computation allows for the network to make runtime decisions to finish a task early if the result meets a confidence threshold. Early-Exit network architectures have become an increasingly popular way to implement such behaviour in software. We create _A Toolflow for Hardware Early-Exit Network Automation_ (ATHEENA), an automated FPGA toolflow that leverages the probability of samples exiting early from such networks to scale the resources allocated to different sections of the network. The toolflow uses the data-flow model of fpgaConvNet, extended to support Early-Exit networks, as well as Design Space Exploration to optimize the generated streaming architecture hardware, with the goal of increasing throughput/reducing area while maintaining accuracy. Experimental results on three different networks demonstrate a throughput increase of \(2.00\times\) to \(2.78\times\) compared to an optimized baseline network implementation with no early exits. Additionally, the toolflow can achieve a throughput matching the same baseline with as low as \(46\%\) of the resources the baseline requires.

## I Introduction

Convolutional Neural Networks (CNN) have many applications, especially in computer vision and image classification tasks [1]. Traditional CNNs are composed of common operations/layers that can be represented by frameworks like the Open Neural Network Exchange (ONNX) [2]. The continued increase in the width and depth of CNNs has made many of the top-performing networks prohibitively large for acceleration on the limited resources of FPGA hardware. There has been a wide variety of static methods [3] derived to combat the large memory footprints and the compute power required to execute these networks, including pruning [4, 5, 6], quantization [7, 8, 9, 10, 11], and knowledge distillation [12]. These methods require the assumption that the full networks have some level of redundancy that is exploitable across the majority of the data set. A parallel trend to create networks that can better fit on smaller target devices has resulted in more stripped-down architectures, both hand-crafted [13] and generated by Neural Architecture Search (NAS) [14, 15]. The result is that models are tending to be less redundant, reducing the potential improvement due to methods like pruning. This is where _input-dependent_ computing can take over. The fundamental concept is that a given data sample can be more or less difficult for the network to classify. Practically, this means that data sample A can be accurately classified based on features derived earlier in the CNN, whereas data sample B is more challenging and so requires the more refined features of a higher-capacity network, resulting in more computation. A number of network architectures make use of this idea to adapt to the computational requirements of individual samples [16, 17] at run time.
It is possible to reduce the average compute for inference across a multitude of tasks for the relatively low overhead of a calculation to determine the confidence in the result. Since the difficult samples can continue through the full network, there is a minimal effect on accuracy and, in some cases, an accuracy improvement over the baseline networks. We target throughput-oriented applications that are subject to latency constraints prohibiting device reconfiguration at runtime. We present the following novel contributions for FPGA implementation of such applications that benefit from an input-dependent approach:

* A methodology for utilizing probability profiles to select different points on the throughput/area tradeoff curve for different stages of an Early-Exit (EE) network.
* A set of hardware-friendly components for building early-exit networks on FPGAs, compatible with the open source fpgaConvNet toolflow.
* The ATHEENA automated toolflow (Figure 1) for utilizing profiling probabilities to transform Early-Exit networks from their ONNX-based representation into optimized HLS code suitable for implementation using Vivado HLS.

Fig. 1: High-level overview of the ATHEENA and fpgaConvNet toolflow.

## II Background

Section II-A outlines existing work on input-dependent deep neural network (DNN) computation. We also summarize the well-studied network we use for our experimental study in Section IV-A.

### _Input-dependent Computation in DNNs_

The aim of input-dependent computation is to reduce the computational workload of inference whilst maintaining the accuracy of the network. Static compression methods also reduce computational workload but result in accuracy degradation across all inputs. The introduction of input-dependent, dynamic behaviour customises the computational workload per input, allowing the user to explore the trade-off between accuracy and computational workload. This concept is demonstrated in the Dynamically Throttleable Neural Network (TNN) [18]. The network uses a two-stage training approach which first produces a high-accuracy network, followed by reinforcement learning to train a small DNN module adjacent to the main DNN that has fine-grained control over customizable subsets of the main DNN layers. Similarly, Dynamic Deep Neural Network (D\({}^{2}\)NN) [19] constructs a network trained with reinforcement learning but has a network topology consisting of 'regular' nodes (convolution and fully connected layers) interspersed with 'control' nodes which dictate the path a given data sample will take. An alternative to the architectural freedom of D\({}^{2}\)NN is provided by the split computing [20] methods. These are located mid-way on a sliding scale of division of computation between edge device and server: at one extreme, there is fully local computation, where inference is performed on an embedded device; at the other extreme, data is captured and compressed locally [21] before being transmitted via a wireless connection to a server for inference. In general, the local computation performs some initial classification (or other ML task) and assesses the confidence of the result. The local compute can then decide whether or not further computation is required and can potentially skip the high-latency round trip of transferring data to power-intensive, server-based compute [22]. Early-Exit networks share structural similarities with split computing and have drawn increasing interest [23] in recent years.
The typical architecture of an Early-Exit network is set out in the BranchyNet [16] work and consists of a branching, tree-like structure with a backbone, where the majority of the sample processing is carried out, and exits, which often contain some additional CNN layers and are located at different points along this backbone, as illustrated in Figure 2.

Fig. 2: The form of a generic Early-Exit network with backbone stages and varying levels of compute between exits.

Due to the hierarchical nature of CNNs [17, 24], the earlier layers in the backbone will have learnt more general or coarser features. As a result, easy samples can be classified based on these features to an acceptable level of accuracy, and more complex samples can be further processed in later stages of the backbone before final classification. The benefit of Early-Exit methods over traditional quantisation and pruning is that the former make use of the varying difficulty of samples within a data set. Reducing precision or removing extraneous parameters common to all samples beyond a certain point will decrease the accuracy of the network over the most challenging samples first. With the implementation of early exits, inference efficiency can be improved beyond these limits. Early-Exit presents a tuneable trade-off between the throughput and accuracy of data samples. The other key benefit of these networks is the improved throughput of batch computation, owing to the average of the reduced latency of early exits and the similar latency of later exits. Furthermore, Early-Exit has applications in an array of ML tasks including semantic segmentation [25] and image classification [26, 27]. The 'difficulty' of a given data sample is challenging to quantify, so there is a range of metrics used in the literature to judge the confidence of a result at a given exit point. A limitation of the deployment of these networks is the resource/latency cost of these exits, which may need to compute exponential or logarithmic operations. The most common method of determining confidence is to measure the information entropy [16, 23] of the probability distribution over the available classes. A low entropy value indicates more certainty in the correct result. The other main method to determine confidence is to compare the maximum value in the class probability distribution to a threshold [17]. Both these methods require Softmax computation and the use of logarithmic or exponential functions. A key challenge in migrating these software-oriented Early-Exit designs to hardware is to develop an architecture that efficiently executes input-dependent control flow whilst minimising extraneous computation and area overhead and maximising the available throughput gains. This is the central challenge we address with ATHEENA in this paper, for the case of Early-Exit networks.
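As a concrete reference for the two confidence tests above, the sketch below (Python/NumPy) computes both the entropy criterion and the max-probability criterion on a vector of class logits; the threshold values are placeholders of our own choosing, not values from the cited works.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())  # shift by the max for numerical stability
    return z / z.sum()

def exit_by_entropy(logits, entropy_thr=0.5):
    """BranchyNet-style test: exit when the class distribution has low entropy."""
    p = softmax(logits)
    return -np.sum(p * np.log(p + 1e-12)) < entropy_thr

def exit_by_max_prob(logits, conf_thr=0.9):
    """Exit when the top class probability clears a confidence threshold."""
    return softmax(logits).max() > conf_thr
```

Both tests require the exponentials of the Softmax, and the entropy test additionally requires logarithms, which is precisely the resource/latency cost flagged above.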
### _CNN compilers for FPGAs_

Taking standard CNNs from software-based inference to accelerated, FPGA-based inference can be accomplished automatically with custom compilers [28]. These broadly fit into two categories: single computation engine architectures and data-flow streaming architectures. The single engine [29] typically consists of a fixed architecture on which the CNN layers are mapped, loaded, and executed in a sequential fashion. The CNN is translated to a list of instructions. This execution can be controlled either by software on a CPU or by specialised control hardware. The benefit of single-engine architectures is that they are amenable to different CNN workloads. However, there is significant benefit, in terms of resource savings and throughput increases, to customising a given architecture to the CNN workload [30]. Streaming architectures take this customizability further by making use of the data-flow paradigm to produce deeply pipelined designs with specialized layers for state-of-the-art CNNs [31]. We adopt the streaming architecture method so that we can benefit from CNN-specific customization and implement the input-dependent compute spatially, targeting high throughput.

### _fpgaConvNet_

fpgaConvNet [32] is an automated CNN-to-FPGA toolflow able to handle common CNN layers, and it forms a foundation for our toolflow. The streaming architecture used comes from modelling the CNN as a Synchronous Data-Flow Graph (SDFG) [33]. The nodes are the computation operations, such as convolution, and the arcs are streams of data. This means that the hardware blocks mapping these computations can have a static schedule and compensate for differing data consumption rates with sufficient buffering determined at compile time. Streaming backpressure is handled by the Vivado HLS [34] streaming interface. fpgaConvNet receives a trained CNN model in the ONNX [2] representation. This device-agnostic representation is parsed to construct the SDFG and populate an initial hardware mapping with templated layers/modules corresponding to the supported ONNX operations (such as convolution and pooling). The tool performs Design Space Exploration (DSE) to optimize the hardware architecture, using simulated annealing to select possible incremental transformations to the hardware blocks. This allows an optimized design to be found in a large search space in a relatively short time. Finally, the hardware code is generated in a form suitable for Vivado HLS to compile. The design space exploration makes the most of the flexibility of FPGAs. It makes use of resource models of the hardware blocks as well as the target board resources and the model of the mapped CNN workload. The toolflow is also able to accommodate different application objectives, since it is possible to optimize the design for latency or throughput. For larger CNNs, folding is used to tune the amount of intermediate computation required to compute a full result. Folding the inputs and intermediate feature maps multiplexes sections of the design in time to reduce resource consumption. This is accomplished at two scales: coarse-grain folding at the input and output of layers, and fine-grain folding of the sliding windows within the convolution module. fpgaConvNet was chosen as a basis for our work because of its existing hardware templates for fundamental CNN layers and its tooling infrastructure, which provides DSE to generate a network layer configuration for hardware implementation. fpgaConvNet is open source and demonstrably extensible [35, 36, 37] in a range of contexts. In this work, we transform fpgaConvNet into ATHEENA by extending the scope of supported CNN architectures as well as creating new hardware templates to efficiently support Early-Exit networks. As with existing comparable toolflows [38, 31, 39] (at the time of writing), fpgaConvNet does not support the coarse granularity of input-dependent computation of Early-Exit networks. We expand the data-flow model of fpgaConvNet to include input-dependent computation as pipelined control flow. This allows ATHEENA to generate CNN-hardware mappings that benefit from varying data sample difficulty.
The method of extension we develop for fpgaConvNet is orthogonal to the extreme quantization [39] and exploitation of sparsity [31] detailed in other works. Furthermore, we improve overall compilation times by partitioning the design prior to HLS compilation and automatically stitching the generated components prior to synthesis and implementation.

### _Dynamic Machine Learning on FPGAs_

CascadeCNN [40, 41] implements a specialized form of dynamic computation where a runtime decision is made to switch between low- and high-precision quantized versions of the same network. The components of the architecture include both a low- and a high-precision implementation of the network on FPGA fabric and a confidence evaluation unit running on a CPU. All data examples are first fed through the low-precision network before the computation of a confidence estimate on the results to determine whether the results meet the user-defined accuracy threshold. Any samples with low confidence are fed through the high-precision network. ATHEENA is similar in that all data sample processing compute is confined to the FPGA fabric, avoiding prohibitive reconfiguration costs [41]. We also choose to include the confidence evaluation on chip for ATHEENA, meaning data does not need to make the round trip to and from off-chip memory to perform the confidence decision. The key difference for ATHEENA is the streaming architecture, which can achieve higher throughput thanks to deeply pipelined layers. DynExit [42] uses a hand-crafted, single-engine architecture to implement a ResNet with classifiers attached along a pre-trained backbone. The network architecture consists of pipelined convolution and linear (fully connected) execution blocks attached to a 'branch'. The 'branch' consists of an exponential and a natural logarithm module to compute the confidence of the classification based on a rearranged version of the cross-entropy loss function. Similarly, ATHEENA uses a dedicated block to determine the sample confidence, but DynExit lacks customizable layer configurations, which limits the accelerator design's ability to fully utilise the FPGA's resources for different networks. Adaptive Hierarchical CNN (AHCNN) [43] notes the benefits of being able to utilise the shallow features for easy data sample classification and the deeper features of a more accurate network for difficult classifications. To this end, partial reconfiguration is used to swap in and out shallow and deeper sections of a ResNet18 CNN. Large batch sizes are used to amortise the latency penalty for reconfiguration. This allows the design to benefit from full utilization of the resources for each stage of network computation, but the latency penalty is prohibitive for low-latency applications. They also compute the confidence decision based on the maximum value of the Softmax, as with ATHEENA, but couple this with the profiled probability of the occurrence of a given class. This equates to a confidence threshold that is class-dependent. The heterogeneous architecture proposed in [44] makes use of the FPGA fabric to accelerate the highly parallel CNN computation through the use of multiple processing elements in a systolic array architecture. The onboard CPU computes the Softmax and entropy of the intermediate classification results. As with DynExit, this is another hand-crafted architecture that supports the main convolution kernel sizes but currently has limited support for more recent networks.
Interest is growing in the use of input-dependent computation; however, none of these works provides a fully automated toolflow for mapping Early-Exit CNNs to FPGAs. It is this problem we tackle in this paper.

## III Methodology

The fundamental issue of implementing Early-Exit networks on FPGAs is determining a hardware mapping that balances the control- and data-flow throughput requirements whilst minimising latency overhead. A naive implementation would have all stages of the network optimized for the highest possible throughput. However, in the presence of any resource constraints this is clearly a sub-optimal strategy: having all stages targeting the same fixed throughput will lead to some stages being under-resourced bottlenecks and others being starved of data samples. Hence, in this work we target the following problem: given an FPGA platform with certain computational and memory resources, what is the best way to allocate those resources to maximise throughput for a given expected distribution of samples of varying difficulty? We first define our methodology for determining this optimized resource allocation for Early-Exit networks, and then explain the detail of the toolflow and template hardware designs we introduce to fpgaConvNet for ATHEENA. The code will be open-sourced1.

Footnote 1: Repository DOI: [https://doi.org/10.5281/zenodo.7809222](https://doi.org/10.5281/zenodo.7809222)

### _Scaling Resource Allocation according to Exit Probability_

Early-Exit networks can be divided into sections according to the stages of compute between each exit. For ease of presentation, we explain the area apportioning process with reference to a two-stage network; however, it is trivial to extend the presentation to multi-stage networks. We illustrate the methodology with the generic, two-stage network in Figure 3 before applying it to BranchyNet in Section IV-A.

Fig. 3: High-level diagram of the proposed control-flow hardware attached to the CNN layers of fpgaConvNet for a two-stage network. Black arrows represent normal data-flow plus Sample ID tags and red arrows represent control signals.

The network is first separated into two stages at the layer level. This means dividing the network into the first stage, containing all the parts of the backbone and the Early-Exit layers that need to operate at the higher data rate, and the second stage, containing the remaining parts of the backbone and the final exit. This second stage is only required to operate at a lower data rate because not all input data samples pass through this hardware. As a result, network classification decisions may also return out of order. The key challenge is to automate the design of a hardware architecture capable of efficiently supporting these multiple data rates. To this end, we can use existing FPGA design tools capable of folding (in our case fpgaConvNet) to generate separate, optimized Throughput-Area Pareto (TAP) functions for both stages, which we then merge to generate a combined TAP function as visualized in Figure 4. We define a TAP function as a function that is (non-strictly) monotonically increasing in each of its arguments. The principle is that this function captures the maximum achievable throughput possible by separately optimizing a section of the network for a given fraction of total resources. Let \(\mathbb{N}\) denote the set of natural numbers and \(\mathbb{Q}\) denote the set of rational numbers. An example of a TAP function might then be \(f:\mathbb{N}^{4}\rightarrow\mathbb{Q}\), capturing the optimal throughput possible with a constrained number of BlockRAMs, DSPs, FFs, and LUTs, as represented by the four arguments to the function.
A function like this is automatically generated by providing the fpgaConvNet optimizer with limited fractions of the board resource constraints. The results for each set of constraints are collated for input to the ATHEENA optimizer. The _1st Stage_ and _2nd Stage_ graphs of Figure 4 show a sketch of some TAP functions for the first and second stage of a two-stage network. Our objective is to combine these two TAPs into a single TAP for the Early-Exit network. We may think about the problem in this way: since the second stage is only expected to be used by each input sample with some probability \(p\in(0,1]\), it follows that, given suitable buffering between stages, it is possible to extract a higher throughput than the nominal throughput of the design, by a factor \(1/p\). The overall design will be limited by the throughput of the first stage or the second stage scaled by \(1/p\), whichever is lower. However, in any practical setting, the probability for an input sample to need further processing by the second stage will differ by some degree from the (profile-based) probability for which the hardware was designed. We therefore denote the design-time probability estimate as \(p\) and the actually encountered probability as \(q\). Putting these components together, we are able to define an ideal combination of the two TAPs, parameterised by \(p\) and \(q\), expressed formally in (1) and illustrated in Figure 1, where \(\oplus\) denotes the so-defined TAP combination operator.

\[f\underset{p,q}{\oplus}g:x\mapsto\min(f(x_{1}),g(x_{2})/q)\quad\mathrm{where}\;(x_{1},x_{2})=\underset{\begin{subarray}{c}x_{1},x_{2}\\ x_{1}+x_{2}=x\end{subarray}}{\arg\max}\min(f(x_{1}),g(x_{2})/p) \tag{1}\]

Intuitively, what this equation tells us is that for a given total resource budget \(x\), we should apportion a resource allocation \(x_{1}\) for the first stage and \(x_{2}\) for the second stage, maximising the throughput of the limiting stage while taking into account our design-time estimate of the probability \(p\). At runtime, the throughput demand on the first stage will be as expected, but may vary somewhat from the design-time expectation on the second stage. This is all illustrated in the lower plot of Figure 4.

Fig. 4: Visual representation of the scaling and combination of TAP curves for network stages. Our process combines two TAP graphs with respect to an expected probability \(p\). This \(p\) corresponds to the percentage of samples that require processing by the second stage. \(q\) denotes throughput deviations from this design-time probability that may be encountered at runtime.

The design points represented by the TAP functions for the first and second stages are discrete. This means that it is unlikely for the tool to be able to perfectly match the predicted throughput values. In the case that the second stage is the limiting factor, a reduction in the use of this stage, corresponding to \(q<p\), will result in an increase in throughput. For \(q>p\), the throughput will be reduced due to the reliance on the limiting stage. These situations are represented by the shaded region. The solid purple line corresponds to \(q=p\), in which the probability of hard samples matches that of the profiled, design-time probability. The following section explains how we obtain our probability profiles and use them to inform the combination of TAP functions in the manner previously described. ATHEENA builds on fpgaConvNet and automates this process for the user. We demonstrate this process in practice in Section IV.
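To make Equation (1) concrete, here is a brute-force version of the \(\oplus\) operator in Python for a single scalar resource; ATHEENA's optimizer reasons over BRAM, DSP, FF and LUT budgets jointly, so this one-dimensional sketch illustrates the operator rather than reproducing the toolflow's implementation.

```python
def combine_taps(f, g, total, p, q=None):
    """Eq. (1) for one scalar resource: split `total` units between stages.

    f, g map resource units to the best achievable throughput of stage 1
    and stage 2 (monotonically non-decreasing). p is the design-time
    probability that a sample needs the second stage; q is the probability
    actually encountered at runtime.
    """
    q = p if q is None else q
    best, split = float("-inf"), (0, total)
    for x1 in range(total + 1):          # exhaustive search over apportionments
        x2 = total - x1
        tput = min(f(x1), g(x2) / p)     # design-time objective
        if tput > best:
            best, split = tput, (x1, x2)
    x1, x2 = split
    return split, min(f(x1), g(x2) / q)  # chosen split and realised throughput
```

For \(q<p\) the second stage is under-subscribed and the realised throughput can exceed the design point; for \(q>p\) the second stage becomes the bottleneck, matching the shaded region of Figure 4.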
### _Toolflow Extensions & Automation_

We build the ATHEENA toolflow by extending the open-source fpgaConvNet, as illustrated in Figure 5. The fundamental difference between the flows is that the original fpgaConvNet constructed a data-flow graph, whereas we require a _control_ and data-flow graph (CDFG) to represent the flow of data through layers as well as the confidence decisions at the end of the early exits. Modifications to the parser and optimizer are made to support the different ONNX operations and encompass the control-flow for hardware translation. Several new hardware component templates (detailed in Section III-C) are designed for the FPGA implementation of control-flow and to support the confidence calculation.

Fig. 5: A visual representation of the ATHEENA toolflow. We have built upon the baseline, open source fpgaConvNet toolflow. New elements are in red and modified elements are in blue.

#### III-B1 Early-Exit Profiler

We introduce the Early-Exit profiler, which takes a _profiling data set_ and the high-level _Early-Exit ConvNet description_ and apportions the set so that multiple distinct tests can be run which will have a similar probability of hard samples on average but variation individually. Batched inference is performed over the sets, followed by collection of the exit probabilities, exit accuracy, and cumulative accuracy. The average probability of hard samples is fed into the optimizer as \(p\), along with the multi-stage CDFG hardware model. The ATHEENA optimizer then creates an optimized mapping for the different stages of the network, scaling resource constraints according to \(1/p\).
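A functional sketch of this profiling step in Python/PyTorch: it runs batched inference over the profiling set, applies a max-Softmax threshold test at the first exit (cf. Condition (2) below), and reports the hard-sample probability \(p\) handed to the optimizer. The `model(x)` interface returning per-exit logits is our assumption for illustration, not ATHEENA's exact API.

```python
import torch

@torch.no_grad()
def profile_exits(model, loader, conf_thr):
    """Estimate exit statistics of a two-stage Early-Exit network.

    Assumes model(x) returns (early_logits, final_logits) for a batch x.
    Returns the hard-sample probability p for the optimizer, the early-exit
    rate, and the cumulative accuracy over both exits.
    """
    n = exited = early_correct = final_correct = 0
    for x, y in loader:
        early_logits, final_logits = model(x)
        conf, early_pred = torch.softmax(early_logits, dim=1).max(dim=1)
        take_early = conf > conf_thr                 # max-Softmax threshold test
        exited += int(take_early.sum())
        early_correct += int((early_pred[take_early] == y[take_early]).sum())
        final_pred = final_logits.argmax(dim=1)
        final_correct += int((final_pred[~take_early] == y[~take_early]).sum())
        n += y.numel()
    p_hard = 1 - exited / n                          # design-time p
    return p_hard, exited / n, (early_correct + final_correct) / n
```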
#### III-B2 HLS Limitations & Parallel Compilation

In order to allow for HLS-based compilation of large networks, we automatically split the network into the individual layers, generating top-level HLS files for each. This results in multiple smaller designs for HLS that can be compiled independently. The layers are then automatically stitched together at the board design stage in Vivado IP Integrator, in conjunction with the supporting processor and memory interfaces. A DMA controller is introduced with input and output FIFOs to manage the transfer of data between the host and the FPGA. Since the Early-Exit board design now consists of multiple HLS cores, they each need a start signal from the host CPU. These signals are automatically added into the host code, in addition to the DMA read and write transfers set to the batch size specified by the user. By the end of the compilation, the user has a bitstream and complementary host code to perform batch inference on the board.

#### III-B3 ONNX Conversion

We make some minor modifications to the benchmark source code so that it can be converted into a compatible ONNX form using our _Early-Exit profiler_. As the original networks were typically run in software, PyTorch (version 1.8.1) handles the scheduling of the network graph execution for CPU and GPU inference. ONNX generation from a PyTorch description of a simple network is trivial, but the inclusion of conditional operations requires the PyTorch description to explicitly prevent the operations from being removed by software optimization passes. These network control-flow decisions need to be captured and translated to FPGA-based hardware blocks. We use the PyTorch scripting methods as an intermediate stage [45]. This converts the network into a PyTorch-specific intermediate representation capable of supporting conditional statements. PyTorch-based ONNX methods then convert the intermediate representation to the final ONNX form. An inference test is performed with the ONNX form and the results are compared to the original to verify that the conditional functionality matches that of the PyTorch implementation.

### _Early-Exit Network Layers: Hardware Templates_

We extended the ONNX parser of fpgaConvNet to support the operations required by the Early-Exit CNNs. These operations include:

* Softmax
* Reduction (ReduceMax)
* Numerical comparison (Greater than)
* If conditional

Figure 6 illustrates the new Early-Exit layer templates and Figure 3 provides a high-level view of the proposed placement of these layers within the pre-existing CNN data-flow operations in fpgaConvNet. These operations are the foundation of the conditional aspect of the network, so they are merged into one hardware layer comprising distinct modules. The remaining layers added do not correspond to ONNX operations but support the control-flow extensions of the SDF paradigm. The newly added layers match the fixed-point representation, with the exception of the Exit Decision layer. This instead uses single-precision floating-point, as this preserves the numerical behaviour of the exponential function at its core.

Fig. 6: Newly added Early-Exit hardware layer templates. The shapes represent a given Sample ID. For the Conditional Buffer and Exit Decision, we have indicated whether the Sample ID is kept (green) or dropped (red). The associated sample data is either passed through or dropped. We show that the Exit Merge layer keeps all data for a given Sample ID sequential, opting to stall an exit instead of interleaving.

#### III-C1 Exit (Softmax) Decision Layer

An early exit will occur if Condition (2) holds for some threshold \(C_{thr}\) determined after training, prior to exit profiling, where the standard \(\text{Softmax}:\mathbb{R}^{C}\rightarrow\mathbb{R}^{C}\) function corresponds to (3), used to transform the vector of class activations into a probability distribution, where exponentiation of vectors is interpreted component-wise [24].

\[\max_{i\in\{1\ldots C\}}\left[\text{Softmax}(x)\right]_{i}>C_{thr} \tag{2}\]

\[\text{Softmax}(x)=\frac{\exp(x)}{\sum_{j=1}^{C}\exp(x_{j})} \tag{3}\]

\(C\) is the number of classes, \(x_{i}\) is the network output for the \(i\)th class, and \(C_{thr}\) is the confidence threshold. This layer, along with the exit-specific computation required for the Early-Exit classifier, dictates the minimum size of buffering required between the intermediate result and the conditional buffer layer to prevent a pipeline stall (Figure 7). On a small FPGA this has the potential to be a prohibitively large amount of memory, so the need to reduce the latency of the operation is key to the implementation of Early-Exit networks. The operation can be rearranged to remove the division, as in (4), to decrease the resource overhead and latency of the decision component.

\[\max_{i\in\{1\ldots C\}}\exp(x_{i})>C_{thr}\cdot\sum_{j=1}^{C}\exp(x_{j}) \tag{4}\]

The use of floating-point for this layer means that addition and comparison operations incur a significant latency penalty given a target frequency. To reduce this penalty, we implement adder and comparison trees to compute results in parallel, minimising the latency of this layer.

Fig. 7: The latency of the additional exit computation and exit decision layers is used to determine the minimum amount of buffering required by the conditional buffer to prevent deadlock in the design.
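The rearrangement from (2)-(3) to (4) is easy to sanity-check in software; the Python sketch below verifies that the division-free form makes the same decision as the naive Softmax test. In hardware, removing the division takes a floating-point divider off the critical path, and the adder and comparison trees reduce the reduction latency from linear to logarithmic in the number of classes \(C\).

```python
import numpy as np

def exit_naive(x, c_thr):
    """Condition (2)-(3): threshold the maximum of the full Softmax."""
    e = np.exp(x)
    return (e / e.sum()).max() > c_thr

def exit_division_free(x, c_thr):
    """Condition (4): the same decision with the division removed."""
    e = np.exp(x)
    return e.max() > c_thr * e.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=10).astype(np.float32)   # activations for C = 10 classes
assert exit_naive(x, 0.9) == exit_division_free(x, 0.9)
```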
#### III-C2 Conditional Buffer Layer

Two challenges arise from the addition of control flow from conditional buffers. The first challenge is that, after the first stage of the network, data samples will go down the path to the early exit or the path to the second stage. Let us assume \(N\) data samples: \(K\) will exit early while \(N-K\) will continue through the second stage. The existing fpgaConvNet hardware layers in the second stage expect \(N\) data samples, and the implementation of the pipeline will only terminate on the final, \(N\)th data sample. To avoid deadlock, we flush the pipeline with an unused sample ID and corresponding data. The second challenge is that there is a significant amount of computation between the point at which the network first has to buffer data and the point at which control signals are produced. We temporarily buffer the unfinished data sample while the confidence metric is evaluated, as well as the fully processed sample result at the early exit classification stage. If the sample can exit early, the buffer drops the intermediate data and the classification result of the early exit is transferred to memory via the exit merge component. Due to the size of the intermediate feature map buffer, it is important to reduce the impact of both reading in and writing out of the buffer (which would effectively double the latency of the layer for every sample prior to the control signals). To drop an unused feature map, we invalidate the addresses of the stored feature map in a single cycle. If the sample cannot exit, the buffer passes the intermediate data through to the next stage of the backbone. The fundamental operation of the conditional buffer is to filter the easy samples from the hard samples. The buffer prevents the later stages of the backbone from unnecessary computation, which has the effect of increasing throughput because of the lower expected data rate for the second stage. This component will be included in the open-source repository.

#### III-C3 Split Layer

The split layer is used to duplicate the result of layers at the branching points in the Early-Exit network. This splits the data-flow stream to allow a copy of the data to continue down the backbone in parallel to the early exit layers.

#### III-C4 Exit Merge Layer

Inference is run on a batch of data samples where each data sample consists of a fixed number of pixels. In line with other static streaming architectures, the original fpgaConvNet has no built-in distinction between data samples in the pipelined hardware. Each component will continue to operate on newly provided pixels, and the user is responsible for interpreting the results based on their location in memory. However, given that in an Early-Exit network data samples within a batch are able to complete out of order, there needs to be an internal representation of each data sample's position within the batch.
A _Sample ID_ is assigned to each data sample and is passed through the hardware with each pixel. At the conditional buffers, the IDs are compared to determine whether or not to drop the data points with a given ID. When a sample exits the network, the exit selection layer coherently merges the exit streams into one memory-writing component. This may result in the stalling of one network path while another is allowed to pass through.
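A behavioural model of this control flow is useful for checking the stream semantics before committing to HLS. The Python sketch below is our own simplification of the pipelined hardware, not ATHEENA's implementation: easy samples are dropped (in hardware, a single-cycle invalidation of the buffered addresses), hard samples are forwarded, and the output is padded with dummy IDs so the static second-stage pipeline still sees \(N\) inputs and terminates cleanly.

```python
def conditional_buffer(stream, exits_early, n_total, dummy_id=-1):
    """Filter a stage-1 output stream for the second stage.

    stream: iterable of (sample_id, feature_map) in arrival order.
    exits_early: dict sample_id -> True if the Exit Decision layer fired.
    Hard samples pass through; easy samples are dropped; dummy samples
    flush the pipeline up to the expected n_total inputs.
    """
    forwarded = [(sid, fmap) for sid, fmap in stream if not exits_early[sid]]
    padding = [(dummy_id, None)] * (n_total - len(forwarded))
    return forwarded + padding

# Example: samples 0 and 2 exit early, so only sample 1 reaches stage 2.
stage2_in = conditional_buffer(
    [(0, "fmap0"), (1, "fmap1"), (2, "fmap2")],
    exits_early={0: True, 1: False, 2: True},
    n_total=3,
)
```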
## IV Experimental Results

The following section details a case study of using the ATHEENA toolflow to automatically implement the previously proposed Branchy-LeNet network [16] directly from PyTorch code. This starts with some minor modifications to the source architecture shown in Figure 8. The network was trained and tested in the manner outlined in the original paper. The alterations resulted in a negligible change in accuracy and a similar Early-Exit probability for a comparable \(C_{thr}\) value in software. These modifications reduce wasted compute on the board and improve compatibility with standard fpgaConvNet convolution layers. The Early-Exit profiler is used to generate the ONNX representation and Early-Exit probabilities for the extended fpgaConvNet optimizer. This in turn creates a hardware mapping for the Early-Exit and fpgaConvNet layers for the chosen board. The tool then converts the mapping into HLS code and compiles the layers in parallel before stitching them together and generating a bitstream with associated host code.

Fig. 8: Slightly modified version of B-LeNet [16] for fpgaConvNet. Changes are highlighted in yellow with original values in brackets. Faded layers have been removed from the architecture. Hardware-only layers (detailed in Section III-C) have been added.

### _BranchyNet: Hardware Experimental Study_

The experimental setup follows that of fpgaConvNet to make comparison direct and expedite the gathering of results. The target device is the Xilinx ZC706 board [46] with the Zynq 7045 System on Chip (SoC). The resources available are 218600 LUTs, 437200 FFs, 900 DSPs, and 1090 18K BRAMs. Vivado HLS 2019.1 was used for compatibility with fpgaConvNet. Each design is conservatively clocked at 125MHz (limited by HLS and board). We compare our hardware Early-Exit implementation to a corresponding single-stage network baseline. This single stage (backbone) consists of the network layers from the start of the Early-Exit network through to the end of the second stage. For BranchyNet, this means three Convolution, Pooling and ReLU layers followed by a Linear (Fully Connected) layer. Both the ATHEENA optimizer and the baseline optimizer are provided the board resources constrained at different percentages in order to generate a Throughput-Area Pareto curve. Due to the random aspect of the simulated annealing within both optimizers, they are run ten times and the best points are chosen to form the curve. Additionally, a range of data points with constrained resources allows us to infer throughput gains/resource savings on boards with lower available resources. Once the optimizers generate the hardware mapping, we collate the predicted results shown in Figure 9(a) and then pass the best-performing subset of these results through to the HLS backend in order to generate the board design. Figure 9(b) shows the resulting Throughput-Area Pareto curve after synthesis, implementation, and place-and-route. The resource usage is recorded and the board loaded with each bitstream. The automatically generated host code for the board loads a batch of 1024 samples onto off-chip memory and enables the DMA transfers to and from the design. We measure the total time taken from the start of the DMA transfer until the DMA block registers as being idle, and use this to calculate the throughput of the network. The same DMA controller is present for baseline and Early-Exit implementations, so the impact on resources is consistent for a fair comparison. The baseline results are represented by the red lines in Figure 9. While the fpgaConvNet model is not accurate on a point-by-point basis, the trend is consistent for the predicted and board implementations. To allow for a fair comparison, we included an implemented point predicted by the optimizer to consume greater than the board's resources; in practice the design point B3 consumed \(98\%\) of the DSPs. For higher resource allowances, we can see that the designs tend towards being limited by DSPs; this is due to the baseline optimizer selecting an increased level of parallelism for the Convolution and Linear layers and is common to both the baseline and ATHEENA board implementations. The solid purple line shows the case when \(q\) (the percentage of hard samples in the test set) matches \(p\) (the profiling data set percentage), in this case \(q=p=25\%\). The dashed lines represent a range of points taken on the Throughput-Area space with a differing percentage of hard samples. The predicted results are calculated under ideal conditions: assuming a regular sequence of easy and hard samples, and sufficient buffering such that the second network stage is able to achieve the estimated throughput. For the implemented board results, we sample a batch from the test data set for the task. The sampled test set has a split of easy and hard samples proportioned according to the required test probabilities but distributed randomly within the batch of 1024 samples. The lower dashed line represents \(q=30\%\) and demonstrates a partially reduced throughput for some of the data points. The upper dashed line represents \(q=20\%\) and shows that some sub-optimal profiling situations can still result in a throughput increase, thanks to the significant latency reduction of data samples exiting at the high-throughput first stage. The results demonstrate the potential for greatly improved throughput at the risk of slightly decreased throughput caused by differences between test and profiling exit probabilities, and variation in exit distribution within a batch. Additionally, the predicted Throughput-Area curve is a good approximation of the measured implementation results. We can see the ATHEENA model is slightly optimistic in terms of achievable throughput. This is likely due to sub-optimal resource models for the new components and the variability of HLS compilation. Finally, the gap between the lower dashed line and the baseline is indicative of the robustness of the approach to the difference between \(p\) and \(q\) for a real-world application, assuming sufficiently sized buffers. The maximum measured ATHEENA throughput is \(2.17\times\) the maximum measured baseline throughput. ATHEENA achieves this throughput using 16% fewer DSPs (the limiting resource) when the resources are apportioned between the first and second stage according to the profiled probabilities. Alternatively, ATHEENA can achieve the same throughput as the maximum baseline using \(46\%\) of the design's limiting resource. The design points in Table I show that the achieved throughput increase comes at the cost of an increase in BRAM.
TABLE I: Tabulated resource comparison for implemented Baseline vs. ATHEENA on the ZC706 board. *Maximum limiting resource.

| Design | LUT | FF | DSP | BRAM | Limiting Resource (%) | Throughput (Samples/s) |
|---|---|---|---|---|---|---|
| B1 | 75513* | 61361 | 295 | 55 | 35 | 13513 |
| A1 | 68383* | 63170 | 169 | 206 | 31 | 19434 |
| B2 | 105451 | 84761 | 470* | 89 | 52 | 21276 |
| A2 | 128940* | 117138 | 407 | 239 | 59 | 47583 |
| B3 | 138194 | 120764 | 858* | 170 | 98 | 43384 |

TABLE II: Resource overhead of the Early-Exit for labelled designs compared to the network backbone. This includes the resource usage attributed to the _additional_ Early-Exit computation and buffering required for operation of B-LeNet. The proportion of this overhead is detailed as a percentage of the total design.

| Design | LUT | % | FF | % | DSP | % | BRAM | % |
|---|---|---|---|---|---|---|---|---|
| A1 | 13912 | 20 | 13595 | 22 | 34 | 20 | 114 | 55 |
| A2 | 37766 | 29 | 35941 | 31 | 122 | 30 | 146 | 61 |
| A3 | 33166 | 20 | 30974 | 22 | 112 | 15 | 186 | 70 |

Fig. 9: The red line represents the corresponding baseline results for fpgaConvNet and the limiting resources for each point are denoted by the following: X (BRAM), □ (LUT), O (DSP). Designs A1, A2, A3 and B1, B2, B3 have detailed resource usage in Table I.

The designs A1, A2, and A3 (labelled in Figure 9(b)) require this as part of the conditional buffers to store enough intermediate feature map samples to prevent deadlock, as there is a delay between the points at which the buffered data and the related control signals are produced. Furthermore, additional BRAM is added to increase robustness to variation in the hard samples' exit probability. This results in the resource overhead from implementing the additional classifier layers, comparison layer, and conditional buffering layers being dominated by BRAM usage, as detailed in Table II, with design A3 having \(70\%\) of its total BRAM usage within the early exit overhead. The predicted throughput of each stage of the Early-Exit network is calculated separately. The resulting design points for each stage form a discrete Pareto front. When the optimizer is selecting from these two stages and scaling the throughput of the second stage, there will often be a discrepancy between the predicted throughputs of the stages. If the resulting combined design point over-provisions the second stage, then the design will be more robust to variations in the testing data set probability. Excluding the BRAM usage, we can see that the Early-Exit resource overhead is higher for design point A2, suggesting that the throughput is more tightly coupled to the performance of the first stage of the network. The original BranchyNet paper details an implementation of the LeNet and Branchy (B-) LeNet networks using a 3.0GHz CPU with 20MB L3 Cache and an NVIDIA GeForce GTX TITAN X (Maxwell) 12GB GPU. They report the average latency of a single sample in milliseconds. We convert their per-sample average latency metric into a throughput metric for the comparison in Table III. Both our baseline and ATHEENA designs benefit from adaptations of the network architecture to be more amenable to a hardware implementation.
These changes include quantisation to a fixed-point representation, adjusting layer parameters, and adapting the exit condition computation. This has a marginal effect on the accuracy compared to the software implementations and is partly the reason for such high gains compared to the CPU and GPU implementations. We are able to exploit per-sample parallelism more effectively using the streaming architecture; however, the relative gains of implementing Early-Exit on CPUs and GPUs are greater than those demonstrated by our toolflow, in part due to the necessary reduction of the exit percentage to maintain accuracy. Despite this, we find that \(p=50\%\) still results in a throughput improvement relative to the baseline, in spite of the area overhead for the additional layers and control logic embedded in the Early-Exit streaming architecture. Overall, significant gains in throughput can be achieved from utilising an optimized streaming architecture when converting a CNN implementation to FPGAs, and up to \(2.17\times\) further gains from implementing customised Early-Exit hardware tailored to the design.

### _Benchmarking Results_

We include an additional two networks in Table IV. The best performing predicted throughput and corresponding resource results have been taken from the optimizer stage of the baseline and ATHEENA toolflows. Due to the increased size of these networks, we specify the target platform as the Xilinx VU440. We use the percentage of hard samples outlined in the papers to generate the two-stage designs. We have included software-based implementations of these networks in the open source repository.

## V Conclusions

We have demonstrated the benefits of the _input-dependent_ computation paradigm in improving CNN mapping to FPGAs by developing a toolflow that allows for the exploration of the throughput-area trade-off space of Early-Exit network hardware implementations, orthogonal to the benefits from quantisation and pruning employed by other frameworks. We have proposed an approach to automate the production of Early-Exit networks based on a probabilistic proportioning of resources between parts of the computation operating at different data rates, expanding an existing toolflow. This is achieved with the development of specific Early-Exit layers that can handle the intermediate buffering and conditional data-flow requirements of these networks. We verify the toolflow's model of predicted performance by implementing multiple resource-constrained design points on an FPGA board with randomised test samples. The robustness of the approach is explored using adapted test sets with known Early-Exit probability variation. The resulting ATHEENA framework can transform high-level Early-Exit CNNs into optimized FPGA hardware that outperforms its standard CNN counterparts in terms of throughput for a given board or area constraint.

## Acknowledgment

We would like to thank Alexander Montgomerie-Corcoran for the invaluable advice and assistance he has given with regard to the fpgaConvNet toolflow. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Accepted Manuscript version arising.
\begin{table} \begin{tabular}{l|l|c|c|c|r|l} **Network** _(Task)_ & **Toolflow** & \multicolumn{2}{c|}{**Limiting Resource** (Type, \%)} & \(p\) (\%) & \multicolumn{2}{c}{**Throughput (Samples/s)**} \\ \hline \hline B-LeNet [16] _(MNIST)_ & Baseline* & DSP & 84 & - & 43384 & _1.00\(\times\)_ \\ & ATHEENA* & DSP & 88 & 25 & **94170** & _2.17\(\times\)_ \\ \hline Triple Wins [27] _(MNIST)_ & Baseline & DSP & 86 & - & 19524 & _1.00\(\times\)_ \\ & ATHEENA & DSP & 81 & 25 & **54220** & _2.78\(\times\)_ \\ \hline B-AlexNet [16] _(CIFAR10)_ & Baseline & DSP & 84 & - & 8676 & _1.00\(\times\)_ \\ & ATHEENA & DSP & 88 & 34 & **17357** & _2.00\(\times\)_ \\ \end{tabular} \end{table} TABLE IV: Resulting throughput for two-stage accelerator designs generated by ATHEENA compared to the fpgaConvNet baseline. *Implemented on Xilinx ZC706.

\begin{table} \begin{tabular}{l l|c|c|c} & **Network** & **Top 1 Acc. (\%)** & \(p\) (\%) & **Throughput (Samples/s)** \\ \hline \hline CPU & LeNet & 99.20 & - & 297 \\ CPU & B-LeNet & **99.25** & 5.7 & 1613 \\ \hline GPU & LeNet & 99.20 & - & 633 \\ GPU & B-LeNet & **99.25** & 5.7 & 2941 \\ \hline Baseline & LeNet & 98.84 & - & 43384 \\ **ATHEENA** & **B-LeNet** & 98.88 & 25.0 & **94170** \\ \end{tabular} \end{table} TABLE III: Comparison against the BranchyNet reported accuracy, with their per-sample latencies converted to throughput and their exit rates converted to hard-sample probability.
2307.11401
Sandwich Boosting for Accurate Estimation in Partially Linear Models for Grouped Data
We study partially linear models in settings where observations are arranged in independent groups but may exhibit within-group dependence. Existing approaches estimate linear model parameters through weighted least squares, with optimal weights (given by the inverse covariance of the response, conditional on the covariates) typically estimated by maximising a (restricted) likelihood from random effects modelling or by using generalised estimating equations. We introduce a new 'sandwich loss' whose population minimiser coincides with the weights of these approaches when the parametric forms for the conditional covariance are well-specified, but can yield arbitrarily large improvements in linear parameter estimation accuracy when they are not. Under relatively mild conditions, our estimated coefficients are asymptotically Gaussian and enjoy minimal variance among estimators with weights restricted to a given class of functions, when user-chosen regression methods are used to estimate nuisance functions. We further expand the class of functional forms for the weights that may be fitted beyond parametric models by leveraging the flexibility of modern machine learning methods within a new gradient boosting scheme for minimising the sandwich loss. We demonstrate the effectiveness of both the sandwich loss and what we call 'sandwich boosting' in a variety of settings with simulated and real-world data.
Elliot H. Young, Rajen D. Shah
2023-07-21T07:46:08Z
http://arxiv.org/abs/2307.11401v2
# Sandwich Boosting for Accurate Estimation in Partially Linear Models for Grouped Data ###### Abstract We study partially linear models in settings where observations are arranged in independent groups but may exhibit within-group dependence. Existing approaches estimate linear model parameters through weighted least squares, with optimal weights (given by the inverse covariance of the response, conditional on the covariates) typically estimated by maximising a (restricted) likelihood from random effects modelling or by using generalised estimating equations. We introduce a new 'sandwich loss' whose population minimiser coincides with the weights of these approaches when the parametric forms for the conditional covariance are well-specified, but can yield arbitrarily large improvements in linear parameter estimation accuracy when they are not. Under relatively mild conditions, our estimated coefficients are asymptotically Gaussian and enjoy minimal variance among estimators with weights restricted to a given class of functions, when user-chosen regression methods are used to estimate nuisance functions. We further expand the class of functional forms for the weights that may be fitted beyond parametric models by leveraging the flexibility of modern machine learning methods within a new gradient boosting scheme for minimising the sandwich loss. We demonstrate the effectiveness of both the sandwich loss and what we call 'sandwich boosting' in a variety of settings with simulated and real-world data. ## 1 Introduction Grouped data are commonplace in many scientific, econometric and sociological disciplines. Prime examples include: repeated measures data (e.g. multiple readings of patient data), longitudinal data (e.g. where weekly sales are recorded across multiple stores), and hierarchical data (e.g. educational datasets clustered by school and potentially further sub-clustered by classroom). To fix ideas, consider a regression setting where we have available grouped data \((Y_{i},D_{i},X_{i})\in\mathbb{R}^{n_{i}}\times\mathbb{R}^{n_{i}}\times\mathbb{R}^{n_{i}\times d}\) for \(i=1,\ldots,I\), with \(Y_{i}\) a vector of responses and predictors \((D_{i},X_{i})\) separated out into a covariate \(D_{i}\), whose contribution to the response we are particularly interested in, and remaining covariates \(X_{i}\); for instance \(D_{i}\) may be a treatment whose effect we wish to assess after controlling for additional covariates. In total therefore we have \(N:=\sum_{i=1}^{I}n_{i}\) observations, though typically not all independent. A simple but popular approach to modelling this data in practice is via a linear model of the form \[Y_{i}=\beta D_{i}+X_{i}\gamma+\varepsilon_{i}. \tag{1}\] Here \(\varepsilon_{i}\in\mathbb{R}^{n_{i}}\) is a vector of errors such that \(\mathbb{E}\left[\varepsilon_{i}\,|\,D_{i},X_{i}\right]=0\) and \((\beta,\gamma)\in\mathbb{R}\times\mathbb{R}^{d}\) are regression coefficients to be estimated, with \(\beta\) our primary target of inference. A challenge in such settings is properly accounting for potential correlations between components of \(\varepsilon_{i}\) in order to obtain accurate estimates of the parameters.
This may be achieved through a weighted least squares regression yielding estimates \[\begin{pmatrix}\hat{\beta}\\ \hat{\gamma}\end{pmatrix}:=M^{-1}\begin{pmatrix}\sum_{i}D_{i}^{T}\hat{W}_{i}Y_{i}\\ \sum_{i}X_{i}^{T}\hat{W}_{i}Y_{i}\end{pmatrix},\qquad\text{where}\qquad M:=\begin{pmatrix}\sum_{i}D_{i}^{T}\hat{W}_{i}D_{i}&\sum_{i}D_{i}^{T}\hat{W}_{i}X_{i}\\ \sum_{i}X_{i}^{T}\hat{W}_{i}D_{i}&\sum_{i}X_{i}^{T}\hat{W}_{i}X_{i}\end{pmatrix}, \tag{2}\] in terms of weight matrices \(\hat{W}_{i}\in\mathbb{R}^{n_{i}\times n_{i}}\) to be chosen. The optimal choice \(\hat{W}_{i}\propto\operatorname{Cov}[Y_{i}\,|\,D_{i},X_{i}]^{-1}\) results in semiparametric efficient estimation of \((\beta,\gamma)\). A variety of approaches have been proposed for constructing the \(\hat{W}_{i}\). Among the most popular are multilevel models (also known as random or mixed effects models) (Pinheiro and Bates, 2000; Fahrmeir and Tutz, 2001), which additionally make distributional assumptions on the errors \(\varepsilon_{i}\), typically that of Gaussianity, and implicitly specify a particular parametrisation of \(\operatorname{Cov}[Y_{i}\,|\,D_{i},X_{i}]\) in terms of the covariates through the introduction of latent random coefficients. Parameters are typically estimated through (restricted) maximum likelihood estimation (Hartley and Rao, 1967; Corbeil and Searle, 1976; Pinheiro and Bates, 2000). An alternative is the marginal models framework (Heagerty and Zeger, 2000; Diggle et al., 2013; Fahrmeir and Tutz, 2001), which directly models the conditional covariance through a parametric form often estimated via generalised estimating equations (Liang and Zeger, 1986; Hardin and Hilbe, 2003; Ziegler, 2011). Provided the forms of the conditional covariance are well-specified, any of these approaches will result in efficient estimates for \(\beta\) and \(\gamma\). It is however well-known that all models are wrong (Box, 1976), and it is of interest to understand, under misspecification, which approaches remain useful. Below we discuss the consequences of the two potential sources of misspecification, that of the conditional covariance \(\operatorname{Cov}[Y_{i}\,|\,D_{i},X_{i}]\), and the conditional mean \(\mathbb{E}[Y_{i}\,|\,D_{i},X_{i}]\).

### Conditional covariance misspecification

Misspecification of the conditional covariance has been given a good deal of attention in the literature. The generalised estimating equation approach that has come to be known as GEE1 (Liang et al., 1992) explicitly recognises the possibility of misspecification, and instead specifies what is referred to as a working model for the conditional covariance, with which to construct the weights. Valid inference is guaranteed even with arbitrary (fixed) weights, as the estimator (2) is unbiased and standard errors may be based on a sandwich estimate of the variance of \(\hat{\beta}\) (Huber, 1967; Gourieroux et al., 1984; Royall, 1986; Liang and Zeger, 1986), \[\sum_{i=1}^{I}\left\{\left(M^{-1}\begin{pmatrix}D_{i}^{T}\hat{W}_{i}\hat{R}_{i}\\ X_{i}^{T}\hat{W}_{i}\hat{R}_{i}\end{pmatrix}\right)_{1}\right\}^{2},\qquad\text{where}\qquad\hat{R}_{i}:=Y_{i}-\hat{\beta}D_{i}-X_{i}\hat{\gamma}\in\mathbb{R}^{n_{i}}. \tag{3}\] The idea behind the working covariance model however is to approximate the ground truth sufficiently well such that the resulting \(\hat{\beta}\) has reasonably low variance; various estimation methods have been proposed for this purpose (Prentice and Zhao, 1991; Crowder, 1995; Lumley, 1996; Halekoh et al., 2006).
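For concreteness, the estimator (2) and the variance estimate (3) can be computed with a few lines of linear algebra; the following is a minimal numpy sketch (ours, for illustration only; the names are not from any package):

```python
import numpy as np

def weighted_ls_with_sandwich(groups, weights):
    """Weighted least squares (2) and sandwich variance estimate (3).

    groups  : list of (Y_i, D_i, X_i), with Y_i, D_i of shape (n_i,)
              and X_i of shape (n_i, d)
    weights : list of weight matrices W_i of shape (n_i, n_i)
    """
    d = groups[0][2].shape[1]
    M = np.zeros((1 + d, 1 + d))
    v = np.zeros(1 + d)
    for (Y, D, X), W in zip(groups, weights):
        Z = np.column_stack([D, X])          # stacked design (D_i, X_i)
        M += Z.T @ W @ Z
        v += Z.T @ W @ Y
    coef = np.linalg.solve(M, v)             # (beta_hat, gamma_hat)
    # Sandwich estimate (3): sum over groups of the squared first
    # component of M^{-1} g_i, with per-group scores g_i = Z_i' W_i R_i.
    var_beta = 0.0
    for (Y, D, X), W in zip(groups, weights):
        Z = np.column_stack([D, X])
        R = Y - Z @ coef                     # residuals R_hat_i
        var_beta += np.linalg.solve(M, Z.T @ W @ R)[0] ** 2
    return coef[0], var_beta
```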
While this intuition is basically well-founded, perhaps surprisingly, for a given model for the covariance, the success of these approaches depends crucially on the method of estimation, as we now demonstrate with two simple examples; specific details for these are given in Appendix A.2. **Example 1** (Conditional correlation misspecification).: Consider a version of (1) with \(X_{i}\) omitted for simplicity, \(n_{i}\equiv n\) and each error vector \(\varepsilon_{i}\in\mathbb{R}^{n}\) given by the first \(n\) realisations of an ARMA\((2,1)\) model. Suppose the weights are estimated in the parametric class consisting of inverses of the covariance matrices of AR\((1)\) processes, indexed by a single autoregressive parameter \(\rho\); note that the scale of the weights does not affect the resulting \(\hat{\beta}\), so we do not consider this as a parameter. We consider two specific settings within this general setup. Figure 1 plots the objective functions that are minimised by two well-established methods for constructing weights based on estimates \(\hat{\rho}\) of \(\rho\). The so-called quasi pseudo maximum likelihood approach (ML) (Gourieroux and Monfort, 1993; McCullagh and Nelder, 1989; Ziegler, 2011) treats the errors as if they were normally distributed with correlation matrix given by the AR\((1)\) process for a given \(\rho\), and proceeds to maximise, jointly over the unknown \(\beta\), \(\rho\) and variance \(\sigma\), what would be the likelihood were this model to hold. Motivated by the moment equation \(\mathbb{E}\left[\varepsilon_{ij}\varepsilon_{ik}\right]=\sigma^{2}\rho^{|j-k|}\), a second approach (GEE) falling within the GEE1 framework estimates \(\rho\) by the minimiser of \[\sum_{i=1}^{I}\sum_{j,k=1}^{n}(\hat{\varepsilon}_{ij}\hat{\varepsilon}_{ik}-\hat{\sigma}^{2}\rho^{|j-k|})^{2},\qquad\text{with}\qquad\hat{\sigma}:=\underset{\sigma>0}{\text{argmin}}\sum_{i=1}^{I}\sum_{j=1}^{n}(\hat{\varepsilon}_{ij}^{2}-\sigma^{2})^{2}; \tag{4}\] here the \(\hat{\varepsilon}_{i}\) are the residuals from an initial unweighted least squares regression of \(Y_{i}\) on \(D_{i}\). We also plot in orange the asymptotic variance (equivalent to the mean squared error), i.e., the population equivalent of (3), of the \(\beta\)-estimator weighted by an AR\((1)\) working correlation for a given value of \(\rho\) (the nomenclature 'SL' in the legend is explained in Section 1.3). In Setting (a), we see that optimising either of these objectives can result in suboptimal weights in terms of the resulting mean squared error, and any choice of \(\rho\in[0.1,0.8]\) would result in improved \(\beta\)-estimation. Setting (b) tells a similar story, but also illustrates issues that can arise due to local minima of the objective functions, which, in particular, are typically not guaranteed to be convex.

Figure 1: The (scaled) objective functions of the parameter \(\rho\) in the AR\((1)\) working model for quasi pseudo (Gaussian) maximum likelihood (ML), GEE1 (GEE) and the corresponding asymptotic MSE relative to the minimum asymptotic MSE, in two settings with ground truth given by ARMA\((2,1)\) models as in Example 1; coloured intervals at the bottom indicate ranges of \(\rho\) where the (global) minimisers corresponding to ML and GEE would be outperformed in terms of MSE.
Attempting to optimise the GEE objective by initialising at \(\rho=0\) results in gradient descent converging to the highly suboptimal local optimum on the left, as the derivative of the objective at \(\rho=0\) is slightly positive (see also Table 4 in Appendix A.1). The resulting asymptotic variance of this final \(\hat{\beta}\) is substantially worse than even that of the unweighted choice corresponding to \(\rho=0\). **Example 2** (Conditional variance misspecification).: Consider an instance of model (1) with \(n_{i}\equiv 1\), so the data are ungrouped, and \(d=1\). Suppose the distribution of the data is such that the true conditional variance \(\operatorname{Var}(\varepsilon_{i}\,|\,D_{i},X_{i}=x)=\operatorname{Var}(\varepsilon_{i}\,|\,X_{i}=x)=:\sigma_{0}^{2}(x)\) with \(\sigma_{0}(x)=2+\tanh(\lambda(x-\mu))\); we shall consider different settings for the pair of parameters \((\lambda,\mu)\). Consider using a misspecified class of functions of the form \[\sigma(x;\eta)=1+2\mathbbm{1}_{[\eta,\infty)}(x), \tag{5}\] where \(\eta\in\mathbb{R}\), so the smooth curve of the \(\tanh\) function is to be approximated by a step function. Figure 2 plots the relative asymptotic mean squared errors of the weighted least squares estimates of \(\beta\), given by the first component of (2), for quasi pseudo maximum likelihood (ML) and GEE1-based approaches (GEE) for constructing weights \(\hat{W}_{i}=1/\{\sigma(X_{i};\hat{\eta})\}^{2}\) based on estimates \(\hat{\eta}\) of \(\eta\). The former involves treating the errors as if they were normally distributed with standard deviation \(\sigma(x;\eta)\) for some \(\eta\), while the latter estimates \(\eta\) by the minimiser of the sum of the squared differences between the squared residuals from an initial least squares regression of \(Y_{i}\) on \((D_{i},X_{i})\), and \((\sigma(X_{i};\eta))^{2}\); see also Robinson (1987); Carroll (1982); Tsiatis (2006); You et al. (2007) for examples of estimation of the conditional variance through a similar least squares approach for improving \(\beta\)-estimation in (partially) linear models. We compare these strategies to a naive unweighted estimator, that is (2) with \(\hat{W}_{i}\) constant, which makes no attempt to take advantage of the heteroscedasticity in the errors to improve estimation. Note that such weights are permitted in the model class (5) used by ML and GEE in this example by taking \(\eta\) large, so (5) for some \(\eta\) necessarily gives a better approximation to the ground truth compared to the unweighted approach.

Figure 2: Asymptotic MSEs of \(\beta\)-estimators relative to an unweighted estimator, using weights in the misspecified model class (5) estimated by each of ML and GEE in the settings of Example 2, parametrised by \((\lambda,\mu)\).

A first interesting observation is the quite different behaviour of the ML and GEE approaches here, with neither appearing to be uniformly preferable to the other across all parameter settings. Perhaps more surprising however is the fact that for certain values of \((\lambda,\mu)\), the performances of these more sophisticated approaches lead to an inflation of the variance over an unweighted estimator (of up to almost \(80\%\)). This worrying behaviour can obfuscate model selection via AIC or BIC, as even at the population level they can favour models that result in poorer estimation of the parameter of interest \(\beta\).
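This inflation can be reproduced by direct simulation. The following rough Monte Carlo sketch mimics Example 2; the data-generating details and parameter values here are our own illustrative choices rather than the paper's exact settings, which are given in its Appendix A.2:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma0(x, lam=5.0, mu=1.0):
    return 2.0 + np.tanh(lam * (x - mu))      # true conditional s.d.

def mc_variance(eta=None, reps=2000, n=500):
    """Monte Carlo variance of the weighted LS beta-estimator, with
    weights 1/sigma(x; eta)^2 from the step class (5); eta=None gives
    the unweighted estimator."""
    betas = []
    for _ in range(reps):
        X = rng.normal(size=n)
        D = X + rng.normal(size=n)            # D correlated with X
        Y = D + X + sigma0(X) * rng.normal(size=n)
        w = np.ones(n) if eta is None else (1.0 + 2.0 * (X >= eta)) ** -2.0
        Z = np.column_stack([np.ones(n), D, X])
        WZ = Z * w[:, None]
        betas.append(np.linalg.solve(Z.T @ WZ, WZ.T @ Y)[1])
    return np.var(betas)

print(mc_variance())          # unweighted
print(mc_variance(eta=1.0))   # step weights: may beat or lose to unweighted
```

Varying `eta`, `lam` and `mu` traces out how a given weight within (5) can either improve on or inflate the variance of the unweighted estimator.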
To resolve this apparent paradox, notice that although the model classes used by ML and GEE here are richer, the optimal 'projections' of the nuisance function \(\sigma_{0}\) (and corresponding optimal weight function \(W_{0}:=\sigma_{0}^{-2}\)) relating to their respective losses do not necessarily coincide with the optimal projection in the sense of the mean squared error of \(\hat{\beta}\); in fact in general there is no reason for them to do so, as illustrated schematically in Figure 3 (see also Proposition 1 and Theorem 2 for a formalisation of this phenomenon). We return to these issues in Section 1.3, but first turn our attention to misspecification of the conditional mean \(\mathbb{E}(Y_{i}\,|\,X_{i},D_{i})\). ### Conditional mean misspecification When the conditional expectation of the response given \((D_{i},X_{i})\) is not necessarily expected to be linear, a popular model to consider is the partially linear model \[\begin{split} Y_{i}&=\beta D_{i}+g_{0}(X_{i})+ \varepsilon_{i},\\ D_{i}&=m_{0}(X_{i})+\xi_{i}.\end{split} \tag{6}\] Here \((Y_{i},D_{i},X_{i},\varepsilon_{i})\) are as in (1); \(g_{0}\) and \(m_{0}\) are potentially nonlinear row-wise functions, that is e.g. \(g_{0}:\mathbb{R}^{d}\to\mathbb{R}\), and writing \(X_{ij}\) for the \(j\)th row of matrix \(X_{i}\), with a slight abuse of notation \(g_{0}:\mathbb{R}^{n_{i}\times d}\to\mathbb{R}^{n_{i}}\) is then defined via \((g_{0}(X_{i}))_{j}:=g_{0}(X_{ij})\); and error \(\xi_{i}\in\mathbb{R}^{n_{i}}\) in the \(D_{i}\) on \(X_{i}\) regression satisfies \(\mathbb{E}\left[\xi_{i}\,|\,X_{i}\right]=0\). Note that the model entails the conditional mean independence assumptions that \[\mathbb{E}(Y_{ij}\,|\,D_{i},X_{i})=\mathbb{E}(Y_{ij}\,|\,D_{ij},X_{ij})\qquad \text{and}\qquad\mathbb{E}(D_{ij}\,|\,X_{i})=\mathbb{E}(D_{ij}\,|\,X_{ij}).\] Nevertheless, the model is flexible enough to well-approximate a wide variety of data generating processes, yet still permits easy interpretation of the contribution of \(D_{i}\) to the response. The second equation serves to model confounding due to \(X_{i}\). In the ungrouped setting, i.e. where \(n_{i}\equiv 1\), estimating \(m_{0}\) in addition to \(g_{0}\) forms a key part of the double / debiased machine learning (DML) framework (Chernozhukov et al., 2018) for inference about \(\beta\), which in recent years has emerged as the dominant approach for estimation in partially linear models. The popularity of this paradigm is due to the fact that it accommodates the use of arbitrary machine learning methods for estimating the nuisance functions \(m_{0}\) and \(g_{0}\), and requires only a relatively slow rate of \(1/N\) for the product of the corresponding mean squared errors in order to yield estimates of \(\beta\) that converge at the parametric \(1/\sqrt{N}\) rate. Emmenegger and Buhlmann (2023) recently extended this approach to the grouped data setting with \(n_{i}>1\), assuming a parametric form of the covariance \(\text{Cov}(\varepsilon_{i}\,|\,D_{i},X_{i})\) governed by a random effects model. To estimate \(\beta\), they considered regressing each of \(Y_{i}\) and \(D_{i}\) on \(X_{i}\) using some independent auxiliary data and, with the resulting estimated regression functions, formed corresponding residuals \(\hat{R}_{i}^{Y}\) and \(\hat{R}_{i}^{D}\) to give \[\hat{\beta}=\left(\sum_{i}\hat{R}_{i}^{D^{T}}\hat{W}_{i}\hat{R}_{i}^{D}\right) ^{-1}\left(\sum_{i}\hat{R}_{i}^{D^{T}}\hat{W}_{i}\hat{R}_{i}^{Y}\right). 
\tag{7}\] Here the weight matrices \(\hat{W}_{i}\) are formed as the inverse conditional covariances estimated using (restricted) maximum likelihood. In practice, sample-splitting and cross-fitting are used in place of auxiliary data, permitting semiparametric efficient estimates, provided the model is well-specified. However, the flexibility in modelling the conditional mean afforded by DML comes with implications for potential misspecification of the conditional covariance. One key requirement of the approach above is that, in addition to assuming a parametric form for the conditional covariance, it should also not depend on \(D_{i}\), i.e. we must have \(\text{Cov}(\varepsilon_{i}\,|\,D_{i},X_{i})=\text{Cov}(\varepsilon_{i}\,|\,X_{i})\). This restriction is in fact a fundamental limitation of approaches based on DML. It comes as a consequence of requiring Neyman orthogonality, a certain first-order insensitivity to plugging in potentially biased machine learning estimators. Writing \(l_{0}(X_{i}):=\mathbb{E}(Y_{i}\,|\,X_{i})\) and \(\hat{l}\) for the corresponding regression estimate, in our case, this entails \[\sqrt{I}\,\mathbb{E}\left[\left(\hat{R}_{i}^{Y}-\beta\hat{R}_{i}^{D}\right)^{T}\hat{W}_{i}\xi_{i}\right]=\sqrt{I}\,\mathbb{E}\left[\left(l_{0}(X_{i})-\hat{l}(X_{i})\right)^{T}\mathbb{E}\left[\hat{W}_{i}\xi_{i}\,\big{|}\,X_{i}\right]\right] \tag{8}\] being approximately zero, which may not hold unless \(\hat{W}_{i}\) is a function of \(X_{i}\) alone. Thus misspecification of the conditional covariance, and the worrying consequences this may bring, deserve even greater attention when modelling the conditional mean in a flexible fashion through a partially linear model.
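For reference, once residuals and weights are in hand, (7) amounts to a weighted regression of outcome residuals on treatment residuals; a minimal sketch (ours, with illustrative names):

```python
import numpy as np

def weighted_residual_estimator(RD, RY, W):
    """Estimator (7): RD and RY are lists of residual vectors
    R_i^D = D_i - m_hat(X_i) and R_i^Y = Y_i - l_hat(X_i);
    W is a list of weight matrices W_i = W_hat(X_i)."""
    num = sum(rd @ Wi @ ry for rd, ry, Wi in zip(RD, RY, W))
    den = sum(rd @ Wi @ rd for rd, Wi in zip(RD, W))
    return num / den
```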
### An overview of our contributions

To address the difficulties resulting from (inevitable) misspecification of the conditional covariance, we introduce a new approach for determining weight matrices in weighted least squares estimators (2) or the DML estimator in a partially linear model setting (7). Our proposal is to minimise a sandwich estimate of the variance of \(\hat{\beta}\) (i.e. (3) or the equivalent for the DML estimate) over a given parametrisation of weight matrices, thereby directly targeting the primary objective of interest: the estimation performance of \(\hat{\beta}\). We thus treat this sandwich estimate of the variance as a loss function--the 'sandwich loss' (SL)--by which we determine the weights. The asymptotic variance plotted in Figure 1 is precisely the population version of this sandwich loss, and minimising this will, by the very definition of the loss, deliver an estimator of \(\beta\) of minimal asymptotic variance among those considered. Returning to Example 2, Table 1 demonstrates that although ML and GEE are to be preferred in terms of estimating the true variance function \(\sigma_{0}^{2}\), they are worse when estimating \(\beta\) compared to our choice of weights tailored specifically for this purpose. While in these simple examples the performances of the different methods are noticeably, but not radically, different, we show later in Theorem 2 that there exist data generating distributions for which, for a large class of misspecified working covariance models, the ratio of the variances of the \(\beta\)-estimators between either GEE or ML and our optimal weighting scheme can be arbitrarily large.

\begin{table} \begin{tabular}{c c c} \hline \hline Objective function & Asymptotic MSE of \(\hat{\beta}\) & Asymptotic integrated MSE of \(\hat{\sigma}\) \\ \hline GEE & 4.47 & **0.63** \\ ML & 6.82 & 0.87 \\ Sandwich loss & **4.10** & 1.10 \\ \hline \hline \end{tabular} \end{table} Table 1: Quality of the estimated weights from GEE, ML and minimising the sandwich loss in Example 2, in terms of the derived \(\hat{\beta}\) and the estimate \(\hat{\sigma}\) of \(\sigma_{0}\) given by the square root of the inverse of the weights.

One key message of our paper therefore is that particularly when there is a high risk of misspecification of the conditional covariance, the sandwich loss may be preferable over existing criteria when inference on the (partially) linear model parameter \(\beta\) is of primary interest. In fact, even in the case that the conditional covariance is well-specified, there is a danger that ML or GEE approaches could converge to weights that are only locally optimal for their respective losses. Since it is only the global optima of these losses that correspond to weights with favourable properties for estimation of \(\beta\), there is no guarantee that the weights obtained are even locally optimal in terms of the resulting asymptotic variances. In this sense GEE and ML approaches may be more vulnerable to the consequences of local optima in their objective functions, compared to the sandwich loss; see Section 5.1.1. A second main aim of our work is to introduce a new modelling strategy for working conditional covariances that can harness the power and flexibility of machine learning methods, similarly to how DML uses machine learning to accurately estimate nuisance functions. A challenge however is that standard regression methods cannot be directly deployed to construct weight matrices. In order to make use of these, we first decompose the inverse of the weights into a working conditional variance of each entry of \(\varepsilon_{i}\), and a working correlation that we model parametrically. We introduce a new gradient boosting approach for estimating these two components through minimising our sandwich loss; this takes as input a user-chosen regression method that is used within the boosting procedure to estimate the conditional variances. We demonstrate the favourable performance of our resulting 'sandwich boosting' method in a variety of numerical experiments. In Section 2 we introduce the sandwich loss and compare its population version to ML and GEE-based equivalents. In Section 2.3 we verify that despite the unusual form of the sandwich loss, under relatively mild conditions, we can expect a minimiser of the sample version to converge to its population counterpart (Theorem 3). We introduce our general cross-fitted weighted estimation approach in Section 3.1 before describing our proposed sandwich boosting scheme in Section 3.2. Section 4 presents theory showing that our resulting estimator for the partially linear model coefficient \(\beta\) is asymptotically Gaussian under relatively mild conditions on the predictive ability of nuisance function estimators, permitting the construction of honest confidence intervals for \(\beta\). In contrast to existing results in this context, our theory permits the group sizes \(n_{i}\) to grow with the number of groups \(I\), and importantly accommodates misspecification of the conditional covariance.
We present the results of a variety of numerical experiments on simulated and real-world data in Section 5 that further explore the themes hinted at in Examples 1 and 2, and demonstrate the effectiveness of our sandwich boosting approach. We conclude with a discussion in Section 6 outlining avenues for further work, including a sketch of an extension to estimating a coefficient function in a version of (6) with \(\beta\) replaced by a function \(\beta(X_{i})\) that is a linear combination of known basis functions, using a generalisation of the sandwich loss. The appendix contains the proofs of all results presented in the main text, additional theoretical results, further details on the examples and numerical experiments, and a detailed computational analysis of the sandwich boosting methodology. Our sandwich boosting methodology is implemented in the R package sandwich.boost\({}^{1}\). Below we briefly review some related work not necessarily covered elsewhere in the introduction, and collect together some notation used throughout the paper. Footnote 1: [https://github.com/elliot-young/sandwich.boost/](https://github.com/elliot-young/sandwich.boost/)

### Other related literature

As indicated in the previous sections, our work connects to a vast literature on mixed effects models, also known as multilevel models or hierarchical models, and generalised estimating equations. Some recent developments in this area have looked at such models in high-dimensional contexts. In particular, Li et al. (2022) considers using a particular proxy conditional covariance parametrised by a single parameter for computational simplicity. Li et al. (2018) considers a flexible conditional covariance specification through selecting from high-dimensional random effects via regularising terms in the Cholesky decomposition of the covariance matrix of the random effects. Most closely related to our setup here however is the work of Emmenegger and Buhlmann (2023), who consider partially linear mixed effect models (Zeger and Diggle, 1994) in the double machine learning (DML) framework, popularised by Chernozhukov et al. (2018); see also Kennedy (2022) for a recent review of this broad topic. Earlier work considered specific nonparametric estimators for \(g_{0}\) in the (grouped) partially linear model framework; for example, Huang et al. (2007) use regression splines to estimate \(g_{0}\) and a GEE approach for estimating weights. Within the DML area, work related to the setting of the (ungrouped) partially linear model includes Vansteelandt and Dukes (2022), who propose new targets of inference and DML estimation strategies in potentially misspecified generalised partial linear models, and Emmenegger and Buhlmann (2021), who look at estimation in partially linear models with unobserved confounding in an instrumental variables setting using a DML approach and additional regularisation to reduce variance. Boosting (Schapire, 1990; Freund and Schapire, 1996), on which our sandwich boosting proposal is built, has received a lot of interest in recent years due to its success on modern datasets of interest. A long line of work (see for example Breiman (1999); Mason et al. (1999); Friedman et al. (2000), and Buhlmann and Hothorn (2007) for a review) in machine learning has resulted in the functional gradient descent perspective of boosting, which we make use of in developing our sandwich boosting proposal.
In general terms, our use of the sandwich loss involves selecting among estimators (in our case determined by weight functions) based on estimates of their quality (in our case, their MSEs). In this sense it is related to a number of statistical approaches, including, for example, cross-validation. Of particular note is the recent work of Park and Kang (2021), who look at average treatment effect estimation in multilevel studies and pick from among a family of estimators based on augmented inverse propensity weighting (Robins et al., 1994; Robins and Rotnitzky, 1995), one minimising an estimate of their variance.

### Notation

We denote the maximum and minimum eigenvalues of a matrix \(M\in\mathbb{R}^{m\times m}\) by \(\Lambda_{\max}(M)\) and \(\Lambda_{\min}(M)\) respectively. Let \(\Phi\) denote the cumulative distribution function of a standard Gaussian distribution. We will also use the shorthand \([I]:=\{1,\ldots,I\}\). For the uniform convergence results we will present, it will be helpful to write, for a law \(P\) governing the distribution of a random vector \(U\in\mathbb{R}^{d}\), \(\mathbb{E}_{P}U\) for its expectation and \(\mathbb{P}_{P}(U\in B):=\mathbb{E}_{P}\mathbb{1}_{B}(U)\) for any measurable \(B\subseteq\mathbb{R}^{d}\). Further, given a sequence of families of probability distributions \((\mathcal{P}_{I})_{I\in\mathbb{N}}\), and for a sequence of families of real-valued random variables \((A_{P,I})_{P\in\mathcal{P}_{I},I\in\mathbb{N}}\) (which we note are permitted to depend on \(P\in\mathcal{P}_{I}\)), we write \(A_{P,I}=o_{\mathcal{P}}(1)\) if \(\lim_{I\to\infty}\sup_{P\in\mathcal{P}_{I}}\mathbb{P}_{P}\left(\left|A_{P,I}\right|>\epsilon\right)=0\) for all \(\epsilon>0\), \(A_{P,I}=o_{\mathcal{P}}(g(I))\) for a given function \(g:(0,\infty)\to(0,\infty)\) if \(g(I)A_{P,I}=o_{\mathcal{P}}(1)\), and \(A_{P,I}=O_{\mathcal{P}}(1)\) if for any \(\epsilon>0\) there exist \(M_{\epsilon},I_{\epsilon}>0\) such that \(\sup_{I\geq I_{\epsilon}}\sup_{P\in\mathcal{P}_{I}}\mathbb{P}_{P}\left(\left|A_{P,I}\right|>M_{\epsilon}\right)<\epsilon\).

## 2 The sandwich loss

In this section, we first outline a general weighted least squares framework for estimating \(\beta\), within which we formally introduce the notion of the sandwich loss. In Section 2.2 we study properties of the sandwich loss at the population level, compared to the ML and GEE-based approaches described in Section 1.1. In Section 2.3 we then study the behaviour of the sample version of the sandwich loss and show that under mild conditions, a minimiser will converge in probability to the population minimiser. For the remainder of this paper, we will work in the setting of the partially linear model (6), which includes, as a special case, the linear model (1).

### Weighted estimation

Here we outline a general strategy for weighted estimation of \(\beta\), with which we will introduce our proposed sandwich loss. For simplicity, we describe our approach in terms of an auxiliary dataset, independent of our main data. In practice however, we use sample splitting and cross-fitting to construct our estimator, as described in Section 3.1. As in the approach of Emmenegger and Buhlmann (2023), we begin by regressing each of \(Y\) and \(D\) onto \(X\) using our main data to give estimates \(\hat{l}\) and \(\hat{m}\) of the row-wise conditional expectations \(l_{0}(X_{i})=\mathbb{E}(Y_{i}\,|\,X_{i})\) and \(m_{0}(X_{i})=\mathbb{E}(D_{i}\,|\,X_{i})\).
With these we form respective vectors of residuals \(\tilde{R}^{Y}\) and \(\tilde{R}^{D}\), and find an initial estimate \(\tilde{\beta}\) of \(\beta\) using (7) with the \(\hat{W}_{i}\) set to identity matrices: \[\tilde{\beta}=\left(\sum_{i}\tilde{R}_{i}^{D^{T}}\tilde{R}_{i}^{D}\right)^{-1}\left(\sum_{i}\tilde{R}_{i}^{D^{T}}\tilde{R}_{i}^{Y}\right).\] We then form 'estimates' of the errors \(\xi_{i}\) and \(\varepsilon_{i}\) given by \(\tilde{\xi}_{i}:=\tilde{R}_{i}^{D}\) and \(\tilde{\varepsilon}_{i}:=\tilde{R}_{i}^{Y}-\tilde{\beta}\tilde{R}_{i}^{D}\) to obtain a sandwich estimate of (\(N\) times) the variance of a \(\beta\)-estimator utilising given weight matrices \(W(X_{i})\): \[\hat{L}_{\mathrm{SL}}(W):=\left(\frac{1}{N}\sum_{i}\tilde{\xi}_{i}^{T}W(X_{i})\tilde{\xi}_{i}\right)^{-2}\left(\frac{1}{N}\sum_{i}\left(\tilde{\varepsilon}_{i}^{T}W(X_{i})\tilde{\xi}_{i}\right)^{2}\right). \tag{9}\] In the above, \(W\) is a function that, given \(X\in\mathbb{R}^{n\times d}\) for any \(n\in\mathbb{N}\), outputs a matrix \(W(X)\in\mathbb{R}^{n\times n}\). The sandwich loss \(\hat{L}_{\mathrm{SL}}\) views the variance estimate as a function of \(W\), and specifically as a measure of the quality of the weights \(W\), somewhat analogously to how the likelihood views the density of the data as a function of parameters to be determined. Given a class \(\mathcal{W}\) of functions \(W\), we then propose to find an (approximate) minimiser \(\hat{W}\) of \(\hat{L}_{\mathrm{SL}}\) over \(\mathcal{W}\) (see Section 3.2 for our sandwich boosting approach for carrying this out). While the sandwich loss is thus a rather trivial (re-)definition, as we try to demonstrate in the present work, this shift in perspective from variance estimate to a loss function to be minimised can lead to non-trivial improvements in terms of estimating \(\beta\). Note that the sandwich loss is unaffected by any positive scaling of \(W\): this is to be expected since the resulting \(\hat{\beta}\) is equally invariant. On the auxiliary data, we then set \(\hat{W}_{i}:=\hat{W}(X_{i})\), and form a weighted \(\beta\)-estimate of the form (7). Our \(\hat{W}\) should then deliver a low variance estimate of \(\beta\), since this is precisely the way in which it was constructed. Indeed, we show in Section 4 that the variance estimate (9) consistently estimates the true variance (times \(N\)). Note that although we have introduced the sandwich loss via a specific construction of the error estimates \(\tilde{\varepsilon}_{i}\) and \(\tilde{\xi}_{i}\), it is applicable more broadly to other such estimates. For example, in the simpler linear model setting, applying the Woodbury matrix identity shows that the sandwich variance estimate (3) also takes the form given in (9), for certain error estimates formed through weighted least squares regressions. In the following section, we compare the sandwich loss to the ML and GEE-based approaches outlined in Section 1.1 theoretically, by studying population versions of these.
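The loss (9) is also cheap to evaluate; a minimal numpy sketch (ours, for illustration):

```python
import numpy as np

def sandwich_loss(weights, xi_tilde, eps_tilde):
    """Empirical sandwich loss (9) for candidate weight matrices W(X_i),
    given lists of error estimates xi_tilde_i and eps_tilde_i."""
    N = sum(len(xi) for xi in xi_tilde)
    b = sum(xi @ W @ xi for xi, W in zip(xi_tilde, weights)) / N
    v = sum((e @ W @ xi) ** 2
            for e, xi, W in zip(eps_tilde, xi_tilde, weights)) / N
    return v / b ** 2
```

The invariance to positive rescaling of the weights is visible directly here: replacing each \(W(X_{i})\) by \(cW(X_{i})\) scales both \(b^{2}\) and \(v\) by \(c^{2}\), leaving the ratio unchanged.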
### Population level analysis

The examples in Section 1.1 hint at potential issues with the ML and GEE-based approaches, which, unlike the sandwich loss, are not explicitly geared towards minimising the variance of the resulting \(\hat{\beta}\): whilst when the conditional covariance is well-specified their goals coincide with those of the sandwich loss, in the case of misspecification this need not be the case. Recall that since the weight matrices are restricted to be functions of \(X_{i}\) alone due to the requirement of Neyman orthogonality, it is plausible that some form of misspecification is unavoidable. To study the properties of the approaches under potential misspecification, we work for simplicity in a setting where we observe iid instances of the partially linear model (6) with fixed finite group size \(n_{i}\equiv n\), and consider population versions of the respective losses: \[L_{\mathrm{ML}}(W):=\mathbb{E}\left[-\log\det W(X)+\varepsilon^{T}W(X)\varepsilon\right],\] \[L_{\mathrm{GEE}}(W):=\mathbb{E}\left[\left\|W(X)^{-1}-\varepsilon\varepsilon^{T}\right\|^{2}\right],\] \[L_{\mathrm{SL}}(W):=\left(\mathbb{E}\left[\xi^{T}W(X)\xi\right]\right)^{-2}\left(\mathbb{E}\left[\left(\xi^{T}W(X)\varepsilon\right)^{2}\right]\right).\] Here \(\varepsilon,\xi\) and \(X\) are to be understood as generic versions of their counterparts with subscript \(i\), satisfying \(\mathbb{E}\|\varepsilon\|_{2}^{2}<\infty\), \(\mathbb{E}\|\xi\|_{2}^{2}<\infty\) and \(\Lambda_{\min}\mathbb{E}(\varepsilon\varepsilon^{T}\,|\,X)>\epsilon\) almost surely for some \(\epsilon>0\). The weight function \(W\) is in \[\mathcal{W}:=\{W:\mathbb{R}^{n\times d}\to\mathbb{R}^{n\times n}:\delta\leq\Lambda_{\min}(W(X))\leq\Lambda_{\max}(W(X))<\infty\} \tag{10}\] where \(0<\delta\leq\epsilon\) is some arbitrarily small constant. Note that the GEE loss \(L_{\text{GEE}}\) is defined here with respect to an arbitrary matrix norm \(\|\cdot\|\) derived from an inner product, such as the Frobenius norm. Let us write \(V_{\text{ML/GEE}}:=L_{\text{SL}}(\mathbb{E}(\varepsilon\varepsilon^{T}\,|\,X)^{-1})\) and \(V_{\text{SL}}\) for the infimum of \(L_{\text{SL}}\) over \(\mathcal{W}\). Proposition 1 below shows that \(V_{\text{ML/GEE}}\) is the asymptotic variance associated with the optimal weights for both the ML and GEE losses, and gives a condition under which this coincides with \(V_{\text{SL}}\). **Proposition 1**.: 1. \(L_{\text{ML}}\) _and_ \(L_{\text{GEE}}\) _are both minimised over_ \(\mathcal{W}\) _(_10_) by_ \(\mathbb{E}(\varepsilon\varepsilon^{T}\,|\,X)^{-1}\)_._ 2. _The asymptotic variance_ \(V_{\text{SL}}\) _of the sandwich loss satisfies_ \(V_{\text{SL}}\leq V_{\text{ML/GEE}}\)_, with equality if and only if_ \[\mathbb{E}\left[\left(\varepsilon\varepsilon^{T}\right)\left(\mathbb{E}\left[\varepsilon\varepsilon^{T}\,\middle|\,X\right]\right)^{-1}\left(\xi\xi^{T}\right)\,\middle|\,X\right]\propto\mathbb{E}\left[\xi\xi^{T}\,\middle|\,X\right].\] (11) Note that the condition (11) holds in the instance that \(\text{Cov}(\varepsilon\,|\,D,X)=\text{Cov}(\varepsilon\,|\,X)\); but given that the variable \(D\) is considered here to be important enough for its associated parameter to be the target of our inference, it is not inconceivable that the errors depend on it after conditioning on \(X\). In such settings, it is possible for the ratio \(V_{\text{ML/GEE}}/V_{\text{SL}}\) to be arbitrarily large, as Theorem 2 below shows. **Theorem 2**.: _Suppose \(X\) is not almost surely constant.
Then for all \(M\geq 1\), and for all pairs \(\Sigma,\Omega:\mathbb{R}^{n\times d}\to\mathbb{R}^{n\times n}\) of positive definite matrices that are functions of \(X\), there exists a law on \((\varepsilon,\xi)\) satisfying the conditions of the model (6) with \(\mathbb{E}\left[\varepsilon\varepsilon^{T}\,|\,X\right]=\Sigma(X)\), \(\mathbb{E}\left[\xi\xi^{T}\,|\,X\right]=\Omega(X)\) and_ \[\frac{V_{\text{ML/GEE}}}{V_{\text{SL}}}\geq M.\] While the discrepancy between \(V_{\text{ML/GEE}}\) and \(V_{\text{SL}}\) is not always expected to be very large, it is nevertheless potentially a cause for concern that this can happen even when \(\Sigma\) and \(\Omega\) are identity matrices, for example.

### Sample level considerations

The previous section illustrated some of the advantages of the sandwich loss at the population level. The sandwich loss \(\hat{L}_{\text{SL}}\) (9) we propose however is unusual in the sense that it is not composed of a sum of independent terms typical of the objective functions of M-estimators. A further complication is that \(\hat{L}_{\text{SL}}\) involves estimates of the errors \(\xi_{i}\) and \(\varepsilon_{i}\), rather than these errors themselves. The classical theory of M-estimation (van der Vaart, 1998, Chap. 5) is therefore not immediately applicable here, and it is not clear whether the useful advantages of the population level sandwich loss \(L_{\text{SL}}\) transfer over to its empirical counterpart. The result below however shows that under relatively mild conditions, minimisation of \(\hat{L}_{\text{SL}}\) over a parametric class yields convergence in probability to the minimiser of \(L_{\text{SL}}\). We continue to work in the setup of the previous section, where our data consist of \(I\) iid groups of finite size \(n\) following the partially linear model (6). **Theorem 3**.: _Let \(\Psi\subset\mathbb{R}^{d}\) be some compact set and_ \[\mathcal{W}:=\left\{W(\psi):\mathbb{R}^{n\times d}\to\mathbb{R}^{n\times n}\text{ with }\sup_{x\in\mathbb{R}^{n\times d}}\Lambda_{\max}(W(\psi)(x))=1\text{ for }\psi\in\Psi\right\}.\] _Suppose \(\psi^{*}\in\Psi\) is such that for all \(\epsilon>0\), \(\inf_{\psi\in\Psi:\|\psi-\psi^{*}\|\geq\epsilon}L_{\text{SL}}(\psi)>L_{\text{SL}}(\psi^{*})\). Assume the regularity conditions set out in Appendix B.3, and additionally that for estimates \((\tilde{\xi}_{i},\tilde{\varepsilon}_{i})\) of the error terms \((\xi_{i},\varepsilon_{i})\) either_ 1. \(\mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{I}\|\tilde{\xi}_{i}-\xi_{i}\|_{2}^{4}\right]\vee\mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{I}\|\tilde{\varepsilon}_{i}-\varepsilon_{i}\|_{2}^{4}\right]=o(1)\)_, or_ 2. \(\mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{I}\|\tilde{\xi}_{i}-\xi_{i}\|_{2}^{2}\right]\vee\mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{I}\|\tilde{\varepsilon}_{i}-\varepsilon_{i}\|_{2}^{2}\right]=o(N^{-\frac{1}{2}})\)_._ _Then any sequence of approximate minimisers \(\hat{\psi}_{I}\) with \(\hat{L}_{\mathrm{SL}}(\hat{\psi}_{I})\leq\hat{L}_{\mathrm{SL}}(\psi^{*})+o_{P}(1)\) satisfies_ \[\hat{\psi}_{I}=\psi^{*}+o_{P}(1).\]

## 3 Methodology

In this section we present our sandwich boosting weighted regression procedure. We first describe the basic outline of our approach for generic weight matrices employing cross-fitting in Section 3.1, and then in Section 3.2 present our boosting strategy to approximately optimise the sandwich loss within a flexible class of working covariance models.
### Cross-fitting

In Section 2.1, we outlined a simplified approach for estimating \(\beta\) in the partially linear model (6), involving first obtaining 'estimates' \(\tilde{\xi}_{i}\) and \(\tilde{\varepsilon}_{i}\) of the errors \(\xi_{i}\) and \(\varepsilon_{i}\) with which to determine a weight function \(\hat{W}\) through minimising the sandwich loss (9). The second stage involved forming, on an independent dataset, an estimate \(\hat{\beta}\) through (7), so in particular, conditioning on the initial dataset, \(\hat{W}\) would be fixed. In practice, only a single dataset would be available, and we construct the two independent datasets through sample splitting, employing a \(K\)-fold cross-fitting scheme to recover the loss in efficiency from using only part of the data to construct the final estimator. Cross-fitting is a popular approach in semiparametric problems for ensuring the independence of nuisance parameter estimates from the data on which the final estimate of the target parameter is formed. This independence means that certain empirical process terms can be controlled straightforwardly even when arbitrary nuisance parameter estimators are used. Chernozhukov et al. (2018) and Emmenegger and Buhlmann (2023) use such cross-fitting in the regular and grouped partially linear models respectively, where the nuisance function estimates in question are \(\hat{l}\) and \(\hat{m}\). In our case, cross-fitting additionally serves to guarantee independence of the weight function estimates. Algorithm 1 details our method, with observation groups indexed by \(\mathcal{I}_{k}^{c}\) playing the roles of the initial datasets for \(k=1,\ldots,K\), and those indexed by \(\mathcal{I}_{k}\) involved in the construction of the final estimator \(\hat{\beta}\). Note that rather than forming separate estimates of \(\beta\) corresponding to each \(\mathcal{I}_{k}\), we instead form sets of residuals \(\hat{R}_{i}^{D}\) and \(\hat{R}_{i}^{Y}\), finally forming \(\hat{\beta}\) using these via (7), an approach known as DML2 (Chernozhukov et al., 2018). As well as obtaining an estimate \(\hat{\beta}\), we also calculate a sandwich estimate \(\hat{V}\) of the variance, with which an approximate \((1-\alpha)\)-level confidence interval \(\hat{C}(\alpha)\) may be constructed.
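Schematically, the fold logic of Algorithm 1 can be sketched as follows. This is our paraphrase, not the sandwich.boost implementation; the helpers `fit_reg` and `fit_weights` are invented stand-ins for the user-chosen regression method and for the weight estimation of Section 3.2:

```python
import numpy as np

def crossfit_beta(groups, K, fit_reg, fit_weights, seed=0):
    """K-fold cross-fitted estimate of beta in the spirit of Algorithm 1.

    groups      : list of (Y_i, D_i, X_i) tuples
    fit_reg     : fit_reg(X_list, T_list) -> prediction function
    fit_weights : fit_weights(train_groups, l_hat, m_hat) -> (X_i -> W_i)
    """
    I = len(groups)
    folds = np.array_split(np.random.default_rng(seed).permutation(I), K)
    RD, RY, W = [None] * I, [None] * I, [None] * I
    for fold in folds:                       # fold plays the role of I_k
        train = [i for i in range(I) if i not in fold]     # I_k^c
        l_hat = fit_reg([groups[i][2] for i in train],
                        [groups[i][0] for i in train])     # Y on X
        m_hat = fit_reg([groups[i][2] for i in train],
                        [groups[i][1] for i in train])     # D on X
        W_hat = fit_weights([groups[i] for i in train], l_hat, m_hat)
        for i in fold:                       # out-of-fold residuals, weights
            Y, D, X = groups[i]
            RY[i], RD[i], W[i] = Y - l_hat(X), D - m_hat(X), W_hat(X)
    # DML2: pool all residuals into a single estimate of the form (7)
    num = sum(rd @ Wi @ ry for rd, ry, Wi in zip(RD, RY, W))
    den = sum(rd @ Wi @ rd for rd, Wi in zip(RD, W))
    return num / den
```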
### Sandwich boosting

Algorithm 1 introduced a generic approach for incorporating weight functions learnt from the data into an estimator for \(\beta\) via approximately minimising the sandwich loss over some class of functions \(\mathcal{W}\). We now introduce an approach for performing this approximate minimisation over a class \(\mathcal{W}\) defined implicitly through a user-chosen regression method. To introduce our approach, it is helpful to consider a class of proxy conditional covariances parametrised as \[D_{\sigma}(\cdot)\,C_{\theta}(\cdot)\,D_{\sigma}(\cdot), \tag{12}\] where, given an input \(X_{i}\in\mathbb{R}^{n_{i}\times d}\), the functions \(D_{\sigma}\) and \(C_{\theta}\) output \[D_{\sigma}(X_{i}):=\operatorname{diag}(\sigma(X_{i1}),\ldots,\sigma(X_{in_{i}}))\] \[C_{\theta}(X_{i}):=\big{(}\mathbb{1}_{\{j=k\}}+\mathbb{1}_{\{j\neq k\}}\rho_{\theta}(X_{ij},X_{ik})\big{)}_{(j,k)\in[n_{i}]^{2}}.\] Here, \(\sigma:\mathbb{R}^{d}\to(0,\infty)\) and \(\rho_{\theta}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to[0,1]\) for \(\theta\in\Theta\) (where \(\Theta\) is some closed convex set) are proxy conditional standard deviation and correlation functions that are to be modelled nonparametrically and parametrically respectively. Note that the working covariances (12) have the property that the \(jk\)th entry depends only on \(X_{ij}\) and \(X_{ik}\); this need not be the case for \(\{\operatorname{Cov}(Y_{i}\,|\,X_{i})\}_{jk}\) for example, but is nevertheless a reasonable simplification. Redefining \(s\equiv 1/\sigma\), the corresponding weight class consists of functions of the form \[W(s,\theta)(\cdot)=D_{s}(\cdot)\,C_{\theta}^{-1}(\cdot)\,D_{s}(\cdot), \tag{13}\] understanding \(C_{\theta}^{-1}(X_{i}):=\{C_{\theta}(X_{i})\}^{-1}\), up to an arbitrary positive scale factor. As an example, an equicorrelated working correlation may be parametrised as \[C_{\theta}^{-1}(X_{i})=\left(\mathds{1}_{\{j=k\}}-\frac{\theta}{1+\theta n_{i}}\right)_{(j,k)\in[n_{i}]^{2}}\] for \(\theta\in[0,\infty)\), with corresponding correlation \(\rho_{\theta}\) given by \(\theta/(1+\theta)\). We also consider a version for nested group structures permitting two constant correlations, and an autoregressive form suitable for longitudinal data where \(\{C_{\theta}(X_{i})\}_{jk}=\theta^{|j-k|}\); see Appendix C. Such inverse working correlations are among the classes of weights considered in the GEE1 framework (Liang and Zeger, 1986; Zeger and Liang, 1986; Ziegler, 2011). A key difference here however is the greater flexibility afforded by learning the working inverse standard deviation function \(s\) through a boosting scheme, as we now explain. We also outline in Section 3.2.1 how our boosting scheme may be initialised at estimated weight functions derived using existing ML or GEE-based methods, for example, thereby increasing the flexibility of the functional forms considered.
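For the equicorrelated example above, the closed-form inverse means that weight matrices never need to be inverted numerically, and products with \(W(s,\theta)(X_{i})\) can be applied in \(O(n_{i})\) time; a small sketch (ours, with illustrative names):

```python
import numpy as np

def equicorr_weight(s_vals, theta):
    """W(s, theta)(X_i) = D_s C_theta^{-1} D_s using the closed form
    C_theta^{-1} = I - theta / (1 + theta * n) * J  (J the all-ones matrix).

    s_vals : array of s(X_ij) values for one group, shape (n,)
    theta  : equicorrelation parameter, theta >= 0
    """
    n = len(s_vals)
    C_inv = np.eye(n) - (theta / (1.0 + theta * n)) * np.ones((n, n))
    return np.diag(s_vals) @ C_inv @ np.diag(s_vals)

def apply_equicorr_weight(s_vals, theta, v):
    """Compute W(s, theta)(X_i) @ v in O(n) without forming the matrix."""
    sv = s_vals * v
    return s_vals * (sv - (theta / (1.0 + theta * len(v))) * sv.sum())
```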
Boosting has emerged as one of the most successful learning methods, with the XGBoost implementation (Chen and Guestrin, 2016) in particular dominating machine learning competitions such as those hosted on Kaggle (Bojer and Meldgaard, 2021). Since its introduction in the work of Schapire (1990), it has been generalised and reinterpreted as a form of functional gradient descent of an objective function \(\hat{L}\) based on the data (Friedman et al., 2000; Mason et al., 1999; Buhlmann and Yu, 2003; Buhlmann and Hothorn, 2007). For an objective function \(f\mapsto\hat{L}(f)\in\mathbb{R}\) applied to a function \(f:\mathbb{R}^{d}\to\mathbb{R}\), an individual boosting iteration involves perturbing the \(f\)-function by a step in the 'direction' of the \(f\)-score \[U^{(f)}(f)\left(x\right):=\partial_{f}\hat{L}(f)\left(x\right):=\frac{\partial}{\partial\alpha}\hat{L}\left(f+\alpha\delta_{x}\right)\Big{|}_{\alpha=0}, \tag{14}\] where \(\delta_{x}:\mathbb{R}^{d}\to\mathbb{R}\) is the indicator function at \(x\in\mathbb{R}^{d}\). Procedurally, the \(f\)-score \(U^{(f)}(f)\) is evaluated at the data points and then regressed onto the data using a user-chosen 'base learner'. Typically \(\hat{L}\) takes the form of an empirical risk, so \(\hat{L}(f)=\sum_{i}\ell(y_{i},f(x_{i}))\) for some loss function \(\ell\) and predictor-response pairs \((x_{i},y_{i})\). The corresponding \(f\)-score evaluated at the data point \(x_{i}\) then takes the simple form \(\frac{\partial}{\partial\alpha}\ell(y_{i},\alpha)|_{\alpha=f(x_{i})}\), a function of the \(i\)th observation alone. This allows for \(f\)-score calculation in linear time, as well as the possibility of parallelising computations for large data sets, as exploited by XGBoost. In our case however, where we wish to minimise the sandwich loss \((s,\theta)\mapsto\hat{L}_{\text{SL}}(W(s,\theta))\) (which recall is defined in terms of estimates \(\tilde{\varepsilon}_{i}\) and \(\tilde{\xi}_{i}\) of the errors) over weight functions parametrised by \((s,\theta)\) (13), we obtain as the \(s\)-score \[U_{\text{SL}}^{(s)}(s,\theta)(X_{ij})=-2b^{-3}\left\{\left(\tilde{\xi}_{i}^{T}A_{ij}\tilde{\xi}_{i}\right)\sum_{i^{\prime}}c_{i^{\prime}}^{2}-bc_{i}\left(\tilde{\varepsilon}_{i}^{T}A_{ij}\tilde{\xi}_{i}\right)\right\}, \tag{15}\] where \(W_{i}(s,\theta):=W(s,\theta)(X_{i})\) and \[A_{ij}:=\left(\left\{C_{\theta}^{-1}(X_{i})\right\}_{kj}s(X_{ik})\mathds{1}_{\{l=j\}}+\left\{C_{\theta}^{-1}(X_{i})\right\}_{jl}s(X_{il})\mathds{1}_{\{k=j\}}\right)_{(k,l)\in[n_{i}]^{2}},\] \[b:=\sum_{i=1}^{I}\tilde{\xi}_{i}^{T}W_{i}(s,\theta)\tilde{\xi}_{i},\qquad c_{i}:=\tilde{\varepsilon}_{i}^{T}W_{i}(s,\theta)\tilde{\xi}_{i}.\] Thus the \(s\)-score at \(X_{ij}\) is a function of all the data points. Nevertheless, it may be computed for all \(X_{ij}\) at a cost of \(O\big{(}\sum_{i}n_{i}^{3}\big{)}\). However, as we show in Appendix C, for the equicorrelated, nested and autoregressive working correlation structures, this cost is reduced to \(O(N)\) and may be parallelised similarly to the standard setting of minimising an empirical risk. The critical factors in allowing this are: (a) that computing the matrix inverse \(C_{\theta}^{-1}(X_{i})\) present in \(W_{i}(s,\theta)\), which for an arbitrary correlation \(\rho_{\theta}\) may take \(O(n_{i}^{3})\) time, has a simple closed form; and (b) that computation of the terms \(\tilde{\xi}_{i}^{T}A_{ij}\tilde{\xi}_{i}\) and \(\tilde{\varepsilon}_{i}^{T}A_{ij}\tilde{\xi}_{i}\) involving the sparse matrix \(A_{ij}\) can be arranged to be \(O(1)\) by precomputing other terms appropriately. Along with updating \(s\) by regressing the \(s\)-score above onto the \(X_{ij}\) and taking a step in the direction of the negative of this fitted regression function, we may also perform a regular projected gradient descent update for \(\theta\) using the \(\theta\)-score vector \[U_{\text{SL}}^{(\theta)}(s,\theta)=-2b^{-3}\left\{\left(\sum_{i}c_{i}^{2}\right)\left(\sum_{i}\tilde{\xi}_{i}^{T}\partial_{\theta}W_{i}(s,\theta)\tilde{\xi}_{i}\right)-b\sum_{i}c_{i}\left(\tilde{\varepsilon}_{i}^{T}\partial_{\theta}W_{i}(s,\theta)\tilde{\xi}_{i}\right)\right\}, \tag{16}\] which may be computed at no greater cost than the \(s\)-score above. With these scores, our sandwich boosting algorithm is summarised in Algorithm 2; note that \(\pi_{\Theta}\) denotes projection onto the set \(\Theta\). Recall that in our cross-fitting scheme (Algorithm 1), we envisage applying boosting to approximately minimise a version of the sandwich loss corresponding to subsets of the observation indices. As is standard in boosting, the algorithm requires a choice of initialisers (in our case \(\hat{s}_{1}\) and \(\hat{\theta}_{1}\)) and a base learner. In all of our numerical experiments, we take \(\hat{s}_{1}=1\), \(\hat{\theta}_{1}=0\) and use additive penalised cubic regression splines implemented in the R package mgcv (Wood, 2017). We select the number of boosting iterations \(m_{\text{stop}}\) by cross-validation, as recommended by Buhlmann and Hothorn (2007), though using our sandwich loss as the evaluation criterion.
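To make the update concrete, the following deliberately stripped-down sketch runs the \(s\)-update for ungrouped data (\(n_{i}\equiv 1\), so there is no correlation parameter \(\theta\) and \(W_{i}=s(X_{i})^{2}\)), where (15) reduces to a simple closed form. It is our toy illustration with a crude moving-average 'base learner', not the sandwich.boost implementation:

```python
import numpy as np

def toy_sandwich_boost(X, eps_t, xi_t, n_iter=50, lam=0.1, window=15):
    """Functional gradient descent on the sandwich loss over w = s(x)^2
    for scalar groups. X, eps_t, xi_t are 1-d arrays of covariates and
    error estimates; returns fitted weights at the data points."""
    n = len(X)
    s = np.ones(n)                          # s evaluated at the X_i
    order = np.argsort(X)
    kernel = np.ones(window) / window
    for _ in range(n_iter):
        w = s ** 2
        b = np.sum(w * xi_t ** 2)
        c = w * eps_t * xi_t
        # s-score (15) specialised to n_i = 1 (A_ij reduces to 2 s_j)
        score = -4.0 * s * (xi_t ** 2 * np.sum(c ** 2)
                            - b * c * eps_t * xi_t) / b ** 3
        # "base learner": smooth the score along x by a moving average
        fitted = np.empty(n)
        fitted[order] = np.convolve(score[order], kernel, mode="same")
        # step against the fitted score; clip to keep the weights positive
        s = np.maximum(s - lam * fitted, 1e-3)
    return s ** 2
```

A spline or tree base learner would replace the moving average in practice; the structure of one iteration (evaluate score, regress score on covariates, step) is otherwise the same as in Algorithm 2.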
Note that the algorithm is stated for fixed step sizes \(\lambda^{(\theta)}\) and \(\lambda^{(s)}\) for simplicity; in Appendix E, we describe the specific choices and variable step size schemes used in our numerical results. #### 3.2.1 Initialising from other weighting schemes The classes of weight functions that may be fitted using our computationally efficient boosting schemes with equicorrelated, autoregressive or nested correlations can be rather rich when used in conjunction with a flexible base learner. However, these classes would not encompass all those available using classical mixed effects modelling, for example. In order to further broaden the classes of weight functions that may be considered, one can start with an initial estimated weight or conditional covariance function \(\hat{\Sigma}_{\text{init}}\) estimated through GEE or ML-based approaches, and fit a weight function of the form \[\{\hat{\Sigma}_{\text{init}}(\cdot)\}^{-1/2}\,W(\cdot)\,\{\hat{\Sigma}_{\text{init}}(\cdot)\}^{-1/2},\] where \(W(\cdot)\) is of the form given by (13), using sandwich boosting. The boosting algorithm then serves to push the initial \(\hat{\Sigma}_{\text{init}}\) in a better direction for the purposes of estimating \(\beta\). This may be carried out easily by running Algorithm 2 on transformed error estimates \(\tilde{\varepsilon}_{i}\mapsto\{\hat{\Sigma}_{\text{init}}(X_{i})\}^{-1/2}\tilde{\varepsilon}_{i}\) and similarly for \(\tilde{\xi}_{i}\). In fact, one can use multiple initialisers in this way, and pick among the best sandwich-boosted versions via cross-validation with the sandwich loss as the quality criterion.

## 4 Theory

In this section we present results on asymptotic normality of the \(\beta\)-estimator of Algorithm 1, and coverage guarantees for the confidence interval construction therein. Recalling the setup of Section 3.2, we consider the case where the estimated weight functions are such that \(\{\hat{W}^{(k)}(\cdot)\}^{-1}\) takes the form (12). Let us define \(\hat{\sigma}^{(k)}\) and \(\hat{\theta}^{(k)}\) by \[D_{\hat{\sigma}^{(k)}}(\cdot)\,C_{\hat{\theta}^{(k)}}(\cdot)\,D_{\hat{\sigma}^{(k)}}(\cdot):=\{\hat{W}^{(k)}(\cdot)\}^{-1},\] and write \(\hat{\rho}^{(k)}:=\rho_{\hat{\theta}^{(k)}}\). For simplicity of the exposition, similarly to Section 2.2, here we consider the case where our data are iid copies of the group of \(n\) observations \((Y,D,X)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n\times d}\) following the partially linear model (6). Recall that \(X\) is a matrix whose rows, denoted \(X_{j}\in\mathbb{R}^{d}\), are not necessarily independent or identically distributed. We also relax the iid assumption at the group level to permit non-identically distributed groups of unequal size in Appendix D. Our results here are based in part on Emmenegger and Buhlmann (2023), but build on them in two key ways. Firstly, we permit the conditional covariance \(\text{Cov}(Y\,|\,D,X)\) to be misspecified, i.e., we allow for the (likely) possibility that the probability limit of the \(\hat{W}^{(k)}(X)\) is not some multiple of its inverse. Secondly, we consider asymptotic regimes that allow the group size \(n\) to diverge with the total number of observations \(N=nI\) at rates we will formalise later. Throughout, we assume that the number of folds \(K\) used in cross-fitting is finite.
We state our results as uniform convergence results over the sequence of classes of distributions \((\mathcal{P}_{I})_{I\in\mathbb{N}}\) such that for all \(I\) sufficiently large and for all \(P\in\mathcal{P}_{I}\), the following hold. Note that in what follows, \(\delta\), \(\mu_{\Sigma}\), \(\mu_{\Omega}\), \(\gamma\), \(\alpha\) and \(\kappa\) are to be thought of as constants, not depending on \(P\). The values of these are not relevant in the case where the group size \(n\) is finite, but play a role in the rate of growth permitted when it is diverging. Moreover \(a\lesssim b\) denotes \(a\leq cb\) for a constant \(c>0\) not depending on \(P\). We have however suppressed the dependence on \(P\) in \(l_{0}\), \(\sigma^{*}\) etc.

**Assumption A1** (Moment assumptions).: 1. There exists \(\delta>0\) such that \(\mathbb{E}_{P}\big{[}\,\|\varepsilon\|_{2}^{4+\delta}\,\big{]}^{\frac{1}{4+\delta}}\lesssim\sqrt{n}\) and \(\mathbb{E}_{P}\big{[}\,\|\xi\|_{2}^{4+\delta}\,\big{]}^{\frac{1}{4+\delta}}\lesssim\sqrt{n}\). 2. The covariance matrices \(\Sigma(D,X):=\mathbb{E}_{P}\left[\varepsilon\varepsilon^{T}\,|\,D,X\right]\) and \(\Omega(X):=\mathbb{E}_{P}\left[\xi\xi^{T}\,|\,X\right]\) satisfy \(\Lambda_{\min}(\Sigma)\gtrsim 1\) and \(\Lambda_{\min}(\Omega)\gtrsim 1\) almost surely. Further, \(\Lambda_{\max}(\Sigma)\lesssim n^{\mu_{\Sigma}}\) and \(\Lambda_{\max}(\Omega)\lesssim n^{\mu_{\Omega}}\) almost surely for some \(\mu_{\Sigma},\mu_{\Omega}\in[0,1]\).

Lower values of \(\mu_{\Sigma}\) and \(\mu_{\Omega}\) will permit faster rates of divergence of \(n\) (see Appendix D). Note that when \(\Sigma\) is close to the equal correlation working covariance of Section 3.2, we can expect \(\mu_{\Sigma}=1\). For our simplified result in Corollary 5 we set \(\mu_{\Omega}=1\).

**Assumption A2** (Accuracy of regression function estimators).: Define the maximum within-group estimation errors of the regression functions \(l_{0}\) and \(m_{0}\): \[\mathcal{R}_{l}:=\max_{j\in[n]}\mathbb{E}_{P}\left[\left(\hat{l}^{(1)}(X_{j})-l_{0}(X_{j})\right)^{2}\,\bigg{|}\,\hat{l}^{(1)}\right],\quad\mathcal{R}_{m}:=\max_{j\in[n]}\mathbb{E}_{P}\left[\left(\hat{m}^{(1)}(X_{j})-m_{0}(X_{j})\right)^{2}\,\bigg{|}\,\hat{m}^{(1)}\right].\] Then the errors of these nuisance function estimators satisfy: 1. \(\mathcal{R}_{m}\left(\mathcal{R}_{l}\vee\mathcal{R}_{m}\right)=o_{P}(N^{-1})\), 2. \(\mathcal{R}_{m}\vee\mathcal{R}_{l}=o_{P}(1)\), 3. \(\max_{j\in[n]}\mathbb{E}_{P}\left[\left|\hat{l}^{(1)}(X_{j})-l_{0}(X_{j})\right|^{4+\delta}\,\Big{|}\,\hat{l}^{(1)}\right]=O_{\mathcal{P}}(1)\) and \[\max_{j\in[n]}\mathbb{E}_{P}\left[\left|\hat{m}^{(1)}(X_{j})-m_{0}(X_{j})\right|^{4+\delta}\,\Big{|}\,\hat{m}^{(1)}\right]=O_{\mathcal{P}}(1).\]

The assumptions on the regression function estimates are relatively weak and identical to those in Emmenegger and Buhlmann (2023), with what is typically the strongest requirement (A2.1) permitting nonparametric rates of \(o_{P}(N^{-1/2})\) for each of \(\mathcal{R}_{m}\) and \(\mathcal{R}_{l}\). Faster rates than this, however, weaken the conditions on how \(n\) may diverge; see Corollary 5 below.

**Assumption A3** (Stability of weight function estimates).: Suppose there also exist deterministic functions \(\sigma^{*}:\mathbb{R}^{d}\to\mathbb{R}\) and \(\rho^{*}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to(0,\infty)\) whose estimators satisfy: 1.
\(\mathcal{R}_{\sigma}:=\max_{j\in[n]}\mathbb{E}_{P}\left[\left(c^{*}\hat{\sigma}^{(1)}(X_{j})-\sigma^{*}(X_{j})\right)^{2}\,\Big{|}\,\hat{\sigma}^{(1)}\right]=o_{\mathcal{P}}(1)\) where \(c^{*}:=\underset{c>0}{\operatorname{arginf}}\max_{j\in[n]}\mathbb{E}_{P}\left[\left(c\hat{\sigma}^{(1)}(X_{j})-\sigma^{*}(X_{j})\right)^{2}\right]\), 2. \(\mathcal{R}_{\rho}:=\max_{j\neq j^{\prime}}\mathbb{E}_{P}\left[\left(\hat{\rho}^{(1)}(X_{j},X_{j^{\prime}})-\rho^{*}(X_{j},X_{j^{\prime}})\right)^{2}\,\Big{|}\,\hat{\rho}^{(1)}\right]=o_{\mathcal{P}}(1)\). Further suppose the associated weights \(W^{*}:=W(\sigma^{*},\rho^{*})(X)\) satisfy 3. \(\Lambda_{\max}(W^{*})\lesssim 1\) and \(\Lambda_{\min}(W^{*})\gtrsim n^{-\gamma}\) for some \(\gamma\in[0,\mu_{\Sigma}]\) almost surely. 4. \(\Lambda_{\min}\Big{(}W^{*\frac{1}{2}}\Sigma W^{*\frac{1}{2}}\Big{)}\gtrsim n^{-\kappa}\) and \(\Lambda_{\max}\Big{(}W^{*\frac{1}{2}}\Sigma W^{*\frac{1}{2}}\Big{)}\lesssim n^{\alpha}\) for some \(\kappa\in[0,\gamma]\) and \(\alpha\in[0,\mu_{\Sigma}]\) almost surely.

Assumptions A3.1 and A3.2 require a probabilistic limit for our estimates of the weight function, but this need not correspond closely to the inverse of \(\Sigma\). The eigenvalue assumption A3.4 however does loosely quantify the discrepancy between these, and \(\kappa\) and \(\alpha\) impact the permitted divergence rate of \(n\). The reason for introducing \(c^{*}\) is that the estimated weights need not be on the same scale as \(\Sigma^{-1}\) (recall that the sandwich loss is invariant to positive scaling of its argument).

**Theorem 4**.: _Consider Algorithm 1. Let the sequence of distribution families \((\mathcal{P}_{I})_{I\in\mathbb{N}}\) for \((Y,D,X)\) be such that for all \(I\) sufficiently large, and for all \(P\in\mathcal{P}_{I}\), Assumptions A1, A2 and A3 are satisfied. Further, suppose that the group size \(n\) is either finite, or diverges at a rate satisfying Assumption A4 in Appendix D. Then defining_ \[V:=\left(\frac{1}{N}\sum_{i=1}^{I}\mathbb{E}_{P}\left[\xi_{i}^{T}W_{i}^{*}\xi_{i}\right]\right)^{-2}\left(\frac{1}{N}\sum_{i=1}^{I}\mathbb{E}_{P}\left[\left(\varepsilon_{i}^{T}W_{i}^{*}\xi_{i}\right)^{2}\right]\right),\] _we have that \(\hat{\beta}\) is uniformly asymptotically Gaussian_ \[\lim_{I\to\infty}\sup_{P\in\mathcal{P}_{I}}\sup_{t\in\mathbb{R}}\Big{|}\mathbb{P}_{P}\left(\sqrt{N/V}\big{(}\hat{\beta}-\beta\big{)}\leq t\right)-\Phi\left(t\right)\Big{|}=0,\] _and moreover the above holds with \(V\) replaced by \(\hat{V}\)._

The result shows in particular that \(\hat{C}(\alpha)\) constructed in Algorithm 1 is an asymptotic \((1-\alpha)\)-level honest confidence interval, under the given assumptions. Corollary 5 below specialises a version of Theorem 4 for diverging group sizes for two cases of interest where relatively simple forms of (conservative) rate requirements on \(n\) are available.

**Corollary 5**.: _Adopt the setup and notation of Theorem 4 but suppose \(\delta\geq 4\) in A1.1 and A2.3, and additionally that \(\mathcal{R}_{\rho}=O_{\mathcal{P}}(N^{-1})\). Suppose the estimated weight functions \(\hat{W}^{(k)}\) are constructed to fall within classes \(\mathcal{W}\) (see Section 3.2) corresponding to one of the following two settings:_ 1. _Equicorrelated working correlation, but where the true conditional correlation_ \(\operatorname{Corr}(Y\,|\,X,D)\) _may be arbitrary;_ 2.
_Autoregressive AR_\((1)\) _working correlation and when_ \(\mu_{\Sigma}=0\) _(see_ A1.2_)._ _Then the conclusions of Theorem 4 hold for diverging group sizes at the following rates:_ **Equicorrelated (i):** \[n=o\left(N^{\frac{1}{3+2\kappa}}\wedge\left(N\,\mathcal{R}_{m}\big{(}\mathcal{R}_{l}\vee\mathcal{R}_{m}\big{)}\right)^{-\frac{1}{\kappa}}\wedge\mathcal{R}_{\sigma}^{-\frac{1}{2(\kappa+2)}}\right),\] **Autoregressive (ii):** \[n=o\Big{(}N^{\frac{1}{3}}\wedge\mathcal{R}_{l}^{-1}\wedge\mathcal{R}_{\sigma}^{-1}\Big{)}.\]

We discuss each of the cases (i) and (ii) in turn. Case (i) places few restrictions on the true conditional covariance and so results in a more stringent requirement on the growth rate of \(n\). Recall that the middle term \(N\,\mathcal{R}_{m}\big{(}\mathcal{R}_{l}\vee\mathcal{R}_{m}\big{)}\) is required to be \(o_{\mathcal{P}}(1)\) by A2.1 and small values of the parameter \(\kappa\in[0,1]\) indicate better approximation of \(\Sigma\) in A3.4. Case (ii) enforces that \(\mu_{\Sigma}=0\): this occurs for instance when the true correlation function is bounded above by an exponentially decaying function of the separation (satisfied e.g. for ARMA processes). As such, this condition may be appropriate for longitudinal data. The rate requirement on \(n\) is relatively weak: both \(\mathcal{R}_{l}\) and \(\mathcal{R}_{\sigma}\) need only satisfy a rate requirement weaker than that on \(\mathcal{R}_{m}\) entailed by A2.1 in order for \(n\) to be permitted to grow at any rate \(o(N^{1/3})\).

## 5 Numerical experiments

In this section we explore the empirical properties of the sandwich boosting estimator on a number of simulated and real-world datasets. In all cases where covariates \(X_{i}\) are available in addition to our covariate of interest \(D_{i}\), we fit partially linear models using the approach of Algorithm 1, estimating nuisance regression functions \(m_{0}\) and \(l_{0}\) using cubic regression splines implemented in the mgcv package (Wood, 2017); however in addition to using weight functions \(\hat{W}^{(k)}\) selected using sandwich boosting, we also compare to versions with these selected using quasi pseudo Gaussian maximum likelihood (ML) and GEE1 (GEE) based methods. Note that the use of Algorithm 1 with ML is essentially the approach of Emmenegger and Buhlmann (2023) but using a robust sandwich estimate of the variance. Section 5.1 explores four simulated settings with varying degrees of misspecification of the conditional covariance. Section 5.2 looks at two datasets: the first, on orange juice sales grouped by store, highlights the benefits of flexible variance modelling via the nonparametric \(s\equiv 1/\sigma\) component in our sandwich boosting scheme (Section 3.2); and the second, a longitudinal study on women's wages, provides a real-life example of the phenomena seen in Examples 1 and 2 where more complex conditional covariance models can lead to poorer \(\beta\)-estimation when weights are selected using ML or GEE-based approaches, and where the minimiser of the sandwich loss is rather different to those corresponding to the ML and GEE objectives.

### Simulated data

We look at four simulated scenarios, and in each case consider different weight classes \(\mathcal{W}\): we describe these classes below in terms of their implied working covariances, i.e., in terms of the inverses of the weight matrices. For all the approaches we use the cross-fitting scheme (Algorithm 1) with \(K=2\) folds.
* **Homoscedastic:** Depending on the setting, this consists of either equicorrelated or autoregressive AR(1) working correlations scaled by a constant variance, with the single parameter estimated either by maximising a Gaussian likelihood (ML), by an approach of the form given in (4) (GEE) or by minimising the sandwich loss through projected gradient descent (only used in the setting of Section 5.1.3).
* **Heteroscedastic:** This uses the same working correlations as in the homoscedastic case, but allows for more flexibility in the working conditional variance function, with specifics depending on the estimation method used.
  * **ML:** We model the logarithm of the conditional variance function with a polynomial basis in the covariate \(X_{i}\), with the number of basis functions (restricted to at most 4 to avoid numerical instabilities) determined by cross-validation using Gaussian log-likelihood loss. This is carried out using the nlme package (Pinheiro et al., 2022).
  * **GEE:** We perform a cubic penalised spline regression of the squared residuals \(\hat{\varepsilon}_{ij}^{2}\) onto \(X_{i}\) using the mgcv package (Wood, 2017) to obtain an estimate of the working conditional variance function, with the parameter of the correlation subsequently estimated using the geeglm function from the geepack package (Halekoh et al., 2006).
* **Sandwich loss:** We use sandwich boosting as described in Section 3.2.

Section 5.1.1 considers a well-specified setting where the optimal true conditional covariance weights can in principle be replicated by the heteroscedastic weight estimators; Section 5.1.2 looks at a misspecified setting where the optimal weights \(W_{0}\) depend on \(D\) and varies the degree of misspecification; and Sections 5.1.3 and 5.1.4 explore the effect of varying group sizes in settings with mildly misspecified conditional correlation and variance respectively. In all cases, we simulate data with independent and identically distributed groups of equal size \(n\), which we vary across the settings. The mean squared errors of \(\beta\)-estimators corresponding to each method and setting pair, averaged over 500 repetitions, are shown in Figure 4.

#### 5.1.1 Increasing model complexity

Consider \(I=2000\) iid instances \((Y,D,X)\in\mathbb{R}^{10}\times\mathbb{R}^{10}\times\mathbb{R}^{10}\) of the partially linear model (6) with group size \(n=10\) where \(\beta=1\) is the target parameter of interest, \(X\) is componentwise iid uniform \(U[-5,5]\), \(m_{0}(x)=\cos x\), \(g_{0}(x)=\tanh x\) (with the \(\cos\) and \(\tanh\) functions applied component-wise), \(\xi\,|\,X\sim N_{10}(\mathbf{0},\Omega(X))\) and \(\varepsilon\,|\,(D,X)\sim N_{10}(\mathbf{0},\Sigma(X))\) with covariance matrices given by \[\Sigma(X)=\Big{(}\big{(}\mathbb{1}_{\{j=k\}}+0.2\cdot\mathbb{1}_{\{j\neq k\}}\big{)}\sigma_{0}(X_{j})\sigma_{0}(X_{k})\Big{)}_{(j,k)\in[10]^{2}},\quad\sigma_{0}(x)=2+\cos(\lambda x),\] \[\Omega(X)=\big{(}\mathbb{1}_{\{j=k\}}+0.1\cdot\mathbb{1}_{\{j\neq k\}}\big{)}_{(j,k)\in[10]^{2}}.\] Here \(\lambda\geq 0\) is the Lipschitz constant (complexity parameter) of the conditional variance function \(\sigma_{0}\), which we will vary. We use homoscedastic and heteroscedastic working covariance classes with equicorrelated working correlation (noting that the true correlation is also constant here).
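For concreteness, a Python sketch (our translation; the paper's simulations are run in R) generating one group from this design is:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(lam, n=10, beta=1.0):
    # One group (Y, D, X) from the Section 5.1.1 design described above.
    X = rng.uniform(-5.0, 5.0, n)
    Omega = 0.9 * np.eye(n) + 0.1 * np.ones((n, n))    # constant Omega
    xi = rng.multivariate_normal(np.zeros(n), Omega)
    D = np.cos(X) + xi                                 # D = m_0(X) + xi
    sig = 2.0 + np.cos(lam * X)                        # sigma_0(X_j)
    corr = 0.8 * np.eye(n) + 0.2 * np.ones((n, n))     # equicorrelation 0.2
    Sigma = np.outer(sig, sig) * corr                  # Sigma(X)
    eps = rng.multivariate_normal(np.zeros(n), Sigma)
    Y = beta * D + np.tanh(X) + eps                    # Y = beta D + g_0(X) + eps
    return Y, D, X
```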
As \(\varepsilon\,|\,(D,X)\stackrel{{ d}}{{=}}\varepsilon\,|\,X\), by Proposition 1 the population minimisers of the sandwich loss and those corresponding to the ML and GEE approaches should all coincide, and so from this perspective, the former should have no clear advantage in terms of the performance of the resulting \(\beta\)-estimator, and one might expect all heteroscedastic methods to perform similarly here since they need only model the conditional variance \(\sigma_{0}\) sufficiently well to yield the semiparametrically optimal MSE (referred to as the oracle MSE). The top left panel of Figure 4 shows that this appears to be the case when \(\lambda\leq 0.75\), but for larger values of the complexity parameter, both ML and GEE approaches appear to struggle. The latter displays a somewhat erratic trajectory, peaking at an MSE 9 times that of the oracle, and even greatly exceeding that of its homoscedastic counterpart (note that the curves for the homoscedastic GEE and ML estimators almost coincide). This behaviour seems likely to be due to the GEE approach finding local optima, which, recall, need not be local optima for the asymptotic MSE objective, i.e., the sandwich loss; see also Example 1, Setting (b) and Appendix A.1 for a similar phenomenon. The sandwich boosted estimator in comparison remains relatively robust to this increase in model complexity, maintaining performance comparable to the oracle estimator.

#### 5.1.2 Increasing covariance misspecification

We consider \(I=10^{4}\) instances of the partially linear model (6) with \(n=4\), \(\beta=1\), \(g_{0}(x)=\tanh x\) and \(m_{0}(x)=\cos x\). Errors \((\varepsilon,\xi)\) are generated by introducing an unobserved confounder \(\zeta\) between \(\varepsilon\) and \(\xi\) inspired by the proof of Theorem 2 as follows: \[X\sim N_{4}(\mathbf{0},0.9\mathbf{1}\mathbf{1}^{T}+0.1I_{4}),\quad B\,|\,X\sim\mathrm{Ber}(p(X)),\quad p(x):=\mathbbm{1}_{[0,\infty)}(\bar{x})+\eta^{-1}\mathbbm{1}_{(-\infty,0)}(\bar{x}),\] \[\zeta:=p(X)^{-1}B,\quad\omega^{(\varepsilon)},\omega^{(\xi)}\,|\,(\zeta,X)\sim N_{4}(\mathbf{0},I_{4}),\quad\xi:=\zeta^{\frac{1}{2}}\omega^{(\xi)},\] \[\varepsilon:=\zeta^{\frac{1}{2}}\Sigma^{\frac{1}{2}}\omega^{(\varepsilon)},\quad\Sigma=\big{(}0.2^{|j-k|}\big{)}_{(j,k)\in[4]^{2}}.\] Here \(\eta\geq 1\) acts as a 'misspecification parameter', with larger values indicating greater confounding and violation of the condition (11) for equivalence of the ML, GEE and sandwich losses. Note that \(\bar{x}\) denotes the mean of the entries of \(x\in\mathbb{R}^{n}\). As we see in the top right panel of Figure 4, the performances of the heteroscedastic ML and GEE estimators deteriorate with increasing \(\eta\) and yield worse MSEs than even an unweighted least squares estimator as the extent of covariance misspecification increases. In contrast, despite being equally restricted to use a misspecified class of weights that are a function of \(X\) alone, the sandwich boosted \(\hat{\beta}\) has a substantially smaller MSE compared to the approaches considered, with its advantage growing as \(\eta\) increases.

#### 5.1.3 Mild conditional correlation misspecification

We consider the simple setting of (grouped) linear regression: \[Y=\beta D+\varepsilon,\qquad D\sim N_{n}\left(\mathbf{0},\frac{1}{8}\mathbf{1}\mathbf{1}^{T}+\frac{7}{8}I_{n}\right),\] \[\varepsilon\sim\mathrm{ARMA}(2,1),\quad\phi=(0.3,0.6),\quad\vartheta=-0.5,\] where \(\phi\) and \(\vartheta\) are the autoregressive and moving average parameters respectively.
We take \(\beta=1\) and iid Gaussian innovations for the ARMA process. We consider settings with \(I=2^{15}/n\) and group sizes \(n\in\{2^{r}:r=1,2,\ldots,8\}\). To demonstrate the effect of correlation misspecification, we use a constant working variance and fit all models with an AR(1) working correlation, and thus we have a misspecified correlation for group sizes \(n\geq 3\). Example 1, setting (a) corresponds to this setting with \(n=100\). The bottom left panel of Figure 4 shows that the sandwich loss here outperforms the competing approaches, with the GEE approach leading to an inflation in MSE over an unweighted approach for moderate group sizes.

#### 5.1.4 Mild conditional variance misspecification

We generate \(I=2^{15}/n\) iid instances of the partially linear model (6) via \[X_{j}\stackrel{{\text{i.i.d.}}}{{\sim}}U[-2,2],\qquad m_{0}(x)=-6e^{-x},\qquad g_{0}(x)=\tanh(x),\qquad\xi\,|\,X\sim N_{n}\left(\mathbf{0},9I_{n}\right),\] \[\varepsilon\,|\,(D,X)\sim N_{n}\left(\mathbf{0},\Sigma(D,X)\right),\qquad\Sigma_{jk}(D,X)=\begin{cases}\sigma_{0}^{2}(D_{j},X_{j})&\text{if }j=k\\ 0.2\sigma_{0}(D_{j},X_{j})\sigma_{0}(D_{k},X_{k})&\text{if }j\neq k\end{cases},\] \[\sigma_{0}(d,x)=2+\tanh\left(d-3x\right),\] for \(n\in\left\{2^{r}:r=1,2,\ldots,8\right\}\) and taking \(\beta=1\). Note therefore that as \(\text{Cov}(\varepsilon\,|\,D,X)\neq\text{Cov}(\varepsilon\,|\,X)\), the optimal weights that depend also on \(D\) are not in the weight classes considered. All methods are fitted using an equicorrelated working correlation. We again see in the bottom right panel of Figure 4 superior performance of sandwich boosting over the competitors here.

Figure 4: Mean squared error of \(\beta\)-estimators for the four simulated experiments in Section 5.1 over 500 independent repeats. The top left plot gives the MSE relative to the oracle, while the others are relative to the unweighted estimator.

### Real-world data analyses

Here we present analyses of two datasets. We fit GEE and (heteroscedastic) sandwich boosting approaches as in the previous section, but use mixed effects models (MEM) to give a family of (working) conditional covariance functions in the ML framework by taking certain covariates as random effects. We continue to use these within Algorithm 1, using the lme4 package (Bates et al., 2015) to obtain the weights in the MEM case as in Emmenegger and Buhlmann (2023), but reporting robust sandwich estimates of the variance of the \(\hat{\beta}\) constructed. We use \(K=5\) folds for cross-fitting. To mitigate the randomness of the resulting estimators on the sample splits themselves, we aggregate the \(\beta\) and variance estimators obtained over 50 random independent sample splits using the approach of Chernozhukov et al. (2018) and Emmenegger and Buhlmann (2023); see Appendix E for details.

#### 5.2.1 Orange juice price elasticity

We analyse historical data on orange juice sales, available from the James M. Kilts Center, University of Chicago Booth School of Business (James M. Kilts Center, Accessed: 2022). The dataset is composed of grouped store-level scanner price and sales data over a 121-week period from 83 Dominick's Finer Foods stores and consists of \(N=9649\) observations. Our goal is to estimate the price elasticity of a brand of orange juice (Tropicana) during this time period.
We do this via a partially linear model (6) of the logarithm of the quantity of sales (\(Y=\log(\text{Sales})\)) on the logarithm of the price (\(D=\log(\text{Price})\)) accounting for confounding by events in time (\(X=\text{week number}\)), the coefficient \(\beta\) of \(D\) giving the price elasticity. Table 2 shows the price elasticity estimators and associated sandwich variance estimates \(\hat{V}\); note GEE and sandwich boosting approaches here use an equicorrelated working correlation. We see that our sandwich boosting estimator has a 26.0% reduction in variance compared to the second-best homoscedastic GEE estimator. Note that in the cases of both the mixed effects models (MEM) and GEE estimators, as a broader weight class is used, we observe poorer performance in estimating the price elasticity, illustrating the phenomenon described in Figure 3. Also note that the sandwich boosted and heteroscedastic GEE estimators model weights in the same class. However, whilst the heteroscedastic GEE estimator does not usefully model any heteroscedasticity in the data (and in fact exhibits the worst performance out of all the estimators considered), sandwich boosting successfully estimates a helpful weighting scheme. Figure 5(a) demonstrates an \(s\)-function output from sandwich boosting, effectively capturing the general and seasonal trends in volatility that are not learned by the other estimators.

\begin{table} \begin{tabular}{c c c c} \hline \hline Method & \(\hat{\beta}\) & \(\hat{V}\) & \begin{tabular}{c} Reduction in \(\hat{V}\) relative to \\ homoscedastic GEE estimator (\%) \\ \end{tabular} \\ \hline **Sandwich Boosting** & \(-2.97\) & \(\mathbf{30.9}\) & \(\mathbf{26.0}\) \\ Homoscedastic GEE & \(-3.18\) & \(41.7\) & \(0\) \\ Heteroscedastic GEE & \(-3.19\) & \(44.6\) & \(-7.0\) \\ Intercept only MEM & \(-3.17\) & \(41.8\) & \(-0.17\) \\ Intercept + Time MEM & \(-3.18\) & \(42.5\) & \(-2.0\) \\ \hline \hline \end{tabular} \end{table} Table 2: Estimates for the price elasticity of orange juice and corresponding variance estimates.

#### 5.2.2 National longitudinal survey of young working women

Here we consider a dataset from the National Longitudinal Survey of Young Women (Bureau of Labor Statistics, 2004) containing the wages of 4711 young working women, each measured at approximately 6 time points per woman, totalling \(N=28{,}534\) observations. We measure the effect of work experience in their current related sector (\(D=\text{work experience}\)) on the logarithm of wages (\(Y=\log(\text{wage})\)), controlling for age and tenure (\(X=(\text{age},\text{tenure})\)), with weights a function of \(X\), using an equicorrelated working correlation for sandwich boosting and GEE approaches. Table 3 gives the \(\beta\)-estimators and associated variances \(\hat{V}\) for the approaches considered. Similarly to the orange juice price elasticity analysis, we see that the broader model classes of the mixed effects model and heteroscedastic GEE estimators yield larger variances than their respective homoscedastic counterparts. The sandwich boosting estimator gives a modest \(7.4\%\) reduction in variance over the second-smallest homoscedastic GEE variance. Interestingly, however, this improvement is entirely due to the sandwich loss rather than the potentially richer model class used by sandwich boosting: the \(s\) function output by sandwich boosting is almost constant, and so it effectively uses a constant working variance and estimates a single working correlation parameter.
The sandwich loss objective for this correlation (parametrised in terms of \(\theta\); see Section 3.2) is plotted in Figure 5(b) alongside the objectives corresponding to the homoscedastic GEE and intercept only MEM approaches. We see that the respective \(\theta\)-minimisers differ substantially, with the asymptotic variance (the sandwich loss) evaluated at the minimisers of the GEE and MEM approaches being larger than the asymptotic variance evaluated at any \(\theta\in[0.14,1.09]\), corresponding to any working correlation in the range \(\rho\in[0.12,0.51]\).

\begin{table} \begin{tabular}{c c c c} \hline \hline Method & \(\hat{\beta}\) & \(\hat{V}\) & Reduction in \(\hat{V}\) relative to \\ & & & homoscedastic GEE estimator (\%) \\ \hline **Sandwich Boosting** & 0.040 & **0.083** & **7.4** \\ Homoscedastic GEE & 0.040 & 0.089 & 0 \\ Heteroscedastic GEE & 0.039 & 0.091 & \(-1.6\) \\ Intercept only MEM & 0.040 & 0.092 & \(-3.5\) \\ Intercept + Age + Tenure MEM & 0.042 & 0.120 & \(-34.9\) \\ \hline \hline \end{tabular} \end{table} Table 3: Estimates of \(\beta\) and associated variance estimates \(\hat{V}\) relating to the effect of work experience on wages from the National Longitudinal Survey of Young Women.

## 6 Discussion

In this work we have highlighted and clarified the shortcomings of some popular classical methods in the estimation of weights for weighted least squares-type estimators in partially linear models when the conditional covariance is misspecified. We instead advocate for choosing weights to minimise a sandwich estimate of the variance, what we call the sandwich loss in this context. A main contribution of ours, in the spirit of the trend towards using machine learning methods for the purposes of statistical inference, is a practical gradient boosting scheme for approximately minimising this loss over a potentially flexible family of functions defined implicitly through a user-chosen base-learner. Despite the unusual form of our loss that does not decompose as a sum over data points as with the standard case of the empirical risk, we show that for certain versions of our algorithm, the boosting updates can be performed in linear time.

Our work offers a number of directions for future research. On the computational side, it would be useful to investigate broader classes of working correlations that could be accommodated within sandwich boosting to yield linear time updates. It could also be fruitful to consider the use of the sandwich loss in other classes of models; for example, it would be of interest to develop these ideas in the context of generalised (partially) linear marginal models, and beyond. Thus far we have only considered estimation of a single scalar quantity. In other situations, one may be interested in estimating several parameters simultaneously, and in such cases there are several modifications of the basic sandwich loss that may be helpful to explore. For example, consider the following generalisation of the partially linear model (6): \[\begin{split} Y&=\beta(X)\circ D+g_{0}(X)+\varepsilon,\\ D&=m_{0}(X)+\xi.\end{split} \tag{17}\] Here \(\circ\) denotes the Hadamard product, \(\beta(X)\) is a row-wise function of \(X\), and all other terms are as before; thus the response of the \(j\)th observation within the \(i\)th group satisfies (with a slight abuse of notation) the mean function relationship \(\mathbb{E}[Y_{j}\,|\,D,X]=\beta(X_{j})D_{j}+g_{0}(X_{j})\).
We suppose \(\beta(X)\) admits the basis expansion \[\beta(X)=\sum_{l=1}^{L}\phi_{l}\varphi_{l}(X), \tag{18}\] for some \(L\) known basis functions \((\varphi_{l})_{l\in[L]}\) (also row-wise functions of \(X\)), and unknown vector of parameters \(\mathbf{\phi}:=(\phi_{1},\dots,\phi_{L})\in\mathbb{R}^{L}\) to be estimated. For example, the model where \(\beta(X)=\phi_{1}+\phi_{2}X\) corresponds in the classical linear model setting to fitting an 'interaction term' between \(D\) and \(X\).

Figure 5: Outputs relating to data analyses in Section 5.2.

Given a consistent estimator \(\hat{\mathbf{\phi}}\) of \(\mathbf{\phi}\), the mean squared error of the resulting plug-in \(\beta\) function estimator \(\hat{\beta}(X)=\sum_{l=1}^{L}\hat{\phi}_{l}\varphi_{l}(X)\) satisfies \[\mathbb{E}\left[\left(\hat{\beta}(X)-\beta(X)\right)^{2}\right]\approx\text{tr}\left(\Phi V_{\mathbf{\phi}}\right),\;\Phi:=\left(\mathbb{E}\left[\varphi_{l}(X)\varphi_{l^{\prime}}(X)\right]\right)_{l\in[L],l^{\prime}\in[L]}\quad\text{and}\quad V_{\mathbf{\phi}}:=\text{Var}\,\hat{\mathbf{\phi}}. \tag{19}\] This suggests the following approach. Consider a class of weighted \(\mathbf{\phi}\)-estimators \(\hat{\mathbf{\phi}}=\hat{\mathbf{\phi}}(W)\) where \(W\) is a weight function within a class \(\mathcal{W}\), similarly to our earlier framework of weighted \(\beta\)-estimators in Section 2.1. Then given estimates \(\hat{\Phi}\) of \(\Phi\) and \(\hat{V}_{\mathbf{\phi}}(W)\) of \(\text{Var}\,\hat{\mathbf{\phi}}(W)\), we can consider a generalised sandwich loss of the form \[\hat{L}_{\text{MSE}(\hat{\beta})}(W):=\text{tr}\big{(}\hat{\Phi}\hat{V}_{\mathbf{\phi}}(W)\big{)}, \tag{20}\] which we may attempt to minimise using a sandwich boosting approach; further details are given in Appendix F.

Our sandwich loss \(\hat{L}_{\text{SL}}\) is defined with respect to estimated errors \(\tilde{\xi}_{i}\) and \(\tilde{\varepsilon}_{i}\) derived from initial regressions, which, in particular, take no advantage of the dependence structure in the data, unlike the final estimate \(\hat{\beta}\). Clearly initial weighted regressions could deliver improved estimates of the errors, in turn giving an improved estimator \(\hat{\beta}\). This suggests a scheme with weights and residuals being updated iteratively, analogous to iterative generalised least squares (Goldstein, 1986, 1989). In the simple linear model setting, where the residuals are derived from linear regressions, a generalised sandwich loss of the form (20) may be appropriate for delivering accurate estimates of the errors. How to do this for a general regression is less clear but would certainly be worthy of further investigation.
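To sketch how the generalised loss (20) could be evaluated in practice, the following minimal Python fragment (ours; \(\hat{\Phi}\) is estimated by empirical second moments and \(\hat{V}_{\bm{\phi}}(W)\) is assumed supplied by the weighted \(\bm{\phi}\)-estimation step) computes the trace criterion for a given basis:

```python
import numpy as np

def Phi_hat(X_rows, basis_fns):
    # Empirical version of Phi_{l,l'} = E[phi_l(X) phi_{l'}(X)], averaging
    # the basis-function products over all observed covariate rows.
    B = np.array([[phi(x) for phi in basis_fns] for x in X_rows])  # N x L
    return B.T @ B / len(X_rows)

def generalised_sandwich_loss(Phi_est, V_phi_est):
    # tr(Phi_hat V_phi_hat): the plug-in MSE proxy (20) for beta(X).
    return float(np.trace(Phi_est @ V_phi_est))

# e.g. for beta(X) = phi_1 + phi_2 X, the 'interaction term' example above:
basis = [lambda x: 1.0, lambda x: float(x)]
```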
2305.10166
The Virtual Research Environment: towards a comprehensive analysis platform
The Virtual Research Environment is an analysis platform developed at CERN serving the needs of scientific communities involved in European Projects. Its scope is to facilitate the development of end-to-end physics workflows, providing researchers with access to an infrastructure and to the digital content necessary to produce and preserve a scientific result in compliance with FAIR principles. The platform's development is aimed at demonstrating how sciences spanning from High Energy Physics to Astrophysics could benefit from the usage of common technologies, initially born to satisfy CERN's exabyte-scale data management needs. The Virtual Research Environment's main components are (1) a federated distributed storage solution (the Data Lake), providing functionalities for data injection and replication through a Data Management framework (Rucio), (2) a computing cluster supplying the processing power to run full analyses with Reana, a re-analysis software, (3) a federated and reliable Authentication and Authorization layer and (4) an enhanced notebook interface with containerised environments to hide the infrastructure's complexity from the user. The deployment of the Virtual Research Environment is open-source and modular, in order to make it easily reproducible by partner institutions; it is publicly accessible and kept up to date by taking advantage of state of the art IT-infrastructure technologies.
Elena Gazzarrini, Enrique Garcia, Domenic Gosein, Alba Vendrell Moya, Agisilaos Kounelis, Xavier Espinal
2023-05-17T12:34:49Z
http://arxiv.org/abs/2305.10166v1
# The Virtual Research Environment: towards a comprehensive analysis platform

###### Abstract

The Virtual Research Environment is an analysis platform developed at CERN serving the needs of scientific communities involved in European Projects. Its scope is to facilitate the development of end-to-end physics workflows, providing researchers with access to an infrastructure and to the digital content necessary to produce and preserve a scientific result in compliance with FAIR principles. The platform's development is aimed at demonstrating how sciences spanning from High Energy Physics to Astrophysics could benefit from the usage of common technologies, initially born to satisfy CERN's exabyte-scale data management needs. The Virtual Research Environment's main components are (1) a federated distributed storage solution (the Data Lake), providing functionalities for data injection and replication through a Data Management framework (Rucio), (2) a computing cluster supplying the processing power to run full analyses with Reana, a re-analysis software, (3) a federated and reliable Authentication and Authorization layer and (4) an enhanced notebook interface with containerised environments to hide the infrastructure's complexity from the user. The deployment of the Virtual Research Environment is open-source and modular, in order to make it easily reproducible by partner institutions; it is publicly accessible and kept up to date by taking advantage of state of the art IT-infrastructure technologies.

## 1 Introduction

Physicists working at CERN's Large Hadron Collider (LHC) experiments were historically among the first scientists to face large amounts of incoming data, and were therefore forced to find efficient, data-intensive software solutions from an early stage - the implementation of algorithms for the LHC project started back in the 1980s - well before the 'big data' trend emerged on a global scale. Nowadays, PBs of data are saved every day in the CERN Data Center; as an example, the LHCb experiment currently selects 10 GBs of the most interesting LHC collisions each second (after processing 4 TBs of data per second in real-time) for physics analysis [1]. CERN developers have therefore accumulated experience and expertise in engineering tools for handling, processing and analysing large data volumes. While High Energy Physics (HEP) sciences have faced these challenges for a long time, non-HEP sciences are more recently entering the exabyte-scale era [2], and therefore need the ability to efficiently track and process the generated data while meeting FAIR (Findable, Accessible, Interoperable, Reusable) data principles 1. However, Open Data alone is not sufficient to foster reuse and reproducibility in physics. It is also essential to capture structured information about the analysis workflows to ensure the usability and longevity of results [3, 4]. A common problem, as stated in the literature [5], is that half of the researchers cannot reproduce their own results; this troubling finding can be mitigated by preserving data and code via (re-)analysis platforms that apply logical techniques to describe, illustrate, condense and evaluate data. EU-funded H2020 projects aim to 'democratise' data-intensive technologies, allowing different sciences outside the HEP field - from High Energy Astrophysics to Gravitational Waves searches - to gain expertise on new solutions, eventually fostering cross-fertilisation of sciences.
All in all, scientific collaborations are becoming more international; as a consequence, common infrastructures that allow reliable and efficient (i) Federated Data Management and Data Transfer Services, (ii) Federated Distributed Storage, (iii) Data Processing and Orchestration and (iv) Software and Analysis Reproducibility are becoming increasingly popular. The Virtual Research Environment (VRE) tries to encompass all of the above, while placing special attention on the user experience by providing the scientist with an enhanced notebook interface. The VRE's configuration can in addition be flexibly modified to access heterogeneous external resources (storage and computing) managed by EU partner institutions. The following sections will introduce the scientific value of the VRE and illustrate its main components.

Footnote 1: [https://www.go-fair.org/fair-principles/](https://www.go-fair.org/fair-principles/)

## 2 Scientific value

The VRE concept was incubated in the European Science Cluster of Astronomy, Astroparticle and Particle Physics (ESCAPE 2) project and is currently being developed and deployed within the EOSC (European Open Science Cloud) Future 3 project, both addressing Open Science challenges to ensure optimised access, management, organisation, processing and preservation of the enormous amount of data handled by the experiments. The tools and concepts initially developed by the ESCAPE work packages, such as the Data Lake (see next section), are hosted and implemented within the VRE, which aims at demonstrating an interdisciplinary science example from bottom-up efforts originating from different scientific domains. In fact, the experiments currently involved in the project come not only from Particle Physics (CERN), but also from High-energy Astrophysics (CTA [6], FermiLAT [7]), Neutrino Observations (KM3NET [8], Darkside [9]), Radio Astronomy (SKA [10], LOFAR [11]) and Gravitational Waves searches (LIGO [12], Virgo [13]).

Footnote 2: [https://projectescape.eu/](https://projectescape.eu/)

Footnote 3: [https://eoscfuture.eu/](https://eoscfuture.eu/)

Many of the aforementioned experiments tackle Dark Matter (DM) exploration from different perspectives. The problem of imposing limits on the mass of DM is a fundamental question in physics: Direct Detection methods study the interaction of particles inside underground detectors, Collider physics produces DM candidates from accelerating protons, Astrophysics observes distant phenomena in the sky and compares them with the theory to detect abnormal behaviours, while Indirect Detection methods investigate annihilating DM by looking at its decay products, such as neutrinos. Figure 1 illustrates this concept and shows how a platform such as the VRE is a useful place to collect results and to generate combined plots to impose universal DM mass limits.

## 3 VRE Components

In its endeavour to homogenise the technological needs of diverse scientific communities, the VRE consists of (1) a data management framework, (2) access to computing processing resources, (3) user management through a reliable Authorisation and Authentication Infrastructure (AAI) and (4) exposure to a user interface to facilitate the interaction with the underlying infrastructure. Figure 2 graphically illustrates the architecture, supported by CI/CD cycles, container orchestration and Infrastructure-as-Code (IaC) processes. The VRE's deployment is centrally managed on CERN's Cloud infrastructure (details can be found in Table 1) with Kubernetes (K8s).
### Data Management, the Data Lake

The data management and storage orchestration for the VRE is largely based on Rucio [14], a data management framework developed at CERN. Rucio is an open-source project initially developed by the ATLAS [15] experiment for managing community data. It provides services and associated libraries to manage large volumes of data spread across facilities at multiple institutions and organizations. The VRE Rucio instance is composed of (i) a cloud infrastructure described in Table 1, where Rucio servers, daemons and webUI, installed through Helm charts, manage API requests, user authentication, data upload, access, download and replication, (ii) a central relational database hosted at CERN, providing backup services in case of major disruptions and (iii) multiple Rucio Storage Elements (RSEs) with quotas varying from 1 to 300 TB, hosted at each partner institution, supporting various storage technologies (EOS [16], StoRM [17], dCache [18], DPM [19], XRootD 4) and using diverse back-ends (classic RAID systems, Ceph, and multi-replica). Data can be accessed through gridFTP, HTTP(S), XRootD and S3 protocols. Such a policy-driven, reliable, distributed data infrastructure, commonly referred to as the Data Lake [20, 21], is able to deliver data on-demand at low latency to all types of processing facilities.

\begin{table} \begin{tabular}{|c c c c c c|} \hline vCPUs & RAM (GB) & Masters & Nodes & Remote Storage (TB) & CephFS (TB) \\ \hline 184 & 335.8 & 3 & 23 & 646 & 1.8 \\ \hline \end{tabular} \end{table} Table 1: The VRE technical components. The first 4 columns refer to the cloud infrastructure managed with K8s, while the last two columns refer to the total quota of the remote storage elements and of the shared object storage (CephFS) attached to the processing nodes.

Figure 1: EOSC-Future’s Dark Matter Science Project aims at bringing together different search approaches (Astrophysics, Theory, Direct Detection, Collider Physics, Indirect Detection), to ultimately investigate limits on DM mass.

The main functionalities that Rucio offers are (1) data upload, download and streaming, executed by exploiting the power of GFAL 5, and (2) third-party asynchronous transfers (between RSEs) and deletion, achieved instead by CERN's File Transfer Service (FTS) [22]. The latter is granted permission to access the various RSEs by Rucio processes called daemons, which are responsible for any data management action on the infrastructure.

Footnote 5: [https://dmc-docs.web.cern.ch/dmc-docs/gfal2/gfal2.html](https://dmc-docs.web.cern.ch/dmc-docs/gfal2/gfal2.html)

### Computing: Reana cluster

The processing of data in the VRE is managed by an instance of CERN's reproducible analysis platform, Reana [23], which allows analyses to be run on various computing backends (K8s, HTCondor, Slurm). Navigating the platform is made intuitive for the scientist, who only needs to prepare a declarative .yaml file containing instructions on where to find: (i) input data and parameters, (ii) code, (iii) computing environment and (iv) computational steps needed to perform a full analysis. In this way, scientists can maintain and compare lists of past runs and share the results with colleagues. Reana's workflow distribution on the cluster's virtual nodes is managed by default by K8s. However, computationally heavier analysis steps can be dispatched to High Performance Computers (HPCs) via HTCondor or Slurm, assuming the user has access rights to remote resources.
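Returning briefly to the data management layer, the two Rucio functionalities above can also be driven programmatically. The sketch below (ours, not from the paper) uses the Rucio Python client; the scope, file and RSE names are placeholders, and client method signatures may vary between Rucio releases.

```python
# Minimal sketch of uploading a file to the Data Lake and requesting a replica.
from rucio.client import Client
from rucio.client.uploadclient import UploadClient

client = Client()

# (1) Upload: register a local file in the Data Lake under a user scope.
UploadClient().upload([{
    "path": "results/histogram.root",   # hypothetical local file
    "rse": "CERN-EOS",                  # placeholder destination RSE
    "did_scope": "user.jdoe",           # placeholder scope
}])

# (2) Replication: ask for one extra replica at a partner site; the Rucio
# daemons then delegate the third-party transfer to FTS.
client.add_replication_rule(
    dids=[{"scope": "user.jdoe", "name": "histogram.root"}],
    copies=1,
    rse_expression="PARTNER-SITE-RSE",  # placeholder RSE expression
)
```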
The independence of the Reana framework from local storage (when calling input data) was implemented by the VRE team by adding a feature 6 that allows user authentication to the Rucio Data Lake as the first step of the Reana analysis run (represented in Figure 2 by a green thick arrow). In this way, data can be moved between the Data Lake's storage elements and the Reana shared storage, located close to the K8s processing nodes where the analysis steps are distributed.

Figure 2: A graphical representation of the VRE components, i.e. (1) a federated distributed storage solution (blue), (2) a computing cluster (red), (3) a federated AAI layer (pink) and (4) an enhanced notebook interface (purple).

### Authentication and Authorization Infrastructure

Given the heterogeneous composition of the VRE infrastructure, it is essential to have a single authentication and authorisation method that covers all services and grants users the correct permissions to access them. The VRE's AAI layer is based on the INDIGO Identity and Access Management (IAM) service [24]. The VRE IAM instance (inherited from the ESCAPE one) is deployed on a K8s cluster at INFN-CNAF and supports authentication via EduGAIN, via OIDC tokens and via X.509 certificates/Virtual Organization Membership Service (VOMS 7) attribute provisioning services. The token authentication to remote storage elements for data access and transfer - initially representing the biggest challenge - has been successfully tested on all the VRE's RSEs. The IAM authentication to the Reana instance is currently being implemented and will appear in the next software release version.

Footnote 7: [https://github.com/italiangrid/voms](https://github.com/italiangrid/voms)

### User interface: enhanced notebook service

The online entry point of the VRE is a JupyterHub interface, where scientists can run preliminary analyses. The user gets authenticated via IAM and selects the desired computational environment, automatically pulled from the VRE container registry. The software is therefore already installed in the JupyterLab session specific to each user pod. Software distribution services such as the CERN Virtual Machine File System (CVMFS 8) additionally allow software installation on an 800 GB CephFS shared volume compatible with POSIX standards, mounted on the JupyterHub node. In order to ensure better data security on the platform and to avoid a user filling up the shared volume (leading to an interruption of the JupyterLab session for all users), the JupyterLab interface has been enhanced with a Rucio plug-in 9 (represented in the purple box of Figure 2). This feature enables the user to browse the data in the Data Lake and make a copy of it on a CERN RSE of \(\sim\)0.5 TB, which has been FUSE mounted on the JupyterHub node (yellow arrow in Figure 2); the data is therefore stored close to the processing power, minimising latency. On the other hand, the JupyterHub node consists of 14 GB of RAM and its usage is limited to exploratory analysis runs; to start larger analyses, it is necessary to connect to the VRE Reana cluster via the terminal of the JupyterHub and dispatch the computation to distributed HPCs.

Footnote 8: [https://cernvm.cern.ch/fs/](https://cernvm.cern.ch/fs/)

Footnote 9: [https://github.com/rucio/jupyterlab-extension](https://github.com/rucio/jupyterlab-extension)

## 4 Conclusion

The modular ecosystem of services and tools constituting the VRE represents a European attempt to demonstrate a bottom-up, FAIR approach to scientific collaboration.
The weekly on-boarding of new members requesting an account to access the VRE (with a total of more than 200 users) signifies the community's need for such a novel infrastructure, which encompasses all the resources needed to easily run an end-to-end analysis. The project has been useful in contributing to improving the software stack of consolidated technologies inside CERN, such as Rucio and Reana. The VRE's applicability to different scientific use cases has been proven successful: postdocs coming from HEP and astrophysics are already using Reana to preserve their workflows on the VRE. The deployment of the infrastructure is kept simple and is extensively documented so it can be used by other institutes as a blueprint: site administrators from collaborations such as the Einstein Telescope and the Deutsches Zentrum für Astrophysik have already demonstrated interest in emulating the VRE at their home institutions. This represents a fundamental achievement, in both sociological and technological respects, for European collaborations that must address upcoming data management and computing challenges in the next decade.

## Code Availability

The deployment of the VRE infrastructure is still under construction, but the code is available on the public CERN VRE Github project 10, along with the necessary documentation to reproduce it. The VRE landing page 11 provides links to the source codes and description of the various EOSC-Future Science Projects.

Footnote 10: [https://github.com/cern-vre](https://github.com/cern-vre)

Footnote 11: [https://escape2020.pages.in2p3.fr/virtual-environment/home/](https://escape2020.pages.in2p3.fr/virtual-environment/home/)

## Acknowledgements

Authors acknowledge support from the ESCAPE and EOSC-Future projects. ESCAPE has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement 824064. EOSC-Future is co-funded by the European Union Horizon Programme call INFRAEOSC-03-2020, Grant Agreement 101017536.
2304.11151
Many-Body Coherence in Quantum Transport
In this study, we propose the concept of harnessing quantum coherence to control electron transport in a many-body system. Combining an open quantum system technique based on Hubbard operators, we show that many-body coherence can eliminate the well-known Coulomb staircase and cause strong negative differential resistance. To explore the mechanism, we analytically derive the current-coherence relationship in the zero electron-phonon coupling limit. Furthermore, by incorporating a gate field, we demonstrate the possibility of constructing a coherence-controlled transistor. This development opens up a new direction for exploring quantum electronic devices based on many-body coherence.
Ching-Chi Hang, Liang-Yan Hsu
2023-04-21T17:51:19Z
http://arxiv.org/abs/2304.11151v6
# Many-Body Coherence in Quantum Transport

###### Abstract

In this study, we propose the concept of harnessing quantum coherence to control electron transport in a many-body system. Combining an open quantum system technique based on Hubbard operators, we show that many-body coherence can eliminate the well-known Coulomb staircase and cause strong negative differential resistance. To explore the mechanism, we analytically derive the current-coherence relationship in the zero electron-phonon coupling limit. Furthermore, by incorporating a gate field, we demonstrate the possibility of constructing a coherence-controlled transistor. This development opens up a new direction for exploring quantum electronic devices based on many-body coherence.

_Introduction.--_ Quantum coherence is a fundamental concept in quantum mechanics that sets it apart from classical physics. The unique properties of quantum coherence have been applied in a diverse range of fields across various disciplines. For instance, quantum coherence has been utilized to enhance the energy transfer efficiency in quantum biology [1; 2; 3; 4] and the performance of nanoscale heat engines in quantum thermodynamics [5; 6; 7; 8; 9]. Moreover, quantum coherence can be exploited to store and transfer information for quantum communication [10; 11; 12; 13]. In nanoelectronics, the importance of quantum coherence is manifested in the interference of a single electron passing through a junction with multiple tunneling pathways, e.g., a quantum interference transistor [14; 15; 16; 17; 18; 19]. Despite extensive studies on quantum interference in quantum transport, how to directly connect quantum coherence and transport properties, particularly in many-body systems, remains an open question. Many-body effects in quantum transport have attracted considerable attention due to their critical significance in open quantum systems and their potential applications in nanoelectronics [20; 21; 22; 23; 24; 25; 26]. Numerous intriguing many-body quantum transport phenomena, including Coulomb blockade [27; 28], Kondo resonance [29; 30], Franck-Condon blockade [31; 32; 33], and current hysteresis [34; 35], have been extensively explored in semiconductor nanostructures, 2D materials, and single-molecule junctions. However, the concept of many-body coherence, which refers to quantum coherence between two many-body states, has not received enough attention in the field of quantum transport. In this Letter, we introduce quantum coherence from a Redfield-type fermionic quantum master equation and study quantum transport in a minimal model that incorporates many-body effects such as electron-electron and electron-phonon interactions. Based on this minimal model, we aim to clarify the role of many-body coherence in quantum transport, thus shedding light on how to harness many-body coherence to design quantum electronic devices.

_Model Hamiltonian.--_ To demonstrate the effect of many-body coherence on quantum transport, we consider a quantum electronic device shown in Fig. 1. The device is described by the total Hamiltonian \(\hat{H}=\hat{H}_{\rm sys}+\hat{H}_{\rm lead}+\hat{H}_{\rm sys-lead}+\hat{H}_{\rm gate}\) composed of the system Hamiltonian \(\hat{H}_{\rm sys}\), the lead Hamiltonian \(\hat{H}_{\rm lead}\), the system-lead coupling term \(\hat{H}_{\rm sys-lead}\), and the gate Hamiltonian \(\hat{H}_{\rm gate}\). Furthermore, to simplify the complexity of a many-body system while retaining electron-electron and electron-phonon interactions, we consider the two-site Peierls-Hubbard model [36; 37] to be the system, including on-site energy \(\varepsilon\), on-site Coulomb repulsion \(U\), intersite electron hopping \(t\) and electron-phonon coupling strength \(g\). For the convenience of theoretical analysis, we rearrange the terms in the two-site Peierls-Hubbard model as \(\hat{H}_{\rm sys}=\hat{H}_{\rm el}+\hat{H}_{\rm ph}+\hat{H}_{\rm el-ph}\) composed of the electronic Hamiltonian \(\hat{H}_{\rm el}\), the phonon Hamiltonian \(\hat{H}_{\rm ph}\), and the electron-phonon coupling \(\hat{H}_{\rm el-ph}\). The electronic Hamiltonian has the form \(\hat{H}_{\rm el}=\varepsilon\sum_{i,\sigma}\hat{c}_{i\sigma}^{\dagger}\hat{c}_{i\sigma}+U\sum_{i}\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{i\uparrow}\hat{c}_{i\downarrow}^{\dagger}\hat{c}_{i\downarrow}-t\sum_{\sigma}(\hat{c}_{2\sigma}^{\dagger}\hat{c}_{1\sigma}+\mathrm{H.c.})\), where \(\hat{c}_{i\sigma}^{\dagger}\) (\(\hat{c}_{i\sigma}\)) is the fermion operator which creates (annihilates) an electron on site \(i=1,2\) with spin \(\sigma=\uparrow,\downarrow\). The model can accommodate at most 4 electrons and generate 16 different many-body electronic states in total [38]. To properly describe many-body states, we denote the many-body states of the system as \(|N_{a},a\rangle\) with energy \(\varepsilon_{a}\) as shown in Table 1, where \(N_{a}\) represents the number of electrons of state \(a\).

Figure 1: (a) Illustration of a quantum electronic device in a many-body system. The system coupled with one gate and two leads L and R contains on-site Coulomb repulsion \(U\), intersite electron hopping \(t\), and electron-phonon coupling \(g\).

We consider the phonon Hamiltonian
Furthermore, to simplify the complexity of a many-body system while retaining electron-electron and electron-phonon interactions, we consider the two-site Peierls-Hubbard model [36; 37] to be the system, including on-site energy \(\varepsilon\), on-site Coulomb repulsion \(U\), intersite electron hopping \(t\) and electron-phonon coupling strength \(g\). For the convenience of theoretical analysis, we rearrange the terms in the two-site Peierls-Hubbard model as \(\hat{H}_{\rm sys}=\hat{H}_{\rm el}+\hat{H}_{\rm ph}+\hat{H}_{\rm el-ph}\) composed of the electronic Hamiltonian \(\hat{H}_{\rm el}\), the phonon Hamiltonian \(\hat{H}_{\rm ph}\), and the electron-phonon coupling \(\hat{H}_{\rm el-ph}\). The electronic Hamiltonian has the form \(\hat{H}_{\rm el}=\varepsilon\sum_{l,\sigma}\hat{c}_{l\sigma}^{\dagger}\hat{c}_ {l\sigma}+U\sum\hat{c}_{l\uparrow}^{\dagger}\hat{c}_{l\uparrow}\hat{c}_{l \downarrow}^{\dagger}\hat{c}_{l\downarrow}-t\sum_{\sigma}(\hat{c}_{2\sigma}^{ \dagger}\hat{c}_{1\sigma}+\mathrm{H.c.})\), where \(\hat{c}_{l\sigma}^{\dagger}\) (\(\hat{c}_{l\sigma}\)) is the fermion operator which creates (annihilates) an electron on site \(i=1,2\) with spin \(\sigma=\uparrow,\downarrow\). The model can accommodate at most 4 electrons and generate 16 different many-body electronic states in total [38]. To properly describe many-body states, we denote the many-body states of the system as \(|N_{a},a\rangle\) with energy \(\varepsilon_{a}\) as shown in Table 1, where \(N_{a}\) represents the number of electrons of state \(a\). We consider the phonon Hamiltonian Figure 1: (a) Illustration of a quantum electronic device in a many-body system. The system coupled with one gate and two leads L and R contains on-site Coulomb repulsion \(U\), intersite electron hopping \(t\), and electron-phonon coupling \(g\). \(\hat{H}_{\rm ph}=\sum_{\alpha}\hbar\omega_{\alpha}(\hat{b}_{\alpha}^{\dagger}\hat{b} _{\alpha}+\frac{1}{2})\) and the electron-phonon coupling \(\hat{H}_{\rm el-ph}=g\sum_{\sigma,\alpha}a_{\ell\sigma}^{\dagger}\hat{c}_{i \sigma}(\hat{b}_{\alpha}^{\dagger}+\hat{b}_{\alpha})\), where \(\omega_{\alpha}\) and \(\hat{b}_{\alpha}^{\dagger}\) (\(\hat{b}_{\alpha}\)) stand for phonon frequency and bosonic creation (annihilation) operator of the phonon mode \(\alpha\), respectively. According to the previous study [39], we believe that a two-site system, such as thiolated arlythynberg with 9,10-dihydroanthracene core (AH), is experimentally feasible for the demonstration of the effect of many-body coherence on quantum transport. The two leads and the gate are modeled as follows. For the gate, we model its Hamiltonian as \(\hat{H}_{\rm gate}=-\mathrm{e}V_{\rm g}\sum_{\downarrow,\sigma}\hat{c}_{i\sigma} ^{\dagger}\hat{c}_{i\sigma}\), where the gate voltage \(V_{\rm g}\) shifts the on-site energy \(\varepsilon\) by \(-\mathrm{e}V_{\rm g}\). The two leads are described by a noninteracting electron gas model, \(\hat{H}_{\rm lead}=\sum_{l,k,\sigma}\varepsilon_{\rm k\sigma}\hat{a}_{l\sigma }^{\dagger}\hat{d}_{lk\sigma}\), where \(\hat{a}_{lk\sigma}^{\dagger}(\hat{d}_{lk\sigma})\) creates (annihilates) an electron in the state \(\left|lk\sigma\right\rangle\) with energy \(\xi_{lk\sigma}\) in the lead \(l\), and \(l=\mathrm{L}\) and R represents the left and the right leads. 
Assuming that the electrons in the leads stay at equilibrium, we express the average occupation number as \(\langle\hat{d}_{lk\sigma}^{\dagger}\hat{d}_{l^{\prime}k^{\prime}\sigma^{\prime}}\rangle=\delta_{l,l^{\prime}}\delta_{k,k^{\prime}}\delta_{\sigma,\sigma^{\prime}}f_{l}(\xi_{lk\sigma})\), where \(f_{l}(\xi_{lk\sigma})=(1+e^{(\xi_{lk\sigma}-\mu_{l})/k_{\rm B}T})^{-1}\) is the Fermi function of lead \(l\) with chemical potential \(\mu_{l}\) at temperature \(T\). In this work, we consider the symmetric bias condition \(\mu_{l}=\mu_{0}+\zeta_{l}\mathrm{e}V_{\rm sd}/2\) with \(\zeta_{\rm L}=1\) and \(\zeta_{\rm R}=-1\), where \(V_{\rm sd}\) is the bias voltage, and \(\mu_{0}\) is the equilibrium chemical potential for the electrodes. The system-lead coupling is modeled as \(\hat{H}_{\rm sys-lead}=\sum_{k,\sigma}(T_{{\rm L}k,1}\hat{c}_{1\sigma}^{\dagger}\hat{d}_{{\rm L}k\sigma}+T_{{\rm R}k,2}\hat{c}_{2\sigma}^{\dagger}\hat{d}_{{\rm R}k\sigma}+\mathrm{H.c.})\), where we assume the left (right) lead is only coupled to the first (second) site of the system. Furthermore, we specify the transitions between many-body states using Hubbard operators \(\hat{X}^{a,b}\equiv\left|N_{a},a\right\rangle\!\left\langle N_{b},b\right|\) (see Supplemental Material [40] for more details). The advantage of using Hubbard operators is that they provide a convenient way to describe many-body state transitions and incorporate the characteristics of fermions in the coefficient of each operator [38; 41]. As a result, we rewrite the coupling Hamiltonian as \(\hat{H}_{\rm sys-lead}=\sum_{ab,k,\sigma}(V_{{\rm L}k\sigma,ab}^{*}\hat{X}^{b,a}\hat{d}_{{\rm L}k\sigma}+V_{{\rm R}k\sigma,ab}^{*}\hat{X}^{b,a}\hat{d}_{{\rm R}k\sigma}+\mathrm{H.c.})\) based on the Hubbard operator techniques, where the transformed coupling becomes \(V_{lk\sigma,ab}=T_{lk,i}^{*}\cdot\left\langle N_{a},a|\hat{c}_{i\sigma}|N_{b},b\right\rangle\). The index \(i\) is omitted in \(V_{lk\sigma,ab}\) because \(i\) is uniquely determined by \(l\), i.e., \(i=1\) (\(2\)) when \(l=\mathrm{L}\) (\(\mathrm{R}\)). Here we do not consider the effect of the external potential exerted by the bias, i.e., the on-site energy \(\varepsilon\) does not vary with the source-drain voltage \(V_{\rm sd}\). This effect can lead to level renormalization and slightly modify the pattern of the Coulomb staircase [42; 43].

_Quantum master equation analysis._-- To incorporate the effect of many-body coherence into quantum transport, instead of using the Pauli master equation (PME) or the Lindblad quantum master equation, we adopt the Redfield formalism, which has been used extensively to describe the electronic bath in the electrodes [45; 46; 47] or phonon effects on quantum transport [48; 49; 50; 51].
We start from the quantum Liouville equation, treat the two leads \(\hat{H}_{\rm lead}\) and the phonons \(\hat{H}_{\rm ph}\) as baths, make the Born-Markov approximation, and finally derive a Redfield-type fermionic quantum master equation based on Hubbard operators (see Supplemental Material [40] for more details) as follows, \[\frac{\mathrm{d}\hat{\rho}_{\rm el}(t)}{\mathrm{d}t}=-\frac{i}{\hbar}[\hat{H}_{\rm el},\hat{\rho}_{\rm el}(t)]+\mathcal{R}_{\rm lead}\hat{\rho}_{\rm el}(t)+\mathcal{R}_{\rm ph}\hat{\rho}_{\rm el}(t), \tag{1}\] where \(\hat{\rho}_{\rm el}(t)\) is the electronic density matrix, \(\mathcal{R}_{\rm lead}\) is the lead Redfield superoperator, which describes the electron transport processes between the system and the electrodes, and \(\mathcal{R}_{\rm ph}\) is the phonon Redfield superoperator, which accounts for the influence of phonons on electrons in the system. Note that it is well known that the phonon bath (associated with the phonon Redfield superoperator \(\mathcal{R}_{\rm ph}\)) can lead to electronic state relaxation and decoherence in the electronic density matrix [52; 53; 54], but the effect of the electronic bath (associated with the lead Redfield superoperator \(\mathcal{R}_{\rm lead}\)) on electron transport remains much less well understood. In order to focus on many-body electronic coherence due to the electronic bath, we neglect the effect of \(\mathcal{R}_{\rm ph}\hat{\rho}_{\rm el}(t)\) in the main text and leave the discussion of the effects of phonons on coherence and transport properties to the Supplemental Material [40]. The operation of the lead Redfield superoperator on the electronic density matrix can be expressed as \[\left\langle N_{a},a\right|\mathcal{R}_{\rm lead}\hat{\rho}_{\rm el}(t)\left|N_{b},b\right\rangle=\sum_{cd}\mathcal{R}_{ab,cd}\rho_{cd}, \tag{2}\] where the states \((a,b,c,d)\) are eigenstates of \(\hat{H}_{\rm el}\). Several remarks are in order. \(\mathcal{R}_{ab,cd}\) in Eq. (2) can be decomposed into four mechanisms \(\mathcal{R}^{I},\mathcal{R}^{II},\mathcal{R}^{III},\) and \(\mathcal{R}^{IV}\). The first mechanism \(\mathcal{R}^{I}\) and the second mechanism \(\mathcal{R}^{II}\) correspond to the two-path quantum interference formed of state-to-state transitions caused by electron and hole injections, respectively. The third mechanism \(\mathcal{R}^{III}\) and the fourth mechanism \(\mathcal{R}^{IV}\) correspond to the indirect interference caused by electron and hole injections, respectively.
For example, \(\mathcal{R}^{I}_{ab,cd}\rho_{cd}=-\frac{i}{\hbar}\sum_{l}\left[\Sigma^{(l),<}_{db,ca}(\varepsilon_{db})-\left(\Sigma^{(l),<}_{ca,db}(\varepsilon_{ca})\right)^{*}\right]\rho_{cd}\) represents the two-path quantum interference formed of \(\left|N-1,c\right\rangle\rightarrow\left|N,a\right\rangle\) and \(\left|N-1,d\right\rangle\rightarrow\left|N,b\right\rangle\) caused by electron injections, where the lesser self-energy \(\Sigma^{(l),<}_{db,ca}(\varepsilon_{db})\) describes the state-to-state transition accompanied by a single-electron injection with energy \(\varepsilon_{db}\equiv\varepsilon_{b}-\varepsilon_{d}\) (see Supplemental Material [40] for more details). For simplicity, we consider the wideband approximation [55], and the lesser self-energy can be expressed in terms of the Hubbard operator \(\hat{X}^{a,b}\) as \[\Sigma^{(l),<}_{db,ca}(\varepsilon_{db})=i\frac{\Gamma}{2}f_{l}(\varepsilon_{db})\sum_{\sigma}\mathrm{Tr}\left[\hat{c}^{\dagger}_{i\sigma}\hat{X}^{d,b}\right]^{*}\mathrm{Tr}\left[\hat{c}^{\dagger}_{i\sigma}\hat{X}^{c,a}\right], \tag{3}\] which is composed of a coupling constant \(\Gamma\), the electron occupation \(f_{l}(\varepsilon_{db})\), and the transition amplitudes between many-body states of the system due to an injected electron. Similarly, the greater self-energy in the wideband approximation comprises the coupling constant \(\Gamma\), the hole occupation \(1-f_{l}(\varepsilon_{db})\), and the transition amplitudes between many-body states of the system due to a hole entering the system.

\begin{table} \begin{tabular}{c c c} \hline \hline Hilbert space & Energy \(\varepsilon_{a}\) & Eigenstate \(\left|N_{a},a\right\rangle\) \\ \hline Zero-electron & \(0\) & \(\left|0,S^{0}\right\rangle\) \\ One-electron & \(\varepsilon-t\) & \(\left|1,D_{+,\uparrow}^{1}\right\rangle\), \(\left|1,D_{+,\downarrow}^{1}\right\rangle\) \\ & \(\varepsilon+t\) & \(\left|1,D_{-,\uparrow}^{1}\right\rangle\), \(\left|1,D_{-,\downarrow}^{1}\right\rangle\) \\ Two-electron & \(2\varepsilon-(x-U)/2\) & \(\left|2,S_{+}^{2}\right\rangle\) \\ & \(2\varepsilon\) & \(\left|2,T_{0}^{2}\right\rangle\), \(\left|2,T_{+1}^{2}\right\rangle\), \(\left|2,T_{-1}^{2}\right\rangle\) \\ & \(2\varepsilon+U\) & \(\left|2,S_{\rm CS}^{2}\right\rangle\) \\ & \(2\varepsilon+(x+U)/2\) & \(\left|2,S_{-}^{2}\right\rangle\) \\ Three-electron & \(3\varepsilon+U-t\) & \(\left|3,D_{-,\uparrow}^{3}\right\rangle\), \(\left|3,D_{-,\downarrow}^{3}\right\rangle\) \\ & \(3\varepsilon+U+t\) & \(\left|3,D_{+,\uparrow}^{3}\right\rangle\), \(\left|3,D_{+,\downarrow}^{3}\right\rangle\) \\ Four-electron & \(4\varepsilon+2U\) & \(\left|4,S^{4}\right\rangle\) \\ \hline \hline \end{tabular} \end{table} Table 1: The 16 eigenstates of the system Hamiltonian and their corresponding energies [44], where \(x\equiv\sqrt{U^{2}+16t^{2}}\).
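Once the Redfield tensor \(\mathcal{R}_{ab,cd}\) has been assembled from these self-energies, Eq. (1) can be solved for the steady state. A minimal sketch (ours, with \(\hbar=1\) and a precomputed tensor `R` assumed): vectorize the density matrix, build the full Liouvillian, and take its null-space eigenvector.

```python
import numpy as np

def steady_state(H_el, R, hbar=1.0):
    """Steady state of Eq. (1): d(rho)/dt = -i/hbar [H_el, rho] + R(rho).

    H_el: (d, d) electronic Hamiltonian.
    R:    (d, d, d, d) Redfield tensor acting as
          (R rho)_{ab} = sum_{cd} R[a, b, c, d] * rho[c, d]   (cf. Eq. (2)).
    """
    d = H_el.shape[0]
    Id = np.eye(d)
    # Row-major vectorization: vec(A X B) = (A kron B^T) vec(X)
    L = (-1j / hbar) * (np.kron(H_el, Id) - np.kron(Id, H_el.T))
    L += R.reshape(d * d, d * d)
    vals, vecs = np.linalg.eig(L)
    rho = vecs[:, np.argmin(np.abs(vals))].reshape(d, d)   # eigenvalue ~ 0
    return rho / np.trace(rho)                             # enforce Tr(rho) = 1
```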
To explore the correlation between the steady-state electric current and many-body coherence, we compute the electric current [56] from the steady-state density matrix \(\hat{\rho}_{\rm el}(t)\) as \[I=\frac{2\mathrm{e}}{\hbar}\sum_{acd}\mathrm{Im}\bigg\{\left[\Sigma^{(\mathrm{L}),<}_{da,ca}(\varepsilon_{da})-\left(\Sigma^{(\mathrm{L}),>}_{ac,ad}(\varepsilon_{ad})\right)^{*}\right]\rho_{cd}\bigg\}, \tag{4}\] where \(\Sigma^{(\mathrm{L}),<}_{da,ca}(\varepsilon_{da})\) corresponds to a transition from an \(N\)-electron to an \((N+1)\)-electron state due to an electron injected from the left electrode, while \(\Sigma^{(\mathrm{L}),>}_{ac,ad}(\varepsilon_{ad})\) corresponds to a transition from an \(N\)-electron to an \((N-1)\)-electron state caused by an injected hole. _Many-body coherence and current blockade._-- To demonstrate that the effect of many-body coherence on quantum transport can be experimentally observed in a realistic system, we consider AH with experimental parameters [39]. As shown in Fig. 2a, the electric current (the black solid line) decreases as the many-body coherence between eigenstates \(\left|2,S^{2}_{\mathrm{CS}}\right\rangle\) and \(\left|2,S^{2}_{-}\right\rangle\) (the blue dashed line) increases with bias. Furthermore, we find that, for a model system with large Coulomb repulsion and weak intersite electron hopping, many-body coherence can reach a maximum and the electric current can be completely suppressed to zero, as shown in Fig. 2b. It is worth mentioning that the current blockade phenomenon in Figs. 2a and 2b is completely different from the well-known "Coulomb blockade". In Coulomb blockade, the electric current exhibits a "Coulomb staircase" with increasing bias voltage (the orange solid lines in Figs. 2a and 2b), whereas Figs. 2a and 2b show that the electric current decreases with increasing bias voltage, similar to the behavior of a negative differential resistance. Here, we would like to emphasize that the Coulomb staircase can be fully understood by the PME approach, which is extensively employed to study nanodevices [57; 44; 58]. However, the PME approach does not account for the effect of "coherence" induced by the interaction between many-body states and electron baths. Note that the current suppression is found to be robust against electron-phonon couplings when \(g\) is not large (see Supplemental Material [40] for more details). Our numerical simulations clearly demonstrate that coherence between many-body states cannot be neglected and is directly associated with the electric current. To quantitatively understand the current blockade in Figs. 2a and 2b, we derive a current-coherence relationship for a system with weak hopping and strong Coulomb repulsion. The relationship is established based on two assumptions. First, to include the effect of Coulomb repulsion \(U\) on currents, we consider \(\mathrm{e}V_{\mathrm{sd}}>U\) in the zero-temperature limit. Furthermore, for simplicity of derivation, we neglect the influence of \(\varepsilon\) and \(t\) on the Fermi function. Under this condition, we can approximate \(f_{\mathrm{L}}(\varepsilon_{ca})=1\) and \(f_{\mathrm{R}}(\varepsilon_{ca})=0\) in Eq. (3). Second, we only keep the many-body coherences \(\rho_{S^{2}_{+},T^{2}_{0}}\), \(\rho_{S^{2}_{+},T^{2}_{+1}}\), \(\rho_{S^{2}_{+},T^{2}_{-1}}\), and \(\rho_{S^{2}_{\mathrm{CS}},S^{2}_{-}}\) when solving Eq. (1). It is well known that coherence can be neglected when there is a large energy gap between two states, i.e., the secular approximation in the derivation of the PME approach.
When \(t/U\) is small, the energy gap between \(\left|2,S^{2}_{\mathrm{CS}}\right\rangle\) and \(\left|2,S^{2}_{-}\right\rangle\) and the energy gap between \(\left|2,S^{2}_{+}\right\rangle\) and the triplet states \(\left|2,T^{2}_{0}\right\rangle\), \(\left|2,T^{2}_{+1}\right\rangle\), \(\left|2,T^{2}_{-1}\right\rangle\) are the smallest. As a result, we consider these coherence terms when solving Eq. (1) and find that only \(\rho_{S^{2}_{\mathrm{CS}},S^{2}_{-}}\) is associated with the current. Finally, we obtain a current-coherence relationship as (see Supplemental Material [40] for more details) \[I=\frac{\mathrm{e}\Gamma}{\hbar}\bigg\{1-2\big[1+\frac{1}{4}(\frac{4t^{2}}{U\Gamma})^{2}\big]^{-1/2}\Big|\rho_{S^{2}_{\mathrm{CS}},S^{2}_{-}}\Big|\bigg\}, \tag{5}\] showing that the electric current can be expressed in terms of the many-body coherence \(\rho_{S^{2}_{\mathrm{CS}},S^{2}_{-}}\) and the kinetic exchange \(4t^{2}/U\) in units of the system-lead coupling \(\Gamma\). In Figs. 2a and 2b, the green lines almost coincide with the black lines when current blockade occurs, which reveals that Eq. (5) has successfully captured the physics behind the current blockade and elucidated the influence of many-body coherence on quantum transport.

Figure 2: Current blockade induced by many-body coherence in an AH system [39] for (a) \(\varepsilon=0.1\) eV, \(t=0.01\) eV, \(U=0.08\) eV, \(\Gamma=0.005\) eV and in a model system for (b) \(\varepsilon=-0.25\) eV, \(t=0.005\) eV, \(U=0.8\) eV, \(\Gamma=0.001\) eV. Other parameters are \(T=300\) K, \(V_{\mathrm{g}}=0\) V, and \(g=0\) (no electron-phonon coupling). The cases for \(g\neq 0\) can be found in the Supplemental Material [40]. The orange, black, green, and red solid lines correspond to steady-state currents derived from the PME, Eq. (4), Eq. (5), and Eq. (6), respectively. The dashed blue line describes the magnitude of the coherence \(\rho_{S^{2}_{\mathrm{CS}},S^{2}_{-}}\).

Furthermore, the kinetic exchange \(4t^{2}/U\), resulting from the interplay between hopping and many-body interactions, describes the intersite delocalization of electrons. Therefore, when the kinetic exchange is small, electrons accumulate on a single site, and the current is blockaded. Note that \(4t^{2}/U\) corresponds to the energy gap \(\Delta E_{S^{2}_{\text{CS}},S^{2}_{-}}=(\sqrt{U^{2}+16t^{2}}-U)/2\) when \(t/U\ll 1\). If the energy gap \(\Delta E_{S^{2}_{\text{CS}},S^{2}_{-}}\) is small enough, i.e., \(4t^{2}/U\Gamma\) is negligible, then Eq. (5) can be further simplified as \[I=\frac{\text{e}\Gamma}{\hbar}\bigg\{1-2\Big|\rho_{S^{2}_{\text{CS}},S^{2}_{-}}\Big|\bigg\}, \tag{6}\] indicating that the many-body coherence \(\rho_{S^{2}_{\text{CS}},S^{2}_{-}}\) becomes the dominant factor in the current blockade. When \(t/U\) is not small enough, e.g., \(t/U=0.125\) in Fig. 2a, Eq. (6) (the red line) slightly underestimates the electric current in the current-blockade region due to the neglect of the kinetic exchange effect. On the other hand, when \(t/U\ll 1\), e.g., \(t/U=0.00625\) in Fig. 2b, the red line matches the black line in the current-blockade region, confirming that many-body coherence dominates the current suppression.
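As a quick numerical sanity check (our sketch, not code from the paper), Eqs. (5) and (6) can be evaluated directly; with the Fig. 2b parameters, a maximal coherence \(|\rho_{S^{2}_{\mathrm{CS}},S^{2}_{-}}|=0.5\) suppresses the current to nearly zero.

```python
import numpy as np

HBAR_EVS = 6.582e-16      # hbar in eV*s
E_C = 1.602e-19           # elementary charge in C

def current_eq5(coh, t, U, Gamma):
    """Eq. (5): current (A) vs. coherence magnitude |rho_{S2CS,S2-}| (eV inputs)."""
    kin = 4 * t**2 / (U * Gamma)                 # kinetic exchange in units of Gamma
    return (E_C * Gamma / HBAR_EVS) * (1 - 2 * coh / np.sqrt(1 + kin**2 / 4))

def current_eq6(coh, Gamma):
    """Eq. (6): the t/U << 1 limit of Eq. (5)."""
    return (E_C * Gamma / HBAR_EVS) * (1 - 2 * coh)

# Fig. 2b parameters: t = 0.005 eV, U = 0.8 eV, Gamma = 0.001 eV
print(current_eq5(0.5, 0.005, 0.8, 0.001))   # ~0: coherence-induced blockade
print(current_eq5(0.0, 0.005, 0.8, 0.001))   # e*Gamma/hbar: no blockade
```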
_Control of current blockade._-- Control of the electric current is a key issue in quantum transport [16, 59, 60, 61, 62]. Here, we demonstrate that it is feasible to manipulate many-body coherence and the current blockade via internal Hamiltonian design and an external gate voltage. First, for Hamiltonian design, the relative magnitudes of the intersite coupling \(t\) and the Coulomb repulsion \(U\) are directly related to many-body coherence and the current blockade. As shown in Fig. 3a, when \(t/U\ll 0.1\), the current decreases to almost zero, and the many-body coherence \(\rho_{S^{2}_{\text{CS}},S^{2}_{-}}\) approaches its maximum of \(0.5\). In brief, the maximum value of the coherence can be understood from the fact that small \(t/U\) reduces the energy gap between \(|2,S^{2}_{\text{CS}}\rangle\) and \(|2,S^{2}_{-}\rangle\) to almost zero and thus leads to the maximum \(\rho_{S^{2}_{\text{CS}},S^{2}_{-}}=0.5\). The strong current blockade originates mainly from the many-body coherence \(\rho_{S^{2}_{\text{CS}},S^{2}_{-}}\): when \(t/U\ll 0.1\), the current calculated from Eq. (6) (the red line), which neglects the kinetic exchange effect, coincides with the current calculated from Eq. (5) (the green line). The small deviation between the green line and the red line in the region \(t/U\approx 0.1\sim 0.7\) indicates that the kinetic exchange \(4t^{2}/U\) can affect the electric current, but many-body coherence is still the main mechanism for the current blockade. When \(t/U\gg 0.7\), the many-body coherence \(\rho_{S^{2}_{\text{CS}},S^{2}_{-}}\) reaches zero, so the current blockade disappears. Fig. 3a clearly shows that one can control the electric current and many-body coherence via the modification of \(t/U\). Second, we find that the many-body coherence of a system can be significantly influenced by an external gate field. Fig. 3b shows that, with an increasing gate voltage \(V_{\text{g}}\), the many-body coherence \(\rho_{S^{2}_{\text{CS}},S^{2}_{-}}\) transitions from zero to its maximum and the current drops to zero. Moreover, the transition gate voltage increases with increasing on-site energy \(\varepsilon\), where the red, blue, and green lines correspond to \(\varepsilon=-0.2\), \(-0.15\), and \(-0.1\) eV, respectively. Control via the gate voltage \(V_{\text{g}}\) and via Hamiltonian design correspond to different mechanisms of forming the current blockade because control of \(V_{\text{g}}\) does not change the kinetic exchange \(4t^{2}/U\). To explain the gate dependence of the many-body coherence \(\rho_{S^{2}_{\text{CS}},S^{2}_{-}}\), we derive an analytical expression for the coherence-gate relationship, \(|\rho_{S^{2}_{\text{CS}},S^{2}_{-}}|=\frac{1}{2}\left[2-\Theta(\mu_{\text{L}}-\varepsilon-U+\mathrm{e}V_{\text{g}})\right]/\left[8-7\Theta(\mu_{\text{L}}-\varepsilon-U+\mathrm{e}V_{\text{g}})\right]\), by making the approximation \(f_{\text{L}}(\varepsilon+U-\mathrm{e}V_{\text{g}})=\Theta(\mu_{\text{L}}-\varepsilon-U+\mathrm{e}V_{\text{g}})\) and \(t\approx 0\) (see Supplemental Material [40] for more details), where \(\Theta(\mu_{\text{L}}-\varepsilon-U+\mathrm{e}V_{\text{g}})\) is the Heaviside step function. According to the coherence-gate relation, for \(\varepsilon=-0.2\) eV, \(U=0.8\) eV, and \(V_{\text{sd}}=1.0\) V, the many-body coherence \(\rho_{S^{2}_{\text{CS}},S^{2}_{-}}\) takes its maximum value of \(0.5\) once \(V_{\text{g}}\) exceeds \(0.1\) V, which is consistent with our simulation result (the red line). Fig. 3b also indicates that, with lower on-site energies, the electric current and many-body coherence can be manipulated with smaller gate voltages, showing potential for transistor applications.
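The coherence-gate relation above is simple enough to evaluate directly; a sketch (ours) with \(\mu_{0}=0\) assumed, reproducing the transition to the maximal coherence of 0.5 once \(V_{\mathrm{g}}\) exceeds 0.1 V for the red-line parameters quoted in the text:

```python
def coherence_vs_gate(Vg, eps=-0.2, U=0.8, mu0=0.0, Vsd=1.0):
    """Coherence-gate relation in the t -> 0, zero-temperature limit.
    Energies in eV, voltages in V (e = 1 in these units); mu0 = 0 is assumed."""
    muL = mu0 + Vsd / 2                          # symmetric bias, zeta_L = +1
    theta = 1.0 if (muL - eps - U + Vg) > 0 else 0.0
    return 0.5 * (2 - theta) / (8 - 7 * theta)

print(coherence_vs_gate(0.05))   # below the transition gate voltage
print(coherence_vs_gate(0.15))   # 0.5: maximal coherence, current blocked
```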
(5) for a model system using the Redfield-type fermionic quantum master equation. The results imply that many-body coherence can eliminate the well-known Coulomb staircase and lead to negative differential resistance, which cannot be described by the PME approach [57, 44, 58] due to the lack of coherence. Furthermore, it is shown that many-body coherence can be manipulated by modifying the internal system Hamiltonian or applying an external gate voltage. Finally, we find that the electric current can be switched via many-body coherence at a low gate voltage, indicating potential for coherence-controlled transistors. These results open up a new class of electronic devices in quantum electronics and should motivate further experimental and theoretical investigations of the effects of many-body coherence in condensed matter physics and quantum technology. We thank Chih-En Shen, Hung-Sheng Tsai, Ming-Wei Lee, Yi-Ting Chuang, Qian-Rui Huang, Michitoshi Hayashi, and Yang-Hao Chan for useful discussions. This research was supported by Academia Sinica (AS-CDA-111-M02) and the National Science and Technology Council (Grant Nos. 110-2113-M-001-053 and 111-2113-M-001-027-MY4).
2308.00389
Autonomous data extraction from peer reviewed literature for training machine learning models of oxidation potentials
We present an automated data-collection pipeline involving a convolutional neural network and a large language model to extract user-specified tabular data from peer-reviewed literature. The pipeline is applied to 74 reports published between 1957 and 2014 with experimentally-measured oxidation potentials for 592 organic molecules (-0.75 to 3.58 V). After data curation (solvents, reference electrodes, and missed data points), we trained multiple supervised machine learning models reaching prediction errors similar to experimental uncertainty ($\sim$0.2 V). For experimental measurements of identical molecules reported in multiple studies, we identified the most likely value based on out-of-sample machine learning predictions. Using the trained machine learning models, we then estimated oxidation potentials of $\sim$132k small organic molecules from the QM9 data set, with predicted values spanning 0.21 to 3.46 V. Analysis of the QM9 predictions in terms of plausible descriptor-property trends suggests that aliphaticity increases the oxidation potential of an organic molecule on average from $\sim$1.5 V to $\sim$2 V, while an increase in number of heavy atoms lowers it systematically. The pipeline introduced offers significant reductions in human labor otherwise required for conventional manual data collection of experimental results, and exemplifies how to accelerate scientific research through automation.
Siwoo Lee, Stefan Heinen, Danish Khan, O. Anatole von Lilienfeld
2023-08-01T09:11:30Z
http://arxiv.org/abs/2308.00389v1
Autonomous data extraction from peer reviewed literature for training machine learning models of oxidation potentials ###### Abstract We present an automated data-collection pipeline involving a convolutional neural network and a large language model to extract user-specified tabular data from peer-reviewed literature. The pipeline is applied to 74 reports published between 1957 and 2014 with experimentally-measured oxidation potentials for 592 organic molecules (\(-0.75\) to \(3.58\) V). After data curation (solvents, reference electrodes, and missed data points), we trained multiple supervised machine learning models reaching prediction errors similar to experimental uncertainty (\(\sim\)0.2 V). For experimental measurements of identical molecules reported in multiple studies, we identified the most likely value based on out-of-sample machine learning predictions. Using the trained machine learning models, we then estimated oxidation potentials of \(\sim\)132k small organic molecules from the QM9 data set, with predicted values spanning 0.21 to 3.46 V. Analysis of the QM9 predictions in terms of plausible descriptor-property trends suggests that aliphaticity increases the oxidation potential of an organic molecule on average from \(\sim\)1.5 V to \(\sim\)2 V, while an increase in the number of heavy atoms lowers it systematically. The pipeline introduced offers significant reductions in human labor otherwise required for conventional manual data collection of experimental results, and exemplifies how to accelerate scientific research through automation. ## I Introduction The accessibility and utilization of literature data through systematic reviews and meta-analyses are of significant importance across all scientific disciplines to rigorously assess the wealth of information contained in multiple studies and compile them in large-scale data sets [1; 2; 3; 4]. However, reproducibility concerns as well as the rapid growth in the number of scientific publications [5; 6] pose significant limitations on efficiently reading, understanding, and extracting the enormous volume of ever-growing information. The development of automated retrieval of pertinent information [7] could address the challenge of training meaningful machine learning (ML) models that require sufficiently large scientific data sets [8; 9]. In particular, tabular data in literature sources holds immense importance in scientific research as it organizes a large body of information in an easily readable fashion. Thus, the efficient extraction of tabular information would greatly streamline data collection from a large number of studies. Yet, upon examining different reference sources, it is evident that tables are presented in a variety of layouts, visual appearances, and encoding formats (e.g., HTML, PDF, JPG), which poses a significant hurdle in the automated detection of tables in the literature [10]. However, recent advances in algorithmic designs and computing capabilities have seen the development of convolutional neural network (CNN) models, such as TableNet [10; 11; 12], that are trained to locate tables in document pages displayed as images and are capable of reaching state-of-the-art performances on the ICDAR 2013 table competition data set [13]. A secondary challenge that follows table detection using CNN models is the accurate extraction of text from images, a task known as optical character recognition (OCR) [14].
Google's Tesseract-OCR engine [15; 16] and various ML and deep neural network (DNN) models have been demonstrated to successfully convert images of typed, handwritten, or printed text into machine-encoded text with low character-level substitution rates and word-level error rates [17; 16]. A third, closely related problem relevant to scientific research is the ability of these models to extract specific text. This presents a significant challenge due to the need for semantic understanding, especially as documents may display several tables containing different types of data with irrelevant accompanying information [18]. The recent development of large language models (LLMs) presents a promising solution to the challenge of semantic understanding as they can leverage their extensive training on large volumes of text to recognize and interpret the meaning of specified text [19]. Indeed, LLMs have already seen widespread usage for a variety of scientific purposes [20]. For instance, in chemistry, LLMs have been utilized to generate code, learn complex molecular distributions, aid in materials and drug design, and extract chemical information from scientific documents [21; 22; 23; 24; 25; 26]. Generative pre-trained transformer models (e.g., GPT-2, GPT-3.5, GPT-4) developed by OpenAI present particularly exciting applications for research in chemistry and other scientific disciplines for their human-like semantic understanding and their ability to generate human-like text when presented with a prompt [27; 28; 29; 30; 31]. In this work, an automated data-collection pipeline is introduced that accurately locates tables and extracts text from literature sources using the CNN TableNet and the LLM GPT-3.5, respectively. We demonstrate its usefulness by building a chemically-diverse data set of experimentally-measured oxidation potentials (measured in acetonitrile solvent vs. standard calomel electrode, SCE) of organic molecules from peer-reviewed literature. Oxidation potentials are important electrochemical stability and reactivity descriptors; modeling them with efficient machine learning and high predictive power could crucially accelerate the computational design and discovery of superior functional materials, such as batteries, supercapacitors, electrolytes, and electrocatalysts for applications in fuel cells and renewable energy conversion [32; 33; 34; 35]. Based on the experimental data extracted using our pipeline, we have trained multiple supervised ML models that reach experimental uncertainty and that can be used to identify less/more likely values among conflicting data entries. The generalizability of the ML models is used to predict and analyze the oxidation potential distribution in \(\sim\)132k organic molecules coming from the QM9 data set [36]. Previous ML studies of redox potentials of organic molecules were limited to small data sets based on simulated values which typically encode severe approximations, making it difficult to draw direct conclusions relevant for experimental decision making [37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. ## II Methodology ### From Literature to Data Set The first component of the automated tabular data extraction pipeline (Figure 1) after the collection of literature sources is the detection and localization of tables (**CNN** step of Figure 1).
This is accomplished by using TableNet with a DenseNet-121 encoder architecture (8,220,550 trainable parameters; 461,504 non-trainable parameters) with dropout (0.6) [48] (see _Section J_ of Supplementary Information for the Python implementation used in this work and Paliwal _et al._ [12] for further details about the architecture). This model was trained for 35 epochs on 495 scanned RGB images (816 \(\times\) 1056 pixels) of document pages containing tables with English text compiled in the Marmot data set (80/20 train/test random split) with labelled coordinates of the rectangular table regions in each image [49]. The model learns these coordinates such that it can output cropped images of the documents containing just the detected tables. The generalization capabilities of the CNN were then assessed by its ability to locate tables in 74 literature sources (published 1957-2014), saved as PDFs, that reported the experimentally-measured oxidation potentials of organic molecules (see _Bibliography_ of Supplementary Information for the used references). pdf2image [50] was used to convert the PDF pages to JPGs (816 \(\times\) 1056 pixels), which were inputted into the CNN.

Figure 1: A flowchart representation of the automated data acquisition pipeline for extracting experimentally-measured oxidation potentials reported in literature. Pages displaying tables in 74 reference sources (PDFs) are converted to images (JPGs) and inputted into a convolutional neural network, TableNet, trained to locate tables in images and output images cropped around tables. Text contained in the outputted images, extracted using pytesseract [47], is then fed to GPT-3.5 with a prompt to extract the names of molecules and their oxidation potentials. Used Python packages for each step are shown beside the arrows.

The text contained in the outputted cropped images was extracted using pytesseract [47], the Python wrapper for Tesseract-OCR (**pytesseract** step of Figure 1). The blocks of text were each individually forwarded into the GPT-3.5 API once to screen for data of oxidation potentials with the following prompt (**LLM** steps of Figure 1): _Does this following piece of text contain one or more tables of oxidation potentials of organic molecules? If it does, give the code for a neatly-displayed Panda DataFrame explicitly listing only the molecules and their corresponding oxidation potentials. Ensure to list all molecules. Also, if stated, report the reference electrode and the solvent the measurements occurred in._ If GPT-3.5 was able to successfully output the names of molecules, their oxidation potentials, and the reference electrode and solvent used in the experimental measurements, the master data set was compiled by including only neutrally-charged samples measured in acetonitrile to account for typical electrochemical measurement conditions in the laboratory [51]. For samples labelled by their full names, the Leruli API [52] was used to convert the names to their canonical SMILES [53], followed by the use of RDKit [54] to produce XYZ files from the SMILES (**Leruli**, **RDKit** steps of Figure 1). The XYZ files were inputted into the extended tight binding (XTB) API [55, 56] to produce (implicit solvation) optimized geometries in acetonitrile. XTB also produced 17 calculated values for each molecule, including their HOMO-LUMO gaps and solvation free energies in acetonitrile. The oxidation potentials of molecules measured in multiple studies were taken as the mean value.
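For concreteness, the extraction loop might look like the following sketch (ours, not the released code): `table_net.detect` is a hypothetical stand-in for the trained TableNet model returning `(left, top, right, bottom)` boxes, and the OpenAI call uses the pre-v1 `openai` package interface that was current in 2023.

```python
import pdf2image
import pytesseract
import openai

PROMPT = ("Does this following piece of text contain one or more tables of "
          "oxidation potentials of organic molecules? ... "  # full prompt as in the text
          "Also, if stated, report the reference electrode and the solvent.")

def extract_from_pdf(pdf_path, table_net):
    """Run one reference PDF through the CNN -> OCR -> LLM pipeline."""
    replies = []
    for page in pdf2image.convert_from_path(pdf_path, size=(816, 1056)):
        for box in table_net.detect(page):            # hypothetical detector API
            text = pytesseract.image_to_string(page.crop(box))
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": f"{PROMPT}\n\n{text}"}],
            )
            replies.append(resp.choices[0].message.content)
    return replies
```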
Measurements referenced against non-SCE electrodes were converted to be referenced against SCE according to handbooks on the standard potentials of reference electrodes [57, 58]. The data set was supplemented as necessary by human labor for samples that the pipeline missed or incorrectly reported, as well as for cases in which the reference electrodes and solvents used in the experimental measurements could not be determined from the text contained in the tables. ### eXtreme Gradient Boosting and Kernel Ridge Regression XGBoost regression (eXtreme Gradient Boosting regression, XGBR) was selected as a candidate ML algorithm due to its exceptional performance and versatility in handling various regression tasks thanks to gradient boosting and optimized tree-based ensemble learning algorithms [59]. Kernel ridge regression (KRR) was also tested, as it is a popular algorithm for ML in quantum chemistry due to its ease of hyperparameter tuning, in addition to its excellent ability to capture non-linear relationships using kernel functions and its efficient handling of high-dimensional data [60, 61]. It accomplishes this using kernel functions, which in this work are selected to be Laplacian kernels of the form \[K(\mathbf{A}_{i},\mathbf{B}_{j})=\exp\left(-\frac{||\mathbf{A}_{i}-\mathbf{B}_{j}||_{1}}{\sigma}\right) \tag{1}\] where \(\mathbf{A}_{i},\mathbf{B}_{j}\) denote the representation vectors of molecules \(i,j\) [62, 63]. Bayesian optimization implemented with hyperopt [64] was used for hyperparameter tuning of both algorithm types, with hyperparameters selected as those that returned the lowest mean absolute error, MAE, on four-fold cross-validation on the training set (80/20 train/test random split).
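A minimal KRR sketch with the Laplacian kernel of Eq. (1) follows; scikit-learn's built-in 'laplacian' kernel is parameterized as \(\exp(-\gamma\|\mathbf{A}_{i}-\mathbf{B}_{j}\|_{1})\), i.e., \(\gamma=1/\sigma\). The hyperparameter values and file names below are placeholders (the paper tunes the hyperparameters with hyperopt).

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

X = np.load("representations.npy")        # (n_molecules, n_features), e.g. SLATM
y = np.load("oxidation_potentials.npy")   # in V vs. SCE
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# kernel="laplacian" implements exp(-gamma * ||A - B||_1), so gamma = 1/sigma
model = KernelRidge(kernel="laplacian", gamma=1e-4, alpha=1e-8)
model.fit(X_tr, y_tr)
print("MAE = %.2f V" % np.mean(np.abs(model.predict(X_te) - y_te)))
```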
### Physics-Based Structural Representations Four XGBR models were developed in this work with the following input features: ACSF [65]; ACSF, XTB values; ACSF, MORDRED [66]; ACSF, XTB values, MORDRED. MORDRED is a popular two- and three-dimensional molecular descriptor-calculation software in cheminformatics and is used, in this work, to generate three-dimensional descriptors from MOL files produced from the XTB-geometry-optimized XYZ coordinates. Three KRR models were also developed with input features of ACSF, SOAP [67], and SLATM [68]. The XYZ files were used to produce three popular physics-inspired structural representations [69] of atomic and molecular environments: atom-centered symmetry functions (ACSF), smooth overlap of atomic positions (SOAP) [67], and the Spectrum of London and Axilrod-Teller-Muto potentials (SLATM) [68]. These representations were used to predict the oxidation potentials of organic molecules using three KRR models. ACSFs are local descriptors that express a molecule's total energy as a sum of atomic energies by constructing many-body symmetry functions, composed of radial and angular parts, for all atoms within a specified cutoff radius as given by a cutoff function [65]. This work uses radial symmetry functions of \[G_{i}^{2}=\sum\nolimits_{j}\exp\left(-\eta(R_{ij}-R_{s})^{2}\right)\cdot f_{c}(R_{ij}) \tag{2}\] where \(\eta\) defines the width of the Gaussian function and \(R_{s}\) shifts the Gaussian functions by a certain radial distance [65]. This work uses angular symmetry functions \[G_{i}^{4}=2^{1-\zeta}\sum_{j,k\neq i}^{\mathrm{all}}\left(1+\lambda\cos\theta_{ijk}\right)^{\zeta}\cdot\exp\left(-\eta(R_{ij}^{2}+R_{ik}^{2}+R_{jk}^{2})\right)\cdot f_{c}(R_{ij})\cdot f_{c}(R_{ik})\cdot f_{c}(R_{jk}) \tag{3}\] where \(\zeta\) defines the angular resolution of the symmetry functions and \(\lambda\) shifts the maxima of the cosine functions between 0 and \(\pi\) radians [65]. The ACSF representations are generated using the DScribe library [70] with \(R_{c}=9.0\) Å, 6 pairs of \(\eta,R_{s}\) parameters for the \(G^{2}\) radial functions, and 6 triplets of \(\eta,\zeta,\lambda\) parameters for the \(G^{4}\) angular functions. SOAP descriptors represent local atomic environments, where each is described by a single power spectrum of the form \[p(\mathbf{r})_{n,n^{\prime},l}^{a_{1}a_{2}}=\pi\sqrt{\frac{8}{2l+1}}\sum_{m}c_{n,l,m}^{a_{1}}(\mathbf{r})^{\dagger}c_{n^{\prime},l,m}^{a_{2}}(\mathbf{r}) \tag{4}\] where \(a_{1},a_{2}\) index different atoms [70; 67]. DScribe was again used to generate SOAP representations in this work, with parameters selected as \(n_{\text{max}}=6\) (maximum number of radial basis functions), \(l_{\text{max}}=6\) (maximum degree of spherical harmonics), \(\sigma=0.1\), and spherical Gaussian-type orbitals for the radial basis functions, \(g_{n}\). SLATM returns a global representation of the charge density of a given system by concatenating different many-body potential spectra composed of one-, two-, and three-body terms representing the atomic nuclear charges, London potentials, and Axilrod-Teller-Muto van der Waals potentials, respectively [68]. In this work, SLATM representations were generated using the QML-code library [62]. The best-performing ML model on the test set was then used to screen the oxidation potentials of \(\sim\)132k molecules listed in the QM9 database [71; 36], which reports the geometries of \(\sim\)134k stable small organic molecules with up to 9 heavy (non-hydrogen) atoms (C, N, O, F) computed at the B3LYP/6-31G(2df,p) level of quantum chemistry [72; 73; 74]. The molecules in QM9 thus lie within the domain of the extracted data set by chemical composition and are suitable for estimation of oxidation potentials by the developed ML models based on interpolation.
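A sketch of the descriptor-generation step using DScribe [70] is given below. The \(G^{2}\)/\(G^{4}\) parameter values and the SOAP cutoff are placeholders (the text states the parameter counts but not their values), the species list is an assumption, and the keyword names follow recent DScribe releases (older versions use rcut/nmax/lmax).

```python
import numpy as np
from ase.io import read
from dscribe.descriptors import ACSF, SOAP

species = ["H", "C", "N", "O", "F", "S", "Cl"]   # assumed element set
g2 = [[eta, 0.0] for eta in np.logspace(-2, 0.5, 6)]                  # 6 (eta, R_s) pairs
g4 = [[0.005, zeta, lam] for zeta in (1, 4, 16) for lam in (-1, 1)]   # 6 (eta, zeta, lambda)

acsf = ACSF(species=species, r_cut=9.0, g2_params=g2, g4_params=g4)
soap = SOAP(species=species, r_cut=6.0, n_max=6, l_max=6, sigma=0.1,
            rbf="gto", average="inner")          # r_cut for SOAP is assumed

atoms = read("molecule.xyz")                     # XTB-optimized geometry
acsf_per_atom = acsf.create(atoms)               # (n_atoms, n_features) array
soap_vec = soap.create(atoms)                    # fixed-length molecular vector
```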
## III Results and Discussion ### Extracting Data The performance of the automated data-collection pipeline in accurately identifying tables containing oxidation potentials and extracting their values was verified via human labor. In the 74 reference sources, one human count returned a total of 182 tables, containing a variety of information such as oxidation potentials, spectroscopic data, product yields, and reaction kinetics. Of these, the CNN failed to locate 19 tables, a 10% error which is comparable to that associated with some top-performing table detection models [10] (Figure 2) (see _Section I_ of Supplementary Information for an example output from the CNN). The text extracted from the table images outputted by the CNN was then forwarded into GPT-3.5 to screen for measurements of oxidation potentials. One human count returned a total of 1715 measurements. GPT-3.5 failed to accurately report the oxidation potentials of 445 samples (26% error) (see _Section I_ of Supplementary Information for an example output from GPT-3.5). However, 262 of these instances were due to the molecular samples being labelled with bond-line structures, numbers, or their substituent groups. 171 samples were simply missed by GPT-3.5, and 12 samples had incorrectly reported oxidation potentials. Therefore, considering only samples that were not detected or were incorrectly reported, GPT-3.5 yields an error rate of 13%. The data extraction performance may be improved by including optical chemical structure recognition tools to screen for molecular names and SMILES of compounds represented as bond-line structures [75; 76; 77].

Figure 2: Performance of the CNN on training and testing (80/20 random split) of the Marmot data set [49], evaluated as the accuracy of detecting tables (percent overlap of the predicted table location area with the actual area) vs. the number of training epochs.

The compiled data set includes 592 unique molecules with oxidation potentials ranging from \(-0.75\) to \(3.58\) V, with a mean value of 1.32 V (Figure 3). See _Section A_ of Supplementary Information for the table listing the oxidation potentials of all molecules. On average, the molecules have a molar mass of 184 g/mol (28-680 g/mol), 26 atoms (5-86 atoms), and 13 heavy atoms (2-46 heavy atoms) (see _Section B_ of Supplementary Information for distribution plots of these parameters). Out of these 592 molecules, 155 have multiple entries in the literature; their deviations are shown in Figure 4. ### ML Model Performance The performances of the XGBR and KRR models were assessed by their MAE and their coefficients of determination, R\({}^{2}\). A target accuracy for the MAE was established as 0.2 V, which was deemed to appropriately represent experimental uncertainty since the average of the min-max range of oxidation potentials of molecules measured in multiple studies is 0.19 V (Figure 4). Assessed by these metrics, the best performance on the out-of-sample test set was observed for the XGBR model trained on ACSF, XTB, MORDRED (\(\text{MAE}_{\text{test}}=0.15\) V, \(\text{R}^{2}_{\text{test}}=0.80\)), followed by ACSF, XTB; ACSF; and ACSF, MORDRED (see _Section C_ of Supplementary Information for actual vs. predicted oxidation potentials of the test set). Similarly, the KRR model trained on the SLATM representation yields the lowest test set error (\(\text{MAE}_{\text{test}}=0.15\) V; \(\text{R}^{2}_{\text{test}}=0.83\)) (Figure 5a), followed by SOAP, then ACSF (see _Section C_ of Supplementary Information for actual vs. predicted oxidation potentials of the test set). Among these XGBR and KRR models, the KRR model trained on SLATM achieves the best performance on the test set, as it achieves the greatest R\({}^{2}\) value and the lowest MAE. Further, the performances of the XGBR and KRR models were assessed using learning curves, which are key to evaluating the efficiency of ML models (Figure 6). They show the MAEs of the various models at ten different subset sizes, \(N\), of the training set, as evaluated by four-fold cross-validation and plotted on a log-log scale. The hyperparameters of these models were optimized for the largest training set size and kept fixed across the smaller training set sizes. For instance, the KRR model trained on the SLATM representation reaches the target MAE of 0.2 V the fastest, after training on \(\sim\)416 samples (70% of the data set), with similar performances achieved for the XGBR models trained on ACSF, XTB, MORDRED and on ACSF, XTB (Figure 6). Compellingly, all representations lead to systematic linear decays in the MAEs of the oxidation potentials, as is generally expected for learning curves [78].
This indicates that these physics-based molecular representations and molecular descriptors are well suited for machine learning fundamental chemical properties like oxidation potentials. Moreover, it demonstrates that the data collected from the literature through the automated process used in this work are of sufficient quality that experimental uncertainty in the ML predictions can be reached with a relatively small data set. Further, these results suggest that the accuracy of these ML models can be systematically improved by increasing the training data. Improving the automated pipeline used in this work and applying it to a larger volume of literature may be an efficient way to expand this data set. We noticed that experimental outcomes for 155 molecules were independently reported in otherwise unrelated publications. The distribution of the corresponding min-max values is featured in Figure 4. For some molecules, the deviation is considerable and could be due to all sorts of reasons, including noise from the use of different experimental set-ups (e.g., use of different reference electrodes) as well as human error. For example, N,N-dimethylacetamide was measured to have an oxidation potential of 1.32 V [79], or of 2.12 V [80]. To estimate which measurement values for molecules with large deviations are more likely, the fifty molecules with the largest deviations were excluded from the training of a KRR model on SLATM (80/20 train/test random split, four-fold cross-validation for hyperparameter tuning; \(\text{MAE}_{\text{test}}=0.15\) V; \(\text{R}_{\text{test}}^{2}=0.85\)), which was subsequently used to predict the oxidation potentials of the fifty "suspicious" molecules (see _Section C_ of Supplementary Information for the performance of the KRR model on the test set). Whichever experimental value was closest to the predicted value was deemed the more likely one. In the case of N,N-dimethylacetamide, the ML prediction amounts to 1.90 V, statistically suggesting that the value of 2.12 V is closer to the truth than the value of 1.32 V. This kind of scoring has been performed for all 50 molecules left out of training (see _Section D_ of Supplementary Information).

Figure 4: Distribution of min-max ranges of oxidation potentials (vs. standard calomel electrode in acetonitrile) for 155 molecules with multiple entries. Solid vertical line indicates the mean. Seven molecules with the greatest deviations are shown.

Figure 3: Distribution of experimentally-measured oxidation potentials (vs. standard calomel electrode in acetonitrile) of 592 unique neutrally-charged molecules extracted from literature. Solid vertical line indicates the mean. Exemplary molecules at the extremes and near the mean of the distribution are depicted.

### Estimated Oxidation Potentials of QM9 Molecules and Descriptor-Property Analyses The XGBR model trained on the ACSF representations and XTB-calculated values (\(\text{MAE}_{\text{test}}=0.16\) V, \(\text{R}_{\text{test}}^{2}=0.78\)) was used to estimate the oxidation potentials of \(\sim\)132k organic molecules contained in the QM9 database. QM9 does not report calculated values of oxidation potentials and, as far as the authors are aware, no previous work has performed a screen of the database to estimate such values.
The geometries reported in the QM9 database were optimized in acetonitrile using XTB and then used to generate the ACSF representations, which were inputted together with the XTB-calculated values into the XGBR model. The resulting oxidation potentials of the QM9 molecules follow a trimodal distribution (Figure 7), with an average of 1.63 V (0.21 to 3.46 V). The oxidation potentials of the molecules are plotted against their corresponding XTB-estimated HOMO-LUMO energy gaps (Figure 8) and single-conformer solvation free energies in acetonitrile (see _Section G_ of Supplementary Information for the hexbin plot of oxidation potentials and solvation free energies of QM9 molecules), because these are two fundamental properties of a molecule that determine its propensity to accept or donate an electron, as well as its stability in a particular solvent. There appears to be no obvious correlation between the oxidation potentials and the two energy values, which may suggest that more data points encompassing a greater diversity of molecules are required for a clearer trend to emerge. However, the samples in the scatter plots appear to be clustered in certain distributions, suggesting the presence of boundaries in chemical compound space within which small organic molecules can exist with certain combinations of oxidation potentials, HOMO-LUMO gaps, and solvation free energies. Previous work has shown that the distribution of HOMO-LUMO gap energies of molecules in QM9 follows a multimodal distribution with peaks that correspond to sub-distributions based on simple structural features [81]. To determine whether the peaks in the distribution of oxidation potentials in QM9 are similarly composed of sub-distributions, a frequency analysis of functional groups and specific atoms, degree of unsaturation, and molecular types was performed using SMILES strings and substructure matching as implemented in RDKit [54] (see _Section H_ of Supplementary Information for full frequency analyses of functional groups and atom types). Intriguingly, upon visual inspection of the distributions, aliphatic molecules are clustered near the peaks at \(\sim\)2.0 and 2.5 V (Figures 9a, 9b). However, many molecules with other structural features contribute to the peak at \(\sim\)1.5 V, such as molecules containing halogens, aromatic rings, amines, amides, and carbonyl groups (see _Section H_ of Supplementary Information for corresponding distribution plots). In particular, molecules containing nitrogens exhibit a unimodal distribution of their oxidation potentials with a peak at \(\sim\)1.5 V (Figure 9a).

Figure 5: Prediction errors of machine learning models of oxidation potentials for 119 out-of-sample molecules. Predictions were obtained by kernel ridge regression (KRR) using SLATM [68] as representations after training on 473 examples. Outliers are shown as insets. **(a)** Scatter plot of experimental vs. predicted. **(b)** Error distribution.

Figure 8: Hexbin plot of machine-learning-estimated oxidation potentials and HOMO-LUMO gap energies of \(\sim\)132k molecules in the QM9 data set [71; 36]. Color bar indicates the density of samples in each bin.
Figure 6: Learning curves: sample test errors for predicted oxidation potentials vs. training set size \(N\). The shading indicates the standard deviation at each number of training molecules, \(N\), obtained by four-fold cross-validation, for the feature-based XGBR models (ACSF [65]; ACSF, MORDRED [66]; ACSF, XTB [56]; ACSF, XTB, MORDRED) and the KRR models (ACSF (KRR), SOAP [67] (KRR), SLATM [68] (KRR)), respectively. Dashed line indicates the target MAE of 0.2 V, corresponding to experimental uncertainty.

Figure 7: Trimodal distribution of machine-learning-based predictions of oxidation potentials (vs. standard calomel electrode in acetonitrile) for \(\sim\)132k organic molecules in the QM9 database [71; 36]. The model used corresponds to XGBR/ACSF with XTB-calculated values (green, starred in Figure 6). Solid vertical line indicates the mean. Exemplary molecules at the extreme ends of the distribution and near the three peaks are shown as insets.

Other trends of note include near-linear increases in the oxidation potentials of molecules with greater numbers of rings, carbons, hydroxyl groups, ethers, and hydrogens (see _Section H_ of Supplementary Information for corresponding violin plots). There also appear to be near-linear decreases in the oxidation potentials of molecules with greater numbers of aldehydes, ketones, and carbon-oxygen double bonds, larger degrees of unsaturation, and greater numbers of heavy atoms, with the latter displaying a particularly prominent linear relationship (Figure 9c). ## IV Conclusion This work introduced an automated data-extraction pipeline involving a convolutional neural network for table detection and a large language model for the selective extraction of scientific information. This pipeline was utilized to extract data from 74 peer-reviewed scientific publications listing tables of experimentally-measured oxidation potentials of organic molecules, resulting in a data set of 592 unique organic molecules, their canonical SMILES, generated XYZ coordinates, and their oxidation potentials. ML models that reach an experimental uncertainty of \(\sim\)0.2 V were trained on this data set and were subsequently used to estimate the true oxidation potentials of molecules with large discrepancies across multiple measurements and to determine which measurements are more reliable. Oxidation potentials of \(\sim\)132k small organic molecules in the QM9 data set were also estimated using the trained ML models and correlated with simple molecular descriptors. This analysis suggests that the oxidation potentials of these molecules depend on the number of heavy atoms and on chemical composition, in particular aliphaticity and nitrogen content. These results suggest that automated data-extraction pipelines may serve to accelerate discoveries of novel molecules and materials through self-driving labs [82]. More specifically, rather than generating training data from scratch, analogous pipelines can be used to train ML models for initializing the experimental planning decisions necessary to launch iterative self-driving lab campaigns. To this end, it could be desirable to develop a deeply-connected neural network, or another algorithmic model that can achieve higher table-detection accuracies, to limit data loss. It might be worth investigating the incorporation of optical chemical structure recognition tools to improve a large language model's ability to recognize bond-line structure representations and drawings as molecules. Further, it may be valuable to develop a large language model that is specifically trained to understand the semantics and jargon of various scientific disciplines to further improve the extraction of user-specified information.
## V Supplementary Information The supplementary information contains references of the literature sources from which data was extracted, and a table listing the samples' SMILES and experimentally-measured oxidation potentials (V, vs. SCE). Generated XYZ coordinates of the extracted molecules and of the \(\sim\)132k molecules in QM9, optimized in acetonitrile solvent, are provided. Scatter plots of actual vs. predicted oxidation potentials of the XGBoost and KRR models on various molecular representations are shown. For the fifty molecules with the largest measurement deviations across multiple studies, estimations of their true oxidation potentials and the experimental values closest to these estimates are listed. It also shows the molecules with the most positive and most negative oxidation potentials in the extracted data set, as well as for the molecules in the QM9 data set based on their ML-estimated oxidation potentials. Additionally, frequency analyses of the functional groups and atom types in QM9 molecules are displayed, along with sample outputs from the CNN and the LLM. Moreover, Python code to construct the TableNet convolutional neural network for table detection and the hyperparameters of the KRR trained on ACSF and XTB-calculated values are available.

Figure 9: Explanation of the distributions. **(a)** Distributions of predicted oxidation potentials (vs. standard calomel electrode) of aliphatic and N-containing molecules in QM9; **(b)** violin plots of predicted oxidation potentials of non-aliphatic and aliphatic molecules in QM9; **(c)** violin plots of predicted oxidation potentials of molecules in QM9 classified by number of heavy atoms (excluding hydrogen).

###### Acknowledgements. O.A.v.L. has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772834). This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 957189. This research is part of the University of Toronto's Acceleration Consortium, which receives funding from the Canada First Research Excellence Fund (CFREF). O.A.v.L. has received support as the Ed Clark Chair of Advanced Materials and as a Canada CIFAR AI Chair.
2303.15390
Learning to Zoom and Unzoom
Many perception systems in mobile computing, autonomous navigation, and AR/VR face strict compute constraints that are particularly challenging for high-resolution input images. Previous works propose nonuniform downsamplers that "learn to zoom" on salient image regions, reducing compute while retaining task-relevant image information. However, for tasks with spatial labels (such as 2D/3D object detection and semantic segmentation), such distortions may harm performance. In this work (LZU), we "learn to zoom" in on the input image, compute spatial features, and then "unzoom" to revert any deformations. To enable efficient and differentiable unzooming, we approximate the zooming warp with a piecewise bilinear mapping that is invertible. LZU can be applied to any task with 2D spatial input and any model with 2D spatial features, and we demonstrate this versatility by evaluating on a variety of tasks and datasets: object detection on Argoverse-HD, semantic segmentation on Cityscapes, and monocular 3D object detection on nuScenes. Interestingly, we observe boosts in performance even when high-resolution sensor data is unavailable, implying that LZU can be used to "learn to upsample" as well.
Chittesh Thavamani, Mengtian Li, Francesco Ferroni, Deva Ramanan
2023-03-27T17:03:30Z
http://arxiv.org/abs/2303.15390v1
# Learning to Zoom and Unzoom ###### Abstract Many perception systems in mobile computing, autonomous navigation, and AR/VR face strict compute constraints that are particularly challenging for high-resolution input images. Previous works propose nonuniform downsamplers that "learn to zoom" on salient image regions, reducing compute while retaining task-relevant image information. However, for tasks with spatial labels (such as 2D/3D object detection and semantic segmentation), such distortions may harm performance. In this work (LZU), we "learn to zoom" in on the input image, compute spatial features, and then "unzoom" to revert any deformations. To enable efficient and differentiable unzooming, we approximate the zooming warp with a piecewise bilinear mapping that is invertible. LZU can be applied to any task with 2D spatial input and any model with 2D spatial features, and we demonstrate this versatility by evaluating on a variety of tasks and datasets: object detection on Argoverse-HD, semantic segmentation on Cityscapes, and monocular 3D object detection on nuScenes. Interestingly, we observe boosts in performance even when high-resolution sensor data is unavailable, implying that LZU can be used to "learn to upsample" as well. Code and additional visuals are available at [https://tchittesh.github.io/lzu/](https://tchittesh.github.io/lzu/). + Footnote †: Now at Nvidia. ## 1 Introduction In many applications, the performance of perception systems is bottlenecked by strict inference-time constraints. This can be due to limited compute (as in mobile computing), a need for strong real-time performance (as in autonomous vehicles), or both (as in augmented/virtual reality). These constraints are particularly crippling for settings with high-resolution sensor data. Even with optimizations like model compression [4] and quantization [23], it is common practice to downsample inputs during inference. However, running inference at a lower resolution undeniably destroys information. While some information loss is unavoidable, the usual solution of uniform downsampling assumes that each pixel is equally informative towards the task at hand. To rectify this assumption, Recasens _et al_. [20] propose Learning to Zoom (LZ), a nonuniform downsampler that samples more densely at salient (task-relevant) image regions. They demonstrate superior performance relative to uniform downsampling on human gaze estimation and fine-grained image classification. However, this formulation warps the input image and thus requires labels to be invariant to such deformations. Adapting LZ downsampling to tasks with spatial labels is trickier, but has been accomplished in followup works for semantic segmentation (LDS [11]) and 2D object detection (FOVEA [22]). LDS [11] does not unzoom during learning, and so defines losses in the warped space. This necessitates additional regularization that may not apply to non-pixel-dense tasks like detection. FOVEA [22] _does_ unzoom bounding boxes for 2D detection, but uses a special-purpose solution that avoids computing an inverse, making it inapplicable to pixel-dense tasks like semantic segmentation. Despite these otherwise elegant solutions, there does not seem to be a general task-agnostic solution for intelligent downsampling.

Figure 1: LZU is characterized by "zooming" the input image, computing spatial features, then "unzooming" to revert spatial deformations.
LZU can be applied to any task and model that makes use of internal 2D features to process 2D inputs. We show visual examples of output tasks including 2D detection, semantic segmentation, and 3D detection from RGB images.

Our primary contribution is a general framework in which we zoom in on an input image, process the zoomed image, and then _unzoom_ the output back with an inverse warp. Learning to Zoom and Unzoom (LZU) can be applied to _any_ network that uses 2D spatial features to process 2D spatial inputs (Figure 1) _with no adjustments to the network or loss_. To unzoom, we approximate the zooming warp with a piecewise bilinear mapping. This allows efficient and differentiable computation of the forward and inverse warps. To demonstrate the generality of LZU, we evaluate performance on a variety of tasks: _object detection_ with RetinaNet [17] on Argoverse-HD [14], _semantic segmentation_ with PSPNet [29] on Cityscapes [7], and _monocular 3D detection_ with FCOS3D [26] on nuScenes [2]. In our experiments, to maintain favorable accuracy-latency tradeoffs, we use cheap sources of saliency (as in [22]) when determining where to zoom. On each task, LZU increases performance over uniform downsampling and prior works with minimal additional latency. Interestingly, for both 2D and 3D object detection, we also see performance boosts even when processing low-resolution input data. While prior works focus on performance improvements via intelligent downsampling [20, 22], our results show that LZU can also improve performance by intelligently _up_sampling (suggesting that current networks struggle to remain scale invariant for small objects, a well-known observation in the detection community [18]). ## 2 Related Work We split related work into two sections. The first discusses the broad class of methods aiming to improve efficiency by paying "attention" to specific image regions. The second delves into works like LZU that accomplish this by differentiably resampling the input image. ### Spatial Attentional Processing By design, convolutional neural networks pay equal "attention" (perform the same computations) to all regions of the image. In many cases, this is suboptimal, and much work has gone into developing attentional methods that resolve this inefficiency. One such method is Dynamic Convolutions [24], which uses sparse convolutions to selectively compute outputs at only the salient regions. Similarly, gated convolutions are used in [12, 28]. Notably, these methods implement "hard" attention in that the saliency is binary, and non-salient regions are ignored completely. Deformable Convolutions [8, 30] provides a softer implementation of spatial attention by learning per-pixel offsets when applying convolutions, allowing each output pixel to attend adaptively to pixels in the input image. SegBlocks [25] also provides a softer attention mechanism by splitting the image into blocks and training a lightweight reinforcement learning policy to determine whether each block should be processed at a high or low resolution. This is akin to our method, which also has variable resolution, albeit in a more continuous manner. Our method is also generalizable to tasks in which it's infeasible to "stitch" together outputs from different blocks of the image (e.g. in detection where an object can span multiple blocks). ### Spatial Attention via Differentiable Image Resampling Spatial Transformer Networks [10] introduces a differentiable method to resample an image.
They originally propose this to invert changes in appearance due to viewpoint, thereby enforcing better pose invariance. Learning to Zoom (LZ) [20] later adapts this resampling operation to "zoom" on salient image regions, acting as a spatial attention mechanism. Their key contribution is a transformation parameterized by a saliency map such that regions with higher saliency are more densely sampled. However, this deforms the image, requiring the task to have non-spatial labels. Followup works [11, 19, 22] adapt LZ downsampling to detection and semantic segmentation. For object detection, FOVEA [22] exploits the fact that image resampling is implemented via an inverse mapping to map predicted bounding boxes back into the original image space. This allows all processing to be done in the downsampled space and the final bounding box regression loss to be computed in the original space. However, when there are intermediate losses, as is the case with two-stage detectors containing region proposal networks (RPNs) [21], this requires more complex modifications to the usual delta loss formulation, due to the irreversibility of the inverse mapping. For semantic segmentation, Jin _et al_. [11] apply LZ downsampling to both the input image and the ground truth and compute the loss in the downsampled space. This is elegant and model-agnostic but leads to misalignment between the training objective and the desired evaluation metric. In the extreme case, the model learns degenerate warps that sample "easy" parts of the image to reduce the training loss. To address this, they introduce additional regularization on the downsampler. Independently, [19] handcraft an energy minimization formulation to sample more densely at semantic boundaries. In terms of warping and unwarping, the closest approach to ours is Dense Transformer Networks [13], which also inverts deformations introduced by nonuniform resampling. However, their warping formulation is not saliency-based, which makes it hard to work with spatial or temporal priors and also makes it time-consuming to produce the warping parameters. Additionally, they only show results for semantic segmentation, whereas we show that our formulation generalizes across spatial vision tasks.

## 3 Background

Since our method is a generalization of previous works [11, 20, 22], we include this section as a condensed explanation of prerequisite formulations critical to understanding LZU.

### Image Resampling

Suppose we want to resample an input image \(\mathbf{I}(\mathbf{x})\) to produce an output image \(\mathbf{I}^{\prime}(\mathbf{x})\), both indexed by spatial coordinates \(\mathbf{x}\in[0,1]^{2}\). Resampling is typically implemented via an _inverse_ map \(\mathcal{T}:[0,1]^{2}\rightarrow[0,1]^{2}\) from output to input coordinates [1]. For each output coordinate, the inverse map computes the source location from which to "steal" the pixel value, i.e. \(\mathbf{I}^{\prime}(\mathbf{x})=\mathbf{I}(\mathcal{T}(\mathbf{x}))\). In practice, we are often given a discretized input image \(\mathbf{I}\in\mathbb{R}^{H\times W\times C}\) and are interested in computing a discretized output \(\mathbf{I}^{\prime}\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times C}\). To do so, we compute \(\mathbf{I}^{\prime}(\mathbf{x})\) at grid points \(\mathbf{x}\in\mathrm{Grid}(H^{\prime},W^{\prime})\), where \(\mathrm{Grid}(H,W):=\mathrm{Grid}(H)\times\mathrm{Grid}(W)\) and \(\mathrm{Grid}(D):=\{\frac{d-1}{D-1}:d\in[D]\}\). However, \(\mathcal{T}(\mathbf{x})\) may return non-integer pixel locations at which the exact value of \(\mathbf{I}\) is unknown. In such cases, we use bilinear interpolation to compute \(\mathbf{I}(\mathcal{T}(\mathbf{x}))\). As proven in [10], such image resampling is differentiable with respect to \(\mathcal{T}\) and \(\mathbf{I}\).
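To make the formulation above concrete, the following is a minimal NumPy sketch of inverse-map resampling with bilinear interpolation. The function names and boundary handling are our own simplifications, not the authors' released implementation.

```python
import numpy as np

def bilinear_sample(img, pts):
    """Sample img (H, W, C) at continuous points pts (..., 2) in [0, 1]^2."""
    H, W = img.shape[:2]
    x = pts[..., 0] * (W - 1)                      # normalized -> pixel coords
    y = pts[..., 1] * (H - 1)
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    dx, dy = x - x0, y - y0
    # Blend the four neighboring pixels (bilinear interpolation).
    return (img[y0, x0] * ((1 - dx) * (1 - dy))[..., None]
            + img[y0, x0 + 1] * (dx * (1 - dy))[..., None]
            + img[y0 + 1, x0] * ((1 - dx) * dy)[..., None]
            + img[y0 + 1, x0 + 1] * (dx * dy)[..., None])

def resample(img, T, H_out, W_out):
    """I'(x) = I(T(x)): evaluate the inverse map T on Grid(H_out, W_out)."""
    ys, xs = np.meshgrid(np.linspace(0, 1, H_out),
                         np.linspace(0, 1, W_out), indexing="ij")
    return bilinear_sample(img, T(np.stack([xs, ys], axis=-1)))
```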
### Saliency-Guided Downsampling

When using nonuniform downsampling for information retention, it is useful to parameterize \(\mathcal{T}\) with a saliency map \(\mathbf{S}(\mathbf{x})\) representing the desired sample rate at each spatial location \(\mathbf{x}\in[0,1]^{2}\)[20]. Recasens _et al_. [20] go on to approximate this behavior by having each sample coordinate \(\mathcal{T}(\mathbf{x})\) be "attracted" to nearby areas \(\mathbf{x}^{\prime}\) with high saliency \(\mathbf{S}(\mathbf{x}^{\prime})\) downweighted according to a distance kernel \(k(\mathbf{x},\mathbf{x}^{\prime})\), as illustrated in Figure 2. Concretely, \(\mathcal{T}_{\mathrm{LZ}}(\mathbf{x})=(\mathcal{T}_{\mathrm{LZ},x}(\mathbf{x}),\mathcal{T}_{\mathrm{LZ},y}(\mathbf{x}))\), where

\[\mathcal{T}_{\mathrm{LZ},x}(\mathbf{x})=\frac{\int_{\mathbf{x}^{\prime}}\mathbf{S}(\mathbf{x}^{\prime})k(\mathbf{x},\mathbf{x}^{\prime})\mathbf{x}^{\prime}_{x}\,d\mathbf{x}^{\prime}}{\int_{\mathbf{x}^{\prime}}\mathbf{S}(\mathbf{x}^{\prime})k(\mathbf{x},\mathbf{x}^{\prime})\,d\mathbf{x}^{\prime}}\,, \tag{1}\]

\[\mathcal{T}_{\mathrm{LZ},y}(\mathbf{x})=\frac{\int_{\mathbf{x}^{\prime}}\mathbf{S}(\mathbf{x}^{\prime})k(\mathbf{x},\mathbf{x}^{\prime})\mathbf{x}^{\prime}_{y}\,d\mathbf{x}^{\prime}}{\int_{\mathbf{x}^{\prime}}\mathbf{S}(\mathbf{x}^{\prime})k(\mathbf{x},\mathbf{x}^{\prime})\,d\mathbf{x}^{\prime}}\,. \tag{2}\]

[22] proposes _anti-cropping_ and _separable_ variants of this downsampler. The anti-cropping variant \(\mathcal{T}_{\mathrm{LZ},\mathrm{ac}}\) prevents the resampling operation from cropping the image. The separable variant marginalizes the saliency map \(\mathbf{S}(\mathbf{x})\) into two 1D saliency maps \(\mathbf{S}_{x}(x)\) and \(\mathbf{S}_{y}(y)\), and replaces the kernel \(k(\mathbf{x},\mathbf{x}^{\prime})\) with two 1D kernels \(k_{x}\) and \(k_{y}\) (although generally \(k_{x}=k_{y}\)). Then, \(\mathcal{T}_{\mathrm{LZ},\mathrm{sep}}(\mathbf{x})=(\mathcal{T}_{\mathrm{LZ},\mathrm{sep},\mathrm{x}}(\mathbf{x}_{x}),\mathcal{T}_{\mathrm{LZ},\mathrm{sep},\mathrm{y}}(\mathbf{x}_{y}))\) where

\[\mathcal{T}_{\mathrm{LZ},\mathrm{sep},x}(x)=\frac{\int_{x^{\prime}}\mathbf{S}_{x}(x^{\prime})k_{x}(x,x^{\prime})x^{\prime}\,dx^{\prime}}{\int_{x^{\prime}}\mathbf{S}_{x}(x^{\prime})k_{x}(x,x^{\prime})\,dx^{\prime}}, \tag{3}\]

\[\mathcal{T}_{\mathrm{LZ},\mathrm{sep},y}(y)=\frac{\int_{y^{\prime}}\mathbf{S}_{y}(y^{\prime})k_{y}(y,y^{\prime})y^{\prime}\,dy^{\prime}}{\int_{y^{\prime}}\mathbf{S}_{y}(y^{\prime})k_{y}(y,y^{\prime})\,dy^{\prime}}. \tag{4}\]

This preserves axis-alignment of rectangles, which is crucial to object detection where bounding boxes are specified via corners. We refer to the above method and all variants as _LZ downsamplers_, after the pioneering work "Learning to Zoom" [20]. Examples of each variant are shown in Figure 3.
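Discretized, each axis of the separable downsampler in Eqs. 3 and 4 is a saliency-weighted average of candidate source coordinates. The sketch below assumes a Gaussian distance kernel and mean-marginalized saliency; these are illustrative choices of ours, with the precise variants defined in [20, 22].

```python
import numpy as np

def lz_warp_1d(saliency, out_size, sigma=0.1):
    """Discretized Eq. 3/4: T(x) = sum_x' S(x')k(x,x')x' / sum_x' S(x')k(x,x')."""
    src = np.linspace(0, 1, saliency.shape[0])    # candidate source coords x'
    dst = np.linspace(0, 1, out_size)             # output coords x
    k = np.exp(-((dst[:, None] - src[None, :]) ** 2) / (2 * sigma ** 2))
    w = k * saliency[None, :]                     # S(x') k(x, x')
    return (w * src[None, :]).sum(1) / w.sum(1)   # sample locations T(x)

S = np.ones((32, 32)); S[12:20, 12:20] = 8.0      # toy saliency, high center
tx = lz_warp_1d(S.mean(axis=0), out_size=64)      # T_x from marginal S_x
ty = lz_warp_1d(S.mean(axis=1), out_size=64)      # T_y from marginal S_y
# T_sep(x) = (T_x(x_x), T_y(x_y)); samples bunch up where saliency is high.
```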
## 4 Method

We begin by discussing our general technique for warp inversion. Then, we discuss the LZU framework and how we apply warp inversion to efficiently "unzoom".

### Efficient, Differentiable Warp Inversion

Suppose we have a continuous map \(\mathcal{T}:[0,1]^{2}\rightarrow[0,1]^{2}\). Our primary technical innovation is an efficient and differentiable approximation of \(\mathcal{T}^{-1}\), even in cases where \(\mathcal{T}\) has no closed-form inverse. Since \(\mathcal{T}\) is potentially difficult to invert, we first approximate it as \(\widetilde{\mathcal{T}}\), a piecewise tiling of simpler invertible transforms (illustrated in Figure 4). Formally,

\[\widetilde{\mathcal{T}}=\bigcup_{\begin{subarray}{c}i\in[h-1]\\ j\in[w-1]\end{subarray}}\widetilde{\mathcal{T}}_{ij}, \tag{5}\]

where the \(ij\)-th tile \(\widetilde{\mathcal{T}}_{ij}\) is any bijective map from the rectangle formed by corners \(R_{ij}=\{\frac{i-1}{h-1},\frac{i}{h-1}\}\times\{\frac{j-1}{w-1},\frac{j}{w-1}\}\) to quadrilateral \(\mathcal{T}[R_{ij}]\). For our purposes, we choose bilinear maps as our tile function, although homographies could work just as well. Then, so long as \(\widetilde{\mathcal{T}}\) is injective (if each of the tiles \(\widetilde{\mathcal{T}}_{ij}\) is nondegenerate and no two tiles overlap), we are guaranteed a well-defined left inverse \(\widetilde{\mathcal{T}}^{-1}:[0,1]^{2}\to[0,1]^{2}\) given by

\[\widetilde{\mathcal{T}}^{-1}(\mathbf{x})=\begin{cases}\widetilde{\mathcal{T}}_{ij}^{-1}(\mathbf{x})&\text{if }\mathbf{x}\in\operatorname{Range}(\widetilde{\mathcal{T}}_{ij})\\ 0&\text{else}\end{cases}. \tag{6}\]

Equation 6 is **efficient** to compute, since determining if \(\mathbf{x}\in\operatorname{Range}(\widetilde{\mathcal{T}}_{ij})\) simply involves checking if \(\mathbf{x}\) is in the quadrilateral \(\mathcal{T}[R_{ij}]\) and computing the inverse \(\widetilde{\mathcal{T}}_{ij}^{-1}\) of a bilinear map amounts to solving a quadratic equation [27]. This efficiency is crucial to maintaining favorable accuracy-latency tradeoffs. \(\widetilde{\mathcal{T}}^{-1}\) is guaranteed to be **differentiable** with respect to \(\mathcal{T}\), since for each \(\mathbf{x}\in\widetilde{\mathcal{T}}[R_{ij}]\), the inverse bilinear map can be written as a quadratic function of the four corners of tile \(ij\) (see Appendix A.1 for exact expression). This allows gradients to flow back into \(\mathcal{T}\), letting us learn the parameters of the warp.

Figure 2: Illustration of \(\mathcal{T}_{\mathrm{LZ}}\) [20]. Suppose we have a saliency map \(\mathbf{S}\in\mathbb{R}^{h\times w}\) (visualized in the background) and want a warped image of size \(H^{\prime}\times W^{\prime}\). (1) We start with a uniform grid of sample locations \(\mathrm{Grid}(h,w)\). (2) Grid points are "attracted" to nearby areas with high saliency. (3) Applying this "force" yields \(\mathcal{T}_{\mathrm{LZ}}[\mathrm{Grid}(h,w)]\). (4) Bilinear upsampling yields \(\widetilde{\mathcal{T}}_{\mathrm{LZ}}[\mathrm{Grid}(H^{\prime},W^{\prime})]\).

Figure 3: Examples of the anti-cropping (ac) and separable (sep) variants of \(\mathcal{T}_{\mathrm{LZ}}\) from [22].

In the case of LZ warps, \(\mathcal{T}_{\mathrm{LZ}}\) has no closed form inverse to the best of our knowledge. Because \(\mathcal{T}_{\mathrm{LZ}}[\operatorname{Grid}(h,w)]\) has no foldovers [20], \(\widetilde{\mathcal{T}}_{\mathrm{LZ}}\) must be injective, implying its inverse \(\widetilde{\mathcal{T}}_{\mathrm{LZ}}^{-1}\) is well-defined. When applying an LZ warp, saliency can be learned (with trainable parameters) or unlearned (with fixed parameters), and fixed (invariant across frames) or adaptive (changes every frame). Adaptive saliency maps require efficient warp inversion since a different warp must be applied on every input.
Learned saliency maps require differentiability. We note that fixed unlearned saliency maps do not technically require efficiency or differentiability, and most of our current results show that such saliency maps are already quite effective, outperforming prior work. We posit that LZU would shine even more in the learned adaptive setting, where it could make use of temporal priming for top-down saliency. ### Learning to Zoom and Unzoom In the Learning to Zoom and Unzoom (LZU) framework, we use existing LZ downsamplers (see Section 3.2) to "zoom" in on the input image, compute spatial features, and then use our warp inversion formulation to "unzoom" and revert any deformations in the feature map, as shown in Figure 1. This framework is applicable to all tasks with 2D spatial input and all models with some intermediate 2D spatial representation. Notice that a poorly approximated inverse warp \(\widetilde{\mathcal{T}}^{-1}\) would lead to misaligned features and a drop in performance. As a result, we use the approximate forward warp \(\widetilde{\mathcal{T}}\) instead of the true forward warp \(\mathcal{T}\), so that the composition of forward and inverse warps is _actually_ the identity function. See Appendix A.3 for a discussion of the associated tradeoff. To maintain favorable accuracy-latency tradeoffs, we make several optimizations to our forward and inverse warps. As done in previous works [11, 20, 22], for the forward warp or "zoom," instead of computing \(\mathcal{T}_{\mathrm{LZ}}[\operatorname{Grid}(H^{\prime},W^{\prime})]\), we compute \(\mathcal{T}_{\mathrm{LZ}}[\operatorname{Grid}(h,w)]\) for smaller \(h\ll H^{\prime}\) and \(w\ll W^{\prime}\) and bilinearly upsample this to get \(\widetilde{\mathcal{T}}_{\mathrm{LZ}}[\operatorname{Grid}(H^{\prime},W^{\prime})]\). This also reduces the complexity of computing the inverse, by reducing the number of cases in our piecewise bilinear map from \(H^{\prime}\cdot W^{\prime}\) to \(h\cdot w\). We explore efficient implementations of both separable and nonseparable warp inversion, but we find experimentally that nonseparable warps perform no better than separable warps for a strictly higher latency cost, so we use separable warps for our experiments. Details for efficiently inverting nonseparable warps are given in Appendix A.2. For separable warps \(\mathcal{T}_{\mathrm{LZ},\mathrm{sep}}\), we invert each axis separately and take the Cartesian Product: \[\widetilde{\mathcal{T}}_{\mathrm{LZ},\mathrm{sep}}^{-1}[ \operatorname{Grid}(H^{\prime},W^{\prime})]= \tag{7}\] \[\widetilde{\mathcal{T}}_{\mathrm{LZ},\mathrm{sep},\mathrm{x}}^{-1} [\operatorname{Grid}(H^{\prime})]\times\widetilde{\mathcal{T}}_{\mathrm{LZ}, \mathrm{sep},\mathrm{y}}^{-1}[\operatorname{Grid}(W^{\prime})].\] Figure 4: Given a warp \(\mathcal{T}\), we construct an approximation \(\widetilde{\mathcal{T}}\) designed for efficient inversion. As illustrated, \(\widetilde{\mathcal{T}}\) is a piecewise tiling of simpler invertible maps. This allows us to approximate the inverse \(\widetilde{\mathcal{T}}^{-1}\), even when \(\mathcal{T}^{-1}\) lacks a closed form. Figure 5: Inverting each axis of a separable warp. LZU first evaluates the forward warp \(\mathcal{T}_{\mathrm{LZ},\mathrm{sep},\mathrm{x}}\) (solid blue arrows) at a uniform grid of target locations (blue points). The resulting source locations are shown as red points. 
LZU then approximates the warp in between these samples via a _linear_ transform; this piecewise linear map is \(\widetilde{\mathcal{T}}_{\mathrm{LZ},\mathrm{sep},\mathrm{x}}\) (dotted blue arrows). To evaluate the inverse \(\widetilde{\mathcal{T}}_{\mathrm{LZ},\mathrm{sep},\mathrm{x}}^{-1}\) (dotted green arrows), we must determine for each green point which red points it falls between and invert the corresponding linear transform. An example is shown in the top-right. This further reduces our problem from inverting a piecewise bilinear map with \(h\cdot w\) pieces to inverting two piecewise _linear_ maps with \(h\) and \(w\) pieces each. Figure 5 visualizes how to invert each axis.
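As a rough sketch of the per-axis inversion in Figure 5, assuming a strictly increasing (foldover-free) forward warp and omitting the out-of-range case of Eq. 6; the function names are illustrative:

```python
import numpy as np

def invert_piecewise_linear(t_fwd, n_out):
    """Invert a monotone 1D warp given by t_fwd = T[Grid(h)], the source
    locations of a uniform grid of target locations (red points in Fig. 5)."""
    h = t_fwd.shape[0]
    targets = np.linspace(0, 1, h)
    queries = np.linspace(0, 1, n_out)                 # green points
    # Find which linear piece [t_fwd[i], t_fwd[i+1]] each query falls in ...
    idx = np.clip(np.searchsorted(t_fwd, queries) - 1, 0, h - 2)
    frac = (queries - t_fwd[idx]) / (t_fwd[idx + 1] - t_fwd[idx])
    # ... and invert that piece back to target coordinates.
    return targets[idx] + frac * (targets[idx + 1] - targets[idx])

t = np.sqrt(np.linspace(0, 1, 9))        # toy monotone forward warp samples
t_inv = invert_piecewise_linear(t, 17)   # approximate inverse on Grid(17)
```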
When unwarping after feature pyramid networks (FPNs) [16], we may have to evaluate the inverse \(\widetilde{\mathcal{T}}_{\mathrm{LZ}}^{-1}\) at multiple resolutions \(\mathrm{Grid}(H^{\prime},W^{\prime})\), \(\mathrm{Grid}(H^{\prime}/2,W^{\prime}/2)\), etc. In practice, we evaluate \(\widetilde{\mathcal{T}}_{\mathrm{LZ}}^{-1}[\mathrm{Grid}(H^{\prime},W^{\prime})]\) and then approximate the inverse at lower resolutions via bilinear downsampling. This is surprisingly effective (see Appendix A.3) and leads to no observable loss in performance. Finally, as introduced in [22], we can also use a fixed warp to exploit dataset-wide spatial priors, such as how objects are concentrated around the horizon in many autonomous driving datasets. This allows us to cache forward and inverse warps, greatly reducing additional latency.

## 5 Experiments

First, we compare LZU to naive uniform downsampling and previous works on the tasks of 2D object detection and semantic segmentation. We include ablations to evaluate the effectiveness of training techniques and explore the upsampling regime. Then, we evaluate LZU on monocular 3D object detection, a task which no previous works have applied "zooming" to. We perform all timing experiments with a batch size of 1 on a single RTX 2080 Ti GPU. Figure 6 contains qualitative results and analysis across all tasks. Full implementation details and hyperparameters are given in Appendix A.6.

### 2D Object Detection

For 2D object detection, we evaluate LZU using RetinaNet [17] (with a ResNet-50 backbone [9] and FPN [16]) on Argoverse-HD [14], an object detection dataset for autonomous driving with high resolution \(1920\times 1200\) videos. For our baselines, we compare to uniform downsampling and FOVEA [22], a previous work that applies LZ downsampling to detection by unwarping bounding boxes. We keep the same hyperparameters and setup as in FOVEA [22]. Experiments are run at \(0.25\)x, \(0.5\)x, \(0.75\)x, and 1x scales, to measure the accuracy-latency tradeoff. Our LZU models "unzoom" the feature map at each level after the FPN [16]. We adopt the low-cost saliency generators introduced in [22] -- a "fixed" saliency map exploiting dataset-wide spatial priors, and an "adaptive" saliency map exploiting temporal priors by zooming in on detections from the previous frame. When training the adaptive version, we simulate previous detections by jittering the ground truth for the first two epochs. For the last epoch, we jitter _detections_ on the current frame to better simulate previous detections; we call this "cascaded" saliency. To determine saliency hyperparameters, we run grid search at \(0.5\)x scale on splits of the training set (details in Appendix A.4, A.6). We use a learning rate of \(0.01\) and keep all other training settings identical to the baseline. Latency is measured by timing only the additional operations (the "zoom" and "unzoom") and adding it to the baseline. This is done to mitigate the impact of variance in the latency of the backbone and detector head.

\begin{table}
\begin{tabular}{l l c c c c c c c}
\hline \hline
Scale & Method & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{S}\) & AP\({}_{M}\) & AP\({}_{L}\) & Lat (ms) \\
\hline
0.25x & Uniform & 10.5 & 18.0 & 9.9 & 0.3 & 5.2 & 38.6 & **23.3** \\
0.25x & LZU, fixed & **12.4** & 22.6 & 11.2 & 1.0 & **7.1** & **39.2** & 23.6 \\
0.25x & LZU, adaptive & 12.3 & **22.8** & **11.3** & **1.4** & 6.6 & 38.0 & 26.4 \\
\hline
0.5x & Uniform & 22.6 & 38.7 & 21.7 & 3.7 & 22.1 & **53.1** & **36.0** \\
0.5x & FOVEA [22] & 24.9 & 40.3 & **25.3** & **7.1** & **27.7** & 50.6 & 37.9 \\
0.5x & LZU, fixed & 25.2 & 42.1 & 24.8 & 5.5 & 26.7 & 51.8 & 36.4 \\
0.5x & LZU, adaptive & **25.3** & **43.0** & 24.6 & 6.1 & 25.9 & 52.6 & 39.3 \\
0.5x & LZU, adaptive w/o cascaded sal. & 22.8 & 39.3 & 22.3 & 5.1 & 22.7 & 48.9 & 39.3 \\
\hline
0.75x & Uniform & 29.5 & 48.4 & 29.6 & 9.1 & 32.4 & **55.1** & **62.9** \\
0.75x & LZU, fixed & **30.8** & **50.4** & **31.8** & **10.9** & **33.5** & 54.1 & 63.5 \\
0.75x & LZU, adaptive & 26.5 & 44.6 & 26.7 & 8.3 & 28.7 & 48.7 & 66.3 \\
\hline
1x & Uniform & 31.9 & 51.5 & 33.1 & 11.4 & 35.9 & 54.5 & **98.3** \\
1x & LZU, fixed & **32.6** & **52.8** & **34.0** & **13.2** & 36.0 & **54.7** & 99.3 \\
1x & LZU, adaptive & 32.0 & 52.4 & 33.1 & 12.5 & **36.3** & 52.9 & 102.0 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: 2D object detection results of RetinaNet [17] on Argoverse-HD [14]. Fixed LZU uses a dataset-wide spatial prior, and adaptive LZU uses a temporal prior based on previous frame detections. LZU consistently outperforms the uniform downsampling baseline and prior work across all scales, with additional latency less than 4ms. We hypothesize that the drop in AP\({}_{L}\) is because objects that are already large benefit less from zooming. Still, this drawback is offset by larger improvements on small and medium objects.

\begin{table}
\begin{tabular}{l l l l l l l l l l}
\hline \hline
\multicolumn{10}{c}{2D Object Detection} \\
\hline
\multicolumn{5}{c}{Uniform Resampling} & \multicolumn{5}{c}{LZU Resampling} \\
\hline
& \multicolumn{4}{c}{From} & & \multicolumn{4}{c}{From} \\
\cline{2-5} \cline{7-10}
To & 0.25x & 0.5x & 0.75x & 1x & To & 0.25x & 0.5x & 0.75x & 1x \\
\hline
0.25x & 10.5 & 10.5 & 10.5 & 10.5 & 0.25x & **11.7** & **12.4** & **12.4** & **12.4** \\
0.5x & 17.0 & 22.6 & 22.6 & 22.6 & 0.5x & **20.9** & **24.8** & **24.8** & **25.2** \\
0.75x & **23.5** & 28.5 & 29.5 & 29.5 & 0.75x & 22.5 & **29.4** & **30.0** & **30.8** \\
1x & 13.5 & 28.4 & 30.9 & 31.9 & 1x & **22.1** & **30.7** & **31.2** & **32.6** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: 2D and 3D object detection results in the upsampling and downsampling regimes, using the "Uniform" and "LZU, fixed" models from Tables 1 and 5. LZU is surprisingly effective even in the upsampling regime! This demonstrates that simply allocating more pixels to small objects (without retaining extra information) can help performance, suggesting that detectors still struggle with scale invariance for small objects.

Results are given in Table 1. We outperform both uniform downsampling and FOVEA in all but one case, while incurring an additional latency of less than \(4\)ms.
The one exception is adaptive LZU at \(0.75\)x, which is evidence that our adaptive saliency hyperparameters, chosen at \(0.5\)x scale, struggle to generalize to other resolutions. We also confirm that using cascaded saliency to train adaptive LZU is crucial. Although adaptive LZU outperforms fixed LZU at \(0.5\)x scale, plotting the accuracy-latency curves (Figure 7) reveals that fixed LZU is Pareto optimal at all points. Finally, we explore how LZU performs in the _upsampling_ regime. We reuse the same models trained in our previous experiments, testing them with different pre-resampling resolutions. Results are shown in Table 2. In this regime, LZU consistently outperforms uniform downsampling, even though information retention is no longer a factor.

Figure 6: Examples of the success and failure cases of LZU. Rows A and E show examples where zooming in on the horizon helps the detector pick up smaller objects. On the other hand, sometimes zooming leads to false negatives, such as the black car in Row B and objects near the edge in Row F. For segmentation, LZU consistently improves quality near the center of the image. The last column shows the saliency map used in each case and the resulting spatial magnification ratios. For the Argoverse-HD [14] dataset, the magnification ratio at the center is nearly 2x, meaning the "zoom" is preserving nearly all information in that region, at the cost of information at the corners.

### Semantic Segmentation

For our semantic segmentation experiments, we compare to previous works ES [19] and LDS [11], so we adopt their setup. We test the PSPNet [29] model (with a ResNet-50 backbone [9] and FPN [16]) on Cityscapes [7]. Cityscapes is an urban scene dataset with high resolution \(1024\times 2048\) images and \(19\) classes. We perform our experiments at several image scales (\(64\times 64\), \(128\times 128\), \(256\times 256\), and \(512\times 512\)), taken by resizing a centered square crop of the input image. Our simple baseline trains and tests PSPNet with uniform downsampling. To reduce overfitting, we allot 500 images from the official training set into a mini-validation split. We train our model on the remaining training images and evaluate at 10 equally spaced intervals on the mini-validation split. We choose the best performing model and evaluate it on the official validation set. For our LZU model, we unzoom spatial features after the FPN and use a fixed saliency map. Inspired by the idea of zooming on semantic boundaries [19], we generate our fixed saliency by averaging the ground truth semantic boundaries over the train set. Notably, our saliency hyperparameters are chosen qualitatively (for producing a reasonably strong warp) and tested one-shot. We report our full results in Table 3 and compare to previous works in Table 4. Since our baseline results are slightly different from those reported in previous works [19, 11], we compare results using a percent change relative to the corresponding baseline. We find increased performance over the baseline at all scales, and at \(256\times 256\), we beat both previous works with only \(2.3\)ms of additional latency. Plotting the accuracy-FLOPs tradeoff (Figure 7) reveals that the large improvements of LDS [11] at \(64\times 64\) and \(128\times 128\) input scales come at significant cost in FLOPs. In actuality, ES [19] is Pareto optimal at \(64\times 64\) and \(128\times 128\), LDS [11] at \(128\times 128\), and LZU at \(256\times 256\).

Figure 7: Plotting the accuracy-latency/FLOPs tradeoffs reveals the Pareto optimal methods for each task.
Fixed LZU is Pareto optimal for both 2D and 3D object detection, outperforming uniform downsampling and FOVEA [22]. For semantic segmentation, we use FLOPs in lieu of latency to enable fair comparisons (ES [19] only reports FLOPs and LDS [11] has an unoptimized implementation). Although LDS boasts large improvements in raw accuracy at each scale, it also incurs a greater cost due to its expensive saliency generator. Overall, the Pareto frontier for segmentation is very competitive, with ES dominating at \(64\times 64\), LDS at \(128\times 128\), and LZU at \(256\times 256\).

\begin{table}
\begin{tabular}{l l c c c c c c c c c c c c c c c c c c c c c}
\hline \hline
Crop & Method & mIOU & road & swalk & build. & wall & fence & pole & tlight & sign & veg. & terr. & sky & person & rider & car & truck & bus & train & mbike & bike & Lat (ms) \\
\hline
64 & Uniform & 26.4 & **93.9** & 35.6 & 68.6 & 3.5 & **2.9** & **0.5** & **0.0** & **0.1** & 72.5 & 21.1 & **76.0** & 26.8 & 0.9 & **57.1** & 8.9 & **16.9** & **8.0** & **0.0** & 8.2 & **15.5** \\
64 & LZU, fixed & **26.7** & 93.4 & **36.1** & **68.9** & **5.8** & 2.3 & 0.4 & **0.0** & 0.0 & **72.6** & **23.4** & 75.9 & **29.4** & **1.2** & 56.7 & **15.0** & 10.2 & 4.1 & **0.0** & **11.7** & 16.9 \\
\hline
128 & Uniform & 39.3 & 96.3 & 54.0 & 78.4 & **15.0** & 7.9 & **8.1** & 8.5 & 16.6 & 81.2 & 34.4 & **86.7** & 42.9 & 13.8 & 74.4 & **22.9** & 41.6 & 24.4 & 10.2 & 29.6 & **16.1** \\
128 & LZU, fixed & **41.7** & **96.4** & **55.2** & **78.7** & 12.7 & **13.4** & **8.1** & **11.4** & **19.0** & **81.7** & **39.0** & 86.5 & **45.7** & **17.9** & **76.8** & 21.9 & **48.2** & **31.7** & **11.6** & **36.3** & 18.0 \\
\hline
256 & Uniform & 53.6 & 97.5 & 64.0 & 84.7 & 20.0 & 19.0 & **22.1** & 34.8 & 41.6 & **87.0** & 41.9 & **91.2** & 59.3 & 33.7 & 84.1 & 39.2 & 62.9 & **57.9** & 27.7 & 49.1 & **19.1** \\
256 & LZU, fixed & **55.1** & **97.7** & **67.0** & **84.9** & **24.4** & **24.4** & 21.3 & **35.2** & **42.9** & **87.0** & **44.5** & 90.7 & **61.5** & **35.7** & **85.7** & **40.8** & **67.9** & 52.8 & **29.3** & **53.4** & 21.2 \\
\hline
512 & Uniform & 63.8 & **98.3** & 73.3 & **88.8** & 29.2 & 34.3 & **40.6** & 54.4 & 61.6 & **90.7** & **47.7** & **94.0** & **72.7** & **50.6** & 89.1 & 45.6 & 72.1 & 59.1 & 44.5 & 64.9 & **32.3** \\
512 & LZU, fixed & **64.2** & **98.3** & **73.4** & 88.6 & **30.0** & **35.7** & 38.8 & **56.0** & **63.8** & 90.4 & 47.0 & 93.4 & 72.4 & 43.9 & **90.1** & **50.5** & **76.4** & **59.6** & **45.4** & **65.3** & 34.4 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Full semantic segmentation results of PSPNet [29] on Cityscapes [7]. At each resolution, LZU outperforms uniform downsampling.

We hypothesize that further improvements might be possible using an adaptive, learned formulation for saliency.

### Monocular 3D Object Detection

Finally, we evaluate LZU on monocular 3D object detection. To the best of our knowledge, no previous work has applied LZ downsampling to this task. The closest existing solution, FOVEA [22], cannot be extended to 3D object detection, because 3D bounding boxes are amodal and cannot be unwarped in the same manner as 2D bounding boxes. For our base model, we use FCOS3D [26], a fully convolutional model, with a ResNet-50 backbone [9] and FPN [16].
For our dataset, we use nuScenes [2], an autonomous driving dataset with multi-view \(1600\times 900\) RGB images for 1000 scenes and 3D bounding box annotations for 10 object classes. As is standard practice, we use the nuScenes Detection Score (NDS) metric, which is a combination of the usual mAP and measures of translation error (mATE), scale error (mASE), orientation error (mAOE), velocity error (mAVE), and attribute error (mAAE). We run experiments at \(0.25\)x, \(0.5\)x, \(0.75\)x, and 1x scales and test against a uniform downsampling baseline. We train for 12 epochs with a batch size of 16 with default parameters as in MMDetection3D [5]. For our LZU model, again we unzoom post-FPN features and use a fixed saliency map. Inspired by FOVEA [22], our fixed saliency is generated by using kernel density estimation on the set of projected bounding boxes in the image space. We reuse the same saliency hyperparameters from 2D detection. All other training settings are identical to the baseline. Results are given in Table 5. LZU performs consistently better than uniform downsampling, with less than \(1\)ms of additional latency. Specifically, LZU improves mAP and the aggregate metric NDS, with mixed results on mATE, mASE, mAOE, mAVE, and mAAE. Since the latter five metrics are computed on only _true positives_, this demonstrates that LZU increases overall recall, while maintaining about equal performance on true positives. Plotting the accuracy-latency curves (Figure 7) shows that LZU is Pareto optimal. We also repeat the same upsampling experiments as performed in 2D object detection. Results, shown in Table 2, reaffirm the viability of LZU in the upsampling regime. ## 6 Conclusion We propose LZU, a simple attentional framework consisting of "zooming" in on the input image, computing spatial features, and "unzooming" to invert any deformations. To unzoom, we approximate the forward warp as a piecewise bilinear mapping and invert each piece. LZU is highly general and can be applied to any task with 2D spatial input and any model with 2D spatial features. We demonstrate the versatility of LZU empirically on a variety of tasks and datasets, including monocular 3D detection which has never been done before. We also show that LZU may even be used when high-resolution sensor data is unavailable. For future work, we can consider alternatives to the "unzoom" formulation that are perhaps less destructive than simple resampling of features. **Broader impact.** Our work focuses on increasing the efficiency and accuracy of flagship vision tasks (detection, segmentation, 3D understanding) with high-resolution imagery. We share the same potential harms of the underlying tasks, but our approach may increase privacy concerns as identifiable information may be easier to decode at higher resolutions (e.g., facial identities or license plates). Because our approach is agnostic to the underlying model, it is reproducible with minimal changes to existing codebases. **Acknowledgements:** This work was supported by the CMU Argo AI Center for Autonomous Vehicle Research. 
\begin{table}
\begin{tabular}{l c c c}
\hline \hline
& \multicolumn{3}{c}{Downsampled Resolution} \\
\cline{2-4}
Method & \(64\times 64\) & \(128\times 128\) & \(256\times 256\) \\
\hline
Uniform (theirs) & 29 & 40 & 54 \\
Uniform (ours) & 26.4 & 39.3 & 53.6 \\
\hline
ES [19] & 32 (+10.3\%) & 43 (+7.5\%) & 54 (+0.0\%) \\
LDS [11] & 36 (**+24.1\%**) & 47 (**+17.5\%**) & 55 (+1.9\%) \\
LZU, fixed & 26.7 (+1.1\%) & 41.7 (+6.1\%) & 55.1 (**+2.9\%**) \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Semantic segmentation results of PSPNet [29] on Cityscapes [7], in mIOU. Due to differing implementation, the performance of our baseline varies from reported values, so we report relative improvements. At \(256\times 256\), we outperform prior works. At \(64\times 64\) and \(128\times 128\), LZU performs worse than prior work, perhaps because "unzooming" features at such small scales is more destructive. We posit the performance losses from such aggressive downsampling factors (across all methods) may be too impractical for deployment, and so focus on the \(256\times 256\) regime.

\begin{table}
\begin{tabular}{l l c c c c c c c c}
\hline \hline
Scale & Method & NDS & mAP & mATE & mASE & mAOE & mAVE & mAAE & Lat (ms) \\
\hline
0.25x & Uniform & 21.8 & 11.4 & **96.7** & 32.6 & 90.1 & **125.0** & **19.8** & **54.7** \\
0.25x & LZU, fixed & **23.4** & **13.1** & 96.8 & **31.9** & **82.7** & 129.4 & 20.0 & 55.1 \\
\hline
0.5x & Uniform & 27.5 & 17.5 & 90.1 & 28.8 & 75.5 & 131.6 & 17.8 & **58.1** \\
0.5x & LZU, fixed & **29.3** & **20.1** & **88.9** & **28.3** & **73.9** & **100.6** & **16.7** & 58.5 \\
\hline
0.75x & Uniform & 30.5 & 21.0 & 87.3 & 27.9 & **67.0** & **132.8** & 17.5 & **59.2** \\
0.75x & LZU, fixed & **31.8** & **22.4** & **83.8** & **27.5** & 67.2 & 134.6 & **15.9** & 59.7 \\
\hline
1x & Uniform & 31.2 & 22.4 & **84.2** & **27.4** & 70.9 & **129.6** & **17.4** & **88.7** \\
1x & LZU, fixed & **32.6** & **24.8** & 84.6 & 27.5 & **68.2** & 131.6 & 18.3 & 89.4 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: 3D object detection results of FCOS3D [26] on nuScenes [2]. Higher NDS and mAP are better, and lower is better on all other metrics. Intuitively, size is an important cue for depth, and image deformations would stifle this signal. Surprisingly, this is _not_ the case. LZU improves upon the uniform downsampling baseline at all scales with less than \(1\)ms of additional latency. Notably, LZU at \(0.75\)x scale even outperforms uniform downsampling at \(1\)x.
2305.11278
Real-Time Variational Method for Learning Neural Trajectory and its Dynamics
Latent variable models have become instrumental in computational neuroscience for reasoning about neural computation. This has fostered the development of powerful offline algorithms for extracting latent neural trajectories from neural recordings. However, despite the potential of real time alternatives to give immediate feedback to experimentalists, and enhance experimental design, they have received markedly less attention. In this work, we introduce the exponential family variational Kalman filter (eVKF), an online recursive Bayesian method aimed at inferring latent trajectories while simultaneously learning the dynamical system generating them. eVKF works for arbitrary likelihoods and utilizes the constant base measure exponential family to model the latent state stochasticity. We derive a closed-form variational analogue to the predict step of the Kalman filter which leads to a provably tighter bound on the ELBO compared to another online variational method. We validate our method on synthetic and real-world data, and, notably, show that it achieves competitive performance
Matthew Dowling, Yuan Zhao, Il Memming Park
2023-05-18T19:52:46Z
http://arxiv.org/abs/2305.11278v1
# Real-time variational method for learning neural trajectory and its dynamics

###### Abstract

Latent variable models have become instrumental in computational neuroscience for reasoning about neural computation. This has fostered the development of powerful offline algorithms for extracting latent neural trajectories from neural recordings. However, despite the potential of real time alternatives to give immediate feedback to experimentalists, and enhance experimental design, they have received markedly less attention. In this work, we introduce the exponential family variational Kalman filter (eVKF), an online recursive Bayesian method aimed at inferring latent trajectories while simultaneously learning the dynamical system generating them. eVKF works for arbitrary likelihoods and utilizes the constant base measure exponential family to model the latent state stochasticity. We derive a closed-form variational analogue to the _predict_ step of the Kalman filter which leads to a provably tighter bound on the ELBO compared to another online variational method. We validate our method on synthetic and real-world data, and, notably, show that it achieves competitive performance.

## 1 Introduction

Populations of neurons, especially in higher-order perceptual and motor cortices, show coordinated patterns of activity constrained to an approximately low dimensional 'neural manifold' (Sohn et al., 2019; Churchland et al., 2012; Saxena et al., 2022). The dynamical structure of latent trajectories evolving along the neural manifold is thought to be a valid substrate of neural computation. This idea has fostered extensive experimental studies and the development of computational methods to extract these trajectories directly from electrophysiological recordings. Great strides have been made in developing computational tools for the purpose of extracting latent neural trajectories in _post hoc_ neural data analysis. However, while recently developed tools have proven their efficacy in accurately inferring latent neural trajectories (Pandarinath et al., 2018; Pei et al., 2021; Yu et al., 2009; Zhao and Park, 2017), learning their underlying dynamics has received markedly less attention. Furthermore, even less focus has been placed on real-time methods that allow for online learning of neural trajectories and their underlying dynamics. Real-time learning of neural dynamics would facilitate more efficient experimental design, and increase the capability of closed-loop systems where an accurate picture of the dynamical landscape leads to more precise predictions (Peixoto et al., 2021; Bolus et al., 2021). In this work, we consider the problem of inferring latent trajectories while simultaneously learning the dynamical system generating them in an online fashion. We introduce the exponential family variational Kalman filter (eVKF), a novel variational inference scheme that draws inspiration from the 'predict' and 'update' steps used in the classic Kalman filter (Anderson and Moore, 1979). We theoretically justify our variational inference scheme by proving it leads to a tighter 'filtering' evidence lower bound (ELBO) than a 'single step' approximation that utilizes the closed form solution of the proposed 'variational prediction' step. Finally, we show how parameterization of the dynamics via a universal function approximator in tandem with exponential family properties facilitates an alternative optimization procedure for learning the generative model.
Our contributions are as follows: **(i)** We propose a novel variational inference scheme for online learning analogous to the predict and update steps of the Kalman filter. **(ii)** We show the variational prediction step offers a closed form solution when we restrict our variational approximations to _constant base measure_ exponential families (Theorem 1). **(iii)** We justify our two step procedure by showing that we achieve a tighter bound on the ELBO, when compared to directly finding a variational approximation to the filtering distribution (Theorem 2). **(iv)** We show that when using universal function approximators for modeling the dynamics, we can optimize our model of the dynamics without propagating gradients through the ELBO as is typically done in variational expectation maximization (vEM) or variational autoencoders (VAEs) (Kingma & Welling, 2014).

## 2 Background

### State-space models

In this paper, we consider observations (e.g. neural recordings), \(\mathbf{y}_{t}\), arriving in a sequential fashion. It is assumed these observations depend directly on a latent Markov process (e.g. structured neural dynamics), \(\mathbf{z}_{t}\), allowing us to write the generative model in state-space form:

\[\mathbf{z}_{t}\mid\mathbf{z}_{t-1} \sim p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1}) \qquad\text{(latent dynamics model)}\]
\[\mathbf{y}_{t}\mid\mathbf{z}_{t} \sim p_{\boldsymbol{\psi}}(\mathbf{y}_{t}\mid\mathbf{z}_{t}) \qquad\text{(observation model)}\]

where \(\mathbf{z}_{t}\in\mathbb{R}^{L}\), \(\mathbf{y}_{t}\in\mathbb{R}^{N}\), \(\boldsymbol{\psi}\) parameterize the observation model, and \(\boldsymbol{\theta}\) parameterize the dynamics model. After observing \(\mathbf{y}_{t}\), any statistical quantities of interest related to \(\mathbf{z}_{t}\) can be computed from the filtering distribution, \(p(\mathbf{z}_{t}\mid\mathbf{y}_{1:t})\). Since we are considering a periodically sampled data streaming setting, it is important that we are able to compute \(p(\mathbf{z}_{t}\mid\mathbf{y}_{1:t})\) in a recursive fashion, with constant time and space complexity. In addition to inferring the filtering distribution over latent states, we will also be interested in learning the dynamics as the (prior) conditional probability distribution, \(p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\), which captures the underlying dynamical law that governs the latent state \(\mathbf{z}\) and may implement neural computation. Learning the dynamics facilitates higher quality inference of the latent state, accurate forecasting, and generation of new data. In this paper we will be focused mainly on models where the dynamics are non-linear and parameterized by flexible function approximators. For example, we may model the dynamics as \(\mathbf{z}_{t}\mid\mathbf{z}_{t-1}\sim\mathcal{N}(\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{z}_{t-1}),\mathbf{Q})\), with \(\mathbf{f}_{\boldsymbol{\theta}}:\mathbb{R}^{L}\rightarrow\mathbb{R}^{L}\) parameterized by a neural network.
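As a toy illustration of this generative model, the snippet below simulates a latent trajectory with nonlinear Gaussian dynamics and Poisson observations (a common choice for spike counts); the specific dynamics \(\mathbf{f}\), readout, and noise levels are arbitrary choices of ours, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, T = 2, 50, 500                               # latent dim, neurons, time
C, b = rng.normal(size=(N, L)), -2.0 * np.ones(N)  # illustrative readout

def f(z):  # toy nonlinear dynamics: slow rotation plus a soft squashing
    A = np.array([[0.99, -0.10], [0.10, 0.99]])
    return np.tanh(A @ z)

z, y = np.zeros((T, L)), np.zeros((T, N))
for t in range(1, T):
    z[t] = f(z[t - 1]) + 0.05 * rng.normal(size=L)  # z_t | z_{t-1}
    y[t] = rng.poisson(np.exp(C @ z[t] + b))        # y_t | z_t (spike counts)
```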
### Kalman filter

Before diving into the general case, let's revisit the well-established Kalman filter (Sarkka, 2013). Given linear Gaussian dynamics and observations, the state-space model description is given by

\[p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})=\mathcal{N}(\mathbf{z}_{t}\mid\mathbf{A}\mathbf{z}_{t-1},\mathbf{Q})\qquad\boldsymbol{\theta}=\{\mathbf{A},\mathbf{Q}\}\]
\[p_{\boldsymbol{\psi}}(\mathbf{y}_{t}\mid\mathbf{z}_{t})=\mathcal{N}(\mathbf{y}_{t}\mid\mathbf{C}\mathbf{z}_{t}+\mathbf{b},\mathbf{R})\qquad\boldsymbol{\psi}=\{\mathbf{C},\mathbf{b},\mathbf{R}\}\]

The Kalman filter recursively computes the Bayes optimal estimate of the latent state \(\mathbf{z}_{t}\). Given the filtering posterior of the previous time step, \(p(\mathbf{z}_{t-1}\mid\mathbf{y}_{1:t-1})=\mathcal{N}(\mathbf{m}_{t-1},\mathbf{P}_{t-1})\), we first _predict_ the latent state distribution (a.k.a. the filtering prior) at time \(t\)

\[\bar{p}(\mathbf{z}_{t}\mid\mathbf{y}_{1:t-1})=\mathbb{E}_{p(\mathbf{z}_{t-1}\mid\mathbf{y}_{1:t-1})}\left[p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\right] \tag{1}\]
\[=\mathcal{N}(\mathbf{z}_{t}\mid\mathbf{A}\mathbf{m}_{t-1},\mathbf{A}\mathbf{P}_{t-1}\mathbf{A}^{\top}+\mathbf{Q}) \tag{2}\]

Secondly, we _update_ our belief of the current state with the observation \(\mathbf{y}_{t}\) by Bayes' rule

\[p(\mathbf{z}_{t}\mid\mathbf{y}_{1:t})\propto p(\mathbf{y}_{t}\mid\mathbf{z}_{t})\;\bar{p}(\mathbf{z}_{t}\mid\mathbf{y}_{1:t-1})=\mathcal{N}(\mathbf{z}_{t}\mid\mathbf{m}_{t},\mathbf{P}_{t}) \tag{3}\]

In order to learn the underlying dynamics \(\mathbf{A}\), the linear readout \(\mathbf{C}\), state noise \(\mathbf{Q}\) and observation noise \(\mathbf{R}\), the EM algorithm can be employed (Ghahramani & Hinton, 1996). If a calibrated measure of uncertainty over the model parameters is important, then a prior can be placed over those quantities, and approximate Bayesian methods can be used to find the posterior (Barber & Chiappa, 2006). When the dynamics are nonlinear, then approximate Bayesian inference can be used to compute the posterior over latent states (Kamthe et al., 2022; Hernandez et al., 2018; Pandarinath et al., 2018). Note that these methods are for learning the parameters in the offline setting.
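For reference, the predict and update steps in Eqs. 1-3 translate directly into a few lines of NumPy (a standard textbook implementation, with our own function names):

```python
import numpy as np

def kf_predict(m, P, A, Q):
    """Eqs. 1-2: propagate the filtering posterior through the dynamics."""
    return A @ m, A @ P @ A.T + Q

def kf_update(m_bar, P_bar, y, C, b, R):
    """Eq. 3: condition the prediction on observation y via Bayes' rule."""
    S = C @ P_bar @ C.T + R                 # innovation covariance
    K = P_bar @ C.T @ np.linalg.inv(S)      # Kalman gain
    m = m_bar + K @ (y - (C @ m_bar + b))
    P = P_bar - K @ C @ P_bar
    return m, P
```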
## 3 Exponential family variational Kalman filter (eVKF)

When the models are not linear and Gaussian, the filtering prior Eq. (1) and filtering distribution Eq. (3) are often intractable. This is unfortunate since most models of practical interest deviate in one way or another from these linear Gaussian assumptions. Drawing inspiration from the _predict_ and _update_ procedure for recursive Bayesian estimation, we propose the _exponential family variational Kalman filter_ (eVKF), a recursive variational inference procedure for exponential family models that jointly infers latent trajectories and learns their underlying dynamics.

### Exponential family distributions

We first take time to recall exponential family distributions, as their theoretical properties make them convenient to work with, especially when performing Bayesian inference. An exponential family distribution can be written as

\[p(\mathbf{z})=h(\mathbf{z})\exp\left(\boldsymbol{\lambda}^{\top}t(\mathbf{z})-A(\boldsymbol{\lambda})\right) \tag{4}\]

where \(h(\mathbf{z})\) is the base measure, \(\boldsymbol{\lambda}\) is the natural parameter, \(t(\mathbf{z})\) is the sufficient statistics, and \(A(\boldsymbol{\lambda})\) is the log-partition function (Wainwright & Jordan, 2008). Many widely used distributions reside in the exponential family; a Gaussian distribution, \(p(\mathbf{z})=\mathcal{N}(\mathbf{m},\mathbf{P})\), for example, has \(t(\mathbf{z})=\left[\mathbf{z}\quad\mathbf{z}\mathbf{z}^{\top}\right]\), \(\boldsymbol{\lambda}=\left[\mathbf{P}^{-1}\mathbf{m}\quad-\frac{1}{2}\mathbf{P}^{-1}\right]\) and \(h(\mathbf{z})=(2\pi)^{-L/2}\). Note that the base measure \(h\) does not depend on \(\mathbf{z}\) for a Gaussian distribution. We hereby call such an exponential family distribution a _constant base measure_ if its base measure, \(h\), is constant w.r.t. \(\mathbf{z}\). This class encapsulates many well known distributions such as the Gaussian, Bernoulli, Beta, and Gamma distributions. An additional and important fact we use is that, for a _minimal_1 exponential family distribution, there exists a one-to-one mapping between the natural parameters, \(\boldsymbol{\lambda}\), and the mean parameters, \(\boldsymbol{\mu}\coloneqq\mathbb{E}_{p(\mathbf{z})}\left[t(\mathbf{z})\right]\). This mapping is given by \(\boldsymbol{\mu}=\nabla_{\boldsymbol{\lambda}}A(\boldsymbol{\lambda})\), and its inverse by \(\boldsymbol{\lambda}=\nabla_{\boldsymbol{\mu}}\mathbb{E}_{p(\mathbf{z};\boldsymbol{\lambda}(\boldsymbol{\mu}))}\left[\log p(\mathbf{z};\boldsymbol{\lambda}(\boldsymbol{\mu}))\right]\), though \(\mathbb{E}_{p(\mathbf{z};\boldsymbol{\lambda}(\boldsymbol{\mu}))}\left[\log p(\mathbf{z};\boldsymbol{\lambda}(\boldsymbol{\mu}))\right]\) is usually intractable (Seeger, 2005).

Footnote 1: minimality means that all sufficient statistics are linearly independent.

If we have a conditional exponential family distribution, \(p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\), then the natural parameters of \(\mathbf{z}_{t}\mid\mathbf{z}_{t-1}\) are a function of \(\mathbf{z}_{t-1}\). In this case, we can write the conditional density function as

\[p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})=h(\mathbf{z}_{t})\exp(\boldsymbol{\lambda}_{\boldsymbol{\theta}}(\mathbf{z}_{t-1})^{\top}t(\mathbf{z}_{t})-A(\boldsymbol{\lambda}_{\boldsymbol{\theta}}(\mathbf{z}_{t-1}))) \tag{5}\]

where \(\boldsymbol{\lambda}_{\boldsymbol{\theta}}(\cdot)\) maps \(\mathbf{z}_{t-1}\) to the space of valid natural parameters for \(\mathbf{z}_{t}\). This allows us to use expressive natural parameter mappings, while keeping the conditional distribution in the constant base measure exponential family. Assume that at time \(t\), we have an approximation to the filtering distribution, \(q(\mathbf{z}_{t-1})\), and that this approximation is a constant base measure exponential family distribution so that

\[p(\mathbf{z}_{t-1}\mid\mathbf{y}_{1:t-1})\approx q(\mathbf{z}_{t-1})=h\exp(\boldsymbol{\lambda}^{\top}t(\mathbf{z}_{t-1})-A(\boldsymbol{\lambda})) \tag{6}\]

The primary goal of filtering is to efficiently compute a good approximation \(q(\mathbf{z}_{t})\) of \(p(\mathbf{z}_{t}\mid\mathbf{y}_{1:t})\), the filtering distribution at time \(t\). As we will show, following the two-step variational prescription of _predict_ and then _update_ leads to a natural variational inference scheme and a provably tighter ELBO than a typical single-step variational approximation.
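For the Gaussian example above, the mapping between mean parameters \((\mathbf{m},\mathbf{P})\) and natural parameters is explicit; a small NumPy sketch (with our own helper names) follows.

```python
import numpy as np

def gauss_to_natural(m, P):
    """Natural parameters of N(m, P) under t(z) = [z, zz^T]."""
    Pinv = np.linalg.inv(P)
    return Pinv @ m, -0.5 * Pinv            # (lam1, Lam2)

def natural_to_gauss(lam1, Lam2):
    """Inverse mapping from natural parameters back to mean parameters."""
    P = np.linalg.inv(-2.0 * Lam2)
    return P @ lam1, P
```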
### Variational prediction step

Now that we have relaxed the linear Gaussian assumption, the first problem we encounter is computing the predictive distribution (a.k.a. filtering prior)

\[\bar{p}(\mathbf{z}_{t}\mid\mathbf{y}_{1:t-1})=\mathbb{E}_{p(\mathbf{z}_{t-1}\mid\mathbf{y}_{1:t-1})}\left[p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\right] \tag{7}\]

This is generally intractable, since the filtering distribution, \(p(\mathbf{z}_{t-1}\mid\mathbf{y}_{1:t-1})\), can only be found analytically for simple SSMs. Similar to other online variational methods (Marino et al., 2018; Zhao & Park, 2020; Campbell et al., 2021), we substitute an approximation for the filtering distribution, \(q(\mathbf{z}_{t-1})\approx p(\mathbf{z}_{t-1}\mid\mathbf{y}_{1:t-1})\), and consider

\[\mathbb{E}_{q(\mathbf{z}_{t-1})}\left[p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\right] \tag{8}\]

Unfortunately, due to the nonlinearity in \(p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\), Eq. (8) is still intractable, making further approximation necessary. We begin by considering an approximation, \(\bar{q}(\mathbf{z}_{t})\), restricted to a minimal exponential family distribution with natural parameter \(\bar{\boldsymbol{\lambda}}\), i.e.

\[\mathbb{E}_{q(\mathbf{z}_{t-1})}\left[p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\right]\approx\bar{q}(\mathbf{z}_{t})=h\exp(\bar{\boldsymbol{\lambda}}^{\top}t(\mathbf{z}_{t})-A(\bar{\boldsymbol{\lambda}})) \tag{9}\]

Taking a variational approach (Hoffman et al., 2013), our goal is to find the natural parameter \(\bar{\boldsymbol{\lambda}}\) that minimizes \(\mathbb{D}_{\text{KL}}\left(\bar{q}(\mathbf{z}_{t})\,||\,\mathbb{E}_{q(\mathbf{z}_{t-1})}\left[p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\right]\right)\). Since this quantity cannot be minimized directly, we can consider the following upper bound:

\[\mathcal{F}=-\mathcal{H}(\bar{q}(\mathbf{z}_{t}))-\mathbb{E}_{\bar{q}(\mathbf{z}_{t})}\mathbb{E}_{q(\mathbf{z}_{t-1})}\left[\log p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\right]\geq\mathbb{D}_{\text{KL}}\big(\bar{q}(\mathbf{z}_{t})\,||\,\mathbb{E}_{q(\mathbf{z}_{t-1})}\left[p(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\right]\big) \tag{10}\]

Rather than minimizing \(\mathcal{F}\) with respect to \(\bar{\boldsymbol{\lambda}}\) through numerical optimization, if we take \(q(\mathbf{z}_{t-1})\), \(\bar{q}(\mathbf{z}_{t})\), and \(p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\) to be in the same _constant base measure exponential family_, then we can show the following theorem which tells us how to compute the \(\bar{\boldsymbol{\lambda}}^{*}\) that minimizes \(\mathcal{F}\).

**Theorem 1** (Variational prediction distribution).: _If \(p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\), \(q(\mathbf{z}_{t-1})\), and \(\bar{q}(\mathbf{z}_{t})\) are chosen to be in the same minimal and constant base measure exponential family distribution, \(\mathcal{E}_{c}\), then \(\bar{q}^{*}(\mathbf{z}_{t})=\operatorname*{argmin}_{\bar{q}\in\mathcal{E}_{c}}\mathcal{F}(\bar{q})\) has a closed form solution given by \(\bar{q}^{*}(\mathbf{z}_{t})\) with natural parameters, \(\bar{\boldsymbol{\lambda}}_{\boldsymbol{\theta}}\)_

\[\bar{\boldsymbol{\lambda}}_{\boldsymbol{\theta}}=\mathbb{E}_{q(\mathbf{z}_{t-1})}\left[\boldsymbol{\lambda}_{\boldsymbol{\theta}}(\mathbf{z}_{t-1})\right] \tag{11}\]

Eq. (11) demonstrates that the optimal natural parameters of \(\bar{q}\) are the expected natural parameters of the prior dynamics under the variational filtering posterior. While \(\bar{\boldsymbol{\lambda}}_{\boldsymbol{\theta}}\) cannot be found analytically, computing a Monte-Carlo approximation is simple; we only have to draw samples from \(q(\mathbf{z}_{t-1})\) and then pass those samples through \(\boldsymbol{\lambda}_{\boldsymbol{\theta}}(\cdot)\).
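A minimal sketch of this Monte-Carlo estimate of Eq. (11), where `sample_q` draws from the current filtering posterior and `lam_theta` maps a latent state to a flattened vector of natural parameters (both assumed callables; the names are ours):

```python
import numpy as np

def variational_predict(sample_q, lam_theta, n_samples=64):
    """Estimate lam_bar_theta = E_q[lam_theta(z_{t-1})] by Monte Carlo."""
    z = sample_q(n_samples)                          # draws from q(z_{t-1})
    return np.stack([lam_theta(zi) for zi in z]).mean(axis=0)

# e.g., for a Gaussian filtering posterior q(z_{t-1}) = N(m, P):
# sample_q = lambda n: np.random.default_rng(0).multivariate_normal(m, P, size=n)
```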
This also reveals a very nice symmetry that exists between closed form conjugate Bayesian updates and variationally inferring the prediction distribution. In the former case we calculate \(\mathbb{E}_{q(\mathbf{z}_{t-1})}\left[p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\right]\) while in the latter we calculate \(\mathbb{E}_{q(\mathbf{z}_{t-1})}\left[\boldsymbol{\lambda}_{\boldsymbol{\theta}}(\mathbf{z}_{t-1})\right]\). We summarize the eVKF two-step procedure in Algorithm 1, located in Appendix E.3.

### Variational update step

Analogous to the Kalman filter, we _update_ our belief of the latent state after observing \(\mathbf{y}_{t}\). When the likelihood is conjugate to the filtering prior, we can calculate a Bayesian update in closed form by using \(\bar{q}(\mathbf{z}_{t})\) as our prior and computing \(p(\mathbf{z}_{t}\mid\mathbf{y}_{1:t})\approx q(\mathbf{z}_{t})\propto p(\mathbf{y}_{t}\mid\mathbf{z}_{t})\bar{q}(\mathbf{z}_{t})\), where \(q(\mathbf{z}_{t})\), with natural parameter \(\boldsymbol{\lambda}\), belongs to the same family as \(q(\mathbf{z}_{t-1})\). In the absence of conjugacy, we use variational inference to find \(q(\mathbf{z}_{t})\) by maximizing the evidence lower bound (ELBO)

\[\boldsymbol{\lambda}^{*}=\operatorname*{argmax}_{\boldsymbol{\lambda}}\mathcal{L}_{t}(\boldsymbol{\lambda},\boldsymbol{\theta})=\operatorname*{argmax}_{\boldsymbol{\lambda}}\left[\mathbb{E}_{q(\mathbf{z}_{t})}\left[\log p(\mathbf{y}_{t}\mid\mathbf{z}_{t})\right]-\mathbb{D}_{\text{KL}}(q(\mathbf{z}_{t}\mid\boldsymbol{\lambda})\,||\,\bar{q}(\mathbf{z}_{t}))\right] \tag{12}\]

If the likelihood happens to be an exponential family distribution, then one way to maximize Eq. (12) is through conjugate computation variational inference (CVI) (Khan & Lin, 2017). CVI is appealing in this case because it is equivalent to natural gradient descent, and thus converges faster, and conveniently it operates in the natural parameter space that we are already working in.
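As intuition for the update step, in the fully conjugate linear Gaussian case the likelihood simply adds to the natural parameters of the prediction. The sketch below assumes \(\mathbf{y}_{t}\sim\mathcal{N}(\mathbf{C}\mathbf{z}_{t}+\mathbf{b},\mathbf{R})\); in the nonconjugate case, CVI performs analogous additive natural-parameter updates iteratively.

```python
import numpy as np

def conjugate_update(lam1_bar, Lam2_bar, y, C, b, R):
    """Closed-form update in natural-parameter space for a Gaussian likelihood:
    the log-likelihood contributes C^T R^-1 (y - b) and -0.5 C^T R^-1 C."""
    Rinv = np.linalg.inv(R)
    lam1 = lam1_bar + C.T @ Rinv @ (y - b)
    Lam2 = Lam2_bar - 0.5 * C.T @ Rinv @ C
    return lam1, Lam2
```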
### Tight lower bound by the predict-update procedure

A natural alternative to the variational _predict_ then _update_ procedure prescribed is to directly find a variational approximation to the filtering distribution. One way is to substitute \(\mathbb{E}_{q(\mathbf{z}_{t-1})}p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\) for \(\bar{q}(\mathbf{z}_{t})\) into the ELBO earlier (Marino et al., 2018; Zhao and Park, 2020). Further details are provided in Appendix B, but after making this substitution and invoking Jensen's inequality we get the following lower bound on the log-marginal likelihood at time \(t\)

\[\mathcal{M}_{t}=\mathbb{E}_{q(\mathbf{z}_{t})}\left[\log p(\mathbf{y}_{t}\mid\mathbf{z}_{t})\right]-\mathbb{E}_{q(\mathbf{z}_{t})}\left[\log q(\mathbf{z}_{t})-\mathbb{E}_{q(\mathbf{z}_{t-1})}\left[\log p(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\right]\right] \tag{13}\]

However, as we prove in Appendix B, this leads to a provably looser bound on the evidence compared to eVKF, as we state in the following theorem.

**Theorem 2** (Tightness of \(\mathcal{L}_{t}\)).: _If we set_

\[\Delta(q)=\mathcal{L}_{t}(q)-\mathcal{M}_{t}(q) \tag{14}\]

_then, we have that_

\[\Delta(q)=\mathbb{E}_{q(\mathbf{z}_{t-1})}\left[A(\boldsymbol{\lambda}_{\boldsymbol{\theta}}(\mathbf{z}_{t-1}))\right]-A(\bar{\boldsymbol{\lambda}}_{\boldsymbol{\theta}})\geq 0, \tag{15}\]

_so that_

\[\log p(\mathbf{y}_{t})\geq\mathcal{L}_{t}(q)\geq\mathcal{M}_{t}(q) \tag{16}\]

In other words, the bound on the evidence when using the variational _predict_ then _update_ procedure is always tighter than the one step procedure. Thus, not only do the variational predict then update steps simplify computations, and make leveraging conjugacy possible, they also facilitate a better approximation to the posterior filtering distribution.

### Learning the dynamics

Our remaining desideratum is the ability to learn the parameters of the dynamics model \(p_{\boldsymbol{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})\). One way of learning \(\boldsymbol{\theta}\) is to use variational expectation maximization; with \(\boldsymbol{\lambda}^{*}\) fixed, we find the \(\boldsymbol{\theta}^{*}\) that maximizes the ELBO

\[\boldsymbol{\theta}^{*}=\operatorname*{argmax}_{\boldsymbol{\theta}}\ \mathcal{L}(\boldsymbol{\lambda}^{*},\boldsymbol{\theta}) \tag{17}\]
\[=\operatorname*{argmin}_{\boldsymbol{\theta}}\ \mathbb{D}_{\text{KL}}\big(q(\mathbf{z}_{t};\boldsymbol{\lambda}^{*})\,||\,\bar{q}_{\boldsymbol{\theta}}(\mathbf{z}_{t};\bar{\boldsymbol{\lambda}}_{\boldsymbol{\theta}})\big) \tag{18}\]

This objective may require expensive computation in practice, e.g. the log-determinant and Cholesky decomposition for Gaussian \(q\) and \(\bar{q}_{\boldsymbol{\theta}}\). However, since we chose \(\bar{q}_{\boldsymbol{\theta}}\) and \(q\) to be in the same exponential family, then as described in the following Proposition, we can consider the more computationally tractable square loss function as an optimization objective.

**Proposition 1** (Optimal \(\boldsymbol{\theta}\)).: _If the mapping from \(\mathbf{z}_{t-1}\) to the natural parameters of \(\mathbf{z}_{t}\), given by \(\boldsymbol{\lambda}_{\boldsymbol{\theta}}(\mathbf{z}_{t-1})\), is a universal function approximator with trainable parameters, \(\boldsymbol{\theta}\), then setting_

\[\boldsymbol{\theta}^{*}=\operatorname*{argmin}_{\boldsymbol{\theta}}\ \frac{1}{2}||\boldsymbol{\lambda}^{*}-\bar{\boldsymbol{\lambda}}_{\boldsymbol{\theta}}||^{2} \tag{19}\]

_is equivalent to finding \(\boldsymbol{\theta}^{*}=\operatorname*{argmax}_{\boldsymbol{\theta}}\ \mathcal{L}_{t}(\boldsymbol{\lambda}^{*},\boldsymbol{\theta})\)._

The proposition indicates that we find the optimal \(\boldsymbol{\theta}^{*}\) by matching the natural parameters of the predictive distribution to those of the filtering distribution. The proof can be found in Appendix C. Empirically, we have found that, even for small neural networks, following Eq. (19) works better in practice than directly minimizing the KL term.
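A minimal PyTorch sketch of the resulting dynamics update. For simplicity it assumes the network outputs only the state-dependent part of the natural parameters (e.g., fixed state noise) and that \(\boldsymbol{\lambda}^{*}\) from the update step is available as a detached tensor; all names and sizes are illustrative, not the paper's implementation.

```python
import torch

lam_net = torch.nn.Sequential(           # z_{t-1} -> natural params of z_t
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
opt = torch.optim.Adam(lam_net.parameters(), lr=1e-3)

def dynamics_step(z_prev_samples, lam_star):
    """One gradient step on Eq. 19: lam_bar is the Monte-Carlo estimate of
    Eq. 11; no gradients are propagated through the ELBO itself."""
    opt.zero_grad()
    lam_bar = lam_net(z_prev_samples).mean(dim=0)
    loss = 0.5 * torch.sum((lam_star - lam_bar) ** 2)
    loss.backward()
    opt.step()
    return loss.item()
```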
Given \(p_{\mathbf{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})=\mathcal{N}(\mathbf{z}_{t}\mid\mathbf{A}\mathbf{z}_{t-1},\mathbf{Q})\), the mapping from a realization of \(\mathbf{z}_{t-1}\) to the natural parameters of \(\mathbf{z}_{t}\) is given by \(\mathbf{\lambda}_{\mathbf{\theta}}(\mathbf{z}_{t-1})=\left[\mathbf{Q}^{-1}\mathbf{A}\mathbf{z}_{t-1}\quad-\frac{1}{2}\operatorname{vec}(\mathbf{Q}^{-1})\right]\). With this mapping, we can determine, in closed form, the prediction distribution given by eVKF. Assuming that \(q(\mathbf{z}_{t-1})=\mathcal{N}(\mathbf{z}_{t-1}\mid\mathbf{m}_{t-1},\mathbf{P}_{t-1})\), we can find the optimal variational prediction distribution by plugging \(\mathbf{\lambda}_{\mathbf{\theta}}(\mathbf{z}_{t-1})\) into Eq. (11) to find

\[\bar{q}(\mathbf{z}_{t})=\mathcal{N}(\mathbf{z}_{t}\mid\mathbf{A}\mathbf{m}_{t-1},\mathbf{Q}) \tag{20}\]

However, we know that the prediction step of the Kalman filter returns

\[\bar{p}(\mathbf{z}_{t})=\mathcal{N}(\mathbf{z}_{t}\mid\mathbf{A}\mathbf{m}_{t-1},\mathbf{Q}+\mathbf{A}\mathbf{P}_{t-1}\mathbf{A}^{\top}) \tag{21}\]

This comparison demonstrates that eVKF underestimates the true variance by an amount \(\mathbf{A}\mathbf{P}_{t-1}\mathbf{A}^{\top}\); a related issue has been examined when applying VI to time series models, as in Turner & Sahani (2011). In this example, we see that because the second natural parameter does not depend on at least second-order moments of \(\mathbf{z}_{t-1}\), the uncertainty provided by \(\mathbf{P}_{t-1}\) is not propagated forward. At least for the linear and Gaussian case, we can correct this with a post-hoc fix by adding \(\mathbf{A}\mathbf{P}_{t-1}\mathbf{A}^{\top}\) to the variance of the variational prediction. If we consider nonlinear Gaussian dynamics with \(p_{\mathbf{\theta}}(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})=\mathcal{N}(\mathbf{z}_{t}\mid\mathbf{m}_{\mathbf{\theta}}(\mathbf{z}_{t-1}),\mathbf{Q})\), then there does not exist an exact correction, since the true prediction distribution will not be Gaussian. Empirically, we have found that adding an extended Kalman filter-like correction (Sarkka, 2013) of \(\mathbf{M}_{t-1}\mathbf{P}_{t-1}\mathbf{M}_{t-1}^{\top}\) to the prediction distribution variance, where \(\mathbf{M}_{t-1}=\nabla\mathbf{m}_{\mathbf{\theta}}(\mathbf{m}_{t-1})\), helps to avoid overconfidence. In Appendix E.4 we show a case where omitting this additional variance term gives unsatisfactory results when the dynamical transitions are Gamma distributed.

## 4 Related works

Classic recursive Bayesian methods such as the particle filter (PF), extended Kalman filter (EKF), and unscented Kalman filter (UKF) are widely used for online state estimation (Sarkka, 2013). Typically, these methods assume a known generative model, but unknown parameters can also be learned by including them through expectation maximization (EM) or dual filtering (Haykin, 2002; Wan & Van Der Merwe, 2000; Wan & Nelson, 1997). While the PF can be used to learn the parameters of the dynamics in an online fashion, as in Kantas et al.
(2015), it suffers from the well-known issue of "weight degeneracy," limiting its applicability to low-dimensional systems. While methods from the subspace identification literature are frequently employed to estimate the underlying dynamics in an offline setting, they are often limited to linear systems (Buesing et al., 2012). Marino et al. (2018) and Zhao & Park (2020) (VJF), in contrast to eVKF, perform a single-step approximation at each time instant, which leads to a provably looser bound on the ELBO, as stated in Theorem 2. Zhao et al. (2022) (SVMC) use particle filtering to infer the filtering distribution and derive a surrogate ELBO for parameter learning, but because of weight degeneracy it is hard to scale this method to higher-dimensional SSMs. Campbell et al. (2021) (OVS) use a backward factorization of the joint posterior. Note that it updates the second most recent state with the most recent observation, so it is technically smoothing rather than filtering; furthermore, the computational complexity of this method can be prohibitive in the online setting, as is evident from Table 2.

## 5 Experiments

### Synthetic data and performance measures

We first evaluate and compare eVKF to other online variational methods as well as classic filtering methods using synthetic data. Since the ground truth is available for synthetic examples, we can measure the goodness of the inferred latent states and the learned dynamical system with reference to the true ones. To measure the filtering performance, we use the temporal average log density of the inferred filtering distribution evaluated at the true state trajectory: \(T^{-1}\sum_{t=1}^{T}\log q(\mathbf{Z}_{t};\boldsymbol{\lambda}_{t}^{*})\), where \(\boldsymbol{\lambda}_{t}^{*}\) are the optimal variational parameters of the approximation of the filtering distribution at time \(t\).

To assess the learning of the dynamics model, we sample points around the attractor manifold, evolve them one step forward, and calculate the KL divergence to the true dynamics: \(S^{-1}\sum_{i=1}^{S}\mathbb{D}_{\text{KL}}\left(p_{\boldsymbol{\theta}^{*}}(\mathbf{z}_{t+1}\mid\mathbf{Z}_{t}^{*})||p_{\boldsymbol{\theta}}(\mathbf{z}_{t+1}\mid\mathbf{Z}_{t}^{*})\right)\), where \(\mathbf{Z}_{t}^{*}\) are the perturbed samples around the attractor manifold (e.g. a stable limit cycle) of the true dynamics, \(p_{\boldsymbol{\theta}^{*}}\) is the learned distribution over the dynamics, and \(p_{\boldsymbol{\theta}}\) is the true distribution over the dynamics. This helps us evaluate the learned dynamics in the vicinity of the attractor, where most samples originate. The above divergence measures only the local structure of the learned dynamical system. To evaluate the global structure, we employ the Chamfer distance (Wu et al., 2021)

\[\mathbb{D}_{CD}(S_{1}||S_{2})=|S_{1}|^{-1}\sum\nolimits_{\mathbf{x}\in S_{1}}\min_{\mathbf{y}\in S_{2}}||\mathbf{x}-\mathbf{y}||_{2}+|S_{2}|^{-1}\sum\nolimits_{\mathbf{y}\in S_{2}}\min_{\mathbf{x}\in S_{1}}||\mathbf{y}-\mathbf{x}||_{2} \tag{22}\]

where \(S_{1}\) and \(S_{2}\) are two distinct sets of points. Usually, this metric is used to evaluate the similarity of point clouds. Intuitively, a low Chamfer distance means that trajectories from the learned dynamics generate a manifold (point cloud) close to that of the true dynamics, a signature that the attractor structure can be generated.
Since the Chamfer distance is not symmetric, we symmetrize it as \(\mathbb{D}_{CD}(S_{1},S_{2})=\frac{1}{2}(\mathbb{D}_{CD}(S_{1}||S_{2})+\mathbb{D}_{CD}(S_{2}||S_{1}))\) and take the logarithm.

**Chaotic recurrent neural network dynamics.** We first evaluate the filtering performance of eVKF. We consider the chaotic recurrent neural network (CRNN) system used in Campbell et al. (2021); Zhao et al. (2022)

\[p_{\boldsymbol{\theta}}(\mathbf{z}_{t+1}\mid\mathbf{z}_{t})=\mathcal{N}(\mathbf{z}_{t+1}\mid\mathbf{z}_{t}+\Delta\tau^{-1}(\gamma\mathbf{W}\tanh(\mathbf{z}_{t})-\mathbf{z}_{t}),\mathbf{Q})\]

and vary the latent dimensionality. Since we restrict ourselves to filtering, we fix the model parameters at their true values. In addition to the online variational methods, we also include classical filtering algorithms: the ensemble Kalman filter (EnKF) and the bootstrap particle filter (BPF) (Douc et al., 2014). Table 1 shows the RMSEs (mean \(\pm\) standard deviation over \(10\) trials of length \(250\)) under increasing latent dimensionality. Surprisingly, eVKF offers performance competitive with the BPF in the 2D case, a regime where the BPF is known to excel. Overall, eVKF compares favorably with the classic filtering algorithms as well as with similar online variational algorithms. We see that OVS performs better in the case \(L=64\); however, this comes at the cost of significantly higher computational complexity, as shown in Table 2.

**Learning nonlinear dynamics.** In this experiment we evaluate how well eVKF can learn the dynamics of a nonlinear system that we only have knowledge of through a sequential stream of observations \(\mathbf{y}_{1},\mathbf{y}_{2},\ldots\). These observations follow a Poisson likelihood with intensity given by a linear readout of the latent state. For the dynamics we consider a noise-corrupted Van der Pol oscillator, so that the state-space model for this system is given by

\[\mathbf{z}_{t+1,1}=\mathbf{z}_{t,1}+\tfrac{1}{\tau_{1}}\Delta\,\mathbf{z}_{t,2}+\sigma\epsilon\qquad\mathbf{z}_{t+1,2}=\mathbf{z}_{t,2}+\tfrac{1}{\tau_{2}}\Delta\left(\gamma(1-\mathbf{z}_{t,1}^{2})\mathbf{z}_{t,2}-\mathbf{z}_{t,1}\right)+\sigma\epsilon \tag{23}\]
\[\mathbf{y}_{t}\mid\mathbf{z}_{t}\sim\text{Poisson}(\mathbf{y}_{t}\mid\Delta\exp(\mathbf{C}\mathbf{z}_{t}+\mathbf{b})) \tag{24}\]

where \(\exp(\cdot)\) is applied element-wise, \(\Delta\) is the time bin size, and \(\epsilon\sim\mathcal{N}(0,1)\). In order to focus on learning the dynamical system, we fix \(\mathbf{\psi}=\{\mathbf{C},\mathbf{b}\}\) at the true values, and randomly initialize the parameters of the dynamics model so that we can evaluate how well eVKF performs filtering and learning of the dynamics. We train each method on 3500 data points, freeze the dynamics model, then infer the filtering posterior for 500 subsequent time steps. In Table 2 we report all metrics, in addition to the average time per step, for both the Poisson and Gaussian likelihood cases. In Figure 1E, we see that eVKF quickly becomes the lowest-RMSE filter and remains so for all 4000 steps.
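Returning to the Chamfer metric of Eq. (22) and its symmetrized, log-scaled form above, the following is a minimal NumPy sketch (our own illustration, not the authors' code) of how it can be evaluated for two sets of trajectory points:

```python
import numpy as np

def chamfer_cd(S1, S2):
    """D_CD(S1 || S2) as in Eq. (22): mean nearest-neighbour distance
    from S1 to S2 plus mean nearest-neighbour distance from S2 to S1."""
    d = np.linalg.norm(S1[:, None, :] - S2[None, :, :], axis=-1)  # (|S1|, |S2|) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def log_sym_chamfer(S1, S2):
    """Symmetrized and log-scaled Chamfer distance, as reported in the tables."""
    return np.log(0.5 * (chamfer_cd(S1, S2) + chamfer_cd(S2, S1)))

# Example: compare point clouds sampled from the true and learned dynamics.
S_true = np.random.randn(500, 2)
S_learned = S_true + 0.1 * np.random.randn(500, 2)
print(log_sym_chamfer(S_true, S_learned))
```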
\begin{table} \begin{tabular}{l l l l l} \hline \hline Method & \(L=2\) & \(L=16\) & \(L=32\) & \(L=64\) \\ \hline eVKF (ours) & \(\mathbf{0.047}\pm 6.4e{-4}\) & \(\mathbf{0.150}\pm 5.8e{-4}\) & \(\mathbf{0.250}\pm 1.5e{-3}\) & \(0.450\pm 5.8e{-3}\) \\ OVS & \(0.103\pm 6.4e{-4}\) & \(0.178\pm 5.8e{-4}\) & \(0.302\pm 1.5e{-3}\) & \(\mathbf{0.323}\pm 1.5e{-3}\) \\ VJF & \(0.105\pm 2.8e{-2}\) & \(0.288\pm 4.0e{-2}\) & \(0.400\pm 1.1e{-2}\) & \(0.711\pm 4.4e{-2}\) \\ EnKF (1,000) & \(0.115\pm 3.3e{-3}\) & \(0.437\pm 6.0e{-2}\) & \(0.619\pm 8.2e{-2}\) & \(0.620\pm 2.8e{-2}\) \\ BPF (10,000) & \(\mathbf{0.047}\pm 6.7e{-4}\) & \(0.422\pm 9.3e{-3}\) & \(0.877\pm 2.5e{-2}\) & \(1.660\pm 4.2e{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 1: **RMSEs of state estimation for chaotic RNN dynamics.** We show the mean \(\pm\) one standard deviation (over \(10\) trials) of latent state RMSEs. The latent dimensionality \(L\) varies from \(2\) up to \(64\). The numbers in parentheses are the ensemble size and the number of particles.

Figure 1: **Van der Pol oscillator with Poisson observations.** **A)** The filtering distribution inferred by eVKF over time; shading indicates the 95% credible interval. **B)** Zoomed-in view of the earliest observations. We plot the mean, and trajectories evolved from the filtered mean 5 steps ahead using a "snapshot" of the dynamics at that time; their ending positions are given by the \(\times\)'s. **C)** Same as before, but for the final observations. eVKF has learned the dynamics, leading to better filtering capabilities. **D)** True Van der Pol velocity field compared to the dynamics inferred by eVKF. **E)** Moving average RMSE of the filtering mean relative to the true dynamics, averaged over 10 trials; error bars indicate two standard errors.

To examine the computational cost, we report the actual run time per step. Note that OVS took many times longer per step.

**Continuous Bernoulli dynamics.** The constant base measure exponential family opens up interesting possibilities for modeling dynamics beyond additive, independent, Gaussian state noise. Such dynamics could be bounded (e.g., Gamma dynamics) or exist over a compact space (e.g., Beta dynamics). In this example, we consider nonlinear dynamics that are conditionally continuous Bernoulli (CB) (Loaiza-Ganem & Cunningham, 2019) distributed, i.e.

\[p_{\mathbf{\theta}}(\mathbf{z}_{t+1}\mid\mathbf{z}_{t})=\prod_{i}\mathcal{CB}(\mathbf{z}_{t+1,i}\mid\mathbf{f}_{\mathbf{\theta}}(\mathbf{z}_{t})_{i})\qquad p(\mathbf{y}_{n,t}\mid\mathbf{z}_{t})=\mathcal{N}(\mathbf{y}_{n,t}\mid\mathbf{C}_{n}^{\top}\mathbf{z}_{t},\mathbf{r}_{n}^{2}) \tag{25}\]

where \(\mathbf{f}_{\mathbf{\theta}}:[0,1]^{L}\rightarrow[0,1]^{L}\), and \(n=1,\dots,N\). We choose a factorized variational filtering distribution such that \(q(\mathbf{z}_{t})=\prod_{i}\mathcal{CB}(\mathbf{z}_{t,i}\mid\mathbf{\lambda}_{t,i})\), where \(\mathbf{\lambda}_{t,i}\) is the \(i\)-th natural parameter at time \(t\). In Fig. 2 we show that eVKF is able to learn an accurate representation of the dynamics underlying the observed data. Fig. 2B also demonstrates that a CB prior over the dynamics is able to generate trajectories much more representative of the true data than a Gaussian approximation. These results show that CB dynamics could be a proper modeling choice if, a priori, the dynamics are known to be compact and to exhibit switching-like behavior.
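Since the continuous Bernoulli is less familiar than the Gaussian, the following sketch (our own illustration, not the authors' code) samples factorized CB transitions as in Eq. (25); the inverse-CDF formula is derived from the CDF given by Loaiza-Ganem & Cunningham (2019), and `f_theta` is a hypothetical mean map:

```python
import numpy as np

def sample_cb(lam, rng, eps=1e-6):
    """Elementwise inverse-CDF sample from the continuous Bernoulli CB(lam) on [0, 1]."""
    lam = np.clip(np.asarray(lam, dtype=float), eps, 1.0 - eps)
    u = rng.uniform(size=lam.shape)
    near_half = np.abs(lam - 0.5) < 1e-4      # CB(1/2) reduces to Uniform(0, 1)
    lam_safe = np.where(near_half, 0.6, lam)  # avoid numerical issues near 1/2
    x = (np.log((u * (2 * lam_safe - 1) + 1 - lam_safe) / (1 - lam_safe))
         / np.log(lam_safe / (1 - lam_safe)))
    return np.where(near_half, u, x)

# One step of the factorized CB dynamics z_{t+1,i} ~ CB(f_theta(z_t)_i):
rng = np.random.default_rng(0)
f_theta = lambda z: 1.0 / (1.0 + np.exp(-4.0 * (z - 0.5)))  # hypothetical mean map on [0,1]^L
z_t = rng.uniform(size=3)
z_next = sample_cb(f_theta(z_t), rng)
```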
In Table 3 we report the performance of eVKF and the other methods on synthetic data generated from the state-space model above, using both CB and Gaussian approximations. Notably, the Chamfer metric is lower within each method when using the CB approximation, showing that even though the true filtering distribution might not be exactly a CB distribution, it is still a good choice.

### Electrophysiological recording during a reaching task

To evaluate eVKF with real-world neural data, we considered electrophysiological recordings taken from monkey motor cortex during a reaching task (Churchland et al., 2012). This dataset has typically been used to evaluate latent variable modeling of neural population activity (Pei et al., 2021). In each trial of the experiment, a target position is presented to the monkey, after which it must wait a randomized amount of time until a "Go" cue, signifying that the monkey should reach toward the target. We first take 250 random trials from the experiment, and use latent states inferred by Gaussian process factor analysis (GPFA) (Yu et al., 2009) to pretrain eVKF's model of the dynamics. Then, we use eVKF to perform filtering and update the dynamics model on a disjoint set of 250 trials.

\begin{table} \begin{tabular}{l|c c c c|c c c c} & \multicolumn{4}{c|}{Gaussian likelihood} & \multicolumn{4}{c}{Poisson likelihood} \\ \hline Method & \(\log q(\mathbf{z}_{t})\uparrow\) & KL \(\downarrow\) & log(Chamfer) \(\downarrow\) & Time (ms) & \(\log q(\mathbf{z}_{t})\uparrow\) & KL \(\downarrow\) & log(Chamfer) \(\downarrow\) & Time (ms) \\ \hline eVKF & **1.15** & **6.87** & **5.66 \(\pm\) 0.93** & 104 & **0.57** & **7.131** & **2.30 \(\pm\) 0.32** & **13** \\ OVS & -0.92 & 13.48 & 7.76 \(\pm\) 0.16 & 6270 & -0.21 & 9.132 & 3.76 \(\pm\) 0.32 & 4150 \\ VJF & -3.58 & 134.3 & 6.61 \(\pm\) 0.26 & **30** & -1.24 & 325.5 & 3.99 \(\pm\) 0.23 & 100 \\ SVMC & – & 84.83 & 5.85 \(\pm\) 0.39 & 314 & – & 410.2 & 4.06 \(\pm\) 0.22 & 730 \\ \hline \end{tabular} \end{table} Table 2: **Metrics of inference for Van der Pol dynamics.** We report the log-likelihood of the ground truth under the inferred filtering distributions, the KL of one-step transitions, the log symmetric Chamfer distance of trajectories drawn from the learned prior to trajectories realized from the true system, and computation time per time step. SVMC uses 5000 particles.

\begin{table} \begin{tabular}{l|c c c|c c} & \multicolumn{3}{c|}{Continuous Bernoulli} & \multicolumn{2}{c}{Gaussian} \\ \hline Method & \(\log q(\mathbf{z}_{t})\uparrow\) & KL \(\downarrow\) & log(Chamfer) \(\downarrow\) & \(\log q(\mathbf{z}_{t})\uparrow\) & log(Chamfer) \(\downarrow\) \\ \hline eVKF & **2.01** & **0.057** & **-0.19 \(\pm\) 0.25** & **6.15** & **2.55 \(\pm\) 0.36** \\ OVS & – & – & – & 3.61 & 45.74 \(\pm\) 16.9 \\ VJF & 1.94 & 2.78 & 3.22 \(\pm\) 0.50 & -20.3 & 3.43 \(\pm\) 0.13 \\ SVMC (5000) & – & 2.37 & 3.24 \(\pm\) 0.45 & – & 3.66 \(\pm\) 0.22 \\ \hline \end{tabular} \end{table} Table 3: **Metrics of inference for continuous Bernoulli dynamics.** We use both CB and Gaussian approximations for the methods to which they are applicable. eVKF achieves the highest log-likelihood of latent trajectories, the lowest KL divergence of the learned dynamics, and the lowest Chamfer distance. The downside of using Gaussian approximations is most apparent in the Chamfer distance, which is always worse within each method. Note that we do not calculate the KL measure when Gaussian approximations are used.
In order to determine if eVKF learns a useful latent representation, we examine whether the velocity of the monkey's movement can be linearly decoded from the inferred filtering distribution. In Fig. 3B, we show the hand position decoded from the smoothed firing rates inferred by eVKF, in parallel with the GPFA result in Fig. 3C. eVKF achieves competitive performance even though GPFA is a smoothing method. In Fig. 3D, we plot the single-trial firing rates of some neurons over selected reaching conditions, showing that even for single trials, eVKF can recover firing rates decently.

## 6 Conclusion

We tackled the problem of inferring latent trajectories and learning the dynamical system generating them in real time; for Poisson observations, processing took \(\sim 10\) ms per sample. We proposed a novel online recursive variational Bayesian joint filtering method, eVKF, which allows rich and flexible stochastic state transitions from any constant base measure exponential family, for arbitrary observation distributions. Our two-step variational procedure is analogous to the Kalman filter, and achieves a tighter evidence lower bound than previous methods. We demonstrated that eVKF performs on par with competing online variational methods for filtering and parameter learning. For future work, we will focus on extensions to the full exponential family of distributions, characterizing the lost variance in more generality, and improving performance as the latent dimensionality is scaled up. Future work will also incorporate learning the parameters of the likelihood \(\mathbf{\psi}\) into eVKF, rather than focusing only on the dynamics model parameters and filtering states.

Figure 3: **A)** True hand movements from fixation point to target. **B)** The hand position given by the velocity that we linearly decode using eVKF's inferred firing rates. **C)** Same as previous, but for GPFA. We see that the \(R^{2}\) values and decoded hand positions using eVKF are competitive with GPFA. **D)** Single-trial (thin lines) and condition-average (bold lines) firing rates for select neurons and tasks, aligned to movement onset (demarcated with green dots).

Figure 2: Continuous Bernoulli dynamics. **A)** Velocity field for both \(\mathbb{E}(\mathbf{z}_{t}\mid\mathbf{f}_{\mathbf{\theta}}(\mathbf{z}_{t-1}))\) and \(\mathbf{f}_{\mathbf{\theta}}(\mathbf{z}_{t-1})\) from the synthetically created continuous Bernoulli dynamics, and those inferred by eVKF. We see that, in the mean, there are limit cycle dynamics, but for the states to actually saturate at the boundary there have to be strong attractor dynamics in parameter space. **B)** Inferred filtering distributions when using Gaussian approximations compared to continuous Bernoulli approximations; Gaussian distributions are able to infer the latent state well, but they cannot generate similar trajectories, as we see from trajectories propagated forward through the learned dynamics (shaded in gray).

## Acknowledgements

MD and IP were supported by an NSF CAREER Award (IIS-1845836) and NIH RF1DA056404. YZ was supported in part by the National Institute of Mental Health Intramural Research Program (ZIC-MH002968). We thank the anonymous reviewers for their helpful feedback and comments, and Josue Nassar for helpful suggestions for improving the manuscript.
2308.06920
ChatGPT in Drug Discovery: A Case Study on Anti-Cocaine Addiction Drug Development with Chatbots
The birth of ChatGPT, a cutting-edge language model-based chatbot developed by OpenAI, ushered in a new era in AI. However, due to potential pitfalls, its role in rigorous scientific research is not clear yet. This paper vividly showcases its innovative application within the field of drug discovery. Focused specifically on developing anti-cocaine addiction drugs, the study employs GPT-4 as a virtual guide, offering strategic and methodological insights to researchers working on generative models for drug candidates. The primary objective is to generate optimal drug-like molecules with desired properties. By leveraging the capabilities of ChatGPT, the study introduces a novel approach to the drug discovery process. This symbiotic partnership between AI and researchers transforms how drug development is approached. Chatbots become facilitators, steering researchers towards innovative methodologies and productive paths for creating effective drug candidates. This research sheds light on the collaborative synergy between human expertise and AI assistance, wherein ChatGPT's cognitive abilities enhance the design and development of potential pharmaceutical solutions. This paper not only explores the integration of advanced AI in drug discovery but also reimagines the landscape by advocating for AI-powered chatbots as trailblazers in revolutionizing therapeutic innovation.
Rui Wang, Hongsong Feng, Guo-Wei Wei
2023-08-14T03:43:57Z
http://arxiv.org/abs/2308.06920v2
# ChatGPT in Drug Discovery: A Case Study on Anti-Cocaine Addiction Drug Development with Chatbots

###### Abstract

The birth of ChatGPT, a cutting-edge language model-based chatbot developed by OpenAI, ushered in a new era in AI. However, due to potential pitfalls, its role in rigorous scientific research is not clear yet. This paper vividly showcases its innovative application within the field of drug discovery. Focused specifically on developing anti-cocaine addiction drugs, the study employs GPT-4 as a virtual guide, offering strategic and methodological insights to researchers working on generative models for drug candidates. The primary objective is to generate optimal drug-like molecules with desired properties. By leveraging the capabilities of ChatGPT, the study introduces a novel approach to the drug discovery process. This symbiotic partnership between AI and researchers transforms how drug development is approached. Chatbots become facilitators, steering researchers towards innovative methodologies and productive paths for creating effective drug candidates. This research sheds light on the collaborative synergy between human expertise and AI assistance, wherein ChatGPT's cognitive abilities enhance the design and development of potential pharmaceutical solutions. This paper not only explores the integration of advanced AI in drug discovery but also reimagines the landscape by advocating for AI-powered chatbots as trailblazers in revolutionizing therapeutic innovation.

Keywords: Drug Discovery, ChatGPT, Cocaine Addiction, AutoEncoder, Langevin Equation

## 1 Introduction

Chatbots represent a typical artificial intelligence system capable of comprehending user queries and providing automated, human-like responses [1], standing as one of the most prevalent instances of intelligent Human-Computer Interaction (HCI) [2]. Harnessing the power of natural language processing (NLP) and machine learning technologies, chatbots offer significant potential in various domains, including customer service, healthcare, banking, language translation, content writing, code debugging, and scientific discovery, despite the relative novelty of applying chatbots in the scientific field. The advent of chatbots, especially large language models (LLMs) such as ChatGPT, developed by OpenAI in late 2022, has revolutionized scientific discovery [3]. Firstly, ChatGPT optimizes research processes by rapidly parsing vast amounts of literature and identifying key findings with its built-in web-browsing plugin. This can save considerable time for researchers, thus facilitating the exploration of complex scientific problems. Secondly, ChatGPT provides researchers with a platform to analyze data, visualize results, convert files among various formats, and solve mathematical problems with its built-in code interpreter. Thirdly, ChatGPT can assist in enhancing scientific writing by providing feedback on the clarity and logical structure of scientific content. The combination of these powerful capabilities fosters a new era in research, improving the efficiency and accuracy of scientific exploration in various fields, including molecular and biological science. By expediting the pace of molecular discovery and offering novel perspectives, chatbots such as ChatGPT are reshaping the landscape of life science research. Chatbots can be applied to assist molecular science research in a variety of ways.
For example, ChatGPT has been leveraged to accurately annotate single-cell RNA sequencing data, connecting rare cell types to their functions and unveiling specific differentiation trajectories of cell subtypes that were previously overlooked [4]. This assistance by ChatGPT could potentially lead to the discovery of key cells that disrupt differentiation pathways, offering fresh insights into cellular biology and related diseases. Moreover, White et al. demonstrated that InstructGPT can help in writing accurate code across a variety of topics in chemistry [5]. The application of prompt engineering strategies further improved the accuracy of models by 30 percentage points, significantly enhancing the efficiency and accuracy of computational chemistry studies. In addition, ChatGPT has shown potential in identifying disease-specific agents, compounds, genes, and more, enabling faster and more accurate pinpointing of potential targets for therapeutic intervention [6]. Furthermore, ChatGPT can generate novel compound structures that have a high likelihood of clinical success [7] and predict the pharmacokinetic (PK), pharmacodynamic (PD), and toxicity properties of these compounds [6]. This capacity to predict compound behavior has the potential to reduce the need for expensive and time-consuming lab tests.

Moving forward to more specific challenges within molecular science, chatbots could make significant contributions to drug addiction treatment and prevention, a global health crisis. Effective strategies to combat drug addiction often involve a combination of behavioral therapy, counseling, and medication, all directed towards assisting individuals in regaining control of their lives and attaining prolonged sobriety. Drug addiction is intrinsically complex, characterized by a convergence of biological, psychological, and social elements. These intricacies, compounded by profound neurobiological transformations, present formidable challenges to both its understanding and its mitigation. Chatbots, with their capabilities, could offer valuable assistance in this domain. For example, a study by Lee et al. introduced an "anti-drug chatbot" specifically tailored for the younger demographic. This innovative system has the capability to discern potential risks from user queries and direct the individual to professional consultants for further assistance and guidance [8].

It is worth noting that machine learning (ML) and artificial intelligence (AI) tools have been pivotal in advancing our understanding of drug addiction and substance abuse. Gong et al. developed a data-driven, end-to-end generative AI framework that integrates dynamic brain network modeling with a novel network architecture. This framework highlights the potential of AI in detecting addiction-related brain circuits with dynamic properties, offering insights into the underlying mechanisms of addiction [9]. In our prior research, we underscored the critical roles of the dopamine transporter (DAT), serotonin transporter (SERT), and norepinephrine transporter (NET) as central players in cocaine dependence. Leveraging machine learning algorithms, we meticulously dissected protein-protein interaction (PPI) networks and constructed models from extensive datasets of inhibitors. Our models forecasted drug repurposing avenues and potential side effects, providing a systematic AI-driven framework for anti-cocaine addiction drug development [10].
ML-based approaches have been extensively applied to drug discovery [11, 12, 13]. Given the rise of chatbots and AI, we recognize the promising potential of these technologies to enhance AI-driven algorithms in drug addiction research projects. The objective of this project is to harness the capabilities of ChatGPT, specifically GPT-4 equipped with multiple plugins, to promote the development of multi-target anti-cocaine addiction drugs. In this study, we investigate the utility of ChatGPT as a virtual assistant that offers insightful concepts, elucidates mathematical and statistical methodologies, and provides coding support. To optimize our anti-cocaine addiction drug discovery project, we assign ChatGPT three human-like personas — 1) idea generation, 2) methodology clarification, and 3) coding assistance — which frequently assist us in developing a model that can generate potential multi-target anti-cocaine addiction leads. Beyond these three roles, we engage in regular consultations with GPT-4 on interpreting the properties of potential leads, seeking guidance on scientific writing, and so on. Although the benefits of using ChatGPT in drug discovery are significant, ensuring the accuracy and reliability of the responses provided by ChatGPT remains a major concern. Despite being trained on extensive datasets, ChatGPT does not come with a guarantee of consistent precision or relevance in its responses. As such, it is imperative for researchers to utilize ChatGPT judiciously and always cross-reference its suggestions with authoritative sources. Applied properly and wisely, with a discerning mind, ChatGPT could substantially accelerate the pace of drug discovery and other scientific pursuits.

In this work, the first persona of ChatGPT is tasked with understanding related work on AI-assisted drug addiction research, with a particular focus on our prior projects that utilized the Generative Network Complex (GNC) [14, 15] for drug-like molecule generation. Concurrently, this persona offers recommendations on enhancing the GNC model mathematically and statistically, aiming to generate anti-cocaine addiction leads targeting multiple transporters, namely DAT, NET, and SERT. After consultation with GPT-4, we decided to integrate stochastic-based methodologies to steer the optimization process within the latent space of the existing GNC model. Specifically, we employed the Langevin equation to modify the latent space vector in the molecular generator of GNC (see Figure 1, Stochastic-based Molecular Generator). In addition, upon advice from GPT-4, we examined the binding affinities for multiple targets concurrently (see Figure 1, Binding Affinities Predictors). This involved the creation of a series of binding affinity predictors capable of estimating potential lead affinities to DAT, NET, and SERT simultaneously. Moreover, the second persona of GPT-4 acts as an adept browser, facilitating our comprehension of various mathematical and statistical principles, including Ito's lemma, the Wiener process, white noise, the Langevin equation, the Fokker-Planck equation, etc. Furthermore, we applied the third persona of GPT-4 to provide instant coding assistance, including debugging, generating figures, and interpreting code. With the combined expertise of these three personas, we successfully developed a new platform, called the Stochastic Generative Network Complex (SGNC), which generated 15 promising multi-target anti-cocaine addiction leads.
The workflow of the SGNC assisted by ChatGPT can be viewed in Figure 1. However, we must point out that the application of chatbots to drug discovery is full of challenges due to the current limits of generative AI. There is a pressing need to understand chatbots' capabilities and recognize their boundaries in their assistant role in drug discovery.

## 2 Results

### A case study: Anti-cocaine addiction drug discovery assisted by ChatGPT

#### 2.2.1 Personifying ChatGPT: Role designation

Personification refers to the process of assigning human-like characteristics or a persona to an AI model. In this project, we have strategically personified ChatGPT to improve its capacity to assist our anti-cocaine addiction drug discovery initiative. We have tailored three personas of ChatGPT to fit three roles within the project: 1) idea generation, 2) methodology clarification, and 3) coding assistance. It is worth mentioning that we personified ChatGPT in three separate chat sessions, none of which has access to data from the others.

For the role of idea generation, we assigned ChatGPT the 1st persona of a professor with specific expertise in AI-assisted drug discovery, focusing particularly on treating cocaine addiction (see Dialogue 1). This persona was designed to guide Ph.D. students and postdocs on this specific project, offering insightful explanations, suggestions, or expert advice based on extensive knowledge and experience in the field. We provided it with questions, scenarios, and research plans related to the application of AI in drug discovery for treating cocaine addiction, and instructed it to focus exclusively on the subject matter and offer guidance as if it were mentoring in a real-life research setting. For the first persona of ChatGPT, we enabled three plugins: WebPilot, ScholarAI, and AskYourPDF. These plugins aim to enhance ChatGPT's ability to comprehend the background of anti-cocaine addiction drug discovery comprehensively. With these plugins enabled, ChatGPT is capable of enumerating up-to-date sources on the web, as well as accessing insights from previous works by other researchers. Complete dialogues regarding the 1st persona of ChatGPT can be found in the Supporting Information S4.1.

In order to elucidate the methodology involved in this project, we assigned the 2nd persona of ChatGPT the role of a professional researcher who is well-versed in diffusion models and statistical methodologies (see Dialogue 2). This persona aims to provide clear explanations, insights, or recommendations in LaTeX format. This specific persona was chosen because our 1st ChatGPT persona provided an insightful idea based on statistical strategies and diffusion models (refer to Section 2.2.3 for details). Furthermore, we enabled three plugins (WebPilot, Link Reader, and Wolfram) for this second persona. The choice of WebPilot and Link Reader helps ChatGPT unlock web sources related to statistical methods, while the inclusion of Wolfram provides access to computational resources, mathematical tools, curated knowledge, and real-time data through Wolfram's software, significantly enhancing the mathematical and statistical utility of this persona. Complete dialogues regarding the 2nd persona of ChatGPT can be found in the Supporting Information S4.2.
We designated the third persona of ChatGPT as a Python coding specialist, with an emphasis on artificial intelligence and figure generation (see Dialogue 3). This persona is tasked with offering clear explanations, code snippets, and efficiency optimization for our coding tasks. Specifically, for figure generation, we prefer that ChatGPT utilizes Plotly, a Python-based plotting library. Additionally, we enabled three plugins for this persona: WebPilot, ChatWithGit, and Prompt Perfect. WebPilot ensures easy access to websites regarding coding skills, ChatWithGit enables access to GitHub, and Prompt Perfect aids in generating well-formed prompts. Complete dialogues regarding the 3rd persona of ChatGPT can be found in the Supporting Information S4.3.

#### 2.2.2 Background comprehension: ChatGPT summary of past work

For the 1st persona of ChatGPT, we initiated the process by feeding GPT-4 relevant literature to ensure it has a thorough understanding of the fundamental concepts in cocaine addiction. These concepts include neurotransmitters, the dopamine hypothesis of addiction, the reward pathway of the mesolimbic dopamine system, pharmacotherapy for cocaine addiction, and machine learning approaches in cocaine addiction-related analysis. Next, we acquainted GPT-4 with our prior research on a generative model for the automated generation of drug-like molecules [14]. This step is crucial for ensuring that GPT-4 is well-versed in the context of our previous work, enabling it to provide tailored assistance that is directly aligned with our specific objectives. In particular, we have two primary goals: 1) to apply mathematical or statistical techniques to develop an enhanced model, building upon our former Generative Network Complex (GNC) model [14], and 2) to refine this model so that it is capable of generating new molecules that could bind to multiple targets simultaneously. To ensure that GPT-4 had effectively comprehended the background materials, we tasked it with summarizing the main concepts of the paper we provided and explaining the key components of the GNC model, as shown in Dialogue 4. Upon evaluation and based on our expertise, we believed that GPT-4 had successfully integrated the background materials and could assist our project in a manner tailored to our needs. Therefore, the next step was to consult the 1st persona of ChatGPT for valuable ideas; the results are presented in Section 2.3.1.

Finally, GPT-4 recommended integrating the GNC model with other techniques specifically tailored for multi-target tasks. For instance, GPT-4 proposed the incorporation of alternative machine learning methodologies to predict the effectiveness of a molecule against multiple targets, which would then guide the generation of new molecules within the GNC framework. While this recommendation appeared somewhat vague, we sought more detailed explanations in Section 2.3.3. After assessing the recommendations from the 1st persona of GPT-4, we were specifically intrigued by its first suggestion concerning adjustments to the optimization algorithm.
Given that our previous optimization process in the GNC model was conducted through gradient descent in the latent space, we solicited insights from GPT-4 on potential mathematical or statistical approaches that could be employed to enhance this optimization process within the latent space. Consequently, GPT-4 provided us with five potential strategies: 1) multi-objective optimization, 2) regularization techniques, 3) stochastic optimization, 4) Bayesian optimization, and 5) reinforcement learning. Among these, stochastic optimization attracted our interest, as strategies involving stochastic algorithms have gained popularity in diffusion models, which have achieved remarkable success in generative tasks. In light of this, we decided to delve deeper into stochastic approaches to tap their potential for generating promising new molecules with multi-target specificity, and especially for advancing our research in anti-cocaine addiction drug discovery. Therefore, our follow-up question to GPT-4 pertained to the application of stochastic-based methods, particularly those employed in diffusion models [19], to the optimization process involved in latent space editing within our GNC model. Dialogue 7 shows the feedback from GPT-4. First, GPT-4 provided a concise description of the diffusion model, elucidating that such models introduce stochastic noise into data through a series of diffusion steps and train a neural network to reverse the diffusion process, reconstructing desired data samples from the noise. This explanation aligns well with the existing literature on diffusion models.

Next, GPT-4 advised applying an approach similar to that used in diffusion models to guide the optimization process within the latent space of our GNC model. Instead of employing conventional gradient descent, GPT-4 recommended the integration of stochastic updates for enhanced manipulation of our latent space vectors. As highlighted by GPT-4, this approach has several benefits: 1) avoidance of the local minima issue, which is often a challenge in optimization tasks, 2) a balance between exploration and exploitation through noise, which is imperative for the generation of multi-target inhibitors, and 3) the capability to generate more diverse and natural molecules owing to the noise introduced. We decided to partially accept the suggestions from GPT-4, given that our previous work had already incorporated a perturbation of the encoded latent vector using standard Gaussian noise to aid in the generation of novel compounds [15]. This scheme is referred to as Latent Space Randomization (LSR). Although LSR can help generate new compounds that significantly diverge from the initial seed (note: the term 'seed' refers to the initial point of origin or reference from which further variations or iterations are developed), it compromises the faithfulness of the decoder. This is because the LSR vector from the generator deviates from the original distribution that the well-trained decoder is accustomed to. Therefore, in this work, rather than merely adding Gaussian noise to the latent space vector, we aim to seek deeper and more detailed insights from GPT-4 regarding how to implement stochastic approaches to guide the optimization process within the latent space.
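To give a concrete picture of the kind of stochastic latent-space update under discussion here (and ultimately adopted below via the Langevin equation), the following is a minimal sketch; `grad_energy` is a hypothetical gradient of a scalar objective built, for instance, from predicted binding affinities, and all names are our own illustration rather than the SGNC implementation:

```python
import numpy as np

def langevin_step(z, grad_energy, step_size=1e-3, rng=None):
    """One overdamped Langevin update of a latent vector: a deterministic
    drift toward lower energy plus Gaussian noise that keeps exploring."""
    rng = rng or np.random.default_rng()
    noise = rng.standard_normal(z.shape)
    return z - step_size * grad_energy(z) + np.sqrt(2.0 * step_size) * noise

# Hypothetical usage: push a latent seed toward strong predicted multi-target binding.
# z = seed_latent_vector
# for _ in range(1000):
#     z = langevin_step(z, grad_energy)
```

Compared with plain gradient descent, the noise term lets the iterate escape shallow local minima while the drift still biases it toward low-energy (high-affinity) regions.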
Our intention is to maintain the faithfulness of the decoder while also promoting diversity in the generation of novel multi-target inhibitors. Seeking further insights from GPT-4 on how we might implement stochastic approaches to guide the optimization process in the latent space, we received an initially vague response: GPT-4 suggested that we need to define a stochastic process to direct the optimization in the latent space. However, this response lacked the specificity and utility we needed. Thus, we posed a follow-up question, seeking more clarity on the specific stochastic differential equations (SDEs) that could be employed in our GNC model. Presented with this more specific request, GPT-4 suggested applying the Langevin equation to our GNC model. The Langevin equation describes the dynamics of diffusion processes, such as the random motion of particles over time in the particle's velocity space, and takes into account both deterministic forces and random forces. We decided to proceed with this suggestion: in our context, we can treat the force that pushes the system towards lower energy as the deterministic force, while the random force in the Langevin equation can be considered the force prompting the system to explore the latent space. With an initial seed (i.e., the initial latent space vector) given to our molecular generator, we can iteratively update it according to the Langevin equation. This process can lead to the creation of a new and optimized molecule. We detail the development of this Langevin-dynamics-inspired optimization method in the following section.

#### 2.2.4 Methodology clarification: ChatGPT's explanatory function

We also gave GPT-4 a second persona, that of a professional researcher who is well-versed in diffusion models and statistical methodologies. This persona takes the role of methodology clarification and explanation, guiding us in understanding complex mathematical and statistical approaches. Notably, this persona has been instrumental in helping us understand concepts such as the Langevin equation, the Fokker-Planck equation, Ito's lemma, the Wiener process, and Gaussian white noise [20]. Despite the significant contributions of this second persona to our understanding of a range of theoretical concepts, it provided inaccurate definitions of the Fokker-Planck equation and the Langevin equation on certain occasions. We had to correct the model and prompt it repeatedly until it produced accurate definitions. Importantly, we wish to emphasize that this persona of GPT-4 primarily serves as a source of explanations and references; it is always the responsibility of researchers to ensure the reliability of responses from GPT-4 through meticulous cross-validation of the provided information. Details about the dialogue with the 2nd persona of GPT-4 can be found in the Supporting Information S4.2.

#### 2.2.5 Coding efficiency: Utilizing ChatGPT's coding ability

We assigned GPT-4's third persona the role of an expert Python coder, specifically knowledgeable in artificial intelligence and figure generation using tools such as Plotly, a popular data visualization library. This persona is intended to provide coding assistance, including debugging, generating figures, and offering insightful feedback based on error messages, thereby helping researchers enhance their coding efficiency. Furthermore, we integrated GitHub Copilot into our VS Code development environment.
GitHub Copilot, a product developed collaboratively by GitHub, OpenAI, and Microsoft, provides autocomplete-style suggestions to expedite the coding process. It employs a generative AI model capable of understanding code context and generating appropriate code snippets, thereby significantly aiding in coding tasks and offering a smooth coding experience. Details about the dialogue with the 3rd persona of GPT-4 can be found in the Supporting Information S4.3.

### ChatGPT assisted strategization of anti-cocaine addiction drug discovery: Key interventions and results

#### 2.3.1 ChatGPT guided strategy for selection of references and seed molecules

Choosing suitable reference compounds is crucial, as they guide the SGNC in generating novel molecules effective against multiple cocaine transporters. The 1st persona of ChatGPT suggested we consider modifications to the similarity constraints (refer to Dialogue 5, suggestion 4). Pursuing further clarity, we asked GPT-4 what similarity score we could use. In response, we were provided with five distinct metrics: 1) Tanimoto similarity, 2) cosine similarity, 3) Dice similarity, 4) Euclidean distance, and 5) molecular shape similarity, as indicated in Dialogue 8. After limiting our molecule representations to latent space vectors, GPT-4 pinpointed cosine similarity as the most suitable metric; the reasons are given in Dialogue 9. After checking multiple references [21, 22], we found that cosine similarity \(S_{\text{C}}\) is widely used in measuring similarities between molecules. Therefore, we decided to proceed with the suggestion from GPT-4. The mathematical definition of cosine similarity \(S_{\text{C}}\) can be found in the Supporting Information S1.2.

In addition to similarity scores, we consulted GPT-4 regarding additional factors to consider when selecting reference molecules. GPT-4 highlighted five critical parameters: binding affinity, pharmacokinetics, molecular weight, log \(P\), and the number of rotatable bonds of each reference molecule (refer to Dialogue 10). Given that our focus here is on choosing candidate reference compounds in silico rather than optimizing leads, we decided not to factor in the pharmacokinetic properties. Furthermore, since the number of rotatable bonds correlates with binding affinity, we take binding affinities into consideration in its stead. In addition, as suggested in Dialogue 12, the selection of reference compounds follows Lipinski's rule of five [23]. Therefore, guided by GPT-4, we selected one reference compound from each of the DAT-Inhibitors, NET-Inhibitors, and SERT-Inhibitors datasets (detailed information on the datasets can be found in Section 4.1): CHEMBL113621 from DAT-Inhibitors, CHEMBL1275709 from NET-Inhibitors, and CHEMBL173344 from SERT-Inhibitors. Each reference molecule has a binding affinity to its respective transporter of less than -9.54 kcal/mol. Note that a \(\Delta G\) value less than -9.54 kcal/mol (or \(K_{\mathrm{i}}\) less than 0.1 \(\mu\)M) indicates that the drug binds very tightly to its target [24]. Moreover, the selection of reference compounds follows Lipinski's rule of five, which stipulates that an orally active drug should meet four physicochemical criteria: 1) molecular weight (MW) \(\leq 500\) daltons, 2) octanol-water partition coefficient (log \(P\)) \(\leq\) 5, 3) number of hydrogen bond donors (nHD) \(\leq\) 5, and 4) number of hydrogen bond acceptors (nHA) \(\leq\) 10.
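As a minimal sketch of the two screens just described (our own illustration, not the SGNC code), cosine similarity between latent vectors and a Lipinski rule-of-five check might look as follows; the RDKit descriptor calls are standard, while the thresholds mirror the criteria above:

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, Lipinski

def cosine_similarity(u, v):
    """S_C between two latent-space vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def passes_lipinski(smiles):
    """Lipinski's rule of five: MW <= 500, logP <= 5, nHD <= 5, nHA <= 10."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:  # unparsable SMILES string
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Crippen.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)
```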
Furthermore, each reference molecule displayed an average cosine similarity (Avg \(S_{\mathrm{C}}\)) greater than 0.40 to its respective dataset. Notably, within the DAT-Inhibitors dataset, 31 molecules showed a similarity score exceeding 0.7 to the selected reference molecule. Similarly, 15 compounds in the NET-Inhibitors dataset and 12 in the SERT-Inhibitors dataset achieved scores above 0.7 with their chosen reference molecules. A summary of the physicochemical properties of the three reference compounds can be found in Table 1, and their 2D molecular structures can be viewed in Figure 2 **a)**, **b)**, and **c)**.

For the seed compound, we selected a molecule with predicted binding affinities of -7.44, -13.36, and -13.13 kcal/mol for DAT, NET, and SERT, respectively. Despite its weak inhibitory effect on DAT, we adjusted the hyperparameters in the stochastic molecular generator to enable the newly generated compounds to share more moieties with DAT inhibitors, thereby compensating for this weak binding to DAT.

#### 2.3.2 ChatGPT aided multi-objective drug-target interaction modeling

To predict the binding affinities of newly generated molecules to four targets (DAT, NET, SERT, and hERG), we aimed to construct four binding affinity predictors. Initially, we sought guidance from GPT-4's 3rd persona on the most suitable machine learning algorithms, given our datasets' specific attributes (sample size, feature size, and label). The recommendations of GPT-4 are detailed in Dialogue 11. In our former study [10], gradient boosting decision trees (GBDT) were utilized to train binding affinity predictors on the DAT-Inhibitors, NET-Inhibitors, SERT-Inhibitors, and hERG-Inhibitors datasets. The resulting 10-fold Pearson correlation coefficients were 0.78, 0.76, 0.76, and 0.68, respectively, serving as our baseline. Given our access to robust computational resources via high-performance computing (HPC), we decided to develop four deep neural networks. All four predictors were built and trained using PyTorch. Each network consisted of three hidden layers, with 512, 1024, and 512 hidden neurons, respectively. The networks were trained over 1000 epochs, with a learning rate of 0.0001 for the first 500 epochs and 0.00001 for the remaining 500 epochs. The Adam optimizer was chosen for this task. Researchers can request a template of PyTorch code for building a deep neural network via ChatGPT (see Supporting Information S4.3). Moreover, as suggested by GPT-4, to obtain a more robust estimate of model performance, we also evaluated the 10-fold cross-validation Pearson correlation coefficient (R) and root-mean-square error (RMSE) of the four predictors, which are reported in Table 2. Figure 2 **d)** and **e)** show the experimental and predicted binding affinity distributions on the four training sets: DAT-Inhibitors, NET-Inhibitors, SERT-Inhibitors, and hERG-Inhibitors. The distributions of predicted binding affinities align well with the experimental values, which shows that our binding affinity predictors are reliable. The grey region represents the zone where the binding affinity is less than -9.54 kcal/mol (i.e., \(K_{\mathrm{i}}=0.1\,\mu\)M), generally considered the cut-off for recognizing active compounds.
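A minimal PyTorch sketch of the predictor architecture described above (three hidden layers of 512, 1024, and 512 neurons, Adam, with the stated two-stage learning rate); the input feature dimension and the data loader are placeholders, since the molecular featurization is described elsewhere in the paper:

```python
import torch
import torch.nn as nn

class AffinityPredictor(nn.Module):
    """Feed-forward regressor with hidden layers of 512, 1024, and 512 neurons."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 1),  # predicted binding affinity (kcal/mol)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train(model, loader, epochs=1000):
    """Train with Adam at lr 1e-4 for 500 epochs, then 1e-5 for the rest."""
    loss_fn = nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for epoch in range(epochs):
        if epoch == 500:  # drop the learning rate for the second half of training
            for g in opt.param_groups:
                g["lr"] = 1e-5
        for x, y in loader:  # loader yields (features, affinity) tensor batches
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
```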
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline ChEMBL ID & Transporter & MW (dalton) & log \(P\) & nHD & nHA & \(\Delta G\) (kcal/mol) & Avg \(S_{\mathrm{C}}\) \\ \hline CHEMBL113621 & DAT & 300.140 & 4.464 & 0 & 2 & -14.18 & 0.45 \\ CHEMBL1275709 & NET & 283.190 & 3.153 & 1 & 2 & -13.77 & 0.43 \\ CHEMBL173344 & SERT & 253.160 & 3.174 & 1 & 3 & -13.58 & 0.40 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the three reference molecules targeting DAT, NET, and SERT, respectively. The molecular weight (MW), log of the octanol-water partition coefficient (log \(P\)), number of hydrogen bond donors (nHD), and number of hydrogen bond acceptors (nHA) of each reference molecule satisfy Lipinski's rule of five. The binding affinities (\(\Delta G\)) to the corresponding transporters are all less than -9.54 kcal/mol. The average cosine similarities (Avg \(S_{\mathrm{C}}\)) of the reference molecules are all greater than 0.40.

\begin{table} \begin{tabular}{c c c c c} Dataset name & Sample size & Binding affinity range (kcal/mol) & 10-fold R & 10-fold RMSE \\ \hline DAT-Inhibitors & 2662 & [-14.18, -2.90] & 0.8212 & 0.8979 \\ NET-Inhibitors & 2981 & [-14.63, -5.47] & 0.7732 & 0.9683 \\ SERT-Inhibitors & 4341 & [-15.00, -5.64] & 0.8022 & 0.9448 \\ hERG-Inhibitors & 6298 & [-13.84, -3.27] & 0.8092 & 0.7981 \\ \end{tabular} \end{table} Table 2: Dataset summary. Four datasets are utilized, each containing SMILES strings of inhibitors targeting DAT, NET, SERT, and hERG, respectively. Alongside each SMILES string, the binding affinity (in kcal/mol) is included as the label for each sample. The final two columns report the 10-fold cross-validation Pearson correlation coefficient (R) and root-mean-square error (RMSE) for the binding affinity predictor on each of the four datasets.

#### 2.3.3 ChatGPT assisted virtual screening of multi-target drug candidates

By editing the latent space vector of the Seq2Seq AutoEncoder (AE), we were able to generate a vast number of vectors (around 16 billion) using our stochastic-based molecular generator. These vectors are then decoded into molecules through the GRU decoder of the Seq2Seq AE. Next, we implemented a filtering process in which we removed any duplicated molecules and predicted the binding affinities of the remaining molecules to four target proteins: DAT, NET, SERT, and hERG. Any generated molecule meeting the binding affinity requirements (i.e., \(\Delta G<\) -9.54 kcal/mol for DAT, NET, and SERT, and \(\Delta G>\) -8.18 kcal/mol for hERG) was considered a preliminary multi-target drug candidate. A total of 330 preliminary drug candidates passed the filtering test. Moreover, the similarities between the 330 preliminary drug candidates and the three reference compounds are all less than 0.5, indicating the high novelty of the generated multi-target molecules. Then, we sought advice from GPT-4's 1st persona on criteria for selecting drug-like lead compounds. Due to paper length constraints, a concise version of the responses can be found in Dialogue 12.
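A minimal sketch of the affinity-based filter just described (our own illustration; the four predictor callables stand in for the trained networks of Section 2.3.2):

```python
def screen_candidates(smiles_list, predict_dat, predict_net, predict_sert, predict_herg):
    """Deduplicate generated molecules and keep those predicted to bind
    DAT/NET/SERT tightly (dG < -9.54 kcal/mol) while sparing hERG (dG > -8.18)."""
    candidates = []
    for smi in dict.fromkeys(smiles_list):  # removes duplicates while keeping order
        if (predict_dat(smi) < -9.54 and predict_net(smi) < -9.54
                and predict_sert(smi) < -9.54 and predict_herg(smi) > -8.18):
            candidates.append(smi)
    return candidates
```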
Figure 2: 2D molecular structures of the reference compounds with ChEMBL IDs **a)** CHEMBL113621 from the DAT-Inhibitors dataset, **b)** CHEMBL1275709 from the NET-Inhibitors dataset, and **c)** CHEMBL173344 from the SERT-Inhibitors dataset. 2D molecular structures are rendered by the online software SmilesDrawer 2.0 [25]. **d)** Distribution of experimental binding affinities for the four training datasets (DAT-Inhibitors, NET-Inhibitors, SERT-Inhibitors, and hERG-Inhibitors). **e)** Distribution of predicted binding affinities derived from the four deep neural network predictors. **f)** Distribution of predicted binding affinities for newly generated inhibitors targeting DAT, NET, SERT, and hERG. **g)**-**i)** Screening of the 330 preliminary multi-target drug candidates. The color of each point represents the predicted binding affinity to DAT (purple, **g)**), NET (green, **h)**), and SERT (blue, **i)**). The light purple, green, and blue frames outline the medium ranges of 10 ADMET, physicochemical, and medicinal chemistry properties, respectively, while the dark purple, dark green, and dark blue frames outline the excellent ranges of these properties.

Acting on these suggestions, we utilized in silico tools to predict the Absorption, Distribution, Metabolism, Excretion, and Toxicity (ADMET) properties of each candidate molecule. Specifically, we examined 10 properties of the 330 preliminary multi-target drug candidates through ADMETlab 2.0, a platform that provides a systematic evaluation of ADMET properties, physicochemical properties, and medicinal chemistry friendliness. The 10 properties assessed in this work were: Caco-2 (human colon adenocarcinoma cell line) permeability, F\({}_{20\%}\) (human oral bioavailability of at least 20%), Pgp-substrate (substrate of P-glycoprotein), Pgp-inhibitor (inhibitor of P-glycoprotein), VD (volume of distribution), T\({}_{1/2}\) (drug half-life), FDAMDD (FDA maximum recommended daily dose), SAS (synthetic accessibility score), log \(P\) (logarithm of the n-octanol/water distribution coefficient), and log \(S\) (logarithm of the aqueous solubility value). The optimal ranges of the 10 properties can be found in Table 3. Figure 2 **g)**, **h)**, and **i)** depict the screening results on the 330 preliminary multi-target drug candidates. The color gradient in each panel signifies the predicted binding affinities of the molecules to their respective targets. Specifically, in Figure 2 **g)**, the color of each point indicates the binding affinity to DAT; similarly, in Figure 2 **h)** and **i)**, the colors of the points represent the binding affinities to NET and SERT, respectively.
It can be seen that the binding affinities of the drug candidates for SERT are stronger than those for DAT and NET. The frames outline the medium (light purple, green, and blue) and excellent (dark purple, green, and blue) ranges for the 10 evaluated ADMET, physicochemical, and medicinal chemistry properties. Researchers can swiftly obtain Python code for such scatter plots via ChatGPT; see Supporting Information S4.3. Figures 2**g)**, **h)**, and **i)** indicate that all the drug candidates have favorable volume of distribution (VD) and synthetic accessibility score (SAS) values. However, only a select few drug candidates demonstrate preferable FDAMDD, F\({}_{20\%}\), T\({}_{1/2}\), Pgp-sub, and Pgp-inh values. Among all, 15 candidate drugs fall within the medium range for all properties and are thus considered potential multi-target anti-cocaine lead compounds. We also evaluated the SMILES strings of the 15 potential anti-cocaine addiction lead compounds that could target the multiple transporters DAT, NET, and SERT. Notably, all 15 lead compounds satisfy Lipinski's rule of five. Furthermore, we performed a molecular docking analysis of the 15 lead compounds following the suggestions of ChatGPT, which can be found in Section 2.3.5.

#### 2.3.4 ChatGPT assisted analysis of functional groups

Lead 1 contains structural motifs frequently found in bioactive compounds. Lead 2 comprises a benzene ring attached to a modified piperazine ring, which is further connected with a cyclopentane group. This type of structure is prevalent in numerous bioactive molecules, including some pharmaceutical drugs. Lead 3 contains a chlorobenzene ring coupled with a substituted piperazine ring. In addition, a methylamine group is attached to the benzene ring. This structure might have potential psychoactivity, as structures featuring a nitrogen-containing ring connected to a benzene ring are commonly observed in many psychoactive compounds such as phenethylamines, tryptamines, and ergolines. Lead 4 encompasses a benzene ring with an attached dimethylamine group. In addition, the benzene ring is linked to a bicyclic structure that includes a piperidine ring and an aldehyde group. This molecule could potentially be bioactive due to the presence of both a benzene ring and a nitrogen-containing ring. Lead 5 shares a very similar structure with Lead 4; the only difference is that the bicyclic structure of Lead 5 includes a propionaldehyde group instead of an aldehyde group. Leads 6, 10, 11, and 12 all feature a benzene ring connected to a dimethylamine group and a bicyclic structure. Lead 7 consists of a benzene ring linked to a substituted alkene group, along with a bicyclic structure that includes both a pyrrolidine ring and a piperazine ring; additionally, this bicyclic structure is connected to a propionaldehyde group. Lead 8 includes a benzene ring with an attached dimethylamine group, connected to a bicyclic structure that incorporates a piperidine ring and an additional pyrrolidine ring. Lead 9 comprises a benzene ring linked to an alkyne group, and a complex structure with a piperidine ring and a three-membered nitrogen-sulfur ring. Notably, molecules with sulfur-containing rings, such as penicillin and angiotensin-converting enzyme (ACE) inhibitors, are recognized as bioactive. Lead 13 incorporates a benzene ring with an attached dimethylamine group, connected to a bicyclic structure that includes a pyrrolidine ring and a cyclohexane ring, which is further connected to a formyl group. Lead 14 is composed of a chloroethane group linked to two pyrrolidine rings and a benzene ring.
Finally, Lead 15 consists of a benzene ring linked to a dimethylamine group via an alkene group and attached to a bicyclic structure composed of two pyrrolidine rings.

#### 2.3.5 ChatGPT assisted analysis of cocaine transporter and inhibitor interactions

As mentioned in Dialogue 12, ChatGPT suggested performing molecular docking to predict how each molecule binds to its target. We decided to accept this suggestion, as understanding the molecular mechanism of drug-target interactions is vital for identifying effective drug candidates. We also sought ChatGPT's expertise for guidance on installing AutoDock Vina [26] and on executing the molecular docking procedures (see Dialogue 15) between the 15 lead compounds and the target proteins DAT (PDB ID: 4XPA) and SERT (PDB ID: 6DZZ). Note that, due to the lack of NET structures in the Protein Data Bank, we did not include a molecular interaction analysis of the candidate leads with NET. Moreover, we wanted to visualize 2D protein-ligand interaction diagrams, as they offer a streamlined representation of protein-ligand interactions, highlighting crucial residues, hydrogen bonds, and more. ChatGPT recommended several popular software tools for this purpose, including \(\text{LigPlot}^{+}\) and Maestro. In this work, we chose \(\text{LigPlot}^{+}\) for our visualization needs.

Figure 3: **a)** The SMILES strings of the 15 potential anti-cocaine lead compounds that could target the multiple transporters DAT, NET, and SERT. Green pixels indicate that a given candidate falls within the excellent range for each of the 10 evaluated ADMET, physicochemical, and medicinal chemistry properties, while blue pixels indicate that a given candidate falls only within the medium range of these properties. The color gradient represents the percentage of properties within the excellent range for each given compound. **b)** Illustration of the 2D molecular structures of the three reference compounds and the 15 potential anti-cocaine lead compounds, which may target multiple transporters (DAT, NET, and SERT). Purple, green, and blue spots represent the CHEMBL113621-like, CHEMBL1275709-like, and CHEMBL173344-like moieties, respectively. Red spots highlight novel moieties that are not present in the three reference compounds. All 2D molecular structures are rendered by the online software SmilesDrawer 2.0 [25].

Our observations highlight the critical role of hydrogen bonds in the molecular interactions. For instance, the interactions of Lead 15 with DAT and SERT feature two and one hydrogen bonds, respectively, contributing to the high potency of the molecule on these transporters. The first and third columns of Figure 4 illustrate the docking poses of selected leads, and the second and fourth columns their molecular interactions, with DAT and SERT. We have identified 15 nearly optimal leads. As demonstrated in the second and fourth columns of Figure 4**a)**, Lead 4 establishes two hydrogen bonds with DAT and four hydrogen bonds with SERT. Of the two bonds with DAT, one is formed between an oxygen atom on the residue Gln209(A) and a nitrogen atom on the compound, and the other involves an oxygen atom in a hydroxyl group of the compound interacting with a nitrogen atom on residue Asn207(A) of DAT. Among the four hydrogen bonds formed between Lead 4 and SERT, two involve the same oxygen atom interacting with a nitrogen atom on residues Leu99(A) and Tyr176(A) of SERT, while the other two bonds are formed by nitrogen atoms on Lead 4, interacting with oxygen atoms on residue Ser438(A) and another unidentified residue of SERT.
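For readers who wish to reproduce a docking run of this kind, a minimal sketch is given below. It assumes AutoDock Vina is installed on the command line and that the receptor and ligand have already been prepared in PDBQT format; all file names and search box coordinates are placeholders, not values from this study.

```python
import subprocess

# Hypothetical input files: a prepared receptor (e.g., DAT, PDB ID 4XPA)
# and one lead compound, both converted to PDBQT beforehand.
receptor = "dat_4xpa.pdbqt"
ligand = "lead15.pdbqt"

# Box center/size (in angstroms) are placeholders; in practice the box is
# positioned to enclose the transporter's central binding site.
cmd = [
    "vina",
    "--receptor", receptor,
    "--ligand", ligand,
    "--center_x", "0.0", "--center_y", "0.0", "--center_z", "0.0",
    "--size_x", "22", "--size_y", "22", "--size_z", "22",
    "--exhaustiveness", "8",
    "--out", "lead15_docked.pdbqt",
]
subprocess.run(cmd, check=True)  # writes ranked poses; affinities are printed
```

The resulting poses can then be passed to \(\text{LigPlot}^{+}\) to generate the 2D interaction diagrams discussed below.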
Figure 4**b)** depicts a hydrogen bond in the molecular interactions between candidate Lead 9 and SERT. This bond is formed by a nitrogen atom on the compound and an oxygen atom in a hydroxyl group on residue Phe335(A) of SERT. However, no hydrogen bond is observed in its interactions with DAT. This suggests that other types of interactions, such as hydrophobic contacts, may play a major role in the high binding affinity between Lead 9 and DAT. The molecular docking poses of Lead 13 on DAT and SERT are illustrated in the 1st and 3rd columns of Figure 4**c)**. In the second column of Figure 4**c)**, a single hydrogen bond can be observed between a nitrogen atom of Lead 13 and an oxygen atom on the residue Glu161(A) of DAT. Conversely, no hydrogen bond is detected between Lead 13 and SERT, as demonstrated in the 4th column of Figure 4**c)**. The molecular docking poses of Lead 15 on DAT and SERT are portrayed in Figure 4**d)**, presenting the compound's docking positions at the centers of both transporters. In its interaction with DAT, Lead 15 forms two hydrogen bonds through a nitrogen atom in a five-membered nitrogen heterocycle. This nitrogen atom interacts with oxygen atoms in two hydroxyl groups, which are attached to the residues Asp475(A) and Tyr123(A) of DAT. Moreover, a hydrogen bond exists between the candidate drug Lead 15 and SERT. This bond is formed by the same nitrogen atom in the five-membered nitrogen heterocycle, which interacts with an oxygen atom in a hydroxyl group attached to the residue Ala169(A) of SERT. The molecular interactions of the other 11 candidate leads can be found in Supporting Information S3.

## 3 Discussion

### 3.1 Scrutinizing chatbots

While chatbots are powerful large language models, they are not infallible. Their predictions are heavily reliant on the training data, which may lead to incomplete, outdated, biased, or skewed understandings of certain contexts. Consequently, this could result in the generation of misleading narratives and incorrect information. Therefore, it is essential for researchers to employ chatbots with appropriate care and vigilance. Scientists should not rely solely on chatbots for their research pursuits and should consistently cross-check the information generated by chatbots. Notably, the role of a chatbot like GPT-4 is to assist researchers, not to replace them. In the current project, we have employed GPT-4 to assist in our anti-cocaine addiction drug discovery process, as delineated in Figure 5**a)**. We first assign a proper persona to GPT-4 and then ask it questions. Once we get a response from GPT-4, it is crucial to decide whether or not to accept it. If the information aligns well with the literature and our expertise, we accept the response and proceed with the suggestions of GPT-4. Otherwise, we either reject the answer or seek further clarification from GPT-4 to obtain alternative feedback. For example, when acting as a chemist to analyze the functional groups of Leads 2, 7, 8, 9, 13, and 15, ChatGPT provided inaccurate information. Specifically, for Lead 15, ChatGPT identified a structure where a benzene ring is linked to a dimethylamine group via an alkene group and connected to a piperazine ring. However, the dimethylamine group is connected to a pyrrolidine ring, not a piperazine ring. This misinformation in the interaction with ChatGPT is documented in Dialogue 16. Thus, it is paramount for researchers to verify the accuracy and reliability of responses from ChatGPT using their expertise.
Additionally, we noticed that ChatGPT does not perform well when providing methodological explanations. Its responses from the 2nd persona contain some incorrect definitions and explanations. In such cases, we opted not to accept the responses from ChatGPT and sought further clarification following the workflow in Figure 5**a)**. An effective approach involved pointing out the inaccuracies to ChatGPT and supplying it with accurate references or information, prompting it to adjust its responses. Specific instances of these methodological inaccuracies are detailed in Dialogue 17. Despite the inaccurate responses provided by the 2nd persona of ChatGPT, it remained invaluable in helping us grasp numerous theoretical concepts and their interrelations, serving as an effective browsing tool.

Figure 4: Predicted docking poses of selected lead candidates to DAT (1st column) and SERT (3rd column) by AutoDock Vina. DAT is colored in purple, and SERT is presented in blue. The 2nd and 4th columns demonstrate the molecular interactions of these leads with DAT and SERT, respectively. The final column portrays the physicochemical properties of the lead candidates, which include MW (molecular weight), log P (logarithm of the octanol/water partition coefficient), log S (logarithm of the aqueous solubility), log D (log P at physiological pH 7.4), nHA (number of hydrogen bond acceptors), nHD (number of hydrogen bond donors), TPSA (topological polar surface area), nRot (number of rotatable bonds), nRing (number of rings), MaxRing (number of atoms in the largest ring), nHet (number of heteroatoms), fChar (formal charge), and nRig (number of rigid bonds). Here the purple dots denote the minimal value and the blue dots indicate the maximal value within the optimal range. The red lines represent the values of the properties for each lead candidate. The figures are categorized as follows: **a)** candidate lead 3, **b)** candidate lead 9, **c)** candidate lead 13, and **d)** candidate lead 15.

### 3.2 Autoencoder reconstruction rate of molecules

A Sequence-to-Sequence Autoencoder (Seq2Seq AE) is a specific type of neural network model designed to learn a compressed representation of input data and reconstruct this data from the obtained representation. The core objective of such an autoencoder is to minimize the discrepancy between its input and output data. In this study, we initially fed the Seq2Seq AE with SMILES strings derived from the four distinct datasets to examine their respective reconstruction rates. The calculated reconstruction rates for the DAT-Inhibitors, NET-Inhibitors, SERT-Inhibitors, and hERG-Inhibitors datasets are 0.958, 0.970, 0.968, and 0.950, respectively. These values signify a successfully implemented autoencoder model. In addition, we verified the reconstruction rate of the molecules generated via the Seq2Seq AE. After eliminating duplicated SMILES strings from the generated set, the resulting reconstruction rate stood at 0.996. This high reconstruction rate implies that the distribution of our generated molecules closely mirrors that of the original dataset processed by the Seq2Seq AE, underscoring the reliability of the molecules generated by our method.
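The reconstruction rate can be computed as the fraction of inputs whose decoded SMILES denotes the same molecule as the input. Below is a minimal sketch of that computation; the `encode` and `decode` callables standing in for the trained Seq2Seq AE are hypothetical, and RDKit canonicalization is used so that equivalent SMILES spellings compare equal:

```python
from rdkit import Chem

def canonical(smiles: str):
    """Canonical SMILES via RDKit, or None if the string is invalid."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

def reconstruction_rate(smiles_list, encode, decode) -> float:
    """Fraction of molecules reproduced by the decode(encode(.)) round trip."""
    hits = 0
    for s in smiles_list:
        out = decode(encode(s))  # round trip through the latent space
        if canonical(out) is not None and canonical(out) == canonical(s):
            hits += 1
    return hits / len(smiles_list)
```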
### 3.3 Pattern-sensitive latent space vector distributions

Initially, we introduced random Gaussian noise, with a range from -1 to 1, into the stochastic-based molecular generator. However, these perturbations in the latent space vectors resulted in anomalous SMILES strings once decoded. Seeking guidance, we consulted ChatGPT, as referenced in Dialogue 18, which provided us with eight potential solutions. After reviewing these suggestions, we aligned with the first and second suggestions, which echoed findings from our previous work emphasizing that random perturbations in the latent space can destabilize the Seq2Seq AE model [14]. To ensure the reliability and effectiveness of the decoder in the Seq2Seq AE model, it is essential to maintain a similar distribution pattern between the original latent space vectors and those derived from the stochastic-based molecular generator. Therefore, we took care to tune the noise added to the stochastic-based molecular generator, so that the modified latent space vectors retain a representation that the GNC model has learned to decode effectively.

Figures 5**b)** and **c)** depict the distributions of latent space vectors across the various datasets. Here, the \(x\)-axis represents the latent space index (ranging from 0 to 511), and the \(y\)-axis shows the absolute average value of the latent space vector at each index. The representation in terms of absolute average values helps in visualizing the pattern and magnitude of the latent space vectors across a broad index range. The purple, green, and blue panels depict the latent space vector distributions from the DAT-Inhibitors, NET-Inhibitors, and SERT-Inhibitors datasets, respectively. A discrepancy can be observed in the grey panel of Figure 5**c)** (particularly in the red boxed area). None of the molecules generated by this untuned stochastic-based molecular generator passed either the binding affinity requirements or the ADMET tests. Subsequently, we adjusted the Gaussian noise in the stochastic-based molecular generator to ensure that the edited latent spaces (represented in brown) exhibited a similar distribution to the original ones, as shown in Figure 5**b)**. This controlled noise (ranging from -0.1 to 0.1) proved beneficial, leading to the generation of 15 promising leads capable of targeting DAT, NET, and SERT. Additionally, enlightened by ChatGPT (suggestion 8 in Dialogue 18), we also implemented a feedback loop in which the molecules generated by our fine-tuned stochastic molecular generator are re-encoded into the latent space. It is worth noting that these re-encoded latent space vectors maintained a similar distribution, suggesting that the modifications made to the latent vectors are within the learned parameters of the SGNC.
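A minimal sketch of this distribution check (random arrays below are hypothetical stand-ins for encoder outputs of shape N x 512; the noise range mirrors the tuned value stated above):

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.normal(size=(1000, 512))                # stand-in latent vectors
edited = original + rng.uniform(-0.1, 0.1, size=original.shape)  # tuned noise

# Per-index mean absolute value, i.e., the quantity plotted in Fig. 5 b)-c).
profile_orig = np.abs(original).mean(axis=0)
profile_edit = np.abs(edited).mean(axis=0)

# A large deviation at any index signals that the generator has drifted out
# of the latent region the decoder was trained on.
print(np.max(np.abs(profile_orig - profile_edit)))
```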
## 4 Methods

### 4.1 Datasets preparation

Four pharmaceutical targets are key to anti-cocaine addiction drug discovery: the Dopamine Transporter (DAT), the Norepinephrine Transporter (NET), the Serotonin Transporter (SERT), and the Human Ether-à-go-go-Related Gene (hERG). DAT is responsible for dopamine reuptake from synapses back into neurons, terminating neurotransmitter signaling; its blockade by cocaine causes dopamine accumulation in synapses, inducing intense euphoria. Similarly, NET is inhibited by cocaine, leading to elevated norepinephrine levels in synapses and contributing to stimulant effects. Furthermore, the inhibition of SERT by cocaine increases serotonin levels in the synapse, resulting in mood elevation, anxiety, and paranoia. Thus, a compound that concurrently modulates DAT, NET, and SERT activities could potentially treat cocaine addiction. Additionally, blocking the hERG potassium ion channel can lead to potentially fatal cardiac arrhythmias. Therefore, it is also critical to consider the binding affinity between hERG and newly generated leads. In this study, we collected SMILES strings and binding affinities of inhibitors targeting DAT, NET, SERT, and hERG.

Figure 5: **a)** Flowchart of implementing ChatGPT as a virtual guide. **b)** Distribution of latent space vectors across various datasets. The \(x\)-axis represents the latent space index, which ranges from 0 to 511, while the \(y\)-axis denotes the absolute average value of the latent space vector at each index. The purple, green, and blue panels represent the latent space vector distributions from the DAT-Inhibitors, NET-Inhibitors, and SERT-Inhibitors datasets, respectively. **c)** Distribution of latent space vectors of generated molecules. The brown panel illustrates the latent space vector distribution of molecules generated by the fine-tuned stochastic-based molecular generator. The yellow distribution portrays generated molecules that have been processed by the GNC a second time. The grey distribution corresponds to generated molecules from an untuned stochastic-based molecular generator.

### 4.2 Stochastic-based generative network complex (SGNC)

In this section, after thorough evaluation and incorporation of suggestions from GPT-4, we introduce the stochastic-based generative network complex (SGNC) as a novel mathematical-AI model designed to generate novel molecules that potentially serve as effective treatments for cocaine addiction. Specifically, these molecules are intended to target multiple sites, namely the Dopamine Transporter (DAT), the Norepinephrine Transporter (NET), and the Serotonin Transporter (SERT). Figure 1 illustrates the workflow of the SGNC, which consists of four main components: 1) a Sequence-to-Sequence AutoEncoder (shown in green), 2) binding affinity predictors (shown in yellow), 3) a stochastic-based molecular generator (shown in blue), and 4) analysis via ADMETlab (shown in purple). The dark arrows represent the training process, the brown arrows show the validation process, and the red arrows indicate the generation process.

For the training process, we leveraged a well-established translation model, specifically a sequence-to-sequence (Seq2Seq) AutoEncoder (AE). This model was developed to map the International Union of Pure and Applied Chemistry (IUPAC) representation of a molecule to its Simplified Molecular Input Line Entry System (SMILES) representation, as mentioned in [29]. In our prior research, we modified this model by switching the input from the IUPAC representation of molecules to their corresponding SMILES strings.

The generation process involves the following main steps: 1. We initially selected one molecule each from the DAT-Inhibitors, NET-Inhibitors, and SERT-Inhibitors datasets. These molecules were chosen because of their relatively high similarity across the three datasets, thereby acting as our reference compounds. In addition, we selected a compound known for its potency against all three targets to serve as our seed compound. 2. We then put the reference and seed compounds into the pretrained encoder and extracted the corresponding latent vectors from the latent space of the Seq2Seq AE. Subsequently, we modified the seed vector in the stochastic-based molecular generator, using the information from the reference molecules as a guide. As a result, the generator was capable of producing a large number of new latent vectors. These vectors were then decoded into SMILES strings, which are potentially effective against the multiple targets DAT, NET, and SERT.
3. Furthermore, we put these decoded SMILES strings into our binding affinity (BA) predictors to select the molecules that meet our BA requirements (i.e., \(\Delta G<-9.54\) kcal/mol on DAT, NET, and SERT and \(\Delta G>-8.18\) kcal/mol on hERG). 4. Finally, we used ADMETlab 2.0 to select druggable molecules from the generated SMILES with desirable BA properties. This final step in the generation process ensures that the compounds not only bind effectively to the desired targets but also have the necessary absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties of a potential lead compound.

During the validation phase, we first input the SMILES strings from the DAT-Inhibitors, NET-Inhibitors, SERT-Inhibitors, and hERG-Inhibitors datasets into our well-trained Seq2Seq AE model to obtain decoded SMILES. Successful reconstruction of the input SMILES indicates the reliability of the Seq2Seq AE model. Furthermore, we put the generated SMILES, which have been processed through the stochastic-based molecular generator, into the pretrained Seq2Seq AE model. In case of unsuccessful reconstruction, we adjust the hyperparameters of the stochastic-based molecular generator until a high reconstruction rate is achieved. This process indicates that the latent vectors edited by the stochastic-based molecular generator maintain a similar distribution to the original latent space vectors from the encoder, which further ensures that our SGNC model is capable of generating chemically feasible compounds, reflecting its potential in drug discovery applications.

#### 4.2.1 Sequence-to-sequence autoencoder

The sequence-to-sequence autoencoder (Seq2Seq AE) is an artificial neural network model used for translating the IUPAC representation of a given molecule into its SMILES string representation [29]. In our study, the Seq2Seq AE accepts the SMILES representation of a molecule as the input to the encoder. Subsequently, the latent space of the Seq2Seq AE preserves the structural and functional properties of the provided SMILES. This low-dimensional latent space representation can then be processed by the decoder of the Seq2Seq AE to reconstruct the original SMILES representation. Here, the network used in both the encoder and the decoder is the gated recurrent unit (GRU). In this work, the pretrained Seq2Seq AE model from a previous work by Winter et al. [29] was utilized. The Seq2Seq AE model employed in our study was pretrained on 72 million compounds [29] sourced from the ZINC15 and PubChem databases. All duplicate entries within these databases were eliminated, and the compounds were subjected to RDKit [30] filtering using the following criteria: 1) only organic molecules, 2) molecular weight between 12 and 600 daltons, 3) more than 3 heavy atoms, 4) partition coefficient log \(P\) between -5 and 5; in addition, 5) stereochemistry was removed and 6) salts were stripped.

#### 4.2.2 Binding affinity predictors

We constructed four binding affinity predictors based on the four training datasets: DAT-Inhibitors, NET-Inhibitors, SERT-Inhibitors, and hERG-Inhibitors. These predictors are designed to estimate the binding affinity of potential molecules to the four critical targets: DAT, NET, SERT, and hERG. The construction of the predictors involved the following steps: 1. Feature extraction: molecular features (or fingerprints) were derived from the latent space of the sequence-to-sequence AutoEncoder (Seq2Seq AE). 2. Label assignment: the labels used for model training were the binding affinities of the molecules to their respective targets. 3. Model training: we trained the predictors using PyTorch; a minimal sketch of this step is given after this list. Each network consisted of three hidden layers with 512, 1024, and 512 neurons, respectively. The networks were trained over 1000 epochs, with a learning rate of 0.0001 for the first 500 epochs and 0.00001 for the remaining 500 epochs. We chose the Adam optimizer and a batch size of 16 for this task.
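The following sketch mirrors the layer sizes and training schedule stated above, with the 512-dimensional latent vectors as input features. It is an illustration rather than the authors' exact code; in particular, the mean-squared-error loss is an assumption, since the loss function is not specified in the text.

```python
import torch
import torch.nn as nn

class AffinityPredictor(nn.Module):
    """MLP mapping a 512-d latent fingerprint to a binding affinity (kcal/mol)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(512, 512), nn.ReLU(),    # hidden layer 1 (512 neurons)
            nn.Linear(512, 1024), nn.ReLU(),   # hidden layer 2 (1024 neurons)
            nn.Linear(1024, 512), nn.ReLU(),   # hidden layer 3 (512 neurons)
            nn.Linear(512, 1),                 # scalar affinity output
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train(model, loader, epochs=1000):
    loss_fn = nn.MSELoss()  # assumed loss; not stated in the paper
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for epoch in range(epochs):
        if epoch == 500:  # drop the learning rate halfway, as described above
            for group in opt.param_groups:
                group["lr"] = 1e-5
        for x, y in loader:  # loader yields (latent vector, affinity) batches of 16
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
```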
#### 4.2.3 Stochastic-based molecule generator

Generative models have gained prominence as potent tools for the generation of prospective new leads. Building upon our prior work, we introduced the Generative Network Complex (GNC), a model specifically tailored to produce novel, drug-like molecules [14]. To augment the efficacy of the GNC model, and with guidance from GPT-4, we decided to integrate principles from diffusion-based models [19, 31]. The Langevin equation is a stochastic differential equation (SDE) that is used to describe diffusion processes. This equation describes the random trajectories of particles in their velocity space, accounting for both deterministic and stochastic forces. A pivotal goal of this research is to employ the Langevin equation suggested by ChatGPT as a mechanism to enhance the molecular generator of the GNC model.

Assume \(\mathbf{X}\) is a latent space vector of a molecule with 512 dimensions, and \(\mathbf{X}_{k}\) represents its \(k\)-th latent space reference vector. Then the Langevin equation of our drug generator system is: \[\frac{d\mathbf{X}}{dt}=\alpha\sum_{k}a_{k}(\mathbf{X}_{k}-\mathbf{X})+\boldsymbol{\xi}(t), \tag{1}\] where \(a_{k}\) is a positive weighting parameter corresponding to \(\mathbf{X}_{k}\), satisfying \(\sum_{k}a_{k}=1\), \(\boldsymbol{\xi}(t)\) is a Gaussian white noise, and \(\alpha\) is a hyperparameter. Then, following the treatment of the Langevin equation in Supporting Information S1.1.4, the general solution of this system is given by: \[\mathbf{X}(t)=\mathbf{C}e^{-\alpha t}+\int_{0}^{t}e^{-\alpha(t-u)}\Big(\alpha\sum_{k}a_{k}\mathbf{X}_{k}+\boldsymbol{\xi}(u)\Big)du, \tag{2}\] where the initial state is \(\mathbf{X}(0)=\mathbf{C}\). While the Langevin equation offers a microscopic depiction of the diffusion process, a comprehensive understanding of the temporal evolution of the particle distribution requires a more macroscopic viewpoint. To bridge this gap, we also introduce the Fokker-Planck equation. Derived from the Langevin equation (a detailed derivation is available in Supporting Information S1.1.5), this equation provides the connection between the dynamics of individual particles and the overarching behavior of the entire system.
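As an illustration, Eq. (1) can be integrated numerically with a simple Euler-Maruyama scheme. The sketch below uses random vectors in place of actual encoder outputs, and the values of \(\alpha\), the noise amplitude, and the step size are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 512                              # latent space dimension of the Seq2Seq AE
refs = rng.normal(size=(3, dim))       # stand-ins for DAT/NET/SERT reference vectors
a = np.array([1 / 3, 1 / 3, 1 / 3])    # weights a_k with sum_k a_k = 1
alpha, noise, dt, steps = 1.0, 0.1, 0.01, 500

x = rng.normal(size=dim)               # stand-in for the seed compound's latent vector
target = a @ refs                      # weighted average sum_k a_k X_k
for _ in range(steps):
    drift = alpha * (target - x)       # equals alpha * sum_k a_k (X_k - X)
    x = x + drift * dt + noise * np.sqrt(dt) * rng.normal(size=dim)
# x is now one candidate latent vector, ready to be decoded into a SMILES string
```

Consistent with Eq. (2), the drift relaxes the seed vector toward the weighted average of the references, while the noise term explores the surrounding latent region.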
#### 4.2.4 Drug screening

In drug discovery, several criteria are leveraged to filter promising drug candidates. In our work, we consider molecules that fulfill the following requirements as viable drug prospects: 1) they exhibit favorable ADMET properties, 2) they comply with Lipinski's rule of five, 3) they are synthetically accessible, and 4) they possess proper physicochemical properties. These properties are crucial for determining the drug-like nature and potential practical applicability of the generated molecules, and they are elaborated on in the following paragraphs.

First, undesirable pharmacokinetics and toxicity are leading causes of drug development failure. Therefore, the assessment of absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties should occur as early as possible in the drug development process. In this work, we applied ADMETlab 2.0 to obtain a systematic evaluation of ADMET properties, along with certain physicochemical properties and an assessment of medicinal chemistry friendliness. Specifically, we consider seven ADMET properties: Caco-2 (the human colon adenocarcinoma cell line) permeability, F\({}_{20\%}\) (the human oral bioavailability 20%), Pgp-substrate (the substrate of P-glycoprotein), Pgp-inhibitor (the inhibitor of P-glycoprotein), VD (volume of distribution), T\({}_{1/2}\) (the half-life of a drug), and FDAMDD (the maximum recommended daily dose). The optimal ranges of these properties are listed in Table 3.

\begin{table} \begin{tabular}{l l l l} \hline \hline Property & Profile & Excellent range & Medium range \\ \hline Absorption & Caco-2 permeability & \(>\) -5.15 & / \\ Absorption & F\({}_{20\%}\) & 0 - 0.3 & 0.3 - 0.7 \\ Absorption & Pgp-sub & 0 - 0.3 & 0.3 - 0.7 \\ Absorption & Pgp-inh & 0 - 0.3 & 0.3 - 0.7 \\ Distribution & VD & 0.04 - 20 L/kg & / \\ Excretion & T\({}_{1/2}\) & 0 - 0.3 & 0.3 - 0.7 \\ Toxicity & FDAMDD & 0 - 0.3 & 0.3 - 0.7 \\ Medicinal Chemistry & SAS & \(<\) 6 & / \\ Physicochemical & log \(P\) & 0 - 3 & / \\ Physicochemical & log \(S\) & -4 - 0.5 log mol/L & / \\ \hline \hline \end{tabular} \end{table} Table 3: The optimal ranges of the 10 properties that are used to screen nearly optimal compounds, including seven selected ADMET properties, two physicochemical properties, and one medicinal chemistry property. The seven ADMET properties include Caco-2 (the human colon adenocarcinoma cell line) permeability, F\({}_{20\%}\) (the human oral bioavailability 20%), Pgp-sub (the substrate of P-glycoprotein), Pgp-inh (the inhibitor of P-glycoprotein), VD (volume of distribution), T\({}_{1/2}\) (the half-life of a drug), and FDAMDD (the maximum recommended daily dose). Moreover, SAS represents the synthetic accessibility score, log \(P\) is the logarithm of the n-octanol/water distribution coefficient, and log \(S\) indicates the logarithm of the aqueous solubility value.

Second, Lipinski's rule of five helps to evaluate druglikeness, i.e., to determine whether a chemical compound with a certain pharmacological or biological activity has properties that would make it a likely orally active drug in humans. It requires satisfying four physicochemical criteria: 1) molecular weight (MW) \(\leq 500\) daltons, 2) octanol-water partition coefficient (log \(P\)) \(\leq 5\), 3) number of hydrogen bond donors (nHD) \(\leq 5\), and 4) number of hydrogen bond acceptors (nHA) \(\leq 10\). Third, synthetic accessibility is crucial to ensuring the feasibility of large-scale production of a potential drug candidate. In this study, we used RDKit to evaluate the synthetic accessibility score (SAS); an SAS score of less than 6 indicates that a candidate drug is relatively easy to synthesize. Lastly, physicochemical properties can significantly influence the solubility, permeability, and stability of potential drug candidates. In this study, we primarily focused on the logarithm of the n-octanol/water distribution coefficient (log \(P\)) and the logarithm of the aqueous solubility value (log \(S\)). Drug candidates with a log \(P\) in the range of 0 - 3 and a log \(S\) in the range of -4 - 0.5 log mol/L are considered to have suitable physicochemical properties.
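A minimal sketch of the rule-of-five check with RDKit (which this study already uses for SAS evaluation) is given below; the example SMILES string is arbitrary, and the Crippen estimator is one common way to approximate log \(P\):

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def passes_rule_of_five(smiles: str) -> bool:
    """Check Lipinski's four criteria as listed above."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False  # invalid SMILES string
    return (Descriptors.MolWt(mol) <= 500          # MW <= 500 daltons
            and Crippen.MolLogP(mol) <= 5          # estimated log P <= 5
            and Lipinski.NumHDonors(mol) <= 5      # nHD <= 5
            and Lipinski.NumHAcceptors(mol) <= 10) # nHA <= 10

print(passes_rule_of_five("CN(C)c1ccccc1CC1CCNCC1"))  # arbitrary example molecule
```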
## Code and Data availability

The code and data are available at the public repository [https://github.com/wangru25/SGNC](https://github.com/wangru25/SGNC). The datasets include SMILES strings and binding affinities of inhibitors targeting DAT, NET, SERT, and hERG. In addition, these datasets can also be found in the 'Training Datasets' folder within the SupplementaryData.zip file, available under Supporting Information S2 for readers interested in further exploration. Trained models from this study are saved within the aforementioned code repository. This repository includes the stochastic-based generative network complex (SGNC) developed in Python, as well as Python scripts for calculating reconstruction rates, evaluating synthetic accessibility, and generating visual plots.

## Supporting Information

The Supporting Information is available for:

* S1 Supplementary methods: S1.1 Fokker-Planck equation-embedded multi-target drug molecule generator (S1.1.1 Random variables; S1.1.2 Wiener process and white noise; S1.1.3 Ito's lemma; S1.1.4 Langevin equation; S1.1.5 Derivation of the Fokker-Planck equation from the Langevin equation); S1.2 Evaluation metrics.
* S2 Supplementary Data: the SupplementaryData.zip consists of 3 folders, namely Training Datasets, Predictions, and Generated Molecules. Training Datasets contains the datasets used for training purposes; Predictions contains data related to the predicted binding affinities of the inhibitors in the 4 training datasets; Generated Molecules documents the molecules that have been produced using the stochastic-based molecular generator.
* S3 Supplementary Figures: S3.1 Radar plots of physicochemical properties for the 15 lead candidates; S3.2 Molecular docking and molecular interactions of the 15 leads with DAT and SERT.
* S4 Supplementary Dialogues: S4.1 The 1st persona of ChatGPT; S4.2 The 2nd persona of ChatGPT; S4.3 The 3rd persona of ChatGPT; S4.4 Other dialogues.

## Competing interests

The authors declare no competing interests.

## Acknowledgment

This work was supported in part by NIH grants R01GM126189, R01AI164266, and R35GM148196, National Science Foundation grants DMS2052983, DMS-1761320, and IIS-1900473, NASA grant 80NSSC21M0023, Michigan State University Research Foundation, and Bristol-Myers Squibb 65109.
2306.10436
Qubit entanglement generated by classical light driving an optical cavity
We study the generation of entanglement between two qubits which communicate through a single cavity mode of quantum light but have no direct interaction. We show that such entanglement can be generated simply by exchanging quanta with a third party, which is in our case the cavity mode. Exchanging only a single quantum creates maximal entanglement. A single quantum can be provided by an external quantum light source. However, we use a classical light source to pump quanta which are used for the exchange, and investigate the degree of two-qubit entanglement. We first identify a characteristic timescale of the interaction between the cavity mode and each qubit. We investigate two regimes of the driving pulse length, one is short and the other is long compared to the characteristic timescale of the interaction. In the first regime, it is known that the pulse can pump the system by generating a displacement of the cavity mode. We show that, by using a specific pulse shape, one can make the displacement to essentially vanish after the pulse finishes interaction with the cavity mode. In this case, a rotation of the qubits can be invoked. In addition, higher-order effects of the pulse including a non-local operation on the joint system of the cavity mode and the qubits are found, and we present a formalism to compute each term up to a given order. An explicit condition on the pulse shape for each term to be nonzero or suppressed is derived to enable an experimental design for verifying the entanglement generation using a classical light source. In the opposite regime where the driving is sufficiently long, we utilize a squeezed state which may be obtained adiabatically. We study how the squeezing and the accompanied rotation of qubits affect the generated two-qubit entanglement.
Seongjin Ahn, Andrey S. Moskalenko, Vladimir Y. Chernyak, Shaul Mukamel
2023-06-17T22:54:18Z
http://arxiv.org/abs/2306.10436v1
# Qubit entanglement generated by classical light driving an optical cavity ###### Abstract We study the generation of entanglement between two qubits which communicate through a single cavity mode of quantum light but have no direct interaction. We show that such entanglement can be generated simply by exchanging quanta with a third party, which is in our case the cavity mode. Exchanging only a single quantum creates maximal entanglement. A single quantum can be provided by an external quantum light source. However, we use a classical light source to pump quanta which are used for the exchange, and investigate the degree of two-qubit entanglement. We first identify a characteristic timescale of the interaction between the cavity mode and each qubit. We investigate two regimes of the driving pulse length, one is short and the other is long compared to the characteristic timescale of the interaction. In the first regime, it is known that the pulse can pump the system by generating a displacement of the cavity mode. We show that, by using a specific pulse shape, one can make the displacement to essentially vanish after the pulse finishes interaction with the cavity mode. In this case, a rotation of the qubits can be invoked. In addition, higher-order effects of the pulse including a non-local operation on the joint system of the cavity mode and the qubits are found, and we present a formalism to compute each term up to a given order. An explicit condition on the pulse shape for each term to be nonzero or suppressed is derived to enable an experimental design for verifying the entanglement generation using a classical light source. In the opposite regime where the driving is sufficiently long, we utilize a squeezed state which may be obtained adiabatically. We study how the squeezing and the accompanied rotation of qubits affect the generated two-qubit entanglement. ## I Introduction If there is a direct interaction between two systems, classical light can generate entanglement between them [1; 2]. However, when the two parties are not coupled, they cannot be entangled by classical light, which only allows a local unitary transformation on each party. However, quantum light can be used to create entanglement between two noninteracting systems [3]. Several types of quantum light have been considered to generate entanglement between qubits. Especially, the generation of entanglement between two noninteracting two-level systems (or qubits) based on cavity electrodynamics has been studied for various states of quantum light, including the Fock [4], thermal [5], coherent [6; 7], and squeezed state [3; 8]. As one of the most effective and simple methods, a single-photon state of the cavity mode can be used to entangle two qubits. This can be done by exchanging a quantum, which is in this case a photon, between the cavity mode and the qubits. Suppose both qubits are in their ground states. Since there is only one photon for two qubits, only one qubit or the other, but not both at the same time, can receive the photon to get excited. Thus, the resulting state is the superposition of those two possibilities, which is an entangled state between the qubits. Pumping only a single quantum in a cavity typically requires an external single-photon source [9; 10], which is a quantum state with no classical counterpart. With such a quantum light source, even a maximally entangled state can be achieved. Then, how much entanglement can be generated if we use a classical light source for pumping the cavity mode? 
This is the question to investigate in this paper. We consider an exactly solvable model of two qubits interacting with a single cavity mode [11] and compute the entire time-dependent state. We explore the resulting two-qubit concurrence [12; 13] as a measure of their entanglement. Previously, this has been done for several initial states of the system [3; 4; 5; 6; 7; 8]. In this work, we do not assume a particular initial state other than the ground state of the total system. Instead, we drive the cavity mode with an external classical field and investigate what kind of state can be prepared. Since the cavity mode is driven by classical light, a coherent state may be expected to a good approximation if the interaction of the cavity mode with the qubits is negligible. Some correction may be needed, since the cavity mode interacts with the qubits even during the driving. We show analytically how the joint state of the cavity mode and the two qubits depends on the external classical field. The interaction between the cavity mode and the qubits is treated consistently with the external driving of the cavity mode, to identify the classes of states that can be prepared with a classical light source. We then study the entanglement generated by the prepared state. We demonstrate how the two-qubit entanglement dynamics can be controlled in terms of the strength, duration, phase, and temporal shape of the classical light field.

There are two regimes that can be distinguished in terms of the duration of the driving. Namely, it can be short or long compared to a characteristic timescale which we denote as \(T_{g}\). The timescale \(T_{g}\) determines how fast the entanglement is generated after the system is pumped. \(T_{g}\) is determined by how strongly the cavity mode and each qubit are coupled. When there is no coupling, no quantum can be exchanged and thus no entanglement is generated, which means \(T_{g}\rightarrow\infty\). When there is a coupling, quanta can be exchanged and thus entanglement can be generated. The stronger the coupling, the faster the exchange of quanta, which means \(T_{g}\) gets shorter. The precise expression for \(T_{g}\) is discussed below.

Once the characteristic timescale \(T_{g}\) is identified, we investigate the two mentioned driving regimes. In the regime where the cavity mode is driven by a pulse which is sufficiently short compared to \(T_{g}\), we study how the two-qubit entanglement depends on the pulse strength, duration, and shape. We show that, by selecting an appropriate pulse shape, the pumping can result in a displacement of the cavity mode or a rotation of the qubits, to a good approximation. The entanglement dynamics can be controlled by selecting the type of pumping through the pulse shape. In the latter regime, one can adiabatically generate a squeezed state and rotated qubits. The effect of the squeezing and the rotation on the entanglement formation is investigated.

This paper is organized as follows. In Sec. II, we describe the model system, where a cavity mode is driven by a classical light source. In Sec. III, we show how entanglement can be generated by exchanging quanta between the qubits and the cavity mode. The characteristic timescale \(T_{g}\) of the cavity-qubit interaction is identified. In Sec. IV, we consider the regime where the driving duration is sufficiently short with respect to the characteristic timescale of the cavity-qubit interaction. In Sec. V, we investigate the other regime, where the driving is quasistatic.
In Secs. VI and VII, we discuss a set of parameters for an experimental realization and conclude the paper with a summary.

## II Model

We consider two qubits coupled to a resonant cavity mode of frequency \(\omega\). A classical external light source drives the cavity mode. In a rotating frame at the frequency \(\omega\), the model Hamiltonian can be written as \[H(t)=H_{g}+H_{e}(t). \tag{1}\] The first term, \[H_{g}=\hbar g(\sigma^{+}a+\sigma^{-}a^{\dagger}), \tag{2}\] describes the interaction between the cavity mode and each qubit under the rotating wave approximation (RWA), which is justified for \(g\ll\omega\). The two-qubit Pauli operator is defined as \[\sigma^{\pm}=\sigma^{\pm}_{A}+\sigma^{\pm}_{B}, \tag{3}\] where \(\sigma^{\pm}_{A}\) and \(\sigma^{\pm}_{B}\) are the ladder operators for qubits \(A\) and \(B\), respectively. \(a\) and \(a^{\dagger}\) are the annihilation and creation operators of the cavity photon, respectively. The second term of Eq. (1) is given as \[H_{e}(t)=\hbar\Omega f(t)x_{\omega}(t), \tag{4}\] where \(\Omega\) is the driving strength and \(f(t)\) is the temporal shape of the external field. \(x_{\omega}(t)\) is the quadrature operator defined as \[x_{\omega}(t)\equiv ae^{-i\omega t}+a^{\dagger}e^{i\omega t}, \tag{5}\] corresponding to the (normalized) electric field of the cavity mode. The interaction Hamiltonian \(H_{e}(t)\) describes a linearly driven oscillator, representing a cavity mode coupled to an external field. This Hamiltonian has been used theoretically [14; 15; 16; 17] and demonstrated experimentally [18].

We consider a pulsed driving. Let \(\tau_{d}\) be the duration of the pulse and \(t=0\) be the center of the pulse. The total considered time interval shall be \([-T,T]\), where \[T/\tau_{d}\equiv T_{u}\gg 1, \tag{6}\] in order to make this time interval long enough to accommodate the pulse. Consider a pulse with a central frequency \(\omega\) which is resonant with the cavity mode and each qubit. Let \(f_{0}(t)\) be the envelope of the pulse shape and \(\phi\) be the carrier-envelope offset phase. We write the pulse shape as \[f(t)=f_{0}(t)\cos(\omega t+\phi). \tag{7}\] The envelope function can be expanded in a complete set of localized functions, e.g. a set of Hermite-Gaussian (HG) functions, which can be written as \[f_{\rm HG,m}(u)=N_{m}H_{m}(u)e^{-u^{2}/2}. \tag{8}\] Here, \(u\equiv t/\tau_{d}\) and \(H_{m}(u)\) is the \(m\)-th order Hermite polynomial for \(m\geq 0\). The normalization factor \(N_{m}\) is given by \[N_{m}=\frac{\pi^{-1/4}}{\sqrt{m!\,2^{m}}},\] so that \[\int_{-\infty}^{\infty}|f_{\mathrm{HG},m}(u)|^{2}\,du=1.\]
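As a quick numerical check of Eq. (8) (a sketch, not code from the paper), the HG envelopes and their unit normalization can be evaluated with NumPy/SciPy:

```python
import numpy as np
from scipy.special import eval_hermite, factorial

def f_hg(m: int, u: np.ndarray) -> np.ndarray:
    """Hermite-Gaussian envelope f_HG,m(u) = N_m H_m(u) exp(-u^2/2), Eq. (8)."""
    n_m = np.pi ** (-0.25) / np.sqrt(factorial(m) * 2.0 ** m)
    return n_m * eval_hermite(m, u) * np.exp(-u ** 2 / 2)

u = np.linspace(-10, 10, 4001)
for m in range(4):
    norm = np.trapz(f_hg(m, u) ** 2, u)  # should be ~1 for every order m
    print(m, round(norm, 6))
```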
## III Exchanging quanta generates entanglement

How can the two qubits become entangled? One way is to exchange a quantum with the cavity mode. For example, suppose there is no driving and consider an initial state where both qubits are in their ground state and the cavity mode has one photon. In this case, there is only one excitation in the system, a photon. Due to the coupling between each qubit and the cavity mode, Eq. (2), the quantum starts to 'move' from the cavity mode to the qubits. However, there are two qubits for a single quantum. Since the coupling strength between each qubit and the cavity mode is the same, the probability that the quantum will be found after some time at one of the qubits is identical to that for the other qubit. This state, where the two possibilities of a bipartite system (two qubits) are superposed, is entangled.

We shall trace this entanglement generation. Let us denote the initial state \(|\psi(0)\rangle\) as \(|00;1\rangle\equiv|00\rangle|1\rangle\). \(|00\rangle\in\mathcal{H}_{q}\) represents the state of the two qubits where both of them are in their ground states. \(|n\rangle\) with \(n\geq 0\) denotes the Fock state with \(n\) photons in the cavity mode. \(\mathcal{H}_{q}\) and \(\mathcal{H}_{\gamma}\) represent the Hilbert spaces of the qubits and of the cavity mode, respectively. At time \(t\), the state evolves into a certain state, denoted \(|\psi(t)\rangle\). The time evolution is governed by the Hamiltonian \(H_{g}\) in Eq. (2). One can diagonalize the Hamiltonian to calculate the exact expression of \(|\psi(t)\rangle\). However, we note that \(|\psi(t)\rangle\) is a superposition of only two states, \(|00;1\rangle\) and \(|\Psi^{+};0\rangle\equiv|\Psi^{+}\rangle|0\rangle\), where \(|\Psi^{+}\rangle\equiv(1/\sqrt{2})(|01\rangle+|10\rangle)\). This can be seen by noting that \(\sigma^{+}|00\rangle=\sqrt{2}|\Psi^{+}\rangle\) and that \(H_{g}\) consists of two terms, \(\sigma^{+}a\) and \(\sigma^{-}a^{\dagger}\), which describe exchanges of quanta between the qubits and the cavity mode. Considering a time evolution over a finite time \(t\) as a succession of infinitesimal steps \(\Delta t\), each approximated as \(1+(-i/\hbar)H_{g}\Delta t\), all possible paths that a state may evolve along can be indicated by the following diagram: \[0\xleftarrow{\sigma^{-}a^{\dagger}}|00\rangle|1\rangle\xrightleftharpoons[\sigma^{-}a^{\dagger}]{\sigma^{+}a}|\Psi^{+}\rangle|0\rangle\xrightarrow{\sigma^{+}a}0. \tag{9}\] From this diagram, one can expect that the state will be a superposition of the two states. An exact calculation shows that \[U_{g}(t)|00;1\rangle=\cos{(g_{1}t)}\,|00;1\rangle-i\sin{(g_{1}t)}\,|\Psi^{+};0\rangle,\] where \(U_{g}(t)=\exp{\left[-\frac{i}{\hbar}H_{g}t\right]}\) and \(g_{1}=\sqrt{2}g\). At \(t=0\), the total state is \(|00;1\rangle\) and the two qubits are not entangled. When \(t=\pi/2g_{1}\), the total state becomes \(|\Psi^{+};0\rangle\) (up to a global phase), where the two qubits are in a maximally entangled state. Note that the timescale of the entanglement dynamics is proportional to \(g_{1}^{-1}\sim g^{-1}\). When there are \(n\geq 2\) quanta, the state can have an additional component, namely \(|11;n-2\rangle\equiv|11\rangle|n-2\rangle\), which can be noticed by considering the possible paths in the following diagram: \[0\xleftarrow{\sigma^{-}a^{\dagger}}|00\rangle|n\rangle\xrightleftharpoons[\sigma^{-}a^{\dagger}]{\sigma^{+}a}|\Psi^{+}\rangle|n-1\rangle\xrightleftharpoons[\sigma^{-}a^{\dagger}]{\sigma^{+}a}|11\rangle|n-2\rangle\xrightarrow{\sigma^{+}a}0.\] An exact calculation shows that the dynamics timescale is proportional to \(g_{n}^{-1}\sim(\sqrt{n}g)^{-1}\), where \[g_{n}=\sqrt{4n-2}\,g \tag{10}\] for \(n\geq 1\). Here, we define the timescale, denoted as \(T_{g}\), of the system containing \(n\) quanta as \[T_{g}=(\sqrt{n}g)^{-1}.\] The dynamics of observables for a state with \(n\) quanta will characteristically unfold on this timescale. We expect that the formation of entanglement takes about this amount of time when there are \(n\) quanta in the system. To confirm the timescale of the entanglement dynamics, we quantify the entanglement of the reduced density operator \(\rho\) of the two qubits.
The density operator is defined as \[\rho(t)=\mathrm{tr}_{\gamma}[|\psi(t)\rangle\langle\psi(t)|], \tag{11}\] where \(|\psi(t)\rangle\) represents the state of the total system at time \(t\) and \(\mathrm{tr}_{\gamma}\) is the partial trace with respect to the degrees of freedom of the cavity mode. After tracing out the cavity mode, the qubits are in general in a mixed state. A mixed state can be represented as a statistical ensemble of pure states with their associated probabilities. Each possible pure state in the ensemble has a well-defined entanglement, defined via the von Neumann entropy, which represents the upper bound on the purification/entanglement cost, the latter being defined in terms of the cooperative game that uses the Local Operations and Classical Communication (LOCC) protocols [19]. It has also been demonstrated [19] that the von Neumann entropy can only decrease on average when non-unitary operations, such as measurements, are performed. To quantify the entanglement of mixed states of two qubits, the notion of entanglement of formation has been introduced as the minimal average von Neumann entropy of an ensemble of pure states that represents the given mixed state (the pure states do not have to be mutually orthogonal, and their number is not fixed), and the minimum is taken over all such ensembles [20]. It has been shown in Ref. [20] that the entanglement of formation defined in this way has the analogous property of decreasing upon non-unitary transformations associated with measurements, and it is therefore considered a good measure of entanglement for mixed states of two qubits. Since the definition involves an optimization problem, the entanglement of formation is in general hard to compute. Therefore, an important result is an explicit formula for the entanglement of formation in terms of the so-called concurrence, postulated in [12]. There, the concurrence was defined in terms of the eigenstates of a matrix acting in the Hilbert space of two qubits. This matrix is composed of the product of the density matrix of the given mixed state and its involuted counterpart, with the involution coming from the anti-linear operator, acting in the Hilbert space of a single qubit, that represents the time-reversal symmetry. The formula for the entanglement of formation in terms of the concurrence was proven there for the particular case of density matrices with at least two zero eigenvalues. The proof was extended to a general mixed state of two qubits in [13].

The calculation of the concurrence is related to a time-reversal operation. For qubits, which are pseudospins, this corresponds to a 'spin-flip'. The concurrence is defined in terms of how similar a state is to its time-reversed, or spin-flipped, counterpart. For example, \(|00\rangle\langle 00|\) is a product state. Its spin-flipped counterpart is \(|11\rangle\langle 11|\). The similarity between the two states is quantified by the absolute value of their inner product, namely \(|\langle 11|00\rangle|=0\), which is consistent with the zero entanglement of the state \(|00\rangle\). If one applies the same procedure to a maximally entangled state, say \(|\Psi^{+}\rangle\langle\Psi^{+}|\), one notices that its spin-flipped counterpart is the same, thus yielding the maximal similarity \(|\langle\Psi^{+}|\Psi^{+}\rangle|=1\). For a mixed state, the concurrence is defined as \[C\equiv\max\{0,\tilde{C}\}, \tag{12}\] which is either \(0\) or a quantity called the 'naive' concurrence \(\tilde{C}\).
The naive concurrence is given as \[\tilde{C}\equiv\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}, \tag{13}\] where \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq\lambda_{4}\) are the square roots of the eigenvalues of \(\rho\tilde{\rho}\). Here, \(\tilde{\rho}\equiv\sigma_{A}^{y}\sigma_{B}^{y}\rho^{*}\sigma_{A}^{y}\sigma_{B}^{y}\) is the spin-flipped counterpart of \(\rho\). Multiplying \(\rho\) by \(\tilde{\rho}\) and calculating the eigenvalues of the product quantifies the similarity between \(\rho\) and \(\tilde{\rho}\). Combining the \(\lambda_{i}\) with signs in the special way given in Eq. (13), the concurrence defined by Eq. (12) is indeed known to be a measure of the entanglement of formation [13].

We denote the concurrence as \(C_{n}\equiv C_{n}(t)\) for the state \(|\psi_{n}(t)\rangle=U_{g}(t)|00;n\rangle\) with \(n\geq 0\). We get \[C_{n}=\begin{cases}0&(n=0)\\ \max\left\{0,\rho_{n}^{\Psi^{+}}-2\sqrt{\rho_{n}^{00}\rho_{n}^{11}}\right\}&(n\geq 1)\end{cases}, \tag{14}\] where \(\rho_{n}^{\mu}\equiv\rho_{n}^{\mu}(t)\equiv\langle\mu|\rho_{n}(t)|\mu\rangle\) is the population of the two-qubit state \(|\mu\rangle\) for \(\mu\in\{00,\Psi^{+},11\}\). \(\rho_{n}(t)\) is the reduced density operator of the qubits, which is defined as in Eq. (11) with \(|\psi(t)\rangle=|\psi_{n}(t)\rangle\). The populations are given as \[\begin{split}\rho_{n}^{00}(t)&=[p_{n}+q_{n}\cos(g_{n}t)]^{2}\\ \rho_{n}^{\Psi^{+}}(t)&=q_{n}\sin^{2}{(g_{n}t)}\\ \rho_{n}^{11}(t)&=p_{n}q_{n}[1-\cos(g_{n}t)]^{2},\end{split} \tag{15}\] where \(p_{n}=(n-1)/(2n-1)\), \(q_{n}=n/(2n-1)\), and \(g_{n}\) is defined by Eq. (10). The maximal concurrence is achieved when there is only \(n=1\) photon in the initial state. To see this, we notice that \(C_{1}(t)=\sin^{2}(g_{1}t)\), which follows from Eqs. (14) and (15). Similarly, the maximal value achievable for each \(n\) can be derived as \[\max_{t}C_{n}(t)=\begin{cases}0&(n=0)\\ 1/n&(n\geq 1)\end{cases}, \tag{16}\] which shows that the entanglement vanishes as \(n\rightarrow\infty\).

In Fig. 1, we plot the concurrence \(C_{n}(t)\) and its maximal value \(\max_{t}C_{n}(t)\) for each initial photon number \(n\). Although the photon number changes with time, the total number of quanta is conserved, being the sum of the number of photons and the number of excited qubits. In other words, the state of the system always belongs to a subspace with \(n\) quanta. Thus, each concurrence has a well-defined period \(2\pi/g_{n}\sim(\sqrt{n}g)^{-1}=T_{g}\), which determines the timescale of entanglement generation in the subspace of \(n\) quanta.

Figure 1: The concurrence \(C_{n}(t)\), as given by Eq. (14), when there are \(n\) photons in the initial state. The gray solid lines shown on the photon-number-concurrence planes represent the maximal concurrence, as given by Eq. (16).

Regarding the entanglement generation mechanism, we note that what creates or eliminates entanglement is the set of two ladder operators, \(\sigma^{+}\) and \(\sigma^{-}\). On top of that, what triggers the action of these ladder operators is an event of exchanging a quantum with the cavity mode, see Eq. (9). Thus, when there is no quantum in the system, as for \(|00\rangle|0\rangle\), no exchange of quanta can occur and no entanglement is generated.
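For concreteness, the concurrence of Eqs. (12)-(13) can be evaluated numerically from any two-qubit density matrix. Below is a short sketch (not from the paper) using NumPy, with the qubit basis ordered as \(|00\rangle,|01\rangle,|10\rangle,|11\rangle\); the test state reproduces \(C_{1}(t)=\sin^{2}(g_{1}t)\) at \(g_{1}t=\pi/4\):

```python
import numpy as np

def concurrence(rho: np.ndarray) -> float:
    """Wootters' concurrence of a two-qubit density matrix, Eqs. (12)-(13)."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy                 # spin-flipped counterpart
    evals = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(evals)))[::-1]      # lambda_1 >= ... >= lambda_4
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# After tracing out the cavity, U_g(t)|00;1> yields a mixture of |00> and
# |Psi+> with weights cos^2(g_1 t) and sin^2(g_1 t).
g1t = np.pi / 4
psi_p = np.array([0, 1, 1, 0]) / np.sqrt(2)
e00 = np.array([1, 0, 0, 0])
rho = (np.cos(g1t) ** 2 * np.outer(e00, e00)
       + np.sin(g1t) ** 2 * np.outer(psi_p, psi_p))
print(concurrence(rho))   # ~0.5 = sin^2(pi/4), consistent with C_1(t)
```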
## IV Entanglement generation by subcycle driving, \(\tau_{d}\ll T_{g}\)

In this section, we consider entanglement generation by a short pulse. With \(\tau_{d}\ll T_{g}\), the action of the pulse on the system lasts shorter than the timescale of the interaction between the cavity and the qubits. However, we also need to set a lower bound on \(\tau_{d}\). First, there is a finite lower bound on the pulse durations that can be realized experimentally. Second, the pulse shall possess a well-defined carrier frequency \(\omega\). This is to prevent exciting other cavity modes and to efficiently couple the external mode to the cavity mode with the desired frequency. The condition reads \(\Delta\omega\ll\omega\), where \(\Delta\omega\) is the bandwidth of the pulse. Combining this condition with the uncertainty relation between time and frequency, namely \((\Delta\omega)^{-1}<\tau_{d}\) (or \(\sim\tau_{d}\) for a Fourier-limited pulse), we get \(\omega^{-1}\ll\tau_{d}\). Using both limits, one arrives at the range of pulse durations \[g/\omega\ll g\tau_{d}\ll 1/\sqrt{n}. \tag{17}\] Again, \(\sqrt{n}\) is determined by the number of quanta involved in the dynamics of the state, as in Sec. III.

In this short-pulse regime, the interaction between the cavity mode and the qubits appears almost frozen during the pulse. Since only the cavity mode is externally driven, the qubits can notice the effect of the pulse only through the state of the cavity mode, which is coupled to the qubits. The higher the cavity mode-qubit coupling \(g\), the faster the changes in the cavity mode can affect the qubits. As described in Sec. III, the interaction speed is roughly proportional to \(\sqrt{n}\), where \(n\) is of the same order of magnitude as the number of quanta in the state undergoing the dynamics. Thus, if the pulse duration \(\tau_{d}\) is sufficiently shorter than the interaction timescale \(T_{g}=(\sqrt{n}g)^{-1}\), then to leading order we can leave the qubits out of consideration while the cavity is pumped.

Formally, neglecting the cavity mode-qubit interaction translates into \(g\to 0\). In this case, the total Hamiltonian in Eq. (1) reduces to \(H(t)=H_{e}(t)\), which describes a linearly driven harmonic oscillator in the rotating frame. Classically, it has an exact solution obtained by solving the Hamilton equations. In the original frame they read \[\begin{split}\dot{x}(t)&=+\omega p(t),\\ \dot{p}(t)&=-\omega x(t)-2\Omega f(t),\end{split} \tag{18}\] with the classical-quantum correspondence \(x\leftrightarrow a+a^{\dagger}\), \(p\leftrightarrow-ia+ia^{\dagger}\). Expressing the two real variables \(x\) and \(p\) by a single complex variable \(z_{\omega}\equiv(x+ip)/2\), the Hamilton equations, Eq. (18), can be combined as \[\dot{z}_{\omega}(t)=-i\omega z_{\omega}(t)-i\Omega f(t). \tag{19}\] In the absence of driving, i.e. \(\Omega=0\), the system exhibits a harmonic motion, \(z_{\omega}(t)=e^{-i\omega(t-t_{0})}z_{\omega}(t_{0})\), for a given reference time point \(t_{0}\). In the rotating frame defined by \(z_{\omega}(t)\equiv e^{-i\omega t}z(t)\), the Hamilton equation for \(z_{\omega}(t)\), given by Eq. (19), translates to \[\dot{z}(t)=-i\Omega f(t)e^{i\omega t}.\] The solution describes a displacement with an amplitude \[z(t)-z(t_{0})=-i\Omega\int_{t_{0}}^{t}f(t^{\prime})e^{i\omega t^{\prime}}\,dt^{\prime}. \tag{20}\] Although this result comes from the classical Hamilton equations, Eq. (18), the same expression can be obtained from a fully quantum-mechanical description, governed by \(H(t)\) in Eq. (1) with \(g\to 0\). Even if \(g\) is not exactly zero, the displacement with the amplitude given in Eq. (20) is a good approximation as long as the pulse is sufficiently shorter than \((\sqrt{n}g)^{-1}\). In this approximation, the relevant number of quanta \(n\) at time \(t\) is around the average photon number \(|z(t)|^{2}\). Thus, self-consistency requires \[\tau_{d}\ll(\sqrt{n}g)^{-1}\sim(|z(t)|g)^{-1}\] for all relevant times \(t\). We may increase the amplitude \(z\) until \(g\tau_{d}\ll 1/|z|\) still holds.
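As a numerical illustration of Eq. (20) (a sketch under the stated \(g\to 0\) approximation, not code from the paper; units with \(\tau_{d}=1\) and an arbitrary \(\Omega\)), the displacement for a Gaussian-enveloped resonant pulse can be computed by direct quadrature and compared with its slowly-varying estimate:

```python
import numpy as np

omega, Omega, tau_d, phi = 50.0, 1.0, 1.0, 0.0   # hypothetical parameters
t = np.linspace(-8 * tau_d, 8 * tau_d, 200001)
f = np.exp(-(t / tau_d) ** 2 / 2) * np.cos(omega * t + phi)  # Eq. (7), Gaussian f0

# Rotating-frame displacement, Eq. (20): z = -i Omega * int f(t') e^{i w t'} dt'
z_final = -1j * Omega * np.trapz(f * np.exp(1j * omega * t), t)

# Slowly-varying estimate: cos(wt + phi) e^{i w t} ~ e^{-i phi}/2 up to a
# rapidly oscillating term, so z ~ -i Omega e^{-i phi}/2 * (area of f0).
f0_area = np.trapz(np.exp(-(t / tau_d) ** 2 / 2), t)
z_est = -1j * Omega * np.exp(-1j * phi) / 2 * f0_area
print(z_final, z_est)   # nearly identical for omega * tau_d >> 1
```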
In this approximation, the relevant number of quanta \(n\) at time \(t\) is around the average photon number \(|z(t)|^{2}\). Thus, self-consistency requires \[\tau_{d}\ll(\sqrt{n}g)^{-1}\sim(|z(t)|g)^{-1}\] for all relevant times \(t\). We may increase the amplitude \(z\) as long as \(g\tau_{d}\ll 1/|z|\) still holds. In order to generate a large amplitude with a certain level of accuracy, a sufficiently short pulse is required. Exactly how short the duration should be is determined by the pulse shape. If we understand how the accuracy depends on the pulse shape, we may be able to find a pulse shape which generates a large enough amplitude with a good fidelity, even for a moderately short pulse. In Sec. IV.5, we express the fidelity as a functional of the pulse shape and utilize it to properly tailor the pulse for mitigating the error. We expect the error to come from the fact that we neglected the interaction between the qubits and the cavity; this is shown in Sec. IV.3. The magnitudes of the second- and higher-order terms are identified in the following sections. Generally, these terms turn out to be smaller than the leading-order term. However, they can become essential when \(z(t)\) converges to zero after the end of the pulse, so that the leading-order term vanishes. We discuss the condition for such pulses with almost no displacement and generalize the idea so that one can switch on or off a term of a specific order. This can be useful since each term has its own signature. For example, the leading-order term induces a displacement of the cavity mode and the second-order term induces a rotation of the qubits. In this section, we show that the leading-order effect of the pulse is to create a coherent state in the cavity mode. Further, we formulate the conditions under which this effect can be turned off by pulse shaping. Then, the second-order term becomes relevant. It acts only on the state of the qubits. We demonstrate that by shaping the pulse appropriately, one can select which part of the system is pumped, either the cavity mode or the qubits. Understanding how the pulse shape controls both the amplitude of the generated coherent state and the states of the qubits is essential for an experimental realization of the entanglement generation, as well as of other phenomena including the collapse and revival of the qubit observables [21] and the existence of the 'attractor' state [7; 22].

### Interaction picture for the pulse

In order to describe the effect of the pulse, we switch to a variant of the interaction picture. This is done as follows: in the absence of external driving, i.e. \(\Omega=0\), the Hamiltonian, Eq. (1), becomes time-independent, \(H(t)=H_{g}\). The time evolution from the initial time \(-T\) to \(t\in[-T,T]\) can be described by the total time-evolution operator \[U(t,-T)=U_{g}(t,-T)=U_{g}(t,0)U_{g}(0,-T),\] where \(U(t,-T)\) and \(U_{g}(t,t^{\prime})\equiv U_{g}(t-t^{\prime})\equiv\exp[-iH_{g}(t-t^{\prime})/\hbar]\) are the time-evolution operators generated by \(H(t)\) and \(H_{g}\), respectively. In the presence of an external pulse, i.e. \(\Omega>0\), centered at time \(t=0\), we define a time-evolution operator \(\mathcal{U}\) via \[U(t,-T)\equiv U_{g}(t,0)\,\mathcal{U}(t;0,-T)\,U_{g}(0,-T). \tag{21}\] The time-evolution operator \(\mathcal{U}\) accounts for the effect of the pulse. For brevity, we denote \(\mathcal{U}(t;0,-T)\) as \(\mathcal{U}(t)\). \(\mathcal{U}(t)\) satisfies \[\dot{\mathcal{U}}(t)=-\frac{i}{\hbar}H_{I}(t)\mathcal{U}(t).
\tag{22}\] Here \(H_{I}(t)\) is the Hamiltonian in the interaction picture: \[\begin{split} H_{I}(t)&=U_{g}^{\dagger}(t)[H(t)-H_{g}]U_{g}(t)\\ &=\hbar\Omega\tilde{H}_{I}(t),\end{split} \tag{23}\] with \[\tilde{H}_{I}(t)=f(t)\,U_{g}^{\dagger}(t)\,x_{\omega}(t)\,U_{g}(t). \tag{24}\] Since \(\tilde{H}_{I}(t)\) is proportional to the pulse shape \(f(t)\), it impacts the evolution of the system only for a short duration of time, \(\tau_{d}\). If the pulse is sufficiently shorter than the cavity-qubits timescale, i.e. \(\tau_{d}\ll T_{g}\), then \(U_{g}(t)\) in Eq. (24) essentially remains the identity during the interaction with the pulse, so that \(\tilde{H}_{I}(t)\simeq f(t)x_{\omega}(t)\). Formally, this can be seen by using the identity of Campbell [23], \[e^{X}Ye^{-X}=Y+[X,Y]+\frac{1}{2}[X,[X,Y]]+\cdots, \tag{25}\] to expand \(\tilde{H}_{I}(t)\) in powers of \(g\tau_{d}\), \[\begin{split}\tilde{H}_{I}(u)=&\,(g\tau_{d})^{0}f(u)x_{\omega}(u)\\ +&\,(g\tau_{d})^{1}f(u)u[i\tilde{H}_{g},x_{\omega}(u)]\\ +&\,\mathcal{O}[(g\tau_{d})^{2}],\end{split} \tag{26}\] where \(\tilde{H}_{g}\equiv H_{g}/\hbar g\) and \(u\equiv t/\tau_{d}\).

### Leading-order effect: displacement of the cavity mode

The solution of Eq. (22) can be expressed in terms of the Magnus expansion [24], \[\mathcal{U}(t) \equiv\exp\left[-iA_{I}(t)\right], \tag{27a}\] \[A_{I}(t) =\sum_{m=1}^{\infty}A_{I}^{(m)}(t). \tag{27b}\] The exponent \(A_{I}(t)\) of \(\mathcal{U}\) is expanded in powers of \(\Omega\tau_{d}\), so that \[A_{I}^{(m)}(t) \equiv(\Omega\tau_{d})^{m}\tilde{A}_{I}^{(m)}(t) \tag{28a}\] \[=\mathcal{O}[(\sqrt{n}\Omega\tau_{d})^{m}] \tag{28b}\] for each \(m\geq 1\). For brevity, let us denote \(\tilde{\Omega}_{n}\equiv\sqrt{n}\Omega\tau_{d}\). We then get \[A_{I}^{(m)}(t)=\mathcal{O}[(\tilde{\Omega}_{n})^{m}]. \tag{29}\] The factor \(\sqrt{n}\) in Eq. (28b) comes from the fact that the total degree of \(\tilde{A}_{I}^{(m)}(t)\) is \(m\) in \(a\) and \(a^{\dagger}\), whose matrix elements in the subspace of \(n\) quanta are on the order of \(\sqrt{n}\). A derivation of Eq. (28) is presented in Appendix B. When \(\tilde{\Omega}_{n}\equiv\sqrt{n}\Omega\tau_{d}\ll 1\), the leading-order term is \(A_{I}^{(1)}(t)\), which can be written in terms of \[\tilde{A}_{I}^{(1)}(t)=\int_{-T/\tau_{d}}^{t/\tau_{d}}du\,\tilde{H}_{I}(u).\] In order to see the dominant effect of a short pulse, such that \(\tau_{d}/T_{g}=\sqrt{n}g\tau_{d}\equiv\tilde{g}_{n}\ll 1\), we substitute \(\tilde{H}_{I}(u)\) with Eq. (26) and take the leading-order term in \(g\tau_{d}\). We then get \[\tilde{A}_{I}^{(1)}(t)=\tilde{A}_{I}^{(1,0)}(t)+\mathcal{O}(g\tau_{d}), \tag{30}\] where \[\begin{split}\tilde{A}_{I}^{(1,0)}(t)&\equiv A_{I}^{(1,0)}(t)/(\Omega\tau_{d})\\ &=s_{1}(t)\,a+s_{1}^{*}(t)\,a^{\dagger},\end{split} \tag{31}\] with \[s_{1}(t)=\int_{-T/\tau_{d}}^{t/\tau_{d}}du\,f(u)e^{-i\omega\tau_{d}u}. \tag{32}\] After the pulse, i.e. when \(t\gg\tau_{d}\), and with \(T\) satisfying Eq. (6), \(s_{1}(t)\) becomes \[s_{1}\simeq\hat{f}(\omega\tau_{d}), \tag{33}\] where \(\hat{f}(k)\equiv\int_{-\infty}^{\infty}du\,f(u)e^{-iku}\) is the Fourier transform of \(f(u)\). By controlling the central frequency component of the pulse, one can make \(s_{1}(t)\) either zero or non-zero for \(t\gg\tau_{d}\). In our case, the pulse consists of a central frequency \(\omega\) and an envelope \(f_{0}(t)\) with the carrier-envelope phase \(\phi\), as written in Eq. (7).
Thus, the Fourier component of \(f(u)\) at \(\omega\tau_{d}\) can be expressed as \[\hat{f}(\omega\tau_{d})=\frac{1}{2}e^{i\phi}\hat{f}_{0}(0)+\frac{1}{2}e^{-i\phi}\hat{f}_{0}(2\omega\tau_{d}), \tag{34}\] where \(\hat{f}_{0}(k)\) is the Fourier transform of the envelope function \(f_{0}(u)\). Note that \(f_{0}(u)\) is defined in the scaled time domain \(u\equiv t/\tau_{d}\), with duration \(\tau_{d}/\tau_{d}=1\). Thus, the width of \(\hat{f}_{0}(k)\) in the Fourier domain is also on the order of 1. For a pulse with a well-defined carrier frequency, we have \(\omega\tau_{d}\gg 1\), in line with Eq. (17). In this regime, \(\hat{f}_{0}(2\omega\tau_{d})\) in Eq. (34) almost vanishes, so that the functional \(s_{1}\) can be approximated as \[s_{1}\simeq\frac{1}{2}e^{i\phi}\hat{f}_{0}(0). \tag{35}\] From Eqs. (27), (28) and (30), we get the leading-order term of \(\mathcal{U}\), \[\mathcal{U}(t) =\exp[-iA_{I}(t)] \tag{36a}\] \[\simeq\exp[-iA_{I}^{(1)}(t)]\] (36b) \[\simeq\exp[-iA_{I}^{(1,0)}(t)]\] (36c) \[\equiv U_{1}(t), \tag{36d}\] where the second line holds for \(\sqrt{n}\Omega\tau_{d}\ll 1\) and the third line for \(\sqrt{n}g\tau_{d}\ll 1\). The last line defines the leading-order term \(U_{1}(t)\). From Eq. (31), one can show that the leading term \(U_{1}\) is a displacement operator, \[U_{1}(t)=D[z(t)],\] where the complex amplitude of the displacement can be written as \[z(t)=-i\Omega\tau_{d}s_{1}^{*}(t). \tag{37}\] The phase, or direction, of the displacement can be controlled by the carrier-envelope offset phase \(\phi\), as can be seen from Eq. (35). If \(z(t)\simeq 0\) after the pulse, i.e. for \(t\gg\tau_{d}\), the leading-order term \(U_{1}\) amounts only to a transient modulation during the pulse. In order to drive the cavity mode into a coherent state with a nonzero amplitude \(z\) after the pulse, a pulse with \(s_{1}(t)\neq 0\) for \(t\gg\tau_{d}\) is required. With the asymptotic expression for \(s_{1}\) given by Eq. (35), this requires an envelope \(f_{0}(t)\) such that \(\hat{f}_{0}(0)=\int_{-\infty}^{\infty}dt\,f_{0}(t)\neq 0\). For example, a Gaussian shape \(f_{\text{HG},0}(t)\) can be used. The concurrence induced by a subcycle pulse with such a shape is shown in Fig. 2. Depending on the pulse area \(\Omega\tau_{d}\), the characteristics of the time-dependent concurrence vary. For example, for a small pulse area, i.e. \(\Omega\tau_{d}\ll 1\), the average number of quanta pumped into the cavity, which is given by \(|z(T)|^{2}\), is much smaller than one. Thus, the concurrence is dominated by the interference between states belonging to few-quanta subspaces. This can be seen in Fig. 2(a). When \(\Omega\tau_{d}\sim 1\), such that the generated coherent state has an average photon number of around one, i.e. \(|z|^{2}\sim 1\), a value of concurrence larger than \(0.75\) can be achieved, as shown in Fig. 2(b). This is consistent with the case of a Fock state, where the single-photon state allows maximal entanglement to be achieved. When many photons are pumped into the cavity mode, so that \(|z|^{2}\gg 1\), a smooth oscillation of the concurrence appears, as shown in Fig. 2(c). This is consistent with the concurrence generated by a strong coherent state [7].

Figure 2: Naive concurrence induced by a subcycle pulse with different driving strengths \(\Omega\), given by (a) \(\Omega\tau_{d}=0.0531\), (b) \(\Omega\tau_{d}=1.29\) and (c) \(\Omega\tau_{d}=8\). The pulse duration, envelope, carrier-envelope phase and the cavity-qubit coupling are fixed as \(\tau_{d}=\pi/\omega\), \(f_{0}(t)=f_{\text{HG},0}(t)\), \(\phi=0\) and \(g=0.05\omega\), respectively. The black solid lines (blue dotted lines) represent numerically (analytically) evaluated naive concurrences. The analytical results are obtained essentially by approximating the time-evolution operator by a displacement operator, i.e. \(\mathcal{U}(t)\simeq D[z(t)]\), based on Eq. (36). The displacement amplitude \(z(t)\) is given by Eq. (37). In (a)-(c), the corresponding average photon numbers \(|z(t)|^{2}\) at the end of the pulse \(t=T\) are indicated in each panel. The gray solid line and the light gray dashed line in (d) show the pulse shape \(f(t)\) and its envelope \(f_{0}(t)\), respectively.
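Since the leading-order propagator is just a displacement, Eq. (36) can be sanity-checked in a truncated Fock basis. The sketch below (ours; the cutoff and amplitude are hypothetical) builds \(U_{1}=D[z]\) by exponentiating \(za^{\dagger}-z^{*}a\) and verifies that it turns the vacuum into a coherent state with \(\langle a^{\dagger}a\rangle=|z|^{2}\).

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

Nf = 40                                        # Fock-space cutoff (hypothetical)
a = np.diag(np.sqrt(np.arange(1.0, Nf)), 1)    # annihilation operator, a|n> = sqrt(n)|n-1>
z = -0.9j                                      # displacement amplitude, cf. Eq. (37)

U1 = expm(z * a.conj().T - np.conj(z) * a)     # U_1 = D[z], Eq. (36)
vac = np.zeros(Nf); vac[0] = 1.0
psi = U1 @ vac                                 # |z> = D[z]|0>

n_avg = (psi.conj() @ (a.conj().T @ a) @ psi).real
print(n_avg, abs(z) ** 2)                      # average photon number equals |z|^2

# overlap with the textbook coherent-state expansion in the Fock basis
coh = np.array([np.exp(-abs(z) ** 2 / 2) * z ** k / np.sqrt(factorial(k))
                for k in range(Nf)])
print(abs(psi.conj() @ coh))                   # ~1 for a cutoff well above |z|^2
```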
### The second-order effect: rotation of the qubits

The propagator \(U_{1}(t)\) is a good approximation of \(\mathcal{U}(t)\) only when \(\sqrt{n}\Omega\tau_{d}\ll 1\) and \(\sqrt{n}g\tau_{d}\ll 1\). Thus, even if the pulse is short, satisfying the latter condition, \(U_{1}(t)\) may not be sufficient to describe the dynamics if the pulse area \(\sqrt{n}\Omega\tau_{d}\) is not small enough. This is because the series in Eq. (27b) may diverge for a large \(\sqrt{n}\Omega\tau_{d}\), in which case even the inclusion of higher-order terms may not help. In order to describe the case where \(\sqrt{n}\Omega\tau_{d}\) is not too small, we proceed with the following decomposition of \(\mathcal{U}\): \[\mathcal{U}(t)\equiv U_{1}(t)\,\mathcal{U}_{2}(t). \tag{38}\] Note that Eq. (38) holds for any finite value of \(\sqrt{n}\Omega\tau_{d}\), as it is merely a definition of another interaction picture in which the contribution of the leading-order effect \(U_{1}(t)\) is subtracted. In order to evaluate \(\mathcal{U}_{2}(t)\), we find the Hamiltonian which generates \(\mathcal{U}_{2}(t)\). Let us denote this Hamiltonian as \(H_{II}(t)\). Then the time-evolution operator \(\mathcal{U}_{2}(t)\) satisfies \(\dot{\mathcal{U}}_{2}(t)=(-i/\hbar)H_{II}(t)\,\mathcal{U}_{2}(t)\). From Eq. (38), we get the Hamiltonian \[H_{II}(t)=U_{1}^{\dagger}(t)[H_{I}(t)-H_{1}(t)]U_{1}(t), \tag{39}\] where \(H_{I}(t)\) and \(H_{1}(t)\) are the Hamiltonians generating \(\mathcal{U}(t)\) and \(U_{1}(t)\), respectively. \(H_{I}(t)\) can be evaluated from Eqs. (23) and (24). \(H_{1}(t)\) can be obtained by differentiating \(U_{1}(t)\) and using the definition of \(H_{1}(t)\), namely \(\dot{U}_{1}(t)=(-i/\hbar)H_{1}(t)\,U_{1}(t)\). The derivative of \(U_{1}(t)\) can be obtained by using the Zassenhaus formula [24] or \[\frac{d}{dt}e^{A(t)}=\int_{0}^{1}ds\,e^{sA(t)}\,\frac{dA}{dt}\,e^{-sA(t)}e^{A(t)},\] which is shown in, e.g., Ref. [25]. We can then get the Hamiltonian \(H_{1}(t)\) as \[H_{1}(t)=H_{e}(t)-\frac{1}{2}\langle z(t)|H_{e}(t)|z(t)\rangle, \tag{40}\] where \(|z(t)\rangle\equiv D[z(t)]|0\rangle\) is a coherent state with the amplitude \(z(t)\) given by Eq. (37). Inserting Eqs. (23) and (40) into the expression for \(H_{II}(t)\) in Eq. (39), we get \[H_{II}(t)=H_{II}^{\prime}(t)+H_{II,z}(t),\] where \[H_{II}^{\prime}(t) \equiv U_{1}^{\dagger}(t)[U_{g}^{\dagger}(t)H_{e}(t)U_{g}(t)-H_{e}(t)]U_{1}(t), \tag{41a}\] \[H_{II,z}(t) \equiv\frac{1}{2}\langle z(t)|H_{e}(t)|z(t)\rangle.
\tag{41b}\] Since \(H_{II,z}(t)\) is a scalar, it can be subtracted from \(H_{II}(t)\) by the transformation \[\mathcal{U}_{2}\equiv U_{II,z}(t)\,\mathcal{U}_{2}^{\prime}(t), \tag{42}\] with \[U_{II,z}(t)=\exp\left[-\frac{i}{\hbar}\int_{-T}^{t}dt^{\prime}H_{II,z}(t^{\prime})\right].\] We then get the differential equation \[\dot{\mathcal{U}}_{2}^{\prime}(t)=-\frac{i}{\hbar}H_{II}^{\prime}(t)\,\mathcal{U}_{2}^{\prime}(t),\] whose formal solution is again given by the Magnus expansion, \[\mathcal{U}_{2}^{\prime}(t) \equiv\exp[-iA_{II}^{\prime}(t)], \tag{43a}\] \[A_{II}^{\prime}(t) =\sum_{m=1}^{\infty}A_{II}^{\prime(m)}(t). \tag{43b}\] However, the order of magnitude of each term is different from that of \(A_{I}^{(m)}(t)\) in Eq. (27b). In Appendix B, we show that \[A_{II}^{\prime(m)}(t)=\mathcal{O}[(\tilde{\Omega}_{n}\tilde{g}_{n})^{m}]. \tag{44}\] It has an additional factor, \(\tilde{g}_{n}\equiv\sqrt{n}g\tau_{d}\), compared to the previous case, \(A_{I}^{(m)}(t)=\mathcal{O}[(\tilde{\Omega}_{n})^{m}]\) in Eq. (29). This arises from a property of \(H_{II}^{\prime}(t)\), given by Eq. (41a), in which \(H_{e}(t)\) is subtracted from \(U_{g}^{\dagger}(t)H_{e}(t)U_{g}(t)\). Applying the identity (25) to \(U_{g}^{\dagger}(t)H_{e}(t)U_{g}(t)\), the Hamiltonian can be expanded as \[\begin{split}\tilde{H}_{II}^{\prime}(t) &\equiv H_{II}^{\prime}(t)/\hbar\Omega \tag{45}\\ &= (g\tau_{d})^{1}f(u)uU_{1}^{\dagger}(u)[i\tilde{H}_{g},x_{\omega}(u)]U_{1}(u)\\ &+ \mathcal{O}[(g\tau_{d})^{2}].\end{split}\] We note that \(\tilde{H}_{II}^{\prime}(t)=\mathcal{O}[(g\tau_{d})^{1}]\), whereas \(\tilde{H}_{I}(t)=\mathcal{O}[(g\tau_{d})^{0}]\), as can be seen from Eq. (26). For \(\tilde{\Omega}_{n}\tilde{g}_{n}\ll 1\), the dominant term in the expansion Eq. (43b) is \(A_{II}^{\prime(1)}(t)\), cf. Eq. (44). This condition allows us to use a pulse such that \(1<\tilde{\Omega}_{n}\ll(\tilde{g}_{n})^{-1}\) by choosing a sufficiently short pulse, \(\tilde{g}_{n}\ll 1\). Note that if we used the Magnus expansion in Eq. (27b) to get the second- and higher-order terms, the pulse area would have to be limited by \(\tilde{\Omega}_{n}\ll 1\) to ensure convergence of the expansion. By shifting to the second interaction picture, Eq. (38), we can use a pulse with a larger \(\tilde{\Omega}_{n}\). This is required to generate a coherent state with an amplitude much larger than \(1\), since \(z(t)=\mathcal{O}[\Omega\tau_{d}]\). Phenomena such as the collapse and revival of qubit observables are visible only in this regime [21]. Using Eq. (45) to evaluate \(A_{II}^{\prime(1)}(t)\), we get \[A_{II}^{\prime(1)}(t)=A_{II}^{\prime(1,1)}(t)+\mathcal{O}[\tilde{\Omega}_{n}\tilde{g}_{n}^{2}]. \tag{46}\] The leading term is given as \[\begin{split}\tilde{A}^{\prime(1,1)}_{II}(t)&\equiv A^{\prime(1,1)}_{II}(t)/(\Omega\tau_{d})(g\tau_{d})\\ &=s_{(1,1)}(t)(-i\sigma^{-})+s^{*}_{(1,1)}(t)(i\sigma^{+}),\end{split} \tag{47}\] where \[s_{(1,1)}(t)=\int_{-T/\tau_{d}}^{t/\tau_{d}}du\,uf(u)e^{-i\omega\tau_{d}u}.\] Similar to \(s_{1}\) in Eq. (33), when \(t\gg\tau_{d}\), \(s_{(1,1)}(t)\) can be approximated as \[s_{(1,1)}\simeq i\hat{f}^{(1)}(\omega\tau_{d}),\] where \(\hat{f}^{(1)}\equiv d\hat{f}/dk\) is the derivative of the Fourier transform \(\hat{f}\) of \(f\). In our case, the pulse has a well-defined carrier frequency \(\omega\), entering Eq. (7), which means \(\omega\tau_{d}\gg 1\). Therefore, the functional can be approximated as \[s_{(1,1)}\simeq\frac{1}{2}e^{i\phi}i\hat{f}^{(1)}_{0}(0), \tag{48}\] where \(\hat{f}^{(1)}_{0}\equiv d\hat{f}_{0}/dk\).
Note that for any envelope \(f_{0}(u)\) with even parity, \(\hat{f}^{(1)}_{0}(0)=0\). From Eqs. (43), (44) and (46), we now get the second leading-order term as \[\mathcal{U}^{\prime}_{2}(t) =\exp[-iA^{\prime}_{II}(t)] \tag{49a}\] \[\simeq\exp[-iA^{\prime(1)}_{II}(t)]\] (49b) \[\simeq\exp[-iA^{\prime(1,1)}_{II}(t)]\] (49c) \[\equiv U_{2}(t), \tag{49d}\] where the second line holds for \(\tilde{\Omega}_{n}\tilde{g}_{n}\ll 1\) and the third line for \(\tilde{g}_{n}\ll 1\). The last line defines the second-leading term \(U_{2}(t)\). With Eq. (47), one can show that \(U_{2}(t)\) is a rotational operator: \[U_{2}(t)=R[\theta(t);\mathbf{n}(t)],\] where \(R[\theta;\mathbf{n}]\equiv\exp[-i\theta\mathbf{n}\cdot\boldsymbol{\sigma}/2]\) for an angle \(\theta\), a rotational axis \(\mathbf{n}\) of unit length and \(\boldsymbol{\sigma}=(\sigma^{x},\sigma^{y},\sigma^{z})\) with \(\sigma^{j}\equiv\sigma^{j}_{A}+\sigma^{j}_{B}\) for \(j\in\{x,y,z\}\). It rotates the Bloch vector of each qubit. The angle and the rotational axis are given as \[\theta(t) =2(\Omega\tau_{d})(g\tau_{d})|s_{(1,1)}(t)|, \tag{50a}\] \[\mathbf{n}(t) =\big{(}\sin[\phi_{(1,1)}(t)],-\cos[\phi_{(1,1)}(t)],0\big{)}, \tag{50b}\] for \(s_{(1,1)}(t)\equiv|s_{(1,1)}(t)|e^{i\phi_{(1,1)}(t)}\). In order to have a nonzero rotation after the pulse, i.e. \(\theta(t)\neq 0\) for \(t\gg\tau_{d}\), we need a pulse envelope \(f_{0}(t)\) with \(s_{(1,1)}(t)\neq 0\) for \(t\gg\tau_{d}\), as follows from Eq. (50). Using Eq. (48), the condition reads \(i\hat{f}^{(1)}_{0}(0)=\int_{-\infty}^{\infty}dt\,tf_{0}(t)\neq 0\). Thus, if the pulse envelope is even, the rotational angle essentially vanishes. If we use an odd envelope, a nonzero rotation is possible. In this case, the leading-order contribution \(U_{1}\) is turned off after the pulse, since \(s_{1}(t)\simeq 0\) for \(t\gg\tau_{d}\). The concurrence induced by a subcycle pulse with an odd envelope is shown in Fig. 3. Comparing Fig. 3 with Fig. 2, the cavity-qubit coupling \(g\), the normalized pulse duration \(g\tau_{d}\) and the pulse area \(\Omega\tau_{d}\) are the same in both figures. The only difference is the shape of the pulse envelope \(f_{0}(t)\), resulting in qualitatively distinct dynamics of the concurrence. A stronger driving results in an increased angle of rotation. An angle larger than \(\pi/2\) is achieved in Fig. 3(b). There, the numerical result shows that there is an additional effect of the pulse on top of the displacement \(U_{1}=D[z]\) and the rotation \(U_{2}=R[\theta,\mathbf{n}]\). These additional corrections can be attributed to higher-order terms, which are treated in the following section.

Figure 3: Same as Fig. 2, except that the pulse envelope is given by \(f_{0}(t)=f_{\text{HG},1}(t)\) and the driving strength \(\Omega\) is given by (a) \(\Omega\tau_{d}=2.05\) and (b) \(\Omega\tau_{d}=4.1\). The analytical solution is based on the approximation \(\mathcal{U}(t)\simeq D[z(t)]R[\theta(t);\mathbf{n}(t)]\), i.e. including the terms up to the second order. The expressions for the displacement amplitude \(z(t)\), the rotational angle \(\theta(t)\) and the rotational axis \(\mathbf{n}(t)\) are given by Eqs. (37), (50a) and (50b), respectively. For the given envelope, the displacement amplitude almost vanishes after the pulse, i.e. at \(t=T\). The rotational angle at the end of the pulse, \(\theta(T)\), is indicated in panels (a) and (b).
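The parity rule above is easy to check numerically. In the sketch below (ours), we take an odd envelope \(f_{0}(u)\propto u e^{-u^{2}}\) as a stand-in for \(f_{\mathrm{HG},1}\) (the paper's exact normalization of the Hermite–Gauss shapes is not reproduced here, so the numerical angle is illustrative only) and evaluate both functionals: \(s_{1}\) nearly vanishes, switching off the displacement, while \(s_{(1,1)}\) stays finite and yields the rotational angle of Eq. (50a).

```python
import numpy as np

# Hypothetical parameters, in units where tau_d = 1; omega*tau_d >> 1 per Eq. (17)
omega, phi = 40.0, 0.0
f0 = lambda u: u * np.exp(-u ** 2)             # odd envelope (stand-in for f_HG,1)
f = lambda u: f0(u) * np.cos(omega * u + phi)

u = np.linspace(-8.0, 8.0, 400001)
du = u[1] - u[0]
kern = np.exp(-1j * omega * u)
s1 = np.sum(f(u) * kern) * du                  # Eq. (32): controls the displacement
s11 = np.sum(u * f(u) * kern) * du             # controls the rotation, Eq. (47)

Omega_tau, g_tau = 4.1, 0.157                  # pulse area and g*tau_d (illustrative)
theta = 2.0 * Omega_tau * g_tau * abs(s11)     # rotational angle, Eq. (50a)
print(abs(s1))                                 # ~0: no net displacement (odd parity)
print(theta)                                   # finite rotation of the qubits
```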
### Higher-order effects: conditional displacement and rotation around the \(z\)-axis

The first two leading-order effects are identified as a displacement of the cavity mode and a rotation of the qubits. One may describe the effect of the pulse approximately with these two operations. However, there are higher-order terms which do not vanish in general. Understanding the higher-order terms might help to find other possible types of operations apart from the displacement or the rotation. As an example, we present the two terms quadratic in \(g\tau_{d}\). They can be found by expanding the exponent \(A^{\prime}_{II}(t)\), Eq. (43b), in the orders of \(\Omega\tau_{d}\) and \(g\tau_{d}\). There are only two such terms, \(A^{\prime(1,2)}_{II}(t)\) and \(A^{\prime(2,2)}_{II}(t)\), which are quadratic in \(g\tau_{d}\). The first term, \(A^{\prime(1,2)}_{II}(t)\), generates a conditional displacement, where the direction in which the cavity mode is displaced depends on the state of the qubits. To be explicit, \[\begin{split}\tilde{A}^{\prime(1,2)}_{II}(t) &\equiv A^{\prime(1,2)}_{II}(t)/(\Omega\tau_{d})(g\tau_{d})^{2}\\ &= \sigma^{z}[s_{(1,2)}(t)a+s^{*}_{(1,2)}(t)a^{\dagger}].\end{split}\] The functional \(s_{(1,2)}(t)\) modulates the displacement amplitude and is given as \[s_{(1,2)}(t)=\int_{-T/\tau_{d}}^{t/\tau_{d}}du\,\frac{u^{2}}{2!}f(u)e^{-i\omega\tau_{d}u}.\] The second term, \(A^{\prime(2,2)}_{II}(t)\), represents a rotation of the qubits around the \(z\)-axis, \[\begin{split}\tilde{A}^{\prime(2,2)}_{II}(t) &\equiv A^{\prime(2,2)}_{II}(t)/(\Omega\tau_{d})^{2}(g\tau_{d})^{2}\\ &= \sigma^{z}[s_{(2,2)}(t)+s^{*}_{(2,2)}(t)].\end{split}\] The functional \(s_{(2,2)}(t)\) determines the corresponding angle of rotation and can be written as \[s_{(2,2)}(t)=s_{(2,2),1}(t)+s_{(2,2),2}(t),\] where \[s_{(2,2),1}(t) =\int_{-T/\tau_{d}}^{t/\tau_{d}}du\,\frac{u^{2}}{2!}f(u)(-i)s^{*}_{1}(u)e^{-i\omega\tau_{d}u},\] \[s_{(2,2),2}(t) =\int_{-T/\tau_{d}}^{t/\tau_{d}}du\,\frac{u}{1!}f(u)(-i)s^{*}_{(1,1)}(u)e^{-i\omega\tau_{d}u},\] which originate from the first and the second Magnus term generated by \(H^{\prime}_{II}(t)\), Eq. (43b), respectively.

### Towards exact operation

Another advantage of identifying the higher-order terms is the possibility of improving the accuracy of a given operation. One can enhance the accuracy of an operation by eliminating irrelevant terms. This can be done by searching for a set of driving parameters, including the pulse shape, which makes those terms vanish. As an example, let us discuss how to obtain a displacement with a given amplitude \(z_{0}\) with a certain accuracy. From the displacement amplitude \(z(t)\) given by Eq. (37), we require that at the end of the operation, \(t=T\), the amplitude of the displacement reaches the desired value \(z_{0}\), i.e. \(z(T)=z_{0}\). One can indeed find parameters satisfying this requirement. For \(T\gg\tau_{d}\) and \(\omega\tau_{d}\gg 1\), which are valid in the considered regime, we can use Eq. (35). By selecting a pulse shape such that \(\hat{f}_{0}(0)>0\), we obtain: \[\Omega\tau_{d} \simeq 2|z_{0}|/\hat{f}_{0}(0), \tag{51a}\] \[\phi \simeq-\phi_{0}-\pi/2, \tag{51b}\] where \(z_{0}\equiv|z_{0}|e^{i\phi_{0}}\). The error in the resulting operation with respect to the displacement determined by Eq. (51a) can be estimated in terms of the normalized pulse duration \(g\tau_{d}\). One can show that as \(g\tau_{d}\to 0\) the error becomes arbitrarily small. In practice, the pulse cannot be infinitesimally short but has a finite duration.
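Equations (51a)–(51b) amount to a two-line recipe for the driving parameters. The following sketch (ours; the Gaussian envelope and the carrier frequency are hypothetical) selects \(\Omega\tau_{d}\) and \(\phi\) for a target amplitude \(z_{0}\) and verifies the outcome through the quadratures of Eqs. (32) and (37).

```python
import numpy as np

f0 = lambda u: np.exp(-u ** 2)          # even envelope with fhat_0(0) = sqrt(pi) > 0
fhat0_at_0 = np.sqrt(np.pi)

z0 = -0.05j                             # target displacement, as in Fig. 4
Omega_tau = 2.0 * abs(z0) / fhat0_at_0  # Eq. (51a)
phi = -np.angle(z0) - np.pi / 2.0       # Eq. (51b)

omega = 40.0                            # hypothetical carrier, omega*tau_d >> 1
u = np.linspace(-8.0, 8.0, 400001)
du = u[1] - u[0]
f = f0(u) * np.cos(omega * u + phi)
s1 = np.sum(f * np.exp(-1j * omega * u)) * du      # Eq. (32)
zT = -1j * Omega_tau * np.conj(s1)                 # Eq. (37), units tau_d = 1
print(zT, z0)   # agree up to the neglected fhat_0(2*omega*tau_d) term
```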
For a given finite pulse duration, what is important is the order of the error in terms of \(g\tau_{d}\). In order to quantify the error, we use the state fidelity, comparing the target state with the actual state. Starting from the ground state \(|\psi(-T)\rangle=|00;0\rangle\) of the system at time \(t=-T\), the pulse finishes displacing the cavity mode by \(z(T)=z_{0}\) at time \(t=T\).

Figure 4: State fidelity for a displacement operation \(D[z_{0}]\) with \(z_{0}=-0.05i\), implemented by a subcycle pulse. The blue solid line with circles (blue dashed line) indicates the numerically (analytically) evaluated fidelity for the envelope shown in inset (a). Likewise, the orange solid line with crosses (orange dotted line) represents the numerically (analytically) evaluated fidelity for the envelope shown in inset (b).

In the interaction picture defined by Eq. (21), the target state, denoted as \(|\psi_{0}\rangle=|00;z_{0}\rangle\equiv|00\rangle|z_{0}\rangle\), is a displaced ground state, where \(|z_{0}\rangle\equiv D[z_{0}]|0\rangle\) is a coherent state. The state fidelity \(F\) is defined as the probability of measuring the target state in the actual state of the system \(|\psi(T)\rangle\equiv\mathcal{U}(T)|\psi(-T)\rangle\): \[F=|\langle\psi_{0}|\psi(T)\rangle|^{2}.\] Using Eq. (38) with (42) and identifying \(U_{1}(T)=D[z(T)]=D[z_{0}]\), the fidelity can be written as \[F=|\langle 00;0|\,\mathcal{U}_{2}^{\prime}(T)|00;0\rangle|^{2}. \tag{52}\] The leading-order term, which is the displacement, cancels out the displacement of the target state, and what is left is \(\mathcal{U}_{2}^{\prime}(T)\). Thus, the dominant error term for the displacement operation is determined by the leading term of \(\mathcal{U}_{2}^{\prime}(T)\), which is \(A_{II}^{\prime(1,1)}(T)\). For a fixed \(\Omega\tau_{d}\), as given in Eq. (51a), the order of magnitude of the error term is \[A_{II}^{\prime(1,1)}(T)=\mathcal{O}[(g\tau_{d})].\] Note that \(A_{II}^{\prime(1,1)}(T)\) is the only term that is linear in \(g\tau_{d}\) in the exponent \(A_{II}^{\prime}(T)\) of \(\mathcal{U}_{2}^{\prime}(T)\). Let us denote \(A_{II}^{\prime(:,1)}(T)=A_{II}^{\prime(1,1)}(T)\), where '\(:\)' in the superscript stands for all orders of \(\Omega\tau_{d}\) at the given order of \(g\tau_{d}\), which is \(1\) in this case. Expanding \(\mathcal{U}_{2}^{\prime}(T)\) with respect to \(g\tau_{d}\), we arrive at \[F=1-(g\tau_{d})^{2}\mathrm{Var}[\tilde{A}_{II}^{\prime(:,1)}]+\mathcal{O}[(g\tau_{d})^{3}],\] where \(\tilde{A}_{II}^{\prime(:,1)}(t)\equiv A_{II}^{\prime(:,1)}(t)/(g\tau_{d})=\tilde{A}_{II}^{\prime(1,1)}(t)\) and the variance is evaluated with respect to the initial state \(|\psi(-T)\rangle=|00;0\rangle\). In general, especially for a pulse envelope without definite parity, \(\tilde{A}_{II}^{\prime(1,1)}(T)\) is nonzero, as can be seen from Eq. (47). An example of such a pulse envelope is \(f_{0}(t)=(1/\sqrt{2})f_{\mathrm{HG},0}(t)+(1/\sqrt{2})f_{\mathrm{HG},1}(t)\). However, when the pulse envelope has an even parity, e.g. \(f_{0}(t)=f_{\mathrm{HG},0}(t)\), the leading error term, \(\tilde{A}_{II}^{\prime(1,1)}(T)\), almost vanishes, which can be seen from Eqs. (47) and (48). Since the linear term in \(g\tau_{d}\) is almost zero, the error is dominated by quadratic terms. As shown in Sec. IV.4, there are two quadratic terms in \(g\tau_{d}\).
Denoting their sum as \[\begin{split}A_{II}^{\prime(:,2)}(t) &=A_{II}^{\prime(1,2)}(t)+A_{II}^{\prime(2,2)}(t)\\ &\equiv(g\tau_{d})^{2}\tilde{A}_{II}^{\prime(:,2)}(t),\end{split}\] the fidelity can be written as \[F=1-(g\tau_{d})^{4}\mathrm{Var}[\tilde{A}_{II}^{\prime(:,2)}]+\mathcal{O}[(g\tau_{d})^{5}].\] In Fig. 4, we show the fidelity for \(z_{0}=-0.05i\) and for two exemplary shapes of the pulse envelope \(f_{0}(t)\). For both pulse shapes, the error converges to zero as the pulse duration becomes shorter. For \(f_{0}(t)=f_{\mathrm{HG},0}(t)\) the convergence rate is higher. In this case the leading error term \(A_{II}^{\prime(1,1)}(T)\) is suppressed, since the functional \(s_{(1,1)}(T)\) in Eq. (47) almost vanishes. A higher convergence rate implies that, for a given requirement on the fidelity, a longer pulse can be utilized. This is more desirable for an experimental realization, because too short pulses can be problematic both in terms of their generation and of avoiding certain types of operation errors, as mentioned at the beginning of this section. One may go beyond the presented convergence rate by eliminating even higher-order error terms \(A_{II}^{\prime(:,k)}(T)\) for \(k\geq 2\) through pulse shaping. By this method, one may systematically increase the convergence rate to the extent that a desired operation with a required fidelity can be implemented with a pulse of available duration.

## V Entanglement generation by quasistatic driving, \(\tau_{d}\gg T_{g}\)

We now consider quasistatic driving, where the duration \(\tau_{d}\) of the external driving is much longer than the characteristic timescale of the system, \(T_{g}\). We first assume that the envelope function is constant, i.e. \(f_{0}(t)=1\). For a fixed driving amplitude \(\Omega\), we describe the ground state of the Hamiltonian. Then, we increase the driving strength adiabatically from zero to a finite value, so that the system remains in the ground state corresponding to the instantaneous value of the driving strength at each moment in time. For this quasistatic driving, we set \(\phi=0\) and \(f_{0}(t)=1\) in Eq. (7). Notice that a nonzero \(\phi\) would correspond to a rotation of both the state of the cavity mode in its phase space and the state of the qubits by the same angle \(\phi\). Thus, we have \(f(t)=\cos{(\omega t)}=(1/2)(e^{-i\omega t}+e^{i\omega t})\). Using this in Eq. (4) and applying the RWA, we get \[H_{e}^{\mathrm{RWA}}=\hbar\Omega\frac{1}{2}(a+a^{\dagger}).\] Note that for the RWA to hold, the driving strength \(\Omega\) should be small enough with respect to the driving frequency \(\omega\). The total Hamiltonian given in Eq. (1) becomes time-independent: \[H^{\mathrm{RWA}}=H_{g}+H_{e}^{\mathrm{RWA}}. \tag{53}\] The ground state of \(H^{\mathrm{RWA}}\) is known for an arbitrary number of qubits [26]. It is normalizable when \(\Omega<Ng\), where \(N\) is the number of qubits. Let us denote the state vector corresponding to the ground state as \(|E_{0};r\rangle\), with energy \(E_{0}\). One can show that the ground state is given by a product state of the rotated qubits and a squeezed state of the cavity mode, with the ground-state energy \(E_{0}=0\). For our two-qubit case (\(N=2\)), it can be written as \[|E_{0};r\rangle=|\theta_{r}\theta_{r}\rangle|r\rangle,\] where \[|\theta_{r}\theta_{r}\rangle\equiv R[\theta_{r};\mathbf{e}_{y}]|00\rangle \tag{54}\] and \(R[\theta_{r};\mathbf{e}_{y}]\) rotates the qubits by an angle \(\theta_{r}\) around the \(y\)-axis.
The angle depends on the driving strength \(\Omega\), as determined by the relation \[\sin\theta_{r}=\frac{\Omega}{Ng}=\frac{\Omega}{2g},\] where \(0\leq\theta_{r}<\pi/2\) for \(\Omega<Ng=2g\). The cavity mode is squeezed, \[|r\rangle\equiv S(r)|0\rangle,\] where \(S(r)=\exp[(r/2)(a^{\dagger})^{2}-(r/2)a^{2}]\) is the squeezing operator, with \(r\geq 0\) being the squeezing parameter. The average photon number of \(|r\rangle\) is \[\bar{n}_{\gamma}\equiv\langle a^{\dagger}a\rangle=\sinh^{2}r.\] The rotational angle is connected to the squeezing parameter by \[\cos\theta_{r}=e^{-2r}, \tag{55}\] where \(0\leq r<\infty\). Figure 5(a) illustrates the relation between \(r\) and \(\theta_{r}\). The average number of excited qubits is given as \[\bar{n}_{q}\equiv\sum_{j=1}^{N}\langle\sigma_{j}^{+}\sigma_{j}^{-}\rangle=N\sin^{2}(\theta_{r}/2),\] where \(N=2\) is the number of qubits and the average is taken with respect to the rotated state, Eq. (54). The total average excitation number \(\bar{n}\) is defined as the sum of the average number of photons and that of the excited qubits, \[\bar{n}=\bar{n}_{\gamma}+\bar{n}_{q}.\] By turning off the external driving at time \(t=0\), we mean \(\Omega=2g\sin\theta_{r}\to 0\) instantaneously. Then the system starts to evolve from the initial condition \(|\psi(0)\rangle=|E_{0};r\rangle\) with the Hamiltonian \(H(t)=H_{g}\), since \(\Omega=0\). The concurrence evolves accordingly for \(t\geq 0\), which is shown in Fig. 5(b). In order to find the squeezing parameter with the maximal entanglement of formation, we evaluate the naive concurrence \(\tilde{C}(t)\) maximized with respect to time, i.e. \(\max_{t}\tilde{C}(t)\), which is shown in Fig. 5(c). The maximization is done over the time range presented in Fig. 5(b). The maximal concurrence occurs for \(r=0.899\), corresponding to the average photon number \(\bar{n}_{\gamma}=1.05\), the average number of excited qubits \(\bar{n}_{q}=0.83\) and the average total excitation number \(\bar{n}=1.89\). For larger squeezing, the maximal concurrence decreases. Note that the rotational angle converges to \(\pi/2\) as \(r\rightarrow\infty\). To see the pure effect of the squeezed state on the generation of entanglement, one may rotate the qubits back to their ground states while keeping the squeezing of the cavity mode. For the rotations, one may address the qubits directly, by shining a laser along a direction perpendicular to the axis of the cavity. If it is not feasible to access the qubits directly, one can still rotate them by driving the cavity mode. In order to achieve this, one may drive the cavity mode with a specific pulse shape that satisfies \(z(T)=0\), in order to avoid a displacement of the cavity mode but still induce the required rotation of the qubits, facilitated by the second-order term discussed in Sec. IV.3. An example of such a pulse shape can be found in Fig. 3(c). In Fig. 6(b), we show the naive concurrence induced purely by a squeezed state, without any rotation, i.e. \(\theta_{r}=0\), as shown in Fig. 6(a). We notice that the naive concurrence can be negative when the rotation is involved, as can be seen in Fig. 5(b), whereas, when the rotation is subtracted out, resulting in a pure squeezed state, the naive concurrence always stays nonnegative.

Figure 5: (a) The relation between the squeezing parameter \(r\) of the cavity mode and the rotational angle \(\theta_{r}\) for the qubits. (b) The naive concurrence \(\tilde{C}(t)\) versus the squeezing parameter \(r\) and time \(t\). (c) The naive concurrence maximized with respect to time for each given squeezing \(r\).

Figure 6: Same as Fig. 5, except that the rotational angle is zero for all values of the squeezing parameter \(r\).
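The structure of \(|E_{0};r\rangle\) can be verified numerically. The sketch below (ours) assumes the resonant Tavis–Cummings form \(H_{g}=\hbar g(a\sigma^{+}+a^{\dagger}\sigma^{-})\) with collective \(\sigma^{\pm}\), which is our reading of the model — the paper's sign and rotation conventions may differ, so both rotation signs are tried; the residual \(\|H^{\mathrm{RWA}}|\psi\rangle\|\) should be close to zero for one of them, consistent with \(E_{0}=0\).

```python
import numpy as np
from scipy.linalg import expm

# Operators on qubit A (x) qubit B (x) cavity, with Fock cutoff Nf (hypothetical)
Nf = 60
ac = np.diag(np.sqrt(np.arange(1.0, Nf)), 1)          # cavity annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])               # sigma^-: |1> -> |0> (ground = |0>)
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
I2, If = np.eye(2), np.eye(Nf)

A = np.kron(np.kron(I2, I2), ac)
Sm = np.kron(np.kron(sm, I2), If) + np.kron(np.kron(I2, sm), If)  # collective sigma^-
Sp = Sm.conj().T

g, Omega = 1.0, 1.0                                   # so that sin(theta_r) = Omega/2g = 0.5
H = g * (A @ Sp + A.conj().T @ Sm) + 0.5 * Omega * (A + A.conj().T)  # assumed H^RWA, Eq. (53)

theta = np.arcsin(Omega / (2.0 * g))                  # sin(theta_r) = Omega/(Ng), N = 2
r = -0.5 * np.log(np.cos(theta))                      # cos(theta_r) = e^{-2r}, Eq. (55)
S = expm(0.5 * r * (ac.conj().T @ ac.conj().T - ac @ ac))  # squeezing operator S(r)
vac = np.zeros(Nf); vac[0] = 1.0

for sgn in (+1.0, -1.0):                              # rotation sign is convention-dependent
    Rq = expm(-1j * sgn * theta * sy / 2.0)           # single-qubit factor of R[theta; e_y]
    q = Rq @ np.array([1.0, 0.0])                     # rotated qubit state, from |0>
    psi = np.kron(np.kron(q, q), S @ vac)             # ansatz |theta theta>|r>, Eq. (54)
    print(sgn, np.linalg.norm(H @ psi))               # ~0 for the correct sign (E_0 = 0)
```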
Another difference concerns the value of the squeezing parameter \(r\) which maximizes the concurrence. As can be seen from Fig. 6, the concurrence reaches its maximum at \(r=1.11\), which corresponds to the average photon number \(\bar{n}_{\gamma}=1.84\). The rotation thus shifts the optimal squeezing parameter \(r\) and the corresponding average photon number. Without rotation, a higher average photon number is required to reach the maximal concurrence. This difference in the average photon number is compensated by the pumping of quanta through the rotation, see Fig. 7. In order to generate a maximal concurrence, what matters most is the total number of excitations in the system, including both the photons and the excitations of the qubit system. We finish this section by considering two limits. The first is the low-excitation limit, \(\bar{n}\ll 1\). In Fig. 8(a), we show the time-dependent naive concurrence in the low-squeezing regime, where \(r=0.0492\), corresponding to \(\bar{n}=0.0962\) with rotation and \(\bar{n}=0.00243\) without rotation. We see that when there is no rotation, an oscillation with a well-defined period is present. The reason is that in this limit the initial state consists of \(|00;0\rangle\) and a small admixture of \(|00;2\rangle\). The former has no time dependence. The latter belongs to the two-quanta subspace and is essentially the only contribution to the time dependence in this regime. When the rotation enters, however, the oscillation of the concurrence has multiple frequencies, as can be seen in Fig. 8(a), indicated by the black solid line. This can be understood as a consequence of an additional interference with another state with a single quantum, namely \(|\Psi^{+};0\rangle\), introduced by the rotation of \(|00;0\rangle\). We next turn to the opposite regime, where the excitation number is much higher than \(1\). One example is shown in Fig. 8(b), where the squeezing parameter is \(r=3.15\). The corresponding average total excitation number is \(\bar{n}=137\) with rotation and \(\bar{n}=136\) without rotation. After some time passes, the naive concurrence in both cases shows complicated fluctuations. When the rotation is not subtracted, the naive concurrence fluctuates around an average value close to zero, whereas when the initial state is purely a squeezed state with the rotation removed, the system can maintain a positive entanglement of formation for a longer duration.

## VI Discussion

Let us discuss the relevant parameters for an experimental realization of the presented results. Firstly, since the theory employs the RWA for the coupling between each qubit and the cavity mode, the coupling strength should be small compared to the frequency of the cavity mode and the qubits, i.e. \(g\ll\omega\). Secondly, for the subcycle, or sub-Rabi, driving, we require \(g\tau_{d}\ll 1\), which follows from \(\tau_{d}\ll T_{g}=(\sqrt{n}g)^{-1}\). For the efficient coupling of the external field to the cavity mode, without affecting the other modes much, the pulse needs to have a well-defined carrier frequency which is resonant with the frequency of the cavity mode. From this condition, we require \(\omega\tau_{d}\gg 1\), which follows from \(T_{\omega}\equiv 2\pi/\omega\ll\tau_{d}\). All the conditions can be summarized by Eq. (17).
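Before quoting concrete numbers, note that the window of Eq. (17) is easy to script. The sketch below (ours) evaluates the admissible pulse durations for the quantum-dot parameters quoted in the next paragraph; the upper bound comes out at roughly 60 ps for \(n\sim 1\), of the same order as the \(\sim 55\,\mathrm{ps}\) quoted below.

```python
import numpy as np

# Window of Eq. (17) for the quantum-dot example of the next paragraph:
# wavelength 928 nm and cavity-qubit coupling g/2pi = 16 GHz.
c = 2.99792458e8                      # speed of light (m/s)
lam = 928e-9                          # resonant wavelength (m)
g = 2.0 * np.pi * 16e9                # coupling (rad/s)
omega = 2.0 * np.pi * c / lam         # cavity frequency (rad/s)

for n in (1, 10, 100):
    t_low = 2.0 * np.pi / omega       # T_omega = 2*pi/omega << tau_d
    t_high = 2.0 * np.pi / (np.sqrt(n) * g)   # tau_d << ~ (sqrt(n)*g)^{-1}
    print(f"n = {n:3d}:  {t_low * 1e15:.1f} fs  <<  tau_d  <<  {t_high * 1e12:.1f} ps")
```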
As long as the cavity-qubit coupling \(g\), the mode frequency \(\omega\) and the pulse duration \(\tau_{d}\) satisfy this condition, one can test the demonstrated results. To have a concrete example, we consider a quantum dot in a photonic crystal [27], where the resonant frequency corresponds to a wavelength \(\lambda=928\,\mathrm{nm}\) and the cavity-qubit coupling is \(g/2\pi=16\,\mathrm{GHz}\). This imposes a condition on the pulse duration, \[T_{\omega}\equiv 2\pi/\omega\sim 3\,\mathrm{fs}\ll\tau_{d}\ll 55\,\mathrm{ps}/\sqrt{n}\sim 2\pi/(\sqrt{n}g).\] For the low-excitation limit, where \(n\sim 1\), we get \(\tau_{d}\ll 55\,\mathrm{ps}\). If one selects \(\tau_{d}=5.5\,\mathrm{ps}\), this corresponds to \(g\tau_{d}\sim 0.6\), which is used in our calculations, e.g. in Fig. 4.

Figure 7: Naive concurrence maximized with respect to time versus the average total excitation number. The black solid line with circles denotes the case where the rotation is accompanied by the squeezing, whereas the blue dashed line represents the case where no rotation is involved.

Figure 8: Comparison of the naive concurrence with and without the rotation. The squeezing parameters are shown in each panel. The black solid lines represent the concurrence induced by a squeezed state with no rotation. The blue dashed lines represent the naive concurrence from a squeezed and rotated state with the rotational angle \(\theta_{r}\) given by Eq. (55).

## VII Conclusions

We have considered the generation of entanglement between two qubits by using a classical light source and a quantized cavity mode. We have shown how two qubits can be entangled by exchanging quanta with a third party, which in our case is the cavity mode. Quanta can be pumped into the system through external driving by a classical light source coupled to the cavity mode, with no direct driving of the qubits. The quanta-exchange timescale \(T_{g}=(\sqrt{n}g)^{-1}\) is identified. With respect to this characteristic timescale of the cavity-qubits system, we considered two regimes of the external driving. We first discussed subcycle driving, performed by a pulse with a duration shorter than the characteristic timescale of the system, \(T_{g}\). We showed that the leading-order effect of a pulsed driving is a displacement of the cavity mode, which can be expected since the cavity mode is directly coupled to the pulse. We further showed that by shaping the pulse, one can also rotate the qubits and, if desired, one can let the cavity remain intact after the passage of the pulse. The entanglement generation for each type of pulse shape was demonstrated, showing good agreement with exact results. We showed that the error of the displacement operation can be made arbitrarily small by choosing a sufficiently small \(g\tau_{d}\), which represents the shortness of the pulse duration with respect to \(T_{g}\). The error was estimated by identifying the convergence rate. Furthermore, enhancing the convergence rate by shaping the pulse was demonstrated, indicating how to perform a desired operation with a given fidelity. Higher-order effects, including a phase shift of the qubits and a displacement of the cavity mode conditional on the qubit state, were identified. In the opposite regime of the driving, we discussed quasistatic driving, whose duration is much longer than \(T_{g}\). We considered a continuous-wave driving with a driving amplitude such that there exists a normalizable ground state in the rotating frame.
In this regime, the ground state is a squeezed state with rotated qubits. Assuming adiabatic driving to prepare the ground state with nonzero squeezing, we studied the entanglement induced by the squeezed and rotated state. We observed a maximal entanglement of formation when the total number of excitations, which is the sum of the average photon number and the average number of excited qubits, is on the order of \(1\). We compared the result with the case of pure squeezing, where there is no rotation of the qubits, and found that the optimal value of the squeezing parameter slightly changes. However, the average total number of excitations which generates the maximal entanglement was found to remain essentially the same. The studied cavity-qubits system is a useful testbed for fundamental quantum properties of light-matter interaction and entanglement. The presented framework enables selecting specific operations on the joint cavity-qubits state by an appropriate pulse shaping of an external classical light source. The set of all possible operations accessible by subcycle or quasistatic driving, together with the prescription for activating or suppressing each operation with high fidelity, can be used for laser-based experimental generation and control of entanglement between non-interacting systems.

## Appendix A Validity of the RWA for a short pulse

We check the validity of the RWA applied in Eq. (2) to the coupling between the cavity mode and each qubit. In Fig. 9, we compare the naive concurrence obtained with and without the RWA. In each calculation, the respective ground states are used. Note that the ground state under the RWA is \(|00;0\rangle\), which is separable, whereas the ground state without the RWA is entangled, with a concurrence on the order of \(10^{-5}\). In the full (RWA-free) result, there is a relatively rapid oscillation on top of the longer-scale evolution, coming from the counter-rotating terms in the Hamiltonian, which are neglected under the RWA. If the amplitude of the rapid oscillation becomes comparable to the magnitude of the naive concurrence, one may not completely ignore the counter-rotating terms. In the studied cases the value of the naive concurrence is large enough with respect to the rapid oscillation, justifying the RWA. We note that all data presented in this paper do not rely on the RWA between the cavity mode and the external field, being consistent with Eq. (4).

Figure 9: The naive concurrence of the two qubits calculated with different methods. The dashed and solid lines are numerical solutions, respectively with and without the rotating wave approximation (RWA) between the cavity and each qubit. The dotted line is obtained from the analytical expression based on \(\mathcal{U}\simeq U_{1}\). The cavity mode is driven by a pulse with duration \(\tau_{d}=T_{\omega}/45\), strength \(\Omega=0.225\omega\) and shape \(f(t)=\exp[-(t/\tau_{d})^{2}]\). The coupling strength between the cavity mode and each qubit is \(g=0.005\omega\).

## Appendix B Orders of magnitudes of Magnus terms

In this appendix, we derive Eqs. (28) and (44) and obtain sufficient conditions for the convergence of the Magnus expansions in Eqs. (27b) and (43b), respectively. For general discussions of the Magnus expansion, see Ref. [24]. Let us start from the first Magnus expansion, Eq. (28).
The first term of the expansion can be written as \[A_{I}^{(1)}(t)\equiv(\Omega\tau_{d})\tilde{A}_{I}^{(1)}(t), \tag{B1}\] where \[\begin{split}\tilde{A}_{I}^{(1)}(t)&=\int_{-T/\tau_{d}}^{t/\tau_{d}}du\,\tilde{H}_{I}(u)\\ &=\mathcal{O}[(\Omega\tau_{d})^{0}]\end{split} \tag{B2}\] and \(\tilde{H}_{I}(u)\equiv H_{I}(u)/\hbar\Omega\) is defined in Eq. (23). Any following term, i.e. \(A_{I}^{(m)}(t)\) for \(m>1\), can be written in terms of its preceding terms, i.e. \(A_{I}^{(k)}(t)\) for \(1\leq k<m\), as [28] \[A_{I}^{(m)}(t)=\Omega\tau_{d}\sum_{j=1}^{m-1}\frac{B_{j}}{j!}\sum_{\begin{subarray}{c}k_{1}+\cdots+k_{j}=m-1\\ k_{1}\geq 1,\cdots,k_{j}\geq 1\end{subarray}}\int_{-T/\tau_{d}}^{t/\tau_{d}}du\,\mathrm{ad}_{-iA_{I}^{(k_{1})}(u)}\cdots\mathrm{ad}_{-iA_{I}^{(k_{j})}(u)}\tilde{H}_{I}(u). \tag{B3}\] Here \(B_{j}\), for a nonnegative integer \(j\), is the Bernoulli number [29; 30] and \(\mathrm{ad}_{X}Y\equiv[X,Y]\) for given operators \(X\) and \(Y\). For example, the second Magnus term is given by \[A_{I}^{(2)}(t)=\Omega\tau_{d}\left(-\frac{1}{2}\right)\int_{-T/\tau_{d}}^{t/\tau_{d}}du\,[-iA_{I}^{(1)}(u),\tilde{H}_{I}(u)],\] with \(B_{1}=-1/2\). From Eq. (B3), let us show \[\begin{split}A_{I}^{(m)}(t)&\equiv(\Omega\tau_{d})^{m}\tilde{A}_{I}^{(m)}(t)\\ &=\mathcal{O}[(\Omega\tau_{d})^{m}]\end{split} \tag{B4}\] for all \(m\geq 1\), by induction. This implies \(\tilde{A}_{I}^{(m)}(t)=\mathcal{O}[(\Omega\tau_{d})^{0}]\) for all \(m\). Equation (B4) holds for \(m=1\), which follows from Eqs. (B1) and (B2). For any \(m>1\), if Eq. (B4) holds for all \(k\) such that \(1\leq k<m\), we set \(A_{I}^{(k)}(t)\equiv(\Omega\tau_{d})^{k}\tilde{A}_{I}^{(k)}(t)\) and use \(\mathrm{ad}_{cX}Y=c\,\mathrm{ad}_{X}Y\) for \(c\in\mathbb{C}\) to show that Eq. (B3) is proportional to \((\Omega\tau_{d})^{m}\). This concludes the induction, so Eq. (B4) holds for all \(m\geq 1\). We then proceed to show \[\tilde{A}_{I}^{(m)}(t)=\mathcal{O}[(\sqrt{n})^{m}], \tag{B5}\] for all \(m\geq 1\), where \(n\) is the number of excitations in the state of the system. When the state is in a superposition of states with different numbers of excitations, \(n\) may be set to the average number of excitations. Each factor of \(\sqrt{n}\) comes from \(a\) or \(a^{\dagger}\). Equation (B5) holds for \(m=1\), which follows from Eq. (B2) and the fact that \(\tilde{H}_{I}(u)\) is linear in \(a\) and \(a^{\dagger}\) in its leading order, as can be seen from Eqs. (26) and (5). Showing Eq. (B5) for any \(m>1\) can be done by another induction, in the same manner as for Eq. (B4). Combining Eq. (B4) with (B5), we obtain Eq. (28). Similarly, Eq. (44) can be derived by substituting \(A_{I}^{(k)}(t)\) and \(\tilde{H}_{I}(u)\) in Eq. (B3) with \(A_{II}^{\prime(k)}(t)\) and \(\tilde{H}_{II}^{\prime}(t)\), respectively, for all \(k\) such that \(1\leq k\leq m\).

###### Acknowledgements.

S.A. was supported by the education and training program of the Quantum Information Research Support Center, funded through the National Research Foundation of Korea (NRF) by the Ministry of Science and ICT (MSIT) of the Korean government under number 2021M3H3A103657313. S.A. and A.S.M. were supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) under number 2020R1A2C1008500. V.Y.C. and S.M. were supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, under Award Number DE-SC0022134. S.M. was also supported by the National Science Foundation (NSF). S.A.
is grateful to God, who created the Heavens and the Earth, for being faithful in helping him whenever he asked for wisdom and direction in this research.
2304.01342
Self-gravitating collapsing star and black hole spin-up in long gamma ray bursts
Long Gamma Ray Bursts (GRBs) originate from the collapse of massive, rotating stars. We aim to model the process of stellar collapse in the scenario of a self-gravitating collapsing star. We account for the changes in the Kerr metric induced by the growth of the black hole, the accretion of angular momentum, as well as the self-gravity effect due to a large mass of the collapsing stellar core falling onto the black hole in a very short time. We also investigate the existence of accretion shocks in the collapsar, and the role of magnetic fields in their propagation. We compute a time-dependent, axially-symmetric General Relativistic magnetohydrodynamic model of a collapsing stellar core in the dynamical Kerr metric. We explore the influence of self-gravity in such a star, where the newly formed black hole is increasing its mass and changing its spin. The Kerr metric evolves according to the mass and angular momentum changes during the collapse. We parameterize the rotation inside the star, and account for the presence of a large-scale poloidal magnetic field. For a set of global parameters, such as the initial black hole spin and the initial content of specific angular momentum in the stellar envelope, we determine the evolution of the black hole parameters (mass and spin) and we quantify the strength of the gravitational instability, the variability timescales, and the amplitudes. We find that the role of the gravitational instability, measured by the value of the Toomre parameter, is relatively important in the innermost regions of the collapsing star. The character of the accretion rate variability strongly depends on the assumption of self-gravity in the model, and is also affected by the magnetic field. Additional factors are the initial spin and rotation of the stellar core.
Agnieszka Janiuk, Narjes Shahamat Dehsorkh, Dominika Krol
2023-04-03T20:17:20Z
http://arxiv.org/abs/2304.01342v1
# Self-gravitating collapsing star and black hole spin-up in long gamma ray bursts

###### Abstract

Context: Long Gamma Ray Bursts (GRBs) originate from the collapse of massive, rotating stars. Some of the GRBs exhibit much stronger variability patterns in the prompt GRB emission than the usual stochastic variations. We discuss the mechanisms which could account for this effect.

Aims: We aim to model the process of stellar collapse in the scenario of a self-gravitating collapsing star. We account for the changes in the Kerr metric induced by the growth of the black hole, the accretion of angular momentum, as well as the self-gravity effect due to a large mass of the collapsing stellar core falling onto the black hole in a very short time. We also investigate the existence of accretion shocks in the collapsar, and the role of magnetic fields in their propagation.

Methods: We compute a time-dependent, axially-symmetric General Relativistic magnetohydrodynamic model of a collapsing stellar core in the dynamical Kerr metric. We explore the influence of self-gravity in such a star, where the newly formed black hole is increasing its mass and changing its spin. The Kerr metric evolves according to the mass and angular momentum changes during the collapse. We parameterize the rotation inside the star, and account for the presence of a large-scale poloidal magnetic field. For a set of global parameters, such as the initial black hole spin and the initial content of specific angular momentum in the stellar envelope, we determine the evolution of the black hole parameters (mass and spin) and we quantify the strength of the gravitational instability. Then we estimate the variability timescales and amplitudes.

Results: We find that the role of the gravitational instability, measured by the value of the Toomre parameter, is relatively important in the innermost regions of the collapsing star. The character of the accretion rate variability strongly depends on the assumption of self-gravity in the model, and is also affected by the magnetic field. Additional factors are the initial spin and rotation of the stellar core. We find that for sub-critical rotation of the pre-collapsed star, a centrifugally supported mini-disk is present at the equatorial plane, and it may be subject to fragmentation due to the self-gravitating instability. We also find that self-gravity may play a role in the angular momentum transport and that it generally lowers the final mass and spin of the black hole, while the accretion rate variability amplitude is much larger in self-gravitating objects. The effect of the magnetic field is rather weak, while it seems to decrease the strength of accretion shocks. The magnetisation affects the global properties of the flow in a non-linear way, and is manifested mostly in models with moderate initial black hole spins, but with super-critical rotation of the collapsing star.

Conclusions: Our computations confirm that the gravitational instability can account for flaring activity in GRBs and the variations in their prompt emission. Rapid variability detected in the case of the brightest GRBs (most likely powered by rapidly spinning black holes) is consistent with the self-gravitating collapsar model where the transonic shocks are formed. The effect should be weakened by the magnetic field.

## 1 Introduction

Massive stars are born, live, and die collapsing under their own gravitational force, ejecting their outer hydrogen-rich envelopes after billions of years of transforming light elements into heavier ones via nuclear fusion.
In the last stage of their evolution, they are called collapsars if the rotation velocity of the star was large enough to enable the formation of an accretion disk in the core. In contrast to the low-mass stars, which leave white dwarfs as their compact remnants, the more massive ones, with masses \(\geq 8M_{\odot}\), die violently in supernova explosions that inject freshly synthesized elements, enriching the interstellar medium. In this case, the iron core of the progenitor collapses to a neutron star or black hole. These types of explosions are called core-collapse supernovae (CCSN), and the particular type of remnant at the final stage of the massive star's evolution depends on its mass, metallicity, and rotation rate (Janka et al., 2007; Woosley and Heger, 2015). More precisely, in the simplest case of no rotation and no mass loss, for stars of mass \(8-30M_{\odot}\) the iron cores collapse to neutron stars, leading to supernovae. However, some of these stars may either not explode or explode incompletely, leaving black holes as their remnants. This occurs especially for stars with massive helium cores, from \(7M_{\odot}\) up to \(10M_{\odot}\). For stars with mass \(30-80M_{\odot}\) (helium core mass \(10-35M_{\odot}\)), black hole formation is quite likely. Rotation generally shifts the main sequence mass ranges (but not the helium core masses) downwards for each outcome. Mass loss complicates the relation between the initial main sequence mass and the final helium core mass (Woosley and Heger, 2015). Gamma-ray bursts (GRBs) may accompany some of the type Ib/c supernova explosions. These transient events are manifested in a sudden release of about \(10^{51}-10^{54}\) ergs of energy in a volume with a radius of less than 100 km, lasting from 0.01 to 100 s (for reviews, see e.g. Piran (2004); Kumar and Zhang (2015)). According to the duration time \(T_{90}\), which is defined as the time interval over which 90% of the total background-subtracted counts are observed, GRBs are usually separated into two classes: long GRBs (LGRBs; \(T_{90}>2\) s), which originate from the core collapse of massive stars (Woosley, 1993; Hjorth et al., 2003), and short GRBs (SGRBs; \(T_{90}<2\) s), whose origins are thought to be the coalescence of neutron stars (NSs) or NS-black hole binary systems (Eichler and Cheng, 1989; Narayan et al., 1992). Most observed GRBs (70%) have a duration greater than two seconds and are classified as LGRBs. Since these events constitute the majority of the population, and as they tend to have the brightest afterglows, they have been observed in much greater detail than their short counterparts. Almost every well-studied long GRB has been linked to a galaxy with rapid star formation, and in many cases to a CCSN as well (Woosley and Bloom (2006)). Long GRB afterglow observations, which reflect their association with high redshifts (\(z\gtrsim 5\)), are also consistent with the GRBs having originated in star-forming regions (Pontzen et al., 2010). However, not all collapsing stellar cores give birth to GRBs. A sufficiently large angular momentum of the progenitor is required for an accretion disk to form at the equatorial plane, making it capable of producing an LGRB (see e.g., Janiuk and Proga (2008); Janiuk et al. (2008), and Krol and Janiuk (2021)).
If this condition is not satisfied, the collapse will proceed without an electromagnetic transient and lead to a disappearance of the star from the field of view of our telescopes (Murguia-Berthier et al., 2020). Otherwise, the creation of a jet via the process of accretion is a key factor to account for the formation of a GRB. The strong magnetic field that can be sustained during the process of collapse, and amplified by the differential rotation and dynamo effects, helps launch relativistic jets from the accretion disk. These jets then break out through the stellar surface and produce emission in gamma rays (Zhang et al. (2003)). In previous work, we have built a numerical model that accounts for a dynamical change of the black hole parameters, and the related Kerr metric, during the collapse onto a newly formed black hole. We used general relativistic hydrodynamical (Janiuk et al., 2018) or magnetohydrodynamical simulations (Krol and Janiuk, 2021) to probe the amount of angular momentum in the collapsing star envelope and the conditions which are sufficient for producing either a GRB, or just a massive, moderately rotating black hole with no electromagnetic transient. In that work, we evolved the Kerr metric of the space-time surrounding the rotating black hole with changing mass and spin. However, we neglected the gravitational force of the massive star itself, which acts on the collapsing gas. This simplification was justified by the fact that the massive core is very compact in comparison with the diluted stellar envelope residing in a much larger volume. In this work, we relax this simplifying assumption and extend our model with a new numerical scheme where we account for the self-gravity of the star. We do this via a perturbative approach, i.e. still without solving the full set of Einstein equations. This is done by integration of the mass and angular momentum in the flow around the center at any given radius, and adding this component as a small term to the mass and spin of the black hole which are constituents of the locally defined Kerr metric. In this way we represent more correctly the influence of the self-gravitating mass enclosed in the volume on the orbiting material. We analyze the possible instabilities in the self-gravitating collapsing stellar core, and we also find regions where shock discontinuities are formed. We also study the process of stellar collapse by allowing the initial core radius to slightly vary, and hence define different initial conditions for the subsequent formation of a rotationally supported accretion disk. Finally, we account for the magnetic field component perturbing the accretion rate, and we estimate the timescale of variability which may be reflected in the observed properties of GRBs, if the jets are formed. Similarly to our previous studies, we explore the properties of an exemplary model, assuming a \(25M_{\odot}\) envelope of a collapsing star and a \(3M_{\odot}\) initial black hole mass formed from the core. We probe a parameter space similar to our previous study, i.e. we vary the value of the initial black hole spin, the magnitude of angular momentum inside the envelope, and the strength of the magnetic fields. However, we extend this parameter space to larger specific angular momentum endowed in the star, and we implement several alternative magnetic field configurations. We compare the results of the new models, i.e. those with self-gravity of the collapsing core, to those without the self-gravity force. The article is organized as follows.
In Section 2 we present the general framework of our model, which has been developed upon the general relativistic MHD code and extended to work in a time-dependent Kerr spacetime metric in previous papers. In Section 3, we describe the current advancement of the previous model, which is the implementation of the self-gravity of the collapsing star in the spacetime dynamics. In Section 3.2 we describe a modification in the inner boundary condition, in Section 3.3 we describe the perturbative method used to compute the change of mass and angular momentum of the black hole due to self-gravity, and in Section 3.4 we define the magnetic field configurations used in our test models. In Section 4 we present the results of our calculations. In particular, Section 4.1 describes the time evolution of the self-gravitating, non-magnetized stellar cores, and compares them to our previous, non-self-gravitating models; Section 4.3 presents a detailed analysis of the gravitational instability, which is a new feature found in the newly developed models; and Section 4.4 presents the evolution of magnetized, self-gravitating collapsing cores. In Section 5 we discuss our results in the context of GRB phenomenology, and in Section 6 we summarize our conclusions.

## 2 Time evolution with accreting black hole mass and spin update

Apart from the modifications described in Sect. 3.2 and 3.3, we follow the time evolution of the collapsing core as described in Krol and Janiuk (2021). We use the general relativistic magnetohydrodynamic code called High Accuracy Relativistic Magnetohydrodynamics (HARM), which was originally established by Gammie et al. (2003) (see also Noble et al. (2006)). The code introduces a conservative, shock-capturing scheme with low numerical viscosity to solve the hyperbolic system of partial differential equations of GR MHD. The numerical scheme uses the plasma energy-momentum tensor, \(T_{\mu\nu}\), with contributions from matter (gas) and electromagnetic field. For the GR MHD evolution, we solve two fundamental equations: the equations of mass and energy-momentum conservation, which are as follows:

\[(\rho u^{\mu})_{;\mu}=0;\qquad T^{\mu}_{\nu;\mu}=0. \tag{1}\]

The energy stress tensor is a sum of two parts, gas and electromagnetic:

\[T^{\mu\nu}_{(m)}=\rho hu^{\mu}u^{\nu}+pg^{\mu\nu} \tag{2}\]

\[T^{\mu\nu}_{(em)}=b^{k}b_{k}u^{\mu}u^{\nu}+\frac{1}{2}b^{k}b_{k}g^{\mu\nu}-b^{\mu}b^{\nu} \tag{3}\]

\[T^{\mu\nu}=T^{\mu\nu}_{(m)}+T^{\mu\nu}_{(em)} \tag{4}\]

where \(u^{\mu}\) denotes the four-velocity of the gas, \(u\) represents the internal energy density, and \(b^{\mu}\) and \(h\) are the magnetic four-vector and the fluid specific enthalpy, respectively. The MHD scheme is cast in conservative form by implementing a Harten-Lax-van Leer (HLL) solver (Harten et al. 1983) to calculate the corresponding fluxes numerically. The fluid equation of state is that of a polytrope with a pressure \(P=K\rho^{\gamma}\), where \(\rho\) is the density, \(\gamma=4/3\) is the adiabatic index, and \(K\) is the specific entropy, in this case taken to be that of a relativistic fluid with inefficient cooling. We have been developing a new version of the HARM code, first implemented by Janiuk et al. (2018), where we considered a dynamically evolving space-time owing to the changes in the central black hole's parameters. The simulations are started after the black hole has already formed, and it is assumed that its gravitational field controls the subsequent space-time evolution.
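As an illustration of the conventions above, the total stress-energy tensor of Eqs. (2)-(4) can be assembled at a single grid cell as in the following Python sketch (the production HARM code is written in C; the function and argument names here are ours and purely illustrative):

```python
import numpy as np

def stress_energy(rho, u_int, p, ucon, bcon, bcov, gcon):
    """Total stress-energy tensor T^{mu nu} of Eqs. (2)-(4); a sketch.

    rho    : rest-mass density
    u_int  : internal energy density u
    p      : gas pressure
    ucon   : contravariant four-velocity u^mu (length-4 array)
    bcon, bcov : magnetic four-vector b^mu and its covariant form b_mu
    gcon   : inverse metric g^{mu nu} (4x4 array)
    """
    h = 1.0 + (u_int + p) / rho            # specific enthalpy
    bsq = np.dot(bcon, bcov)               # b^k b_k
    T_gas = rho * h * np.outer(ucon, ucon) + p * gcon          # Eq. (2)
    T_em = (bsq * np.outer(ucon, ucon)
            + 0.5 * bsq * gcon - np.outer(bcon, bcon))         # Eq. (3)
    return T_gas + T_em                                        # Eq. (4)
```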
Then, the matter starts to accrete onto it, and the code applies a sequence of quasi-stationary solutions with the black hole's mass and spin updated by a very small value in each time step. The Kerr black hole's line element (metric) in the Boyer-Lindquist coordinates is given by:

\[ds^{2}=\left(1-\frac{2M_{BH}r}{\Sigma}\right)dt^{2}+\frac{4M_{BH}ar\sin^{2}\theta}{\Sigma}dtd\phi-\frac{\Sigma}{\Delta}dr^{2}-\Sigma d\theta^{2}-\sin^{2}\theta\left(r^{2}+a^{2}+\frac{2M_{BH}a^{2}r\sin^{2}\theta}{\Sigma}\right)d\phi^{2} \tag{5}\]

where \(\Delta=r^{2}-2M_{BH}r+a^{2}\), \(\Sigma=r^{2}+a^{2}\cos^{2}\theta\), and \(a=\frac{J}{M_{BH}}\), where \(M_{BH}\) and \(J\) are the mass and angular momentum of the black hole. To put the evolution of the black hole's parameters into effect, one can consider that the metric changes discretely between consecutive time steps, according to the small changes of mass and spin, \(\Delta M=(M^{i}_{BH}/M^{0}_{BH}-1)\) and \(\Delta a=(\dot{J}/M^{i}_{BH}-a^{i-1}\dot{E}/M^{i}_{BH})\Delta t\), where \(M^{i}_{BH}\) reflects the current black hole mass at time \(t>0\), \(M^{0}_{BH}\) denotes the initial mass of the black hole at \(t=0\), and \(\dot{J}\) and \(\dot{E}\) are the fluxes of angular momentum and energy transmitted through the black hole event horizon. The six non-trivial Kerr metric components are then updated at every time step to get their new values. Our grid size is \(R_{out}=1000~r_{g}\), and the resolution is 256x256 points in the radial and polar directions. The outer boundary conditions in the radial direction are free outflow (variables are copied to the two ghost zones). In the polar direction, reflecting boundary conditions are assumed (velocity and magnetic field components change their signs on the polar axis).

## 3 Self-gravitating collapsar model

Now, we calculate the time evolution of the collapsing massive star using the GR MHD scheme, in the new version upgraded upon Janiuk et al. (2018). The evolution of the space-time Kerr metric is again accounted for by the increasing mass and changing spin of the black hole in the collapsing stellar core. However, new terms due to the self-gravity of the star are computed and volume-integrated at every time step during the dynamical simulation.

### Initial conditions

We adopt initial conditions similar to those used in Krol & Janiuk (2021), namely a slowly rotating, transonic, quasi-spherical accretion flow with small angular momentum. The initial distribution of density and radial velocity in the accreting sphere is given by the numerically integrated Bondi solution, which we parameterize with the location of the sonic radius. In our models, it is fixed at \(r_{s}=80r_{g}\). Below this radius the matter falls into the black hole supersonically, reaching the speed of light at the black hole horizon. Once the critical point is determined, the velocity at this critical point is (Shapiro & Teukolsky 1986):

\[(u^{r}_{s})^{2}=\frac{GM_{\rm BH}}{2r_{\rm s}}, \tag{6}\]

where \(r\) is the radial coordinate and \(u^{r}\) is the radial component of the four-velocity. The radial velocity can be obtained by numerically solving the relativistic Bernoulli equation:

\[\left(1+\frac{\gamma}{\gamma-1}\frac{P}{\rho}\right)^{2}\!\!\left(1-\frac{2GM_{\rm BH}}{r}+(u^{r})^{2}\right)=\rm constant, \tag{7}\]

and the density is set by the mass accretion rate \(\dot{M}\):

\[\rho=\frac{\dot{M}}{4\pi r^{2}u^{r}}. \tag{8}\]

The specific entropy value, \(K\), depends on the radial velocity and is taken to be (Sukova & Janiuk 2015; Sukova et al. 2017; Palit et al.
2019):

\[K=\left(u^{r}4\pi r^{2}\frac{c_{\rm s}^{\frac{2}{\gamma-1}}}{\gamma^{\frac{1}{\gamma-1}}\dot{M}}\right)^{\gamma-1}, \tag{9}\]

where \(c_{\rm s}^{2}=\frac{\gamma P}{\rho}\) is the local sound speed. The small angular momentum is imposed on the spherically distributed gas, similarly to Krol & Janiuk (2021). The specific angular momentum is normalized with the parameter \(S\), such that the flow is circularized at the innermost stable circular orbit (ISCO). In addition, the rotation velocity scales with the polar angle, so that at the equator, \(\theta=\pi/2\), the rotation of the star is maximal:

\[l=Sl_{\rm ISCO}r^{2}\sin^{2}\theta \tag{10}\]

with

\[l_{\rm ISCO}=u_{\phi,\rm ISCO}=\frac{r_{\rm ISCO}^{1/2}-2a/r_{\rm ISCO}+a^{2}/r_{\rm ISCO}^{3/2}}{\sqrt{1-3/r_{\rm ISCO}+2a/r_{\rm ISCO}^{3/2}}}. \tag{11}\]

Here the radius \(r_{\rm ISCO}\) in the Kerr geometry depends on the black hole spin. Our black hole spin parameter is set in the initial setup, and from now on it is denoted as \(A_{0}\). We use both sub-critical and super-critical rotation speeds, parameterized with \(S<1\) and \(S>1\), respectively. Our model parameter space is therefore defined by \(S\) and \(A_{0}\). We notice here that in this formulation the black hole in the center of the collapsing stellar core is already as massive as \(M_{BH}=3M_{\odot}\), which sets the length unit \(r_{g}=4.45\times 10^{5}\ cm\). This means that our computational grid size in cgs units is only \(4.45\times 10^{8}\ cm\). Therefore, if a compact C-O core of a Wolf-Rayet star, or a pre-supernova star of \(25M_{\odot}\) (Woosley & Heger, 2006), is assumed as our initial model, it is now squeezed into a much smaller volume, reaching order-of-magnitude larger densities in the center. Our model is compact enough to address the problem of self-gravitating gas close to the horizon of a newly formed black hole. We do not address here any prior or ongoing supernova explosion. Depending on the rotation parameter \(S\), the ultimate outcome might be either a direct collapse, or the formation of a mini-disk inside the stellar core, i.e. a collapsar (as depicted in Figure 1). The latter may lead to an electromagnetic transient.

### Boundary condition between the initial stellar core and outer flow

For numerical reasons, we need to ensure that a sufficient number of grid cells is located below the black hole horizon. The inflow of matter then proceeds smoothly through the horizon, thanks to the change of coordinates to the Kerr-Schild ones, which are non-singular. In the initial setup, we assume that the transition between the newly formed black hole and the accreting stellar core is shifted by a certain factor. We choose a shift radius on the order of \(1.2\ r_{g}\) to account for a smooth free fall of the stellar shells onto the black hole, which has already been shielded by the event horizon. Therefore, we introduce an initial offset between the black hole horizon radius, \(r_{b}=1+\sqrt{1-a^{2}}\) in \(r_{g}\) units, and the dense surrounding gas. The inner radius of the accreting stellar core is now placed at \(R_{in}=R_{shift}\cdot r_{b}\). Our model parameter \(R_{shift}\) therefore represents the initial inner radius of the stellar core before the collapse starts. Its minimum value, \(1.0\), would imply that the collapse starts immediately when the black hole has formed. Otherwise, values larger than \(1\) imply a slight delay of the black hole growth. The shift provides numerical stability of the initial phase of the simulation.
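To make the setup of Sects. 3.1 and 3.2 concrete, the quantities entering Eqs. (6), (10)-(11) and the inner radius \(R_{in}\) can be evaluated with a few lines of Python. This is a sketch in geometric units \(G=c=M_{BH}=1\), assuming the standard Bardeen et al. (1972) expression for the prograde ISCO radius (the function names are ours):

```python
import numpy as np

def r_isco(a):
    """Prograde ISCO radius in r_g units (Bardeen et al. 1972 formula)."""
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def l_isco(a):
    """Specific angular momentum at the ISCO, Eq. (11)."""
    r = r_isco(a)
    return (np.sqrt(r) - 2 * a / r + a**2 / r**1.5) / \
           np.sqrt(1 - 3 / r + 2 * a / r**1.5)

def u_r_sonic(r_s):
    """Radial four-velocity at the sonic point, Eq. (6), with G = M = 1."""
    return np.sqrt(1.0 / (2.0 * r_s))

def r_inner(a, r_shift):
    """Initial inner radius of the stellar core, Sect. 3.2."""
    r_b = 1.0 + np.sqrt(1.0 - a**2)     # horizon radius in r_g units
    return r_shift * r_b

# Example for model parameters A0 = 0.5, R_shift = 1.2, r_s = 80 r_g:
print(r_isco(0.5), l_isco(0.5), u_r_sonic(80.0), r_inner(0.5, 1.2))
```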
### Modification of the stellar core structure due to the self-gravity

To describe the self-force acting on the collapsing gas, the Teukolsky equation has been chosen. This equation describes gravitational, electromagnetic, and scalar field perturbations of a rotating Kerr black hole (Teukolsky, 1972). The global vacuum solution of the Teukolsky equation is given by the CCK method (see Chrzanowski (1975); Cohen & Kegeles (1974); Wald (1978)), which reconstructs the metric perturbation and shows that only perturbations of the mass and angular momentum (\(\delta M\) and \(\delta J\), defined below) are to be included in the Kerr metric. We notice that in the weak-field limit the Einstein field equation reduces to the Poisson equation, leaving us with the only non-zero component of the perturbed metric potential (cf. Ryu et al. (2020)). Recently, van de Meent (2017) showed that the perturbation due to a particle on a bound orbit around a black hole, described by the CCK metric, affects the Kerr parameters describing the mass and angular momentum of the black hole for the metric 'outside' the particle's orbit, and vanishes 'inside' the orbit. Our work proposes a numerical implementation of this method, following the assumptions of van de Meent (2017) but for fluid dynamics instead of particles, so we compute volume integrals of the corresponding stress-energy tensor components. By definition, the problem does not assume spherical symmetry. The potential wells may therefore appear off-axis, in the whole region 'outside' the orbit of a given fluid element. We have developed a new version of the GRMHD numerical code HARM. As the initial results have shown (Janiuk, 2022), we expect to see more mass and spin growth of the black hole after incorporating the perturbation effects of the accreting disk into the updated metric. In our new simulations, both the mass and angular momentum accreted through the black hole horizon, and used to update the Kerr metric coefficients, are now modified with a perturbation acting on the metric in the region above the horizon, due to the self-gravity force that the gas feels at a given distance from the horizon (a schematic view of the calculation is depicted in Fig. 1). These perturbative terms are calculated from the stress-energy tensor. Hence, in addition to the two equations governing the growth of black hole mass and spin via the mass and angular momentum transfer through the horizon:

\[\dot{M}_{BH}=\int d\theta d\phi\ \sqrt{-g}\,T^{r}_{\ t}, \tag{12}\]

and

\[\dot{J}=\int d\theta d\phi\ \sqrt{-g}\,T^{r}_{\ \phi}, \tag{13}\]

(see Janiuk et al. (2018) for more details), we now calculate:

\[\delta M_{BH}(t,r)=2\pi\int_{r_{in}}^{r}\!\int T^{r}_{\ t}\sqrt{-g}\,d\theta\,dr \tag{14}\]

computed at every radius above the horizon. Analogously, the angular momentum of the black hole will change by adding the perturbation:

\[\delta J(t,r)=2\pi\int_{r_{in}}^{r}\!\int T^{r}_{\ \phi}\sqrt{-g}\,d\theta\,dr \tag{15}\]

Figure 1: Schematic view of the collapsar at the onset of the GRB: stellar core (dark blue), stellar envelope composed of subsequently accreting shells with decreasing density (light blue-yellow-orange-green), and rotationally supported accretion disk formed at the equatorial region (red). The horizontal black line represents the equatorial plane. The circle marked with a dashed line represents an exemplary chosen radius above the horizon, at which the gas feels the perturbative force due to the self-gravity of the matter enclosed within this radius (see Eq. 14).
The dimensionless spin of the black hole, as a result, will change by

\[\delta a=\frac{J+\delta J(r)}{M_{BH}+\delta M_{BH}(r)}-a^{i} \tag{16}\]

Here \(a^{i}=a^{i-1}+\Delta a\), according to eq. (7) in Janiuk et al. (2018). Having different \(\delta M\) and \(\delta J\) at each radial grid point at each time affects the metric coefficients, which are sensitive to the mass and spin update. Our main change with respect to the Janiuk et al. (2018) model is the development of a new module in the time-dependent code that accounts for the self-gravity force. In the rest of the paper, we present the results computed with this module enabled in the simulation setup. We also compare them with the runs without self-gravity (the new modules switched off), to emphasize the difference and investigate the role of self-gravity in the collapsar physics.

### Post-collapse magnetic field and its chosen structure

We assume that the evolution of the collapsing stellar core prior to the simulated phase was unaffected by the particular configuration of the magnetic field. This is justified, as we address the class of massive stars with relatively weak fields. The surface magnetic fields can be constrained by observations, which suggest that there might be a bimodal distribution (Petit et al. 2019). The magnetic fields in the stellar core are unknown and all scenarios are uncertain. Currently, various pre-supernova models and scenarios adopt only a general scaling of magnetic fields, such as a large-scale dipole (Reichert et al. 2023). Some models have been using the fields inherited from the stellar evolution models (Woosley & Heger 2006), possibly supplemented by a toroidal component and amplified via a dynamo mechanism in a differentially rotating star (Spruit 2002), which may lead to the ultimate formation of a proto-magnetar in the core (Obergaulinger et al. 2009). In our simulations, we consider some possible effective modification of the collapsing zone due to the action of the weak magnetic field, which is dynamically unimportant. The application of a realistic evolved-star field geometry to our quasi-spherical mass distribution is not unique, and we introduce two specific prescriptions: (i) uniform magnetic field, and (ii) dipole magnetic field. Therefore we introduce the magnetic field in our initial conditions by defining the shape of the magnetic vector potential and setting up a proper normalisation. First, we assume that the initial accreting gas is embedded in a simple poloidal configuration of the magnetic field, as discussed already in Krol & Janiuk (2021). In this case, the only non-vanishing component of the magnetic field vector potential is given by:

\[A_{\varphi}\propto\frac{1}{2}r\sin(\theta) \tag{17}\]

Furthermore, as a variation of the uniform field derived for the Kerr black hole, we adopt the formula by Wald (1974)

\[A_{\varphi}\propto\left[\frac{1}{2}(r^{2}+a^{2})-a^{2}r\Sigma^{-1}(1+\cos^{2}(\theta))\right]\sin^{2}(\theta) \tag{18}\]

We normalize this uniform field assuming an initial maximum gas-to-magnetic pressure ratio, \(\beta=(\gamma-1)u/(0.5b^{2})\). The second scenario assumes a dipole field, whose vector potential is given by

\[A_{\varphi}\propto\frac{\sin(\theta)}{r} \tag{19}\]

We normalize the magnetic field to a chosen initial value of the maximum gas-to-magnetic pressure ratio, \(\beta_{0}=p_{gas}/p_{mag}\), which in the case of the Bondi gas distribution is reached at the black hole horizon.
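Before turning to the results, the self-gravity correction of Sect. 3.3 (Eqs. 14-16) can be summarized operationally. Below is a minimal Python sketch on a discrete \((r,\theta)\) grid, assuming the radial integration implied by the bounds of Eqs. 14-15; this is our illustration, not the actual C module of HARM:

```python
import numpy as np

def self_gravity_update(T_t, T_phi, sqrtg, dr, dtheta, M_bh, J_bh):
    """Perturbative self-gravity terms (Eqs. 14-16), sketched.

    T_t, T_phi : 2D arrays over (r, theta) with the stress-energy
                 components entering the mass and angular momentum
                 integrals of Eqs. (14)-(15)
    sqrtg      : sqrt(-g) on the same grid
    Returns delta_M(r), delta_J(r), and the local spin parameter
    felt by the gas at each radius above the horizon.
    """
    # Integrate over theta at each radius (the 2*pi covers phi):
    dm = 2.0 * np.pi * np.sum(T_t * sqrtg, axis=1) * dtheta
    dj = 2.0 * np.pi * np.sum(T_phi * sqrtg, axis=1) * dtheta
    # Accumulate from the inner radius outwards (Eqs. 14-15):
    delta_M = np.cumsum(dm) * dr
    delta_J = np.cumsum(dj) * dr
    # Spin parameter entering the locally defined Kerr metric (cf. Eq. 16):
    a_local = (J_bh + delta_J) / (M_bh + delta_M)
    return delta_M, delta_J, a_local
```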
By implementing these various field strengths and setups, we aim to verify their astrophysical implications for the collapsar scenario as the central engine of LGRBs. To sum up, in comparison with previous code applications (Janiuk et al. 2018; Krol & Janiuk 2021), we allow for three major modifications:

* the offset of the initial stellar core,
* the modification of the metric evolution to account for self-gravity effects,
* several different setups and strengths of magnetic fields, aimed at modeling realistic stellar cores.

## 4 Results

The HARM code works in dimensionless units of G = c = 1. Conversion coefficients can be found in Table 1. We calculated several sets of models, which differ with respect to the chosen initial parameters, and with respect to the presence or absence of self-gravity effects and magnetic fields. All models and their input parameters are listed in Table 3, where we give the values of the initial black hole spin \(A_{0}\) in dimensionless units, the inner radius of the collapsing stellar core, \(R_{shift}\) (equal to the stellar core radius; if no pre-collapsed core is assumed, the inner radius is located at the ISCO), and the initial specific angular momentum, \(S\), in the accreting gas. The self-gravity effect is denoted as either "yes" or "-" in the Table. For magnetized models, we provide the type of the initial field configuration (vertical, dipole) and we give the initial strength of the field, normalized either with the ratio of gas to magnetic pressure, \(\beta_{0}\), or with the initial magnetisation, \(\sigma_{0}\). Below, we show the results for the time evolution of the disk under the self-force, and we compare them with the trends previously observed, i.e. in calculations where the self-gravity was neglected. We consider for now only the non-magnetized models. In the next subsection, we will present a few characteristic properties of the low angular momentum, quasi-spherical accretion models, analyzed with respect to their gravitational stability. The magnetized models will be presented in more detail in Sect. 4.4.

### Time-dependent evolution

From the point of view of the evolutionary timescales, the black hole mass is the key parameter, as it determines the object's size scale. Here we concentrate on the results for the initial black hole mass of \(M_{BH}=3M_{\odot}\) and we check whether the whole star collapsed to the black hole, contributing to its final mass. We check how the evolution proceeds for various initial spin parameters, and several values of the rotation parameter. We compare the time profile, and the time-averaged value, of the accretion rate during the collapse for SG and non-SG models. We also check the evolution of the black hole spin, and what maximum spin value was reached during the collapse. We then compare the final black hole spin value, which saturates when the collapse has ended. These results are given in Table 3.

\begin{table}
\begin{tabular}{c c c}
\hline\hline
Physical quantity & Geometrical units & cgs units \\
\hline\hline
Length & \(r_{g}=\frac{GM}{c^{2}}\) & \(4.45\times 10^{5}\,\mathrm{cm}\) \\
Time & \(t_{g}=\frac{r_{g}}{c}\) & \(1.48\times 10^{-5}\,\mathrm{s}\) \\
\hline
\end{tabular}
\end{table}
Table 1: Geometric units to cgs units conversion. We adopted M=\(3M_{\odot}\) (the initial central black hole mass).
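The conversion coefficients of Table 1 follow directly from the adopted black hole mass; a two-line check in Python (constants in cgs; a sketch):

```python
# Geometric-to-cgs conversion for M_BH = 3 M_sun (cf. Table 1).
G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33   # cgs constants
M = 3 * M_sun
r_g = G * M / c**2          # length unit, ~ 4.4e5 cm
t_g = r_g / c               # time unit,   ~ 1.5e-5 s
print(f"r_g = {r_g:.3e} cm, t_g = {t_g:.3e} s")
```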
Assigning three different values to the initial rotation parameter, \(S=1,1.4,2\), and two values, 0.5 and 0.85, to the initial black hole spin parameter \(A_{0}\), we analyze the evolution of our model features, i.e. the accretion rate, the black hole's mass, and the dimensionless spin parameter. The time profiles of these quantities are plotted in Figure 2. Both cases including and excluding self-gravity are compared. The three panels show the black hole mass profile (left), the accretion rate (middle) and the black hole spin evolution (right). (In the rest of the paper, we label the models with their acronyms, corresponding to those used in Table 3.) For different initial black hole spins, one finds no remarkable changes in the \(M_{BH}\) evolution. The situation differs when it comes to various rotation parameters \(S\), however. We notice that the larger the initial rotation magnitude, the longer it takes for the black hole mass to evolve. The non-SG simulations end with very different final black hole masses, depending on the \(S\) parameter. This is because material supported by super-critical rotation is affected by the centrifugal force and remains in the accretion torus, while only the material from the polar regions is able to reach the black hole quickly, on the free-fall timescale, as was already found in Janiuk et al. (2018). In contrast, the self-gravity of the envelope can speed up the evolution of the collapsing stellar core significantly. We also observe no big difference in the black hole mass evolution between the cases with \(S\geq 1\). Only in the \(S=2\) run is the final black hole mass significantly smaller than the total mass of the evolved stellar core, and it saturates at a value of about \(20M_{\odot}\) (see Table 3 for details). The middle plots in Fig. 2 confirm that with no self-gravity effects taken into account, a considerably less fluctuating behavior of the accretion rate over a longer time is deduced. In this case, there exist some oscillations in the accretion rate during some time intervals (around 0.2 s for \(S=1.4\), and \(0.4-0.5\) s for \(S=2\)). Before these periods of time, there seems to be some mass accumulation in the inner regions, observed in the density profiles, and also in the specific angular momentum distributions at different time snapshots (see next Subsection). This mass accumulation, followed by fluctuations in the accretion rate, is attributed to the generation of the rotationally supported torus in the inner stellar core, which is concentrated on the equatorial plane. On the other hand, in the curves with self-gravity the mass accretion rate increases and drops much more sharply than in the non-SG models, presenting a characteristic pattern of a sudden fluctuating rise over very short periods of time (less than 0.1 s). We notice here that those sharp peaks have a very large magnitude, but their height may be to some extent affected by the numerical resolution, and overestimated. The mass accumulation prior to the oscillations can be seen in this case as well. We found a fluctuating reduction in the size of the rotationally supported torus, followed by the formation of an inhomogeneous structure in the density profiles, that may account for this temporal behavior. We will present more details on these events in the next section.
Considering the black hole spin (dimensionless, i.e. the angular momentum of the black hole scaled with the inverse of its mass), one finds that in self-gravitating models a notable increase in the black hole's mass corresponds to the spin parameter reaching its maximum. Further accretion after this time brings more matter than angular momentum to the central object, resulting in a considerable rise of \(M_{BH}\) and a later drop of the spin. It coincides with the remarkable increase of the accretion rate onto the black hole. The non-SG models behave differently. Here, the black hole mass and spin rise more slowly than in the SG models. However, the maximal spin is reached earlier than the maximum mass (for sub-critical rotation, \(S=1\)), or is reached at a rather late time, when the black hole mass has not yet saturated at its final value (for \(S=2\)). We also found that the black hole's spin reaches its maximum value when the rotationally supported region close to the black hole is shrinking. The self-gravity effect seems to speed up this evolution. Moreover, a higher value of the rotation parameter \(S\), as well as the presence of the disk at late times in the collapsar with \(S=2\), leads to a less massive but more rapidly spinning black hole, as can be found in the green curves of the right panel in Figure 2. The final spin in this case is about \(A=0.4-0.5\). In Figure 3 we show the evolution of the black hole spin, where we compare the runs with various initial spins \(A_{0}\). The thick lines represent self-gravitating stars, while the thin lines are shown for comparison and represent calculations where self-gravity was neglected. The initial black hole spin was taken in the range between \(A_{0}=0.3-0.85\). In the two panels, we show the simulations with critical and super-critical angular momentum content inside the collapsing star, namely \(S=1\) and \(S=2\). As the figure shows, the models with \(S=1\) tend to have smaller values of the maximum and final black hole spin. The maximum value during the collapse is reached at about \(t\sim 0.1\) s (around time 5000 \(t_{g}\) in geometric units) and it correlates with the initial spin value. For the smallest \(A_{0}=0.3\), the net spin-up is largest, while for \(A_{0}=0.85\) the black hole actually does not increase its spin, and only a temporary flattening of the spin time evolution profile is observed. Similarly to the cases presented in Fig. 2, we notice that the evolution of the spin proceeds much faster in the self-gravitating star. Nevertheless, the final black hole spin is almost the same for all models (about 0.2). For super-critical rotation of the star, on the other hand, the value of the maximum spin is almost the same for all models, regardless of the initial spin value, and it is about \(A_{max}=0.87\) (see Table 3). In contrast, the final spin value is systematically higher than for the case of \(S=1\), and it somewhat depends on the initial black hole spin. For \(A_{0}=0.85\), the net spin-down of the black hole during the whole simulation is largest. The values of the maximum and final spins for all models are reported in Table 3. We notice that all our models evolve very quickly, as a consequence of the small radius of the computational domain. Our collapsing stellar core is squeezed into a volume that is much reduced with respect to a typical stellar progenitor. For a star of radius about \(10^{12}-10^{14}\ cm\) it would take hours to entirely collapse.
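This estimate can be checked with a Newtonian free-fall time; a back-of-the-envelope sketch in Python, taking the \(25M_{\odot}\) stellar mass used in this work and a radius of \(10^{12}\) cm for the inner envelope (the choice of radius is ours, for illustration):

```python
import numpy as np

# Newtonian free-fall time of a shell at radius R enclosing mass M; a sketch.
G, M_sun = 6.674e-8, 1.989e33        # cgs constants
M = 25 * M_sun                        # mass of the collapsing star
R = 1e12                              # cm, inner part of a typical progenitor
t_ff = (np.pi / 2.0) * np.sqrt(R**3 / (2.0 * G * M))
print(f"t_ff ~ {t_ff:.2e} s ~ {t_ff/3600.0:.1f} h")   # ~ a few hours
```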
On the other hand, for a long gamma ray burst it is sufficient to sustain the accretion disk in the collapsar for about 100 seconds. As was demonstrated with a toy model by Janiuk et al. (2008), a duration of the GRB event between 20-40 s, or around 100-150 s, is expected for a collapsing star where the black hole is being fed with mass and angular momentum while the rotationally-supported accretion disk is sustained (cf. Fig. 9 in their paper). The most violent changes of the black hole parameters occur at the very initial phase, corresponding to the shells collapsing from radii below \(10^{9}-10^{10}\) cm, where most of the mass is enclosed. Our calculations, which go beyond this old toy model and employ full GR hydrodynamics, confirm this qualitative picture.

### Effects of the self-force on the flow properties

In this Section, we present in more detail the properties of the self-gravitating collapsing star for chosen models. We analyze specific time snapshots, which correspond to the characteristic features found in the time profiles of the mass accretion rate and the black hole spin.

#### 4.2.1 Density inhomogeneities and formation of the accretion shocks

In Figure 2 we have shown the time profiles of the accretion rate through the black hole horizon, which exhibit a sudden increase at early times of the simulation, followed by an oscillatory pattern in the cases with self-gravity. We now show in Figure 4 the corresponding snapshots of the density profiles in addition to the velocity vector field. We notice that they reflect the formation of some special structures, i.e. an equatorial outflow of matter, which reaches radii up to about \(80~r_{g}\) and is then stalled at the transonic shock. Furthermore, we notice some small inhomogeneities in the density at the chosen time intervals, visible in more detail in Fig. 5. The assumed black hole spin parameter in both models was \(A_{0}=0.5\), while the rotation parameter is either \(S=1.4\) or \(S=2\), for Fig. 4 or Fig. 5, respectively. Both models include self-gravity effects. In either Figure, the top three images depict density snapshots of the collapsing stellar core overlaid with normalized velocity vector fields and the contour of Mach number \(M=1\) (i.e. the sonic surface), for the sake of a better representation of the transonic shock and equatorial outflow regions.

Figure 3: Time evolution of the black hole spin for the rotation normalized with the specific angular momentum at ISCO, \(S=1.0\) (left) and \(S=2.0\) (right). Thick lines represent self-gravitating models and thin lines are for self-gravity neglected. Various initial black hole spin values are shown, as labeled in the plots and marked by dark green for \(a_{0}=0.85\), violet for \(a_{0}=0.6\), magenta for \(a_{0}=0.5\), and light green for \(a_{0}=0.3\). Models are labeled in both panels with symbols referring to Tab. 3.

Figure 2: Evolution of the accretion rate and black hole parameters including and excluding self-gravity, plotted by the thick and thin curves, respectively. Top panels refer to the initial black hole spin parameter \(A_{0}=0.85\) while the bottom panels correspond to \(A_{0}=0.5\). Three different values of the initial rotation parameter, i.e. \(S=1,1.4,2\), related to the blue, pink and green curves, have been considered as well. The left plots refer to the black hole mass evolution, the middle ones show the accretion rate temporal behavior, and the panels on the right demonstrate the evolution of the black hole's spin parameter.
Models are labeled in the middle panels with symbols referring to Tab. 3.

Furthermore, we provide the pressure maps (the three bottom profiles in both figures), which correspond to the same time steps as those of the density profiles. In this way, we investigate the possibility of some types of hydrodynamical interfacial instabilities (i.e., Rayleigh-Taylor or Self-gravity Interfacial (SGI) instability). In the density profiles during the early time steps, we found an increase of density in the inner regions (\(r<100~r_{g}\)), which shows an accumulation of mass at the equatorial region. Such an accumulation is connected with the formation of an outflow, as shown through the backward velocity vectors surrounded by contours of Mach number \(M=1\). This appeared in the density snapshots at times \(t=0.089~s\), or \(t=0.118~s,0.133~s\), for the case with \(S=1.4\), or \(S=2\), respectively. We interpret the creation of the outflow at the equatorial plane as an effect of the centrifugal force, which is found for supercritical rotation. Consequently, a transonic shock is located within a \(\simeq 100~r_{g}\) radius. In the following time steps, the density profiles show that the incoming material from outside the disk is finally channeled into this inner region, leading to a rise in the accretion rate at about \(0.1~s\) (cf. the early-time sharp increase of the accretion rate, as shown in the thick pink and green curves in Figure 2). As expected, for the case of the higher rotation parameter \(S=2\) the outflow region has a larger size, and survives for a longer time (\(\simeq 0.133~s\)). Additionally, another outflow arises at the end of the simulation, for the case of the higher \(S=2\) parameter, which leads to an ejection of matter from the envelope. This is demonstrated in the third density profile of Figure 5 (\(t=0.665~s\)). This causes a more oscillatory behavior of the mass accretion rate during the last time steps of our simulation (cf. thick green curve in Figure 2). In comparison, for the non-self-gravitating cases in Krol & Janiuk (2021), we found considerably longer-lasting disk structures at the equator, which were spread out through larger radii. This confirms that there exists an interplay between self-gravity and the centrifugal force (rotation), and consequently its suppressive impact on the outflow regions. We notice that an inhomogeneous structure appears in both the density and pressure profiles (as well as in other quantities, such as the specific angular momentum and the Mach number). This behavior is seen at time step \(t=0.123~s\) for \(S=1.4\), and more evidently at \(t=0.133~s\) for \(S=2\), respectively. To have a better representation, we depicted a snapshot showing only the upper hemisphere in the case with \(S=2\), for which there is a symmetric structure with respect to the equator (in contrast to the case of \(S=1.4\)). One can find that the inhomogeneities start growing from the interfaces with significant discontinuities in density. As a result, interfacial instabilities seem likely to be responsible for such a structure. In general, instabilities may be divided into two types: global instabilities (such as Jeans) and interfacial instabilities (examples are the Kelvin-Helmholtz and Rayleigh-Taylor (RT) instabilities). Among the interfacial instabilities, the RT instability occurs when density and pressure gradients act in opposite directions.
This criterion can be identified through the linear growth rate proposed to examine the RT-unstable regions, which reads (Kifonidis et al., 2003):

\[\sigma_{RT}=\sqrt{-\frac{p}{\rho}\frac{\partial\ln\rho}{\partial r}\frac{\partial\ln p}{\partial r}}. \tag{20}\]

Moreover, Hunter Jr et al. (1997, 1998) introduced another type of interfacial instability driven by self-gravity, called the Self-gravity Interfacial instability (SGI). The linear growth rate for the SGI instability is supposed to be (Hunter Jr et al., 1997, 1998)

\[\sigma_{SGI}=\sqrt{\frac{2\pi G(\rho_{2}-\rho_{1})^{2}}{(\rho_{2}+\rho_{1})}}. \tag{21}\]

\begin{table}
\begin{tabular}{c c c c}
\hline\hline
\(r\) (\(r_{g}\)) & \(\sigma_{RT}(s^{-1})\) & \(\sigma_{SGI}(s^{-1})\) & sign of \(\frac{\partial\ln\rho}{\partial r}\frac{\partial\ln p}{\partial r}\) \\
\hline\hline
20.0 & \(2.18\times 10^{-4}\) & \(1.1\) & \(>0\) \\
21.0 & \(2.75\times 10^{-4}\) & \(0.97\) & \(>0\) \\
22.0 & \(3.64\times 10^{-4}\) & \(0.82\) & \(>0\) \\
23.0 & \(4.99\times 10^{-4}\) & \(0.63\) & \(>0\) \\
24.0 & \(7.13\times 10^{-4}\) & \(0.4\) & \(>0\) \\
25.0 & \(9.5\times 10^{-4}\) & \(0.18\) & \(>0\) \\
\hline
\end{tabular}
\end{table}
Table 2: RT and SGI growth rates for the self-gravitating case of \(S=1.4\) and \(A_{0}=0.5\) (model A05-S14-SG-R10). Different radii located around the mixing boundary at \(\theta=\frac{\pi}{40}\) are considered.

Figure 4: Snapshots of density and pressure at different time steps, for the self-gravitating model. The rotation parameter is \(S=1.4\) and the initial black hole spin is \(A=0.5\). The top three profiles demonstrate the velocity vector field overlaid on the background of the density profiles, with thick white curves as the contour plot of the sonic surface. The three bottom snapshots show the pressure profiles and indicate how the pressure varies through the inner regions, corresponding to the density, as a hint of the SGI/RT instability. Note that the first snapshot (\(t=0.089s\)) is zoomed out to a larger area of about \(100r_{g}\), while the other two time snapshots illustrate the zoomed-in profiles. This is done for a clear indication of the shocked region. The model shown is A05-S14-SG-R10, as listed in Table 3.

The RT and SGI instabilities result in very similar configurations in the density snapshots. However, they have their own characteristics which allow us to differentiate between these types of hydrodynamical instabilities. Since self-gravity knows no preferred direction, it is destabilizing across all density interfaces, while an interface is RT-unstable only if the heavy fluid is on top of the light fluid. It has also been confirmed that the RT instability is characterized by dense spikes penetrating the tenuous fluid, whereas the SGI develops with tenuous spikes streaming into the denser fluid (Hueckstaedt et al. 2005). As one can infer from Figure 4, the SGI instability seems to dominate over the RT instability and produces the inhomogeneities. First, the density and pressure profiles at \(t=0.123s\) confirm the fact that the density and pressure gradients over the discontinuity, which appeared at \(r\simeq 20\,r_{g}\), act in the same direction. To provide a better intuition, some data for the criteria (20) and (21) are provided in Table 2, which quantifies the growth rates of the RT and SGI instabilities around the boundary with the emergence of growing unstable configurations (i.e., \(r\sim 20\,r_{g}\)). It also confirms the positive sign of \(\frac{\partial\ln\rho}{\partial r}\frac{\partial\ln p}{\partial r}\) at these regions.
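For illustration, the growth rates collected in Table 2 can be evaluated directly from the simulated profiles; a minimal Python sketch (the array and function names are ours, and the interface densities \(\rho_{1}\), \(\rho_{2}\) are meant to be read off on both sides of the discontinuity):

```python
import numpy as np

def sigma_RT(p, rho, r):
    """Linear RT growth rate of Eq. (20) along a radial ray; a sketch."""
    dlnrho = np.gradient(np.log(rho), r)
    dlnp = np.gradient(np.log(p), r)
    arg = -(p / rho) * dlnrho * dlnp
    # The rate is real only where the gradients act in opposite directions:
    return np.sqrt(np.clip(arg, 0.0, None))

def sigma_SGI(rho1, rho2, G=6.674e-8):
    """Linear SGI growth rate of Eq. (21) across a density interface."""
    return np.sqrt(2.0 * np.pi * G * (rho2 - rho1)**2 / (rho2 + rho1))
```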
Moreover, a comparison between the growth rates of the RT and SGI instabilities shows a considerably faster growth of the SGI modes, meaning that this type of instability is a plausible candidate to account for the inhomogeneous structure. Second, it appears that tenuous bubbles penetrate into the denser matter, which also indicates that less dense matter lies on top of the denser fluid, given the direction of the flow in this region. Therefore, we argue that it is the SGI instability, rather than RT, whose modes are growing. Based on similar evidence, we also believe the SGI instability is the more probable mechanism causing the inhomogeneous structure in the case with \(S=2\), as shown in Figure 5. It is worth noting that such a configuration does not seem to be a non-linear turbulent structure, since we found no mixing among bubbles and spikes that would change their sizes remarkably before they fall into the black hole. Moreover, the velocity vector fields also confirm no detectable interaction between these small oscillatory patterns, i.e. there seems to be no chaotic direction in the velocity vectors. Therefore, we believe that this pattern can be treated as the linear growth of SGI modes, which is due to SG, without representing any non-linear growth that should occur as a result of interactions between small bubbles and spikes.

#### 4.2.2 Self-gravity and angular momentum transport

Self-gravity is expected to play a role in the angular momentum transport, e.g. in protoplanetary disks (Armitage 2011). In more detail, the transport of angular momentum is possible via hydrodynamic (HD) or magnetohydrodynamic (MHD) instabilities that produce a turbulent structure leading to such a transfer. Self-gravity can also provide this through gravitational instabilities (Lodato 2008), in addition to the SGI instability discussed above. We postulate that self-gravity facilitates the transfer of angular momentum in our collapsing scenario. In Figure 6 we show the specific angular momentum of the envelope at the equator for two cases, namely with and without self-gravity. The top panels show models with \(S=1.4\), while the plots on the bottom present models with \(S=2\). Comparing those two cases with the non-SG models (plots located on the left), we show that self-gravity has paved the way for the angular momentum to be transferred outwards, as seen at the larger radii. On the other hand, during the intervals when the density inhomogeneities emerge (i.e., between \(t=0.118\ s-0.148\ s\)), there seems to be a sudden decrease of the angular momentum profile in the innermost regions. It can be interpreted as a sudden transport of mass and angular momentum towards the black hole. However, it shows an upward trend once the inhomogeneities disappear (i.e., from \(t=0.163s\) onwards). More precisely, we think that self-gravity (SG) has two major impacts on the specific angular momentum. First, SG models accelerate the evolution of the envelope considerably, so that the inward mass and outward angular momentum transfer occur within a shorter period of time with respect to the non-SG models. This results in a larger increase of the outer region's specific angular momentum, from one time step to another, in comparison to the non-SG models. Therefore, we may attribute the differences in the specific angular momentum at the larger radii, illustrated in Figure 6 for the cases with and without self-gravity, to this issue.
We believe that the faster evolution in SG models is due to the suppressing impact of SG on the centrifugal force, which also explains the longer-lasting outflow of matter around the equator in non-SG models (one may attribute the production of the outflow to the centrifugal force, which results in the slower evolution of non-SG models). Second, the instabilities that occur due to SG effects, i.e. the ring-like gravitational instability (which leads to very small-scale ring-like structures, see below), followed by the SGI instability (that causes an inhomogeneous structure of the inner envelope), can affect the inner region's angular momentum transport, and consequently control the early-time accretion rate onto the black hole (which can be traced in the time domain \(0.1-0.2\ s\) in Figure 2). More precisely, the ring-like gravitational instability seems to prevent the accretion rate from rising steeply, through a rise in the angular momentum of the inner regions, producing a very transient ring-like structure. Afterwards, as the inhomogeneities grow in the inner region (from \(\sim 0.118s\) to \(\sim 0.148s\)), the bubbles with lower angular momentum (this can be detected from the images in Figures 7 & 8) increase in number at the equatorial plane as well as in other zones. We argue that such a mixing of layers with different densities and angular momenta, together with the tendency toward the lowest-energy configuration, causes the angular momentum to be transferred from the spikes (with a larger amount of angular momentum) into the bubbles (with lower angular momentum) when they meet at the same radii. This yields an inward transport of angular momentum. A similar discussion about the inward transport of angular momentum when the fluid elements are mixing can be found in Balbus (2003) and references therein. In contrast, at time steps during which the inhomogeneous structure starts disappearing (from \(\sim 0.163s\) up to \(\sim 0.192s\)), one can find a rise in the specific angular momentum, getting back to a stabilized configuration. In Figures 7 and 8 we show snapshots of the specific angular momentum distribution for these two models. We show here the self-gravitating models. It can be easily traced that the specific angular momentum starts decreasing through an inhomogeneous configuration from \(t=0.118\ s\) until \(t=0.148\ s\), and adopts an increasing trend afterwards, as pointed out earlier. In the model with the higher rotation of the star's envelope, the inhomogeneous structures in the angular momentum distribution seem to be smoothed out more quickly, and by \(t=0.177\ s\) they disappear.

### Analysis of gravitational stability

In self-gravitating collapsing stellar cores, gravitational instability can produce different structures, like axisymmetric ring formation as well as non-axisymmetric spiral arms, called I-modes, or fragmentation, which can be referred to as J-modes and identified with the Jeans mechanism of instability (Hachisu et al. 1987; Christodoulou & Narayan 1992). The so-called Toomre parameter, \(Q=\frac{\kappa c_{s}}{\pi G\Sigma}\), where \(\kappa\) is the epicyclic frequency, \(c_{s}\) the local sound speed, and \(\Sigma\) the surface density, has been introduced for the axisymmetric local instabilities in geometrically thin disks (Toomre 1964). Later on, Hachisu et al.
(1987) proposed a universal criterion for gravitational instability which is valid in both thick and thin systems, as follows:

\[\tilde{Q}=\frac{\kappa^{2}}{\pi G\rho}<1, \tag{22}\]

where \(\kappa^{2}=4\Omega^{2}+rd\Omega^{2}/dr\). They argued that non-axisymmetric fragmentation in rapidly rotating systems is generally triggered by the onset of ring formation (as the axisymmetric consequence of the gravitational instability). Considering the importance of gravitational instability in self-gravitating accretion systems, we probe the possibility of axisymmetric disk formation and ring fragmentation in our 2D collapsar scenario. It can consequently provide us with an estimate for the non-axisymmetric fragmentation. In spite of the fact that our stellar core is set up in 2D, the latter can be considered a possible outcome of a 3D setup (planned for our future work). Figure 9 demonstrates the condition for the axisymmetric modes, given by Eq. (22), showing its behavior on the equatorial plane for several time steps. We compare here both cases, with and without self-gravity. To consider any possible connection with the emergence of density and angular momentum inhomogeneities, we also mark the profiles with "Homo" and "InHomo" labels, which stand for the homogeneous and inhomogeneous structures, respectively. Additionally, in the cases without self-gravity, we did not find any possibility for the axisymmetric modes to form, as shown by the smaller inset plots in Fig. 9. One can find that the inner regions are unstable towards ring formation just before the inhomogeneities start arising. For a higher rotation parameter \(S\), the unstable region moves outward and the collapsing star is prone to becoming unstable at earlier times. However, we did not detect any unstable region for the case of \(S=2\), as can be found from the plot at the bottom in Figure 9. The gaps in this case appear to be related to the emergence of inhomogeneities at the equator at the corresponding time steps. With an increase of the spin parameter \(A_{0}\), for a given \(S\) parameter, a smaller axisymmetric instability region appears. Based on Hachisu et al. (1987), we argue that as soon as the condition for axisymmetric ring formation is met, the non-axisymmetric fragmentation becomes possible. To investigate it, we would however need a 3D setup of the collapsing stellar core, which is beyond the scope of the present work. To provide an intuition of what happens in the unstable regions while we consider the axisymmetric modes, we present density profiles at the time \(t=0.118\)\(s\) in Figure 10. It is an unstable time snapshot for the three cases of the rotation parameter, \(S=1\), \(1.4\), and \(2\), shown from top to bottom, respectively.

### Simulations of magnetized collapsing stars

We now investigate the role of the magnetic field and its importance from the point of view of the gravitational stability of the collapsing core. Here we present the general trends of the system evolution, starting from the simplest case of a weak uniform magnetic field, given by Eq. 17. In Figure 11 we plot the time evolution of the black hole mass, spin, and mass accretion rate onto the black hole, for the two values of specific angular momentum in the collapsing star, \(S=1.0\) and \(S=2.0\), and we compare the system evolution with and without the self-gravity term. The initial gas-to-magnetic pressure ratio is \(\beta=10\), and the initial black hole spin is \(a=0.85\) in all models.
In general, the larger specific angular momentum results in a smaller final black hole mass and a larger final black hole spin (although a net spin-down is found in all cases).

Figure 5: Snapshots of density and pressure profiles, similar to Fig. 4, for the model with rotation parameter \(S=2\) and initial black hole spin \(A_{0}=0.5\). The third snapshot (\(t=0.665s\)) is zoomed out to a larger area of about \(1000r_{g}\), while the first two time snapshots illustrate a zoom-in to \(300r_{g}\), for a clear representation of both the outflow regions at the final time steps and the visibility of the inhomogeneities. The model shown is A05-S20-SG-R12, as listed in Table 3.

The self-gravitating stellar core evolves much faster, and the final value of the black hole spin is reached already after the initial \(\sim 10000\)\(t_{g}\). Also, in the case of self-gravitating collapsing cores, the instantaneous accretion rate presents much larger amplitudes of oscillations during the period of steep black hole mass increase. We have also examined the influence of the self-gravity on the global density profile and the shape of the magnetic field lines. Self-gravity noticeably modifies the 2D profiles of density and other quantities at two stages of the evolution. For all of the combinations of \(A_{0}\), \(S\) and gas-to-magnetic pressure values, the self-gravity shows its effect for the first time around \(t=0.133\)\(s\), by creating inhomogeneities. At this stage the presence of the magnetic field does not significantly change the influence of self-gravity. Their structure and lifetimes are similar in the simulations with and without magnetic field. They are visible only on a small scale, around \(\sim 100\)\(r_{g}\). Around \(t\sim 0.148\)\(s\) we see a drop of the angular momentum which is not reflected in the density profile, and around \(t\sim 0.163\)\(s\) higher values appear again. The inhomogeneities disappear around \(t\sim 0.192\)\(s\). The magnetic field does not have a significant influence on the evolution at that time. The inhomogeneities have the same structures and morphology as seen in Fig. 7 for the non-magnetised case (model A05-S14-SG-R10). Self-gravity makes its presence known for the second time at the end of the simulations. The timescale of this effect depends on the presence and strength of the magnetic field. For the simulations without self-gravity, a more or less spherically symmetric density structure is preserved in the final stage, and the magnetic potential lines are radial. The situation changes for the self-gravity runs. In the absence of the magnetic field we observe a thin disk-like structure which forms at the end of the simulations. For the magnetic field characterized by \(\beta=100\) a similar structure is formed, and its shape is followed by the magnetic potential lines. It is, however, slightly bigger than in the non-magnetized case. A more magnetized envelope with \(\beta=10\) results in a structure which is visible at scales of up to \(r\sim 800\)\(r_{g}\). Moreover, simulations with self-gravity leave the evolved stellar core much less dense. We present exemplary profiles illustrating those stages of the simulations for \(A_{0}=0.3\) and \(S=1.0\) in Fig. 12. We show here models A03-S10-nSG-R12, A03-S10-nSG-wMF and A03-S10-nSG-sMF in the top row, while the bottom row shows models A03-S10-SG-R12, A03-S10-SG-wMF and A03-S10-SG-sMF. In addition to the vertical magnetic field configuration, we computed the evolution of the collapsing star embedded in the dipole magnetic field, given by Eq. 19.
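For reference, the three initial vector potentials of Eqs. (17)-(19) can be written down directly; a schematic, unnormalized Python sketch (the overall amplitude is fixed separately through \(\beta_{0}\) or \(\sigma_{0}\), and the functional forms follow the equations exactly as printed):

```python
import numpy as np

def A_phi_uniform(r, theta):
    """Vertical (uniform) field potential, Eq. (17); unnormalized."""
    return 0.5 * r * np.sin(theta)

def A_phi_wald(r, theta, a):
    """Wald (1974) potential for a uniform field around a Kerr hole, Eq. (18)."""
    sigma = r**2 + a**2 * np.cos(theta)**2
    return (0.5 * (r**2 + a**2)
            - a**2 * r / sigma * (1.0 + np.cos(theta)**2)) * np.sin(theta)**2

def A_phi_dipole(r, theta):
    """Dipole field potential, Eq. (19); unnormalized."""
    return np.sin(theta) / r
```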
In Figures 13 and 14 we present the initial and evolved states of the magnetized models with a dipole configuration. This configuration seems more natural for a stellar structure at a large scale in the envelope. It was not considered in our previous study (Krol & Janiuk 2021). In the presented models the initial black hole spin was assumed equal to \(A_{0}=0.85\), and the specific angular momentum was normalized with \(S=1.0\) or \(S=2.0\). In Figure 13 we show the snapshots from the model with the self-gravity effects neglected, while in Figure 14 the self-gravity effect is included. The dipole field is a prospective configuration of the field in the context of stellar collapse models (White et al., 2022).

Figure 6: Specific angular momentum at the equator, taken at several time steps corresponding to the inhomogeneous structure in the self-gravitating case (right panels in both rows). The cases without self-gravity are also presented in the two left panels. The two cases of \(S=1.4\) and \(S=2\) with \(A_{0}=0.5\) are shown in the top and bottom panels, respectively. During the emergence of the inhomogeneities, self-gravity seems to transfer the angular momentum into the black hole. Models are labeled in all panels with symbols referring to Tab. 3.

We notice that, in comparison to the uniform field configuration, the general evolution of the system is similar. Small quantitative differences appear in the final black hole mass and spin, as well as in the average accretion rate values. In the non-self-gravitating models, the maximum accretion rates (we probed only the case of initial spin \(A_{0}=0.85\) and envelope rotations \(S=1.0\) or \(S=2.0\)) seem to be slightly larger for the dipole field than for the vertical one. The maximum black hole spin value does not change; however, the final spin of the black hole in general can be a bit larger (by \(\sim 0.02\)), while the final black hole mass gets slightly smaller, for \(S=1.0\). In the case of \(S=2.0\), the trend reverses, and the final black hole mass is larger, while the final spin is smaller (even by \(\sim 0.06\)). Detailed values are given in Table 3. Noticeably, for the dipole field normalized to a maximum gas-to-magnetic pressure ratio at the horizon, the results with the dipole field do not depend on this normalisation.

Figure 8: Specific angular momentum snapshots, for the model with a self-gravitating collapsing stellar core with \(S=2\) and \(A_{0}=0.5\). The inhomogeneities demonstrate the self-gravity impact, as seen in the second and third time snapshots. The model shown is A05-S20-SG-R12, as listed in Tab. 3.

Figure 7: Specific angular momentum snapshots, for the model with a self-gravitating collapsing stellar core with \(S=1.4\) and \(A_{0}=0.5\). In the time interval between the second and the fifth snapshot, an inhomogeneous structure in the inner region can be detected, indicating the effects of self-gravity. The model shown is A05-S14-SG-R10, as listed in Tab. 3.

In Figures 15 and 16 we present the initial and evolved states of the magnetized models with the Wald configuration, as given by Eq. 18. We notice that in this case the magnetic field acts as a barrier and prevents material from accreting onto the black hole. A repulsive effect of the black hole magnetosphere is seen in both simulations, with and without self-gravity. Hence, the black hole mass does not change during the simulation (cf. Table 3). In the case of the dipole magnetic field, shown in Fig. 13, the accreting torus is formed at the equatorial region close to the black hole. Its size is rather small.
The magnetic flux brought to the black hole horizon is not large enough to be able to power a successful jet. We checked that the dimensionless magnetic flux, i.e. the magnetic flux scaled to the mass flux on the black hole horizon:

\[\phi_{BH}=\frac{\Phi_{BH}}{\sqrt{\dot{M}r_{g}^{2}c}}=\frac{\int B_{r}\,dA}{\sqrt{\dot{M}r_{g}^{2}c}} \tag{23}\]

is at most about \(\phi_{BH}\sim 5\) in all models, and very quickly drops to zero during the evolution. On the other hand, a larger \(\phi_{BH}>15\) is presumably needed to form a magnetically arrested state and help launch the relativistic jets from the collapsar's central engine and power a gamma-ray burst (Janiuk 2022a). It is beyond the scope of the present paper to investigate in more detail the evolution of a magnetically arrested state of the accretion flow, especially in the case of a self-gravitating collapsar. We plan to study this scenario in a separate work, and verify whether such a configuration can possibly give rise to a long-lasting jet launched from the black hole horizon. For now, we only verified that for self-gravitating models embedded in a dipole magnetic field, a slightly larger \(\phi_{BH}\) (albeit still smaller than the 'canonical' value of 15) is reached at the beginning of the simulation, provided we normalize the models with the maximum magnetisation, \(\sigma=b^{2}/\rho\) (instead of the maximum gas-to-magnetic pressure ratio). We list those models also in Table 3. We conclude that a purely dipole field is still rather unable to support the launching of relativistic jets from self-gravitating collapsars, unless the field at the core of the star is amplified and reconfigured. Such conclusions should, however, be verified by fully 3D simulations, similar to those presented in Gottlieb et al. (2022). In their work, the dipole magnetic field prescription has been modified with a factor depending on the radius, to disentangle the magnetic field of the stellar core from the dipole-like field of the envelope (cf. Eq. 10 in their article). We propose that the magnetic field preserved by the collapsing stellar core which forms a black hole might be reconfigured during collapse and form a Wald magnetosphere. Hence, a repulsive effect of such a field will act on the matter, if the black hole is spinning sufficiently fast (Karas et al., 2020).

Figure 9: A demonstration of how the condition for axisymmetric modes (\(\Omega^{2}/\pi G\rho<1\)) is satisfied at the equator in some time steps, considering both cases with (solid lines) and without (dashed inset curves) self-gravity. "Homo" and "InHomo" refer to homogeneous and inhomogeneous structures, respectively. Notice the logarithmic scale on the vertical axis. Models are labeled in all panels with symbols referring to Tab. 3.

Figure 10: Density profiles at the time \(t=0.118\)\(s\) for three cases of \(S=1,\,1.4\) and \(2\), related to the images at the top, middle and bottom, respectively. These snapshots illustrate the density structure of the envelope when the ring-like gravitational instability becomes possible. The case with \(S=2\), last profile, shows no axisymmetric growing mode, however. Models shown, from top to bottom, are A05-S10-SG-R12, A05-S14-SG-R10 and A05-S20-SG-R12, as listed in Tab. 3.

In Figure 17 we show the time dependence of the magnetic flux at the horizon, for the above-mentioned configurations with two values of magnetisation, \(\sigma=1\) and \(0.1\). For comparison, we also checked the magnetic flux level in the case of our third magnetic field configuration, the Wald solution given by Eq. 18, which is supposed to accurately describe the magnetosphere of a rapidly spinning black hole. In the numerical simulation, the Wald magnetic field confines a large-scale toroidal structure in the equatorial plane, which is present for most of the simulation time. A temporary jet-like structure forms in the polar regions at the early time of the simulation. A low density funnel is formed along the black hole rotation axis, and reaches a distance of about \(\sim 250\)\(r_{g}\). In this simulation, the magnetically arrested state developed, and the dimensionless magnetic flux of \(\phi_{BH}\sim 50\) was reached at the horizon region. These high values were obtained in both self-gravitating and non-SG models, regardless of the specific angular momentum content in the envelope. Still, the magnetic field did not prevent matter from collapsing through the polar and intermediate-latitude regions, hence dense material was later present below and above the torus. This material ultimately halted the jet that was trying to emerge from the black hole. Again, 3D simulation results might possibly lead to different outcomes, and will be studied in a future work. Specifically, in the case of a highly magnetized collapsar, where the initial magnetisation is normalized to \(\sigma=0.1\), dense material should not fall onto the center from the poles, but be expelled outwards. Then the jet will be more likely to break out, in the form of a persistent or transient structure, than for less magnetized material. To study this effect in detail, a 3-dimensional simulation with finer resolution, and possibly adaptive mesh refinement, will be needed.

Figure 11: Time evolution of the black hole mass (left), accretion rate on the black hole horizon (middle) and black hole spin (right) for the rotation normalized with specific angular momentum at ISCO \(S=1.0\) and \(S=2.0\). (Models are labeled in the middle panel with symbols referring to Tab. 3.) Initial spin of the black hole was \(A_{0}=0.85\), and the initially vertical magnetic field given by Eq. 17 was normalized with \(\beta=10\) or \(\beta=100\) at the ISCO radius.

Figure 12: Profiles of the density with the contours of the magnetic field vector potential at the final stage of the simulations. The first row presents simulations with no self-gravity and: no magnetic field (left panel), \(\beta=100\) (middle panel) and \(\beta=10\) (right panel). The second row presents simulations with self-gravity and: no magnetic field (left panel), \(\beta=100\) (middle panel) and \(\beta=10\) (right panel). The simulations assumed a vertical magnetic field configuration. Parameters: \(A_{0}=0.3\), \(S=1.0\). Models shown are A03-S10-nSG-R12, A03-S10-nSG-wMF, and A03-S10-nSG-sMF (top row), and A03-S10-SG-R12, A03-S10-SG-wMF, and A03-S10-SG-sMF (bottom row).

### Global trends across the black hole spin scale, angular momentum content, and the magnetic field strength

In the three previous subsections, we discussed the dependencies of the collapsing stellar core properties separately for each black hole spin. We also used several values of the angular momentum content in the pre-collapse star, and several prescriptions of the initial magnetic field configuration and strength. Here we propose a synthetic, quantitative way to examine the influence of the self-gravity on the system.
We check the relative differences between the resulting global quantities: the final black hole spin, \(A_{final}\), the maximal black hole spin, \(A_{max}\), and the maximal black hole mass, \(M_{BH}\), calculated for the simulations with and without self-gravity. We show three cases with different input of the magnetic field, as presented in Fig. 18 (the top row is the non-magnetized case, while the middle and bottom rows differ with respect to the \(B_{0}\) parameter that normalizes the vertical field). We show the color maps of those three global quantities in the parameter space defined by the model parameters \(A_{0}\) and \(S\). We notice that the final spin of the black hole is always smaller in simulations with self-gravity. The magnetic field input enhances this difference, especially for models with a higher rotation parameter, and regardless of the initial black hole spin. In other words, the relative difference between final spins is negative and spans a larger parameter space if the magnetic field is present in the collapsing stellar core. The reason for that is the action of the magnetic barrier, which pushes the in-falling gas outward, hence temporarily decreasing the mass accretion rate. The effect on the black hole angular momentum is rather negligible, but the dimensionless spin \(A\) is reduced. The behaviour of the final black hole mass and the maximal spin is more complicated and depends on the combination of the \(A_{0}\) and \(S\) parameters. For magnetized models, \(A_{max}\) is higher in the self-gravitating case with \(A_{0}\sim 0.6\) and high values of \(S\). But when the initial spin is very low or very high, the maximum spin \(A_{max}\) is higher for models without self-gravity. In weakly magnetized and non-magnetized models, the relative differences become roughly zero, as shown in the top row of Fig. 18. Similarly, the maximal black hole mass in most cases is lower for the simulations with self-gravity; however, again we can see a different behaviour for the simulations with high \(S\) and a stronger magnetic field. The relative difference between the resulting quantities is enhanced by the magnetic field. To sum up, as shown in Figure 18, one can interpret the self-gravity as a factor that lowers the final mass and spin transferred into the black hole, while it may increase the maximum spin parameter of the black hole with respect to the non-self-gravitating case. On the other hand, we found that the sharp decrease of the accretion rate at time \(t\gtrsim 0.2\) s which appears in the self-gravitating case (see the second plots of both rows in Figure 2) coincides with an increase in the specific angular momentum of the inner radii. It suggests that the accretion of matter into the black hole may take a longer time than what we considered for the whole simulation, so that the final mass of the black hole is obviously less than in the non-self-gravitating case. We believe that it may further cause a decrease in the final spin of the black hole as well. However, considering the higher maximum spin parameter in the self-gravitating case, we came to the conclusion that at the earlier time steps (\(\lesssim 0.15\) s) the self-force of the envelope seems to pave the way for the angular momentum to be transferred into the black hole, by making more mass fall inside the horizon as a result of the higher accretion rate. When it comes to magnetic field effects, the field appears to influence these parameters in both cases, with and without self-gravity; a sketch of how these relative differences are assembled is given below.
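To illustrate how the relative-difference maps of Fig. 18 can be assembled from the tabulated results, here is a minimal sketch; the array names and placeholder values are ours and only stand in for the quantities listed in Table 3.

```python
import numpy as np

# Hypothetical (A_0, S) parameter grid spanned by the models
A0_grid = np.array([0.3, 0.5, 0.6, 0.85])
S_grid = np.array([1.0, 1.4, 2.0])

def relative_difference(q_sg, q_nsg):
    """Relative difference (Q_SG - Q_nSG) / Q_nSG of a global quantity,
    e.g. the final spin A_final, evaluated model by model."""
    return (q_sg - q_nsg) / q_nsg

# Placeholder tables: q[i, j] is the quantity for model (A0_grid[i], S_grid[j]);
# in practice these come from the simulation outputs.
rng = np.random.default_rng(0)
q_nsg = 0.5 + 0.3 * rng.random((A0_grid.size, S_grid.size))
q_sg = q_nsg * (1.0 - 0.1 * rng.random((A0_grid.size, S_grid.size)))

diff_map = relative_difference(q_sg, q_nsg)  # color-mapped over (A_0, S) in Fig. 18
print(diff_map)
```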
For the self-gravitating case, however, this impact is rather more complicated than in the non-self-gravitating one. On the one hand, the self-force acts against the magnetic field, since it causes the matter to get denser, which counteracts the role of the magnetic field on small scales. On the other hand, one may expect the magnetic field to decrease the accretion of matter towards the black hole, due to the large-scale effect of the magnetic torque on the envelope. This type of influence agrees with the previously mentioned increase in the specific angular momentum of the inner radii when self-gravity is taken into account, at the time steps \(\gtrsim 0.2\) s. In general, we found that the self-gravity impact dominates over that of the magnetic field, so that the magnetic field influences the non-self-gravitating case more than the self-gravitating one. This may result in an increase of the relative difference between black hole features, regarding the two cases with and without self-gravity.

Figure 13: Distribution of density and magnetic field vector potential contours at t=0 (left), the short-time evolved snapshot taken at time \(t=0.148\)\(s\) (middle), and at the end of the simulation time \(t=0.739\)\(s\) (right). The magnetic field of dipole configuration is adopted and normalized with \(\beta=50\). Top row is scaled to the outer radius of the domain, at 1000 \(r_{g}\), and bottom row is zoomed in to 100 \(r_{g}\). Simulation was done in an evolving Kerr metric, but without the self-gravity effect. Parameters: \(A_{0}=0.85\), \(S=1.0\). Model shown is D08-S10-nSG-b50, as listed in Tab. 3.

Figure 14: Distribution of density and magnetic field vector potential contours at t=0 (left), the short-time evolved snapshot taken at time \(t=0.148\)\(s\) (middle), and at the end of the simulation time \(t=0.739\)\(s\) (right). The magnetic field of dipole configuration is adopted and normalized with \(\beta=50\). Top row is scaled to the outer radius of the domain, at 1000 \(r_{g}\), and bottom row is zoomed in to 100 \(r_{g}\). Simulation was done in an evolving Kerr metric, with the self-gravity effect included. Parameters are the same as in Fig. 13. Model shown is D08-S10-SG-b50, as listed in Tab. 3.

Figure 15: Distribution of density and magnetic field vector potential contours at t=0 (left), the short-time evolved snapshot taken at time \(t=0.148\ s\) (middle), and at the end of the simulation at time \(t=0.296\ s\) (right). The magnetic field of Wald configuration is adopted and normalized with \(\beta=50\). Top row is scaled to the outer radius of the domain, at 1000 \(r_{g}\), and bottom row is zoomed in to 100 \(r_{g}\). Simulation was done in an evolving Kerr metric, but without the self-gravity effect. Parameters: \(A_{0}=0.85\), \(S=1.0\). Model shown is W08-S10-nSG-b50, as listed in Tab. 3.

Figure 16: Distribution of density and magnetic field vector potential contours at t=0 (left), the short-time evolved snapshot taken at time \(t=0.148\ s\) (middle), and at the end of the simulation at time \(t=0.296\ s\) (right). The magnetic field of Wald configuration is adopted and normalized with \(\beta=50\). Top row is scaled to the outer radius of the domain, at 1000 \(r_{g}\), and bottom row is zoomed in to 100 \(r_{g}\). Simulation was done in an evolving Kerr metric, with the self-gravity effect included. Parameters: \(A_{0}=0.85\), \(S=1.0\). Model shown is W08-S10-SG-b50, as listed in Tab. 3.

Figure 17: Time evolution of the magnetic flux at the black hole horizon, normalized to the mass flux (see Eq. 23). Models are normalized with specific angular momentum at ISCO \(S=1.0\) and \(S=2.0\) (as labeled in the panels). Initial spin of the black hole was \(A_{0}=0.85\). The initial magnetic field was the dipole given by Eq. 19, normalized with maximum magnetisation of \(\sigma=1\) or \(\sigma=0.1\) (left panel), or the Wald field given by Eq. 18, normalized with gas-to-magnetic pressure ratio at ISCO equal to \(\beta=50\). Models are labeled in both panels with symbols referring to Tab. 3.
Figure 18: The density plots showing the difference of \(A_{final}\) (left panels), \(A_{max}\) (middle panels) and \(M_{BH}\) (right panels) between self-gravitating and non-self-gravitating models, for simulations without magnetic field (upper row), with magnetic field normalized to \(\beta=100\) (middle row), and with \(\beta=10\) (lower row).

## 5 Discussion

In this work, we present for the first time the new version of our time-dependent code, based on the HARM scheme but supplemented with dynamically evolving space-time metric coefficients, as described and developed originally by Janiuk et al. (2018) and later explored in Krol & Janiuk (2021). The current significant modification aims to numerically account for the Kerr metric perturbation due to the gravitational self-force, which acts on the matter inside the orbit of a given fluid element and changes with the distance from the black hole. This dynamical treatment of the metric perturbation does not provide a method of solving the full set of Einstein field equations, such as is possible in the Einstein Toolkit1 framework. Nevertheless, we argue that it provides a good approximation to the collapsar problem and allows one to compute the stellar collapse with a wide range of black hole spin parameters and with dynamical evolution of the black hole mass and spin, while the mass of the self-gravitating envelope which imposes the perturbation in the Kerr metric is non-negligibly large. Such a calculation is, to the best of our knowledge, currently not possible with other methods.

Footnote 1: [https://einsteintoolkit.org/](https://einsteintoolkit.org/)

In this paper, we compared the new results with the cases when the self-gravity perturbation was neglected, and we found dramatic differences between these two cases, mainly in the early phase of the collapsing star's time evolution. We then focused on the potential role of the gravitational instability in collapsing stellar cores, across a broad range of values of the initial spins of the black hole. We studied both non-magnetized and magnetized models. In the latter case, we explored in most detail the self-gravitating collapsing stars embedded in an initially vertical magnetic field, in order to be able to compare them quantitatively with our previous results, where only such a configuration was used. In addition, we adopted an alternative configuration of a dipole magnetic field, which is presumably more adequate for large-scale field initialization in collapsing stars. Nevertheless, we found that this configuration by itself does not bring into the black hole horizon a magnetic flux which would be large enough to account for magnetically arrested state formation, MAD (Janiuk 2022a).
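As a quick illustration of how Eq. 23 can be evaluated from simulation output, the following minimal sketch (hypothetical array names and data layout, not the actual HARM post-processing) computes the dimensionless horizon flux in geometrized units where \(G=c=1\), so that \(r_{g}=M_{BH}\):

```python
import numpy as np

def phi_bh(B_r, dA, mdot, M_bh):
    """Dimensionless magnetic flux at the horizon (Eq. 23).

    B_r  : radial magnetic field sampled on the horizon shell
    dA   : matching area elements of the horizon surface
    mdot : mass accretion rate through the horizon
    Units: geometrized (G = c = 1), so r_g = M_bh.
    """
    # Phi_BH = integral of B_r over the horizon, as in Eq. 23; note that some
    # conventions instead use (1/2) * integral |B^r| dA.
    Phi = np.sum(B_r * dA)
    return Phi / np.sqrt(mdot * M_bh**2)

# phi_bh >~ 15 is the usual MAD threshold quoted in the text; the models here
# reach at most phi_bh ~ 5 (dipole) or ~ 50 (Wald).
```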
The most promising way to produce bi-polar jets which would be able to emerge from the collapsar envelope, while also being able to be powered by magnetized accretion originally seeded in the stellar magnetic field, is therefore a hybrid configuration, such as the one proposed in Gottlieb et al. (2022). Exploring such models should be done in a 3-dimensional setup, which is more demanding computationally. We notice that when one accounts for self-gravity, apart from the Kerr metric changes due to black hole growth, two additional equations for the perturbation of mass and angular momentum have to be solved and integrated over volume at each grid point. Both cases were studied so far in 2D only. We postpone the 3D task to our future work. In our self-gravitating models we assumed an accreting black hole of initial mass \(3M_{\odot}\), which is slightly larger than the possible maximum mass of a neutron star, and ensures that our core collapsed directly to a black hole. After that, the compact object increases its mass due to the fallback of the stellar envelope. Our adopted model assumes the envelope mass is fixed and equal to \(25~{}M_{\odot}\). This mass is smaller than a typical core-collapse supernova may have, while it is adequate for a massive star that has already been stripped of its hydrogen envelope (Podsiadlowski et al. 2003). Therefore, as for the black hole growth, we allow for a change of its mass up to the possible value of about \(28M_{\odot}\), which is found to be at the lower end of the mass distribution of black holes detected by gravitational wave interferometers (10-85 \(M_{\odot}\), Abbott et al. (2020)). The initial spin of the newly formed black hole is our model parameter and ranges from 0.3 to 0.85. We do not start from a non-spinning black hole, as we did in our previous work (e.g. Murguia-Berthier et al. (2020)), because what is of interest for this study is, in general, an electromagnetic counterpart of the collapse, in the form of a GRB event that is presumably powered by a spinning black hole. The spin changes during the collapse, due to accretion of the envelope, in which some content of angular momentum was already available. As the stellar rotation is parameterized in our models by the ratio between the given specific angular momentum and the critical rotation speed at the ISCO, while on the other hand a moderately rotating black hole was already seeded in the core, the ultimate outcome of the process naturally depends on the two-parameter space, namely \(S\) and \(A_{0}\). This parameter space was explored in our set of simulations, while we also aimed to break the degeneracy between these parameters by introducing a magnetic field in some of the models. Our simulations are started with the smooth transonic solution, where the gas radial velocity is supersonic within the inner 80 gravitational radii. This is different from a multi-transonic shock solution, introduced in the simulations of Sukova & Janiuk (2015), which would naturally lead to shock oscillations. Here we observe the sonic front expansion, and also some transient shock formation during the collapse. At early times, small transonic shocks appear located around 100 \(r_{g}\). They present a moderate density contrast (pre-shock to post-shock density ratio \(R=\rho_{1}/\rho_{2}\sim 10\)). Such shocks also appear at later times during the collapsar evolution. We find that their formation is enhanced by the self-gravity effects.
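As a rough illustration of how such transonic features can be located in a radial snapshot (the helper and thresholds below are ours, purely for illustration, and not the analysis code behind Fig. 19):

```python
import numpy as np

def sonic_and_shock(r, mach, rho, jump=10.0):
    """Locate the sonic radius (Mach = 1 crossing) and candidate shock radii.

    r, mach, rho : 1D radial profiles from a single snapshot.
    A shock candidate is flagged where the cell-to-cell density ratio exceeds
    `jump`, cf. the contrast R = rho_1/rho_2 ~ 10 quoted in the text.
    """
    crossings = np.where(np.diff(np.sign(mach - 1.0)) != 0)[0]
    r_sonic = r[crossings[0]] if crossings.size else None
    ratio = rho[:-1] / rho[1:]
    shocks = r[np.where(np.maximum(ratio, 1.0 / ratio) > jump)[0]]
    return r_sonic, shocks

# Synthetic demo: supersonic infall with a compression jump near r = 100 r_g
r = np.linspace(2.0, 300.0, 600)
mach = 3.0 * np.exp(-r / 150.0)     # crosses Mach = 1 around r ~ 165 r_g
rho = 1.0 / r**1.5
rho[r < 100.0] *= 12.0              # mimic the post-shock density contrast
print(sonic_and_shock(r, mach, rho))
```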
Regarding the magnetic field impact, we found that it does not make any significant difference on these time scales, although it may influence the strength of the transient shock which appeared in the case of high rotation, \(S=2\). This comparison can be confirmed by Figure 19, which shows the radial evolution of density and Mach number through the early stage of the simulation. The left-hand plots are associated with the self-gravitating case, while the right ones also contain the effects of the magnetic field. This impact of the magnetic field on shock strength is consistent with previous studies. We notice therefore that the magnetic field does not significantly change the properties of the inhomogeneous region. The latter conclusion about magnetized shocks is consistent with the Fermi acceleration process studies made for pulsar wind nebulae. As found by Sironi & Spitkovsky (2011), the particle acceleration in magnetized shocks leads to smaller speed and energy for larger magnetisation, \(\sigma\). Moreover, Komissarov (1999) showed that even for moderately magnetized plasma the formation of strong shocks is different than in the purely hydrodynamical case. In our models, the typical magnetisation is weak, \(\sigma<1\), but still, in the early times of collapse, the shocks which form in the innermost regions of the star have a larger density contrast in non-magnetized cases. Several scenarios have been proposed to explain the temporal variability of the GRB light curves. Such fluctuations are detected both in the prompt emission of gamma rays and in the flaring activity in X-ray or optical bands. The internal shock model (Piran et al. 1993; Katz 1994), turbulence or magnetic reconnections in the jet (Narayan & Kumar 2009; Beloborodov et al. 1998, 2000; Amati et al. 2018), and viscous and thermal instabilities in the hyperaccretion disk acting as the GRB central engine (Janiuk et al. 2007; Lei et al. 2009; Kawanaka & Kohri 2012; Kawanaka et al. 2013) are among the studies modeling this erratic behavior. As suggested already by Perna et al. (2006), the phenomenology of short and long gamma ray bursts indicates that the gravitational instability in their engines may lead to the flaring activity (Margutti et al., 2010). In particular, the self-gravity can lead to a clumpy structure (see also Shahamat & Abbassi (2020) for the case of flaring activity, Shahamat et al. (2021) for the prompt emission variability, and Coughlin et al. (2020) regarding both the flares and the prompt emission's oscillatory behavior) or spiral density waves (Masada et al., 2007) in the central engine. This activity perturbation may, however, be further modulated by the jet breakout from the star (Petropoulou et al., 2020). What is then measured in the data is the amplitude of the count rate, for the variability observed in the jets of GRBs, and that depends strongly on the selected energy band. About 30% of GRBs present X-ray flares whose origin is now a subject of discussion. The longest of them, lasting a few hundred seconds, are attributed to the reverse shock emission, while the following plateau phase, seen in optical and X-ray bands, may be related to the late central engine activity in GRB180205 (Becerra et al., 2019). Prompt gamma-ray emission typically exhibits a stochastic variability, described by a Poisson noise. The variability time scale is estimated to be of the order of \(<10\) s down to \(10\) ms (Bhat et al., 2011; Golkhou & Butler, 2014).
It naturally gives rise to clusterization, i.e., time intervals characterized by an intense activity with a high rate of peaks are interspersed with quiescent periods, during which the rate drops significantly (Guidorzi et al., 2015). This high energy variability and the late X-ray flares are related to each other, and presumably they just represent different fragments of the GRB central engine, accreted at the beginning and at the end, respectively, and likely following the same mechanism. In our calculations, the central engine is represented by the innermost parts of the collapsing star, enclosed within the computational domain. The very first fragments of the self-gravitating star will lead to the prompt emission variability on the scale of 0.1-0.2 seconds, while the black hole spin is also changing in time. The remaining parts of the envelope, additionally broken into further fragments and rings over larger inhomogeneities, should lead to longer-timescale variability. In this phase, the black hole spin should reach a final value, and the process of energy extraction to the jets will not be affected by the spin changes. Thus, the extended quiescent period of the prompt emission can be followed by several subsequent pulses, when the matter clumps incoming from the outermost regions of the engine eventually fall onto the black hole, likely delayed by the viscous spreading (Dall'Osso et al., 2017). Flares therefore occur \(100-1000\) s after the prompt emission, while their light curves generally last more than \(\sim 10\) s. Our study identified several key factors that may influence the accretion rate variability, and potentially explain the oscillations in the prompt emission. First, the barrier between the gravitational torque of the black hole and the centrifugal force, which pushes matter outwards especially in the cases of higher rotation (i.e., \(S=1.4,~{}2\)) and near the equator, causes some fluctuations in the size of the outflow region, while this region shrinks due to the dominance of the central gravity. It can produce a pulse with a duration of around \(\lesssim 0.05~{}s\). In cases without self-gravity, on the other hand, the outflow zone moves outward and shrinks in a more stable manner during a longer period of time. Consequently, the accretion rate is much smoother, with a variability of the order of a few \(10^{-1}~{}s\) (see the right-hand panel in Figure 2). We identify this factor as the only one responsible for the smooth variability of the accretion rate in the absence of self-gravity. Second, the inhomogeneous structure of the inner stellar core due to the SGI, in addition to the creation of a transient shock (seen in the earlier time steps of the case \(S=2\)), leads to a variable accretion rate. We estimate that the former generates a pulse of width less than \(\sim 0.074~{}s\) (regarding the period of time during which the specific angular momentum through the inner regions encounters a drop, as demonstrated in Figure 6), while the latter provides a variability of the order of \(\sim 10^{-2}~{}s\). On the other hand, shocks are considered as another factor that can provide short-term variability, and also a high efficiency of energy conversion, in astrophysical accreting systems such as active galactic nuclei (Meszaros & Ostriker, 1983). In conclusion, we are of the opinion that the short-term variability of the GRB's prompt emission can be well explained in terms of these mechanisms within our model.
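To make such timescale estimates reproducible, here is a minimal sketch (placeholder signal and a generic autocorrelation estimator of ours, not the paper's analysis pipeline) of extracting a characteristic variability timescale from an accretion-rate series \(\dot{M}(t)\):

```python
import numpy as np

def variability_timescale(t, mdot, window=51):
    """Characteristic variability timescale of mdot(t): the lag at which the
    autocorrelation of the detrended, normalized series drops below 1/e.
    Assumes uniform sampling in t."""
    trend = np.convolve(mdot, np.ones(window) / window, mode="same")
    x = mdot - trend
    x = (x - x.mean()) / x.std()
    acf = np.correlate(x, x, mode="full")[x.size - 1:] / x.size
    acf /= acf[0]
    lag = int(np.argmax(acf < 1.0 / np.e))
    return (t[1] - t[0]) * lag

# Placeholder series: a 0.07 s oscillation sampled at 1 ms, plus noise
t = np.arange(0.0, 0.74, 1e-3)
mdot = 1.0 + 0.2 * np.sin(2 * np.pi * t / 0.07) + 0.05 * np.random.randn(t.size)
print(variability_timescale(t, mdot))   # of order 1e-2 s, cf. the estimates above
```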
## 6 Conclusions

We studied collapsing stellar core models accounting for the changing black hole mass and spin, the related coefficients of the Kerr space-time metric, and, for the first time, the self-gravity of the star. In the main part of this analysis, we compared our new results to the cases without the self-gravity terms. We also analyzed the impact of the SGI on the properties of the collapsing flow. In addition, some of our models were embedded in a magnetic field of various strengths and a couple of typical configurations. The main findings of this study are the following:

* We show that the evolution of the spin and mass of the black hole is quantitatively and qualitatively affected by the self-gravitation of the envelope.
* We show that the accretion rate variability at early times is much stronger in self-gravitating collapsing stars and may lead to detectable signals in long GRB prompt emission.
* We find that self-gravity effects provide a mechanism for the transport of angular momentum, and that the final black hole mass and spin are reached much earlier during the collapse.
* We see a weak and non-linear dependence of the black hole evolution on its initial spin, manifested mainly when the magnetic field is present in super-critically rotating envelopes.
* We detect the formation of transient shocks, with moderate density contrast, also in magnetized models.
* At early times of the simulation, the density contrast in transonic shocks seems to be higher in non-magnetized cases.

###### Acknowledgements.

We thank Petra Sukova and Ishika Palit for helpful discussions. The project was partially supported by grant 2019/35/B/ST9/04000 from the Polish National Science Center. We made use of computational resources of the PL-Grid infrastructure, under grant _plggg_. Additionally, D. L. K. was supported by the Polish National Science Center DEC-2019/35/08/ST9/0458. We hereby acknowledge the Sci-HPC center of Ferdowsi University of Mashhad, Iran, where some part of this research was performed. AJ acknowledges the Czech-Polish mobility program (MSMT S320PL037 and PPN/BC/2019/1/00069)
2303.12399
Masser-Wüstholz bound for reducibility of Galois representations for Drinfeld modules of arbitrary rank
In this paper, we give an explicit bound on the irreducibility of the mod-$\mathfrak{l}$ Galois representation for Drinfeld modules of arbitrary rank without complex multiplication. This is a function field analogue of the Masser-W\"ustholz bound on the irreducibility of the mod-$\ell$ Galois representation for elliptic curves over number fields.
Chien-Hua Chen
2023-03-22T09:07:30Z
http://arxiv.org/abs/2303.12399v3
# Masser-Wustholz bound for reducibility of Galois representations for Drinfeld modules of arbitrary rank

###### Abstract

In this paper, we give an explicit bound on the irreducibility of the mod-\(\mathfrak{l}\) Galois representation for Drinfeld modules of arbitrary rank without complex multiplication. This is a function field analogue of the Masser-Wustholz bound on the irreducibility of the mod-\(\ell\) Galois representation for elliptic curves over number fields.

## 1 Introduction

In 1993, Masser and Wustholz [14] proved a famous result on the existence of an isogeny, with degree bounded by an explicit formula, between two isogenous elliptic curves. Later on, they [14] applied such an isogeny estimation to give an explicit bound on the irreducibility of the mod-\(\ell\) Galois representation associated to elliptic curves over a number field without complex multiplication (CM). This bound is then used to deduce a bound on the surjectivity of the mod-\(\ell\) Galois representation for elliptic curves over a number field without CM. As a function field analogue of the theory for elliptic curves, David and Dennis [1] gave an isogeny estimation for Anderson \(t\)-modules. In particular, they deduced an isogeny estimation for Drinfeld \(\mathbb{F}_{q}[T]\)-modules over a global function field; see Theorem 2.13 for more details. Thus it is natural to ask whether one can apply the same strategy as Masser-Wustholz to deduce a bound on the irreducibility of the mod-\(\mathfrak{l}\) Galois representation for rank-\(r\) Drinfeld modules without CM. However, the Masser-Wustholz strategy cannot be applied directly in the context of Drinfeld modules. The main obstruction is that when one computes the degree of an isogeny between Drinfeld modules, the degree is always a power of \(q\), which is not a prime number. Thus the computational trick in Lemma 3.1 of [14] does not work for Drinfeld modules. However, the idea of Masser-Wustholz inspired us to produce a similar method. Combining it with the estimation on heights between isogenous Drinfeld modules given by Breuer, Pazuki, and Razafinjatovo (see Theorem 2.14), we can deduce our main result, an explicit bound on the irreducibility of the mod-\(\mathfrak{l}\) Galois representation for Drinfeld modules of arbitrary rank and without CM:

**Theorem 1.1**.: _Let \(q=p^{e}\) be a prime power, \(A:=\mathbb{F}_{q}[T]\), and \(K\) be a finite extension of \(F:=\mathbb{F}_{q}(T)\) of degree \(d\). Let \(\phi\) be a rank-\(r\) Drinfeld \(A\)-module over \(K\) of generic characteristic and assume that \(\operatorname{End}_{\bar{K}}(\phi)=A\). Let \(\mathfrak{l}=(\ell)\) be a prime ideal of \(A\), and consider the mod-\(\mathfrak{l}\) Galois representation_

\[\bar{\rho}_{\phi,\mathfrak{l}}:\operatorname{Gal}(\bar{K}/K)\to\operatorname{Aut}(\phi[\mathfrak{l}])\cong\operatorname{GL}_{r}(A/\mathfrak{l}).\]

_If \(\bar{\rho}_{\phi,\mathfrak{l}}\) is reducible, then either_

\[\deg_{T}\ell-10(d+1)^{7}\log\deg_{T}\ell\leqslant\log c_{2}+10(d+1)^{7}\left\{\log d+r+\log[h_{G}(\phi)+1+\frac{q}{q-1}-\frac{q^{r}}{q^{r}-1}]\right\} \tag{1}\]

_or_

\[\deg_{T}\ell\leqslant\log c_{2}+10(d+1)^{7}\log[d\cdot h(\phi)] \tag{2}\]

As a corollary of Theorem 1.1, we deduce a lower bound for \(\deg_{T}\ell\) such that the mod-\(\mathfrak{l}\) Galois representation \(\bar{\rho}_{\phi,\mathfrak{l}}\) is irreducible. See Corollary 4.2 for details. For the special case of rank-2 Drinfeld modules over \(\mathbb{F}_{q}(T)\), there is actually a finer estimation on the irreducibility of the mod-\(\mathfrak{l}\) Galois representation made by Chen and Lee [19].
However, their strategy uses the fact that a power of a 1-dimensional group representation is again a group representation; see the proof of Proposition 7.1 in [19]. In the rank-2 case, the reducibility of the mod-\(\mathfrak{l}\) Galois representation always contributes a 1-dimensional subrepresentation. But this is not true for higher rank Drinfeld modules. On the other hand, Chen and Lee [19] gave an explicit bound on the surjectivity of mod-\(\mathfrak{l}\) Galois representations for rank-2 Drinfeld modules over \(\mathbb{F}_{q}(T)\) without CM. Such an explicit bound is still unknown for higher rank Drinfeld modules. The main difficulty is that the classification of maximal subgroups (up to conjugacy) in \(\operatorname{GL}_{r}\) over a finite field is much more complicated compared to the \(\operatorname{GL}_{2}\) case, where one only needs to take care of the Borel and Cartan cases.

## 2 Preliminaries

Let \(A=\mathbb{F}_{q}[T]\) be the polynomial ring over the finite field with \(q=p^{e}\) an odd prime power, \(F=\mathbb{F}_{q}(T)\) be the fraction field of \(A\), and \(K\) be a finite extension of \(F\). Set \(K\{\tau\}\) to be the twisted polynomial ring with the multiplication rule \(\tau\alpha=\alpha^{q}\tau\) for any \(\alpha\in K\). Throughout this paper, "log" refers to the logarithm with base \(q\).

### Drinfeld modules

We view \(K\) as an \(A\)-field, which is a field equipped with a homomorphism \(\gamma:A\to K\). The \(A\)**-characteristic** of \(K\) is defined to be the kernel of \(\gamma\).

**Note 2.1**.: _Throughout this paper, we take \(\gamma:A\to K\) to be the natural embedding from \(A\) to \(K\), i.e. the \(A\)-characteristic of \(K\) is always equal to zero._

**Definition 2.2**.: _A Drinfeld \(A\)-module of rank \(r\) over \(K\) of generic characteristic is a ring homomorphism_

\[\phi:A\to K\{\tau\}=\operatorname{End}_{\mathbb{F}_{q}}(\mathbb{G}_{a,K})\]

_such that_

1. \(\phi(a):=\phi_{a}\) _satisfies_ \(\deg_{\tau}\phi_{a}=r\cdot\deg_{T}a\)__
2. _Denote_ \(\partial:F\{\tau\}\to F\) _by_ \(\partial(\sum a_{i}\tau^{i})=a_{0}\)_; then_ \(\phi\) _satisfies_ \(\gamma=\partial\circ\phi\)

From the definition of Drinfeld \(A\)-module, we can characterize a Drinfeld module \(\phi\) by writing down

\[\phi_{T}=T+g_{1}\tau+\cdots+g_{r-1}\tau^{r-1}+g_{r}\tau^{r},\text{ where }g_{i}\in K\text{ and }g_{r}\in K^{*}.\]

**Proposition 2.3**.: _There is an isomorphism between the twisted polynomial ring \(K\{\tau\}\) and the ring of \(q\)-polynomials \((K<x>,+,\circ)\), where \(K<x>:=\left\{\sum_{i=0}^{n}c_{i}x^{q^{i}}\mid c_{i}\in K\right\}\) and the multiplication of \(K<x>\) is defined to be composition of \(q\)-polynomials._

Proof.: Consider the map sending \(\sum_{i=0}^{n}c_{i}\tau^{i}\) to \(\sum_{i=0}^{n}c_{i}x^{q^{i}}\). This defines an isomorphism between \(K\{\tau\}\) and \(K<x>\).

Fix a Drinfeld module \(\phi\) over \(K\). From the above proposition, the image \(\phi_{a}\) of the Drinfeld module \(\phi\) at \(a\in A\) corresponds to a \(q\)-polynomial \(\phi_{a}(x)\). Hence for an ideal \(\mathfrak{a}=<a>\) of \(A\), we may define the \(\mathfrak{a}\)-torsion of the Drinfeld module \(\phi\) over \(K\).

**Definition 2.4**.: _The \(\mathfrak{a}\)-torsion of a Drinfeld module \(\phi\) over \(K\) is defined to be_

\[\phi[\mathfrak{a}]:=\big{\{}\text{ zeros of }\phi_{a}(x)\text{ in }\bar{K}\big{\}}\subset\bar{K}.\]
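To see the twisted multiplication rule \(\tau\alpha=\alpha^{q}\tau\) of Proposition 2.3 in action, here is a minimal computational sketch (the helper `tw_mul` and its SymPy encoding are ours, purely for illustration); it composes the rank-1 Carlitz module \(\phi_{T}=T+\tau\) with itself and recovers the standard identity \(\phi_{T^{2}}=T^{2}+(T^{q}+T)\tau+\tau^{2}\):

```python
from sympy import symbols, expand

q = 3
T = symbols('T')

def tw_mul(f, g):
    """Compose twisted polynomials in K{tau} (K containing F_q(T)).

    f, g are coefficient lists [c_0, c_1, ...] representing sum_i c_i tau^i;
    the rule tau * a = a**q * tau gives (tau^i) * b = b**(q**i) * tau^i.
    Coefficients are returned unreduced; reduce mod p if working over F_p[T].
    """
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = expand(out[i + j] + a * b ** (q ** i))
    return out

# Carlitz module (rank 1): phi_T = T + tau
phi_T = [T, 1]
print(tw_mul(phi_T, phi_T))  # [T**2, T**3 + T, 1], i.e. phi_{T^2} = T^2 + (T^q + T) tau + tau^2
```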
Now we define the \(A\)-module structure on \(\bar{K}\). For any elements \(b\in A\) and \(\alpha\in\bar{K}\), we define the \(A\)-action of \(b\) on \(\alpha\) via

\[b\cdot\alpha:=\phi_{b}(\alpha).\]

This gives \(\bar{K}\) an \(A\)-module structure, and this \(A\)-module structure is inherited by \(\phi[\mathfrak{a}]\). As our Drinfeld module \(\phi\) over \(K\) has generic characteristic, we have the following proposition.

**Proposition 2.5**.: \(\phi[\mathfrak{a}]\) _is a free \(A/\mathfrak{a}\)-module of rank \(r\)._

Proof.: See [10], Proposition 4.5.3.

Let \(\mathfrak{l}\) be a prime ideal of \(A\); then the \(\mathfrak{l}\)-torsion \(\phi[\mathfrak{l}]\) of the Drinfeld module \(\phi\) is an \(r\)-dimensional \(A/\mathfrak{l}\)-vector space. Applying the action of the absolute Galois group \(\operatorname{Gal}(\bar{K}/K)\) on \(\phi[\mathfrak{l}]\), we obtain the so-called mod-\(\mathfrak{l}\) Galois representation

\[\bar{\rho}_{\phi,\mathfrak{l}}:\operatorname{Gal}(\bar{K}/K)\to\operatorname{Aut}(\phi[\mathfrak{l}])\cong\operatorname{GL}_{r}(A/\mathfrak{l})\]

for the Drinfeld module \(\phi\) over \(K\). Now we define some heights of Drinfeld modules. We denote by \(M_{K}\) the set of all places of \(K\), including places above \(\infty\). For each place \(\nu\in M_{K}\), define \(n_{\nu}:=[K_{\nu}:F_{\nu}]\) to be the degree of the local field extension \(K_{\nu}/F_{\nu}\), and define \(|\cdot|_{\nu}\) to be a normalized valuation of \(K_{\nu}\).

**Definition 2.6**.: _Let \(\phi\) be a rank-\(r\) Drinfeld module over \(K\) characterized by_

\[\phi_{T}=T+g_{1}\tau+\cdots+g_{r-1}\tau^{r-1}+g_{r}\tau^{r},\text{ where }g_{i}\in K\text{ and }g_{r}\in K^{*}.\]

1. _The naive height of_ \(\phi\) _is defined to be_ \[h(\phi):=\max\{h(g_{1}),\cdots,h(g_{r})\},\] _where_ \(h(g_{i}):=\frac{1}{[K:F]}\sum_{\nu\in M_{K}}n_{\nu}\cdot\log|g_{i}|_{\nu}\)_._
2. _The graded height of_ \(\phi\) _is defined to be_ \[h_{G}(\phi):=\frac{1}{[K:F]}\sum_{\nu\in M_{K}}n_{\nu}\cdot\log\,\max\{|g_{i}|_{\nu}^{1/(q^{i}-1)}\mid 1\leqslant i\leqslant r\}\]

**Corollary 2.7**.: _One can observe from the definition of naive height and graded height that_

\[h(\phi)\leqslant(q^{r}-1)\cdot h_{G}(\phi).\]

### Isogenies

**Definition 2.8**.: _Let \(\phi\) and \(\psi\) be two rank-\(r\) Drinfeld \(A\)-modules over \(K\). A_ **morphism**_\(u:\phi\to\psi\) over \(K\) is a twisted polynomial \(u\in K\{\tau\}\) such that_

\[u\phi_{a}=\psi_{a}u\ \text{for all }a\in A.\]

_A non-zero morphism \(u:\phi\to\psi\) is called an isogeny. A morphism \(u:\phi\to\psi\) is called an_ **isomorphism** _if its inverse exists._

Set \(\operatorname{Hom}_{K}(\phi,\psi)\) to be the group of all morphisms \(u:\phi\to\psi\) over \(K\). We denote \(\operatorname{End}_{K}(\phi)=\operatorname{Hom}_{K}(\phi,\phi)\). For any field extension \(L/K\), we define

\[\operatorname{Hom}_{L}(\phi,\psi)=\{u\in L\{\tau\}\mid u\phi_{a}=\psi_{a}u\ \text{for all }a\in A\}.\]

For \(L=\bar{K}\), we omit subscripts and write

\[\operatorname{Hom}(\phi,\psi):=\operatorname{Hom}_{\bar{K}}(\phi,\psi)\ \text{and}\ \operatorname{End}(\phi):=\operatorname{End}_{\bar{K}}(\phi)\]

**Definition 2.9**.: _The composition of morphisms makes \(\operatorname{End}_{L}(\phi)\) into a subring of \(L\{\tau\}\), called the_ **endomorphism ring** _of \(\phi\) over \(L\).
For any rank-\(r\) Drinfeld module \(\phi\) over \(K\) with \(\operatorname{End}(\phi)=A\), we say that \(\phi\) does not have complex multiplication._

**Definition 2.10**.: _Let \(f:\phi\to\psi\) be an isogeny of Drinfeld modules over \(K\) of rank \(r\); we define the degree of \(f\) to be_

\[\deg f:=\#\ker(f).\]

**Proposition 2.11**.: _Let \(f:\phi\to\psi\) be an isogeny of Drinfeld modules over \(K\) of rank \(r\). There exists a dual isogeny \(\hat{f}:\psi\to\phi\) such that_

\[f\circ\hat{f}=\psi_{a}\ \text{and}\ \hat{f}\circ f=\phi_{a}.\]

_Here \(0\neq a\in A\) is an element of minimal \(T\)-degree such that \(\ker(f)\subset\phi[a]\)._

Proof.: See [10] Proposition 4.7.13 and Corollary 4.7.14.

The following corollary is immediate by counting cardinalities.

**Corollary 2.12**.: _As in the setting of Proposition 2.11, we have_

\[q^{r\cdot\deg_{T}(a)}=(\deg f)\cdot(\deg\hat{f}).\]

Now we can state the key tools used to derive our main result:

**Theorem 2.13** ([1] Theorem 1.3).: _Let \(K\) be a finite extension over \(F\) with \([K:F]:=d\). Suppose that there are two \(\bar{K}\)-isogenous Drinfeld modules \(\phi\) and \(\psi\) defined over \(K\). Then there is an isogeny \(f:\phi\to\psi\) such that_

\[\deg f\leqslant c_{2}\cdot(dh(\phi))^{10(r+1)^{7}}.\]

_Here \(c_{2}=c_{2}(r,q)\) is an effectively computable constant depending only on \(r\) and \(q\)._

**Theorem 2.14** ([1] Theorem 3.1).: _Let \(f:\phi\to\psi\) be an isogeny of rank-\(r\) Drinfeld modules over \(\bar{K}\) and suppose that \(\ker(f)\subset\phi[N]\) for some \(0\neq N\in A\). Then we have_

\[|h_{G}(\psi)-h_{G}(\phi)|\leqslant\deg_{T}(N)+\left(\frac{q}{q-1}-\frac{q^{r}}{q^{r}-1}\right).\]

## 3 Proof of Theorem 1.1

We are given a rank-\(r\) Drinfeld module \(\phi\) defined over \(K\) with \(\operatorname{End}(\phi)=A\). Suppose the image \(\operatorname{Im}\bar{\rho}_{\phi,\mathfrak{l}}\) of the mod-\(\mathfrak{l}\) Galois representation acting on \(\phi[\mathfrak{l}]\) has an invariant \(A/\mathfrak{l}\)-subspace of dimension \(1\leqslant k\leqslant r-1\). Denote such an invariant subspace by \(H\). From Proposition 4.7.11 and Remark 4.7.12 of [10], there is an isogeny

\[f:\phi\to\phi/H\]

with \(\ker(f)=H\). Since \(\phi\) and \(f\) are both defined over \(K\), one can see that the Drinfeld module \(\phi/H\) is a rank-\(r\) Drinfeld module defined over \(K\) as well. In addition, we have

\[\deg f=\#H=q^{k\cdot\deg_{T}\mathfrak{l}}.\]

Take a dual isogeny \(\hat{f}:\phi/H\to\phi\) of \(f\). The degree of \(\hat{f}\) can be computed using Corollary 2.12. We get

\[\deg\hat{f}=q^{(r-k)\cdot\deg_{T}\mathfrak{l}}.\]

Besides, we can find two isogenies between \(\phi\) and \(\phi/H\) with bounded degree from Theorem 2.13:

* \(u:\phi\to\phi/H\) is such an isogeny defined over \(\bar{K}\) with \(\deg u\leqslant c_{2}\cdot(dh(\phi))^{10(d+1)^{7}}\)
* \(u^{\prime}:\phi/H\to\phi\) is such an isogeny defined over \(\bar{K}\) with \(\deg u^{\prime}\leqslant c_{2}\cdot(dh(\phi/H))^{10(d+1)^{7}}\)

Since \(\operatorname{End}(\phi)=A\), we have \(u^{\prime}\circ u=\phi_{b}\) for some \(b\in A\).
Now we consider the composition of isogenies

\[u^{\prime}\circ f\circ\hat{f}\circ u:\phi\to\phi/H\to\phi\to\phi/H\to\phi.\]

Since \(\operatorname{End}(\phi)=A\), we can find \(N_{1}\) and \(N_{2}\) in \(A\) such that

\[u^{\prime}\circ f=\phi_{N_{1}},\text{ and }\hat{f}\circ u=\phi_{N_{2}}.\]

Thus we have

\[u^{\prime}\circ f\circ\hat{f}\circ u=(u^{\prime}\circ f)\circ(\hat{f}\circ u)=\phi_{N_{1}N_{2}}.\]

On the other hand, computing in a different order, we get

\[u^{\prime}\circ f\circ\hat{f}\circ u=u^{\prime}\circ(f\circ\hat{f})\circ u=u^{\prime}\circ(\phi/H)_{\ell}\circ u=\phi_{\ell}\circ(u^{\prime}\circ u)=\phi_{\ell b}.\]

Thus we get the equality \(\ell b=N_{1}N_{2}\). As \(\ell\) is prime, we have either case (1): \(\ell|N_{1}\) or case (2): \(\ell|N_{2}\).

* \(\ell|N_{1}\). Then we may write \(N_{1}=\ell\cdot\beta\) for some \(0\neq\beta\in A\). From the equality \(u^{\prime}\circ f=\phi_{N_{1}}\), we have \[\log\deg u^{\prime}+k\cdot\deg_{T}\ell=r(\deg_{T}\ell+\deg_{T}\beta).\] Hence we get \(\log\deg u^{\prime}=(r-k)\deg_{T}\ell+r\deg_{T}\beta\). Combining with the bound \(\deg u^{\prime}\leqslant c_{2}\cdot(dh(\phi/H))^{10(d+1)^{7}}\), we obtain the inequality \[(r-k)\deg_{T}\ell\leqslant\log c_{2}+10(d+1)^{7}\log[dh(\phi/H)]-r\deg_{T}\beta\leqslant\log c_{2}+10(d+1)^{7}\log[dh(\phi/H)].\] Thus we have \[\deg_{T}\ell\leqslant\frac{1}{r-k}\cdot\big{(}\log c_{2}+10(d+1)^{7}\log[dh(\phi/H)]\big{)}\leqslant\log c_{2}+10(d+1)^{7}\log[dh(\phi/H)]. \tag{\(\star\)}\] Now from Corollary 2.7 and Theorem 2.14, we have \[h(\phi/H)\leqslant(q^{r}-1)h_{G}(\phi/H)\leqslant(q^{r}-1)\cdot\left[h_{G}(\phi)+\deg_{T}\ell+(\frac{q}{q-1}-\frac{q^{r}}{q^{r}-1})\right].\] Deducing from the above inequality, we get \[\log h(\phi/H)\leqslant\log(q^{r}-1)+\log\left(h_{G}(\phi)+\deg_{T}\ell+(\frac{q}{q-1}-\frac{q^{r}}{q^{r}-1})\right)\leqslant r+\log\deg_{T}\ell+\log\left(h_{G}(\phi)+1+(\frac{q}{q-1}-\frac{q^{r}}{q^{r}-1})\right).\] Combining with the inequality (\(\star\)), we have the desired inequality (1): \[\deg_{T}\ell-10(d+1)^{7}\log\deg_{T}\ell\leqslant\log c_{2}+10(d+1)^{7}\left\{\log d+r+\log[h_{G}(\phi)+1+\frac{q}{q-1}-\frac{q^{r}}{q^{r}-1}]\right\}\]
* \(\ell|N_{2}\). Then we may write \(N_{2}=\ell\cdot\beta\) for some \(0\neq\beta\in A\). From the equality \(\hat{f}\circ u=\phi_{N_{2}}\), we have \[(r-k)\deg_{T}\ell+\log\deg u=r(\deg_{T}\ell+\deg_{T}\beta).\] Thus we get \(\log\deg u=k\deg_{T}\ell+r\deg_{T}\beta\). Together with the bound \(\deg u\leqslant c_{2}\cdot(dh(\phi))^{10(d+1)^{7}}\), we obtain \[k\deg_{T}\ell\leqslant\log c_{2}+10(d+1)^{7}\log[dh(\phi)]-r\deg_{T}\beta\leqslant\log c_{2}+10(d+1)^{7}\log[dh(\phi)].\] Hence we have the inequality (2): \[\deg_{T}\ell\leqslant\frac{1}{k}\cdot\left(\log c_{2}+10(d+1)^{7}\log[d\cdot h(\phi)]\right)\leqslant\log c_{2}+10(d+1)^{7}\log[d\cdot h(\phi)]\]

This completes the proof of Theorem 1.1.

## 4 Lower bound on irreducibility of \(\bar{\rho}_{\phi,\mathfrak{l}}\)

Under the setting of Theorem 1.1, one may further solve the inequality (1) for \(\deg_{T}\ell\). Set

\[\Omega_{\phi}:=\max\left\{\log c_{2}+10(d+1)^{7}\left(\log d+r+\log[h_{G}(\phi)+1+\frac{q}{q-1}-\frac{q^{r}}{q^{r}-1}]\right),\ \log c_{2}+10(d+1)^{7}\log[d\cdot h(\phi)]\right\},\]

and \(N_{d}:=10(d+1)^{7}\).
Theorem 1.1 implies that the mod-\(\mathfrak{l}\) Galois representation is irreducible when

\[(1^{\prime}):\ \frac{q^{\deg_{T}\ell}}{(\deg_{T}\ell)^{N_{d}}}>q^{\Omega_{\phi}}\quad\text{ and }\quad(2^{\prime}):\ \deg_{T}\ell>\Omega_{\phi}.\]

Once we fix a finite extension \(K/F\) and a Drinfeld module \(\phi\), the numbers \(N_{d}\) and \(\Omega_{\phi}\) are fixed. Elementary calculus tells us that the fraction \(\frac{q^{\deg_{T}\ell}}{(\deg_{T}\ell)^{N_{d}}}\) tends to infinity as \(\deg_{T}\ell\) goes to infinity. Thus we can always find a real number \(C_{\phi,d}\) such that \(\deg_{T}\ell>C_{\phi,d}\) implies \(\frac{q^{\deg_{T}\ell}}{(\deg_{T}\ell)^{N_{d}}}>q^{\Omega_{\phi}}\). Now we solve for \(C_{\phi,d}\) explicitly:

**Lemma 4.1**.: _Let \(a,b,\) and \(c\) be positive real numbers such that \(c^{1/b}\cdot\frac{b}{\ln a}\geqslant e\), where \(e\) is Euler's number. Then_

\[x>\frac{-b\cdot W_{-1}(\frac{-\ln a}{c^{1/b}\cdot b})}{\ln a}\]

_is a solution to the inequality_

\[\frac{a^{x}}{x^{b}}>c.\]

_Here \(\ln(\cdot):=\log_{e}(\cdot)\) and \(W_{-1}\) is the negative branch of the real-valued Lambert \(W\)-function, i.e. the inverse function of the complex valued function \(f(y)=ye^{y}\)._

Proof.: First of all, we solve the equation

\[\frac{a^{x}}{x^{b}}=c.\]

We have

\[\frac{a^{x}}{x^{b}}=c\] \[\Rightarrow x\ln(a)=\ln(c)+b\ln(x)\] \[\Rightarrow x\cdot\frac{\ln(a)}{b}=\ln(c^{1/b})+\ln(x)\] \[(\text{set }\tilde{x}=x\frac{\ln(a)}{b})\ \Rightarrow\ \tilde{x}=\ln(c^{1/b}\cdot\frac{b}{\ln(a)})+\ln(\tilde{x})\] \[(\text{set }\tilde{c}=-\ln(c^{1/b}\frac{b}{\ln(a)}))\ \Rightarrow\ -\tilde{x}+\ln(\tilde{x})=\tilde{c}\] \[\Rightarrow-\tilde{x}e^{-\tilde{x}}=-e^{\tilde{c}}\] \[(\text{Here we use the condition }c^{1/b}\cdot\frac{b}{\ln(a)}\geqslant e)\ \Rightarrow\ -\tilde{x}=W_{-1}(-e^{\tilde{c}})\] \[\Rightarrow x=\frac{-bW_{-1}(\frac{-\ln(a)}{c^{1/b}\cdot b})}{\ln(a)}\]

Now from elementary calculus, we can see that the function \(g(x)=\frac{a^{x}}{x^{b}}\) is increasing whenever \(x>\frac{b}{\ln(a)}\). On the other hand, the output value of \(-W_{-1}(\frac{-\ln(a)}{c^{1/b}\cdot b})\) is at least \(1\). Thus we have

\[x>\frac{-b\cdot W_{-1}(\frac{-\ln a}{c^{1/b}\cdot b})}{\ln a}\geqslant\frac{b}{\ln(a)}.\]

Thus we get

\[\frac{a^{x}}{x^{b}}>c,\text{ whenever }x>\frac{-b\cdot W_{-1}(\frac{-\ln a}{c^{1/b}\cdot b})}{\ln(a)}.\]

Now we take \(x=\deg_{T}\ell\), \(a=q\), \(b=N_{d}\), and \(c=q^{\Omega_{\phi}}\). One can check that

\[c^{1/b}\cdot\frac{b}{\ln(a)}\geqslant e.\]

Therefore, Lemma 4.1 shows that

\[C_{\phi,d}=\frac{-b\cdot W_{-1}(\frac{-\ln(a)}{c^{1/b}\cdot b})}{\ln(a)}.\]

And we can conclude the following corollary:

**Corollary 4.2**.: _Let \(q=p^{e}\) be a prime power, \(A:=\mathbb{F}_{q}[T]\), and \(K\) be a finite extension of \(F:=\mathbb{F}_{q}(T)\) of degree \(d\). Let \(\phi\) be a rank-\(r\) Drinfeld \(A\)-module over \(K\) of generic characteristic and assume that \(\operatorname{End}_{\bar{K}}(\phi)=A\). Let \(\mathfrak{l}=(\ell)\) be a prime ideal of \(A\), and consider the mod-\(\mathfrak{l}\) Galois representation_

\[\bar{\rho}_{\phi,\mathfrak{l}}:\operatorname{Gal}(\bar{K}/K)\to\operatorname{Aut}(\phi[\mathfrak{l}])\cong\operatorname{GL}_{r}(A/\mathfrak{l}).\]

_If \(\deg_{T}\ell>\max\{C_{\phi,d},\Omega_{\phi}\}\), then \(\bar{\rho}_{\phi,\mathfrak{l}}\) is irreducible. Here_

\[\Omega_{\phi}:=\max\left\{\log c_{2}+10(d+1)^{7}\left(\log d+r+\log[h_{G}(\phi)+1+\frac{q}{q-1}-\frac{q^{r}}{q^{r}-1}]\right),\ \log c_{2}+10(d+1)^{7}\log[d\cdot h(\phi)]\right\},\]

_and_

\[C_{\phi,d}=\frac{-b\cdot W_{-1}(\frac{-\ln(a)}{c^{1/b}\cdot b})}{\ln(a)},\]

_where \(a=q\), \(b=10(d+1)^{7}\), and \(c=q^{\Omega_{\phi}}\)._
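As a numerical illustration of Corollary 4.2 (our own sketch, not part of the original argument), \(C_{\phi,d}\) can be evaluated with SciPy's implementation of \(W_{-1}\); the value of \(\Omega_{\phi}\) below is a made-up placeholder, since the true value requires the heights of \(\phi\) and the constant \(c_{2}\).

```python
import numpy as np
from scipy.special import lambertw

def C_phi_d(q, d, Omega):
    """Lower bound C_{phi,d} from Lemma 4.1 with a = q, b = 10(d+1)^7, c = q^Omega."""
    b = 10.0 * (d + 1) ** 7
    ln_a = np.log(q)
    c_pow = np.exp((Omega / b) * np.log(q))   # c^(1/b) = q^(Omega/b), done in log space
    arg = -ln_a / (c_pow * b)                 # lies in [-1/e, 0) when the lemma's hypothesis holds
    assert arg >= -1.0 / np.e, "hypothesis c^(1/b) * b / ln(a) >= e fails"
    return float(-b * lambertw(arg, -1).real / ln_a)

# Placeholder values: q = 3, d = 1, and an illustrative Omega_phi
print(C_phi_d(q=3, d=1, Omega=500.0))  # deg_T(ell) above this (and above Omega) gives irreducibility
```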
## Acknowledgement

The author would like to thank Sophie Marcques for inspiring discussions, and Wei-Hung Su for showing him how to solve the inequality in Lemma 4.1.
2305.16271
Knot Floer homology as immersed curves
To a nullhomologous knot $K$ in a 3-manifold $Y$, knot Floer homology associates a bigraded chain complex over $\mathbb{F}[U,V]$ as well as a collection of flip maps; we show that this data can be interpreted as a collection of decorated immersed curves in the marked torus. This is inspired by earlier work of the author with Rasmussen and Watson, showing that bordered Heegaard Floer invariants $\widehat{\mathit{CFD}}$ of manifolds with torus boundary can be interpreted in a similar way. Indeed, if we restrict the construction in this paper to the $UV = 0$ truncation of the knot Floer complex for knots in $S^3$ with $\mathbb{Z}/2\mathbb{Z}$ coefficients, which is equivalent to $\widehat{\mathit{CFD}}$ of the knot complement, we get precisely those curves; this paper then provides an entirely bordered-free treatment of those curves in the case of knot complements, which may appeal to readers unfamiliar with bordered Floer homology. On the other hand, the knot Floer complex is a stronger invariant than $\widehat{\mathit{CFD}}$ of the complement, capturing "minus" information while $\widehat{\mathit{CFD}}$ is only a "hat" flavor invariant. We show that this extra information is realized by adding an additional decoration, a bounding chain, to the immersed multicurves. We also give geometric surgery formulas, showing that $HF^-$ of rational surgeries on nullhomologous knots and the knot Floer complex of dual knots in integer surgeries can be computed by taking Floer homology of the appropriate decorated curves in the marked torus. A section of the paper is devoted to giving a combinatorial construction of Floer homology of Lagrangians with bounding chains in marked surfaces, which may be of independent interest.
Jonathan Hanselman
2023-05-25T17:27:11Z
http://arxiv.org/abs/2305.16271v1
# Knot Floer homology as immersed curves

###### Abstract.

To a nullhomologous knot \(K\) in a 3-manifold \(Y\), knot Floer homology associates a bigraded chain complex over \(\mathbb{F}[U,V]\) as well as a collection of flip maps; we show that this data can be interpreted as a collection of decorated immersed curves in the marked torus. This is inspired by earlier work of the author with Rasmussen and Watson, showing that bordered Heegaard Floer invariants \(\widehat{\mathit{CFD}}\) of manifolds with torus boundary can be interpreted in a similar way [HRW, HRW22]. Indeed, if we restrict the construction in this paper to the \(UV=0\) truncation of the knot Floer complex for knots in \(S^{3}\) with \(\mathbb{Z}/2\mathbb{Z}\) coefficients, which is equivalent to \(\widehat{\mathit{CFD}}\) of the knot complement, we get precisely the curves in [HRW]; this paper then provides an entirely bordered-free treatment of those curves in the case of knot complements, which may appeal to readers unfamiliar with bordered Floer homology. On the other hand, the knot Floer complex is a stronger invariant than \(\widehat{\mathit{CFD}}\) of the complement, capturing "minus" information while \(\widehat{\mathit{CFD}}\) is only a "hat" flavor invariant. We show that this extra information is realized by adding an additional decoration, a bounding chain, to the immersed multicurves. We also give geometric surgery formulas, showing that \(\mathit{HF}^{-}\) of rational surgeries on nullhomologous knots and the knot Floer complex of dual knots in integer surgeries can be computed by taking Floer homology of the appropriate decorated curves in the marked torus. A section of the paper is devoted to giving a combinatorial construction of Floer homology of Lagrangians with bounding chains in marked surfaces, which may be of independent interest.

The author was partially supported by NSF grant DMS-2105501

###### Contents

* 1 Introduction
* 1.1 Immersed curve invariants for knots
* 1.2 Surgery formulas
* 1.3 Relationship to Bordered Floer homology and related work
* 1.4 An example
* 1.5 Organization
* 2 Knot Floer homology
* 2.1 Bigraded complexes over \(\mathcal{R}^{-}\)
* 2.2 The knot Floer chain complex
* 2.3 Notational remarks
* 2.4 Examples
* 3 Immersed Floer theory in marked surfaces
* 3.1 The space \(CF(L_{0},L_{1})\)
The complex \(CFK_{\mathcal{R}^{-}}(Y,K)\) is an invariant of the pair \((Y,K)\) up to graded chain homotopy equivalence. This complex splits over \(\mathrm{spin}^{c}\) structures of \(Y\): \[CFK_{\mathcal{R}^{-}}(Y,K)=\bigoplus_{\mathfrak{s}\in\mathrm{Spin}^{c}(Y)}CFK _{\mathcal{R}^{-}}(Y,K;\mathfrak{s}).\] In addition to this bigraded complex the knot Floer package also comes with a collection of chain maps \[\Psi_{\mathfrak{s}}:CFK_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\to CFK_{ \mathcal{R}^{-}}(Y,K;\mathfrak{s}+PD[K]),\] defined up to chain homotopy, called _flip maps_. The goal of this paper is to show that this algebraic data, the knot Floer complex \(CFK_{\mathcal{R}^{-}}(Y,K)\) together with the collection of flip maps \(\{\Psi_{\mathfrak{s}}\}_{\mathfrak{s}\in\mathrm{Spin}^{c}(Y)}\), admits a geometric representation as an element of the immersed Fukaya category of the marked torus, that is, as a decorated immersed curve in the marked torus. We will also show that this geometric description allows for simplified computations of the Heegaard Floer homology \(HF^{-}\) of Dehn surgeries on \(K\). For simplicity we will restrict our attention to nullhomologous knots \(K\). We remark that most of the results can be extended to knots that are only rationally nullhomologous, and in fact the core arguments are unchanged, but there is an added layer of complexity describing the \(\mathrm{spin}^{c}\) decomposition and the gradings in this more general setting. To avoid obscuring the main constructions with these details, the case of rationally nullhomologous knots will be addressed in a subsequent paper. ### Immersed curve invariants for knots Given a nullhomologous knot \(K\) in \(Y\), let \(M\) denote the complement \(Y\setminus\nu(K)\). Let \(\lambda\in H_{1}(\partial M;\mathbb{Z})\) be the homology class of the Seifert longitude. We consider the marked torus \(T_{M}=H_{1}(\partial M;\mathbb{R})/H_{1}(\partial M;\mathbb{Z})\) with a marked point at \(0\); note that \(T_{M}\) can be identified with \(\partial_{M}\) with a marked point \(w\). We will also consider the universal cover \(\widetilde{T}_{M}=H_{1}(\partial M;\mathbb{R})\) with a set of marked points given by \(H_{1}(\partial M;\mathbb{Z})\), as well as the intermediate covering space \(\overline{T}_{M}=\widetilde{T}_{M}/\langle\lambda\rangle\). For each \(\mathrm{spin}^{c}\) structure \(\mathfrak{s}\) in \(\mathrm{Spin}^{c}(M)\) (which can be identified with \(\mathrm{Spin}^{c}(Y)\), since \(K\) is nullhomologous), we define a decorated immersed multicurve \(\mathit{HF}^{-}(Y,K;\mathfrak{s})\) in \(\overline{T}_{M}\); this is a pair \((\Gamma,\mathbf{b})\) where \(\Gamma\) is an oriented, weighted, graded immersed multicurve in \(T_{M}\) and \(\mathbf{b}\) is a bounding chain, which may be thought of as a linear combination of self-intersection points of \(\Gamma\) satisfying certain conditions. The decorated curve \(\mathit{HF}^{-}(Y,K;\mathfrak{s})\) encodes the knot Floer complex \(CFK_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\), which can be recovered by adding additional marked points to \(\overline{T}_{M}\) and taking the Lagrangian Floer complex with (a lift of) a meridian of \(K\). It also encodes the flip map \[\Psi_{\mathfrak{s}}:CFK_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\to CFK_{ \mathcal{R}^{-}}(Y,K;\mathfrak{s}),\] which can be recovered from the Lagrangian Floer complex of \(\mathit{HF}^{-}(Y,K;\mathfrak{s})\) with another particular curve in \(\overline{T}_{M}\). 
Conversely, the decorated curve is uniquely determined by \(CFK_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\) and the flip map, up to equivalence in the immersed Fukaya category of \(\overline{T}_{M}\). Here two decorated curves in the Fukaya category are considered to be equivalent if they have the same Floer homology with any other decorated curve. Uniqueness up to equivalence in the Fukaya category is a slightly unsatisfying notion: there are many decorated curves representing the same equivalence class, and it is not always apparent when two decorated curves are equivalent. A much stronger claim is that we can always choose the decorated curve \(\mathit{HF}^{-}(Y,K;\mathfrak{s})\) to have a particularly nice form, and that with this assumption the underlying immersed multicurve \(\Gamma\) is well defined up to homotopy in the marked surface \(\overline{T}_{M}\). The first condition for this nice representative is that the immersed multicurve \(\Gamma\) is in almost simple position (see Definition 6.2), which essentially means it is in minimal position subject to the constraint that it bounds no immersed annuli. The second condition concerns the bounding chain \(\mathbf{b}\). The self-intersection points of \(\Gamma\) all have a degree, and \(\mathbf{b}\) is a linear combination of the self-intersection points with non-positive degree. Let \(\widehat{\mathbf{b}}\) denote the restriction of this linear combination to self-intersection points of degree zero. We say \(\widehat{\mathbf{b}}\) is of local system type if it contains only a very special subset of degree zero intersection points (see Definition 6.3); an immersed curve decorated with such a \(\widehat{\mathbf{b}}\) is equivalent to an immersed curve decorated with local systems.

**Theorem 1.1**.: _For any nullhomologous knot \(K\) in a 3-manifold \(Y\) and for any \(\mathfrak{s}\in\mathrm{Spin}^{c}(Y)\), there is a decorated immersed curve \(\mathit{HF}^{-}(Y,K;\mathfrak{s})\) in \(\overline{T}_{M}\) representing the knot Floer complex \(CFK_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\) and the flip map \(\Psi_{\mathfrak{s}}\), such that the underlying curve \(\Gamma\) is in almost simple position and the restriction \(\widehat{\mathbf{b}}\) of the bounding chain \(\mathbf{b}\) to degree zero intersection points is of local system type. This decorated curve is a well-defined invariant of \((Y,K;\mathfrak{s})\) up to equivalence in the Fukaya category of \(\overline{T}_{M}\); moreover, the underlying immersed curve is unique up to homotopy in the marked cylinder \(\overline{T}_{M}\) and \(\widehat{\mathbf{b}}\) is unique as a subset of self-intersection points of \(\Gamma\)._

Note that in claiming \(\widehat{\mathbf{b}}\) is unique as a subset of self-intersection points of \(\Gamma\), when \(\Gamma\) is only defined up to homotopy, we use the fact that there is an obvious identification of the relevant self-intersection points between any two homotopic curves \(\Gamma\) and \(\Gamma^{\prime}\) that are both in almost simple position. We remark that we do not have a unique representative for the portion of \(\mathbf{b}\) coming from strictly negative degree self-intersection points. Thus, while \(\Gamma\) is well-defined and the degree zero part \(\widehat{\mathbf{b}}\) of \(\mathbf{b}\) is well-defined, there may be different choices of \(\mathbf{b}\) on \(\Gamma\) that are equivalent in the Fukaya category and satisfy our conditions for a nice representative.
It may be possible to define a normal form for \(\mathbf{b}\), imposing additional constraints on \(\mathbf{b}\) so that we can always find a representative satisfying these constraints and so that such a representative is unique as a subset of the self-intersection points of \(\Gamma\), and we hope to explore this in future work. However, in practice, once \(\Gamma\) and \(\widehat{\mathbf{b}}\) are fixed there are very few valid choices for \(\mathbf{b}\) and it is generally not difficult to tell which choices are equivalent to each other. On a case-by-case basis, we can often find a representative that is clearly the simplest possible (see Section 12.2). Theorem 1.1 follows from a structure theorem relating bigraded complexes and flip maps to immersed curves in the infinite marked strip \(\mathcal{S}=[-\frac{1}{2},\frac{1}{2}]\times\mathbb{R}\) and the infinite marked cylinder \(\mathcal{Z}=(\mathbb{R}/\mathbb{Z})\times\mathbb{R}\), each with marked points at \((0,n+\frac{1}{2})\) for \(n\in\mathbb{Z}\):

**Theorem 1.2**.: _Any bigraded complex \(C\) over \(\mathcal{R}^{-}\) can be represented by a decorated immersed curve \((\Gamma,\mathbf{b})\) in the infinite marked strip \(\mathcal{S}\), and a bigraded complex \(C\) equipped with a flip map \(\Psi\) can be represented by a decorated immersed curve \((\Gamma,\mathbf{b})\) in the infinite marked cylinder \(\mathcal{Z}\), where in each case \(\Gamma\) is in almost simple position and the restriction \(\widehat{\mathbf{b}}\) of \(\mathbf{b}\) to degree zero intersection points is of local system type. The decorated curves are well-defined as elements of the relevant Fukaya category, and moreover \(\Gamma\) is well defined up to homotopy and \(\widehat{\mathbf{b}}\) is well-defined as a subset of self-intersection points of \(\Gamma\)._

There is a simplified curve invariant \(\widehat{HF}(Y,K;\mathfrak{s})\) obtained from \(\mathit{HF}^{-}(Y,K;\mathfrak{s})\) by replacing \(\mathbf{b}\) with \(\widehat{\mathbf{b}}\). The underlying immersed curve \(\Gamma\) is unchanged, but we now view the decorated curve \((\Gamma,\widehat{\mathbf{b}})\) as an element of the Fukaya category of the punctured cylinder \(\overline{T}^{*}_{M}\) obtained from \(\overline{T}_{M}\) by removing the marked points. This curve represents the simplified knot Floer complex \(CFK_{\widehat{\mathcal{R}}}(Y,K;\mathfrak{s})\) over the ring \(\widehat{\mathcal{R}}=\mathbb{F}[U,V]/(UV)\) along with the corresponding flip map. As noted above, since \(\widehat{\mathbf{b}}\) is of local system type the decorated curve \((\Gamma,\widehat{\mathbf{b}})\) may also be interpreted as a collection of immersed curves decorated with local systems. It is much easier to work in the \(UV=0\) setting; our strategy for constructing \(\mathit{HF}^{-}(Y,K;\mathfrak{s})\) will be to first construct \(\widehat{HF}(Y,K;\mathfrak{s})\) and then systematically modify \(\mathbf{b}\), starting from \(\widehat{\mathbf{b}}\), to capture any information lost in the \(UV=0\) quotient. While the invariants \(\mathit{HF}^{-}(Y,K;\mathfrak{s})\) are curves in the marked cylinder \(\overline{T}_{M}\), we will sometimes think of these curves as living in the marked torus \(T_{M}\) by applying the covering map \(p:\overline{T}_{M}\to T_{M}\). Some information may be lost under this projection, so the curves \(p(\mathit{HF}^{-}(Y,K;\mathfrak{s}))\) in \(T_{M}\) should be thought of as decorated with additional grading information that amounts to specifying a lift to \(\overline{T}_{M}\).
If we do not care about the spin\({}^{c}\) decomposition, we can consider the combined curve invariant \[\mathit{HF}^{-}(Y,K)=\bigcup_{\mathfrak{s}\in\mathrm{Spin}^{c}(Y)}p\left( \mathit{HF}^{-}(Y,K;\mathfrak{s})\right).\] It may seem more natural to define \(\mathit{HF}^{-}(Y,K)\) as a curve in \(\overline{T}_{M}\), but in the case of rationally nullhomologous knots it may not be possible to combine the curves \(\mathit{HF}^{-}(Y,K;\mathfrak{s})\) in a single copy of \(\overline{T}_{M}\) for grading reasons (see, for example, [10, Example 57]). This is not a problem for nullhomologous knots, but with this issue in mind, and following the convention of [10], we instead combine the projections and define \(\mathit{HF}^{-}(Y,K)\) to be a collection of decorated curves in the marked torus \(T_{M}\). The simplified invariant \(\widehat{HF}(Y,K)\) is defined similarly as the union of the projections of the simplified curves \(\widehat{HF}(Y,K;\mathfrak{s})\). ### Surgery formulas One of the reasons knot Floer homology has been such a valuable tool is its close connection with Heegaard Floer invariants for closed \(3\)-manifolds. Given a closed \(3\)-manifold \(Y\), the Heegaard Floer homology \(\mathit{HF}^{-}(Y)\) is a graded module over \(\mathbb{F}[W]\). Note that our convention throughout the paper will be to use \(W\) as the formal variable for Heegaard Floer invariants of closed \(3\)-manifolds and for Floer homology in singly marked surfaces, while \(U\) and \(V\) are used when two marked points are involved; we also use \(W\) in the doubly marked setting to represent the product \(UV\). Given a nullhomologous knot \(K\subset Y\), Ozsvath and Szabo gave a surgery formula describing \(\mathit{HF}^{-}(Y_{p/q})\) in terms of the knot Floer complex (and flip maps) associated with \(K\). This surgery formula admits a particularly nice description in terms of immersed curves: it amounts to taking the Floer homology with a curve of slope \(p/q\) in the marked torus. **Theorem 1.3** (Surgery Formula).: _For a nullhomologous knot \(K\subset Y\) and rational slope \(p/q\), there is an isomorphism of relatively graded \(\mathbb{F}[W]\)-modules_ \[\mathit{HF}^{-}(Y_{p/q}(K))\cong\mathcal{HF}(\mathit{HF}^{-}(Y,K),\ell_{p/q}),\] _where \(\mathcal{HF}\) on the right side denotes Lagrangian Floer homology in the marked torus \(T_{M}\) and \(\ell_{p/q}\) is a simple closed curve homotopic to \(p\mu+q\lambda\)._ For integer slopes, the surgery formula was enhanced in [HL] to compute the knot Floer complex of the dual knot \(K^{*}\) in the surgery \(Y_{n}(K)\). This enhancement can also be described in terms of Floer homology of curves by adding an additional marked point. Let \(T_{M}^{z,w}\) denote the doubly marked torus obtained from \(T_{M}\) by adding a second marked point \(z\) just next to the existing marked point \(w\) and let \(\ell_{p/q}^{*}\) be a simple closed curve of slope \(p/q\) that crosses the short arc connecting \(z\) and \(w\) exactly once. The decorated curve \(\mathit{HF}^{-}(Y,K)\) in the singly marked torus \(T_{M}\) can also be viewed as a curve in the doubly marked torus \(T_{M}^{z,w}\) that is disjoint from the short arc connecting \(z\) to \(w\). 
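As a quick illustration of the surgery formula (an example we add here, and which is not needed for the results that follow): for the unknot \(U\subset S^{3}\) the complex \(CFK_{\mathcal{R}^{-}}(S^{3},U)\) is a single copy of \(\mathcal{R}^{-}\) with trivial differential, and the associated decorated curve \(\mathit{HF}^{-}(S^{3},U)\) is a single embedded curve parallel to \(\lambda\) with trivial bounding chain. This curve can be isotoped to meet \(\ell_{p/q}\) transversally in exactly \(|p|\) points with no bigons between the two curves, so the surgery formula gives

\[\mathit{HF}^{-}(S^{3}_{p/q}(U))\cong\mathbb{F}[W]^{|p|},\]

one free summand for each \(\mathrm{spin}^{c}\) structure on the resulting lens space, as expected.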
Floer homology of (decorated) curves in the doubly marked surface \(T_{M}^{z,w}\) gives a bigraded complex over \(\mathcal{R}^{-}\), and we have the following:

**Theorem 1.4** (Surgery Formula for Dual Knots).: _For a nullhomologous knot \(K\subset Y\) and an integer slope \(n\), \(\mathcal{HF}(\mathit{HF}^{-}(Y,K),\ell_{n}^{*})\) is chain homotopy equivalent to \(CFK_{\mathcal{R}^{-}}(Y_{n}(K),K^{*})\) as relatively bigraded complexes over \(\mathcal{R}^{-}\)._

Though it is suppressed from the statements above, the equivalences in Theorems 1.3 and 1.4 also recover the spin\({}^{c}\) decomposition of \(\mathit{HF}^{-}(Y_{p/q}(K))\) and \(CFK_{\mathcal{R}^{-}}(Y_{n}(K),K^{*})\), respectively, where the spin\({}^{c}\) decomposition on the corresponding Floer homology of curves can be defined by considering the spin\({}^{c}\) decomposition on \(\mathit{HF}^{-}(Y,K)\) as well as appropriate lifts to \(\overline{T}_{M}\); for more details, see Section 11. Each theorem has a weaker form obtained by setting \(W=0\) in Theorem 1.3 and setting \(UV=0\) in Theorem 1.4. In both cases we can replace the curve invariants \(\mathit{HF}^{-}(Y,K)\) with the simplified invariants \(\widehat{\mathit{HF}}(Y,K)\). In Theorem 1.3 we take Floer homology in the punctured torus \(T_{M}^{*}\) rather than in the marked torus \(T_{M}\), thus ignoring disks that cover the marked point, and we recover the \(\mathbb{F}\)-vector space \(\widehat{\mathit{HF}}\) rather than the \(\mathbb{F}[W]\)-module \(\mathit{HF}^{-}\). In Theorem 1.4 we still take Floer homology in the doubly marked torus but we can ignore disks that cover both marked points, and we recover the \(UV=0\) knot Floer complex \(CFK_{\widehat{\mathcal{R}}}(Y_{n}(K),K^{*})\).

### Relationship to Bordered Floer homology and related work

This paper is inspired by earlier work of the author with Rasmussen and Watson realizing bordered Heegaard Floer invariants as decorated immersed curves [HRW, HRW22]. Given a 3-manifold \(M\) with torus boundary and a parametrization \((\alpha,\beta)\) of the boundary, bordered Floer homology associates a type D structure \(\widehat{\mathit{CFD}}(M,\alpha,\beta)\) over a particular algebra \(\mathcal{A}\). In [HRW] a structure theorem was given for these algebraic objects, showing that \(\widehat{\mathit{CFD}}(M,\alpha,\beta)\) is equivalent to a collection of immersed curves decorated with local systems in the punctured torus \(\partial M\setminus z\) for some basepoint \(z\); this decorated immersed multicurve is denoted \(\widehat{\mathit{HF}}(M)\). A pairing theorem also shows that if \(M_{1}\) and \(M_{2}\) are two manifolds with torus boundary, \(\widehat{\mathit{HF}}(M_{1}\cup M_{2})\) can be obtained from the Floer homology of the corresponding curves in the gluing torus \(-\partial M_{1}=\partial M_{2}\) (with a puncture at \(z_{1}=z_{2}\)). When \(Y=S^{3}\) and \(\mathbb{F}=\mathbb{Z}/2\mathbb{Z}\), it is known that \(\widehat{\mathit{CFD}}(M,\alpha,\beta)\) is equivalent to the \(UV=0\) knot Floer complex \(CFK_{\widehat{\mathcal{R}}}(S^{3},K)\) by an algorithm of Lipshitz, Ozsvath, and Thurston [LOT18, Chapter 11]. In this case, it was observed in [HRW22, Section 4] that the immersed curve invariants \(\widehat{\mathit{HF}}(M)\) representing bordered Floer homology also recover the associated graded of the knot Floer complex (i.e.
the \(V=0\) quotient of \(CFK_{\mathcal{R}^{-}}\)) by taking Floer homology with the meridian in a doubly pointed torus, and conversely that the curves can be constructed directly from the knot Floer complex given a suitably nice basis. The results in this paper generalize those observations to more general knots. Using the relationship between \(CFK_{\widehat{\mathcal{R}}}(S^{3},K)\) and the bordered invariant of the knot complement \(M\), it is straightforward to check that the curves \(\widehat{\mathit{HF}}(S^{3},K)\) defined in this paper are precisely the same as the decorated curves \(\widehat{\mathit{HF}}(M)\) constructed in [HRW]. We expect that this is true more generally:

**Conjecture 1.5**.: _For any knot \(K\subset Y\) with complement \(M=Y\setminus\nu(K)\), the decorated curves \(\widehat{\text{HF}}(Y,K)\) in the punctured torus \(T^{*}_{M}\) agree with the curves \(\widehat{\text{HF}}(M)\) defined in [HRW]; in particular, they are invariants of the knot complement \(M\)._

If this conjecture holds, the simplified \(W=0\) version of Theorem 1.3 can be viewed as a special case of the immersed curve pairing theorem for bordered invariants. Immersed curve invariants for knots in \(S^{3}\) were instrumental in the author's previous work on the cosmetic surgery conjecture [10]. That work used the invariants defined in [HRW], making use of the equivalence between bordered Floer homology and knot Floer homology in this case; in particular, the surgery formula used was a consequence of the bordered pairing theorem in [HRW]. However, [10] required slightly more than the bordered approach to immersed curves could offer, since it was important to understand the \(d\)-invariants of Dehn surgeries but the bordered pairing theorem does not see either minus information or absolute gradings. In fact, the results in [10] rely in a small way on the proof of Theorem 1.3 presented in this paper, which relates the Floer homology of the relevant curves to the mapping cone formula rather than invoking the bordered pairing theorem. The minus version of Theorem 1.3 was not needed, but the identification with the mapping cone formula was used to identify the distinguished generator determining the \(d\)-invariant. A sketch of this proof, in the \(UV=0\) setting, was given, but the details have not appeared until now. The construction of immersed curve invariants for bordered Floer homology in [HRW] was based on an algebraic structure theorem for type D structures over the torus algebra. This is a special case of a more general structure theorem due to Haiden, Katzarkov, and Kontsevich [14]. This result for other surfaces can also be recovered using the more constructive proof method from [HRW]; this is worked out for arbitrary surfaces in [KWZa]. We use the same core proof to obtain the \(UV=0\) simplifications of the results in this paper. We repeat this main argument, adapted to the setting of bigraded complexes, in an effort to make the paper more self-contained and also to set up the argument with a view toward generalizing to the minus setting. But we point out that the structure theorem for complexes over \(\widehat{\mathcal{R}}\) also follows from the more general case.
We especially wish to point out a close connection between the constructions in this paper and the work of Kotelskiy, Watson, and Zibrowius in [15], which specifically applies the structure theorem to type D structures over the algebra \(\widehat{\mathcal{R}}\) (such type D structures are equivalent to complexes over \(\widehat{\mathcal{R}}\)). It is shown that these structures are equivalent to immersed curves with local systems in the doubly marked disk. The \(UV=0\) version of Theorem 1.2 for bigraded complexes (ignoring flip maps) is equivalent to Theorem 1 in [15]. To relate these results, we remove the marked points from the infinite strip and project our decorated curves to the quotient by the vector \((0,1)\). This punctured cylinder plays the role of the doubly marked disk, where we have interchanged the roles of boundary components and marked points (note that in [15], curves avoid the boundary and non-compact curves approach the punctures). While the hat-type curve invariants for knots defined in this paper are parallel to the curve invariants defined in [HRW] and [15], the minus-type curve invariants are fundamentally new. This is because the curve invariants in [HRW] are constructed using bordered Heegaard Floer homology, and until recently this was defined only as a hat-theory. When this project first began one motivation was to use knot Floer homology to construct immersed curve invariants for manifolds with torus boundary in order to bypass the reliance on bordered Floer homology and access minus information. This allowed us to determine what extra decorations would be needed to enhance the immersed curves from [HRW] without working from a minus bordered invariant. Very recently Lipshitz, Ozsvath, and Thurston have extended their construction of bordered Floer homology to a minus-type invariant for manifolds with torus boundary [LOT]; this invariant takes the form of a module over a particular weighted \(A_{\infty}\) algebra. The results of this paper suggest that the algebraic objects defined in [LOT] can also be represented by immersed curves decorated with bounding chains in the marked torus. We hope to explore this and the connection between the curves constructed in this paper and those arising from minus bordered invariants in the future. Before the minus extension of \(\widehat{\mathit{CFD}}\) appeared in [LOT], another approach to defining minus Heegaard Floer invariants for manifolds with torus boundary was given by Zemke [Zem]. This approach also avoids relying on bordered Floer invariants by using knot (or link) Floer homology along with auxiliary data (the link surgery formula, which contains information about flip maps) to construct an invariant. In this way, the immersed curve invariants in this paper should be closely related to the invariants defined in [Zem] (in the case of a knot rather than a link), but those invariants are defined algebraically as a type D module over some algebra. This paper was developed independently of Zemke's work, but it should be the case that the decorated immersed curves described in this paper provide a geometric interpretation for the algebraic invariants defined in [Zem]; exploring this connection concretely is another goal for future work. The curves \(\mathit{HF}^{-}(Y,K)\) constructed in this paper are invariants of knots, but they provide a possible path to defining minus type bordered Floer invariants for manifolds with torus boundary. 
Generalizing Conjecture 1.5, we expect that the immersed curves for a knot \(K\) are in fact an invariant of the knot complement:

**Conjecture 1.6**.: _For any knot \(K\subset Y\) with complement \(M=Y\setminus\nu(K)\), the decorated curves \(\mathit{HF}^{-}(Y,K)\) in the marked torus \(T_{M}\) are an invariant of \(M\)._

For any manifold \(M\) with torus boundary, we can choose some meridian \(\mu\) and view \(M\) as the complement of \(K_{\mu}\subset Y_{\mu}\), where \(Y_{\mu}\) is the Dehn filling of \(M\) along \(\mu\) and \(K_{\mu}\) is the core of the filling torus. We can then construct the decorated curve \(\mathit{HF}^{-}(Y_{\mu},K_{\mu})\), and Conjecture 1.6 asserts that the result does not depend on the choice of \(\mu\). If this is true, we could denote this curve \(\mathit{HF}^{-}(M)\). We note that if Conjecture 1.6 is true, it provides a simple way to recover the knot Floer complex and flip maps associated to the dual knot for any Dehn surgery on a knot \(K\subset Y\) from the knot Floer complex and flip maps associated with \(K\): we simply use the knot Floer data to construct the curve \(\mathit{HF}^{-}(M)=\mathit{HF}^{-}(Y,K)\) and then read off a complex with flip maps from this in the usual way but with the Dehn filling slope in place of the meridian. Conversely, if we knew the knot Floer complex and flip map associated to a dual knot agreed with that predicted by this procedure, then the decorated immersed curve representing the dual knot would be precisely the decorated curve representing the original knot, proving Conjecture 1.6. Theorem 1.4 can be interpreted as saying that for integer surgery, the surgery formula for the dual knot Floer complex predicted by Conjecture 1.6 is correct, giving evidence for the conjecture. To prove the conjecture, we would also need to find a surgery formula for the flip maps associated to a dual knot and check that it agrees with the one predicted by immersed curves. If the immersed curves defined using knot Floer homology are in fact bordered invariants, we would expect to have a general pairing theorem:

**Conjecture 1.7**.: _If \(M_{1}\) and \(M_{2}\) are manifolds with torus boundary and \(\phi:\partial M_{1}\to\partial M_{2}\) is an orientation reversing gluing map, then_

\[\mathit{HF}^{-}(M_{1}\cup_{\phi}M_{2})\cong\mathcal{HF}(\phi(\mathit{HF}^{-}(M_{1})),\mathit{HF}^{-}(M_{2})).\]

Theorem 1.3 is a special case of this where \(M_{2}\) is a solid torus. To prove this more generally, assuming the curves \(\mathit{HF}^{-}(Y,K)\) are defined for all rationally nullhomologous knots and assuming Conjecture 1.6, we could choose any slope on the gluing torus in \(M_{1}\cup_{\phi}M_{2}\) and use it as a meridian on either side to view the gluing as a splice of two knot complements. The Floer homology \(\mathcal{HF}(\phi(\mathit{HF}^{-}(M_{1})),\mathit{HF}^{-}(M_{2}))\) can then be identified with the shifted pairing (defined in Section 10.4) of the two knot Floer complexes, and this in turn agrees with the mapping cone complex for some integer surgery on the tensor product of the two knot Floer complexes equipped with the tensor product of the two flip maps. Provided flip maps behave in the obvious way under connected sums, this is the mapping cone complex for the integer surgery on the connected sum of the two knots, which standard arguments show is equivalent to the splice of the two knot complements. We aim to carry out this strategy in future work.
We remark again that this most likely amounts to a geometric reinterpretation of the pairing theorem for Zemke's algebraic invariants, but it would be enlightening to have a curve-based proof of this.

### An example

We will demonstrate our key results with an example. Let \(K\subset S^{3}\) be the left handed trefoil with meridian \(\mu\) and Seifert longitude \(\lambda\), and let \(M\) denote the complement. The knot Floer complex \(CFK_{\mathcal{R}^{-}}(S^{3},K)\) has three generators \(a\), \(b\), and \(c\), and differential

\[\partial(a)=-Vb,\quad\partial(b)=0,\quad\text{ and }\quad\partial(c)=Ub.\]

By the construction in Sections 7 and 9 this bigraded complex is represented by the immersed arc in the marked strip \(\mathcal{S}\) shown in Figure 1(a); the bounding chain is trivial (as it must be since the arc has no self-intersection points). Note that the complex is recovered from this curve by taking Floer homology with the vertical line \(\mu\) in the doubly marked strip in which we replace each marked point by a \(z\) marked point just to the left of \(\mu\) and a \(w\) marked point just to the right of \(\mu\). In particular there is a bigon on the right side of \(\mu\) from \(c\) to \(b\) covering the right side of a marked point once, contributing \(Ub\) to \(\partial c\), and there is a bigon from \(a\) to \(b\) covering the left side of a marked point once and contributing \(-Vb\) to \(\partial a\) (the sign convention in this case records that the orientation on \(\mathit{HF}^{-}(S^{3},K)\) opposes the boundary orientation of the latter bigon). The horizontal and vertical homology of \(CFK_{\mathcal{R}^{-}}(S^{3},K)\) are both one dimensional (generated by \(a\) and \(c\), respectively) and the flip isomorphism associated to \(K\) simply takes \(a\) to \(c\). The decorated immersed curve \(\Gamma\) in the cylinder \(\mathcal{Z}\) representing \(CFK_{\mathcal{R}^{-}}(S^{3},K)\) with this flip map is obtained by gluing the opposite sides of \(\mathcal{S}\) and identifying the endpoints of the immersed arc; the bounding chain is still trivial. After identifying the cylinder \(\mathcal{Z}\) with \(\overline{T}_{M}\), taking the horizontal direction to \(\lambda\) and the vertical direction to \(\mu\), the immersed curve \(\Gamma\) is the invariant \(\mathit{HF}^{-}(S^{3},K;\mathfrak{s})\), where \(\mathfrak{s}\) is the unique spin\({}^{c}\) structure on \(S^{3}\); the projection to the marked torus \(T_{M}\) is denoted \(\mathit{HF}^{-}(S^{3},K)\). The hat version of the curve, which only represents the complex and flip map modulo \(UV\), is obtained by restricting the bounding chain to degree zero intersection points; since the bounding chain is trivial, in this case \(\mathit{HF}^{-}(S^{3},K)\) and \(\widehat{\mathit{HF}}(S^{3},K)\) are the same. Note that \(\widehat{\mathit{HF}}(S^{3},K)\) agrees with the curve \(\widehat{\mathit{HF}}(M)\) defined in [HRW].

We next consider the manifold \(Y=S^{3}_{1}(K)\) and the dual knot \(K^{*}\) in this surgery. By Theorem 1.3, \(\mathit{HF}^{-}(Y)\) is the Floer homology of \(\mathit{HF}^{-}(S^{3},K)\) with a curve of slope \(1\) in the marked torus \(T_{M}\). This is shown (in the covering space \(\overline{T}_{M}\)) in Figure 1(b). There are \(3\) generators, \(a\), \(b\), and \(c\), with differential

\[\partial(a)=Wb,\quad\partial(b)=0,\quad\text{ and }\quad\partial(c)=-Wb.\]
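For concreteness, we spell out this homology computation, which is immediate from the differential just given: over \(\mathbb{F}[W]\) we have

\[\ker\partial=\mathbb{F}[W]\langle b,\,a+c\rangle,\qquad\operatorname{im}\partial=\mathbb{F}[W]\langle Wb\rangle,\]

so \([a+c]\) generates a free summand while \([b]\) generates a summand isomorphic to \(\mathbb{F}[W]/(W)\).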
The homology is therefore isomorphic to \(\mathbb{F}[W]\oplus\mathbb{F}\).

Figure 1. (a) A curve in \(\mathcal{S}\) representing the knot Floer complex of the left handed trefoil (identifying the edges of the strip also gives a curve in the cylinder \(\mathcal{Z}\) representing the knot Floer complex and the flip isomorphism); (b) the Floer complex with a curve of slope \(1\) recovers \(HF^{-}(Y_{1}(K))\); (c) the Floer complex with a line of slope \(1\) through the marked points, where we interpret each marked point as a pair of marked points \(z\) and \(w\) on the left and right, respectively, gives the knot Floer complex of the dual knot.

By the refinement of the surgery formula, Theorem 1.4, the complex \(CFK_{\mathcal{R}^{-}}(Y,K^{*})\) is given by the Floer homology with a line of slope \(1\) in \(T_{M}\) that passes through the marked point, after we replace the marked point with a \(z\) marked point just to the left and a \(w\) marked point just to the right. These curves are shown in the covering space \(\widetilde{T}_{M}\) in Figure 1(c). There are \(5\) generators, \(a\), \(b\), \(c\), \(d\), and \(e\), with differential

\[\partial(a)=Ub,\quad\partial(b)=0,\quad\partial(c)=-UVb+UVd,\quad\partial(d)=0,\quad\text{ and }\quad\partial(e)=-Vd.\]

Following the construction in Sections 7 and 9, we can represent this complex by the immersed multicurve in the strip \(\mathcal{S}\) shown in Figure 2(a), decorated with a bounding chain \(\mathbf{b}\), where \(\mathbf{b}\) is the linear combination of the two self-intersection points with coefficients as shown in the figure. Note that to recover the complex we count generalized bigons, which are allowed to make left turns at self-intersection points with nonzero coefficient in \(\mathbf{b}\) and which are counted according to the weights associated with all such left turns. For example, there is a generalized bigon from \(c\) to \(b\) that contributes \(Wb\) to \(\partial c\). To turn the immersed curve in the strip \(\mathcal{S}\) from Figure 2(a) into an immersed curve in the cylinder \(\mathcal{Z}\), we need to use the flip isomorphism associated with \(K^{*}\subset Y\), which now carries interesting information because the horizontal and vertical homology both have rank \(3\). We do not have a surgery formula for the flip isomorphism associated with the dual knot in a surgery, and in this case the flip isomorphism is not uniquely determined by the complex, but we can deduce the correct flip isomorphism using gradings and a surgery argument (for details see Example 2.6). The horizontal complex is generated by \(\{a,b,c\}\) while the vertical complex is generated by \(\{c,d,e\}\), and (ignoring powers of \(U\) and \(V\), which are determined by gradings) the flip isomorphism takes \(a\) to \(c\), \(b\) to \(d\), and \(c\) to \(e\). Gluing the sides of the strip \(\mathcal{S}\) after inserting arcs to identify the endpoints according to the flip isomorphism produces the decorated curve in \(\mathcal{Z}\) in Figure 2(b). This can be simplified slightly by a homotopy to give the curve in Figure 2(c). We define what we mean by homotopy of immersed curves decorated with bounding chains in Section 3.4. Note that here we homotope the underlying curve to remove two pairs of intersection points; this is allowed in this case, even though in each pair one intersection point has nontrivial coefficient in \(\mathbf{b}\), following move \((j)\) in Figure 8.
Identifying \(\mathcal{Z}\) with \(\overline{T}_{M}\), this decorated curve is \(\mathit{HF}^{-}(Y,K^{*};\mathfrak{s})\), where \(\mathfrak{s}\) is the unique spin\({}^{c}\) structure on \(Y\), and the projection of this curve to \(T_{M}\) is \(\mathit{HF}^{-}(Y,K^{*})\). We remark that, as curves in \(\overline{T}_{M}\), the decorated curves \(\mathit{HF}^{-}(S^{3},K)\) and \(\mathit{HF}^{-}(Y,K^{*})\) actually agree. They appear differently in the cylinder \(\mathcal{Z}\) only because we use different parametrizations to identify \(\mathcal{Z}\) with \(\overline{T}_{M}\). Indeed, starting with the curve in Figure 1(c) we can apply the lift to \(\overline{T}_{M}\) of a Dehn twist about \(\lambda\) in \(T_{M}\) that takes the meridian \(\mu^{*}=\lambda+\mu\) of the dual knot to the vertical direction, and the resulting curve is the one in Figure 2(c). This is consistent with Conjecture 1.6.

Figure 2. (a) A decorated immersed curve in \(\mathcal{S}\) representing the knot Floer complex of \(K^{*}\subset Y\); (b) adding a thin strip of arcs encoding the flip isomorphism and then identifying opposite edges of the strip produces a decorated curve in the cylinder \(\mathcal{Z}\) representing the complex and the flip isomorphism; (c) the curve in \(\mathcal{Z}\) after a homotopy.

### Organization

We begin by briefly reviewing knot Floer homology and algebraic preliminaries for bigraded complexes over \(\mathcal{R}^{-}\) in Section 2. In Section 3 we define Floer homology of decorated immersed curves in marked surfaces, and in Section 4 we discuss an alternate interpretation of these decorated curves in terms of immersed train tracks. The construction of Floer homology is completely combinatorial, and Sections 3 and 4 may be of independent interest since this construction is more accessible than other treatments of immersed Lagrangian Floer theory. In Section 5 we show that a decorated curve in the marked strip \(\mathcal{S}\) encodes a bigraded complex, and a decorated curve in the marked cylinder \(\mathcal{Z}\) encodes a bigraded complex equipped with a flip map. We also observe that any bigraded complex or any bigraded complex with flip map can be represented by some (not necessarily nice) decorated multicurve in \(\mathcal{S}\) or \(\mathcal{Z}\). In Section 6 we discuss what it would mean for such a representative to be nice, and prove some properties of the complexes coming from curves in a suitably nice position. In Sections 7 and 8 we restrict to the \(UV=0\) setting and construct, in Section 7, a nice representative in \(\mathcal{S}\) for any bigraded complex over \(\widehat{\mathcal{R}}\) and, in Section 8, a nice representative in \(\mathcal{Z}\) for any such complex equipped with a flip map. This proves the existence part of a \(UV=0\) version of Theorem 1.2. In Section 9 we show that for a complex over \(\mathcal{R}^{-}\) and a flip map, the representative of the \(UV=0\) quotient can be enhanced, without changing the underlying curve, by modifying the bounding chain decoration on the curve. This completes the existence part of Theorem 1.2. In Section 10 we turn our attention to the Floer homology of two decorated curves in the marked strip or cylinder, relating this geometric pairing to morphisms of complexes (for curves in \(\mathcal{S}\)) or to a more complicated algebraic pairing we define for complexes with flip maps (for curves in \(\mathcal{Z}\)).
Using the invariance of this algebraic pairing under homotopy equivalence of the complexes, we prove the uniqueness claim in Theorem 1.2. In Section 11 we prove Theorems 1.3 and 1.4 by relating the algebraic pairings from Section 10 to mapping cone formulas known to recover \(\mathit{HF}^{-}\) of surgeries on knots and \(\mathit{CFK}^{-}\) of dual knots in surgeries. We end with examples and some discussion about simplifying the bounding chain decoration in Section 12.

### Acknowledgements

This project has lasted several years and has benefitted from many fruitful conversations over that time. In addition to many others, the author is especially grateful to Liam Watson, Adam Levine, Robert Lipshitz, Artem Kotelskiy, and Wenzhao Chen for helpful conversations, answering questions, and comments on earlier versions of this work.

## 2. Knot Floer homology

### Bigraded complexes over \(\mathcal{R}^{-}\)

The knot Floer complex takes the form of a bigraded chain complex over \(\mathcal{R}^{-}\) (or, more precisely, a collection of such complexes). We begin by reviewing these algebraic structures and their properties. We will also define a certain notion of filtered maps between bigraded complexes. Throughout the paper we work with coefficients in an arbitrary field \(\mathbb{F}\) and \(\mathcal{R}^{-}\) denotes the ring \(\mathbb{F}[U,V]\). We define a bigrading \(\mathrm{gr}=(\mathrm{gr}_{w},\mathrm{gr}_{z})\) on \(\mathcal{R}^{-}\) where \(\mathrm{gr}(U)=(-2,0)\) and \(\mathrm{gr}(V)=(0,-2)\). The two components of the grading are called the _\(U\)-grading_ and the _\(V\)-grading_, respectively. While the most general knot Floer invariants are defined over \(\mathcal{R}^{-}\), we can simplify the invariant by passing to certain quotients of \(\mathcal{R}^{-}\). The most common, which we denote \(\widehat{\mathcal{R}}\), is obtained by setting \(UV=0\). More generally, we will consider

\[\mathcal{R}_{n}:=\mathbb{F}[U,V]/(UV)^{n}.\]

Note that in this notation, \(\widehat{\mathcal{R}}=\mathcal{R}_{1}\). Finally, let \(\mathcal{R}^{\infty}\) denote \(\mathbb{F}[U,U^{-1},V,V^{-1}]\). Since the product \(UV\) will appear frequently we will set \(W=UV\) throughout the paper. At times we will need to discuss an object that is nearly a bigraded chain complex but for which \(\partial^{2}\) is not zero; we refer to this as a precomplex.

**Definition 2.1**.: A _bigraded precomplex over \(\mathcal{R}^{-}\)_ is a finitely generated module over \(\mathcal{R}^{-}\) with an integer bigrading \((\mathrm{gr}_{w},\mathrm{gr}_{z})\) such that \(\mathrm{gr}_{w}\) and \(\mathrm{gr}_{z}\) agree mod \(2\) and multiplication by \(U\) and \(V\) have degree \((-2,0)\) and \((0,-2)\), respectively, equipped with a linear map \(\partial\) of degree \((-1,-1)\). A _bigraded complex over \(\mathcal{R}^{-}\)_ is a bigraded precomplex over \(\mathcal{R}^{-}\) which satisfies \(\partial^{2}=0\).

The grading \(\operatorname{gr}_{w}\) is called the _Maslov grading_ and will also be denoted \(M\). The _Alexander grading_ \(A\) is given by \(\frac{1}{2}(\operatorname{gr}_{w}-\operatorname{gr}_{z})\). Because we assume that \(\operatorname{gr}_{w}\) and \(\operatorname{gr}_{z}\) have the same parity, \(A\) is also integral. Let \(C^{-}\) be a bigraded complex over \(\mathcal{R}^{-}\) as described above with differential \(\partial\). Let \(C^{\infty}\) denote \(C^{-}\otimes_{\mathcal{R}^{-}}\mathcal{R}^{\infty}\), the result of localizing both \(U\) and \(V\); the differential \(\partial\) extends to \(C^{\infty}\).
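As a concrete illustration of Definition 2.1, consider the left-handed trefoil complex from Section 1.4, with generators \(a\), \(b\), \(c\) and differential \(\partial a=-Vb\), \(\partial b=0\), \(\partial c=Ub\). With a grading normalization consistent with that example (the specific numbers here are our own bookkeeping, chosen so that the generator surviving in the hat vertical homology sits in Maslov grading \(0\)), the bigradings are

\[\operatorname{gr}(a)=(2,0),\qquad\operatorname{gr}(b)=(1,1),\qquad\operatorname{gr}(c)=(0,2),\]

so \(A(a)=1\), \(A(b)=0\), and \(A(c)=-1\). One checks that the differential has degree \((-1,-1)\): for instance \(\operatorname{gr}(-Vb)=(1,-1)=\operatorname{gr}(a)+(-1,-1)\) and \(\operatorname{gr}(Ub)=(-1,1)=\operatorname{gr}(c)+(-1,-1)\).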
We will use the term _bigraded complex over \(\mathcal{R}^{\infty}\)_ to mean a complex \(C^{\infty}\) obtained in this way from some \(C^{-}\) over \(\mathcal{R}^{-}\). Let \(\widehat{C}\) denote the complex over \(\widehat{\mathcal{R}}\) obtained from \(C^{-}\) by setting \(UV=0\). For any \(s\in\mathbb{Z}\), let \(C^{-}|_{A=s}\) denote the subcomplex of \(C^{-}\) with Alexander grading \(s\), so that \(C^{-}=\bigoplus_{s\in\mathbb{Z}}C^{-}|_{A=s}\), and similarly for \(C^{\infty}|_{A=s}\) and \(\widehat{C}|_{A=s}\). We say a basis \(\{x_{1},\dots,x_{n}\}\) for \(C^{-}\) over \(\mathcal{R}^{-}\) is _homogeneous_ if each \(x_{i}\) lies in an Alexander graded summand \(C^{-}|_{A=A(x_{i})}\). Note that given such a basis and any \(s\in\mathbb{Z}\), \(\{V^{s-A(x_{1})}x_{1},\dots,V^{s-A(x_{n})}x_{n}\}\) is a basis for \(C^{\infty}|_{A=s}\) over \(\mathbb{F}[W,W^{-1}]\). Any two homogeneous bases \(\{x_{1},\dots,x_{n}\}\) and \(\{x^{\prime}_{1},\dots,x^{\prime}_{n}\}\) are related by a homogeneous change of basis, where \(x^{\prime}_{i}=\sum_{j=1}^{n}c_{i,j}x_{j}\) for some coefficients \(c_{i,j}\) in \(\mathcal{R}^{-}\) such that each nonzero term \(c_{i,j}x_{j}\) in the sum has the same Alexander grading. A basis for \(C^{-}\) is _reduced_ if \(\partial\) is trivial when \(U\) and \(V\) are both set to zero; it is a standard argument that any bigraded complex \(C^{-}\) is homotopy equivalent to one which admits a reduced basis. Unless otherwise stated, all bases for bigraded complexes will be assumed to be homogeneous and reduced. Given a basis for \(C^{-}\), we can record the differential \(\partial\) with an \(n\times n\) matrix with coefficients in \(\mathcal{R}^{-}\), with the \((i,j)\) entry specifying the coefficient of \(x_{j}\) in \(\partial x_{i}\). In fact, the powers of \(U\) and \(V\) in each entry are determined by the bigrading change from \(x_{i}\) to \(x_{j}\), so if the gradings on the generators are specified then \(\partial\) can be encoded by a matrix \(\{d_{i,j}\}_{1\leq i,j\leq n}\) with coefficients in \(\mathbb{F}\). More precisely, this means that

\[\partial(x_{i})=\sum_{j=1}^{n}d_{i,j}U^{a_{i,j}}V^{b_{i,j}}x_{j}\]

where \(a_{i,j}\) and \(b_{i,j}\) are defined by

\[a_{i,j}=\frac{\operatorname{gr}_{w}(x_{j})-\operatorname{gr}_{w}(x_{i})+1}{2},\quad\text{ and }\quad b_{i,j}=\frac{\operatorname{gr}_{z}(x_{j})-\operatorname{gr}_{z}(x_{i})+1}{2}. \tag{1}\]

Note that \(d_{i,j}\) must be zero if \(\operatorname{gr}_{w}(x_{i})-\operatorname{gr}_{w}(x_{j})\) is even or if \(a_{i,j}\) or \(b_{i,j}\) are negative. If the coefficient \(d_{i,j}\) is nonzero, we say that there is an arrow from \(x_{i}\) to \(x_{j}\); we will say that this arrow is _vertical_ if \(a_{i,j}=0\) and that it is _horizontal_ if \(b_{i,j}=0\). This terminology comes from the fact that it is common to represent \(C^{-}\) or \(C^{\infty}\) in the plane with \(U^{a}V^{b}x_{i}\) represented by a point at coordinates \((-a,-b)\) and the differential represented by arrows. We will often use the notion of arrows to refer to nonzero terms in the differential; arrows are labeled by the coefficient \(d_{i,j}U^{a_{i,j}}V^{b_{i,j}}\) (or just by \(d_{i,j}\) when the relevant powers of \(U\) and \(V\) are understood). There are several quotient complexes of \(C^{-}\) that will be relevant to us, which we now describe. Assume we have fixed a reduced homogeneous basis \(\{x_{1},\dots,x_{n}\}\) for \(C^{-}\).
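To make the matrix encoding and Equation (1) concrete, here is a minimal, self-contained sketch in Python (our illustration only; the data is the left-handed trefoil complex with the bigradings recorded above, and all names are ours). It recovers the exponents \(a_{i,j}\), \(b_{i,j}\) from the gradings and verifies that \(\partial^{2}\) vanishes on a generator.

```python
# Sketch (ours): encode a bigraded complex over F[U,V] by the bigradings
# (gr_w, gr_z) of its generators and the F-coefficient matrix {d_ij};
# the powers of U and V are then recovered from Equation (1).
# Data: the left-handed trefoil complex, with our grading normalization.

gr = {"a": (2, 0), "b": (1, 1), "c": (0, 2)}   # (gr_w, gr_z)
d = {("a", "b"): -1, ("c", "b"): 1}            # d_ij over F = Q

def exponents(i, j):
    # Equation (1): a_ij = (gr_w(x_j) - gr_w(x_i) + 1)/2, similarly b_ij.
    a2 = gr[j][0] - gr[i][0] + 1
    b2 = gr[j][1] - gr[i][1] + 1
    assert a2 % 2 == 0 and b2 % 2 == 0   # nonzero d_ij needs odd grading gaps
    return a2 // 2, b2 // 2

def boundary(elt):
    # elt: {generator: {(a, b): coeff}}, representing sums of coeff*U^a V^b x.
    out = {}
    for i, monomials in elt.items():
        for (src, tgt), dij in d.items():
            if src != i:
                continue
            aij, bij = exponents(src, tgt)
            for (a, b), c in monomials.items():
                bucket = out.setdefault(tgt, {})
                key = (a + aij, b + bij)
                bucket[key] = bucket.get(key, 0) + c * dij
    return {g: {m: c for m, c in ms.items() if c} for g, ms in out.items()}

x = {"a": {(0, 0): 1}}      # the generator a
dx = boundary(x)
print(dx)                   # {'b': {(0, 1): -1}}, i.e. d(a) = -V b
print(boundary(dx))         # {}, so d^2(a) = 0
```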
Let \((C^{v},\partial^{v})\) denote the complex obtained from \((C^{-},\partial)\) by setting \(V=1\); we call this the _vertical complex_ of \(C^{-}\). The vertical complex is a chain complex over \(\mathbb{F}[W]\) (note that setting \(V=1\) also means that \(W=U\)). It is a singly graded complex: the grading \(\operatorname{gr}_{w}\) on \(C^{-}\) descends to a grading on \(C^{v}\), but the grading \(\operatorname{gr}_{z}\) does not. The _vertical homology_ of \(C^{-}\) will refer to the homology of the vertical complex, \(H_{*}C^{v}\); this is a graded module over \(\mathbb{F}[W]\). If we further set \(U=0\), the resulting graded chain complex \((\widehat{C}^{v},\widehat{\partial}^{v})\) is called the _hat vertical complex_. Its homology \(H_{*}\widehat{C}^{v}\), a graded vector space over \(\mathbb{F}\), is the _hat vertical homology_ of \(C^{-}\). Similarly, the _horizontal complex_ \((C^{h},\partial^{h})\) is the complex over \(\mathbb{F}[W]\) obtained from \(C^{-}\) by setting \(U=1\) and \(W=V\), with a grading inherited from \(\operatorname{gr}_{z}\). The _hat horizontal complex_ comes from setting \(U=1\) and also \(V=0\). The _horizontal homology_ and _hat horizontal homology_ refer to the homologies of the respective complexes. We are interested in choosing bases for \(C^{-}\) which are well behaved with respect to the horizontal or vertical complexes. We say that a basis \(\{x_{1},\dots,x_{n}\}\) of \(C^{-}\) is _vertically simplified_ if for each basis element \(x_{i}\) either \(\partial(x_{i})\equiv V^{b_{i}}x_{i+1}\pmod{U}\) for some \(b_{i}\in\mathbb{Z}\) or \(\partial(x_{i})\equiv 0\pmod{U}\). That is, every generator is an end of at most one vertical arrow; equivalently, every generator in the hat vertical complex has at most one arrow in or out. The generators of the vertical homology are exactly the generators with no vertical arrow in or out. Similarly, a basis for \(C^{-}\) is _horizontally simplified_ if for each basis element \(x_{i}\) either \(\partial(x_{i})\equiv U^{a_{i}}x_{i+1}\pmod{V}\) for some \(a_{i}\in\mathbb{Z}\) or \(\partial(x_{i})\equiv 0\pmod{V}\); that is, if each generator is an end of at most one horizontal arrow.

**Proposition 2.2**.: _Let \(C\) be a bigraded chain complex over \(\mathcal{R}^{-}=\mathbb{F}[U,V]\), where \(\mathbb{F}\) is a field. \(C\) is chain homotopy equivalent to a complex \(C^{\prime}\) which is reduced. Moreover, \(C^{\prime}\) admits a homogeneous basis which is vertically simplified. It also admits a (possibly different) homogeneous basis which is horizontally simplified._

Proof.: This is essentially Proposition 11.52 of [1]. That result assumes \(\mathbb{Z}/2\mathbb{Z}\) coefficients rather than an arbitrary field \(\mathbb{F}\) and is stated in terms of filtered complexes over \(\mathbb{F}[U]\) rather than complexes over \(\mathbb{F}[U,V]\) (see the notational remarks in Section 2.3 below), but the proof is completely analogous. 

Note that while we can always pick a basis which is either horizontally or vertically simplified, there exist complexes which do not admit a single basis that is both horizontally and vertically simplified (see Example 12.7). We now consider maps between two bigraded complexes \(C_{1}\) and \(C_{2}\), or more precisely between the localized versions \(C_{1}^{\infty}\) and \(C_{2}^{\infty}\). We will be interested in chain maps which interchange the roles of \(U\) and \(V\).
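Returning to the trefoil complex with basis \(\{a,b,c\}\) (a check we include for convenience): in the hat vertical complex (\(V=1\), then \(U=0\)) the only arrow is \(\partial a=-b\), and in the hat horizontal complex (\(U=1\), then \(V=0\)) the only arrow is \(\partial c=b\). Each generator is therefore the end of at most one vertical arrow and at most one horizontal arrow, so this basis happens to be both vertically and horizontally simplified, and the hat vertical and hat horizontal homologies are generated by \(c\) and \(a\) respectively, matching the generators quoted in Section 1.4.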
We say that a chain map \(\Psi:C_{1}^{\infty}\to C_{2}^{\infty}\) is a _skew \(\mathcal{R}^{-}\)-module homomorphism_ if \(\Psi\) becomes a homomorphism of \(\mathcal{R}^{-}\)-modules if the roles of \(U\) and \(V\) are exchanged in the action of \(\mathcal{R}^{-}\) on \(C_{2}\); in particular, \(\Psi(Ux)=V\Psi(x)\) and \(\Psi(Vx)=U\Psi(x)\) for any \(x\) in \(C_{1}\). We say that such a map has _skew degree_ \((a,b)\) if it interchanges the two gradings and then raises \(\mathrm{gr}_{w}\) by \(a\) and \(\mathrm{gr}_{z}\) by \(b\), so that \(\mathrm{gr}_{w}(\Psi(x))=\mathrm{gr}_{z}(x)+a\) and \(\mathrm{gr}_{z}(\Psi(x))=\mathrm{gr}_{w}(x)+b\). Note that a bigraded complex over \(\mathcal{R}^{\infty}\) carries two natural filtrations given by the negative exponent of \(U\) or of \(V\). More precisely, the _\(U\)-filtration_ is defined so that for any generator \(x\) of \(C_{1}\) the element \(U^{a}V^{b}x\) is at filtration level \(i=-a\), and the _\(V\)-filtration_ is defined so that \(U^{a}V^{b}x\) is at filtration level \(j=-b\). We say that a skew \(\mathcal{R}^{-}\)-module homomorphism \(\Psi:C_{1}^{\infty}\to C_{2}^{\infty}\) is _flip-filtered_ if it is filtered with respect to the \(V\)-filtration on \(C_{1}\) and the \(U\)-filtration on \(C_{2}\); equivalently, \(\Psi\) takes each generator \(x\) of \(C_{1}\) to a sum of terms of the form \(cU^{a}V^{b}y\) where \(y\) is a generator of \(C_{2}\), \(c\) is a nonzero element of \(\mathbb{F}\), and \(a\geq 0\). Similarly, we say \(\Psi\) is _reverse flip-filtered_ if it is filtered with respect to the \(U\)-filtration on \(C_{1}\) and the \(V\)-filtration on \(C_{2}\). We say that \(\Psi\) is a flip-filtered chain homotopy equivalence if it is flip-filtered and there exists a reverse flip-filtered map \(\bar{\Psi}:C_{2}\to C_{1}\) such that \(\bar{\Psi}\circ\Psi\) and \(\Psi\circ\bar{\Psi}\) are both filtered chain homotopic to the respective identity maps (with respect to the \(V\)-filtration on \(C_{1}\) and the \(U\)-filtration on \(C_{2}\)). Given two flip-filtered maps \(\Psi_{1}\) and \(\Psi_{2}\) from \(C_{1}\) to \(C_{2}\), a _flip-filtered chain homotopy_ is a skew \(\mathcal{R}^{-}\)-module homomorphism \(H:C_{1}\to C_{2}\) such that \(\Psi_{1}-\Psi_{2}=H\circ\partial+\partial\circ H\) and \(H\) is filtered with respect to the \(V\)-filtration on \(C_{1}\) and the \(U\)-filtration on \(C_{2}\). The flip-filtered maps we will consider exchange the gradings; that is, they will have skew degree \((0,0)\). In general for each bigraded complex \(C_{i}\) we could fix an Alexander grading shift \(s_{i}\) in \(\mathbb{Z}\) and then define a flip map \(\Psi_{s}\) of skew degree \((0,-2s)\) where \(s=s_{1}+s_{2}\). However, it is enough to consider one (arbitrary) shift on each complex, since multiplying a flip-filtered map of skew degree \((0,-2s)\) by \(V\) gives a flip-filtered map of skew degree \((0,-2s-2)\) and so the maps associated with different choices of shifts carry equivalent information. Thus we will set \(s_{1}=s_{2}=0\). Given bases \(\{x_{1},\ldots,x_{m}\}\) for \(C_{1}\) and \(\{y_{1},\ldots,y_{n}\}\) for \(C_{2}\), a skew \(\mathcal{R}^{-}\)-module homomorphism \(\Psi:C_{1}\to C_{2}\) of skew degree \((0,0)\) is specified by a collection of coefficients \(c_{i,j}\) for each \(1\leq i\leq m\) and \(1\leq j\leq n\) such that \(\operatorname{gr}_{w}(y_{j})-\operatorname{gr}_{z}(x_{i})\) is even (we take \(c_{i,j}\) to be \(0\) for other pairs).
In particular,

\[\Psi(x_{i})=\sum_{j}c_{i,j}U^{\frac{\operatorname{gr}_{w}(y_{j})-\operatorname{gr}_{z}(x_{i})}{2}}V^{\frac{\operatorname{gr}_{z}(y_{j})-\operatorname{gr}_{w}(x_{i})}{2}}y_{j}.\]

If we further assume \(\Psi\) is flip-filtered then \(c_{i,j}\) is only nonzero if \(\operatorname{gr}_{w}(y_{j})\geq\operatorname{gr}_{z}(x_{i})\), since the exponent on \(U\) must be nonnegative. If we have a nice basis for \(C_{1}\), then up to homotopy we can assume \(\Psi\) has an even simpler form:

**Proposition 2.3**.: _Let \(\{x_{1},\ldots,x_{2k},x_{2k+1},\ldots,x_{m}\}\) be a horizontally simplified basis for \(C_{1}\) such that \(\widehat{\partial}^{h}(x_{2i-1})=x_{2i}\) for \(1\leq i\leq k\) and \(\widehat{\partial}^{h}\) is zero on all other generators, where \(\widehat{\partial}^{h}\) is the differential on the hat horizontal complex. If \(\Psi:C_{1}\to C_{2}\) is a flip-filtered chain map, then \(\Psi\) is flip-filtered chain homotopic to another such map \(\Psi^{\prime}\) for which \(\Psi^{\prime}(x_{2i-1})=0\) for \(1\leq i\leq k\). Moreover, for each \(1\leq i\leq k\), \(\Psi^{\prime}(x_{2i})\) is trivial mod \(U\) and is determined by the values of \(\Psi^{\prime}(x_{\ell})\) for \(2k+1\leq\ell\leq m\)._

Proof.: We will modify \(\Psi=\Psi_{0}\) to be zero on the generators \(x_{2i-1}\) one at a time, defining a sequence of flip-filtered chain homotopies \(H_{i}\) from \(\Psi_{i-1}\) to \(\Psi_{i}\) such that \(\Psi_{i}(x_{2i-1})=0\) and \(\Psi_{i}(x_{2j-1})=\Psi_{i-1}(x_{2j-1})=0\) for all \(j<i\). Then \(\Psi_{k}=\Psi^{\prime}\) is the desired map. Letting \(a_{i}\) be the length of the horizontal arrow from \(x_{2i-1}\) to \(x_{2i}\), we define the homotopy \(H_{i}:C_{1}\to C_{2}\) by setting \(H_{i}(x_{2i})=-U^{-a_{i}}\Psi_{i-1}(x_{2i-1})\) and \(H_{i}=0\) on all other generators. Then

\[(\Psi_{i}-\Psi_{i-1})(x_{2i-1})=H_{i}\circ\partial(x_{2i-1})+\partial\circ H_{i}(x_{2i-1})=-U^{a_{i}}U^{-a_{i}}\Psi_{i-1}(x_{2i-1})\]

and so \(\Psi_{i}(x_{2i-1})=0.\) The final claim follows from the fact that \(\Psi\) is a chain map. 

When \(\Psi\) is a flip-filtered chain homotopy equivalence it induces a homotopy equivalence on each filtration level, using the \(V\)-filtration on \(C_{1}^{\infty}\) and the \(U\)-filtration on \(C_{2}^{\infty}\). In particular, considering the \(0\) filtration levels, it gives a chain homotopy equivalence between \(C_{1}\otimes_{\mathcal{R}^{-}}\mathbb{F}[U,U^{-1},V]\) and \(C_{2}\otimes_{\mathcal{R}^{-}}\mathbb{F}[U,V,V^{-1}]\). Setting \(U=1\) in \(C_{1}\otimes_{\mathcal{R}^{-}}\mathbb{F}[U,U^{-1},V]\) and \(V=1\) in \(C_{2}\otimes_{\mathcal{R}^{-}}\mathbb{F}[U,V,V^{-1}]\) gives a chain homotopy equivalence from the horizontal complex of \(C_{1}\) to the vertical complex of \(C_{2}\), and this induces an isomorphism \(\Psi_{*}\) from the horizontal homology of \(C_{1}\) to the vertical homology of \(C_{2}\). We call such an isomorphism a _flip isomorphism_. When \(\Psi\) also has skew-degree \((0,0)\), the induced isomorphism \(\Psi_{*}\) is grading preserving with respect to \(\operatorname{gr}_{z}\) on \(C_{1}\) and \(\operatorname{gr}_{w}\) on \(C_{2}\). By instead setting \(V=0\) in \(C_{1}\otimes_{\mathcal{R}^{-}}\mathbb{F}[U,U^{-1},V]\) and \(U=0\) in \(C_{2}\otimes_{\mathcal{R}^{-}}\mathbb{F}[U,V,V^{-1}]\), we also have a chain homotopy equivalence from the hat horizontal complex of \(C_{1}\) to the hat vertical complex of \(C_{2}\), which induces a grading preserving isomorphism \(\widehat{\Psi}_{*}\) from the hat horizontal homology of \(C_{1}\) to the hat vertical homology of \(C_{2}\).
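For a small worked example (our verification, using the trefoil complex and the bigradings recorded earlier in this section): the map \(\Psi\) defined by

\[\Psi(a)=c,\qquad\Psi(b)=-b,\qquad\Psi(c)=a\]

is a chain map when extended as a skew \(\mathcal{R}^{-}\)-module homomorphism, since \(\Psi(\partial a)=\Psi(-Vb)=-U\Psi(b)=Ub=\partial\Psi(a)\) and \(\Psi(\partial c)=\Psi(Ub)=V\Psi(b)=-Vb=\partial\Psi(c)\). It is flip-filtered with skew degree \((0,0)\); for instance \(\operatorname{gr}_{w}(\Psi(a))=\operatorname{gr}_{w}(c)=0=\operatorname{gr}_{z}(a)\) and \(\operatorname{gr}_{z}(\Psi(a))=2=\operatorname{gr}_{w}(a)\). The induced flip isomorphism takes the horizontal homology generator \(a\) to the vertical homology generator \(c\), as quoted in Section 1.4.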
A flip-filtered chain homotopy equivalence \(\Psi\) is determined up to chain homotopy by the flip isomorphism \(\Psi_{*}\) it induces on homology. To see this, choose a horizontally simplified basis for \(C_{1}\) and a vertically simplified basis for \(C_{2}\), so that the generators which are not on a horizontal or vertical arrow, respectively, form bases for the horizontal homology of \(C_{1}\) and the vertical homology of \(C_{2}\). By Proposition 2.3, \(\Psi\) is determined by its image on the basis for horizontal homology of \(C_{1}\). A similar argument shows that the image is determined by the projection to the basis for vertical homology of \(C_{2}\).

### The knot Floer chain complex

We will now describe the knot Floer complex associated to a nullhomologous knot \(K\) in a \(3\)-manifold \(Y\). We assume the reader is familiar with this invariant as defined by Ozsvath and Szabo [11] and Rasmussen [14]; surveys of this material can be found in [10] and [15]. However, we adopt slightly different conventions; in particular, while the original formulation defines the knot Floer complex as a filtered chain complex over \(\mathbb{F}[U]\), we introduce a second formal variable \(V\) to keep track of the Alexander filtration and view the invariant as a collection of bigraded chain complexes over \(\mathcal{R}^{-}\). This notation is becoming more common in the literature; we largely follow [13, Section 1.5] (see also [1, Section 2]). For the reader's convenience, the relationship between these two notational conventions is explained in Section 2.3. Recall that knot Floer homology can be defined in terms of a _doubly pointed Heegaard diagram_ for the pair \((Y,K)\), that is, a tuple \(\mathcal{H}=(\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta},w,z)\) where \((\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta})\) is a Heegaard diagram for \(Y\) and \(z\) and \(w\) are points in \(\Sigma\) in the complement of \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\). The pair of basepoints \(z\) and \(w\) determine the oriented knot \(K\) by connecting \(z\) to \(w\) through the \(\boldsymbol{\alpha}\)-handlebody, avoiding the \(\boldsymbol{\alpha}\) disks, and connecting \(w\) to \(z\) through the \(\boldsymbol{\beta}\)-handlebody, avoiding the \(\boldsymbol{\beta}\) disks. Given a doubly pointed Heegaard diagram \(\mathcal{H}\), the set \(\mathcal{G}(\mathcal{H})\) consists of unordered tuples of points in \(\boldsymbol{\alpha}\cap\boldsymbol{\beta}\) such that each alpha curve and each beta curve is occupied exactly once. We construct a chain complex \(CFK_{\mathcal{R}^{-}}(\mathcal{H})\) generated over \(\mathcal{R}^{-}\) by \(\mathcal{G}(\mathcal{H})\) whose differential counts holomorphic disks in an appropriate symmetric product of \(\Sigma\). The differential is given by

\[\partial(\mathbf{x})=\sum_{\mathbf{y}\in\mathcal{G}(\mathcal{H})}\sum_{\begin{subarray}{c}\phi\in\pi_{2}(\mathbf{x},\mathbf{y})\\ \mu(\phi)=1\end{subarray}}\#\left(\frac{\mathcal{M}(\phi)}{\mathbb{R}}\right)U^{n_{w}(\phi)}V^{n_{z}(\phi)}\mathbf{y},\]

where \(\pi_{2}(\mathbf{x},\mathbf{y})\) is the set of homotopy classes of disks connecting \(\mathbf{x}\) to \(\mathbf{y}\), \(\mu\) is the Maslov index of such a class, \(\mathcal{M}(\phi)\) is the moduli space of all pseudoholomorphic disks in the homotopy class \(\phi\), and \(n_{w}(\phi)\) and \(n_{z}(\phi)\) count the multiplicity with which \(\phi\) covers the basepoint \(w\) and \(z\), respectively.
Note that if \(\mathcal{M}(\phi)\) is nonempty then \(n_{w}(\phi)\) and \(n_{z}(\phi)\) are both nonnegative. To each generator \(\mathbf{x}\) in \(\mathcal{G}(\mathcal{H})\) we can associate a spin\({}^{c}\) structure \(\mathfrak{s}_{w}(\mathbf{x})\) of \(Y\), and generators \(\mathbf{x}\) and \(\mathbf{y}\) determine the same spin\({}^{c}\) structure if and only if \(\pi_{2}(\mathbf{x},\mathbf{y})\) is nonempty. It follows that \(CFK_{\mathcal{R}^{-}}(\mathcal{H})\) splits as a direct sum over Spin\({}^{c}(Y)\), the set of spin\({}^{c}\) structures on \(Y\):

\[CFK_{\mathcal{R}^{-}}(\mathcal{H})=\bigoplus_{\mathfrak{s}\in\text{Spin}^{c}(Y)}CFK_{\mathcal{R}^{-}}(\mathcal{H};\mathfrak{s}).\]

If \(\mathfrak{s}\) is a torsion spin\({}^{c}\) structure then \(CFK_{\mathcal{R}^{-}}(\mathcal{H};\mathfrak{s})\) can be equipped with an absolute bigrading \((\text{gr}_{w},\text{gr}_{z})\). For generators \(\mathbf{x}\) and \(\mathbf{y}\) and any \(\phi\in\pi_{2}(\mathbf{x},\mathbf{y})\), the grading difference between \(\mathbf{x}\) and \(\mathbf{y}\) is given by

\[\text{gr}_{w}(\mathbf{x})-\text{gr}_{w}(\mathbf{y})=\mu(\phi)-2n_{w}(\phi), \tag{2}\]

\[\text{gr}_{z}(\mathbf{x})-\text{gr}_{z}(\mathbf{y})=\mu(\phi)-2n_{z}(\phi). \tag{3}\]

In particular, the differential has degree \((-1,-1)\): a term \(U^{n_{w}(\phi)}V^{n_{z}(\phi)}\mathbf{y}\) in \(\partial\mathbf{x}\) has \(\mu(\phi)=1\), so \(\text{gr}_{w}(\mathbf{x})-\text{gr}_{w}(U^{n_{w}(\phi)}V^{n_{z}(\phi)}\mathbf{y})=(\mu(\phi)-2n_{w}(\phi))+2n_{w}(\phi)=1\), and similarly for \(\text{gr}_{z}\). \(\text{gr}_{w}\) and \(\text{gr}_{z}\) have the same parity so \(A=\frac{\text{gr}_{w}-\text{gr}_{z}}{2}\) is also a \(\mathbb{Z}\)-grading. Thus \(CFK_{\mathcal{R}^{-}}(\mathcal{H};\mathfrak{s})\) is a bigraded complex as introduced in the previous section. If \(\mathfrak{s}\) is not a torsion spin\({}^{c}\) structure then we have only a relative bigrading \((\text{gr}_{w},\text{gr}_{z})\) defined by Equations (2) and (3). It turns out that the bigraded complexes \(CFK_{\mathcal{R}^{-}}(\mathcal{H};\mathfrak{s})\) are invariants, up to filtered chain homotopy equivalence, of the triple \((Y,K;\mathfrak{s})\) and do not depend on the choice of doubly pointed Heegaard diagram \(\mathcal{H}\). We denote this filtered chain homotopy equivalence class of complexes \(CFK_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\), or \(CFK_{\mathcal{R}^{-}}(Y,K)\) for the sum over all spin\({}^{c}\) structures. In the case that \(Y=S^{3}\) we omit it from the notation and write \(CFK_{\mathcal{R}^{-}}(K)\). At times it is convenient to allow negative powers of \(U\) and \(V\); for this we define \(\text{\it CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\) to be \(CFK_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\otimes_{\mathcal{R}^{-}}\mathcal{R}^{\infty}\) (similarly \(\text{\it CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K)=CFK_{\mathcal{R}^{-}}(Y,K)\otimes_{\mathcal{R}^{-}}\mathcal{R}^{\infty}\)). \(\text{\it CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K)\) is called the _full knot Floer complex_ of the knot \(K\). Simpler versions of the invariant can be defined analogously by replacing \(\mathcal{R}^{-}\) with one of its quotients \(\mathcal{R}_{n}\) defined above. In particular, a frequently used version is \(CFK_{\widehat{\mathcal{R}}}(Y,K)\), the \(UV=0\) quotient of the knot Floer complex. This complex is considerably easier to compute, since holomorphic disks that cover both basepoints can be ignored. See, for example, [OS] which gives an effective method for computing the \(UV=0\) knot Floer complex.
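To illustrate the effect of the \(UV=0\) quotient with the complexes from Section 1.4 (our observation): for the trefoil itself the differential \(\partial a=-Vb\), \(\partial c=Ub\) contains no \(UV\) terms, so passing to \(CFK_{\widehat{\mathcal{R}}}\) changes nothing; but for the dual knot complex, where \(\partial c=-UVb+UVd\), the differential of \(c\) vanishes in \(CFK_{\widehat{\mathcal{R}}}\), while \(\partial a=Ub\) and \(\partial e=-Vd\) survive. This is the sense in which the \(UV=0\) complex is easier to compute but may carry strictly less information.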
In addition to the bigraded complexes above, for each spin\({}^{c}\) structure \(\mathfrak{s}\) in Spin\({}^{c}(Y)\) the knot Floer package defines a flip-filtered chain homotopy equivalence
\[\Psi_{\mathfrak{s}}:\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\rightarrow\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K;\mathfrak{s}+\text{PD}[K])\]
known as a _flip map_. Note that since we are restricting to nullhomologous knots, \(\text{PD}[K]=0\), so \(\Psi_{\mathfrak{s}}\) takes \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\) to itself. The flip map is well-defined up to flip-filtered chain homotopy.

The flip maps are defined as the composition of three maps. Fix a doubly pointed Heegaard diagram \(\mathcal{H}\) representing \((Y,K)\) and let \(\mathcal{H}_{z}\) and \(\mathcal{H}_{w}\) denote the singly pointed Heegaard diagrams for \(Y\) obtained by ignoring the \(w\) basepoint or the \(z\) basepoint, respectively. Both \(\mathcal{H}_{z}\) and \(\mathcal{H}_{w}\) are singly pointed Heegaard diagrams for the ambient \(3\)-manifold \(Y\), so they can be used to compute \(\mathit{CF}^{-}(Y;\mathfrak{s})\). Ignoring the \(w\) basepoint corresponds to setting \(U=1\) and \(V=W\); this gives a map
\[\Omega_{z}:\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(\mathcal{H};\mathfrak{s})\to\mathit{CF}^{\infty}(\mathcal{H}_{z};\mathfrak{s}).\]
In other words, \(\mathit{CF}^{\infty}(\mathcal{H}_{z};\mathfrak{s})\) is (the \(\infty\) version of) the horizontal complex of \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(\mathcal{H};\mathfrak{s})\). Restricting to the Alexander grading zero summand gives an isomorphism
\[\Omega_{z}:\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(\mathcal{H};\mathfrak{s})|_{A=0}\to\mathit{CF}^{\infty}(\mathcal{H}_{z};\mathfrak{s}).\]
This map takes \(\mathrm{gr}_{z}\) to the Maslov grading of \(\mathit{CF}^{\infty}(\mathcal{H}_{z};\mathfrak{s})\). Similarly, ignoring the \(z\) basepoint corresponds to setting \(V=1\) and \(U=W\) and gives an isomorphism
\[\Omega_{w}:\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(\mathcal{H};\mathfrak{s})|_{A=0}\to\mathit{CF}^{\infty}(\mathcal{H}_{w};\mathfrak{s})\]
taking \(\mathrm{gr}_{w}\) to the Maslov grading. Finally, let
\[\Gamma:\mathit{CF}^{\infty}(\mathcal{H}_{z};\mathfrak{s})\to\mathit{CF}^{\infty}(\mathcal{H}_{w};\mathfrak{s})\]
be a filtered chain homotopy equivalence arising from a sequence of Heegaard moves taking \(z\) to \(w\) in \(\mathcal{H}\). We define
\[\Psi_{\mathfrak{s}}:\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})|_{A=0}\to\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})|_{A=0}\]
to be the composition \(\Omega_{w}^{-1}\circ\Gamma\circ\Omega_{z}\). We can uniquely extend this map to a skew \(\mathcal{R}^{-}\)-module homomorphism \(\Psi_{\mathfrak{s}}\) from \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\) to itself.

**Proposition 2.4**.: _The flip map \(\Psi_{\mathfrak{s}}\) is flip-filtered and has skew degree \((0,0)\)._

Proof.: This follows from the fact that \(\Gamma\) is filtered and grading preserving. It is enough to check this on \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})|_{A=0}\), since both properties remain true when we extend the map as a skew \(\mathcal{R}^{-}\)-module homomorphism to all of \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\).
For the second claim, note that \(\Omega_{z}\) takes \(\mathrm{gr}_{z}\) to \(M\), \(\Omega_{w}\) takes \(\mathrm{gr}_{w}\) to \(M\), and \(\Gamma\) preserves \(M\), so \(\Psi_{\mathfrak{s}}\) takes \(\mathrm{gr}_{z}\) to \(\mathrm{gr}_{w}\). Since \(\mathrm{gr}_{z}=\mathrm{gr}_{w}\) on both the target and source of \(\Psi_{\mathfrak{s}}\) (both being summands with Alexander grading zero), \(\Psi_{\mathfrak{s}}\) also takes \(\mathrm{gr}_{w}\) to \(\mathrm{gr}_{z}\).

For the first claim, consider an element \(U^{A(x)}x\) of \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(\mathcal{H};\mathfrak{s})|_{A=0}\). \(\Omega_{z}\) takes this to \(x\), and \(\Gamma\) takes \(x\) to a sum of the form \(\sum_{i}c_{i}U^{a_{i}}y_{i}\) where \(c_{i}\) is a constant in \(\mathbb{F}\), \(y_{i}\) is a generator and \(a_{i}\geq 0\). It follows that
\[\Psi_{\mathfrak{s}}(U^{A(x)}x)=\sum_{i}c_{i}U^{a_{i}}V^{-A(y_{i})+a_{i}}y_{i},\]
and thus the \(U\)-filtration level of \(\Psi_{\mathfrak{s}}(U^{A(x)}x)\) is at most as large as the \(V\)-filtration level of \(U^{A(x)}x\). This relationship is preserved when the input is multiplied by \(U\) and \(V\), so \(\Psi_{\mathfrak{s}}\) is flip-filtered.

### Notational remarks

Though it is becoming more common, some readers may be unfamiliar with the \(\mathbb{F}[U,V]\) notation used here for knot Floer complexes. In its original formulation, the knot Floer complex \(\mathit{CFK}^{\infty}(Y,K)\) is defined as a chain complex over \(\mathbb{F}[U,U^{-1}]\) equipped with an additional Alexander filtration; we find it convenient to encode this filtration with the second variable \(V\). We use the subscript \(\mathcal{R}^{-}\) in our notation to highlight our different conventions, but the two complexes carry the same information: \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K)\) is isomorphic to infinitely many copies of \(\mathit{CFK}^{\infty}(Y,K)\). More precisely, \(\mathit{CFK}^{\infty}(Y,K)\) is isomorphic to \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K)|_{A=s}\) for any \(s\in\mathbb{Z}\). We can view \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K)|_{A=s}\) as generated over \(\mathbb{F}[W,W^{-1}]\) by generators \(\{V^{-A(\mathbf{x})}\mathbf{x}\}_{\mathbf{x}\in\mathcal{G}}\); setting \(V=1\) and \(U=W\) recovers the familiar complex over \(\mathbb{F}[U,U^{-1}]\), and the Alexander filtration is given by negative powers of \(V\). For any \(s\), multiplication by \(V^{s}\) gives an isomorphism from \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K)|_{A=0}\) to \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K)|_{A=s}\).

In [11], Ozsvath and Szabo in fact define a different copy of \(\mathit{CFK}^{\infty}(Y,K;\mathfrak{s})\) for each relative \(\mathrm{spin}^{c}\) structure \(\xi\) in \(G^{-1}_{Y,K}(\mathfrak{s})\), where \(G_{Y,K}\) is a map from the set of relative \(\mathrm{spin}^{c}\) structures for \((Y,K)\) to the set of \(\mathrm{spin}^{c}\) structures for \(Y\) (for nullhomologous knots \(G^{-1}_{Y,K}(\mathfrak{s})\) is indexed by \(s\) in \(\mathbb{Z}\)). These complexes are described as generated over \(\mathbb{F}\) by triples \([x,i,j]\) where \(x\) is a generator and \(i\) and \(j\) are integers satisfying \(j-i=A(x)-s\). We identify the triple \([x,i,j]\) with \(U^{-i}V^{-j}x\) and note that the Ozsvath-Szabo complex associated to \(s\) is precisely the Alexander grading \(s\) summand of \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\). In [HL], which we rely on substantially for the background on flip maps, slightly different notation is used.
There a single complex is given for each \(\mathfrak{s}\), generated by triples \([x,i,j]\) with \(j-i=A(x)\). However, the dependence on a choice of \(s\) in \(\mathbb{Z}\) arises when defining filtered maps; the relevant filtration on the sources is given by the integer \(j-s\) rather than by \(j\). For us the \(V\) filtration is always the negative power, so \([x,i,j]\) in the notation of [HL] corresponds to \(U^{-i}V^{s-j}x\). This distinction is not relevant in the present setting, since we only define the flip maps corresponding to \(s=0\), though it is relevant for rationally nullhomologous knots. In general, for any \(\mathrm{spin}^{c}\) structure \(\mathfrak{s}\) in \(\mathrm{Spin}^{c}(Y)\) we can define a family of flip maps by choosing relative \(\mathrm{spin}^{c}\)-structures; these maps are equivalent to each other, differing only by multiplication by a power of \(V\), so it suffices to compute any one. For arbitrary knots, \(G^{-1}_{Y,K}(\mathfrak{s})\) is indexed by \(s\) ranging not over \(\mathbb{Z}\) but over \(\mathbb{Z}+A(\mathfrak{s})\) for some rational number \(A(\mathfrak{s})\). Since our knots are nullhomologous, \(A(\mathfrak{s})=0\) and it makes sense to choose \(s=0\).

We remark that \(\mathit{CFK}^{-}(Y,K;\mathfrak{s})\) can be identified with the subcomplex of \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K)|_{A=s}\) (for any integer \(s\)) with nonnegative powers of \(V\). This carries the same information as \(\mathit{CFK}_{\mathcal{R}^{-}}(Y,K)|_{A=s}\) but the two are not quite the same, at least under the identification above, since \(\mathit{CFK}_{\mathcal{R}^{-}}(Y,K)|_{A=s}\) also requires nonnegative powers of \(U\). The subcomplex of \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K)|_{A=s}\) with nonnegative powers of \(V\) can also be described as the Alexander grading \(s\) summand of \(\mathit{CFK}_{\mathcal{R}^{-}}(Y,K)\otimes_{\mathcal{R}^{-}}\mathbb{F}[U,U^{-1},V]\). In fact, the complex \(\mathit{CFK}_{\mathcal{R}^{-}}(Y,K)|_{A=s}\) is more directly related to the complex \(A^{-}_{s}\) appearing in the minus version of the surgery formulas of Ozsvath and Szabo; this is the subcomplex of \(\mathit{CFK}^{-}(Y,K)\) consisting of triples \([x,i,j]\) with \(\max(i,j-s)\leq 0\). Under the identification given above, this corresponds to the subcomplex of \(\mathit{CFK}_{\mathcal{R}^{-}}(Y,K)|_{A=0}\) generated by terms of the form \(U^{-i}V^{-j}x\) with \(j-i=A(x)\) and \(\max(i,j-s)\leq 0\). Multiplying by \(V^{s}\) gives an isomorphism between this and \(\mathit{CFK}_{\mathcal{R}^{-}}(Y,K)|_{A=s}\).

### Examples

To clarify conventions, particularly regarding flip maps, we will describe two examples in detail. We will return to these examples later when we represent bigraded complexes and flip maps in terms of immersed curves.

**Example 2.5**.: Let \(Y\) be \(+1\)-surgery on the figure eight knot, and let \(K\) be the dual knot, i.e. the core of the filling torus. \(Y\) has a single \(\mathrm{spin}^{c}\)-structure, which we denote \(\mathfrak{s}\). The knot Floer complex \(\mathit{CFK}_{\mathcal{R}^{-}}(Y,K)\) can be computed using the surgery formula of Hedden and Levine [HL] (the easiest way to do this is with immersed curves, via Theorem 1.4).
For a particular choice of basis, the resulting complex has five generators, which we denote \(a\), \(b\), \(c\), \(d\), and \(e\), and the only nonzero differentials are
\[\partial(a)=Vb,\quad\text{ and }\quad\partial(e)=Ud.\]
The bigrading and Alexander grading are given in the table below:
\[\begin{array}{c|ccccc}&a&b&c&d&e\\ \hline(\mathrm{gr}_{w},\mathrm{gr}_{z})&(1,-1)&(0,0)&(0,0)&(0,0)&(-1,1)\\ A&1&0&0&0&-1\end{array}\]

Although it would be difficult to compute directly from a Heegaard diagram, the map \(\Psi_{\mathfrak{s}}\) is uniquely determined by the bigradings up to homotopy and multiplication by a unit in \(\mathbb{F}\). Since \(\Psi_{\mathfrak{s}}\) interchanges the gradings it must take \(a\) to a multiple of \(e\), \(e\) to a multiple of \(a\), and \(b\), \(c\), and \(d\) to linear combinations of \(b\), \(c\), and \(d\). By Proposition 2.3 we can assume after applying an appropriate flip-filtered chain homotopy that \(\Psi_{\mathfrak{s}}(e)=0\), since there is a horizontal arrow starting at \(e\). Then we must have \(\Psi_{\mathfrak{s}}(d)=0\) since \(\Psi_{\mathfrak{s}}\) is a chain map. By applying flip-filtered chain homotopies \(H\) that take \(b\) or \(c\) to appropriate multiples of \(a\), we can assume that the coefficients of \(b\) in \(\Psi_{\mathfrak{s}}(b)\) and \(\Psi_{\mathfrak{s}}(c)\) are zero. We thus have that
\[\begin{aligned}\Psi_{\mathfrak{s}}(a)&=c_{1}e,\\ \Psi_{\mathfrak{s}}(b)&=c_{2}c+c_{3}d,\\ \Psi_{\mathfrak{s}}(c)&=c_{4}c+c_{5}d,\end{aligned}\]
where the \(c_{i}\)'s are constants in \(\mathbb{F}\). Note that we have reduced the problem to finding the induced map from horizontal homology to vertical homology, which are generated by \(\{a,b,c\}\) and \(\{c,d,e\}\), respectively.

The constant \(c_{1}\) must be nonzero so that the induced map from horizontal homology to vertical homology is an isomorphism; after a change of basis rescaling \(e\) by a constant, we can take this multiple to be \(1\). When we rescale \(e\), we will also rescale \(d\) by the same amount so that the differential is unchanged. We then must have \(c_{2}=0\) and \(c_{3}=1\) for \(\Psi_{\mathfrak{s}}\) to be a chain map (indeed, by skew-linearity \(\Psi_{\mathfrak{s}}(\partial a)=\Psi_{\mathfrak{s}}(Vb)=U\Psi_{\mathfrak{s}}(b)=c_{2}Uc+c_{3}Ud\), while \(\partial\Psi_{\mathfrak{s}}(a)=\partial e=Ud\)). Up to a change of basis adding a multiple of \(b\) to \(c\), we can assume \(c_{5}=0\). The flip map is then determined, up to homotopy, by the constant \(c_{4}\) (which must be nonzero to have an isomorphism on homology). In particular, in the case that \(\mathbb{F}=\mathbb{Z}/2\mathbb{Z}\) the flip map is uniquely determined from the complex.

We note that when \(\mathbb{F}\) is not \(\mathbb{Z}/2\mathbb{Z}\), different choices of \(c_{4}\) give non-equivalent flip maps. In this case we can indirectly deduce that the correct flip map is given by \(c_{4}=1\) as follows: given the flip map, the surgery formula allows us to compute \(\mathit{HF}^{-}\) of rational surgeries on \(K\). Changing the value of \(c_{4}\) does not affect the answer for non-zero slopes, but considering the \(0\)-surgery on \(K\) (which is the same as \(0\)-surgery on the figure eight knot), we see that only \(c_{4}=1\) gives the correct answer.

In the example above, the collection of flip maps is uniquely determined by the bigraded complexes up to a unit in \(\mathbb{F}\). This is not always the case, as the next example demonstrates.

**Example 2.6**.: Let \(Y\) be \(+1\)-surgery on the left handed trefoil, and let \(K\) be the core of the surgery.
Once again \(Y\) has a single spin\({}^{c}\)-structure, denoted \(\mathfrak{s}\), and the complex \(\mathit{CFK}^{\infty}_{\mathcal{R}^{-}}(Y,K;\mathfrak{s})\) can be computed using the surgery formula. This complex is identical as an ungraded complex to the one in the previous example: the generators are \(a\), \(b\), \(c\), \(d\), and \(e\), and the nonzero differentials are
\[\partial(a)=Vb,\quad\text{ and }\quad\partial(e)=Ud.\]
The bigradings, however, are different and are given in the table below:
\[\begin{array}{c|ccccc}&a&b&c&d&e\\ \hline(\mathrm{gr}_{w},\mathrm{gr}_{z})&(2,0)&(1,1)&(0,0)&(1,1)&(0,2)\\ A&1&0&0&0&-1\end{array}\]

We describe the possible flip maps up to homotopy. As in the previous example, it is enough to describe the induced map from horizontal to vertical homology: gradings force \(\Psi_{\mathfrak{s}}(e)\) to be zero, the chain map property implies that \(\Psi_{\mathfrak{s}}(d)=0\), and by appropriate chain homotopies we can assume that the coefficients of \(a\) and \(b\) in \(\Psi_{\mathfrak{s}}(a)\), \(\Psi_{\mathfrak{s}}(b)\), and \(\Psi_{\mathfrak{s}}(c)\) are zero. The bigradings now tell us that
\[\begin{aligned}\Psi_{\mathfrak{s}}(a)&=c_{1}V^{-1}c+c_{2}e,\\ \Psi_{\mathfrak{s}}(b)&=c_{3}d,\\ \Psi_{\mathfrak{s}}(c)&=c_{4}c+c_{5}Ve.\end{aligned}\]
The constant \(c_{3}\) must be non-zero and can be made to be \(1\) after a change of basis rescaling \(d\) (and also rescaling \(e\), so that the differential is unchanged). The fact that \(\Psi_{\mathfrak{s}}\) is a chain map implies that \(c_{5}=0\) and \(c_{2}=1\). The constant \(c_{4}\) must then be nonzero. If \(c_{1}\) is nonzero, we may assume it is \(1\) by a change of basis rescaling \(c\). We are left with two fundamentally different cases: \(c_{1}=0\) or \(c_{1}=1\), along with a choice of nonzero constant \(c_{4}\) in each case. None of these remaining choices are equivalent. In particular, even when \(\mathbb{F}=\mathbb{Z}/2\mathbb{Z}\) we have two nonequivalent flip maps that could occur for the given bigraded complex.

The surgery formula does not give the flip map on the dual surgery, and computing the flip map directly would be quite difficult, but as in the previous example we can use known surgeries on \(K\) to deduce that the correct choice is \(c_{1}=c_{4}=1\). We check that \(c_{4}=1\) by considering the zero surgery on \(K\), as before. To see that \(c_{1}=1\), we use both possible flip maps in the mapping cone formula to compute \(\widehat{HF}(Y_{-4/3}(K))\); using \(c_{1}=1\) gives rank \(1\) while using \(c_{1}=0\) gives rank \(3\). \(Y_{-4/3}(K)\) is the same as \(-1\)-surgery on the left handed trefoil in \(S^{3}\), so the correct rank is \(1\).

**Remark 2.7**.: There is no algebraic obstruction to some other knot \(K^{\prime}\subset Y^{\prime}\) giving the exact same complexes as in Example 2.6 and a different choice of flip map. However, this does not happen because such a knot complement would still be genus one and fibered. This implies that the complement of \(K^{\prime}\) is either the figure-eight complement or a trefoil complement, and no framing on either of these gives the complex above with a different flip map.

## 3. Immersed Floer theory in marked surfaces

The algebraic objects described in the previous section, bigraded complexes and flip maps, can be given a geometric interpretation using Floer homology of immersed curves in certain marked surfaces. In this section we define Floer theory for immersed Lagrangians in these surfaces.
Immersed Floer theory is defined more generally in [1], but in our two-dimensional setting Floer theory can be defined combinatorially. We will also prove, in our setting, a stronger notion of homotopy invariance than is shown in [1]. Our construction is inspired by but not identical to the combinatorial treatment of Floer cohomology and the Fukaya category for curves in surfaces found in [1]. One difference is that we use homological conventions rather than cohomological conventions, since we are ultimately interested in representing Heegaard Floer homology. Another difference is that we avoid the use of Novikov coefficients and ignore the area of disks we count; in particular, we do not need to choose a symplectic form on the surface. This is possible because we restrict to non-compact surfaces (in fact, for our purposes it is sufficient to work only in the infinite strip and the infinite cylinder) and impose an assumption that immersed curves are in an admissible configuration (see Definition 3.6 below). In this respect, our construction is more similar to the combinatorial treatment of Floer homology of curves in [1], though that work restricts to embedded curves and does not define higher product operations. Another difference is that, unlike in [1], we allow curves which bound immersed monogons. This adds significant technical difficulties and requires curves to carry a special decoration, a _bounding chain_, before Floer homology can be defined.

A final difference is that we will consider curves in marked surfaces. In this case we introduce a formal variable \(W\) to record interactions of immersed disks with the marked points, and the Floer complex will be a module over \(\mathbb{F}[W]\) rather than a vector space over \(\mathbb{F}\). In fact, we will ultimately be interested in doubly marked surfaces in which there are two types of marked points; in this setting the relevant coefficient ring is \(\mathcal{R}^{-}=\mathbb{F}[U,V]\), where the formal variables \(U\) and \(V\) are associated with the two types of marked points. To simplify notation we will stick to the singly marked setting for most of this section, and we explain how the definitions extend to doubly marked surfaces in Section 3.5.

### The space \(CF(L_{0},L_{1})\)

Consider a non-compact surface \(\Sigma\) with a collection \(\{w_{i}\}_{i\in I}\) of marked points (where \(I\) is any index set--generally we take \(I\) to be \(\mathbb{Z}\), but we occasionally consider finite collections of marked points); we allow \(\Sigma\) to have boundary, but we will require that any compact boundary component is decorated with a basepoint called a _stop_. We mention them for completeness, but we will not need the case of compact boundary components with stops; the two key examples of surfaces for the purposes of this paper are the infinite strip \(\mathcal{S}=[-\frac{1}{2},\frac{1}{2}]\times\mathbb{R}\), with marked points \(w_{i}\) at \((0,i-\frac{1}{2})\) for integers \(i\), and the infinite cylinder \(\mathcal{Z}=\mathcal{S}/(-\frac{1}{2},y)\sim(\frac{1}{2},y)\).

Let \(L\) be an immersed Lagrangian in \(\Sigma\) that is disjoint from all of the marked points; more precisely, \(L\) is a disjoint union of copies of \(S^{1}\), \([0,1]\) and \(\mathbb{R}\) along with an immersion \(\iota:L\to\Sigma\) whose image is disjoint from the marked points. We let \(\iota_{i}\) denote the restriction of \(\iota\) to a component \(L_{i}\) of \(L\).
The image of a component \(L_{i}\) will be referred to as an immersed circle, immersed arc, or immersed line when \(L_{i}\) is \(S^{1}\), \([0,1]\) or \(\mathbb{R}\), respectively. We require that the endpoints of an immersed arc lie on the boundary of \(\Sigma\). We also require that an immersed line eventually leaves any compact subsurface of \(\Sigma\) on both ends. For example, when \(\Sigma\) is the infinite strip or cylinder, this means the ends of immersed lines must escape to the infinite ends of the strip or cylinder. Such a collection of immersions \(\iota_{i}:L_{i}\to\Sigma\) will be called collectively an _immersed multicurve_. By slight abuse of notation, we will sometimes conflate the immersion with its image.

The immersed multicurves we consider will be weighted in the following sense: there will be a collection of basepoints on \(L\) and a nonzero weight in \(\mathbb{F}\) will be associated to each basepoint. We can usually assume that there is one basepoint on each \(S^{1}\) component of \(L\) and no basepoints on other components of \(L\), but at times it is convenient to allow additional basepoints.

In addition to weights, our immersed multicurves will be equipped with grading information, which we now define. We first review how gradings are defined on immersed curves in unmarked surfaces, and then describe a modification of this definition for marked surfaces. To define a grading on an immersed multicurve \(\iota:L\to\Sigma\) in an unmarked surface, we must first fix a trivialization of the tangent bundle of \(\Sigma\); in the case of the strip \([-\frac{1}{2},\frac{1}{2}]\times\mathbb{R}\) or the cylinder \(S^{1}\times\mathbb{R}\) we will use the obvious trivialization coming from viewing the strip as a subset of \(\mathbb{R}^{2}\) and from cutting the cylinder open to give the strip. Having fixed a trivialization, the tangent slope defines a map \(\tau\) from \(L\) to \(\mathbb{RP}^{1}\), which we identify with \(\mathbb{R}/\mathbb{Z}\). An orientation on \(L\) allows us to lift this to a map from \(L\) to \(S^{1}=\mathbb{R}/2\mathbb{Z}\). More generally, a \(\mathbb{Z}/N\mathbb{Z}\)-_grading_ on \(L\) is a lift of this map to \(\mathbb{R}/N\mathbb{Z}\), and a \(\mathbb{Z}\)-_grading_ on \(L\) is a lift \(\tilde{\tau}\) of this map to \(\mathbb{R}\). Note that each \(S^{1}\) component of \(L\) presents a potential obstruction to the existence of a \(\mathbb{Z}\)-grading. For such a component, the tangent slope map defines a loop in \(\mathbb{RP}^{1}\), and the winding number of this loop must be zero for this loop to lift to \(\mathbb{R}\). The _Maslov class_ of \(L\), denoted \(\mu_{L}\), is the element of \(\operatorname{Hom}(\pi_{1}(L),\mathbb{Z})\) that records this obstruction for all closed components. If \(\mu_{L}\) vanishes, we say that \(L\) is \(\mathbb{Z}\)-_gradable_.

We now describe a modified notion of gradings for immersed Lagrangians in marked surfaces, which allows the gradings to capture the interaction of the curves with marked points. For each marked point, we choose a half-open oriented arc starting at that marked point and converging to a puncture or infinite end of \(\Sigma\). We will refer to these arcs as _grading arcs_. The grading arcs should be chosen so that any compact curve in \(\Sigma\) intersects finitely many arcs.
Given such a choice of arcs, we now define a grading on \(L\) to be a piecewise continuous map \(\tilde{\tau}:L\to\mathbb{R}\) that lifts the tangent slope map and is continuous except at intersections of \(L\) with the arcs from the marked points, at which it has jump discontinuities of magnitude \(2\). More precisely, any time \(L\) crosses a grading arc passing from the left side to the right side of the arc, \(\tilde{\tau}\) increases by \(2\). The obstruction to such a grading is the _adjusted Maslov class_ of \(L\), an element of \(\operatorname{Hom}(\pi_{1}(L),\mathbb{Z})\) which records the change in tangent slope around each \(S^{1}\) component, taking into account the jump discontinuities described above. When the adjusted Maslov class vanishes then \(L\) is \(\mathbb{Z}\)-gradable; otherwise \(L\) only admits a \(\mathbb{Z}/N\mathbb{Z}\) grading for some \(N\). Note that the marked points have no effect on the grading modulo \(2\), and as before a \(\mathbb{Z}/2\mathbb{Z}\)-grading is equivalent to an orientation on \(L\). All of our immersed multicurves will be oriented, and unless otherwise noted they will all be \(\mathbb{Z}\)-graded.

Given immersed multicurves \(L_{0}\) and \(L_{1}\) in \(\Sigma\) that intersect transversally, we define \(CF(L_{0},L_{1})\) to be the module over \(\mathbb{F}[W]\) generated by the intersections of \(L_{0}\) and \(L_{1}\). If \(L_{0}\) and \(L_{1}\) both contain immersed arcs, we also make the requirement that on any given boundary component all endpoints of arcs in \(L_{1}\) occur after all endpoints of arcs in \(L_{0}\), with the order coming from the boundary orientation (in the case of compact boundary components, this is the reason for marking a stop; we interpret the order of endpoints by following the boundary orientation starting and ending at the stop). We remark that the requirement on ordering the endpoints of arcs is necessary given that we aim to promote \(CF(L_{0},L_{1})\) to a chain complex whose homology is invariant under reasonable homotopies of the curves, since sliding an endpoint of an arc in \(L_{0}\) past an endpoint of an arc in \(L_{1}\) would change the parity of the intersection number and thus could not preserve homology.

If \(L_{0}\) and \(L_{1}\) are oriented, then \(CF(L_{0},L_{1})\) has a \(\mathbb{Z}/2\mathbb{Z}\) grading given by the sign of intersection points as in Figure 3. This grading can be enhanced if \(L_{0}\) and \(L_{1}\) are \(\mathbb{Z}\)-graded. Given \(\mathbb{Z}\)-gradings \(\tilde{\tau}_{0}:L_{0}\to\mathbb{R}\) on \(L_{0}\) and \(\tilde{\tau}_{1}:L_{1}\to\mathbb{R}\) on \(L_{1}\), we can define a \(\mathbb{Z}\) grading on \(CF(L_{0},L_{1})\) as follows:

**Definition 3.1**.: For each intersection point \(p\) in \(L_{0}\cap L_{1}\), we define the grading \(\mathrm{gr}(p)\) to be the greatest integer less than \(\tilde{\tau}_{1}(p)-\tilde{\tau}_{0}(p)\). Equivalently, the grading is \(\tilde{\tau}_{1}(p)-\tilde{\tau}_{0}(p)-\theta_{01}(p)\), where \(\theta_{01}(p)\) is \(\frac{1}{\pi}\) times the angle covered when turning counterclockwise from \(L_{0}\) to \(L_{1}\).

Note that if \(L_{0}\) and \(L_{1}\) only carry \(\mathbb{Z}/N\mathbb{Z}\) gradings, then the definition above defines a \(\mathbb{Z}/N\mathbb{Z}\) grading on \(CF(L_{0},L_{1})\). In particular, given orientations on \(L_{0}\) and \(L_{1}\) this definition determines the \(\mathbb{Z}/2\mathbb{Z}\) grading described in Figure 3.
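As a simple illustration of Definition 3.1 (with hypothetical lift values chosen for concreteness): suppose that at \(p\) the curve \(L_{0}\) is horizontal with \(\tilde{\tau}_{0}(p)=0\) and \(L_{1}\) is vertical with \(\tilde{\tau}_{1}(p)=\frac{1}{2}\), so that \(\theta_{01}(p)=\frac{1}{2}\). Then
\[\mathrm{gr}(p)=\tilde{\tau}_{1}(p)-\tilde{\tau}_{0}(p)-\theta_{01}(p)=\tfrac{1}{2}-\tfrac{1}{2}=0,\]
which agrees with the first formulation, since the greatest integer less than \(\tfrac{1}{2}\) is \(0\). Regarding the same point as a generator of \(CF(L_{1},L_{0})\) instead, its grading is the greatest integer less than \(\tilde{\tau}_{0}(p)-\tilde{\tau}_{1}(p)=-\tfrac{1}{2}\), namely \(-1\); the two gradings sum to \(-1\), consistent with the identification discussed below.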
We remark that the grading on \(CF(L_{0},L_{1})\) only depends on the homotopy classes of the grading arcs used to define gradings on \(L_{0}\) and \(L_{1}\); if we apply a homotopy to the arc in \(\Sigma\), each time the arc passes an intersection point the gradings of \(L_{0}\) and \(L_{1}\) at that point both jump by two in the same direction, so their difference is unchanged.

**Remark 3.2**.: Readers may notice that the grading in Definition 3.1 differs from the usual definition of the grading by one. For instance, in [1] the degree of an intersection point corresponds to \(\tilde{\tau}_{0}(p)-\tilde{\tau}_{1}(p)\) plus the angle of clockwise rotation from \(L_{0}\) to \(L_{1}\). Likewise, the mod \(2\) degree defined in [1] differs from ours by one. This is due to the fact that we are using homological rather than cohomological conventions.

There is a canonical identification between \(CF(L_{0},L_{1})\) and \(CF(L_{1},L_{0})\), as they are generated by the same intersection points, but the gradings are different. That is, the grading of an intersection point depends on whether it is viewed as a point in \(L_{0}\cap L_{1}\) or \(L_{1}\cap L_{0}\). It is straightforward to check from Definition 3.1 that these two gradings for a given intersection point sum to \(-1\), so the identification between \(CF(L_{0},L_{1})\) and \(CF(L_{1},L_{0})\) takes the grading \(\mathrm{gr}\) to \(-1-\mathrm{gr}\).

Defining \(CF(L_{0},L_{1})\) requires \(L_{0}\) and \(L_{1}\) to intersect transversally. To allow for arbitrary Lagrangians, we can apply a small perturbation to one of them. In particular, for an immersed multicurve \(L\) let \(L^{\prime}\) denote an immersed multicurve which agrees with the pushoff of \(L\) by some small amount \(\epsilon\) to the right (with respect to the orientation on \(L\)) except in a small neighborhood of the basepoints on \(L\) or of the terminal endpoint of an arc component of \(L\), near which \(L^{\prime}\) lies to the left of \(L\) as in Figure 4. We then define \(CF(L_{0},L_{1})\) to be \(CF(L_{0},L^{\prime}_{1})\) for a sufficiently small perturbation. Note that if \(L_{0}\) and \(L_{1}\) were already transverse then the perturbation can be chosen small enough to not affect the intersection points. We remark that when perturbing a curve it is important to perturb in the way described above rather than simply pushing the curve to the same side everywhere; this is a combinatorial realization of the usual requirement for Floer homology that isotopies are Hamiltonian.

Figure 4. \(L^{\prime}\) is obtained by translating \(L\) slightly to the right, except in a neighborhood of the basepoints on \(L\) or the terminal endpoint of any arc component of \(L\). Near each basepoint, \(L^{\prime}\) lies to the left of \(L\) instead; this results in two intersection points between \(L\) and \(L^{\prime}\) near each basepoint. Near the endpoint of an arc in \(L\) at which the arc is oriented toward the boundary \(L^{\prime}\) also lies to the left of \(L\), requiring one additional intersection point.

Figure 3. The mod \(2\) grading on an intersection point \(p\in L_{0}\cap L_{1}\).

As a special case, we can define \(CF(L)\) to be \(CF(L,L)\), which means \(CF(L,L^{\prime})\) for a suitable small perturbation \(L^{\prime}\) of \(L\).
Note that there are two points in \(L\cap L^{\prime}\) for each self-intersection point of \(L\), and two additional points in \(L\cap L^{\prime}\) near each basepoint on \(L\) and one additional intersection point for each arc component of \(L\). A \(\mathbb{Z}\)-grading on an immersed multicurve \(L\) gives rise to a \(\mathbb{Z}\)-grading on the pushoff \(L^{\prime}\) and thus defines a \(\mathbb{Z}\)-grading on \(CF(L)\). It is easy to check that the pair of intersection points in \(L\cap L^{\prime}\) associated to any basepoint of \(L\) have gradings \(0\) and \(-1\). Similarly, the pair of intersection points associated to any self-intersection point of \(L\) have gradings that sum to \(-1\). Because of this relationship, it is convenient to encode the grading information for both points associated to a self-intersection of \(L\) by keeping track of only the even grading.

**Definition 3.3**.: The _degree_ of a self-intersection point \(p\) of \(L\) is the even integer \(\deg(p)\) such that the two intersection points of \(L\cap L^{\prime}\) corresponding to \(p\) have gradings \(\deg(p)\) and \(-1-\deg(p)\).

### Polygon counting maps and \(\mathcal{A}_{\infty}\) relations

For an immersed Lagrangian \(L\) in a marked surface, the module \(CF(L)\) can be equipped with operations giving it an \(\mathcal{A}_{\infty}\) structure. These operations count immersed polygons bounded by various perturbations of \(L\). More generally, given a collection of \(k+1\) pairwise-transverse immersed Lagrangians \(L_{0},\ldots,L_{k}\) in a marked surface \(\Sigma\), we will define a polygon counting operation
\[m_{k}:CF(L_{0},L_{1})\otimes\cdots\otimes CF(L_{k-1},L_{k})\to CF(L_{0},L_{k}).\]
We note that if \(\Sigma\) has boundary and the \(L_{i}\) have arc components, we will require that all endpoints of arcs in \(L_{j}\) occur after all endpoints in \(L_{i}\) with respect to the boundary orientation for any \(j>i\).

**Definition 3.4**.: Given Lagrangians \(L_{0},\ldots,L_{k}\) in \(\Sigma\), intersection points \(p_{i}\) in \(L_{i-1}\cap L_{i}\) for \(1\leq i\leq k\) and an intersection point \(q\) in \(L_{0}\cap L_{k}\), an _immersed \((k+1)\)-gon with corners \(p_{1},\ldots,p_{k}\), and \(q\)_ is an orientation preserving map
\[u:(D^{2},\partial D^{2})\rightarrow(\Sigma,L_{0}\cup\cdots\cup L_{k})\]
with the following properties:

* \(u(\bar{q})=q\) and \(u(\bar{p}_{i})=p_{i}\) for \(1\leq i\leq k\), where \(\bar{q},\bar{p}_{1},\ldots,\bar{p}_{k}\) are fixed points on \(\partial D^{2}\) appearing in clockwise order;
* \(u\) maps the segment of \(\partial D^{2}\) between \(\bar{p}_{i}\) and \(\bar{p}_{i+1}\) to \(L_{i}\) for \(1\leq i<k\), and the segments between \(\bar{q}\) and \(\bar{p}_{1}\) and between \(\bar{p}_{k}\) and \(\bar{q}\) are mapped to \(L_{0}\) and \(L_{k}\), respectively;
* \(u\) is an immersion away from the points \(\bar{q}\) and \(\bar{p}_{i}\); and
* the points \(q\) and \(p_{i}\) are convex corners of the image of \(u\) (that is, the image of a neighborhood of \(\bar{q}\) or \(\bar{p}_{i}\) covers only one of the four quadrants near the intersection point \(q\) or \(p_{i}\)).

We can make sense of this definition even when \(k=0\), in which case it describes an _immersed monogon_. These shapes have also been referred to as _teardrops_ or _fishtails_.
**Definition 3.5**.: Given a self-intersection point \(q\) of \(L_{0}\), an _immersed monogon with corner \(q\)_ is an orientation preserving map
\[u:(D^{2},\partial D^{2})\rightarrow(\Sigma,L_{0})\]
with the following properties:

* \(u(\bar{q})=q\), where \(\bar{q}\) is a fixed point on \(\partial D^{2}\);
* \(u\) is an immersion away from \(\bar{q}\); and
* the point \(q\) is a convex corner of the image of \(u\).

We will consider immersed polygons up to smooth reparametrization of the disk \(D^{2}\), and we define \(\mathcal{M}(p_{1},\ldots,p_{k},q)\) to be the set of equivalence classes of immersed \((k+1)\)-gons with corners \(p_{1},\ldots,p_{k}\), and \(q\). In order to ensure that the set \(\mathcal{M}(p_{1},\ldots,p_{k},q)\) is finite, we will need to impose an admissibility condition on the immersed Lagrangians. As usual, we require the surface \(\Sigma\) to be noncompact. We will restrict mainly to compact Lagrangians, but we do allow at most one of \(L_{0},\ldots,L_{k}\) to be noncompact provided it has finitely many self-intersection points (in most cases the non-compact curves we consider will be embedded).

**Definition 3.6**.: A collection of immersed Lagrangians \(L_{0},\ldots,L_{k}\) in a non-compact surface \(\Sigma\) is _admissible_ if they are pairwise transverse, no Lagrangian bounds an immersed disk in \(\Sigma\), and no two Lagrangians bound an immersed annulus in \(\Sigma\). We furthermore assume that at most one of \(L_{0},\ldots,L_{k}\) is non-compact and that any non-compact Lagrangian has finitely many self-intersection points.

**Proposition 3.7**.: _If \(L_{0},\ldots,L_{k}\) are admissible, then for any intersection points \(p_{1},\ldots,p_{k}\), and \(q\) as above the set \(\mathcal{M}(p_{1},\ldots,p_{k},q)\) is finite._

Proof.: The collection of Lagrangians defines a cell structure on the surface \(\Sigma\), where \(0\)-cells are intersections between Lagrangians, \(1\)-cells are segments of Lagrangians, and \(2\)-cells are the connected components of \(\Sigma\setminus(L_{0}\cup\cdots\cup L_{k})\). The image of any immersed polygon determines a \(2\)-chain (with nonnegative coefficient for each region) called the _domain_ of the polygon. In addition to the domain, an immersed polygon determines combinatorial gluing data specifying which edges of which regions should be identified to form the disk. Standard arguments show that to study immersed polygons it is enough to work with this combinatorial data. In particular, a polygon is uniquely determined up to equivalence by its domain and gluing data. Moreover, there are finitely many choices of combinatorial gluing data for each domain, and a domain is uniquely determined by its boundary, a \(1\)-cycle. Thus we need to show that there are finitely many \(1\)-cycles which could arise as the boundary of an immersed polygon with the given corners.

The boundary of a polygon with the given corners consists of paths from \(p_{1}\) to \(q\) in \(L_{0}\), from \(q\) to \(p_{k}\) in \(L_{k}\), and from \(p_{i+1}\) to \(p_{i}\) in \(L_{i}\) for each \(1\leq i<k\). Although each \(L_{i}\) may have multiple components, only one component will be involved in the boundary of a polygon, so we will assume without loss of generality that each \(L_{i}\) has a single component.
If \(L_{i}\) is an immersed arc or an immersed line, then there is a unique path up to reparametrization connecting the given corners, but if \(L_{i}\) is an immersed circle there are infinitely many such paths obtained from each other by adding full multiples of the closed curve \(L_{i}\). Thus the difference between any two potential boundaries of an immersed polygon, as \(1\)-cycles, is a collection of full copies of the \(L_{i}\)'s (specifically of those that are immersed circles). We will say that the length of a path in \(L_{i}\) is the number of times the path passes its starting point before reaching its ending point (this is the number of full copies of \(L_{i}\) that the path covers). Suppose there are infinitely many immersed polygons with the desired corners, and thus infinitely many distinct \(1\)-cycles representing their boundaries. Then there must be boundaries which contain arbitrarily long paths on at least one Lagrangian, say \(L_{\ell}\). We will consider a polygon \(u_{N}\) for which the \(L_{\ell}\) part of the boundary has length at least \(N\) for some very large \(N\). The finiteness of \(\mathcal{M}(p_{1},\ldots,p_{k},q)\) does not depend on the orientation of the Lagrangians, so we will assume without loss of generality that all Lagrangians are oriented such that their orientation agrees with the boundary orientation induced by \(u_{N}\).

Figure 5. An immersed \((k+1)\)-gon.

We will choose an arc \(\gamma\) in \(\Sigma\) and consider the preimage of \(\gamma\) under \(u_{N}\), noting that \(u_{N}^{-1}(\gamma)\) is a collection of disjoint arcs in \(D^{2}\). For each \(i\) let \(\bar{L}_{i}\) denote the portion of \(\partial D^{2}\) mapping to \(L_{i}\) under \(u_{N}\). We will choose \(\gamma\) with the property that for large enough \(N\) there are arbitrarily many arcs in \(u_{N}^{-1}(\gamma)\) with one endpoint on \(\bar{L}_{\ell}\) and one endpoint on \(\bar{L}_{m}\) for some \(m\neq\ell\). To see that this is possible, we first consider the case that \(L_{\ell}\) is homotopically nontrivial in \(\Sigma\). In this case we can let \(\gamma\) be an arc in \(\Sigma\) (with either closed ends on \(\partial\Sigma\) or open ends approaching punctures of \(\Sigma\)) that intersects \(L_{\ell}\) in a homotopically essential way, meaning that any curve homotopic to \(L_{\ell}\) intersects \(\gamma\) at least once. In particular, given a decomposition of \(\Sigma\) into \(0\)- and \(1\)-handles, we can take \(\gamma\) to be the cocore of a \(1\)-handle that \(L_{\ell}\) runs over. Since the path \(u_{N}(\bar{L}_{\ell})\) runs over \(L_{\ell}\) at least \(N\) times, \(u_{N}^{-1}(\gamma)\cap\bar{L}_{\ell}\) has at least \(N\) points and we would like to show that a large number of the arcs in \(u_{N}^{-1}(\gamma)\) starting at these points do not end on \(\bar{L}_{\ell}\) (since there are finitely many Lagrangians, it will follow that there is some \(m\) such that a large number of these arcs end on \(\bar{L}_{m}\)). Consider the path \(\bar{L}_{\ell}^{\prime}\) in \(D^{2}\) from \(\bar{p}_{\ell+1}\) to \(\bar{p}_{\ell}\) that follows \(\bar{L}_{\ell}\) except that just before hitting any arc in \(u_{N}^{-1}(\gamma)\) that has both endpoints on \(\bar{L}_{\ell}\) it turns left and follows (a pushoff of) the arc before continuing along \(\bar{L}_{\ell}\). This path is clearly homotopic to \(\bar{L}_{\ell}\), and \(u_{N}(\bar{L}_{\ell}^{\prime})\), being homotopic to \(u_{N}(\bar{L}_{\ell})\), must intersect \(\gamma\) at least \(N\) times.
For each intersection of \(u_{N}(\bar{L}_{\ell}^{\prime})\) with \(\gamma\) there is an arc in \(u_{N}^{-1}(\gamma)\) with exactly one end on \(\bar{L}_{\ell}\), so there are at least \(N\) of these as desired.

In the case that \(L_{\ell}\) is nullhomotopic we need a different way of choosing \(\gamma\). In this case there must be a point \(p\) in \(\Sigma\) about which \(L_{\ell}\) has negative winding number, since otherwise it bounds an immersed disk violating admissibility; we choose \(\gamma\) to be any arc passing through \(p\). We interpret the winding number as the signed intersection number of an arc going from this region to some fixed boundary or puncture of \(\Sigma\), and assume that the ends of \(\gamma\) approach this same boundary or puncture. The point \(p\) divides \(\gamma\) into two halves, and on either half there are more negative intersection points with \(L_{\ell}\) than positive intersection points. The negative intersection points are the ones at which the image of an arc in \(u_{N}^{-1}(\gamma)\) points away from \(p\) along \(\gamma\), and the positive intersection points are the ones at which the image of such an arc moves toward \(p\). It follows that at least one of the arcs starting at a negative intersection point and moving away from \(p\) does not stop at an intersection point with \(L_{\ell}\) and must end on some other \(L_{i}\); this gives rise to an arc in \(u_{N}^{-1}(\gamma)\) from \(\bar{L}_{\ell}\) to \(\bar{L}_{i}\). Since \(u_{N}(\bar{L}_{\ell})\) runs over \(L_{\ell}\) at least \(N\) times there are at least \(N\) such arcs.

Assume we have an arc \(\gamma\) as described above. Let \(\bar{\gamma}^{1},\bar{\gamma}^{2},\ldots,\bar{\gamma}^{N}\) denote a set of arcs in \(u_{N}^{-1}(\gamma)\) that have one endpoint on \(\bar{L}_{\ell}\) and one endpoint on \(\bar{L}_{m}\), and let \(\bar{x}_{\ell}^{i}\) and \(\bar{x}_{m}^{i}\) denote the endpoints of \(\bar{\gamma}^{i}\) on \(\bar{L}_{\ell}\) and \(\bar{L}_{m}\), respectively (see Figure 6). Since there are finitely many intersections of \(\gamma\) with both \(L_{\ell}\) and \(L_{m}\), for \(N\) large enough we can find distinct arcs \(\bar{\gamma}^{i}\) and \(\bar{\gamma}^{j}\) so that \(u_{N}(\bar{x}_{\ell}^{i})=u_{N}(\bar{x}_{\ell}^{j})\) and \(u_{N}(\bar{x}_{m}^{i})=u_{N}(\bar{x}_{m}^{j})\). Note that \(u_{N}(\bar{\gamma}^{i})\) and \(u_{N}(\bar{\gamma}^{j})\) are homotopic, since they both follow the unique path in \(\gamma\) connecting \(u_{N}(\bar{x}_{\ell}^{i})\) to \(u_{N}(\bar{x}_{m}^{i})\). Restricting \(u_{N}\) to the part of \(D^{2}\) between the two arcs \(\bar{\gamma}^{i}\) and \(\bar{\gamma}^{j}\) defines an immersed rectangle with one side on \(L_{\ell}\), one side on \(L_{m}\), and two identical opposite sides on \(\gamma\); identifying the two \(\gamma\) sides of this rectangle forms an immersed annulus bounded by \(L_{\ell}\) and \(L_{m}\). This contradicts the assumption that the curves are in admissible position.

Figure 7 shows two (non-admissible) collections of curves in the infinite cylinder that each bound an infinite number of \(4\)-gons; the local multiplicities of a particular \(4\)-gon \(u_{N}\) are indicated in the figure. In each example \(L_{1}\) plays the role of \(L_{\ell}\) in the proof of Proposition 3.7. The boundary of \(u_{N}\) runs over both \(L_{1}\) and \(L_{3}\) \(N\) times, forming a long strip; cutting this strip twice along the arc \(\gamma\) and regluing shows that there is an immersed annulus bounded by \(L_{1}\) and \(L_{3}\).
We remark that \(\Sigma\) being non-closed is essential in the proof of Proposition 3.7; on the right of Figure 7 is a collection of three embedded curves which bound infinitely many triangles but which are admissible, as no immersed annuli are present. It is still true that two Lagrangians are covered arbitrarily many times as the triangles grow, and we can choose a curve \(\gamma\) that intersects both such that for a large triangle there are arbitrarily many arcs in \(u_{N}^{-1}(\gamma)\) connecting the preimages of \(L_{1}\) and \(L_{0}\); the difference is that here \(\gamma\) is a closed curve, so there is not a unique path between two points and the images of different arcs in \(u_{N}^{-1}(\gamma)\) cannot be identified.

Figure 7. (Left and middle) Arrangements of Lagrangians in the infinite strip for which there are infinitely many polygons (the local multiplicities of each region in the \(N\)th such polygon are indicated). In each case there is an immersed annulus, so the curves are not admissible. (Right) Three Lagrangians in the closed torus bounding infinitely many triangles, even though there are no immersed annuli.

**Remark 3.8**.: This combinatorial proof of Proposition 3.7 is new, to the author's knowledge. But in the symplectic setting there is a simpler argument relying on the fact that Lagrangians in a non-closed surface are exact. Alternatively, one can define Floer homology with coefficients in a Novikov ring and keep track of the symplectic area of polygons; this approach works for closed surfaces as well (see [1]).

Given an immersed \((k+1)\)-gon \(u\) in \(\mathcal{M}(p_{1},\ldots,p_{k},q)\), we define a sign \((-1)^{s(u)}\) by multiplying contributions from each corner. The contribution of a corner \(p\) in \(L_{i}\cap L_{j}\) with \(i<j\) is \(-1\) if and only if the corner has odd grading and the orientation of \(L_{i}\) opposes the boundary orientation of the polygon. In particular, the corner \(p_{i}\) in \(L_{i}\cap L_{j}\) contributes \(-1\) if and only if the orientations of \(L_{i}\) and \(L_{j}\) both oppose the boundary orientation of the polygon, while the corner \(q\) contributes \(-1\) if and only if the orientation on \(L_{0}\) opposes the boundary orientation and the orientation on \(L_{k}\) agrees with the boundary orientation.

An immersed polygon is allowed to cover the marked points in \(\Sigma\), but we will keep track of how many times this happens. Let \(n_{w}(u)\) denote the total multiplicity with which the polygon \(u\) covers the marked points in \(\Sigma\). Finally, each immersed polygon \(u\) has weight \(c(u)\) given by the product of the weight or the inverse of the weight associated with any basepoints of the \(L_{i}\) occurring on the boundary, where the inverse of the weight is used if the boundary orientation opposes the orientation of \(L\) at the given basepoint. We now define:
\[m_{k}(p_{1},\ldots,p_{k})=\sum_{q\in L_{0}\cap L_{k}}\sum_{u\in\mathcal{M}(p_{1},\ldots,p_{k},q)}(-1)^{s(u)}c(u)W^{n_{w}(u)}q. \tag{4}\]
If the collection of Lagrangians is admissible then Proposition 3.7 ensures that the second sum above is finite for any \(q\) in \(L_{0}\cap L_{k}\), and the assumption that at most one of the Lagrangians is non-compact ensures that there are finitely many intersection points between any two Lagrangians, and in particular the first sum is finite.
Strictly speaking, the \(p_{i}\) and \(q\) appearing in (4) are intersection points in \(L_{i-1}\cap L_{i}\) and \(L_{0}\cap L_{k}\), respectively, but these are identified with generators of \(CF(L_{i-1},L_{i})\) and \(CF(L_{0},L_{k})\), so the same formula defines the desired map
\[m_{k}:CF(L_{0},L_{1})\otimes\cdots\otimes CF(L_{k-1},L_{k})\to CF(L_{0},L_{k}).\]
This definition makes sense for any \(k>0\). When \(k=0\) we also get a monogon counting map
\[m_{0}:\{1\}\to CF(L_{0},L_{0})=CF(L_{0}).\]
We think of \(m_{0}\) as a function with no inputs, so that \(m_{0}()\) is an element of \(CF(L_{0})\). The map \(m_{0}\) is defined by (4) just like \(m_{k}\), except that we need to be slightly more careful because in this case \(q\) is a self-intersection point of \(L_{0}\) and there are two generators of \(CF(L_{0})\) corresponding to each self-intersection point of \(L_{0}\). These two generators have gradings of opposite parity, and we will declare that in this case the relevant generator is the one with even grading. That is, if we let \(q^{+}\) denote the even grading generator of \(CF(L_{0})\) associated with the self-intersection point \(q\), we define
\[m_{0}()=\sum_{q}\sum_{u\in\mathcal{M}(q)}c(u)W^{n_{w}(u)}q^{+},\]
where the first sum is over self-intersection points \(q\) of \(L_{0}\). Recall that by admissibility there are finitely many self-intersection points. Note that the sign term \((-1)^{s(u)}\) from (4) is always positive since the orientation on \(L_{0}\) cannot both agree with and oppose the boundary orientation of the monogon.

**Proposition 3.9**.: _For \(k\geq 0\), the degree of the map \(m_{k}\) is \(k-2\)._

Proof.: Fix an immersed polygon \(u\) contributing a multiple of \(q\) to \(m_{k}(p_{1},\ldots,p_{k})\). Let \(\tilde{\tau}_{i}:L_{i}\to\mathbb{R}\) denote the grading on \(L_{i}\); for each corner \(p_{i}\) in \(L_{i-1}\cap L_{i}\), let \(\theta(p_{i})\) denote \(\frac{1}{\pi}\) times the angle covered when turning counterclockwise from \(L_{i-1}\) to \(L_{i}\), and let \(\theta(q)\) denote \(\frac{1}{\pi}\) times the angle covered when turning counterclockwise from \(L_{0}\) to \(L_{k}\). Recall that \(\operatorname{gr}(p_{i})=\tilde{\tau}_{i}(p_{i})-\tilde{\tau}_{i-1}(p_{i})-\theta(p_{i})\), and \(\operatorname{gr}(q)=\tilde{\tau}_{k}(q)-\tilde{\tau}_{0}(q)-\theta(q)\), so that
\[\sum_{i=1}^{k}\operatorname{gr}(p_{i})-\operatorname{gr}(q)=\sum_{i=0}^{k}\left[\tilde{\tau}_{i}(p_{i})-\tilde{\tau}_{i}(p_{i+1})\right]+\theta(q)-\sum_{i=1}^{k}\theta(p_{i}),\]
where in the first sum on the right hand side we adopt the convention that \(p_{0}=p_{k+1}=q\). Let \(\operatorname{rot}_{i}(u)\) denote the net rotation when traversing the \(L_{i}\) portion of \(\partial u\) (in radians, divided by \(\pi\)), and note that \(\tilde{\tau}_{i}(p_{i})-\tilde{\tau}_{i}(p_{i+1})\) is given by \(\operatorname{rot}_{i}(u)-2n_{w;i}(u)\) where \(n_{w;i}(u)\) is the signed number of times the \(L_{i}\) portion of \(\partial u\) crosses a grading arc from a \(w\) marked point. The sum \(\sum_{i=0}^{k}n_{w;i}(u)\) is simply \(n_{w}(u)\), the number of \(w\) marked points enclosed by \(u\).
The total rotation when traversing \(\partial u\), including the corners, must be \(2\) because \(u\) is an immersed polygon; it is given by
\[2=\sum_{i=0}^{k}\operatorname{rot}_{i}(u)+\theta(q)+\sum_{i=1}^{k}(1-\theta(p_{i})).\]
It follows that
\[\sum_{i=1}^{k}\operatorname{gr}(p_{i})-\operatorname{gr}(q)=\sum_{i=0}^{k}\left[\operatorname{rot}_{i}(u)-2n_{w;i}(u)\right]+\theta(q)+\sum_{i=1}^{k}\left[1-\theta(p_{i})\right]-k=2-k-2n_{w}(u).\]
Since the coefficient of \(q\) in \(m_{k}(p_{1},\dots,p_{k})\) contains \(W^{n_{w}(u)}\) and \(\operatorname{gr}(W)=-2\), it follows that
\[\operatorname{gr}(m_{k}(p_{1},\dots,p_{k}))-\sum_{i=1}^{k}\operatorname{gr}(p_{i})=k-2.\]

An important property of the maps \(m_{k}\) is that they satisfy the _\(\mathcal{A}_{\infty}\)-relations_:

**Proposition 3.10**.: _For any admissible collection of immersed curves \(L_{0},\dots,L_{k}\) and any collection of intersection points \(p_{1},\dots,p_{k}\) with \(p_{i}\) in \(L_{i-1}\cap L_{i}\), we have_
\[\sum_{\ell=0}^{k}\sum_{j=0}^{k-\ell}(-1)^{*}m_{k+1-\ell}(p_{1},\dots,p_{j},m_{\ell}(p_{j+1},\dots,p_{j+\ell}),p_{j+\ell+1},\dots,p_{k})=0\]
_where \(*=\sum_{i=j+\ell+1}^{k}(1+\operatorname{gr}(p_{i}))\). (Note that the \(\ell=0\) terms insert \(m_{0}()\) between consecutive inputs.)_

Proof.: This is a standard argument (see, for instance, [1, Lemma 3.6]). We consider immersed \((k+1)\)-gons with one obtuse corner; each of these can be cut in two different ways at the obtuse corner, and each resulting pair of polygons contributes to a term in the relation. Conversely, each pair of polygons contributing to a term in the relation arises in this way, as combining the polygons gives a polygon with one obtuse corner. The most subtle part of the argument is checking that signs and weights work out correctly. The main difference here compared to [1, Lemma 3.6] is that we allow monogons contributing to \(m_{0}\) operations, but this does not substantially affect the argument. Another difference is the presence of marked points, but it is clear that the two pairs of polygons arising from cutting a given polygon with an obtuse corner cover the same collection of marked points, and so the corresponding terms in the relation have the same power of \(W\).

Ultimately, we would like to view the bigon counting map \(m_{1}:CF(L_{0},L_{1})\to CF(L_{0},L_{1})\) as a differential and consider homology of the resulting complex. However, the \(\mathcal{A}_{\infty}\)-relations above show that this may not be possible. The relation with \(k=1\) is
\[m_{1}(m_{1}(p_{1}))=(-1)^{\operatorname{gr}(p_{1})}m_{2}(m_{0}(),p_{1})-m_{2}(p_{1},m_{0}()),\]
so \(m_{0}\) gives an obstruction to \((m_{1})^{2}\) being zero. For this reason it is common to simply restrict to Lagrangians which do not bound immersed monogons, so that the map \(m_{0}\) is always zero; this is the approach taken in [1]. However, this turns out to be too restrictive an assumption for our purposes. Instead, we will see that a broader class of Lagrangians can be decorated, and the polygon counting maps modified, in order to eliminate this obstruction and define a differential.

### Turning points and bounding chains

In order to define Floer homology, we will need to add decorations to our immersed Lagrangians; these decorations will take the form of linear combinations of self-intersection points. Given an immersed multicurve \(L\), let \(\mathcal{I}\) denote the set of self-intersection points of \(L\).
We will only need to include self-intersection points with non-positive degree, and at times we will restrict to only points with degree zero; let \(\mathcal{I}_{\leq 0}\) and \(\mathcal{I}_{0}\) denote the subsets of \(\mathcal{I}\) consisting of points with non-positive degree or degree zero, respectively. **Definition 3.11**.: A _collection of turning points_ on \(L\) is a linear combination, over \(\mathbb{F}\), of points in \(\mathcal{I}_{\leq 0}\). The objects we will consider are now pairs \((L,\mathbf{b})\) where \(L\) is a weighted and graded immersed Lagrangian in \(\Sigma\) and \(\mathbf{b}\) is a collection of turning points. A key observation is that a collection of turning points \(\mathbf{b}\) may be viewed as an element of \(CF(L)\). **Definition 3.12**.: A collection of turning points \(\mathbf{b}=\sum_{p\in\mathcal{I}_{\leq 0}}c_{p}p\) determines an element \[\overline{\mathbf{b}}=\sum_{p\in\mathcal{I}_{\leq 0}}c_{p}W^{-\deg(p)/2}p^{-}\] of \(CF(L)\), where \(p^{-}\) is the odd grading generator of \(CF(L)\) associated with the intersection point \(p\). The powers of \(W\) here are chosen so that \(\overline{\mathbf{b}}\) is a homogeneous element of \(CF(L)\) with grading \(-1\). Conversely, any homogeneous grading \(-1\) element of \(CF(L)\) determines a collection of turning points by taking the coefficients of all \(p^{-}\) for \(p\) in \(\mathcal{I}_{\leq 0}\) and ignoring the power of \(W\). **Remark 3.13**.: For any homogeneous grading \(-1\) element of \(CF(L)\), the coefficient of \(p^{-}\) must be zero for any \(p\) in \(\mathcal{I}\) with positive degree; this is because \(\operatorname{gr}(p^{-})=-1-\deg(p)<-1\) and we do not allow negative powers of \(W\). However, such an element of \(CF(L)\) may include the odd grading generators associated with a basepoint on \(L\). A homogeneous grading \(-1\) element of \(CF(L)\) is therefore equivalent to a linear combination (over \(\mathbb{F}\)) of points in \(\mathcal{I}_{\leq 0}\cup\mathcal{J}\), where \(\mathcal{J}\) denotes the set of basepoints on \(L\). A linear combination of basepoints in \(\mathcal{J}\) with nonzero coefficients is the same as a weighting of \(L\). Thus a collection of turning points on \(L\) along with the weights on \(L\) determines a homogeneous grading \(-1\) element of \(CF(L)\). In view of this, it might make sense to work with unweighted curves and include the weighting along with the collection of turning points as one decoration by an element of \(CF(L)\). However, since the generators of \(CF(L)\) coming from basepoints behave differently from the other generators in a few places, we have chosen to keep these decorations separate. Given a collection of immersed multicurves with collections of turning points \((L_{0},\mathbf{b}_{0}),\ldots,(L_{k},\mathbf{b}_{k})\), we can define a modified polygon counting operation \[m_{k}^{\mathbf{b}}:CF(L_{0},L_{1})\otimes\cdots\otimes CF(L_{k-1},L_{k}) \to CF(L_{0},L_{k})\] by allowing arbitrarily many copies of the turning points to be inserted between the corners. 
That is, for points \(p_{1},\ldots,p_{k}\) with \(p_{i}\) a generator of \(CF(L_{i-1},L_{i})\), we seek to define \[m_{k}^{\mathbf{b}}(p_{1},\ldots,p_{k})=\sum_{n_{0},\ldots,n_{k}\geq 0}m_{k+n_{0} +\cdots+n_{k}}(\underbrace{\overline{\mathbf{b}}_{0},\ldots,\overline{\mathbf{ b}}_{0}}_{n_{0}},p_{1},\underbrace{\overline{\mathbf{b}}_{1},\ldots,\overline{ \mathbf{b}}_{1}}_{n_{1}},p_{2},\ldots,p_{k},\underbrace{\overline{\mathbf{b}}_{ k},\ldots,\overline{\mathbf{b}}_{k}}_{n_{k}}).\] We can interpret \(m_{k}^{\mathbf{b}}(p_{1},\ldots,p_{k})\) as counting polygons with at least \(k+1\) corners, where \(k\) of the corners are the intersection points corresponding to \(p_{1},\ldots,p_{k}\), one is an intersection point in \(L_{0}\cap L_{k}\) corresponding to some output generator \(q\) of \(CF(L_{0},L_{k})\), and the remaining corners are self-intersection points of \(L_{i}\) for some \(i\) whose coefficient is nonzero in \(\mathbf{b}_{i}\). We will refer to corners of the third type as _false corners_. Strictly speaking, a false corner at a self-intersection point \(p\) in \(L_{i}\) should be viewed as the odd grading generator \(p^{-}\) of \(CF(L_{i},L_{i}^{\prime})\) associated with \(p\). Each such polygon contributes to \(m_{k}^{\mathbf{b}}(p_{1},\ldots,p_{k})\) with a weight in \(\mathbb{F}[W]\) given by the product of the coefficient of \(p^{-}\) in \(\overline{\mathbf{b}}_{i}\) for each false corner at an intersection point \(p\). Since each false corner is an odd grading element of \(CF(L_{i})\), the orientation on \(L_{i}\) is consistent at the corner in the sense that it either agrees with the boundary orientation of the polygon before and after the corner or it opposes the boundary orientation before and after the corner; note that in the latter case the false corner contributes \(-1\) to the sign of the polygon.

To formalize the view of counting polygons with false corners, let a _polygonal path_ in \(L\) be a piecewise smooth path in \(L\) in which adjacent smooth sections are connected by a left turn at a self-intersection point of \(L\), and for which the path either always follows or always opposes the orientation on \(L\). Given a collection of turning points \(\mathbf{b}\) for \(L\), we say that a polygonal path is _consistent with_ \(\mathbf{b}\) if it only has left turns at self-intersection points that have nonzero coefficient in \(\mathbf{b}\). A polygon counted in \(m_{k}^{\mathbf{b}}(p_{1},\ldots,p_{k})\) can be thought of as the image of a \((k+1)\)-gon whose corners map to \(p_{1},\ldots,p_{k}\) and \(q\) and whose \(i\)th side maps to a polygonal path in \(L_{i}\) consistent with \(\mathbf{b}_{i}\). Keeping with the terminology introduced above, we will refer to the left turns along the polygonal paths as _false corners_ of the polygon.

**Definition 3.14**.: Given intersection points \(p_{i}\) in \(L_{i-1}\cap L_{i}\) for \(1\leq i\leq k\) and an intersection point \(q\) in \(L_{0}\cap L_{k}\), a _generalized immersed \((k+1)\)-gon with corners \(p_{1},\ldots,p_{k}\), and \(q\)_ is an orientation preserving map \[u:(D^{2},\partial D^{2})\to(\Sigma,L_{0}\cup\cdots\cup L_{k})\] with the following properties:

* \(u(\bar{q})=q\) and \(u(\bar{p}_{i})=p_{i}\) for \(1\leq i\leq k\), where \(\bar{q},\bar{p}_{1},\ldots,\bar{p}_{k}\) are fixed points on \(\partial D^{2}\) appearing in clockwise order. 
* \(u\) maps the segment of \(\partial D^{2}\) from \(\bar{p}_{i+1}\) to \(\bar{p}_{i}\) to a polygonal path in \(L_{i}\) consistent with \(\mathbf{b}_{i}\) from \(p_{i+1}\) to \(p_{i}\), for \(1\leq i<k\), and the segments from \(\bar{p}_{1}\) to \(\bar{q}\) and from \(\bar{q}\) to \(\bar{p}_{k}\) are mapped to polygonal paths in \(L_{0}\) and \(L_{k}\) consistent with \(\mathbf{b}_{0}\) and \(\mathbf{b}_{k}\), respectively;
* \(u\) is an immersion away from the points \(\bar{q}\), \(\bar{p}_{i}\), and the preimages of any left turns on the polygonal paths in the boundary; and
* the points \(q\) and \(p_{i}\) are convex corners of the image of \(u\).

We define \(\mathcal{M}^{\mathbf{b}}(p_{1},\ldots,p_{k},q)\) to be the set of equivalence classes of generalized immersed polygons with corners \(p_{1},\ldots,p_{k}\) and \(q\). A polygonal path in \(L_{i}\) consistent with \(\mathbf{b}_{i}\) has a weight in \(\mathbb{F}[W]\) that is the product of a contribution for each left turn along the path and each basepoint of \(L\) passed. The contribution of a left turn at an intersection point \(p\) is the coefficient of \(p^{-}\) in \(\overline{\mathbf{b}}_{i}\) (recall that this is the coefficient of \(p\) in \(\mathbf{b}_{i}\) times \(W^{-\deg(p)/2}\)) if the polygonal path follows the orientation on \(L_{i}\), and \(-1\) times the coefficient of \(p^{-}\) in \(\overline{\mathbf{b}}_{i}\) if the polygonal path opposes the orientation on \(L_{i}\). As usual, the contribution of a basepoint is the weight associated to that basepoint or its inverse, depending on whether the path is following or opposing the orientation on \(L_{i}\). We then define a weight \(c(u)\) for a polygon \(u\) in \(\mathcal{M}^{\mathbf{b}}(p_{1},\ldots,p_{k},q)\) to be the product of the weights of the \(k+1\) polygonal paths making up the boundary of the polygon. With this notation, \(m_{k}^{\mathbf{b}}\) can equivalently be defined by \[m_{k}^{\mathbf{b}}(p_{1},\ldots,p_{k})=\sum_{q\in L_{0}\cap L_{k}}\sum_{u\in \mathcal{M}^{\mathbf{b}}(p_{1},\ldots,p_{k},q)}(-1)^{s(u)}c(u)W^{n_{w}(u)}q,\] where \((-1)^{s(u)}\) is the usual product of signs associated with the corners \(p_{1},\ldots,p_{k}\), and \(q\).

We still need to enhance our notion of admissibility to ensure that only finitely many polygons in \(\mathcal{M}^{\mathbf{b}}(p_{1},\ldots,p_{k},q)\) contribute; this entails prohibiting certain generalized immersed disks and annuli. Just as \(m_{k}^{\mathbf{b}}\) counts polygons which may have any number of false corners inserted on each side, we will consider immersed disks and annuli in which false corners may appear on each boundary component. More precisely, a _generalized immersed disk_ bounded by \((L_{i},\mathbf{b}_{i})\) is a map \(u:D^{2}\to\Sigma\) taking \(\partial D^{2}\) to a closed polygonal path in \(L_{i}\) consistent with \(\mathbf{b}_{i}\) that is an immersion away from the preimages of the false corners on the boundary. If \(A\) is the annulus \(1\leq|z|\leq 2\) in the complex plane, a _generalized immersed annulus_ bounded by \((L_{i},\mathbf{b}_{i})\) and \((L_{j},\mathbf{b}_{j})\) is a map \(u:A\to\Sigma\) that takes the outer boundary to a closed polygonal path in \(L_{i}\) consistent with \(\mathbf{b}_{i}\) and the inner boundary to a closed polygonal path in \(L_{j}\) consistent with \(\mathbf{b}_{j}\), such that \(u\) is an immersion except at the preimages of the false corners. 
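Before turning to admissibility, it may help to record the lowest-order terms of the deformed differential; for a generator \(p\) of \(CF(L_{0},L_{1})\), unwinding the definition above gives \[m_{1}^{\mathbf{b}}(p)=m_{1}(p)+m_{2}(\overline{\mathbf{b}}_{0},p)+m_{2}(p, \overline{\mathbf{b}}_{1})+m_{3}(\overline{\mathbf{b}}_{0},\overline{\mathbf{b }}_{0},p)+m_{3}(\overline{\mathbf{b}}_{0},p,\overline{\mathbf{b}}_{1})+m_{3}(p, \overline{\mathbf{b}}_{1},\overline{\mathbf{b}}_{1})+\cdots,\] where the displayed terms count generalized bigons with at most two false corners.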
**Definition 3.15**.: A collection of immersed Lagrangians with turning points \((L_{0},\mathbf{b}_{0}),\ldots,(L_{k},\mathbf{b}_{k})\) in \(\Sigma\) is _admissible_ if the immersed multicurves \(L_{i}\) are pairwise transverse, no decorated multicurve bounds a generalized immersed disk, and no two decorated multicurves bound a generalized immersed annulus. We also assume all but at most one of \(L_{0},\ldots,L_{k}\) are compact, and any non-compact Lagrangians have finitely many self-intersection points.

**Proposition 3.16**.: _If \((L_{0},\mathbf{b}_{0}),\ldots,(L_{k},\mathbf{b}_{k})\) are admissible, then for any intersection points \(p_{1},\ldots,p_{k}\), and \(q\) as above, the set \(\mathcal{M}^{\mathbf{b}}(p_{1},\ldots,p_{k},q)\) is finite. Thus the operation \(m_{k}^{\mathbf{b}}\) is well-defined._

Proof.: The proof is essentially the same as Proposition 3.7. The main difference is that the multicurve \(L_{\ell}\) may contain more than one closed component, so an arbitrarily long path in \(L_{\ell}\) need not cover every point of \(L_{\ell}\) many times. But if we consider infinitely many paths in \(L_{\ell}\), then there must be a path covering some point arbitrarily many times; we choose \(\bar{p}_{\ell}\) to be a preimage of this point, and the proof is otherwise the same. 

For any choice of turning points \(\mathbf{b}_{i}\) on each immersed multicurve \(L_{i}\), the modified polygon maps \(m_{k}^{\mathbf{b}}\) still satisfy the \(\mathcal{A}_{\infty}\) relations.

**Proposition 3.17**.: _For any admissible collection of immersed multicurves with turning points \((L_{0},\mathbf{b}_{0}),\dots,(L_{k},\mathbf{b}_{k})\) and any collection of intersection points \(p_{1},\dots,p_{k}\) with \(p_{i}\) in \(L_{i-1}\cap L_{i}\), we have_ \[\sum_{\ell=0}^{k}\sum_{j=0}^{k-\ell}(-1)^{*}m_{k+1-\ell}^{\mathbf{b}}(p_{1}, \dots,p_{j},m_{\ell}^{\mathbf{b}}(p_{j+1},\dots,p_{j+\ell}),p_{j+\ell+1},\dots, p_{k})=0,\] _where \(*=\sum_{i=j+\ell+1}^{k}1+\operatorname{gr}(p_{i})\)._

Proof.: This follows from the usual proof of the \(\mathcal{A}_{\infty}\) relations. The sum in the relation counts pairs of polygons such that the first contributes some \(q\) to \[m_{\ell+n}(\vec{\mathbf{b}}_{j},p_{j+1},\vec{\mathbf{b}}_{j+1},p_{j+2},\dots,p _{j+\ell},\vec{\mathbf{b}}_{j+\ell}),\] where here \(\vec{\mathbf{b}}_{i}\) represents a sequence of any number of false corners in \(\mathbf{b}_{i}\) and \(n\) is the total number of such false corners, and the second contributes \(q^{\prime}\) to \[m_{k+1-\ell+n^{\prime}}(\vec{\mathbf{b}}_{0},p_{1},\vec{\mathbf{b}}_{1},p_{2}, \dots,p_{j},\vec{\mathbf{b}}_{j},q,\vec{\mathbf{b}}_{j+\ell},p_{j+\ell},\dots, p_{k},\vec{\mathbf{b}}_{k}),\] where again the \(\vec{\mathbf{b}}_{i}\)'s stand in for a total of \(n^{\prime}\) false corners. The union of these two polygons forms a polygon with one obtuse corner. This polygon has corners at \(q\) and \(p_{1}\),..., \(p_{k}\), as well as false corners, and the obtuse corner may be of any type. Cutting in the two different ways at the obtuse corner gives the two pairs of polygons contributing a pair of terms on the left side of the relation, and it can be checked as before that these terms contribute with opposite weight. 

Although we will consider more general collections of turning points as intermediate objects in our proof, we will ultimately be interested in collections of turning points satisfying an additional constraint. 
**Definition 3.18**.: Given an immersed multicurve \(L\), a _bounding chain_ for \(L\) is a collection of turning points \(\mathbf{b}\) (or, by slight abuse of notation, its corresponding element \(\overline{\mathbf{b}}\in CF(L)\)) such that \[m_{0}^{\mathbf{b}}=\sum_{k\geq 0}m_{k}(\underbrace{\overline{\mathbf{b}}, \dots,\overline{\mathbf{b}}}_{k})=0.\]

This constraint on \(\mathbf{b}\) is known as the _Maurer-Cartan equation_. The motivation for this constraint should now be clear: the \(\mathcal{A}_{\infty}\) relations imply that \(m_{0}\) is an obstruction to \(m_{1}\) being a differential; since the same relations hold for the modified operations \(m_{k}^{\mathbf{b}}\), \(m_{0}^{\mathbf{b}}\) is an obstruction to \(m_{1}^{\mathbf{b}}\) being a differential.

**Proposition 3.19**.: _If \((L_{0},\mathbf{b}_{0})\) and \((L_{1},\mathbf{b}_{1})\) are immersed multicurves decorated with bounding chains, then_ \[m_{1}^{\mathbf{b}}:CF(L_{0},L_{1})\to CF(L_{0},L_{1})\] _is a differential._

Proof.: This is immediate from the \(\mathcal{A}_{\infty}\) relations and the fact that \(m_{0}^{\mathbf{b}}=0\). 

**Definition 3.20**.: Given two immersed multicurves \(L_{0}\) and \(L_{1}\) decorated with bounding chains \(\mathbf{b}_{0}\) and \(\mathbf{b}_{1}\), the _Floer chain complex_ \(CF\left((L_{0},\mathbf{b}_{0}),(L_{1},\mathbf{b}_{1})\right)\) is defined to be the graded complex \(CF(L_{0},L_{1})\) equipped with the differential \(\partial=m_{1}^{\mathbf{b}}\).

If we ignore marked points, \(CF\left((L_{0},\mathbf{b}_{0}),(L_{1},\mathbf{b}_{1})\right)\) becomes a chain complex over \(\mathbb{F}\). In this case it makes sense to take its homology, which we denote \(HF\left((L_{0},\mathbf{b}_{0}),(L_{1},\mathbf{b}_{1})\right)\), but in general we will view \(CF\left((L_{0},\mathbf{b}_{0}),(L_{1},\mathbf{b}_{1})\right)\) as a chain homotopy equivalence class of graded chain complexes over \(\mathbb{F}[W]\).

### Invariance of Floer homology

We will now investigate the effect of applying a homotopy to an immersed curve with bounding chain \((L,\mathbf{b})\). In particular, we will show that the Floer chain complex is invariant, up to chain homotopy equivalence, under such homotopies. We will first explain what we mean by a homotopy of a pair \((L,\mathbf{b})\); in short, this is simply a regular homotopy of the immersed multicurve \(L\) with a corresponding effect on the bounding chain \(\mathbf{b}\). We consider regular homotopies of \(L\) that do not pass through any marked points. We allow any regular homotopy of \(L\) that does not remove a self-intersection point with nonzero coefficient in \(\mathbf{b}\). In some cases homotopies which remove self-intersection points with nonzero coefficients in \(\mathbf{b}\) are also allowed, as in the reverse of move (j) in Figure 8. Note that in this situation admissibility implies that at most one of the two intersection points has nonzero coefficient in \(\mathbf{b}\), since otherwise the bigon in question would form a generalized immersed disk. We emphasize that removing two intersections as in the reverse of move (i) is only possible when neither intersection point has nonzero coefficient in \(\mathbf{b}\). 
Any regular homotopy of \(L\) can be decomposed as the composition of smaller homotopies, each of which can be realized either by an ambient isotopy of \(\Sigma\) (fixing the marked points), by a local move sliding a piece of \(L\) past a self-intersection point or basepoint, as in (f) and (g) of Figure 8, or by a local move adding or removing two intersection points as in (i) or (j) of Figure 8. By a regular homotopy of a decorated immersed multicurve \((L,\mathbf{b})\) we mean the composition of a sequence of these small homotopies of \(L\) along with the corresponding change to \(\mathbf{b}\) as indicated in Figure 8 (for ambient isotopies of \(\Sigma\) the bounding chain \(\mathbf{b}\) is unchanged). Consider the Floer complex associated with a pair of decorated immersed multicurves \((L_{0},\mathbf{b}_{0})\) and \((L_{1},\mathbf{b}_{1})\). We will show invariance under homotopy of \((L_{0},\mathbf{b}_{0})\) while keeping \((L_{1},\mathbf{b}_{1})\) fixed; homotopies of \((L_{1},\mathbf{b}_{1})\) follow similarly. As discussed above, any homotopy of \((L_{0},\mathbf{b}_{0})\) can be broken into small steps which either can be realized by an ambient isotopy of the surface (it is clear that these steps preserve the complex exactly) or have one of the forms shown in Figure 8.

**Proposition 3.21**.: _If \((L_{0},\mathbf{b}_{0})\) and \((L_{1},\mathbf{b}_{1})\) are modified by one of the local changes in Figure 8 and the configuration is admissible before and after the change, then the complex \(CF\left((L_{0},\mathbf{b}_{0}),(L_{1},\mathbf{b}_{1})\right)\) is unchanged up to chain homotopy equivalence._

Proof.: For each move there is a clear identification of generators of \(CF\left((L_{0},\mathbf{b}_{0}),(L_{1},\mathbf{b}_{1})\right)\) before and after the move, with the exception of move (e), where the complex on the right has two additional generators. Let \(D\) denote the pictured disk in which the local modification occurs. To understand how the moves affect the differential \(m_{1}^{\mathbf{b}}\), we must consider immersed polygons which intersect \(D\). For move (a), it is clear that any polygon on the left that does not have \(x\) or \(y\) as a corner will be effectively unchanged by this move. It is also clear that the differential is unaffected by this move if the coefficient \(c\) in \(\mathbf{b}_{1}\) of the given self-intersection point of \(L_{1}\) is zero. If the coefficient \(c\) is nonzero, we must consider generalized immersed bigons which have a false corner at this self-intersection point of \(L_{1}\). Suppose there is a polygon contributing \(z\) to \(m_{1}^{\mathbf{b}}(x)\) on the left; this polygon must contain one of the triangles \(u_{1}\) or \(u_{2}\) shown in Figure 9. If it contains \(u_{1}\), then replacing \(u_{1}\) with \(v_{1}\) gives a polygon contributing \(z\) to \(m_{1}^{\mathbf{b}}(x^{\prime})\) on the right, and replacing \(u_{1}\) with \(v_{2}\) gives another polygon contributing \(c\cdot z\) to \(m_{1}^{\mathbf{b}}(y^{\prime})\) on the right. If the polygon contains \(u_{2}\), then replacing \(u_{2}\) with \(v_{3}\) yields a polygon contributing \(z\) to \(m_{1}^{\mathbf{b}}(x^{\prime})\) on the right. In this case, there is an additional polygon on the left contributing \(-c\cdot z\) to \(m_{1}^{\mathbf{b}}(y)\) obtained by replacing \(u_{2}\) with \(u_{3}\), and this has no analogous polygon on the right. 
All other polygons with initial corner \(x\) or \(y\) on the left or \(x^{\prime}\) or \(y^{\prime}\) on the right are in clear one-to-one correspondence. We see that (a) has the effect of adding \(c\) times \(m_{1}^{\mathbf{b}}(x)\) to \(m_{1}^{\mathbf{b}}(y)\); that is, \(m_{1}^{\mathbf{b}}(y^{\prime})\) on the right is identified with \(m_{1}^{\mathbf{b}}(y)+c\cdot m_{1}^{\mathbf{b}}(x)\) on the left, and \(m_{1}^{\mathbf{b}}\) acts the same on all other generators. A similar argument shows that for each polygon contributing a multiple of \(y\) to \(m_{1}^{\mathbf{b}}(z)\) for some \(z\), there is a corresponding contribution of \(y^{\prime}-cx^{\prime}\) to \(m_{1}^{\mathbf{b}}(z^{\prime})\) on the right. It follows that move (a) has the effect of a change of basis; the two complexes are isomorphic, where the isomorphism identifies \(y^{\prime}\) with \(y+cx\), \(x^{\prime}\) with \(x\), and is the identity on all generators outside of \(D\).

Figure 8. Given a pair of decorated immersed multicurves \((L_{0},\mathbf{b}_{0})\) and \((L_{1},\mathbf{b}_{1})\), a homotopy of \((L_{0},\mathbf{b}_{0})\) can be broken into smaller homotopies such that each is either realized by an ambient isotopy of \(\Sigma\) fixing \(L_{1}\) or by one of the local modifications shown here. \(L_{0}\) is red and \(L_{1}\) is blue. Each self-intersection point on \(L_{0}\) or \(L_{1}\) is labeled by the coefficient of the corresponding generator of \(\mathbf{b}_{0}\) or \(\mathbf{b}_{1}\), and basepoints on \(L_{i}\) are labeled with the corresponding weight. Orientations on arcs are included unless the orientation does not affect the argument for the given move. Move \((j)\) requires non-local information about the immersed curves outside of the specified disk; the red shaded bigon in the figure indicates that the weighted sum of all paths in \(L_{0}\) outside the disk that form a bigon with the given portion of the boundary of the disk has weight \(c\).

Move (b) is similar to move (a) and we omit the details. Once again the move produces an isomorphic complex, with the isomorphism corresponding to a change of basis identifying \(x^{\prime}\) with \(x-cy\) and \(y^{\prime}\) with \(y\). In moves (c) and (d) there is a one-to-one correspondence not only of generators but also of polygons contributing to the differential, but the weights of polygons involving the intersection point in \(D\) are affected. As a result of the change, any generalized bigon with initial generator \(x\) either gains the basepoint on its boundary with positive orientation or loses it with negative orientation; in either case, the weight of the polygon is multiplied by \(c\). Any polygon with \(x\) as the final generator either gains the basepoint with negative orientation or loses it with positive orientation, so the weight is multiplied by \(c^{-1}\). For either move, the Floer complexes are isomorphic and the isomorphism identifies \(cx\) with \(x^{\prime}\). Move (e) introduces two new intersection points, but there is a clear bigon on the right contributing a differential from \(x^{\prime}\) to \(y^{\prime}\). We claim that the complex on the left is precisely the result of canceling this differential in the complex on the right, and thus the complexes are chain homotopy equivalent. 
First note that there can be no other polygon contributing \(y^{\prime}\) to \(m_{1}^{\mathbf{b}}(x^{\prime})\), since the portion of such a polygon outside of \(D\) combined with the strip between \(L_{0}\) and \(L_{1}\) in \(D\) on the left would produce a generalized immersed annulus, implying that the curves were not in admissible position on the left side. To cancel the differential from \(x^{\prime}\) to \(y^{\prime}\), for each pair of polygons contributing \(ay^{\prime}\) to \(m_{1}^{\mathbf{b}}(w)\) and \(bz\) to \(m_{1}^{\mathbf{b}}(x^{\prime})\) we must add \(abz\) to \(m_{1}^{\mathbf{b}}(w)\). Such polygons would approach \(y^{\prime}\) from the left and leave \(x^{\prime}\) on the right, and it is clear that when the finger move in \(D\) is reversed these two polygons form a single polygon connecting \(w\) to \(z\) whose weight is the product of the weights of the two polygons. Since all other bigons are unchanged, the claim follows.

Move (f) is similar to moves (a) and (b) and move (g) is similar to moves (c) and (d), but in each case the complex is unchanged as a result of the changes to the bounding chain pictured. Combining two weighted basepoints into one as in move (h) clearly has no effect on the complex. Move (i) has no effect on the complex; since the two new self-intersection points on the right are added to the bounding chain with coefficient \(0\), we do not consider polygons with corners at these intersection points, and there is a one-to-one correspondence of polygons, which preserves weights, before and after the move.

Move (j) is the only move which requires knowledge of how \((L_{0},\mathbf{b}_{0})\) behaves outside of \(D\). The move performs a finger move that introduces two new self-intersection points of \(L_{0}\) in \(D\), but in some cases we need to add one of these to \(\mathbf{b}_{0}\) with nonzero coefficient. In particular, if the two right endpoints are connected by a polygonal path in \((L_{0},\mathbf{b}_{0})\) that forms an immersed polygon with the relevant segment of the boundary of \(D\), and if this polygonal path (or, more precisely, the sum of all such paths) has weight \(c\), then we must add the leftmost of the two new self-intersection points to \(\mathbf{b}_{0}\) with coefficient \(c\). This is clearly necessary for \(\mathbf{b}_{0}\) to remain a bounding chain, since the path outside of \(D\) contributes \(c\) times the other new intersection point to \(m_{0}^{\mathbf{b}}\) and this can only be canceled by the newly formed bigon in \(D\). Having made this change, it is straightforward to check that the complex is unchanged. For instance, if a polygon on the left intersects \(D\) in the rectangular strip between the two strands of \(L_{0}\) and also contains the \(c\)-weighted path outside \(D\) to the right, then the finger move destroys this polygon but creates a new truncated one with a false corner at the weighted self-intersection point in \(D\), and the contribution of this new polygon to \(m_{1}^{\mathbf{b}}\) is precisely the same.

### Doubly marked surfaces

So far we have assumed that all marked points on a surface are treated the same. In fact, we are primarily interested in surfaces with two families of basepoints, a collection \(\{w_{i}\}_{i\in I}\) of \(w\)_-marked points_ and a collection \(\{z_{i}\}_{i\in I}\) of \(z\)_-marked points_. We call such a surface _doubly marked_ (even if there are infinitely many marked points) because there are two distinct types of marked points. 
It is straightforward to extend the definitions in the rest of this section to the doubly marked setting. Instead of \(W\) we have two formal variables \(U\) and \(V\), with the variable \(U\) associated with the \(w\)-marked points and the variable \(V\) associated with the \(z\)-marked points. The relevant coefficients are \(\mathcal{R}^{-}=\mathbb{F}[U,V]\) rather than \(\mathbb{F}[W]\). An immersed Lagrangian \(L\) will be equipped with a bigrading \((\tilde{\tau}_{w},\tilde{\tau}_{z})\), where \(\tilde{\tau}_{w}:L\to\mathbb{R}\) is a grading in the usual sense with respect to the \(w\)-marked points only (that is, it has discontinuities at intersections of \(L\) with grading arcs emanating from the \(w\)-marked points but not at grading arcs coming from \(z\)-marked points) and \(\tilde{\tau}_{z}:L\to\mathbb{R}\) is a grading with respect to only the \(z\)-marked points. Each self-intersection point of \(L\) has a bidegree \((\deg_{w},\deg_{z})\), where each degree is defined as usual with respect to the relevant grading. Given two Lagrangians \(L_{0}\) and \(L_{1}\), \(CF(L_{0},L_{1})\) is generated over \(\mathcal{R}^{-}\) by intersections of \(L_{0}\) with (a suitable perturbation of) \(L_{1}\). Bigradings on \(L_{0}\) and \(L_{1}\) induce a bigrading \((\mathrm{gr}_{w},\mathrm{gr}_{z})\) on the \(\mathcal{R}^{-}\)-module \(CF(L_{0},L_{1})\). The operations \(m_{k}\) count immersed \((k+1)\)-gons, where the coefficient in \(\mathcal{R}^{-}\) records the number of times each type of marked point is covered. More precisely, given Lagrangians \(L_{0},\dots,L_{k}\) and intersection points \(p_{1},\dots,p_{k}\) with \(p_{i}\) in \(L_{i-1}\cap L_{i}\), we have \[m_{k}(p_{1},\dots,p_{k})=\sum_{q\in L_{0}\cap L_{k}}\sum_{u\in\mathcal{M}(p_{1 },\dots,p_{k},q)}(-1)^{s(u)}U^{n_{w}(u)}V^{n_{z}(u)}q, \tag{5}\] where \(n_{w}(u)\) and \(n_{z}(u)\) are the numbers of times \(u\) covers \(w\)-marked points and \(z\)-marked points, respectively. Since ignoring either family of marked points returns us to the singly marked setting, Proposition 3.9 implies that the map \(m_{k}\) has bidegree \((k-2,k-2)\).

A collection of turning points \(\mathbf{b}\) on an immersed Lagrangian \(L\) in a doubly marked surface is once again a linear combination (over \(\mathbb{F}\)) of self-intersection points in \(\mathcal{I}_{\leq 0}\), where \(\mathcal{I}_{\leq 0}\) is the subset of self-intersection points for which the degree is non-positive with respect to both gradings. This determines a homogeneous element \(\overline{\mathbf{b}}\) of bigrading \((-1,-1)\), which can be viewed as a linear combination over \(\mathcal{R}^{-}\) of points in \(\mathcal{I}_{\leq 0}\) whose coefficient for a given self-intersection point \(p\) is given by multiplying the corresponding coefficient of \(\mathbf{b}\) by \(U^{-\deg_{w}(p)/2}V^{-\deg_{z}(p)/2}\). We can define the modified map \(m_{k}^{\mathbf{b}}\) just as before, except that coefficients include powers of \(V\) as well as powers of \(U\). The proof of the \(\mathcal{A}_{\infty}\)-relations (Proposition 3.10) is unchanged: for pairs of terms that are meant to cancel, the powers of \(V\) agree for the same reason the powers of \(U\) agree. As usual, \(\mathbf{b}\) is a bounding chain if \(m_{0}^{\mathbf{b}}=0\). In this case, \(m_{1}^{\mathbf{b}}\) is a differential on \(CF((L_{0},\mathbf{b}_{0}),(L_{1},\mathbf{b}_{1}))\), making this space into a bigraded chain complex over \(\mathcal{R}^{-}\).
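To illustrate the bigrading conventions above (using, in each grading separately, the conventions of the singly marked case, so that \(U\) has bidegree \((-2,0)\) and \(V\) has bidegree \((0,-2)\)): a self-intersection point \(p\) with bidegree \((\deg_{w}(p),\deg_{z}(p))=(-2,0)\) and coefficient \(c\) in \(\mathbf{b}\) contributes \[cU^{1}V^{0}p^{-}=cUp^{-}\] to \(\overline{\mathbf{b}}\); this is homogeneous of bigrading \((-1,-1)\), since \(\operatorname{gr}_{w}(p^{-})=-1-\deg_{w}(p)=1\) while \(U\) contributes \(-2\) to the \(w\)-grading, and \(\operatorname{gr}_{z}(p^{-})=-1-\deg_{z}(p)=-1\) while \(U\) contributes nothing to the \(z\)-grading.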
## 4. Train tracks and arrow slides

### Immersed curves with turning points as train tracks

There is an alternative perspective on immersed curves with collections of turning points that we find convenient to work with: we can think of such a decorated immersed multicurve as an immersed train track (this is the perspective taken in [HRW]). We will now describe these objects and their relation to the curves with turning points discussed so far. As in the previous section, we will first describe these objects in the setting of a singly marked surface, but the extension to doubly marked surfaces is straightforward.

A _train track_ refers to a graph with the property that at each vertex all incident edges are mutually tangent; vertices of a train track are often called _switches_, in reference to the junctions in railroads. At each switch there are two (opposite) directions from which an edge may approach, and a smooth path in the train track is one which approaches and leaves each switch on opposite sides. In the train tracks we consider, all vertices will have valence at least two and will have at least one incident edge on each side; in other words, no switch will be a dead end for smooth paths (an exception will be switches on the boundary of the surface). We will consider partially directed train tracks in which some edges have a specified direction; when considering smooth paths in a train track, we will only consider paths that follow directed edges in the specified direction. Our train tracks will also be oriented, meaning that every edge has an orientation; the orientations are required to be consistent at each switch, in the sense that all edges on one side are oriented toward the switch and all edges on the other are oriented away from the switch. It follows that a smooth path must always follow the orientation or always oppose the orientation. The orientation should not be confused with the direction on directed edges; these need not agree, and we do not require smooth paths to follow the orientation. Finally, edges in our train tracks will be weighted by (nonzero) homogeneous elements of \(\mathbb{F}[W]\). For undirected edges the power of \(W\) is zero, so the weight is just an element of the field \(\mathbb{F}\). For directed edges the power of \(W\) is determined by a grading on the train track, as described below. In practice, the weights on most undirected edges will be \(1\), and these weights can be ignored; thus the weights on undirected edges will be recorded by marking some number of basepoints on the train track (on the undirected edges with weight other than \(1\)) and labeling these with a weight.

A grading on an immersed train track in a marked surface is nearly the same as a grading on an immersed Lagrangian. Having fixed a trivialization of the tangent bundle of the surface to identify tangent slopes with \(\mathbb{RP}^{1}\), a grading is a map from the train track to \(\mathbb{R}\) that lifts the tangent slope map and is piecewise smooth, possibly with jump discontinuities of even magnitude. These discontinuities are of two types: they occur whenever the train track crosses a chosen grading arc emanating from a marked point, as usual, and there is also a discontinuity at the midpoint of each directed edge. At the discontinuity along a directed edge the grading decreases by \(2k\) for some integer \(k\); we associate \(W^{k}\) to this edge, so that the weight on this directed edge is \(cW^{k}\) for some \(c\) in \(\mathbb{F}\). 
An immersed multicurve \(L\) with a collection of turning points \(\mathbf{b}\) determines an immersed train track of the form described above as follows: First, we view the immersed multicurve as an immersed train track with only undirected edges by placing at least one valence two switch anywhere on each component. We place enough switches so that the weighted basepoints on the curves each lie on their own undirected edge; these undirected edges inherit the weight associated with the basepoint, and all other undirected edges have weight \(1\). To this we add a pair of directed edges near each self-intersection point \(p\) that has nonzero coefficient in \(\mathbf{b}\), as pictured in Figure 10. These directed edges make the two left turns at \(p\) that are consistent with the orientation on \(L\). The orientation on each directed edge is chosen to be consistent with the orientation on \(L\) at either end; note that for one edge in the pair the orientation agrees with the direction of the edge, and for the other edge the orientation opposes the direction. The new directed edges are weighted by \(\pm cW^{-\deg(p)/2}\), where \(c\) is the coefficient of \(p\) in \(\mathbf{b}\) and the minus sign appears on the edge for which the direction opposes the orientation. These directed edges are added precisely so that the false corners which may appear in generalized bigons contributing to \(m_{1}^{\mathbf{b}}\) are smoothed in the train track. In other words, a polygonal path in \(L\) consistent with \(\mathbf{b}\) corresponds to a smooth path in the train track, and vice versa. Moreover, the weight associated with a polygonal path agrees with the product of the weights of all edges on the corresponding smooth path in the train track (where the weight of an undirected edge is inverted if the edge is traversed in the opposite direction of the orientation).

We will define a Floer complex associated to two immersed train tracks in much the same way we defined it for immersed curves: the complex is generated by intersection points, and the differential counts immersed bigons whose sides map to smooth paths in the train tracks. With this in mind, we can view the Floer complex of two immersed curves with bounding chains as the Floer complex of the corresponding train tracks. Indeed, the additional edges in the train track precisely allow left turns to be made at intersection points in the bounding chains, and the bigons counted between two train tracks are precisely the generalized bigons counted between the two corresponding decorated curves with the false corners smoothed.

### The Floer complex of train tracks

We now more precisely define what we mean by the Floer complex of two immersed train tracks. We caution that this notion is not well-defined for arbitrary train tracks, and we will not attempt to describe the largest family of train tracks for which our definitions make sense. Instead, we will restrict our attention to a class of train tracks which can be directly related to immersed curves decorated with bounding chains and observe that in this case the two notions of Floer homology agree. The fact that Floer homology of these train tracks is well defined then follows from the corresponding fact for Floer homology of decorated curves. 
This means that, strictly speaking, it is not necessary to use train tracks at all, since every train track we will consider really represents an immersed curve with a bounding chain; however, we find that train tracks are a very convenient framework to use when manipulating these objects. Given a pair of train tracks \(\boldsymbol{\vartheta}_{i}\) and \(\boldsymbol{\vartheta}_{j}\) in a marked surface that intersect transversally, we define \(CF(\boldsymbol{\vartheta}_{i},\boldsymbol{\vartheta}_{j})\) to be the \(\mathbb{F}[W]\)-module generated by \(\boldsymbol{\vartheta}_{i}\cap\boldsymbol{\vartheta}_{j}\). Given \(k+1\) immersed train tracks \(\boldsymbol{\vartheta}_{0},\ldots,\boldsymbol{\vartheta}_{k}\), we define operations \[m_{k}^{\boldsymbol{\vartheta}}:CF(\boldsymbol{\vartheta}_{0},\boldsymbol{ \vartheta}_{1})\otimes\cdots\otimes CF(\boldsymbol{\vartheta}_{k-1}, \boldsymbol{\vartheta}_{k})\to CF(\boldsymbol{\vartheta}_{0},\boldsymbol{ \vartheta}_{k})\] by counting immersed \((k+1)\)-gons with appropriate corners whose sides are smooth paths in the appropriate train tracks. The weight with which each \((k+1)\)-gon contributes is the product of the weights of all edges in the boundary (where for undirected edges the inverse of the weight is used if the boundary orientation of the polygon opposes the orientation on the train track), a sign contribution from each corner defined in the usual way, and a factor of \(W\) for each time a marked point is covered by the polygon. The operation \(m_{0}^{\boldsymbol{\vartheta}}\) counts immersed monogons with boundary on a given train track. We say that a train track \(\boldsymbol{\vartheta}\) is unobstructed if \(m_{0}^{\boldsymbol{\vartheta}}=0\). The operations \(m_{k}^{\boldsymbol{\vartheta}}\) satisfy \(\mathcal{A}_{\infty}\)-relations, and if \(\boldsymbol{\vartheta}_{0}\) and \(\boldsymbol{\vartheta}_{1}\) are both unobstructed then \(\partial^{\boldsymbol{\vartheta}}=m_{1}^{\boldsymbol{\vartheta}}\) is a differential on \(CF(\boldsymbol{\vartheta}_{0},\boldsymbol{\vartheta}_{1})\). We say that a collection of train tracks is admissible if no train track bounds an immersed disk and no two train tracks bound an immersed annulus. We expect that the counts of immersed polygons used to define \(m_{k}^{\boldsymbol{\vartheta}}\) are finite as long as the train tracks are admissible, though we will not prove this fact. Instead, as already mentioned, we will restrict our attention to particularly nice train tracks for which the finiteness of these polygon counts can be established indirectly. For instance, suppose for each \(i\) that \(\boldsymbol{\vartheta}_{i}\) is the train track constructed from an immersed multicurve with bounding chain \((\Gamma_{i},\mathbf{b}_{i})\) as described in the previous section, so that all directed edges appear in pairs as in the middle of Figure 10. Generically we may assume that \(\Gamma_{i}\cap\Gamma_{j}\) is disjoint from the self-intersection points of \(\Gamma_{i}\) and \(\Gamma_{j}\), and thus that \(\boldsymbol{\vartheta}_{i}\) and \(\boldsymbol{\vartheta}_{j}\) only intersect on their undirected edges. In this case there is an obvious identification between \(\Gamma_{i}\cap\Gamma_{j}\) and \(\boldsymbol{\vartheta}_{i}\cap\boldsymbol{\vartheta}_{j}\). For any collection of such curves and train tracks there is also a clear bijection between the \((k+1)\)-gons contributing to \(m_{k}^{\boldsymbol{\vartheta}}\) and the generalized \((k+1)\)-gons contributing to \(m_{k}^{\mathbf{b}}\). 
It follows that in this case the operations \(m_{k}^{\boldsymbol{\vartheta}}\) are well defined and are in fact identical to the operations \(m_{k}^{\mathbf{b}}\).

We will be interested in a slightly larger class of train tracks than those immediately constructed from immersed curves with turning points. In particular, we would like to be able to apply homotopies to train tracks of this form that slide the directed edges away from the self-intersection points. Invariance of the Floer complex quickly fails if we only slide one directed edge, but it turns out we can make more sense of homotopies that slide a pair of directed edges as a unit. In a train track representing an immersed curve with turning points, if we move both edges near a self-intersection point to one side of the self-intersection point, as on the right of Figure 10, they cross each other in an X-shaped pattern. We will refer to a pair of edges in this arrangement as a _crossover arrow_, and as a diagrammatic shorthand we denote these by a bold arrow as in Figure 11. The train tracks we will consider will have the property that the collection of undirected edges forms an immersed multicurve and the directed edges come in pairs forming crossover arrows. We will assume, as usual, that all self-intersection points of the train track are transverse; we will also assume that crossover arrows do not cross each other. If a crossover arrow lies in a neighborhood of an intersection point of two undirected edges in a train track and moves counter-clockwise in one quadrant, as in the rightmost train track in Figure 10, we say that it is a _left turn crossover arrow_. It is clear that an immersed curve with a collection of left turn crossover arrows is equivalent to the same immersed multicurve with a bounding chain. Train tracks which allow arbitrary crossover arrows are seemingly more general objects, but in fact any unobstructed train track that consists of an immersed multicurve with crossover arrows attached can be modified in a neighborhood of each crossover arrow to produce an equivalent train track which has only left turn crossover arrows. To accomplish this, we apply a finger move homotopy pushing the section of immersed curve at the tail of the arrow forward along the crossover arrow through the section of curve at the head of the arrow. The crossover arrow then becomes a left turn crossover arrow at one of the new intersection points, as in Figure 12. Figure 12 shows the simplest case, in which the crossover arrow has no intersections with the rest of the train track apart from its endpoints. In general, a neighborhood of the crossover arrow will consist of the arrow, the segments it connects, and some number of other immersed curve segments transverse to the crossover arrow between the initial and final segments. For concreteness, suppose the crossover arrow moves upward vertically connecting two rightward oriented horizontal segments and all immersed curve segments through a rectangular neighborhood of the arrow are horizontal. In this case we must be conscious of paths in the train track outside the rectangular neighborhood of the crossover arrow that bound immersed bigons with the left or right side of the neighborhood. We first observe that for an unobstructed train track, there can be no such paths on the left side of the crossover arrow for which one end is on the top segment or the bottom segment (or, more precisely, the weighted count of such paths for any pair of endpoints is zero). 
If such a path did exist, there would be an immersed monogon with a corner on the crossover arrow, as shown in Figure 13, and there is no way for this to cancel with any other monogons. We say that a crossover arrow is _unobstructed_ if there are no bigons on the left side of the arrow as in Figure 13, and we need not consider train tracks with obstructed arrows. Similar paths on the right side of the arrow that bound bigons with the right side of the rectangular neighborhood are less problematic. However, any such paths starting at the bottom segment must be taken into account when performing the finger move. Following the train track analogue of move \((j)\) from Figure 8, pushing the bottom segment past the middle segments may require adding left turn crossover arrows at some of the new intersection points. The finger move transformation in this more general case is shown in Figure 14.

Figure 10. Converting an immersed curve with a collection of turning points into an immersed train track. For each intersection point with nonzero coefficient \(c\) and degree \(-2m\) in the collection of turning points (left) we add two directed edges to the train track (middle) with the weights \(\pm cW^{m}\). These new edges realize the false corners which may occur at the given intersection as smooth paths in the train track. We could equivalently place both directed edges in the train track in the same quadrant near the self-intersection point (right).

Figure 11. Shorthand for crossover arrows. Note that the sign convention for crossover arrow labels is that the minus sign is on the weight of the directed edge whose direction opposes the orientation on the train track.

**Proposition 4.1**.: _Transforming a train track in the neighborhood of a crossover arrow as in Figure 14 results in an equivalent train track._

Proof.: To show that the train tracks are equivalent, we need to check that any counts of immersed polygons involving the train tracks agree before and after the transformation. We consider the two steps of the transformation shown in Figure 14 separately. The first can be viewed as a transformation in the left third of the neighborhood of the crossover arrow, which does not involve the arrow. This is just repeated application of move (j) from Figure 8; we have left-turn crossover arrows in place of a bounding chain, but we have already established that these are equivalent and the same proof applies. The second part of the transformation is simply translating the crossover arrow along the curve. It is clear that any polygon passing through this neighborhood is not meaningfully altered by this homotopy. 

Figure 12.

Figure 13. Obstructed crossover arrows.

Figure 14. An unobstructed arrow can be transformed to a left turn crossover arrow by applying a finger move just as in Figure 12, but when the arrow crosses other segments this may require adding additional left turn crossover arrows as indicated. A rectangular neighborhood of the original crossover arrow is shown. The coefficients \(a_{i}\in\mathbb{F}[W]\) on the dotted arcs represent the weighted count of smooth paths in the train track with the given ends that bound bigons with the right side of the rectangular neighborhood of the arrow; note that some or all of these coefficients may be zero.

By applying this transformation for each arrow, we see that any unobstructed train track that has the form of an immersed multicurve with crossover arrows is equivalent to an immersed multicurve with left-turn crossover arrows. 
In particular:

**Proposition 4.2**.: _For unobstructed train tracks that consist of immersed curves with crossover arrows, the map \(\partial^{\boldsymbol{\vartheta}}\) and the Floer complex are well defined._

We will always require that crossover arrows are unobstructed, but if we work over a quotient of \(\mathbb{F}[W]\) obtained by setting \(W^{m}=0\) for some \(m\), then we can weaken the unobstructed assumption accordingly. We will say that a crossover arrow is _unobstructed modulo_ \(W^{m}\) if the weight on the arrow is a multiple of \(W^{k}\) for some \(k\) and if any bigon bounded by \(\boldsymbol{\vartheta}\) and the left side of the rectangular neighborhood of the crossover arrow has a weight which is a multiple of \(W^{m-k}\). In this case we use the transformation in Figure 14, but note that additional left turn crossover arrows may need to be added corresponding to bigons on the left side of the crossover arrow ending at the bottom segment. Figure 15 shows an example of a crossover arrow with weight \(W^{m-1}\) which is unobstructed only modulo \(W^{m}\). Note that the finger move transformation does not produce an equivalent train track working over \(\mathbb{F}[W]\), since the shaded bigon on the right does not have an analogous bigon on the left. However, this problematic bigon involves two crossover arrows whose weights multiply to \(W^{m}\), so this bigon does not contribute modulo \(W^{m}\).

### Sliding crossover arrows

The main advantage of working with train tracks of the form described above is that in many cases sliding crossover arrows along the immersed curves produces an equivalent train track. Since any (unobstructed) crossover arrow can be replaced with a left turn crossover arrow after an isotopy of the immersed curves, the arrow slide moves described in this section could alternatively be described as modifications of immersed curves with turning points involving isotopies of the immersed curve as well as modifications to the collection of turning points. However, we find arrow sliding more convenient because it allows us to (for the most part) keep the underlying immersed curves fixed throughout the process. By arrow sliding we mean translating the points at which a crossover arrow meets the immersed curve part of the train track along the relevant curves. The simplest example of such a move is sliding a crossover arrow between two parallel sections of curve, as in Figure 16. Note that in this simple move the arrow does not interact with any other features of the train track (such as self-intersection points, weighted basepoints, or other crossover arrows). We do allow there to be other parallel segments of immersed curve between the two segments connected by the arrow. To see that applying this move in a train track \(\boldsymbol{\vartheta}_{0}\) results in an equivalent train track, we need to show that the complex \(CF(\boldsymbol{\vartheta}_{0},\boldsymbol{\vartheta}_{1})\) is unchanged for any other train track \(\boldsymbol{\vartheta}_{1}\). This is immediately clear if \(\boldsymbol{\vartheta}_{1}\) is disjoint from the strip through which the crossover arrow slides. The slide is more interesting if it passes some portion of \(\boldsymbol{\vartheta}_{1}\). By breaking an arrow slide into pieces, it is sufficient to consider the case of sliding past one segment of \(\boldsymbol{\vartheta}_{1}\), as in Figure 17.

Figure 15. A crossover arrow that is obstructed, but unobstructed modulo \(W^{m}\). 
Applying the finger move as in Figure 14 and sliding the crossover arrow to an intersection point produces a train track that is equivalent modulo \(W^{m}\). There is an unwanted bigon on the right involving both crossover arrows, but this does not contribute modulo \(W^{m}\).

**Proposition 4.3**.: _If \(\boldsymbol{\vartheta}_{0}\) and \(\boldsymbol{\vartheta}_{1}\) are unobstructed train tracks (or unobstructed modulo \(W^{m}\)), the effect on the \(\mathbb{F}[W]\) (or \(\mathbb{F}[W]/W^{m}\)) complex \(CF(\boldsymbol{\vartheta}_{0},\boldsymbol{\vartheta}_{1})\) of sliding a crossover arrow on \(\boldsymbol{\vartheta}_{0}\) through a segment of \(\boldsymbol{\vartheta}_{1}\) as in Figure 17 is that of a change of basis replacing \(x\) with \(x^{\prime}=x+cy\)._

Proof.: While we could check this directly by considering the effect on immersed bigons that involve the crossover arrow (c.f. [HRW, Theorem 5]), this also follows from the invariance of the Floer complex for curves with turning points once we realize the curves with crossover arrows as representing curves with bounding chains and apply move (b) from Figure 8 twice, as in the bottom of Figure 17. 

Sliding a crossover arrow can have interesting side effects if the arrow interacts with self-intersection points or other crossover arrows. We will describe a number of local arrow sliding moves as replacements of certain arrow configurations. Consider a rectangular region in \(\Sigma\) in which \(\boldsymbol{\vartheta}_{0}\) consists of \(n\) immersed curve segments (or strands) moving from one side of the rectangle to the opposite side, oriented in the same way, with some number of weighted basepoints on strands and crossover arrows between strands. We will call a portion of train track of this form an \(n\)_-strand arrow configuration_. The side of the rectangle on which the oriented strands start will be called the initial side, and the side on which the strands end will be called the terminal side. If we fix an ordering of the \(n\) strand endpoints on the initial side and of the \(n\) endpoints on the terminal side, any \(n\)-strand arrow configuration determines an \(n\times n\) matrix over \(\mathbb{F}[W]\) where the \((i,j)\) entry is the weighted count of paths in the train track from the \(i\)th endpoint on the initial side to the \(j\)th endpoint on the terminal side. Since the powers of \(W\) are forced by the gradings, we will omit them and view this matrix as having coefficients in \(\mathbb{F}\). We remark that the matrix corresponding to \(n\) parallel strands with a crossing between the \(i\)th and \(j\)th strands is a transposition matrix \(T_{ij}\), and the matrix for \(n\) parallel strands with a single crossover arrow of weight \(cW^{m}\) from the \(i\)th strand to the \(j\)th strand is the elementary matrix \(A^{c}_{ij}\) with \(1\)'s on the diagonal, \(c\) in the \((i,j)\) entry, and \(0\)'s elsewhere.

Figure 16. Sliding a crossover arrow between two parallel curve segments.

Figure 17. Sliding a crossover arrow between parallel segments of \(\boldsymbol{\vartheta}_{0}\) past a crossing with \(\boldsymbol{\vartheta}_{1}\) has the effect of a change of basis on the complex \(CF(\boldsymbol{\vartheta}_{0},\boldsymbol{\vartheta}_{1})\) that replaces \(x\) with \(x^{\prime}=x+cy\). This can also be interpreted as applying move \((b)\) from Figure 8 twice. 
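For concreteness, in the \(2\)-strand case the matrices \(T_{12}\) and \(A^{c}_{12}\) just introduced are \[T_{12}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\qquad A^{c}_{12}=\begin{pmatrix}1&c\\ 0&1\end{pmatrix},\] and, as explained below, concatenating two configurations corresponds to multiplying the associated matrices in the same order.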
The matrix determined by an \(n\)-strand arrow configuration is invertible, with the inverse matrix given by counting paths from the terminal side to the initial side. To see this, first observe that if two \(n\)-strand arrow configurations are concatenated, identifying the terminal end of one with the initial end of the other, the matrix counting paths in the combined configuration is the product of the matrices for the two original configurations. Next, observe that any \(n\)-strand arrow configuration can be realized as a collection of parallel strands with a sequence of crossover arrows and crossings inserted; this corresponds to decomposing a matrix as a product of elementary matrices. Finally, we observe that reading the configuration backwards corresponds to the same sequence of elementary matrices in the opposite order, with a minus sign on the non-diagonal entry of each \(A^{c}_{ij}\), which produces the inverse matrix. We will say that two \(n\)-strand arrow configurations are equivalent if they determine the same matrix. It turns out that the matrix contains all the information needed from this region to construct the Floer complex with any other train track, so we can freely replace \(n\)-strand arrow configurations with equivalent ones.

**Proposition 4.4**.: _If \(\boldsymbol{\vartheta}_{0}\) is an unobstructed train track (modulo \(W^{m}\)) and \(\boldsymbol{\vartheta}^{\prime}_{0}\) is obtained from \(\boldsymbol{\vartheta}_{0}\) by replacing an \(n\)-strand arrow configuration with another \(n\)-strand arrow configuration determining the same matrix, then \(\boldsymbol{\vartheta}^{\prime}_{0}\) is equivalent (modulo \(W^{m}\)) to \(\boldsymbol{\vartheta}_{0}\)._

Proof.: We must show that the complexes \(CF(\boldsymbol{\vartheta}_{0},\boldsymbol{\vartheta}_{1})\) and \(CF(\boldsymbol{\vartheta}^{\prime}_{0},\boldsymbol{\vartheta}_{1})\) are homotopy equivalent for any other train track \(\boldsymbol{\vartheta}_{1}\). It suffices to consider the case that \(\boldsymbol{\vartheta}_{1}\) is disjoint from the rectangle containing the \(n\)-strand arrow configuration, since \(\boldsymbol{\vartheta}_{1}\) can be isotoped off of the rectangle, possibly using a combination of the basis change move in Proposition 4.3 and moves from Figure 8. In the remaining case, it is easy to see that the complexes \(CF(\boldsymbol{\vartheta}_{0},\boldsymbol{\vartheta}_{1})\) and \(CF(\boldsymbol{\vartheta}^{\prime}_{0},\boldsymbol{\vartheta}_{1})\) are identical. If there is a bigon contributing to the first complex whose boundary passes through the region containing the \(n\)-strand arrow configuration, it does not matter what path the boundary takes from the point it enters the region to the point it leaves the region, only that a path with the given weight exists. Replacing the path through the \(n\)-strand arrow configuration of \(\boldsymbol{\vartheta}_{0}\) with a path with the same endpoints through the new \(n\)-strand arrow configuration in \(\boldsymbol{\vartheta}^{\prime}_{0}\) produces a bigon contributing to the second complex connecting the same two generators. Because the weighted counts of paths between endpoints in the arrow configurations agree, the counts of bigons defining the differential in the complexes agree. 

Some useful replacements of \(n\)-strand configurations are shown in Figure 18. 
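These replacements can be sanity-checked at the level of matrices. The following minimal Python sketch (our illustration, not from the paper; it assumes only numpy, with real coefficients standing in for \(\mathbb{F}\)) verifies the matrix identities behind two of the moves in Figure 18: arrows with disjoint endpoints commute, and when the head of one arrow passes the tail of another the composed arrow must be inserted.

```python
import numpy as np

def A(n, i, j, c):
    """Elementary matrix I + c*e_{ij}: n parallel strands with a single
    crossover arrow of weight c from strand i to strand j (0-indexed)."""
    m = np.eye(n)
    m[i, j] += c
    return m

n, c, d = 4, 2.0, 3.0

# Arrows whose endpoints do not meet commute (Figure 18, second row).
assert np.allclose(A(n, 0, 1, c) @ A(n, 2, 3, d),
                   A(n, 2, 3, d) @ A(n, 0, 1, c))

# Head of one arrow passing the tail of another inserts the composed
# arrow (Figure 18, third row): A_{ij}^c A_{jk}^d = A_{jk}^d A_{ij}^c A_{ik}^{cd}.
assert np.allclose(A(n, 0, 1, c) @ A(n, 1, 2, d),
                   A(n, 1, 2, d) @ A(n, 0, 1, c) @ A(n, 0, 2, c * d))
```

Exactly which side the composed arrow lands on depends on the reading conventions of the figure; the identities above capture the matrix-level content of the moves.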
For example, crossover arrows may slide past each other, with the provision that if the head of one arrow passes the tail of another, a new arrow corresponding to the composition of the two must be added (see Figure 18, third row). A crossing between strands can also be replaced by a sequence of crossover arrows. These moves can be interpreted as pictorial representations of familiar relations on elementary matrices.

**Remark 4.5**.: Note that the last move in Figure 18 requires introducing new weights that are the inverse of the weight on the crossover arrow. This is the main issue with extending the techniques described in this paper to non-field coefficients. All the other arrow slides make sense with \(\mathbb{Z}\)-coefficients, but this move requires weights to be invertible.

In Section 7 we will give an algorithm for simplifying train tracks representing complexes over \(\widehat{\mathcal{R}}\) using arrow slide moves. The following \(n\)-strand arrow configuration replacement will be used repeatedly in this arrow sliding algorithm; this lemma is essentially [11, Lemma 31], but with the matrix language introduced above we give a much simpler proof.

**Lemma 4.6**.: _Given an \(n\)-strand arrow configuration, fix any orderings \(<_{L}\) and \(<_{R}\) on the left endpoints and right endpoints, respectively. The \(n\)-strand configuration is equivalent to an \(n\)-strand configuration which can be divided into three regions, with all crossings between strands and all weighted basepoints in the middle region, and all crossover arrows in the left or right region moving in increasing order with respect to the relevant ordering on endpoints of strands \(<_{L}\) or \(<_{R}\)._

Proof.: When encoding the \(n\)-strand arrow configuration by a matrix, we will index the right endpoints with respect to the ordering \(<_{R}\) and the left endpoints with respect to the opposite of the ordering \(<_{L}\). That is, we number the right endpoints \(1,\ldots,n\) such that \(1<_{R}\cdots<_{R}n\) and we number the left endpoints \(1,\ldots,n\) so that \(n<_{L}\cdots<_{L}1\). Then the desired \(n\)-strand configuration replacement corresponds to the \(LDPU\) decomposition of the matrix. It is a standard exercise in linear algebra that an invertible matrix \(M\) with coefficients in a field can be decomposed as the product \(LDPU\), where \(L\) is lower triangular with \(1\)'s on the diagonal, \(D\) is diagonal, \(P\) is a permutation matrix, and \(U\) is upper triangular with \(1\)'s on the diagonal. \(L\) can be decomposed as a product of lower triangular elementary matrices, each of which corresponds to an \(n\)-strand arrow configuration with a single arrow from a strand \(i\) to a strand \(j\) with \(j<i\) and thus \(i<_{L}j\). Similarly, \(U\) is a product of upper triangular elementary matrices which each correspond to a single crossover arrow from a strand \(i\) to a strand \(j\) with \(i<j\), and thus \(i<_{R}j\). \(D\) corresponds to \(n\) parallel strands, possibly containing weighted basepoints, and \(P\) corresponds to a region of strands with crossings but no crossover arrows. 
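As a concrete companion to this proof, the following Python sketch (our illustration, not part of the paper) computes an \(LDPU\) decomposition by greedy pivoting, working over \(\mathbb{Q}\) via fractions.Fraction as a stand-in for an arbitrary field \(\mathbb{F}\); all function names are ours.

```python
from fractions import Fraction

def eye(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def ldpu(M):
    """Return (L, D, P, U) with M = L*D*P*U, where L is lower-unitriangular,
    D is diagonal, P is a permutation matrix, and U is upper-unitriangular.
    Assumes M is invertible; a greedy pivoting sketch, not optimized."""
    n = len(M)
    N = [[Fraction(x) for x in row] for row in M]
    L, U = eye(n), eye(n)
    pivots = {}  # column j -> pivot row i
    for j in range(n):
        used = set(pivots.values())
        # The pivot is the first unused row with a nonzero entry in column j.
        i = min(r for r in range(n) if r not in used and N[r][j] != 0)
        pivots[j] = i
        # Clear column j in unused rows below the pivot: a row operation by a
        # lower-unitriangular matrix on the left; record its inverse in L.
        for r in range(i + 1, n):
            if r not in used and N[r][j] != 0:
                c = N[r][j] / N[i][j]
                for t in range(n):
                    N[r][t] -= c * N[i][t]
                for t in range(n):
                    L[t][i] += c * L[t][r]
        # Clear the entries of column j lying in earlier pivot rows using the
        # earlier pivot columns: a column operation by an upper-unitriangular
        # matrix on the right; record its inverse in U.
        for j0 in range(j):
            i0 = pivots[j0]
            if N[i0][j] != 0:
                c = N[i0][j] / N[i0][j0]
                for t in range(n):
                    N[t][j] -= c * N[t][j0]
                for t in range(n):
                    U[j0][t] += c * U[j][t]
    # N is now monomial: split it as D*P.
    D, P = eye(n), [[Fraction(0)] * n for _ in range(n)]
    for j, i in pivots.items():
        D[i][i] = N[i][j]
        P[i][j] = Fraction(1)
    return L, D, P, U

M = [[0, 1, 2], [1, 1, 0], [3, 0, 1]]
L, D, P, U = ldpu(M)
assert matmul(matmul(L, D), matmul(P, U)) == [[Fraction(x) for x in r] for r in M]
```

Reading off the factors in the language of the lemma: \(L\) and \(U\) encode the crossover arrows in the left and right regions, \(D\) the weighted parallel strands, and \(P\) the crossings in the middle region.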
In addition to replacing \(n\)-strand arrow configurations, which lie in neighborhoods disjoint from the marked points, another important arrow move is to slide a crossover arrow past a marked point. An arrow can always be moved over a marked point on its left side at the expense of multiplying the weight of the arrow by \(W\). It is clear that the resulting train track is equivalent to the initial one: there is a one-to-one correspondence between bigons, and bigons involving the crossover arrow cover one less marked point but have an additional factor of \(W\) from the weight of the crossover arrow, so the overall weight is unchanged.

Figure 18. Some common pairs of exchangeable \(n\)-strand arrow configurations, with the corresponding matrices. All horizontal strands are oriented rightward. These exchanges can be thought of as local moves for crossover arrows. Two parallel crossover arrows next to each other can be combined into one, and \(0\)-weighted arrows can be deleted (first row). The endpoints of crossover arrows can slide along the immersed curve track until they meet another crossover arrow. Arrows commute if their endpoints do not meet or if only their tails or only their heads meet (second row). If the head of one arrow passes the tail of another, a new arrow coming from the composition of the two must be added (third row). Finally, a pair of opposing arrows can be replaced with an arrow and a crossing (fourth row). In the last row, weighted basepoints are added to the train tracks between the two crossover arrows on each strand.

Sliding over a marked point is particularly valuable when working over a quotient ring \(F[W]/W^{m}\), since if the weight on a crossover arrow increases enough we can delete the arrow entirely. In particular, we say that a crossover arrow in a train track \(\boldsymbol{\vartheta}\) with weight \(cW^{m-1}\) is _removable modulo_ \(W^{m}\) if there is an arc in the complement of \(\boldsymbol{\vartheta}\) from the left side of the crossover arrow to a marked point. Such an arrow can be deleted with no effect modulo \(W^{m}\), since sliding the arrow over the marked point gives an arrow with weight \(cW^{m}\). On a similar note, we say any crossover arrow is _removable_ if there is an arc in the complement of \(\boldsymbol{\vartheta}\) from the left side of the crossover arrow to the boundary of \(\Sigma\) or to a puncture or infinite end of \(\Sigma\). To see that deleting a removable crossover arrow preserves the equivalence type of the train track, note that any other train track being paired with \(\boldsymbol{\vartheta}\) may be homotoped off of the arc, and then any bigon involving the crossover arrow would have to contain the entire arc, which is not possible.

Similar to sliding crossover arrows, we may also slide basepoints along the immersed curves in \(\boldsymbol{\vartheta}\). If the basepoint slides past an end of a crossover arrow, we need to modify the weight on the crossover arrow as in Figure 19 to ensure that weighted counts of polygons are unchanged. To see that this produces an equivalent train track, note that if a basepoint in \(\boldsymbol{\vartheta}_{0}\) slides through a segment of \(\boldsymbol{\vartheta}_{1}\), the complex \(CF(\boldsymbol{\vartheta}_{0},\boldsymbol{\vartheta}_{1})\) changes by a change of basis as in move (c) from Figure 8. Moreover, two basepoints can be combined, multiplying their weights, and a basepoint with weight \(1\) can be deleted. Note that by applying these moves we can always arrange that there is at most one basepoint on each \(S^{1}\) component of the immersed multicurve in \(\boldsymbol{\vartheta}\). We can eliminate all basepoints on immersed arcs or lines by sliding them to the ends until they cannot lie on any immersed polygons.
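The weight modifications in these basepoint moves can also be checked in the matrix language. A minimal illustration, using the same (hypothetical) normalization \(A^{c}_{ij}=I+cE_{ij}\) as above: a collection of basepoint weights corresponds to a diagonal matrix \(D=\operatorname{diag}(d_{1},\ldots,d_{n})\), and since \(DE_{ij}=d_{i}E_{ij}\) while \(E_{ij}D=d_{j}E_{ij}\),
\[D\left(I+cE_{ij}\right)=\left(I+c\,d_{i}d_{j}^{-1}E_{ij}\right)D.\]
Sliding a basepoint past the end of a crossover arrow therefore rescales the arrow's weight by the basepoint weight or its inverse, which is one more place where invertibility of the weights is used.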
**Remark 4.7** (Train tracks in doubly marked surfaces).: Just as with immersed curves with bounding chains, for simplicity we have introduced train tracks in the setting of singly marked surfaces. However, these constructions all generalize immediately to doubly marked surfaces. In the doubly marked setting undirected edges still have weights in \(\mathbb{F}\), but directed edges are weighted by elements of \(\mathcal{R}^{-}\). Train tracks can be equipped with a bigrading, and the powers of \(U\) and \(V\) associated with an undirected edge are determined by these gradings. In the polygon counting maps, we use the variables \(U\) and \(V\) to record how a polygon covers the \(w\) and \(z\) marked points, respectively. Setting either \(U=1\) or \(V=1\), and ignoring the corresponding grading, corresponds to ignoring one type of marked point.

Figure 19. Moves used to slide weighted basepoints along immersed curves in \(\boldsymbol{\vartheta}\). All horizontal segments are oriented rightward. Basepoints weighted by \(1\) can be freely added or removed, nearby basepoints can be replaced with a single basepoint weighted by the product of the weights, and basepoints can slide past the end of the arrows with an appropriate change to the arrow weights.

## 5. Complexes from curves in \(\mathcal{S}\) or \(\mathcal{Z}\)

In Section 3 we defined Floer theory for immersed curves with bounding chains in arbitrary non-compact (doubly) marked surfaces. We now focus our attention on curves in particular marked surfaces, namely the infinite strip and infinite cylinder. By taking the Floer complex with a certain fixed curve, we will see that any decorated immersed curve in the marked strip determines a bigraded complex over \(\mathcal{R}^{-}\), and a decorated immersed curve in the infinite cylinder determines a bigraded complex with a flip map.

### Infinite marked strips and cylinders

We first define and set notation for the relevant surfaces. By the infinite marked strip, we mean the surface \(\mathcal{S}=\left[-\frac{1}{2},\frac{1}{2}\right]\times\mathbb{R}\) equipped with infinitely many marked points occurring at \(\left(0,n+\frac{1}{2}\right)\) for each integer \(n\). Note that \(\mathcal{S}\) has two boundary components, \(\partial_{L}\mathcal{S}=\{-\frac{1}{2}\}\times\mathbb{R}\) and \(\partial_{R}\mathcal{S}=\{\frac{1}{2}\}\times\mathbb{R}\). At times we will consider the punctured surface \(\mathcal{S}^{*}\) obtained by removing the marked points in \(\mathcal{S}\). Another important variation we will consider is the doubly marked surface \(\mathcal{S}^{z,w}\) obtained from \(\mathcal{S}\) by replacing each marked point with a pair of marked points: a \(w\)-marked point just to the right and a \(z\)-marked point just to the left of each marked point of \(\mathcal{S}\). Thus \(\mathcal{S}^{z,w}\) has a family of \(z\)-marked points at \(\left(-\epsilon,n+\frac{1}{2}\right)\) and a second family of \(w\)-marked points at \(\left(+\epsilon,n+\frac{1}{2}\right)\) for integers \(n\) and some very small \(\epsilon\). The vertical line \(\{0\}\times\mathbb{R}\) will play an important role; this line (in either \(\mathcal{S}\) or \(\mathcal{S}^{z,w}\)) will be denoted \(\mu\). Note that \(\mu\) passes through the marked points on \(\mathcal{S}\) and passes in between the \(z\) and \(w\) marked points for each pair of nearby marked points on \(\mathcal{S}^{z,w}\). More generally, we will consider the vertical lines \(\mu_{a}=\{a\}\times\mathbb{R}\), which are translations of \(\mu\).
In particular we will consider the lines \(\mu_{2\epsilon}=\{2\epsilon\}\times\mathbb{R}\) and \(\mu_{-2\epsilon}=\{-2\epsilon\}\times\mathbb{R}\), which pass just to the right or left, respectively, of all marked points in either \(\mathcal{S}\) or \(\mathcal{S}^{z,w}\). The infinite cylinder \(\mathcal{Z}\) is obtained by identifying the opposite edges of \(\mathcal{S}\); that is, \(\mathcal{Z}\) is \((\mathbb{R}/\mathbb{Z})\times\mathbb{R}\) with marked points at \((0,n+\frac{1}{2})\) for \(n\) in \(\mathbb{Z}\). As with the strip, the doubly marked cylinder \(\mathcal{Z}^{z,w}\) is obtained from \(\mathcal{Z}\) by replacing each marked point with a \(w\)-marked point \(\epsilon\) units to the right and a \(z\)-marked point \(\epsilon\) units to the left. Removing each marked point in \(\mathcal{Z}\) results in a punctured cylinder \(\mathcal{Z}^{*}\).

We will assume that the grading arcs starting from each marked point in \(\mathcal{S}^{z,w}\) are disjoint from the line segment \([-\frac{1}{2},\frac{1}{2}]\times\{0\}\), so that grading arcs approach the positive or negative end of the strip depending on whether the marked point has positive or negative height. We also assume the grading arcs are disjoint from \(\mu\), \(\mu_{2\epsilon}\), and \(\mu_{-2\epsilon}\); that is, they lie in a neighborhood of \(\mu\), to the right of \(\mu\) for arcs coming from \(w\)-marked points and to the left of \(\mu\) for arcs coming from \(z\)-marked points. The grading arcs from marked points in the singly marked strip \(\mathcal{S}\) are the same as the arcs from \(w\)-marked points in \(\mathcal{S}^{z,w}\) (preceded by the length \(\epsilon\) horizontal arc from the marked point to the corresponding \(w\)-marked point). The grading arcs on \(\mathcal{Z}\) and \(\mathcal{Z}^{z,w}\) come from those in \(\mathcal{S}\) or \(\mathcal{S}^{z,w}\) after identifying opposite edges.

### Bigraded complexes from curves in \(\mathcal{S}\)

Consider a compact decorated immersed multicurve \((\Gamma,\mathbf{b})\) in \(\mathcal{S}\), where \(\Gamma\) is a weighted and graded immersed multicurve in \(\mathcal{S}\) disjoint from the marked points and \(\mathbf{b}\) is a bounding chain. We will assume every component of \(\Gamma\) intersects \(\mu\). Given such a decorated curve, we will define a bigraded complex \(C(\Gamma,\mathbf{b})\) over \(\mathcal{R}^{-}\). To do so we first observe that, since \(\Gamma\) avoids a sufficiently small neighborhood around every marked point, \((\Gamma,\mathbf{b})\) can also be viewed as a decorated multicurve in the doubly marked surface \(\mathcal{S}^{z,w}\). The bounding chain \(\mathbf{b}\), when viewed as a linear combination of points in \(\mathcal{I}_{\leq 0}\), is unchanged; note, however, that in the corresponding element \(\overline{\mathbf{b}}\) of \(CF(L)\) we must add appropriate powers of \(V\) when passing to the doubly marked surface. By slight abuse of notation we will let \((\Gamma,\mathbf{b})\) denote both the decorated curve in \(\mathcal{S}\) and the corresponding decorated curve in \(\mathcal{S}^{z,w}\).

The curve \((\Gamma,\mathbf{b})\) in \(\mathcal{S}\) comes equipped with a grading function \(\tilde{\tau}:\Gamma\to\mathbb{R}\). This will give rise to two grading functions \(\tilde{\tau}_{w}\) and \(\tilde{\tau}_{z}\) on the curve in \(\mathcal{S}^{z,w}\). We define \(\tilde{\tau}_{w}\) to be identically equal to \(\tilde{\tau}\); note that this is possible since the relevant sets of grading arcs agree.
Recall that since the mod 2 grading is determined by the orientation on \(\Gamma\), \(\tilde{\tau}_{z}\) and \(\tilde{\tau}_{w}\) must differ by an even integer at any point. We will choose the grading function \(\tilde{\tau}_{z}\) so that at any point \(p\) in \(\Gamma\cap\mu\) that falls between the marked points at \(\left(0,n-\frac{1}{2}\right)\) and \(\left(0,n+\frac{1}{2}\right)\), \(\tilde{\tau}_{z}(p)=\tilde{\tau}(p)+2n\). This rule defines the grading function \(\tilde{\tau}_{z}\) on all of \(\Gamma\), since by assumption every component of \(\Gamma\) intersects \(\mu\). To see that this rule is consistent, note that if two points \(p_{1}\) and \(p_{2}\) in \(\Gamma\cap\mu\) are connected by a path in \(\Gamma\) lying on the right side of \(\mu\), at heights \(n_{1}\) and \(n_{2}>n_{1}\), the gradings \(\tilde{\tau}_{w}\) and \(\tilde{\tau}_{z}\) change in the same way along the path from \(p_{1}\) to \(p_{2}\) except that \(\tilde{\tau}_{w}\) jumps down by an additional \(2(n_{2}-n_{1})\) since the path crosses \(n_{2}-n_{1}\) grading arcs from \(w\)-marked points. Similarly, if there is a path from \(p_{1}\) to \(p_{2}\) on the left side of \(\mu\), the signed number of times this path crosses grading arcs from \(z\)-marked points is \(n_{1}-n_{2}\), and so \(\tilde{\tau}_{z}\) increases \(2(n_{2}-n_{1})\) more than \(\tilde{\tau}_{w}\) does along this path. In either case, we have that
\[\tilde{\tau}_{z}(p_{2})=\tilde{\tau}_{z}(p_{1})+\big(\tilde{\tau}_{w}(p_{2})-\tilde{\tau}_{w}(p_{1})\big)+2(n_{2}-n_{1})=\big(\tilde{\tau}_{z}(p_{1})-\tilde{\tau}_{w}(p_{1})-2n_{1}\big)+\tilde{\tau}_{w}(p_{2})+2n_{2}=\tilde{\tau}_{w}(p_{2})+2n_{2}.\]

Each self-intersection point \(p\) of \(\Gamma\) has a degree \(\deg(p)\), which is the even one of the gradings of the two intersection points of \(\Gamma\cap\Gamma^{\prime}\) associated with \(p\). When viewed as a curve in \(\mathcal{S}^{z,w}\), each intersection point \(p\) has a bidegree \((\deg_{w}(p),\deg_{z}(p))\). It is clear that \(\deg_{w}(p)=\deg(p)\), since \(\tilde{\tau}_{w}\) agrees with \(\tilde{\tau}\) by definition. It can also be checked that \(\deg_{z}(p)=\deg(p)\) for any self-intersection point \(p\); this is because the curve never passes in between a pair of \(z\) and \(w\) marked points. Thus we will simply speak of the degree of \(p\) rather than the bidegree, even when working in the doubly marked surface.

We now define \(C(\Gamma,\mathbf{b})\) to be the Floer complex of \((\Gamma,\mathbf{b})\) with \(\mu\) in \(\mathcal{S}^{z,w}\). We understand \(\mu\) to be equipped with the trivial bounding chain (as \(\mu\) has no self-intersection points) and exclude this from the notation. We also equip \(\mu\) with the constant bigrading \(\left(\frac{1}{2},\frac{1}{2}\right)\); taken mod \(2\), this corresponds to orienting \(\mu\) upwards. Recall that the Floer complex is defined when at most one of the input curves is non-compact; here \(\mu\) is non-compact, but \(\Gamma\) is compact. We will always assume that \(\Gamma\) has transverse self-intersection with no triple points and that \(\Gamma\) is transverse to \(\mu\); in fact, we will assume that \(\Gamma\) is perpendicular to \(\mu\) at every intersection point. We will also assume \(\Gamma\) bounds no immersed disks. Since \(\mu\) does not contain an immersed circle there can be no (generalized) immersed annuli, so \((\Gamma,\mathbf{b})\) and \(\mu\) are in admissible position.
We will sometimes weaken the assumption that \(\mathbf{b}\) is a bounding chain and consider arbitrary collections of turning points. Recall that the Floer complex can be constructed in the same way except that \(m_{1}^{\mathbf{b}}\) is no longer guaranteed to be a differential, so we obtain a precomplex rather than a complex.

**Definition 5.1**.: For an immersed multicurve \(\Gamma\) with a collection of turning points \(\mathbf{b}\) in \(\mathcal{S}\), which we also view as a decorated curve in \(\mathcal{S}^{z,w}\), \(C(\Gamma,\mathbf{b})\) is the bigraded precomplex \(CF((\Gamma,\mathbf{b}),\mu)\) over \(\mathcal{R}^{-}\). If \(\mathbf{b}\) is a bounding chain then \(C(\Gamma,\mathbf{b})\) is a bigraded complex.

We will adopt a similar definition when using the language of train tracks rather than decorated curves. That is, for an immersed train track \(\boldsymbol{\vartheta}\) that consists of an immersed curve with crossover arrows, \(C(\boldsymbol{\vartheta})\) will denote the bigraded precomplex \(CF(\boldsymbol{\vartheta},\mu)\) over \(\mathcal{R}^{-}\) coming from the Floer complex of train tracks in \(\mathcal{S}^{z,w}\), and if \(\boldsymbol{\vartheta}\) is unobstructed then this is a complex. The differential on \(C(\boldsymbol{\vartheta})\) will be denoted \(\partial^{\boldsymbol{\vartheta}}\).

The construction of \(C(\Gamma,\mathbf{b})\) is a special case of the more general definition of Floer complexes from Section 3, but we recall the key features here. There is one generator of \(C(\Gamma,\mathbf{b})\) for each intersection of \(\Gamma\) with the vertical line \(\mu\). Each of these generators has a bigrading \((\operatorname{gr}_{w},\operatorname{gr}_{z})\); it follows from Definition 3.1 and the fact that the grading on \(\mu\) is \(\left(\frac{1}{2},\frac{1}{2}\right)\) that \(\operatorname{gr}_{w}(x)=-\tilde{\tau}_{w}(p_{x})\) and \(\operatorname{gr}_{z}(x)=-\tilde{\tau}_{z}(p_{x})\), where \(p_{x}\) is the intersection point of \(\Gamma\cap\mu\) corresponding to the generator \(x\). Each generator \(x\) has an Alexander grading
\[A(x)=\frac{\operatorname{gr}_{w}(x)-\operatorname{gr}_{z}(x)}{2}=\frac{\tilde{\tau}_{z}(p_{x})-\tilde{\tau}_{w}(p_{x})}{2}.\]
By the definition of \(\tilde{\tau}_{z}\), we have that \(A(x)=n\) if \(p_{x}\) falls on the segment of \(\mu\) between \((0,n-\frac{1}{2})\) and \((0,n+\frac{1}{2})\), so the Alexander grading records the discrete height of the corresponding intersection point. Note that the Alexander grading defines a partial ordering on the generators of \(C(\Gamma,\mathbf{b})\). The actual height of intersection points refines this to a total ordering on generators that will be useful in future arguments; we will say that \(x_{1}<x_{2}\) if \(p_{x_{1}}\) occurs below \(p_{x_{2}}\). Thus \((\Gamma,\mathbf{b})\) defines not just the precomplex \(C(\Gamma,\mathbf{b})\) but also a preferred choice of ordered basis for that precomplex. The map \(m_{1}^{\mathbf{b}}\) counts generalized immersed bigons with left boundary on \(\mu\) and right boundary a polygonal path in \(\Gamma\) consistent with \(\mathbf{b}\).
Each generalized bigon \(u\) contributes with a coefficient given by the product of the following:

* \(U^{n_{w}(u)}V^{n_{z}(u)}\), where \(n_{w}(u)\) and \(n_{z}(u)\) are the multiplicities with which \(u\) covers the \(w\) and \(z\) marked points;
* the coefficient in \(\overline{\mathbf{b}}\) for each false corner in the boundary, or the opposite of the coefficient if the boundary orientation on \(u\) opposes the orientation on \(\Gamma\);
* the weight \(c\) of each basepoint of \(\Gamma\) passed along \(\partial u\), or \(c^{-1}\) if the boundary orientation of \(u\) opposes the orientation on \(\Gamma\); and
* an additional factor of \((-1)\) if the orientation on \(\partial u\) opposes the orientation on \(\Gamma\).

**Example 5.2**.: Consider the decorated immersed curve in Figure 20. The complex \(C(\Gamma,\mathbf{b})\) has seven generators \(a,b,c,d,e,f\), and \(g\). Generators \(a\) and \(b\) both have bigrading \((0,-2)\) and Alexander grading \(1\), generators \(c\), \(d\), and \(e\) have bigrading \((-1,-1)\) and Alexander grading \(0\), and generators \(f\) and \(g\) have bigrading \((-2,0)\) and Alexander grading \(-1\). There are eight generalized bigons contributing to \(\partial=m_{1}^{\mathbf{b}}\); six of these are true bigons and two are triangles with one false corner (the two triangles are shaded in Figure 20). This gives
\[\partial a=\partial e=\partial g=0,\quad\partial b=-Ve,\quad\partial c=-Ua+Ub+Vf,\quad\partial d=-Ub-Vf+Vg,\quad\text{and}\quad\partial f=Ue.\]

Just as \(C=C(\Gamma,\mathbf{b})\) is the Floer complex of \((\Gamma,\mathbf{b})\) with \(\mu\), we will define \(C_{\pm 2\epsilon}=C_{\pm 2\epsilon}(\Gamma,\mathbf{b})\) to be the Floer complex \(CF((\Gamma,\mathbf{b}),\mu_{\pm 2\epsilon})\). These complexes are closely related to \(C\): the localized versions \(C_{\pm 2\epsilon}^{\infty}\) are isomorphic to \(C^{\infty}\) but have a different natural choice of basis. In particular, if \(\{x_{i}\}_{i=1}^{n}\) is the basis for \(C\) (and thus for \(C^{\infty}\)) then the generators of \(C_{2\epsilon}\) can be identified with \(\{U^{A(x_{i})}x_{i}\}_{i=1}^{n}\) and the generators of \(C_{-2\epsilon}\) can be identified with \(\{V^{-A(x_{i})}x_{i}\}_{i=1}^{n}\). To see this, observe that essentially the same bigons contribute to the differential in all three complexes, but a bigon from \(x_{i}\) to \(x_{j}\) covers \(A(x_{j})-A(x_{i})\) fewer \(w\)-marked points when considered in \(C_{2\epsilon}\) than in \(C\) and covers \(A(x_{i})-A(x_{j})\) fewer \(z\)-marked points when considered in \(C_{-2\epsilon}\) than in \(C\). Note that all generators for \(C_{\pm 2\epsilon}\) have Alexander grading zero, and the given basis generates the Alexander grading zero summand of \(C_{\pm 2\epsilon}\) as an \(\mathbb{F}[W]\)-module. When viewed as a complex over \(\mathbb{F}[W]\), \(C_{\pm 2\epsilon}|_{A=0}\) is the same as the Floer complex \(CF((\Gamma,\mathbf{b}),\mu_{\pm 2\epsilon})\) taken in the singly marked strip \(\mathcal{S}\). Note that \(C_{2\epsilon}|_{A=0}\), viewed as a complex over \(\mathbb{F}[W]\), agrees with the horizontal complex \(C^{h}\) of \(C\). Similarly, \(C_{-2\epsilon}|_{A=0}\) is the vertical complex \(C^{v}\) of \(C\). By the invariance of the Floer complex, we can slide the line \(\mu_{2\epsilon}\) rightward or we can slide the line \(\mu_{-2\epsilon}\) leftward without changing the Floer complex with \((\Gamma,\mathbf{b})\) up to homotopy equivalence. In particular, the Floer complex in \(\mathcal{S}\) with the right boundary of \(\mathcal{S}\), \(CF((\Gamma,\mathbf{b}),\mu_{\frac{1}{2}})\), is homotopy equivalent to the horizontal complex of \(C\), and \(CF((\Gamma,\mathbf{b}),\mu_{-\frac{1}{2}})\) is homotopy equivalent to the vertical complex.
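As a quick consistency check on Example 5.2, note that the stated bigradings and Alexander gradings match the formula above, e.g. \(A(a)=\frac{0-(-2)}{2}=1\) and \(A(f)=\frac{-2-0}{2}=-1\), and that the differential squares to zero:
\[\partial^{2}c=-U\partial a+U\partial b+V\partial f=U(-Ve)+V(Ue)=0,\qquad\partial^{2}d=-U\partial b-V\partial f+V\partial g=UVe-UVe=0,\]
while \(\partial^{2}b=-V\partial e=0\) and \(\partial^{2}f=U\partial e=0\), as required since \(\mathbf{b}\) is a bounding chain.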
Figure 20. Left: A decorated immersed curve \((\Gamma,\mathbf{b})\) in \(\mathcal{S}\). \(\Gamma\) has a single component and \(\mathbf{b}\) is the sum of the two self-intersection points marked, with coefficients \(1\) and \(-1\) as indicated. To see that \(\mathbf{b}\) is a bounding chain, note that the two shaded monogons (and another similar pair of monogons) make canceling contributions to \(m_{0}^{\mathbf{b}}\). The value of the bigrading function \((\tilde{\tau}_{w},\tilde{\tau}_{z})\) is \((0,2)\) at the endpoint on the left boundary of \(\mathcal{S}\); this determines the bigrading function on all of \(\Gamma\), but for convenience the value of \((\tilde{\tau}_{w},\tilde{\tau}_{z})\) is indicated near each intersection of \(\Gamma\) with the vertical line through the marked points. Center: \((\Gamma,\mathbf{b})\) and \(\mu\) in the doubly marked strip \(\mathcal{S}^{z,w}\), with two contributions to the Floer complex shaded. Right: The complex \(C(\Gamma,\mathbf{b})\).

In many cases the intersection number with \(\Gamma\) decreases as \(\mu\) is slid to the boundaries of the strip, to the point that \(CF((\Gamma,\mathbf{b}),\mu_{\frac{1}{2}})\) and \(CF((\Gamma,\mathbf{b}),\mu_{-\frac{1}{2}})\) are reduced. In this case the generators of these complexes are the generators of the horizontal and vertical homology of \(C\). For instance, in Example 5.2 the single intersections of \(\Gamma\) with the right and left boundaries of \(\mathcal{S}\) correspond to \(g\) and \(a\), respectively, the generators of the horizontal and vertical homology of \(C\). We caution that in some cases the process of sliding \(\mu_{\pm 2\epsilon}\) out to \(\partial\mathcal{S}\) involves basis changes to the corresponding complex, so in general the generators remaining at the boundary may not be a subset of the original generators.

### Complexes and flip maps from curves in \(\mathcal{Z}\)

A decorated immersed curve \((\Gamma,\mathbf{b})\) in the marked cylinder \(\mathcal{Z}\) determines a bigraded complex \(C\) as well as a flip map \(\Psi:C^{\infty}\to C^{\infty}\). The complex \(C\) is defined to be the Floer complex in \(\mathcal{Z}^{z,w}\) of \((\Gamma,\mathbf{b})\) with \(\mu\). The flip map is defined using a curve \(\mu_{\Psi}\) constructed from \(\mu_{2\epsilon}\) and \(-\mu_{-2\epsilon}\); fixing a sufficiently large height that the compact curve \(\Gamma\) does not reach, we truncate \(\mu_{2\epsilon}\) and \(-\mu_{-2\epsilon}\) by cutting off the portion above that height and then connect the two loose ends by a cap. We now define \(C_{\Psi}\) to be the Floer complex of \((\Gamma,\mathbf{b})\) and \(\mu_{\Psi}\) in the singly marked cylinder \(\mathcal{Z}\). As an \(\mathbb{F}[W]\)-module, we have
\[C_{\Psi}\cong C_{2\epsilon}|_{A=0}\oplus C_{-2\epsilon}|_{A=0}[-1].\]
Bigons contributing to the differential on \(C_{\Psi}\) are of three types: the \(\mu_{\Psi}\) boundary of the bigon either is contained in the \(\mu_{2\epsilon}\) portion of \(\mu_{\Psi}\), is contained in the \(-\mu_{-2\epsilon}\) portion of \(\mu_{\Psi}\), or involves the cap in \(\mu_{\Psi}\).
Bigons of the first two types recover the differential on \(C_{2\epsilon}|_{A=0}\) and \(C_{-2\epsilon}|_{A=0}[-1]\), so \(C_{\Psi}\) is the mapping cone of a map defined by counting bigons of the third type; we define \(\Psi:C_{2\epsilon}|_{A=0}\to C_{-2\epsilon}|_{A=0}\) to be this map. More precisely, counting bigons from intersection points on the \(\mu_{2\epsilon}\) part of \(\mu_{\Psi}\) to intersection points on the \(\mu_{-2\epsilon}\) part of \(\mu_{\Psi}\) defines a degree \(-1\) map \(\widetilde{\Psi}\) from \(C_{2\epsilon}|_{A=0}\) to \(C_{-2\epsilon}|_{A=0}[-1]\), which we may also view as a degree \(0\) map \(\widetilde{\Psi}:C_{2\epsilon}|_{A=0}\to C_{-2\epsilon}|_{A=0}\). We define \(\Psi\) in terms of \(\widetilde{\Psi}\) so that
\[\widetilde{\Psi}(x)=(-1)^{\operatorname{gr}_{w}(x)}\Psi(x).\]
That is, \(\Psi\) counts bigons with the usual conventions except with an extra minus sign for bigons whose boundary orientation opposes the orientation of \(\Gamma\). We remark that the distinction between \(\Psi\) and \(\widetilde{\Psi}\) can be ignored when working with \(\mathbb{Z}/2\mathbb{Z}\) coefficients; in general the extra signs are needed so that the differential on the Floer complex of \((\Gamma,\mathbf{b})\) and \(\mu_{\Psi}\) is given by
\[\left(\begin{array}{cc}\partial_{C_{2\epsilon}|_{A=0}}&0\\ \Psi&\partial_{C_{-2\epsilon}|_{A=0}}\end{array}\right),\]
which after a change of basis replacing \(x\) with \(-x\) for \(x\) in \(C_{-2\epsilon}|_{A=0}\) with odd grading becomes
\[\left(\begin{array}{cc}\partial_{C_{2\epsilon}|_{A=0}}&0\\ \Psi&-\partial_{C_{-2\epsilon}|_{A=0}}\end{array}\right),\]
the differential on \(\operatorname{Cone}(\Psi)\). The map \(\Psi\) defined in this way on \(C_{2\epsilon}|_{A=0}\) can be uniquely extended as a skew \(\mathcal{R}^{-}\)-module homomorphism to a map \(\Psi:C_{2\epsilon}^{\infty}\to C_{-2\epsilon}^{\infty}\). Finally, using the identification above we can view \(\Psi\) as a map from \(C^{\infty}\) to \(C^{\infty}\).

**Proposition 5.3**.: \(\Psi\) _is a flip-filtered chain homotopy equivalence and has skew-degree \((0,0)\)._

Proof.: \(\Psi\) is a skew \(\mathcal{R}^{-}\)-module homomorphism by construction, and it is a chain map because \(\partial^{2}=0\) on \(C_{\Psi}=\operatorname{Cone}(\Psi)\). It is clear that \(\Psi\) is flip-filtered since it takes each generator of \(C_{+2\epsilon}\) (which has \(V\)-filtration level zero as an element of \(C^{\infty}\)) to a sum of terms that are each a nonnegative power of \(W\) times a generator of \(C_{-2\epsilon}\) (which has \(U\)-filtration level zero). The complex \(C_{\Psi}\) clearly has trivial homology, since \(\mu_{\Psi}\) can be homotoped off of \(\Gamma\); it follows that \(\Psi\) is a chain homotopy equivalence. Since \(\mu_{\Psi}\) has bigrading \((\frac{1}{2},\frac{1}{2})\) on the portion coming from \(\mu_{2\epsilon}\) and bigrading \((-\frac{1}{2},-\frac{1}{2})\) on the portion coming from \(-\mu_{-2\epsilon}\), the map \(\Psi\) on \(C_{+2\epsilon}|_{A=0}\) preserves the bigrading. Since the Alexander grading is zero, interchanging the gradings has no effect, and \(\Psi\) has skew-degree \((0,0)\). This property is preserved when \(\Psi\) is extended as a skew-module homomorphism.
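For later computations it is convenient to spell out the skew-linearity used in this extension. Consistent with the computations in the examples below (we record this as an observation extracted from them, not an additional assumption), a skew \(\mathcal{R}^{-}\)-module homomorphism interchanges the roles of \(U\) and \(V\):
\[\Psi(U^{i}V^{j}x)=U^{j}V^{i}\,\Psi(x),\]
so in particular \(\Psi(Wx)=W\Psi(x)\). For instance, in Example 5.4 below a bigon contributes \(Ua\) to \(\Psi(Ub)\), and skew-linearity gives \(\Psi(b)=V^{-1}(Ua)=UV^{-1}a\), as stated there. With this rule one can also verify directly that \(\Psi\) is a chain map on the complex of Example 5.2: for instance, \(\Psi(\partial c)=-V\Psi(a)+V\Psi(b)+U\Psi(f)=0=\partial\Psi(c)\).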
**Example 5.4**.: Consider the curve \((\Gamma,\mathbf{b})\) in \(\mathcal{Z}\) shown in Figure 21; this is obtained from the decorated immersed arc in \(\mathcal{S}\) in Figure 20 by identifying the sides of \(\mathcal{S}\) such that the endpoints of the immersed arc are identified. The complex \(C=C(\Gamma,\mathbf{b})\) is the complex \(C\) computed in Example 5.2 (it is easy to see that, in this case, identifying the sides of the strip does not affect the differential). The flip map \(\Psi:C^{\infty}\to C^{\infty}\) counts generalized bigons between \((\Gamma,\mathbf{b})\) and \(\mu_{\Psi}\) involving the cap portion of \(\mu_{\Psi}\); there are three such bigons, shown in Figure 21. The first contributes \(V^{-1}a\) to \(\Psi(U^{-1}g)\), the second contributes \(W(V^{-1}a)=Ua\) to \(\Psi(Ub)\), and the third contributes \(W(V^{-1}a)=Ua\) to \(\Psi(Ua)\). Using the skew-module homomorphism property of \(\Psi\), we have \(\Psi(a)=\Psi(b)=UV^{-1}a\), \(\Psi(g)=a\), and \(\Psi(c)=\Psi(d)=\Psi(e)=\Psi(f)=0\). We remark that we can more easily find the induced isomorphism \(\Psi_{*}\) from the horizontal homology of \(C\) to the vertical homology of \(C\) by perturbing \(\mu_{\Psi}\) so that the vertical portions are closer to \(\mu_{\frac{1}{2}}\). When we do this, the single intersection of the left vertical piece of \(\mu_{\Psi}\) with \(\Gamma\) corresponds to \(g\) (the generator of horizontal homology of \(C\)), the single intersection of the right vertical piece with \(\Gamma\) corresponds to \(a\) (the generator of vertical homology), and the single bigon indicates that \(\Psi_{*}(g)=a\).

**Example 5.5**.: Consider the curve \(\Gamma\) in \(\mathcal{Z}\) shown in Figure 22. The values of the bigrading function on \(\Gamma\) at intersections with \(\mu\) are indicated in the figure. We equip this curve with the trivial bounding chain \(\mathbf{b}\) and omit it from the notation. The complex \(C=C(\Gamma)\) is the Floer complex (in \(\mathcal{Z}^{z,w}\)) of \(\Gamma\) with the vertical line \(\mu\). There are five generators, \(a^{\prime}\), \(b^{\prime}\), \(c^{\prime}\), \(d^{\prime}\), and \(e^{\prime}\), with bigradings \((2,0)\), \((1,1)\), \((0,0)\), \((1,1)\), and \((0,2)\), respectively. There are four bigons giving the differential
\[\partial(a^{\prime})=-Vb^{\prime},\quad\partial(b^{\prime})=0,\quad\partial(c^{\prime})=UVb^{\prime}-UVd^{\prime},\quad\partial(d^{\prime})=0,\quad\text{and}\quad\partial(e^{\prime})=Ud^{\prime}.\]
To determine the flip map we consider the Floer complex with \(\mu_{\Psi}\), specifically the terms coming from bigons involving the cap portion of \(\mu_{\Psi}\). There are three obvious bigons arising from the three arcs from the left side of \(\mu_{\Psi}\) to the right side of \(\mu_{\Psi}\). Note that because of how \(\Psi\) is defined from a curve, all three of these bigons contribute with positive sign even though one of the arcs is oriented leftward. There is also a fourth bigon contributing to \(\Psi\), which is shown in the figure.
From these four bigons we see that
\[\Psi(a^{\prime})=UV^{-1}a^{\prime}+V^{-1}c^{\prime},\quad\Psi(b^{\prime})=d^{\prime},\quad\Psi(c^{\prime})=Ve^{\prime},\quad\Psi(d^{\prime})=0,\quad\text{and}\quad\Psi(e^{\prime})=0.\]
We observe that this complex and flip map agree with the ones in Example 2.6 coming from the dual knot of \(+1\)-surgery on the left-handed trefoil after the change of basis given by
\[a^{\prime}=-a,\quad b^{\prime}=b,\quad c^{\prime}=-c+Ua-Ve,\quad d^{\prime}=d,\quad\text{and}\quad e^{\prime}=e.\]

Figure 21. The three bigons contributing to the flip map \(\Psi\) associated to the pictured curve \((\Gamma,\mathbf{b})\) in \(\mathcal{Z}\).

When we computed \(C\) in the last example, some of the bigons wrapped completely around the cylinder. As a result, if we cut \(\mathcal{Z}\) open along \(\mu_{\frac{1}{2}}\) to get \(\mathcal{S}\) and computed the complex associated with the resulting curve, we would not get the same result. However, it is always possible to perturb a decorated curve \((\Gamma,\mathbf{b})\) in \(\mathcal{Z}\) so that bigons contributing to \(C(\Gamma,\mathbf{b})\) do not intersect \(\mu_{\frac{1}{2}}\). Such a perturbation for Example 5.5 is shown in Figure 23. Note that the homotopy performed on the curve introduces immersed monogons and therefore requires including some of the new self-intersection points in the bounding chain, following move \((j)\) in Figure 8.

When curves in \(\mathcal{Z}\) are perturbed as above, we can cut the cylinder into two strips, a marked strip \(\mathcal{S}\) and an unmarked strip \(\mathcal{F}\), so that \(C\) is determined by the restriction of \((\Gamma,\mathbf{b})\) to \(\mathcal{S}\) and the flip isomorphism \(\Psi_{*}\) is determined by the restriction to \(\mathcal{F}\). We simply let \(\mathcal{F}\) be a neighborhood of \(\mu_{\frac{1}{2}}\) and let \(\mathcal{S}\) be the complement of \(\mathcal{F}\) in \(\mathcal{Z}\). We choose this neighborhood small enough so that bigons contributing to the differential in \(C\) (which are assumed to be disjoint from \(\mu_{\frac{1}{2}}\)) are disjoint from \(\mathcal{F}\). We also choose \(\mathcal{F}\) to be small enough so that the restriction of \(\Gamma\) to \(\mathcal{F}\) consists of arcs that move from one side of \(\mathcal{F}\) to the other (i.e. \(\Gamma\) has no vertical tangencies in \(\mathcal{F}\)). Under this assumption, we can perturb the curve \(\mu_{\Psi}\) by sliding the vertical portions from \(\mu_{2\epsilon}\) to \(\partial_{L}\mathcal{F}\) and from \(\mu_{-2\epsilon}\) to \(\partial_{R}\mathcal{F}\), and all bigons contributing to \(\Psi_{*}\) are then contained in \(\mathcal{F}\).

Figure 22. Left: A curve \(\Gamma\) in \(\mathcal{Z}\); the value of the bigrading function \((\tilde{\tau}_{w},\tilde{\tau}_{z})\) at intersections with \(\mu\) is indicated. Middle: The curve in \(\mathcal{Z}^{z,w}\) paired with \(\mu\) to compute the complex \(C(\Gamma)\); the shaded bigon gives the term \(UVb^{\prime}\) in \(\partial(c^{\prime})\). Right: The curve paired with \(\mu_{\Psi}\) to compute the flip map; the shaded bigon contributes \(W(V^{-1}a^{\prime})=Ua^{\prime}\) to \(\Psi(Ua^{\prime})\).

Figure 23. A decorated curve \((\Gamma,\mathbf{b})\) in \(\mathcal{Z}^{z,w}=\mathcal{S}^{z,w}\cup\mathcal{F}\) obtained from the curve in Figure 22 by a homotopy, with the property that bigons contributing to the differential on \(C(\Gamma,\mathbf{b})\) are contained in \(\mathcal{S}\). The bounding chain is now nontrivial. The shaded generalized bigon on the left contributes the term \(UVb\) in \(\partial(c)\). Perturbing \(\mu_{\Psi}\) to lie in \(\mathcal{F}\) as on the right and taking Floer homology determines the flip isomorphism \(\Psi_{*}\).
Counting bigons contributing to \(\Psi_{*}\) is then the same as counting paths from \(\partial_{L}\mathcal{F}\) to \(\partial_{R}\mathcal{F}\) (these may be polygonal paths, if the bounding chain includes self-intersection points in \(\mathcal{F}\)). We caution that the intersections of \(\Gamma\) with \(\partial_{L}\mathcal{F}\) and \(\partial_{R}\mathcal{F}\) correspond to a basis of the horizontal and vertical complexes of \(C\), respectively, but these may not be the same as the bases coming from intersecting \(\Gamma\) with \(\mu_{2\epsilon}\) and \(\mu_{-2\epsilon}\).

### Naive curve representatives of bigraded complexes

We have seen that any immersed multicurve \(\Gamma\) with a bounding chain \(\mathbf{b}\) in \(\mathcal{S}\) determines a bigraded complex over \(\mathcal{R}^{-}\) along with a choice of (ordered) basis corresponding to the intersection points of \(\Gamma\) with \(\mu\). In this case we say that \((\Gamma,\mathbf{b})\) represents the complex with respect to this basis. We will now observe that any bigraded complex over \(\mathcal{R}^{-}\) can be represented in this way.

**Proposition 5.6**.: _For any bigraded complex \(C\) over \(\mathcal{R}^{-}\) and any choice of basis \(\{x_{1},\ldots,x_{n}\}\) for \(C\), there exists an immersed multicurve \(\Gamma\) in \(\mathcal{S}\) with a collection of turning points \(\mathbf{b}\) such that \(C(\Gamma,\mathbf{b})\) is isomorphic to \(C\), with the isomorphism taking the preferred basis of \(C(\Gamma,\mathbf{b})\) to \(\{x_{1},\ldots,x_{n}\}\). The same is true for complexes over any quotient of \(\mathcal{R}^{-}\)._

Proof.: The curve \(\Gamma\) consists of a collection of \(n\) immersed arcs, one for each generator of \(C\), connecting the left and right boundaries of \(\mathcal{S}\). These arcs intersect \(\mu\) at appropriate heights determined by the Alexander grading of the corresponding generator, but the endpoints on each side are rearranged so that every arc crosses every other arc once on each side of \(\mu\). The grading function \(\tilde{\tau}\) on \(\Gamma\) is defined to agree with \(-\operatorname{gr}_{w}\) at intersection points corresponding to generators; in particular, the arc corresponding to \(x_{i}\) is oriented rightward if \(\operatorname{gr}_{w}(x_{i})\) is even and leftward if \(\operatorname{gr}_{w}(x_{i})\) is odd. To define the collection of turning points \(\mathbf{b}\), for each arrow from \(x_{i}\) to \(x_{j}\) in the complex we include the intersection point \(p_{ij}\) between the arcs corresponding to \(x_{i}\) and \(x_{j}\) that is left of \(\mu\) if \(x_{i}\) is above \(x_{j}\), or right of \(\mu\) if \(x_{i}\) is below \(x_{j}\). If the arrow from \(x_{i}\) to \(x_{j}\) has weight \(cU^{a}V^{b}\), \(p_{ij}\) appears in \(\mathbf{b}\) with coefficient \(c\) if the polygonal path in \(\Gamma\) from \(x_{i}\) to \(p_{ij}\) to \(x_{j}\) follows the orientation on \(\Gamma\), and with coefficient \(-c\) otherwise.

To compute \(\partial\) on \(C(\Gamma,\mathbf{b})\), we count generalized immersed bigons whose boundary is a polygonal path in \(\Gamma\) consistent with \(\mathbf{b}\) along with a segment of \(\mu\). For any such bigon, the net rotation along the polygonal path in \(\Gamma\) must be \(\pi\).
On the other hand, the net rotation along any polygonal path in \(\Gamma\) consistent with \(\mathbf{b}\) between any two points on \(\mu\) is given by \(\pi\) times the number of left turns in the path. This is because the immersed arcs move monotonically rightward or leftward, while at left turns the polygonal path must go from rightward moving to leftward moving or from leftward moving to rightward moving (since arrows in \(C\) connect generators with gradings of opposite parity, so only intersections between oppositely oriented arcs appear in \(\mathbf{b}\)). Thus to compute \(C(\Gamma,\mathbf{b})\) we need only consider the triangles with corners \(x_{i}\), \(x_{j}\), and \(p_{ij}\) for each pair \(1\leq i,j\leq n\). The coefficients on each \(p_{ij}\) in \(\mathbf{b}\) were chosen so that each triangle precisely recovers an arrow in \(C\).

Note that the collection of turning points \(\mathbf{b}\) constructed here is not necessarily a bounding chain. However, by construction the precomplex \(C(\Gamma,\mathbf{b})\) is a complex so it still makes sense to say that \((\Gamma,\mathbf{b})\) represents \(C\). A pair \((\Gamma,\mathbf{b})\) constructed as above will be called a _naive immersed curve representative_ for \(C\). As an example, Figure 24 shows the naive immersed curve representative for the complex in Example 5.2.

By similar reasoning, we could construct a naive representative for a complex \(C\) with a flip map \(\Psi\) in the cylinder \(\mathcal{Z}\). Viewing \(\mathcal{Z}\) as the union of two strips \(\mathcal{S}\) and \(\mathcal{F}\), the decorated curve restricted to \(\mathcal{S}\) is the naive representative of \(C\). In \(\mathcal{F}\) we place another copy of the decorated curve in \(\mathcal{S}\) representing \(C\) (or the image of this under the obvious map \(\mathcal{S}\to\mathcal{F}\) that ignores the marked points), and then add additional appropriately weighted turning points in \(\mathcal{F}\) from the segment corresponding to \(x_{i}\) to the segment corresponding to \(x_{j}\) for each \(x_{j}\) term in \(\Psi(x_{i})\). One can check that the resulting curve \((\Gamma,\mathbf{b})\) represents \(C\) and \(\Psi\), and that \(\mathbf{b}\) is a bounding chain.

While it is good to know that any complex can be represented by _some_ decorated curve, the naive immersed curve representative is not particularly easy to work with. Moreover, it is highly dependent on the representative of the complex \(C\) used to construct it, so a homotopy equivalence class of complexes gives rise to many different decorated curves. Our challenge moving forward will be to find a simpler decorated curve \((\Gamma,\mathbf{b})\) that represents \(C\) and that is uniquely determined (up to an appropriate sense of equivalence) for any chain homotopy equivalence class of complexes.

## 6. Simple immersed curve representatives of bigraded complexes

### Simple position for curves

We have established that any complex can be represented by a decorated curve in \(\mathcal{S}\), but these representatives are not particularly nice. With the goal in mind of defining better immersed curve representatives for a complex, we now discuss some constraints we wish to impose on our decorated immersed curves \((\Gamma,\mathbf{b})\). Some assumptions have already been mentioned: we will assume transverse self-intersection with no triple points, and we assume that \(\Gamma\) is perpendicular to \(\mu\).
We will also assume that \(\Gamma\) has minimal intersection with \(\mu\) (among curves in its homotopy class, where homotopies do not cross the marked points). We assume that no component of \(\Gamma\) is disjoint from \(\mu\), since such a component would not contribute to \(C(\Gamma,\mathbf{b})\) and can be safely ignored. We will often assume that \(\Gamma\) has minimal self-intersection in its homotopy class, though this assumption will need to be relaxed at times.

Figure 24. A naive immersed curve representative for a complex.

Figure 25. The crossing region for a non-primitive component of \(\Gamma\). The basepoint for the component lies within the crossing region on the strand which crosses all other strands, as shown, and the strands run parallel outside this region.

In addition to requiring curves to be in minimal position, we will also assume that non-primitive components of \(\Gamma\) have a particular form. A closed component \(\gamma\) of \(\Gamma\) is non-primitive if it is homotopic to \(k\) copies of some simpler closed curve. If \(\gamma^{\prime}\) is primitive and \([\gamma]=k[\gamma^{\prime}]\), we say \(\gamma\) has multiplicity \(k\). In this case, we will assume that \(\gamma\) lies in a neighborhood of \(\gamma^{\prime}\) and that \(\gamma\) looks like \(k\) parallel copies of \(\gamma^{\prime}\) outside of a small crossing region of the form depicted in Figure 25. There are \(k-1\) self-intersection points of \(\gamma\) in this region, which all have degree \(0\). We will also assume that there is a single basepoint on \(\gamma\) carrying the weight of the curve and that this basepoint lies in the crossing region as shown. Note that for each intersection of \(\gamma^{\prime}\) with \(\mu\), there are \(k\) intersections of \(\gamma\) with \(\mu\). We will refer to this set of intersection points, and the corresponding generators of \(C(\Gamma,\mathbf{b})\), as a _grouping_ of generators. We will assume that \(\gamma\) lies in a small enough neighborhood of \(\gamma^{\prime}\) that no other intersections of \(\Gamma\) with \(\mu\) fall between those in the same grouping.

**Definition 6.1**.: An immersed multicurve \(\Gamma\) in \(\mathcal{S}\) is in _simple position_ if

* \(\Gamma\) has transverse self-intersection points and no triple points;
* \(\Gamma\) is horizontal at intersections with the vertical line \(\mu\) as well as intersections with \(\partial\mathcal{S}\);
* every component of \(\Gamma\) intersects \(\mu\), and does so minimally;
* \(\Gamma\) has minimal self-intersection number; and
* each non-primitive closed component of \(\Gamma\) has the form of parallel strands outside of a crossing region, as described above.

At times, we will need to relax the minimal self-intersection condition in two ways. First, we will require that no two components of \(\Gamma\) bound an immersed annulus; starting from minimal position, this requires adding a pair of intersection points between any two parallel closed components. Second, we will impose a certain ordering on the endpoints of immersed arc components appearing on \(\partial_{L}\mathcal{S}\) and \(\partial_{R}\mathcal{S}\); from minimal position this can be achieved by sliding endpoints of immersed arcs along the relevant boundary of \(\mathcal{S}\).
The ordering is specified using the grading functions \(\tilde{\tau}_{w}\) and \(\tilde{\tau}_{z}\) on \(\Gamma\): endpoints on \(\partial_{L}\mathcal{S}\) are ordered so that \(\tilde{\tau}_{w}\) is non-decreasing moving downward along \(\partial_{L}\mathcal{S}\), and endpoints on \(\partial_{R}\mathcal{S}\) are ordered so that \(\tilde{\tau}_{z}\) is non-decreasing moving upward along \(\partial_{R}\mathcal{S}\). Note that each endpoint \(x\) of \(\Gamma\) on \(\partial\mathcal{S}\) has a corresponding intersection \(x^{\prime}\) of \(\Gamma\) with \(\mu\), the first such intersection point reached when traveling along \(\Gamma\) from \(x\). Moreover, if \(x\) is on \(\partial_{L}\mathcal{S}\) then \(\tilde{\tau}_{w}(x)=\tilde{\tau}_{w}(x^{\prime})\), since \(\Gamma\) is horizontal at both \(x\) and \(x^{\prime}\) and there are no \(w\)-marked points or corresponding grading arcs on the left side of \(\mu\) in \(\mathcal{S}^{z,w}\). Similarly, if \(x\) is on \(\partial_{R}\mathcal{S}\) then \(\tilde{\tau}_{z}(x)=\tilde{\tau}_{z}(x^{\prime})\). Since the bigrading \((\operatorname{gr}_{w},\operatorname{gr}_{z})\) of the generator of \(C(\Gamma,\mathbf{b})\) is given by \((-\tilde{\tau}_{w}(x),-\tilde{\tau}_{z}(x))\), we can equivalently understand the ordering of endpoints as requiring that endpoints on \(\partial_{L}\mathcal{S}\) with higher \(\operatorname{gr}_{w}\) appear higher and endpoints on \(\partial_{R}\mathcal{S}\) with higher \(\operatorname{gr}_{z}\) appear lower, where here \(\operatorname{gr}_{w}\) and \(\operatorname{gr}_{z}\) refer to the gradings on the corresponding generator of \(C(\Gamma,\mathbf{b})\).

A _segment_ of \(\Gamma\) will refer to a connected component of \(\Gamma\setminus(\Gamma\cap\mu)\). Although our curves may not be in minimal position, we will require that any two segments of \(\Gamma\) intersect at most once. To ensure this holds, when we add a pair of extra intersection points between parallel curves we should add them on opposite sides of \(\mu\), as in Figure 26.

Figure 26. Multicurves in almost simple position. On the left, starting from minimal position two intersection points were added (on opposite sides of \(\mu\)) to break up an immersed annulus. On the right, the value of the bigrading \((\tilde{\tau}_{w},\tilde{\tau}_{z})\) is shown for each intersection with \(\mu\), and the endpoints of immersed arcs are arranged in the required order.

**Definition 6.2**.: An immersed multicurve \(\Gamma\) in \(\mathcal{S}\) is in _almost simple position_ if

* \(\Gamma\) has transverse self-intersection points and no triple points;
* \(\Gamma\) is horizontal at intersections with \(\mu\) or \(\partial\mathcal{S}\);
* every component of \(\Gamma\) intersects \(\mu\), and does so minimally;
* each non-primitive closed component has the form of parallel strands outside of a crossing region, as described above;
* the endpoints of \(\Gamma\) on \(\partial_{L}\mathcal{S}\) (resp. \(\partial_{R}\mathcal{S}\)) appear in order of non-decreasing \(\tilde{\tau}_{w}\) (resp. \(\tilde{\tau}_{z}\)) moving downward (resp. upward);
* \(\Gamma\) does not bound an immersed annulus; and
* \(\Gamma\) has minimal self-intersection number subject to the above constraints, and any two segments of \(\Gamma\) intersect at most once.

We can also define simple position and almost simple position for immersed multicurves in the marked cylinder \(\mathcal{Z}\): the definitions are the same except that we can ignore the conditions concerning endpoints of immersed arcs in \(\partial\mathcal{S}\).

### Local systems

The \(k-1\) self-intersection points appearing in the crossing region for a non-primitive curve \(\gamma\) play a special role; we will call these _local system intersection points_. These self-intersection points always have degree zero.
Given an immersed multicurve \(\Gamma\) in simple or almost simple position, recall that \(\mathcal{I}\) is the collection of self-intersection points and \(\mathcal{I}_{0}\) is the subset of these points with degree zero. Let \(\mathcal{I}_{0}^{*}\subset\mathcal{I}_{0}\) denote the set of local system intersection points for all non-primitive components of \(\Gamma\). For a collection of turning points \(\mathbf{b}\), let \(\widehat{\mathbf{b}}\) denote the corresponding linear combination of \(\mathcal{I}_{0}\) obtained by forgetting intersection points with negative degree, and similarly let \(\widehat{\mathbf{b}}^{*}\) denote the corresponding linear combination of points in \(\mathcal{I}_{0}^{*}\). For each non-primitive component \(\gamma\) of \(\Gamma\), let \(\widehat{\mathbf{b}}_{\gamma}^{*}\) denote the corresponding linear combination of any local system intersection points on \(\gamma\).

A local system is an automorphism of \(\mathbb{F}^{k}\), recorded as a similarity class of invertible \(k\times k\) matrices. If \(\gamma\) has multiplicity \(k\) and the underlying primitive curve is \(\gamma^{\prime}\), then \(\widehat{\mathbf{b}}_{\gamma}^{*}\) along with the weight associated with the basepoint on \(\gamma\) determines a \(k\)-dimensional local system on \(\gamma^{\prime}\). We think of each generator of \(\mathbb{F}^{k}\) as indexing a parallel copy of \(\gamma^{\prime}\) and the matrix as recording how paths jump between these copies of \(\gamma^{\prime}\) when making one full loop, where the \((i,j)\)-entry is a weighted count of paths through the crossing region from the \(i\)th copy of \(\gamma^{\prime}\) on the left side of the crossing region to the \(j\)th copy on the right side of the crossing region. We will index the copies \(1,\dots,k\) moving upward on each side, and we let \(c_{i}\) be the coefficient in \(\widehat{\mathbf{b}}_{\gamma}\) of the intersection between the strand starting at \(1\) and the strand starting at \(i+1\), and let \(c_{0}\) be the weight associated with the basepoint on \(\gamma\). Then the matrix defining the local system has the following form:
\[\begin{bmatrix}0&0&\cdots&0&c_{0}\\ 1&0&\cdots&0&c_{1}\\ 0&1&\cdots&0&c_{2}\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&1&c_{k-1}\end{bmatrix}.\]
This matrix is invertible since we require \(c_{0}\) to be nonzero. Note that this matrix is in rational canonical form. Conversely, a \(k\)-dimensional local system on \(\gamma^{\prime}\) (that is not a direct sum of smaller local systems) uniquely determines a \(k\times k\) matrix in rational canonical form, and the coefficients of this matrix determine the coefficients of the relevant intersection points in \(\widehat{\mathbf{b}}_{\gamma}^{*}\) and the weight on the basepoint of \(\gamma\). In this way the linear combination \(\widehat{\mathbf{b}}_{\gamma}^{*}\) along with the weight on \(\gamma\) is equivalent to a choice of \(k\)-dimensional local system on the primitive curve underlying \(\gamma\).
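For example (a standard linear algebra observation, included for concreteness), a multiplicity \(k=2\) component with basepoint weight \(c_{0}\) and a single local system intersection point with coefficient \(c_{1}\) carries the local system
\[\begin{bmatrix}0&c_{0}\\ 1&c_{1}\end{bmatrix},\]
the companion matrix of \(t^{2}-c_{1}t-c_{0}\). Over \(\mathbb{F}=\mathbb{Z}/2\mathbb{Z}\), taking \(c_{0}=c_{1}=1\) gives the companion matrix of the irreducible polynomial \(t^{2}+t+1\); since this matrix has no eigenvalues in \(\mathbb{F}\), the corresponding \(2\)-dimensional local system is not a direct sum of two \(1\)-dimensional local systems.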
For primitive curves \(\gamma\) there are no local system intersection points, so \(\widehat{\mathbf{b}}_{\gamma}^{*}\) is trivial, leaving only the (nonzero) weight \(c_{0}\) of the curve; this can also be interpreted as a \(1\)-dimensional local system represented by the invertible \(1\times 1\) matrix \([c_{0}]\). Considering all components of the weighted curve \(\Gamma\), \(\widehat{\mathbf{b}}^{*}\) and the weights on \(\Gamma\) encode a choice of local system on the primitive curves underlying each component.

**Definition 6.3**.: We will say that a linear combination \(\widehat{\mathbf{b}}\) of points in \(\mathcal{I}_{0}\) has _local system type_ if the coefficient of any point in \(\mathcal{I}_{0}\setminus\mathcal{I}_{0}^{*}\) is zero, so that \(\widehat{\mathbf{b}}^{*}=\widehat{\mathbf{b}}\).

In Section 7 we will show that any bigraded complex over \(\widehat{\mathcal{R}}\) can be represented up to homotopy by a decorated immersed multicurve \((\Gamma,\widehat{\mathbf{b}})\) such that \(\widehat{\mathbf{b}}\) is of local system type. This is equivalent to saying that every complex over \(\widehat{\mathcal{R}}\) can be represented by an immersed multicurve decorated with local systems (compare [11, Theorem 1] and [12, Theorem 1]).

### Useful properties of precomplexes from curves in simple position

In Section 9, we will consider immersed multicurves \(\Gamma\) with collections of turning points \(\mathbf{b}\) with the additional hypotheses that \(\Gamma\) is in almost simple position and that the restriction \(\widehat{\mathbf{b}}\) of \(\mathbf{b}\) to degree \(0\) self-intersection points is of local system type. Decorated curves of this form are relatively well behaved, and we now collect some properties of the corresponding precomplexes that will be useful in Section 9. In particular, although \(\partial^{2}\) need not be zero on this precomplex, we can show that certain terms of \(\partial^{2}\) must be zero. We will also see that if the precomplex is in fact a complex (i.e. if \(\partial^{2}\) is zero), then the collection of turning points must be a bounding chain.

Some of the properties below depend on a more general observation about monogons bounded by arcs on the boundary of an immersed disk. The following proposition is a restatement of Proposition A.2 in [10].

**Proposition 6.4**.: _Let \(f:D^{2}\to S^{2}\) be an immersion of a disk into \(S^{2}\), and suppose there is an arc \(A\subset\partial D^{2}\) such that \(f(A)\) is the counterclockwise oriented boundary of an immersed monogon in \(S^{2}\) (where the orientation on \(f(A)\) comes from the boundary orientation of \(D^{2}\)). Then there is another arc \(B\subset\partial D^{2}\) such that \(f(B)\) is the clockwise oriented boundary of an immersed monogon in \(S^{2}\), and \(f(B)\) lies in the interior of the monogon bounded by \(f(A)\)._

Note that this proposition applies to immersed disks in \(S^{2}\), but it also applies to immersed disks in the strip \(\mathcal{S}\), since \(\mathcal{S}\) is a subset of \(\mathbb{R}^{2}\), which itself is equivalent to \(S^{2}\setminus\{\infty\}\). It also applies in the infinite cylinder, which is topologically a twice punctured sphere. Recall that a polygonal path in \(\Gamma\) is a piecewise smooth path in \(\Gamma\) in which adjacent smooth sections are connected by left-turns at self-intersection points of \(\Gamma\). Such paths and the immersed polygons they bound can be related to the immersed disks in Proposition 6.4 by smoothing the corners.
When \(\Gamma\) is in simple or almost simple position, we have the following restrictions on immersed polygons bounded by polygonal paths in \(\Gamma\):

**Lemma 6.5**.: _For an immersed multicurve \(\Gamma\) in simple or almost simple position, let \(P\) be a polygonal path in \(\Gamma\) starting and ending at a self-intersection point \(q\) of \(\Gamma\). If \(P\) does not intersect \(\mu\), then the smoothing of \(P\) is not the clockwise oriented boundary of an immersed monogon._

Proof.: Suppose \(P\) is disjoint from \(\mu\). The path \(P\) consists of some number of arcs \(P_{0}\dots P_{k}\), with each \(P_{i}\) contained in a segment \(s_{i}\) of \(\Gamma\), connected by left turns at intersection points \(c_{1},\dots,c_{k}\), where \(c_{i}\in s_{i-1}\cap s_{i}\) and \(q\in s_{0}\cap s_{k}\). If the smoothing of \(P\) is the clockwise boundary of an immersed monogon then \(P\) is the boundary of an immersed polygon with a convex corner at \(q\) and all other corners non-convex. It is clear that there must be at least one non-convex corner \(c_{i}\), since the segments \(s_{i}\) do not intersect themselves. On the other hand, we will show that if such a polygon exists we can construct one with strictly fewer corners; repeating until there are no corners left gives a contradiction. At the corner \(c_{1}\), the segment \(s_{0}\) extends into the interior of the polygon and must leave the polygon again at some point \(q^{\prime}\in s_{0}\cap P\). Since any two segments of \(\Gamma\) intersect at most once, \(q^{\prime}\) does not lie on \(s_{1}\). We construct a path \(P^{\prime}\) by concatenating the segment of \(s_{0}\) from \(q^{\prime}\) to \(c_{1}\) with the portion of \(P\) from \(c_{1}\) to \(q^{\prime}\). This new path \(P^{\prime}\) is the clockwise oriented boundary of an immersed polygon with convex corner at \(q^{\prime}\). This polygon is contained in the one bounded by \(P\) and so is still disjoint from \(\mu\), and it has strictly fewer corners.

**Lemma 6.6**.: _For an immersed multicurve \(\Gamma\) in simple or almost simple position, let \(P\) be a polygonal path in \(\Gamma\) from \(x\) to \(y\) in \(\Gamma\cap\mu\) such that \(P\) along with the segment of \(\mu\) from \(y\) to \(x\) bounds an immersed polygon. If \(P^{\prime}\) is a portion of \(P\) starting and ending at a self-intersection point \(q\) and \(P^{\prime}\) does not intersect \(\mu\), then \(P^{\prime}\) is not the boundary of an immersed polygon._

Proof.: If we smooth the corners of the immersed polygon bounded by \(P\) and part of \(\mu\), we get an immersed disk \(D\). If \(P^{\prime}\) is the boundary of an immersed polygon, then after smoothing the corners \(P^{\prime}\) becomes the counterclockwise boundary of an immersed monogon with corner at \(q\). By Proposition 6.4, there must be a portion \(P^{\prime\prime}\) of the smoothing of \(P\) which is contained in the interior of this monogon and which is the clockwise boundary of an immersed monogon. This smooth path \(P^{\prime\prime}\) is the smoothing of some polygonal path \(P^{\prime\prime\prime}\subset P\). This \(P^{\prime\prime\prime}\) is the clockwise boundary of an immersed polygon with a single convex corner. Since the monogon bounded by the smoothing of \(P^{\prime}\) lies entirely on one side of \(\mu\), the path \(P^{\prime\prime}\), and thus \(P^{\prime\prime\prime}\), is also disjoint from \(\mu\), but this is impossible by Lemma 6.5.
We now consider a multicurve \(\Gamma\) in almost simple position equipped with any collection of turning points \(\mathbf{b}\). We do not assume that \(\mathbf{b}\) is a bounding chain.

**Lemma 6.7**.: _Let \(x\) and \(x^{\prime}>x\) be generators of \(C(\Gamma,\mathbf{b})\) that are connected by a segment \(s_{x,x^{\prime}}\) of \(\Gamma\) on the right side of \(\mu\)._

* _(i) If \(y>x^{\prime}\), then the coefficient of \(y\) in \(\partial^{2}(x)\) is zero._
* _(ii) If \(y<x\), then the coefficient of \(x^{\prime}\) in \(\partial^{2}(y)\) is zero._

Proof.: We will prove (i); (ii) is similar. The \(\mathcal{A}_{\infty}\)-relations (Proposition 3.10) imply that \(\partial^{2}(x)=-m_{2}^{\mathbf{b}}(x,m_{0}^{\mathbf{b}}())\). In particular, the \(y\) term in \(\partial^{2}(x)\) counts generalized triangles with corners at \(x\), \(y\), and \(p\) for any self-intersection point \(p\) of \(\Gamma\), weighted with an appropriate coefficient depending on \(p\). We will show that there are no such triangles, implying that the \(y\) term in \(\partial^{2}(x)\) is zero.

Suppose there is such a generalized immersed triangle \(D\). Let \(P\) be the polygonal path in \(\Gamma\) from \(x\) to \(y\) that along with the segment from \(y\) to \(x\) in \(\mu\) makes up the boundary of \(D\). Note that \(P\) consists of a polygonal path consistent with \(\mathbf{b}\) from \(x\) to \(p\) along with a polygonal path consistent with \(\mathbf{b}\) from \(p\) to \(y\). Let \(z\) denote the first intersection of \(P\) with \(\mu\) (not counting the initial point \(x\)), and let \(P^{\prime}\) denote the portion of \(P\) from \(x\) until \(z\). We first argue, using an induction on paths which agree with \(P^{\prime}\) up to a point, that \(x<z\leq x^{\prime}\).

Suppose the path \(P^{\prime}\) makes \(n\) left turns before reaching \(z\), and that the \(i\)th left turn is from a segment \(s_{i-1}\) of \(\Gamma\) to a segment \(s_{i}\) of \(\Gamma\). For \(0\leq i\leq n\), let \(P_{i}\) denote the piecewise smooth path in \(\Gamma\) that starts at \(x\) and agrees with \(P^{\prime}\) through the first \(i\) corners but then continues along \(s_{i}\) without making any more left-turns until reaching \(\mu\) at some point \(z_{i}\). In particular, \(P_{0}\) is just the segment \(s_{0}=s_{x,x^{\prime}}\) so that \(z_{0}=x^{\prime}\), and \(P_{n}\) agrees with the path \(P^{\prime}\) so that \(z_{n}=z\). Note that \(P_{0}\) and the segment in \(\mu\) from \(z_{0}\) to \(x\) bound a bigon \(D_{0}\). We will now show that \(x<z_{i}<z_{i-1}\) and that \(P_{i}\) along with the segment in \(\mu\) from \(z_{i}\) to \(x\) bounds an immersed polygon \(D_{i}\), for each \(0<i\leq n\). The path \(P_{i}\) diverges from the path \(P_{i-1}\) at some intersection point \(p_{i}\). Since \(P_{i}\) makes a left turn at \(p_{i}\) it turns into the interior of \(D_{i-1}\) along the segment \(s_{i}\) of \(\Gamma\), while the path \(P_{i-1}\) continues along the segment \(s_{i-1}\) until it reaches \(z_{i-1}\). We consider the point \(q\) at which the path \(P_{i}\) first leaves the polygon \(D_{i-1}\). 
This point can not be on the segment of \(s_{i-1}\) between \(p_{i}\) and \(z_{i-1}\), because \(p_{i}\) is the only intersection between the segments \(s_{i-1}\) and \(s_{i}\). It also can not be on the part of \(P_{i}\) before the turn at \(p_{i}\). If it were, the portion of \(P_{i}\) from \(q\) to \(q\) would be the counterclockwise oriented boundary of an immersed polygon lying entirely on the right side of \(\mu\). Moreover, since the path \(P\) can only deviate from \(P_{i}\) by turning leftward (into the immersed polygon) and would then need to leave the immersed polygon somewhere, it is clear in this case that a portion of \(P\) also bounds a counterclockwise immersed polygon disjoint from \(\mu\). But this is impossible by Lemma 6.6. Thus \(P_{i}\) must leave the polygon \(D_{i-1}\) at some point \(z_{i}\) on the segment of \(\mu\) between \(x\) and \(z_{i-1}\). It follows that \(x<z_{i}<z_{i-1}\). Removing the triangle formed by \(\mu\), \(s_{i}\), and \(s_{i-1}\) from the polygon \(D_{i-1}\) gives a new polygon \(D_{i}\) whose boundary is the polygonal path \(P_{i}\) followed by the segment of \(\mu\) from \(z_{i}\) to \(x\).

Since \(z\leq x^{\prime}\), we have in particular that \(z<y\). We now consider the portion of the boundary of \(D\) given by the segment of \(\mu\) from \(z\) to \(x\) followed by the polygonal path \(P^{\prime}\) from \(x\) to \(z\). This is the counterclockwise boundary of an immersed polygon with a convex corner at \(z\), namely the polygon \(D_{n}\). However, this is not possible by the same reasoning used in the proof of Lemma 6.6: if we smooth the boundary of \(D\) we get an immersed disk, and smoothing the boundary of \(D_{n}\) except the corner at \(z\) gives an arc in the boundary of this disk bounding a counterclockwise monogon. By Proposition 6.4, there must be a portion of the boundary of the smoothed disk lying strictly inside \(D_{n}\) that bounds a clockwise monogon. This must come from smoothing a portion of the polygonal path \(P\) that is the clockwise oriented boundary of an immersed polygon in the interior of \(D_{n}\); since the interior of \(D_{n}\) is disjoint from \(\mu\), such a path does not exist by Lemma 6.5. 

The following lemma is the analogue of Lemma 6.7 for segments and bigons on the left side of \(\mu\). The proof is identical, with all pictures rotated by \(\pi\).

**Lemma 6.8**.: _Consider an immersed multicurve \(\Gamma\) in \(\mathcal{S}\) with a collection of turning points \(\mathbf{b}\). Let \(x\) and \(x^{\prime}<x\) be generators of the precomplex \(C(\Gamma,\mathbf{b})\) that are connected by a segment \(s_{x,x^{\prime}}\) of \(\Gamma\) on the left side of \(\mu\)._

* _(i) If \(y<x^{\prime}\), then the coefficient of \(y\) in \(\partial^{2}(x)\) is zero;_
* _(ii) If \(y>x\), then the coefficient of \(x^{\prime}\) in \(\partial^{2}(y)\) is zero._

Our last observation about \(\partial^{2}\) in the precomplex \(C(\Gamma,\mathbf{b})\) concerns when it is identically zero. As noted earlier, if the collection of turning points \(\mathbf{b}\) is a bounding chain then \(\partial^{2}=0\). The converse to this does not hold in general, but it does when \(\Gamma\) is in almost simple position and the restriction \(\widehat{\mathbf{b}}\) of \(\mathbf{b}\) to degree zero intersection points is of local system type. 
**Proposition 6.9**.: _Consider an immersed multicurve \(\Gamma\) in \(\mathcal{S}\) in almost simple position decorated with a collection of turning points \(\mathbf{b}\) such that \(\widehat{\mathbf{b}}\) is of local system type. In the precomplex \(C(\Gamma,\mathbf{b})\), \(\partial^{2}=0\) if and only if \(\mathbf{b}\) is a bounding chain._

Proof.: By the \(\mathcal{A}_{\infty}\)-relations, we have that

\[\partial^{2}(x)=m_{1}^{\mathbf{b}}(m_{1}^{\mathbf{b}}(x))=(-1)^{\mathrm{gr}_{z}(x)}m_{2}^{\mathbf{b}}(m_{0}^{\mathbf{b}}(),x)-m_{2}^{\mathbf{b}}(x,m_{0}^{\mathbf{b}}()).\]

The first term on the right is zero, since \(m_{0}^{\mathbf{b}}\) is zero on \(\mu\). Clearly if \(\mathbf{b}\) is a bounding chain on \(\Gamma\), meaning that \(m_{0}^{\mathbf{b}}=0\), then \(\partial^{2}=0\). We need to show the converse, that if \(m_{0}^{\mathbf{b}}\) is not zero then \(\partial^{2}\neq 0\).

We have that \(m_{0}^{\mathbf{b}}()\) is a linear combination over \(\mathcal{R}^{-}\) of self-intersection points of \(\Gamma\). In fact, since \(\deg_{w}=\deg_{z}\), the powers of \(U\) and \(V\) agree in every coefficient of \(m_{0}^{\mathbf{b}}()\). That is, we can think of the coefficients as being in \(\mathbb{F}[W]\), where \(W\) denotes the product \(UV\). For any \(p\) in \(\mathcal{I}\) the power of \(W\) appearing in the coefficient of \(p\) must be \(1+\deg(p)/2\). Thus we have

\[m_{0}^{\mathbf{b}}()=\sum_{p\in\mathcal{I}}c_{p}W^{1+\deg(p)/2}p.\]

Figure 28. The path \(P_{i}\) and bigon \(D_{i}\) described in the proof of Lemma 6.7. The path \(P_{i}\) differs from \(P_{i-1}\) by making one additional turn, and \(D_{i}\) is obtained from \(D_{i-1}\) by removing the darkly shaded triangular region.

Because \(\Gamma\) is in almost simple position, it is clear that there can be no generalized immersed monogons that do not either enclose a marked point of \(\mathcal{S}\) or have a false corner at an intersection point in \(\mathbf{b}\). Moreover, a false corner at a local system intersection point alone does not allow a path to bound a monogon, since the path lies in a neighborhood of a similar path with no false corners. Thus, since \(\widehat{\mathbf{b}}\) is of local system type, any monogon either encloses a marked point or contains a false corner at an intersection point with strictly negative degree; in either case, the weight of the monogon is a multiple of \(W\). It follows that if the coefficient \(c_{p}\) in \(m_{0}^{\mathbf{b}}()\) is nonzero then \(p\) has non-negative degree.

Suppose that \(m_{0}^{\mathbf{b}}()\) is nonzero. We choose an intersection point \(p\) with maximal degree such that \(c_{p}\) is nonzero. Let \(s_{x}\) and \(s_{y}\) be the two segments of \(\Gamma\) that intersect at \(p\). Note that at most one of \(s_{x}\) and \(s_{y}\) has an endpoint on \(\partial\mathcal{S}\); to see this, observe that the ordering on endpoints of immersed arcs assumed when \(\Gamma\) is in almost simple position implies that any crossing between two segments approaching the boundary has strictly negative degree. Since either \(s_{x}\) or \(s_{y}\) has two endpoints on \(\mu\), we may choose an endpoint \(x\) of \(s_{x}\) and an endpoint \(y\) of \(s_{y}\) such that the segments are either oriented from \(x\) and \(y\) to \(p\) or they are oriented from \(p\) to \(x\) and \(y\). Up to relabelling \(s_{x}\) and \(s_{y}\), we can assume that \(x<y\) if \(p\) lies on the right side of \(\mu\) and \(x>y\) if \(p\) lies on the left side of \(\mu\). 
We will show that the \(y\) term in \(\partial^{2}(x)\) is nonzero. By the \(\mathcal{A}_{\infty}\)-relations \(\partial^{2}(x)=-m_{2}^{\mathbf{b}}(x,m_{0}^{\mathbf{b}}())\), and the coefficient of \(y\) in this is \(\sum_{q\in\mathcal{I}}c_{q}n_{q}\), where \(c_{q}\) is the coefficient of \(q\) in \(m_{0}^{\mathbf{b}}()\) and \(n_{q}\) is the weighted count of generalized immersed triangles with corners at \(x\), \(y\), and \(q\). The triangle bounded by \(\mu\), \(s_{x}\), and \(s_{y}\) has corners at \(x\), \(y\), and \(p\); since \(c_{p}\) is nonzero by assumption, this gives a nontrivial contribution to the \(y\) coefficient in \(\partial^{2}(x)\).

Consider any other triangle with corners at \(x\), \(y\), and \(q\). The \(\Gamma\) part of the boundary of this triangle is a polygonal path from \(x\) to \(y\), which must start by moving along \(s_{x}\) and end by moving along \(s_{y}\). Because this polygonal path can only make left turns, similar reasoning to that in Lemma 6.7 (see Figure 28) implies that the path can not leave the triangle bounded by \(s_{x}\), \(s_{y}\), and \(\mu\). The polygonal path must contain at least one false corner at a point in \(\mathbf{b}\). If the degree of this false corner is negative then \(\deg(q)>\deg(p)\), since the sum of the degrees of all corners on a polygonal path from \(x\) to \(y\) is fixed, being determined by the gradings on \(x\) and \(y\). But this implies that \(c_{q}\) is zero, since \(p\) was chosen to have maximal degree with nonzero \(c_{p}\), and so the triangle does not contribute to the \(y\) coefficient of \(\partial^{2}(x)\).

If the false corner has degree zero, then it must be at a local system intersection point since \(\widehat{\mathbf{b}}\) is of local system type. It follows that at least one of \(s_{x}\) or \(s_{y}\) lies on a non-primitive component of \(\Gamma\) (say these components have multiplicities \(k_{x}\) and \(k_{y}\), respectively). In this case \(q\) is the intersection of \(s_{x^{\prime}}\) and \(s_{y^{\prime}}\), where \(s_{x^{\prime}}\) is in the bundle of \(k_{x}\) nearby segments containing \(s_{x}\) and \(s_{y^{\prime}}\) is in the bundle of \(k_{y}\) nearby segments containing \(s_{y}\). Note that there is a partial order on the \(k_{x}k_{y}\) intersection points between these two bundles of segments where \(q<p\) means that \(q\) is contained in the triangle formed by the segments intersecting at \(p\). In addition to choosing \(p\) to have maximal degree among intersection points with \(c_{p}\neq 0\), we should also choose it to be minimal with respect to this partial ordering. It then follows that if there is another triangle with corner at \(q\), then \(c_{q}=0\) and this triangle does not contribute to the \(y\) term in \(\partial^{2}(x)\). 

## 7. Immersed curves representing \(UV=0\) complexes

We have seen that an immersed curve \(\Gamma\) in the strip \(\mathcal{S}\) decorated with a bounding chain determines a bigraded complex \(C(\Gamma,\mathbf{b})\) over \(\mathcal{R}^{-}\), and that any complex can be represented in this way by some decorated immersed curve in \(\mathcal{S}\). Our next goal is to show that any complex can be represented by a decorated immersed curve of a particularly nice form. In this section, we do this in the substantially easier setting of complexes over \(\widehat{\mathcal{R}}\). 
That is, we will prove the following:

**Proposition 7.1**.: _For any bigraded complex \(C\) over \(\widehat{\mathcal{R}}\), there is a pair \((\Gamma,\widehat{\mathbf{b}})\) where \(\Gamma\) is an immersed multicurve in \(\mathcal{S}\) in simple position and \(\widehat{\mathbf{b}}\) is a bounding chain of local system type so that \(C(\Gamma,\widehat{\mathbf{b}})\) is homotopy equivalent to \(C\)._

Note that since we are working over \(\widehat{\mathcal{R}}\), \(\widehat{\mathbf{b}}\) is a linear combination of points in \(\mathcal{I}_{0}\) rather than \(\mathcal{I}_{\leq 0}\); any point with strictly negative degree would contribute a positive power of \(UV\) any time it appeared in an immersed polygon contributing to \(m_{k}^{\mathbf{b}}\), so these points can be ignored. Moreover, we only require \(\widehat{\mathbf{b}}\) to be a bounding chain over \(\widehat{\mathcal{R}}\); that is, \(m_{0}^{\mathbf{b}}\) vanishes modulo \(UV\). In light of Proposition 5.6, the interesting part of Proposition 7.1 is the fact that \(\Gamma\) is in simple position and \(\widehat{\mathbf{b}}\) is of local system type. In Section 10.3 we will see that for a homotopy equivalence class of complexes the representative \((\Gamma,\widehat{\mathbf{b}})\) of this form is unique, in the sense that \(\Gamma\) is well defined up to homotopy in the punctured strip \(\mathcal{S}^{*}\) and \(\widehat{\mathbf{b}}\) is well defined as a linear combination of self-intersection points (up to the obvious identification of local system self-intersection points between two homotopic curves in simple position).

**Remark 7.2**.: Proposition 7.1 (and the corresponding uniqueness statement) is equivalent to Theorem 5.14 in [13], and when \(\mathbb{F}=\mathbb{Z}/2\) it also follows from the proof of Theorem 5 in [12]. We include a proof in order to make this paper self-contained, and since we will build on this crucial construction in later sections when we remove the \(UV=0\) simplification. The proof here, while ultimately equivalent to the proofs in [13] and [12], avoids the language of type D structures and uses only the language of bigraded complexes over \(\widehat{\mathcal{R}}\), which may be more comfortable for some readers.

### Naive train tracks

We fix a complex \(C\) over \(\widehat{\mathcal{R}}\) that we wish to represent by a decorated immersed curve. By Proposition 2.2, we may assume that \(C\) is reduced. Proposition 5.6 ensures that there is some decorated curve representing \(C\) over \(\widehat{\mathcal{R}}\), so our strategy will be to systematically simplify this representative into the form predicted by Proposition 7.1. To describe this simplification, we will work with immersed train tracks so that we can use the arrow sliding moves developed in Section 4. A naive immersed curve representative for \(C\) determines a train track representing \(C\), which we call a _naive train track_ representing \(C\). One approach would be to start applying arrow slide moves to the crossover arrows in this train track (this would involve resolving crossings and would likely result in splitting off immersed arcs with both boundaries on \(\partial\mathcal{S}\), which could then be deleted). However, we can get a significant head start on simplifying the train track if we pick nice bases for the complex \(C\). 
**Lemma 7.3**.: _Any bigraded complex \(C\) over \(\widehat{\mathcal{R}}\) is represented by an immersed train track \(\boldsymbol{\vartheta}\) in \(\mathcal{S}\) of the following form:_

* _The restriction of \(\boldsymbol{\vartheta}\) to the regions \([-\frac{1}{2},-\frac{1}{4}]\times\mathbb{R}\) and \([0,\frac{1}{2}]\times\mathbb{R}\) consists of a collection of arcs;_
* _The restriction of \(\boldsymbol{\vartheta}\) to the region \([-\frac{1}{4},0]\times\mathbb{R}\) consists of a collection of horizontal segments (one for each generator of \(C\)) connected by crossover arrows; and_
* _The crossover arrows either point downward or connect segments corresponding to generators of the same Alexander grading._

A train track of the form described in Lemma 7.3 will be called a _curve-with-arrows train track_ representing \(C\). Before proving Lemma 7.3, we first need to review how arrow sliding in a train track relates to changes of basis in the corresponding complex.

**Proposition 7.4**.: _Suppose \(\boldsymbol{\vartheta}\) is a train track in \(\mathcal{S}\) representing a reduced complex \(C\) over \(\widehat{\mathcal{R}}\) with respect to some basis \(\{x_{1},\ldots,x_{n}\}\). Let \(\boldsymbol{\vartheta}^{\prime}\) be the train track obtained from \(\boldsymbol{\vartheta}\) by adding crossover arrows in a neighborhood of \(\mu\) from \(x_{i}\) to \(x_{j}\), as in Figure 29 (note that we add oppositely weighted arrows on either side of \(\mu\) if \(A(x_{i})=A(x_{j})\), an arrow to the right of \(\mu\) if \(A(x_{i})<A(x_{j})\), or an arrow to the left of \(\mu\) if \(A(x_{i})>A(x_{j})\)). Then \(\boldsymbol{\vartheta}^{\prime}\) represents \(C\) with respect to a different basis in which \(x_{i}\) is replaced with \(x_{i}+cx_{j}\) if \(A(x_{i})=A(x_{j})\), with \(x_{i}+cU^{k}x_{j}\) if \(A(x_{j})-A(x_{i})=k>0\), or with \(x_{i}+cV^{\ell}x_{j}\) if \(A(x_{i})-A(x_{j})=\ell>0\)._

Proof.: We first observe that any of the crossover arrows being added are unobstructed modulo \(UV\). To see this, note that for any bigon bounded by \(\boldsymbol{\vartheta}\) and the left side (that is, left when facing in the direction of the arrow) of a rectangular neighborhood of the crossover arrow there is a corresponding bigon bounded by \(\boldsymbol{\vartheta}\) and \(\mu\). Since \(C\) is reduced, the weight of any such bigon should include either \(U\) or \(V\). If it contains both \(U\) and \(V\), then this bigon may be ignored. If not, then the bigon formed with \(\mu\) must have corners at different Alexander gradings. The corresponding bigon with the left side of a neighborhood of the crossover arrow then covers at least one additional marked point of the type not covered by the bigon with \(\mu\), so again this bigon has weight a multiple of \(UV\).

In the case that \(A(x_{i})=A(x_{j})\) we can introduce two oppositely weighted arrows on the left side of \(\mu\) without affecting the corresponding complex (cf. the \(n\)-strand arrow replacements in the top row of Figure 18). Then sliding one of these arrows across \(\mu\) has the effect of the given change of basis, by Proposition 4.3. In the case that \(A(x_{j})-A(x_{i})=k>0\), we can similarly add two oppositely weighted crossover arrows on the right side of \(\mu\) and then slide the left arrow (with weight \(-c\)) leftward. The arrow first slides past \(k\) different \(w\) marked points, changing its weight to \(-cV^{k}\). It then crosses \(\mu\), which has the effect of the given change of basis, by Proposition 4.3. 
Finally, it slides past \(k\) different \(z\) marked points, adding a factor of \(U^{k}\) to its weight. At this point the arrow can be deleted since we are working modulo \(UV\). The case that \(A(x_{i})-A(x_{j})=\ell>0\) is similar. 

Another key observation in the proof of Lemma 7.3 is that, for any train track \(\boldsymbol{\vartheta}\) representing \(C\) over \(\widehat{\mathcal{R}}\), every bigon contributing to \(\partial^{\boldsymbol{\vartheta}}\) lies on one side of \(\mu\) or the other (or, more precisely, for any pair of generators the weighted sum of bigons that have portions on both sides of \(\mu\) is zero). For any bigon with portions on both sides of \(\mu\), consider a connected component of the bigon minus \(\mu\) that is a bigon. If this bigon encloses a marked point, then the bigger bigon encloses a marked point of each type and does not contribute modulo \(UV\). If the smaller bigon has a positive power of \(W\) in the weight of its boundary, then so does the larger bigon, which again does not contribute modulo \(UV\). If neither of these two things happens, then the small bigon contributes to \(\partial^{\boldsymbol{\vartheta}}\) with no power of \(U\) or \(V\). Since we assume that \(C\) is reduced and \(C^{\boldsymbol{\vartheta}}\) agrees with \(C\), this bigon must cancel with another bigon and the same is true for the original larger bigon. Because we only need to consider bigons that lie on one side of \(\mu\), we can split \(\boldsymbol{\vartheta}\) into two pieces \(\boldsymbol{\vartheta}_{L}\) and \(\boldsymbol{\vartheta}_{R}\) on the left and right sides of \(\mu\), respectively, and work with these two sides independently. The train track \(\boldsymbol{\vartheta}_{L}\) in the left half of \(\mathcal{S}\) represents the hat vertical complex of \(C\) (that is, the \(U=0\) complex) and the train track \(\boldsymbol{\vartheta}_{R}\) in the right half of \(\mathcal{S}\) represents the hat horizontal complex of \(C\) (that is, the \(V=0\) complex).

Figure 29. Adding the crossover arrows pictured corresponds to a change of basis replacing \(x_{i}\) with \(x_{i}+cx_{j}\), \(x_{i}+cU^{k}x_{j}\), or \(x_{i}+cV^{\ell}x_{j}\), respectively. If the horizontal segments are oriented leftward, we multiply the weights on the crossover arrows by \(-1\).

Proof of Lemma 7.3.: Consider a horizontally simplified basis \(\{x_{1}^{h},\ldots,x_{n}^{h}\}\) for \(C\). There is a particularly nice train track \(\boldsymbol{\vartheta}_{R}^{h}\) on the right side of \(\mu\) that represents the horizontal complex with respect to this basis; \(\boldsymbol{\vartheta}_{R}^{h}\) has a point on \(\mu\) for each generator \(x_{i}^{h}\) and an appropriately weighted immersed curve segment connecting the points corresponding to \(x_{i}^{h}\) and \(x_{j}^{h}\) for each horizontal arrow from \(x_{i}^{h}\) to \(x_{j}^{h}\). Because the basis is horizontally simplified, there is at most one such segment attached to each point. For any point \(x_{i}^{h}\) with no segment attached, we attach a horizontal segment connecting \(x_{i}^{h}\) to \(\partial_{R}\mathcal{S}\). We let \(\boldsymbol{\vartheta}^{h}\) be the union of \(\boldsymbol{\vartheta}_{R}^{h}\) with any train track \(\boldsymbol{\vartheta}_{L}^{h}\) on the left side of \(\mu\) representing the vertical complex of \(C\) with respect to the basis \(\{x_{1}^{h},\ldots,x_{n}^{h}\}\) (for example, this could be the left side of a naive train track). 
Thus \(\boldsymbol{\vartheta}^{h}\) is simple on the right side of \(\mu\) but (potentially) complicated on the left side of \(\mu\). We can similarly define a train track \(\boldsymbol{\vartheta}^{v}=\boldsymbol{\vartheta}_{L}^{v}\cup\boldsymbol{\vartheta}_{R}^{v}\) representing \(C\) with respect to a vertically simplified basis \(\{x_{1}^{v},\ldots,x_{n}^{v}\}\) such that \(\boldsymbol{\vartheta}_{L}^{v}\) is a collection of arcs and \(\boldsymbol{\vartheta}_{R}^{v}\) is potentially complicated.

We will modify the train track \(\boldsymbol{\vartheta}^{v}\) by compressing \(\boldsymbol{\vartheta}_{L}^{v}\) into the strip \([-\frac{1}{2},-\frac{1}{4}]\times\mathbb{R}\), compressing \(\boldsymbol{\vartheta}_{R}^{v}\) into the strip \([\frac{1}{4},\frac{1}{2}]\times\mathbb{R}\), and replacing each intersection with \(\mu\) with a horizontal line segment across the strip \([-\frac{1}{4},\frac{1}{4}]\times\mathbb{R}\); it is clear that this homotopy does not affect the associated complex. We then modify the train track further by adding crossover arrows in the strip \([-\frac{1}{4},\frac{1}{4}]\times\mathbb{R}\), which realizes a change of basis in the corresponding complex. In particular, consider an elementary change of basis that replaces \(x_{i}\) with \(x_{i}+cU^{k}V^{\ell}x_{j}\), where \(A(U^{k}V^{\ell}x_{j})\leq A(x_{i})\). If \(k\) and \(\ell\) are both positive, then the corresponding basis change has no effect modulo \(UV\) and we do not modify the train track. If \(k=\ell=0\) then \(x_{i}\) and \(x_{j}\) have the same Alexander grading; we realize a basis change of this form by inserting a pair of crossover arrows on either side of \(\mu\) connecting the \(i\)th horizontal segment to the \(j\)th horizontal segment. These arrows have weights \(c\) and \(-c\) as in Figure 29. If \(k=0\) and \(\ell>0\) we add a crossover arrow with weight \(c\) from the \(i\)th segment to the \(j\)th segment on the left side of \(\mu\) (this arrow moves downward since \(A(x_{j})<A(x_{i})\) in this case). If \(\ell=0\) and \(k>0\) we add an upward moving crossover arrow from the \(i\)th segment to the \(j\)th segment on the right side of \(\mu\). An elementary basis change that replaces \(x_{i}\) with \(cx_{i}\) can be realized by adding basepoints of weights \(c\) and \(c^{-1}\) on the \(i\)th segment on either side of \(\mu\), and a change of (ordered) basis switching two generators can be realized by introducing a crossing between the corresponding horizontal strands on either side of \(\mu\).

The horizontally simplified basis \(\{x_{i}^{h}\}\) can be obtained from the vertically simplified basis \(\{x_{i}^{v}\}\) by some sequence of elementary basis changes. Adding arrows to \(\boldsymbol{\vartheta}^{v}\) as above to realize this sequence of elementary basis changes results in a train track \(\boldsymbol{\vartheta}^{\prime}=\boldsymbol{\vartheta}_{L}^{\prime}\cup\boldsymbol{\vartheta}_{R}^{\prime}\) that represents \(C\) with respect to the horizontally simplified basis. We now have two train tracks, \(\boldsymbol{\vartheta}^{h}\) and \(\boldsymbol{\vartheta}^{\prime}\), that represent \(C\) with respect to the basis \(\{x_{i}^{h}\}\). The first is simple on the right side of \(\mu\) and the second is simple on the left side of \(\mu\). We define \(\boldsymbol{\vartheta}\) to be \(\boldsymbol{\vartheta}_{L}^{\prime}\cup\boldsymbol{\vartheta}_{R}^{h}\) and observe that it has the desired form. 

For example, consider the complex from Example 5.2. 
Figure 30 shows the construction of a curve-with-arrows train track representing this complex, using the vertically simplified basis \(\{a,b,d,c,e,f,g-f\}\) and the horizontally simplified basis \(\{a-b,b,d,c,e,f,g\}\). This horizontally simplified basis is obtained from the vertically simplified basis by two elementary basis changes, with one replacing \(a\) with \(a-b\) and the other replacing \(g-f\) with \((g-f)+f=g\).

### Removing crossover arrows

To get from Lemma 7.3 to Proposition 7.1, we need to show that essentially all crossover arrows can be removed without changing the complex determined by the train track up to homotopy. This is done using an arrow sliding algorithm that was first described in [HRW, Section 3.7]. We will now describe the arrow sliding algorithm needed for the train tracks in \(\mathcal{S}\) relevant to bifiltered complexes. The proof presented here is self contained, but some proofs that also appear in [HRW] are repeated more tersely. The reader may wish to compare to the more detailed explanation in [HRW], which is adapted to train tracks in arbitrary surfaces. Keep in mind that [HRW] assumes \(\mathbb{Z}/2\mathbb{Z}\) coefficients, while we work with an arbitrary field \(\mathbb{F}\) (see also [KWZa] for a general treatment of the algorithm with field coefficients). We remark that for train tracks in \(\mathcal{S}\), the algorithm simplifies slightly from the general case.

Let \(\boldsymbol{\vartheta}\) be a curve-with-arrows train track of the form predicted by Lemma 7.3. The train track \(\boldsymbol{\vartheta}\) consists of three regions: the middle region, whose right side is the line \(\mu\), consists of a horizontal segment for each generator of \(C(\boldsymbol{\vartheta})\) and crossover arrows connecting these segments, while the left and right regions consist of arcs connecting pairs of the endpoints of these horizontal segments. We will refer to the Alexander grading of a horizontal segment, by which we mean the Alexander grading of the corresponding generator; this is determined by the height of the segment. We say that the crossover arrows in the middle region are _short_ if they connect horizontal segments with the same Alexander grading, and _long_ otherwise. By the definition of curve-with-arrows train tracks, all long arrows point downwards.

The first step of the simplification is to remove all long arrows by sliding them rightward; once they pass all other arrows and reach the line \(\mu\) at the right edge of the middle region, they can be removed by Proposition 7.4 (this corresponds to an elementary basis change in the corresponding complex \(C(\boldsymbol{\vartheta})\)). In the process of sliding the long arrow rightward past some number of short arrows, we apply the local moves pictured in Figure 18. This may at some point introduce a new long arrow, the composition of the long arrow being moved and a short arrow it passes, but this new arrow can be removed in the same way. We can use induction to ensure this process terminates. Suppose the rightmost long arrow has \(k\) short arrows to its right. If \(k=0\), we can remove the arrow, and if \(k>0\), we slide it past the short arrow immediately to its right. If this short arrow commutes with the long one, the result is a long arrow with \(k-1\) short arrows to its right, and if not the result is two long arrows with \(k-1\) short arrows to their right. In either case, by induction on \(k\), these arrows can be removed, and thus all long arrows can be removed in finite time. 
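The termination argument above is mechanical enough to simulate. The following is a minimal sketch in Python, under our own abstraction rather than the paper's notation: each long arrow is tracked only by the count \(k\) of short arrows to its right, a slide decrements this count, and a non-commuting pass spawns one composite long arrow with the same decremented count. Whether a given pass commutes depends on the endpoints of the arrows involved, which we do not model; the sketch flips a coin instead, since termination does not depend on this choice.

```python
import random

def remove_long_arrows(ks, seed=0):
    """ks: one entry per long arrow, counting the short arrows to its right."""
    rng = random.Random(seed)
    work = sorted(ks)            # the rightmost long arrow has the smallest count
    slides = 0
    while work:
        k = work.pop(0)          # take the current rightmost long arrow
        if k == 0:
            continue             # it has reached mu and is removed (Proposition 7.4)
        slides += 1
        work.append(k - 1)       # slide it past the short arrow to its right
        if rng.random() < 0.5:   # the pass may spawn a composite long arrow,
            work.append(k - 1)   # which also has k - 1 short arrows to its right
        work.sort()
    return slides

print(remove_long_arrows([3, 1, 4]))
```

Every step replaces a count \(k>0\) by at most two copies of \(k-1\), so the multiset of counts strictly decreases and the loop always terminates, mirroring the induction on \(k\) above.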
We now turn to removing short arrows. If there is a single crossover arrow, this is easy. The idea is to push the crossover arrow along parallel strands as far as possible until those strands diverge; note that this may involve pushing the arrow through \(\mu\), applying a basis change according to Proposition 7.4. We say that a crossover arrow is _removable_ if there is a path, disjoint from \(\boldsymbol{\vartheta}\) and \(\mu\), from the left side of the arrow either to a puncture or to the boundary of the strip \(\mathcal{S}\) (see Figure 31). In the latter case, there are clearly no bigons whose boundary involves the crossover arrow, so it can be deleted with no effect on the complex \(C^{\boldsymbol{\vartheta}}\). In the former case, any bigon involving the crossover arrow must meet a puncture. If the crossover arrow is pushed as far as possible it becomes a long arrow, either pointing down on the left side of \(\mu\) or up on the right side of \(\mu\), and can be removed by Proposition 7.4.

Figure 31. Examples of removable crossover arrows. If an arrow is parallel to \(\mu\) and has a puncture to its left, then it can be removed by a change of basis. If the arrow has an unbounded region of \(\mathcal{S}\setminus(\boldsymbol{\vartheta}\cup\mu)\) to its left, then it can not contribute to any bigons and thus can be deleted.

When two strands diverge, there is a left strand and a right strand; a crossover arrow connecting the two strands will be removable, once pushed to where the strands diverge, if it moves from the left strand to the right strand. Thus to remove a single crossover arrow connecting two strands which eventually diverge, we push the arrow in one direction until the strands diverge and remove the arrow if it moves left-to-right. If it moves right-to-left, we slide it in the other direction and again remove it if it moves left-to-right. If the arrow is not removable on either end then the strands must have crossed, and we can apply the local move in the last row of Figure 18 to resolve the crossing at the expense of adding a second arrow. The two resulting crossover arrows will be removable when pushed in opposite directions. In this way, any single crossover arrow can be removed unless it connects closed immersed curves that never diverge.

Figure 30. The train tracks described in the proof of Lemma 7.3 for the example complex shown in Example 5.2. The gray dots represent basepoints with weight \(-1\); all other edges have weight \(1\).

We need to extend the basic strategy to train tracks \(\boldsymbol{\vartheta}\) starting with any number of short crossover arrows, showing that \(\boldsymbol{\vartheta}\) can be reduced to a new train track which consists only of immersed curves and crossover arrows that connect parallel closed curves. The obvious strategy is to remove one arrow at a time as above. The only potential problem is that sliding an arrow past other arrows often introduces new arrows; we need to ensure that the original arrows and any new ones created can all be removed in finite time. To do this, we introduce a notion of complexity for arrows and remove them in order of increasing complexity. We show that arrows can be removed while only creating new arrows of higher complexity. There is an upper bound for the complexity of arrows that do not connect parallel closed curves, so eventually these are the only arrows left.

#### Colors and complexity

Recall that if crossover arrows are ignored, \(\boldsymbol{\vartheta}\) consists of a collection of immersed curves and arcs. We will label each horizontal segment in the middle region with a _left depth 1 color_ and a _right depth 1 color_, which describe how the path in \(\boldsymbol{\vartheta}\) starting from the segment moving in the given direction behaves. Suppose the horizontal segment has Alexander grading \(k\). 
If the path leaving the segment to the left goes to the left boundary of \(\mathcal{S}\) without returning to the middle region, then the left depth 1 color of the segment is \(s\) (this stands for "straight"). Otherwise, the path returns to the middle region at a horizontal segment with some different Alexander grading \(k^{\prime}\). If the path turned rightward (that is, if \(k^{\prime}>k\)), then the left depth 1 color of the starting segment is \(r_{k^{\prime}-k}\). If the path turned leftward (that is, if \(k>k^{\prime}\)), then the left depth 1 color of the starting segment is \(\ell_{k-k^{\prime}}\). The right depth 1 color is defined similarly with paths leaving the segment to the right: if the path returns to the middle region at Alexander grading \(k^{\prime}\) the right depth 1 color is \(r_{k-k^{\prime}}\) if \(k>k^{\prime}\) and \(\ell_{k^{\prime}-k}\) if \(k^{\prime}>k\); otherwise the path ends on the right boundary of \(\mathcal{S}\) and the right depth 1 color is \(s\).

Right and left depth \(n\) colors can be defined inductively to give more information about the paths starting at a horizontal segment \(x\). These are length at most \(n\) sequences of the letters \(\ell_{i}\), \(r_{i}\) and \(s\). If the right (resp. left) depth 1 color of \(x\) is \(s\), then the right (resp. left) depth \(n\) color of \(x\) is also \(s\); otherwise, the path leaving \(x\) on the right (resp. left) returns to the middle region at a horizontal segment \(y\), and the right (resp. left) depth \(n\) color of \(x\) is the concatenation of the right (resp. left) depth 1 color of \(x\) and the left (resp. right) depth \(n-1\) color of \(y\). See Figure 32 for the depth 2 colorings in an example train track.

The depth \(n\) coloring on the left or right side of a horizontal segment is a sequence of letters in \(\{\ell_{i}\}_{i=1}^{\infty}\cup\{s\}\cup\{r_{i}\}_{i=1}^{\infty}\). These letters are ordered so that \(\ell_{i}<s<r_{j}\) for any \(i,j\), \(\ell_{i}<\ell_{j}\) if \(i<j\), and \(r_{i}<r_{j}\) if \(i>j\). In other words, if paths representing each letter are drawn from a segment without crossing, they are ordered from sharpest left turn to sharpest right turn. The depth \(n\) colors among horizontal segments of the same Alexander grading are ordered lexicographically.

Colors on endpoints of horizontal segments give rise to labels on crossover arrows, which we call the _left and right complexity_ of the arrow. Suppose a crossover arrow connects a horizontal segment \(x\) to a horizontal segment \(y\), where \(x\) and \(y\) have the same Alexander grading. The left complexity \(w_{\ell}\) of the crossover arrow is defined as follows.

* If \(x\) and \(y\) have the same left depth \(n-1\) color and different left depth \(n\) color, then \(w_{\ell}\) is \(\pm n\), where the sign is positive if the left depth \(n\) color of \(x\) is less than the left depth \(n\) color of \(y\) and negative otherwise.
* If \(x\) and \(y\) have the same left depth \(n\) color for all \(n\), and this color never ends in an \(s\), then \(w_{\ell}\) is \(\infty\). 
* If \(x\) and \(y\) have the same left color, which ends with an \(s\) for sufficiently large depth, and if \(n\) is the first depth for which the left depth \(n\) color ends in \(s\), then \(w_{\ell}\) is \(n\).

In simpler terms, if we push the crossover arrow to the left, the left complexity \(w_{\ell}\) counts how many times the arrow leaves the middle region before the strands it connects diverge or go to \(\partial\mathcal{S}\), and the sign is positive if the arrow is removable when pushed to this point. The right complexity \(w_{r}\) of a crossover arrow is defined analogously. The complexity of the arrow is then defined to be the minimum of \(|w_{r}|\) and \(|w_{\ell}|\). Note that if one complexity is \(\infty\) then both are; this can only occur when the horizontal segments \(x\) and \(y\) lie on the same non-primitive immersed curve or they lie on two closed immersed curves that are multiples of the same primitive curve.

**Remark 7.5**.: The coloring by the letters \(\ell_{i}\), \(r_{i}\) and \(s\) used here is analogous to the coloring by \(\{n,e,s,w\}\) used in [HRW]. The left and right complexities are analogous to the pair of weights attached to crossover arrows in [HRW]; we use the term "complexity" here to avoid confusion with the weights on crossover arrows and train track edges, which were not present in [HRW] since only \(\mathbb{Z}/2\mathbb{Z}\) coefficients were considered.

#### Removing lowest complexity arrows

Let \(m\geq 1\) denote the minimum complexity of all crossover arrows in \(\boldsymbol{\vartheta}\). We will now show that \(\boldsymbol{\vartheta}\) can be modified so that \(C(\boldsymbol{\vartheta})\) is unchanged modulo \(UV\), up to basis changes, and all crossover arrows in the resulting train track have complexity at least \(m+1\). This simplification will be done one Alexander grading at a time. For each Alexander grading \(k\), the simplification is done in three steps:

#### Step 1: Sort crossover arrows

The horizontal segments in the middle region of \(\boldsymbol{\vartheta}\) with Alexander grading \(k\) and the crossover arrows between them form an \(n\)-strand arrow configuration. We apply Lemma 4.6 to this configuration with respect to any ordering of the left (resp. right) endpoints which is consistent with the partial ordering determined by their left (resp. right) depth \(m+1\) colors. This replaces the \(n\)-strand arrow configuration with a new configuration which has crossings in the middle and crossover arrows on either side which are non-decreasing with respect to the depth \(m+1\) coloring on that side. It follows that the arrows on the left have \(w_{\ell}=m\), \(w_{\ell}=m+1\), or \(|w_{\ell}|\geq m+2\), and that the arrows on the right have \(w_{r}=m\), \(w_{r}=m+1\), or \(|w_{r}|\geq m+2\).

Note that applying Lemma 4.6 to modify the arrow configuration at grading \(k\) may introduce crossings and change the immersed curves in \(\boldsymbol{\vartheta}\), which potentially alters the left and right coloring of horizontal segments. However, this can only affect colors of depth greater than \(m\). This is because crossings are only added between horizontal segments which have the same depth \(m-1\) coloring on each side; to see this, we could have grouped the horizontal strands of Alexander grading \(k\) in bundles having the same left depth \(m-1\) coloring and the same right depth \(m-1\) coloring, and applied Lemma 4.6 to each of these bundles instead of the whole configuration. 
Note that all crossover arrows are contained in one of these bundles, since there are no arrows with complexity less than \(m\). Since a path in \(\boldsymbol{\vartheta}\) that is modified by a crossing change is unchanged for the first \(m-1\) turns after the crossing, and since the path from any endpoint of any horizontal segment must leave the middle region at least once before reaching a modified crossing, the depth \(m\) coloring of the given endpoint is unchanged. Moreover, for any endpoint of a segment with Alexander grading \(k\), a path from this endpoint must leave the middle region at least twice before returning to a segment at Alexander grading \(k\). It follows that the depth \(m+1\) colors are unaffected for segments with Alexander grading \(k\).

#### Step 2: Remove arrows with outer complexity \(m\) or \(m+1\)

Once the arrow configuration at Alexander grading \(k\) is sorted, it is straightforward to remove all crossover arrows on the left with \(w_{\ell}\in\{m,m+1\}\) and all crossover arrows on the right with \(w_{r}\in\{m,m+1\}\). The following lemma will be useful.

**Lemma 7.6**.: _Suppose \(\boldsymbol{\vartheta}\) consists of immersed curves with crossover arrows. If a given crossover arrow has either \(w_{\ell}\) or \(w_{r}\) equal to \(m-1\), while all other crossover arrows have complexity at least \(m\), then removing the arrow does not change \(C(\boldsymbol{\vartheta})\) modulo \(UV\), up to change of basis._

Proof.: Suppose without loss of generality that \(w_{r}=m-1\). Since \(w_{r}\) is positive and finite, the arrow will be removable if it is pushed to the right as far as possible. The only potential concern is new arrows that are formed along the way. But note that if two arrows have right complexity \(w_{r}\) and \(w_{r}^{\prime}\) and these arrows passing each other forms a new composition arrow, then the right complexity of this new arrow is either \(w_{r}\) or \(w_{r}^{\prime}\) (whichever has the smaller absolute value) unless \(w_{r}=-w_{r}^{\prime}\), in which case the new right complexity can be anything with absolute value at least \(|w_{r}|\). In particular, if an arrow with \(w_{r}=m-1\) slides past an arrow with \(|w_{r}|\geq m\) and a new arrow is introduced, this new arrow also has \(w_{r}=m-1\), and both \(w_{r}=m-1\) arrows are now one step closer to being removed. By induction on the number of arrows a \(w_{r}=m-1\) arrow must slide past before it is removable, all \(w_{r}=m-1\) arrows can be removed by sliding them to the point where the strands they connect diverge. The argument is the same for left complexities \(w_{\ell}\). 

Consider the crossover arrows to the left of the crossings in the new \(n\)-strand arrow configuration at Alexander grading \(k\). We can take the leftmost of these arrows for which \(w_{\ell}=m\) and slide it leftward to the end of the configuration. Note that this may introduce new arrows, but these arrows will also have \(w_{\ell}=m\) and we will push these leftward as well. In this way, we can arrange that all crossover arrows with \(w_{\ell}=m\) are at the far left of the middle region. We then take the leftmost arrow and continue pushing leftward until it leaves the middle region of \(\boldsymbol{\vartheta}\) and returns at a new Alexander grading. This changes the complexity labels on the crossover arrow: \(w_{\ell}-1\) becomes the new right complexity, while \(w_{r}+1\) becomes the new left complexity. In particular, the arrow now has \(w_{r}=m-1\) and thus can be removed by Lemma 7.6. 
This can be repeated until all the \(w_{\ell}=m\) arrows on the left side of the configuration at Alexander grading \(k\) are removed. We now do the same thing for arrows on the left side of the configuration at grading \(k\) which have \(w_{\ell}=m+1\). We push them all, and any new arrows formed in the process, to the far left and then one by one push them out of the middle region and back to a configuration at a new Alexander grading. This last slide results in an arrow with \(w_{r}=m\). If this happens at an Alexander grading for which complexity \(m\) arrows have not yet been removed, we may stop. If we have pushed the arrow to a grading where complexity \(m\) arrows have already been removed, then we can continue pushing the arrow all the way across the middle region, which can only introduce new \(w_{r}=m\) arrows which can be pushed along too, until the arrow leaves the middle region again. When the arrow is pushed back into the middle region once more it will have \(w_{\ell}=m-1\) and can be removed. The same procedure can be done to arrows with \(w_{r}\in\{m,m+1\}\) on the right side of the arrow configuration. In this way, we arrive at a new configuration at Alexander grading \(k\) for which all arrows on the left side have \(|w_{\ell}|\geq m+2\) and all arrows on the right side have \(|w_{r}|\geq m+2\), and we have not introduced any new arrows of complexity less than \(m\), or of complexity \(m\) at gradings for which complexity \(m\) arrows have already been removed.

#### Step 3: Remove remaining complexity \(m\) arrows

Consider the crossover arrows on the left side of the \(n\)-strand arrow configuration at Alexander grading \(k\). These now have \(|w_{\ell}|\geq m+2\) and \(|w_{r}|\geq m\). We now wish to deal with arrows with \(|w_{r}|=m\). Take the leftmost of these arrows and push it to the leftmost end of the configuration, and then further until it returns to the middle region at a different Alexander grading. Since \(w_{\ell}-1\) becomes the new right complexity and \(w_{r}+1\) becomes the new left complexity, the arrow now has overall complexity \(m+1\). Again, sliding the arrow to the left of the configuration may produce new arrows, but these new arrows can be pushed leftward in the same way to produce a complexity \(m+1\) arrow at a different Alexander grading. The same can be done to arrows on the right side with \(|w_{\ell}|=m\), pushing the arrow rightward until it becomes a complexity \(m+1\) arrow at a different Alexander grading.

At the conclusion of these three steps, the minimum complexity of crossover arrows between segments at Alexander grading \(k\) is at least \(m+1\). Repeating this for all Alexander gradings, we arrive at a new train track with no crossover arrows of complexity less than \(m+1\). We can now prove Proposition 7.1 using induction on \(m\).

Proof of Proposition 7.1.: The argument above shows that any train track with minimum arrow complexity \(m\) can be replaced by a train track with minimum arrow complexity \(m+1\). By induction, it is clear that we can make the minimum complexity of crossover arrows arbitrarily high. Finally, we observe that since there are finitely many horizontal segments, any path must eventually stop or repeat. It follows that there exists an integer \(N\) such that if two horizontal segments have the same left depth \(N\) color or the same right depth \(N\) color then they have the same color for arbitrary depth, and thus any arrow of complexity at least \(N\) actually has \(w_{\ell}=w_{r}=\infty\). 
We have constructed a weighted immersed multicurve \(\Gamma\) in the strip \(\mathcal{S}\) along with a collection of infinite complexity crossover arrows which represents a given bigraded complex \(C\). It only remains to interpret these infinite complexity crossover arrows as a collection of left turn crossover arrows at local system intersection points. For a given homotopy class of primitive curve \(\gamma\) in \(\mathcal{S}\), consider all the components of \(\Gamma\) which are homologous to some multiple of \(\gamma\), along with any crossover arrows with endpoints on these curves. By sliding crossings and crossover arrows, we can realize this as some number \(n\) of parallel copies of \(\gamma\) with an \(n\)-strand arrow configuration inserted in one place. We may assume the matrix associated to this \(n\)-strand configuration is in rational canonical form, since we can conjugate the matrix by sliding crossings or crossover arrows around the curve. Suppose the matrix decomposes into \(k\) blocks with the \(i\)th block of dimension \(n_{i}\). We replace the collection of curves in question with \(k\) immersed curves, where the \(i\)th curve is homologous to \(n_{i}\) times \(\gamma\), and we assume this curve is in simple position. The coefficients of each block in the rational canonical form then specify coefficients for left turn crossover arrows at the local system intersection points on these curves, as discussed in Section 6.2, giving rise to an equivalent train track of the desired form. 

We end this section by demonstrating the above construction in an example. The left side of Figure 32 shows the curve-with-arrows train track from Figure 30 that represents the complex, with depth \(2\) colors labeled; both crossover arrows have complexity \(1\). Applying one step of the inductive process requires sliding both arrows toward the middle from the bundles of horizontal segments at height \(1\) and \(-1\) to the bundle of segments at height \(0\), as shown in the middle of the figure. These two arrows have complexity \(2\). The next step is to sort the arrows in the middle bundle of horizontal strands with respect to depth \(3\) colors, which requires replacing the two crossover arrows with one crossover arrow and a crossing, along with a basepoint weighted by \(-1\), as shown on the right side of the figure. The resulting arrow is removable when pushed in either direction, so the result is an immersed multicurve with two components (one closed component and one arc component) and trivial bounding chain.

## 8. Immersed curves for \(UV=0\) complexes with flip maps

### Nice curves in \(\mathcal{Z}\)

Having established that any bigraded complex over \(\widehat{\mathcal{R}}\) can be represented by a (suitably nice) decorated immersed multicurve in the infinite strip \(\mathcal{S}\), we now wish to extend this construction to include flip maps. Our aim is to show the following:

**Proposition 8.1**.: _A bigraded complex \(C\) over \(\widehat{\mathcal{R}}\) equipped with a flip isomorphism \(\widehat{\Psi}_{*}:H_{*}\widehat{C}^{h}\to H_{*}\widehat{C}^{v}\) can be represented by a decorated curve \((\Gamma,\widehat{\mathbf{b}})\) in the marked cylinder \(\mathcal{Z}\), where \(\Gamma\) is in simple position and \(\widehat{\mathbf{b}}\) is a bounding chain consisting of only local system self-intersection points._

The first step in proving Proposition 8.1 is to construct nearly simplified curves with crossover arrows in the marked cylinder representing the given data. 
This essentially follows from reversing the process for extracting a complex and flip maps from a decorated curve in a cylinder described in Section 5.3, and uses the curves in \(\mathcal{S}\) representing the complex constructed in Section 7. The main difficulty, as with constructing decorated curves in \(\mathcal{S}\), is to then remove crossover arrows to ensure the resulting bounding chain is of local system type.

Proof of Proposition 8.1.: Our first task is to construct a reasonably nice curve representative in \(\mathcal{Z}\) for \(C\) and \(\widehat{\Psi}_{*}\). We do this by starting with the decorated curve in \(\mathcal{S}\) representing \(C\) by Proposition 7.1 and gluing the opposite sides of the strip after first inserting a piece of curve matching up the endpoints according to \(\widehat{\Psi}_{*}\). More precisely, we will cut the marked cylinder \(\mathcal{Z}\) into two pieces, a marked strip and an unmarked strip. We view the marked strip as a copy of \(\mathcal{S}\) (though it is scaled to be half as wide) and we call the unmarked strip \(\mathcal{F}\). The cylinder is formed by gluing each boundary component of \(\mathcal{S}\) to the corresponding boundary component of \(\mathcal{F}\). We construct a decorated curve in \(\mathcal{Z}\) by constructing decorated curves \((\Gamma_{\mathcal{S}},\widehat{\mathbf{b}}_{\mathcal{S}})\) in \(\mathcal{S}\) and \((\Gamma_{\mathcal{F}},\widehat{\mathbf{b}}_{\mathcal{F}})\) in \(\mathcal{F}\) and gluing these together.

The decorated curve in \(\mathcal{S}\) is the decorated curve representing the complex \(C\) as in Proposition 7.1. This means that \(\Gamma_{\mathcal{S}}\) is in simple position and \(\widehat{\mathbf{b}}_{\mathcal{S}}\) consists of only local system intersection points. The decorated curve in \(\mathcal{F}\) will consist of arcs from one boundary component of \(\mathcal{F}\) to the other connecting the endpoints of \(\Gamma_{\mathcal{S}}\), along with a collection of turning points \(\widehat{\mathbf{b}}_{\mathcal{F}}\). Recall that the endpoints of \(\Gamma_{\mathcal{S}}\) on the right boundary of \(\mathcal{S}\) correspond to generators of the hat horizontal homology of \(C\) while endpoints on the left boundary correspond to generators of the hat vertical homology of \(C\). The flip isomorphism \(\widehat{\Psi}_{*}\) is a graded isomorphism from \(H_{*}\widehat{C}^{h}\) to \(H_{*}\widehat{C}^{v}\); in particular, for any integer \(j\) the isomorphism restricts to an isomorphism between the spans of generators with grading \(j\). Suppose there are \(n\) generators with grading \(j\); we connect the corresponding collections of endpoints on the right boundary of \(\mathcal{S}\) and endpoints on the left boundary of \(\mathcal{S}\) by a bundle of \(n\) strands with an \(n\)-strand arrow configuration realizing the flip isomorphism restricted to grading \(j\).

The decorated curve \((\Gamma_{\mathcal{S}}\cup\Gamma_{\mathcal{F}},\widehat{\mathbf{b}}_{\mathcal{S}}+\widehat{\mathbf{b}}_{\mathcal{F}})\) represents the pair \((C,\widehat{\Psi}_{*})\) over \(\widehat{\mathcal{R}}\). To see this, observe that adding \(\mathcal{F}\) and gluing the sides of the strip does not affect the Floer homology with \(\mu\) modulo \(UV\). This is because, since \(C\) is reduced, any bigon that is not contained in \(\mathcal{S}\) must enclose a pair of marked points. Similarly, if we consider Floer homology with \(\mu_{\Psi}\) perturbed to lie in \(\mathcal{F}\) (as in Figure 23) the obvious bigons in \(\mathcal{F}\) encode \(\widehat{\Psi}_{*}\) by construction, and because \(C\) is reduced any bigon that is not contained in \(\mathcal{F}\) must cover a marked point and does not contribute mod \(UV\).

Figure 32. Left: The train track from Figure 30, an example of the form generated by Lemma 7.3. The middle region is shaded, and the depth \(2\) colors of the left and right endpoints of each horizontal segment are shown; the top and bottom crossover arrows each have complexity \(1\). Middle: The result of applying one step of the inductive simplification, to increase the minimum arrow complexity from \(1\) to \(2\); the left and right arrows now each have complexity \(2\). Right: The result of replacing the \(n\)-strand arrow configuration at the middle Alexander grading, as in Lemma 4.6. The gray dot represents a basepoint with weight \(-1\). The remaining crossover arrow is removable and can be removed by pushing it leftward.

The curve constructed so far represents \((C,\widehat{\Psi}_{*})\) but it does not have the desired form. We do have that \(\widehat{\mathbf{b}}_{\mathcal{S}}\) contains only local system intersection points, but the same may not be true for \(\widehat{\mathbf{b}}_{\mathcal{F}}\). 
To complete the proof, we need to remove any turning points in \(\widehat{\mathbf{b}}_{\mathcal{F}}\) that are not at local system intersection points. We will use the language of train tracks, interpreting each turning point as a left-turn crossover arrow, and we repeat the arrow sliding algorithm from Section 7.2, with some modifications. First, we will ignore components of \(\Gamma_{\mathcal{S}}\cup\Gamma_{\mathcal{F}}\) that are contained in the strip \(\mathcal{S}\), since arrows sliding from \(\mathcal{F}\) will never interact with these. Note that this means we can ignore all crossover arrows coming from \(\widehat{\mathbf{b}}_{\mathcal{S}}\), since these are by assumption at local system intersection points which can only be on closed components of \(\Gamma_{\mathcal{S}}\). The strip \(\mathcal{F}\) will play the role of the central strip \([-\frac{1}{4},0]\) in \(\mathcal{S}\) from Section 7.2. Just as the central strip in \(\mathcal{S}\) contained a bundle of horizontal segments at each Alexander grading with a collection of crossover arrows within each bundle, so \(\mathcal{F}\) contains a bundle of arcs for each value of \(\operatorname{gr}_{w}\) with any crossover arrows contained in a bundle; note that now arcs from different bundles may cross each other, but this does not affect the argument.

We again label the arcs in \(\mathcal{F}\) by left and right colors indicating the path the curve takes leaving \(\mathcal{F}\) to the left or right from that arc before returning to \(\mathcal{F}\). While previously the depth \(1\) colors took values in \(\{\ell_{k}\}_{k=1}^{\infty}\cup\{s\}\cup\{r_{k}\}_{k=1}^{\infty}\), there are now more possible depth \(1\) colors, with each color representing a homotopy class of path starting and ending on the boundary of the marked strip \(\mathcal{S}\). These colors can be expressed as an integer \(n\) followed by a sequence of letters in \(\{\ell_{k}\}_{k=1}^{\infty}\cup\{r_{k}\}_{k=1}^{\infty}\) and then an \(s\); the initial integer represents the Alexander grading of the first crossing of the path in \(\Gamma_{\mathcal{S}}\) with \(\mu\), multiplied by \(-1\) for right colors in which the relevant path starts on the left side of \(\mathcal{S}\), and the remaining letters represent the turns the path makes each time it returns to \(\mu\) before finally leaving \(\mathcal{S}\). These colors are ordered lexicographically using the usual order on \(\mathbb{Z}\) and the order on \(\{\ell_{k}\}_{k=1}^{\infty}\cup\{s\}\cup\{r_{k}\}_{k=1}^{\infty}\) defined in Section 7.2; equivalently, if \(c\) and \(c^{\prime}\) are colors representing two paths then \(c<c^{\prime}\) if and only if the paths are not parallel and the path with color \(c\) is to the left of the path with color \(c^{\prime}\) when they first diverge. As before, depth \(n\) colors are length \(n\) words whose letters are the colors described above that record how a path following \(\Gamma\) behaves after leaving \(\mathcal{F}\) until the \(n\)th time it returns to \(\mathcal{F}\).

A crossover arrow in \(\mathcal{F}\) can be given a complexity \((w_{\ell},w_{r})\) in \((\mathbb{Z}_{\neq 0}\times\mathbb{Z}_{\neq 0})\cup\{(\infty,\infty)\}\) defined in terms of the colors as before. The absolute values \(|w_{\ell}|\) and \(|w_{r}|\) tell us how many times an arrow needs to be pushed out of the strip \(\mathcal{F}\) before the curves it connects diverge, and \(w_{\ell}\) or \(w_{r}\) is positive if the arrow will be left-to-right moving (and thus removable) when the curves first diverge. 
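Operationally, the ordering on letters and colors is easy to pin down. The sketch below is our own encoding (not the paper's notation): each letter is mapped to a tuple so that Python's built-in lexicographic comparison of lists realizes the order \(\ell_{i}<s<r_{j}\), with \(\ell_{i}<\ell_{j}\) when \(i<j\) and \(r_{i}<r_{j}\) when \(i>j\); the extended colors used in \(\mathcal{F}\) would simply prepend an integer compared with the usual order on \(\mathbb{Z}\).

```python
def letter_key(letter):
    kind, i = letter      # e.g. ("l", 3), ("s", 0), ("r", 2)
    if kind == "l":
        return (0, i)     # among left turns, sharper turns come first
    if kind == "s":
        return (1, 0)     # straight to the boundary sits in the middle
    return (2, -i)        # among right turns, r_i < r_j exactly when i > j

def color_key(color):
    """A depth-n color is a sequence of letters; keys compare lexicographically."""
    return [letter_key(letter) for letter in color]

# l_2 < l_5 < s < r_4 < r_1, so the depth-2 color (l_2, r_4) precedes (l_2, r_1),
# matching the rule "from sharpest left turn to sharpest right turn".
assert color_key([("l", 2), ("r", 4)]) < color_key([("l", 2), ("r", 1)])
```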
We now proceed inductively as in Section 7.2, assuming that all arrows have some minimal complexity \(m\) and removing all arrows with complexity \(m\). We do this on one bundle of like-graded strands in \(\mathcal{F}\) at a time. For each bundle we follow a version of the numbered steps from the algorithm in Section 7.2, although the steps are slightly more complex. In Section 7.2 we were able to make a simplifying assumption, but we now require the full generality of the algorithm as it was introduced in [HRW]. The spirit of the algorithm is the same as the argument in Section 7.2, but more care is needed to control side effects of sliding arrows; we briefly explain the necessary modifications here. The reason the algorithm simplifies for curves in \(\mathcal{S}\) is that when a path leaves a bundle of segments at a given Alexander grading, it can not return to the same bundle of strands until it leaves and returns to the middle region at least twice. This ensures that when the crossover arrows are sorted in Step 1 the depth \(m+1\) colors are not affected. However, it is now possible for a path to leave a bundle of arcs in \(\mathcal{F}\), cross to the other side of \(\mathcal{S}\), and return to the same bundle of arcs the next time it returns to \(\mathcal{F}\) (note that the path still can not return to the same side of the bundle it left from, since arcs with the same grading must be oriented the same way). When applying arrow moves to a collection of arrows with minimum complexity \(m\), we may now only assume that depth \(m\) colors are preserved and not depth \(m+1\) colors. In the following steps, we have fixed a bundle of arcs in \(\mathcal{F}\) with the same grading \(k\). Suppose there are \(n\) arcs in this bundle, so that the arcs and any crossover arrows between them form an \(n\)-strand arrow configuration. We have assumed all arrows have complexity at least \(m\). _Step 1: Sort crossover arrows to remove outer complexity \(-m\)._ As before, we apply Lemma 4.6 to the given \(n\)-strand arrow configuration. The difference is that we do this with respect to any ordering on the endpoints of the arcs consistent with the depth \(m\) colorings (rather than the depth \(m+1\) colorings as before, since this information will not be preserved). This replaces the \(n\)-strand arrow configuration by a new one with crossings in the middle, crossover arrows on the left of the bundle which have \(w_{\ell}=m\) or \(|w_{\ell}|\geq m+1\), and crossover arrows on the right of the bundle that have \(w_{r}=m\) or \(|w_{r}|\geq m+1\). Depth \(m\) colors are unchanged by this sorting, so all arrows still have complexity at least \(m\). _Step 2: Remove arrows with outer complexity \(m\)._ Any arrows on the left with \(w_{\ell}=m\) can be pushed leftward into the strip \(\mathcal{S}\). If \(m=1\) then the curves the arrow connects will diverge at some point before leaving \(\mathcal{S}\) and the arrow will come up against a marked point on its left side, so the arrow can be removed. If \(m>1\), then the arrow will slide between parallel strands until it returns to \(\mathcal{F}\), at which point it will have \(w_{r}=m-1\) or \(w_{\ell}=m-1\) and can be removed by Lemma 7.6. At this point all arrows on the left have \(|w_{\ell}|\geq m+1\) and \(|w_{r}|\geq m\), while all arrows on the right have \(|w_{r}|\geq m+1\) and \(|w_{\ell}|\geq m\).
_Step 3: Remove \(|w_{r}|=m\) arrows on the left._ After the previous steps it makes sense to split our bundle into two \(n\)-strand arrow configurations, one containing the arrows on the left side and one containing the arrows on the right side. We now apply Lemma 4.6 again to the left of these two configurations, with respect to orderings of the endpoints that are consistent with the depth \(m+1\) coloring on the left and the depth \(m\) coloring on the right. As a result, the left configuration is replaced with a new configuration such that arrows on the left have \(w_{\ell}\neq-(m+1)\) and arrows on the right have \(w_{r}\neq-m\) (the conditions that \(|w_{\ell}|\geq m+1\) and \(|w_{r}|\geq m\) from Step 2 are also preserved). Any arrows on the right side of the new left configuration with \(w_{r}=m\) can be slid rightward, starting with the rightmost of these arrows. Since the right \(n\)-strand arrow configuration has \(|w_{r}|\geq m+1\), these arrows with \(w_{r}=m\) can be slid rightward through that configuration without issue until they leave \(\mathcal{F}\). When they return to \(\mathcal{F}\) they will have either \(w_{\ell}=m-1\) or \(w_{r}=m-1\) and they can be removed by Lemma 7.6. We then slide all arrows from the left side of the new left configuration leftward out of \(\mathcal{F}\). When each of these arrows returns to \(\mathcal{F}\) one complexity will have magnitude \(|w_{r}|+1\geq m+1\) and the other will have magnitude \(|w_{\ell}|-1\). If \(|w_{\ell}|\geq m+2\) then the resulting arrow has complexity at least \(m+1\). Otherwise \(w_{\ell}=m+1\) and the resulting arrow has weight \(+m\) on one side when it returns to a bundle of arcs. If it returns to a different bundle from which complexity \(m\) arrows have not yet been removed we can now ignore it. If it returns to a bundle from which complexity \(m\) arrows have already been removed, it can be slid across this bundle and eventually removed. Finally, if it returns to the opposite side of the bundle currently being simplified, it will have \(w_{\ell}=m\) and \(|w_{r}|\geq m+1\) and can simply be included in the right \(n\)-strand arrow configuration. After this step, all arrows in the left configuration have \(|w_{\ell}|\geq m+1\) and \(|w_{r}|\geq m+1\), while arrows in the right configuration still have \(|w_{\ell}|\geq m\) and \(|w_{r}|\geq m+1\). _Step 4: Remove \(|w_{\ell}|=m\) arrows on the right._ This step is analogous to Step 3, but we apply Lemma 4.6 to the right configuration with respect to a depth \(m\) ordering on the left and a depth \(m+1\) ordering on the right. Arrows on the left of the new right configuration that have \(w_{\ell}=m\) can be slid leftward and removed, leaving only complexity \(m+1\) arrows on the left of the right configuration. All arrows from the right of the new right configuration can be slid rightward, and when they return to \(\mathcal{F}\) they will have complexity at least \(m+1\) unless the arrow had \(w_{r}=m+1\). In this case the new arrow will have complexity \(+m\) on one side and can be either removed or ignored until a later step, depending on which bundle it returns to. Following the above steps removes all complexity \(m\) arrows from a given bundle of arcs in \(\mathcal{F}\) without introducing any new arrows of complexity less than \(m\), or of complexity \(m\) in a bundle from which these have already been removed. Repeating over all bundles of arcs, we can remove all arrows with complexity less than \(m+1\).
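For bookkeeping, it may help to summarize the invariants guaranteed for the bundle being simplified; this merely restates the conclusions of the four steps above:
\[
\begin{aligned}
&\text{after Step 1:}\quad w_{\ell}=m \text{ or } |w_{\ell}|\geq m+1 \text{ (left arrows)},\qquad w_{r}=m \text{ or } |w_{r}|\geq m+1 \text{ (right arrows)};\\
&\text{after Step 2:}\quad |w_{\ell}|\geq m+1,\ |w_{r}|\geq m \text{ (left)},\qquad |w_{r}|\geq m+1,\ |w_{\ell}|\geq m \text{ (right)};\\
&\text{after Step 3:}\quad |w_{\ell}|,|w_{r}|\geq m+1 \text{ (left)},\qquad |w_{\ell}|\geq m,\ |w_{r}|\geq m+1 \text{ (right)};\\
&\text{after Step 4:}\quad \text{all remaining arrows have complexity at least } m+1.
\end{aligned}
\]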
The rest of the proof is the same as before: by induction we can make the minimum complexity of crossover arrows arbitrarily large. Since there is an upper bound on the complexity of arrows that are not between parallel curves, eventually these are all that remain. Parallel curves with crossover arrows between them can be replaced by non-primitive curves with left-turn crossover arrows at local system intersection points. If necessary we can homotope the curves to put them in simple position, and we have a decorated curve of the desired form. We remark that when applying the arrow sliding algorithm, the restriction of the curves to \(\mathcal{S}\) never changes; this is because crossings are only ever resolved within the strip \(\mathcal{F}\). However, when we slide crossover arrows through \(\mathcal{S}\) in order to remove them, this generally involves changing the basis of the complex \(C\) corresponding to the curve. Since the immersed curve in \(\mathcal{S}\) is unchanged the curve still represents the complex \(C\), but we should understand it as representing the complex with respect to a different basis. This will be relevant later when we add minus information to the curves, since the flip maps should be expressed in terms of this new basis. 

### Examples from knot Floer homology 

We are mainly interested in applying Proposition 8.1 to represent the data from knot Floer homology associated to a nullhomologous knot \(K\) in a \(3\)-manifold \(Y\). Given such a knot, let \(M\) denote the knot complement \(Y\setminus\nu(K)\). Recall that \(T_{M}\) denotes the torus \(\partial M\), which is naturally identified with \(H_{1}(\partial M;\mathbb{R})/H_{1}(\partial M;\mathbb{Z})\), with a marked point at \(\{0\}\). We consider the covering space \(\widetilde{T}_{M}=H_{1}(\partial M;\mathbb{R})\), with a set of marked points identified with \(H_{1}(\partial M;\mathbb{Z})\), as well as the intermediate covering space \(\overline{T}_{M}=\widetilde{T}_{M}/\langle\lambda\rangle\), where the homological longitude \(\lambda\in H_{1}(\partial M;\mathbb{Z})\) generates the kernel of the inclusion \(i_{*}:H_{1}(\partial M;\mathbb{Z})\to H_{1}(M;\mathbb{Z})\). Because \(K\) is nullhomologous, we can take \(\lambda\) to be the Seifert longitude; note that \(\lambda\) is a primitive element of \(H_{1}(\partial M;\mathbb{Z})\) and is dual to the meridian \(\mu\). In this case we can identify \(\overline{T}_{M}\) with the infinite cylinder \(\mathcal{Z}\), where \(\lambda\) is identified with the horizontal direction and the meridian \(\mu\) is identified with the vertical direction. The set of spin\({}^{c}\) structures Spin\({}^{c}(M)\) can be identified with \(H^{2}(M)\cong H_{1}(M,\partial M)\). Since \(K\) is nullhomologous, the same is true of Spin\({}^{c}(Y)\); we will abuse notation and not distinguish between a spin\({}^{c}\) structure in Spin\({}^{c}(M)\) and the corresponding spin\({}^{c}\) structure in Spin\({}^{c}(Y)\). For each spin\({}^{c}\) structure \(\mathfrak{s}\in\text{Spin}^{c}(M)\) we define \(\widehat{HF}(Y,K;\mathfrak{s})\) to be the decorated curve \(\Gamma(\widehat{C}_{\mathfrak{s}},\widehat{\Psi}_{\mathfrak{s},*})\) in \(\overline{T}_{M}\cong\mathcal{Z}\) representing the complex \(\widehat{C}_{\mathfrak{s}}=CFK_{\widehat{\mathcal{R}}}(Y,K;\mathfrak{s})\) equipped with the flip isomorphism \(\widehat{\Psi}_{\mathfrak{s},*}\) as constructed in Proposition 8.1. The simplest case is that of knots in \(S^{3}\).
In this case \(M\) has a single spin\({}^{c}\) structure \(\mathfrak{s}\), and the horizontal and vertical homology of \(\widehat{C}=CFK_{\widehat{\mathcal{R}}}(S^{3},K)\) are one dimensional, so the flip isomorphism is simply multiplication by a nonzero constant \(c\) in \(\mathbb{F}\). To construct the curve \(\widehat{HF}(S^{3},K;\mathfrak{s})\) we construct the immersed curve in \(\mathcal{S}\) representing the complex \(\widehat{C}\) by following Section 7 and then glue the sides of \(\mathcal{S}\) together, identifying the endpoints of the curve and inserting a basepoint with weight \(c\). In particular, no crossover arrows are introduced when the flip map data is added, so the second application of the arrow sliding algorithm is never needed for knots in \(S^{3}\). 

**Example 8.2**.: The immersed multicurves in \(\mathcal{Z}\) representing the knot Floer complex of the left-handed trefoil and the figure eight knot in \(S^{3}\) are shown in Figure 33. 

If \(Y\) is not an L-space, the horizontal and vertical homology of the knot Floer complexes are more complicated, and the flip maps can carry interesting information. To illustrate this, we construct decorated curves from the complexes and flip maps given in Examples 2.5 and 2.6. 

**Example 8.3**.: Consider the knot Floer data associated with the dual knot in \(+1\)-surgery on the figure eight knot given in Example 2.5, viewed as a complex over \(\widehat{\mathcal{R}}\). Because the complexes have horizontally and vertically simplified bases, it is straightforward to construct an immersed multicurve in the marked strip \(\mathcal{S}\) representing the complex \(CFK_{\widehat{\mathcal{R}}}(Y,K)\) (in particular, there are no crossover arrows to remove in this construction); this multicurve is shown on the left of Figure 34. Note that a basepoint of weight \(-1\) is required to get the correct sign on the \(d\) term of \(\partial(e)\). The right endpoints of this multicurve correspond to the generators of hat horizontal homology, \(\{a,b,c\}\), while the left endpoints correspond to the generators \(\{c,d,e\}\) of vertical homology. Recall that up to equivalence the possible flip isomorphisms on this complex are indexed by a nonzero constant \(c_{4}\) in \(\mathbb{F}\), with the flip map taking \(a\) to \(e\), \(b\) to \(d\), and \(c\) to \(c_{4}\cdot c\); though we know the flip isomorphism associated to \(K\subset Y\) corresponds to \(c_{4}=1\), we will describe the construction for any of these flip isomorphisms. We add an unmarked strip \(\mathcal{F}\) containing arcs that match up the endpoints in the appropriate way; that is, we add arcs connecting the \(a\), \(b\), and \(c\) endpoints on the left of \(\mathcal{F}\) to the \(e\), \(d\), and \(c\) endpoints on the right of \(\mathcal{F}\), respectively, and we place a weighted basepoint with weight \(c_{4}\) on the arc from \(c\) to \(c\). Gluing the strips \(\mathcal{S}\) and \(\mathcal{F}\) along with the curves they contain produces the decorated immersed multicurve in the marked cylinder \(\mathcal{Z}\) representing \(CFK_{\widehat{\mathcal{R}}}(Y,K)\) with the given choice of flip map; see the right side of Figure 34. When \(c_{4}=1\), this decorated curve is \(\widehat{HF}(Y,K;\mathfrak{s})\); note that in this case the basepoint in \(\mathcal{F}\) can be omitted. 

**Example 8.4**.: Consider the knot Floer data associated with the dual knot in \(+1\)-surgery on the left-handed trefoil given in Example 2.6, viewed as a complex over \(\widehat{\mathcal{R}}\).
As in Example 8.3, it is straightforward to compute the immersed curves in \(\mathcal{S}\) representing the complex over \(\widehat{\mathcal{R}}\); in fact, the complex is identical to the complex in Example 8.3, so the immersed curves in \(\mathcal{S}\) are the same apart from orientations and weights; see Figure 35(a). Recall that for this complex there are two fundamentally different families of non-equivalent flip isomorphisms: in the first family \(\widehat{\Psi}_{\mathfrak{s}}\) takes \(a\) to \(e\), while in the second family \(\widehat{\Psi}_{\mathfrak{s}}\) takes \(a\) to \(c+e\); in both cases \(\widehat{\Psi}_{\mathfrak{s}}\) takes \(b\) to \(d\) and \(c\) to \(c_{4}\cdot c\) for some nonzero \(c_{4}\) in \(\mathbb{F}\). 

Figure 33. The immersed curves associated with the left-handed trefoil (left) and the figure eight knot (right). 

Figure 34. The immersed curves constructed in Example 8.3: (a) the immersed multicurve in \(\mathcal{S}\) representing the complex with respect to the given basis; (b) the immersed curve in \(\mathcal{Z}\) representing the complex with a flip isomorphism; when \(c_{4}=1\) this gives the curve associated to the dual knot of \(+1\)-surgery on the figure eight knot; (c) the curve (for \(c_{4}=1\)) lifted to the covering space \(\widetilde{T}_{M}\). 

For the first family of flip isomorphisms, the resulting curves are the same as those in Example 8.3 (apart from orientations); see Figure 35(b). For the second family of isomorphisms, we add the same arcs in \(\mathcal{F}\) as before but also add a left-turn crossover arrow (with weight \(1\)) from the segment from \(a\) to \(e\) to the segment from \(c\) to \(c\), as shown in Figure 35(c). Note that since \(a\) and \(c\) have the same value of \(\operatorname{gr}_{z}\), the segments starting at \(a\) and \(c\) in \(\mathcal{F}\) form a bundle, and the \(2\)-strand arrow configuration consisting of the arcs starting at \(a\) and \(c\) along with the crossover arrow encodes the flip map restricted to the appropriate grading. We now need to perform the arrow sliding algorithm to remove the crossover arrow. The first step is to replace the \(2\)-strand arrow configuration connecting \(a\) and \(c\) to \(c\) and \(e\) so that it is sorted with respect to depth \(1\) colors; this amounts to resolving the crossing at which the left-turn crossover arrow appears, using the local move in the last line of Figure 18, resulting in a new curve with two crossover arrows (both with weight \(1\)) as shown on the right of Figure 35(d). These arrows can be removed since they both immediately encounter a marked point when slid toward their left side into \(\mathcal{S}\), though we note that removing these arrows involves a change of basis. After removing the crossover arrows, the two basepoints weighted by \(-1\) can be slid together and canceled (this may also involve basis changes), leaving a single basepoint with weight \(c_{4}\). When \(c_{4}=1\), the resulting curve is \(\widehat{\mathit{HF}}(Y,K;\mathfrak{s})\); in this case the remaining basepoint has weight \(1\) and can be ignored. 

The curve constructed in Example 8.4 is the same as the curve considered in Example 5.5; it was observed there that the complex extracted from this curve is the complex from Example 2.6 and the flip isomorphism extracted from this curve is equivalent to the flip isomorphism from Example 2.6 after a change of basis. We remark that the change of basis needed is precisely the one arising from the arrow removal process.
## 9. Enhancing the curves for complexes over \(\mathcal{R}^{-}\) 

In Sections 7 and 8 we worked in the simpler \(UV=0\) setting and showed that a bigraded complex over \(\widehat{\mathcal{R}}\) can be represented by a decorated curve in a marked strip \(\mathcal{S}\) and that a complex over \(\widehat{\mathcal{R}}\) equipped with a flip isomorphism can be represented by a decorated curve in a marked cylinder \(\mathcal{Z}\). We now aim to extend these results to complexes over \(\mathcal{R}^{-}\), proving the existence part of Theorem 1.2. We first show that for a bigraded complex over \(\mathcal{R}^{-}\), the decorated curve in \(\mathcal{S}\) representing the \(UV=0\) complex can be enhanced to recover the diagonal arrows as well; in fact, the immersed curve will not change (up to homotopy) in this process, we only need to add to the bounding chain. Similarly, given a curve in \(\mathcal{Z}\) representing complexes and flip maps over \(\widehat{\mathcal{R}}\), we can capture the extra information in the flip isomorphism over \(\mathcal{R}^{-}\) by adding self intersection points to the bounding chain. 

Figure 35. The immersed curves constructed in Example 8.4: (a) the immersed multicurve in \(\mathcal{S}\) representing the complex; (b) a curve in \(\mathcal{Z}\) representing the complex with the first family of flip isomorphisms; (c) a multicurve with crossover arrow in \(\mathcal{Z}\) representing the complex with the second family of flip isomorphisms; (d) the first step of removing the crossover arrow. 

### Enhanced curves in \(\mathcal{S}\): two examples 

Before starting the proof, we discuss two illustrative examples; for simplicity we consider both examples with coefficients in \(\mathbb{Z}/2\mathbb{Z}\). Consider the knot Floer complex for the \((2,-1)\)-cable of the left-handed trefoil. The bigraded complex \(C_{1}\) is shown in Figure 36, along with an immersed curve \(\Gamma_{1}\) in the strip \(\mathcal{S}\) which results from applying the algorithm in Section 7; note that there are no self intersection points of \(\Gamma_{1}\), so the bounding chain coming from the algorithm is necessarily trivial and we omit it from the notation. The construction in Section 7 guarantees that the complex \(C(\Gamma_{1})\) agrees with \(C_{1}\) as complexes over \(\widehat{\mathcal{R}}\), with the bigons contained fully on the left (respectively right) side of \(\mu\) corresponding to the vertical (respectively horizontal) arrows in \(C_{1}\). In this case, it turns out that \(C(\Gamma_{1})\) is in fact a bigraded complex over \(\mathcal{R}^{-}\) and it agrees with the full complex \(C_{1}\); in addition to the bigons contributing to the complex over \(\widehat{\mathcal{R}}\), the differential on \(C(\Gamma_{1})\) over \(\mathcal{R}^{-}\) counts the two bigons shaded in the figure, which exactly recover the two diagonal arrows in \(C_{1}\). 

The good fortune of the last example does not always hold. Figure 37 shows another bigraded complex \(C_{2}\) over \(\mathcal{R}^{-}\). This complex is one summand of the knot Floer complex for \(T_{2,9}\#-T_{2,3;2,5}\). On the left is the immersed curve \(\Gamma_{2}\) which represents the complex over \(\widehat{\mathcal{R}}\). Once again, the bounding chain determined by the construction in Section 7 is trivial. By construction, the complex \(C(\Gamma_{2})\) agrees with \(C_{2}\) as a complex over \(\widehat{\mathcal{R}}\).
However, if we work over \(\mathcal{R}^{-}\) then \(C(\Gamma_{2})\) does not agree with \(C_{2}\); two of the four diagonal arrows in \(C_{2}\) are missing (these two arrows are gray in the figure). In fact, this means that \(C(\Gamma_{2})\) is not even a complex over \(\mathcal{R}^{-}\), as \(\partial^{2}\) is not zero. The situation can be salvaged if we decorate \(\Gamma_{2}\) with a bounding chain \(\mathbf{b}_{2}\). We take \(\mathbf{b}_{2}\) to be the linear combination of the two self-intersection points of \(\Gamma_{2}\), each with weight \(W\), as shown on the right side of the figure. Working over \(\mathcal{R}^{-}\), the complex \(C(\Gamma_{2},\mathbf{b}_{2})\) has the same generators as the precomplex \(C(\Gamma_{2})\) and all the same terms in the differential, in addition to two new terms coming from the shaded generalized bigons in the figure; these correspond precisely to the two missing arrows, so \((\Gamma_{2},\mathbf{b}_{2})\) represents \(C_{2}\) over \(\mathcal{R}^{-}\). 

### Enhanced curves in \(\mathcal{S}\): general case 

In general, for any bigraded complex \(C\) over \(\mathcal{R}^{-}\) let \((\Gamma,\widehat{\mathbf{b}})\) be an immersed multicurve in \(\mathcal{S}\) and bounding chain that represents \(C\) over \(\widehat{\mathcal{R}}\), as constructed in Section 7. Our strategy will be to incrementally improve \((\Gamma,\widehat{\mathbf{b}})\) by adding more turning points to \(\widehat{\mathbf{b}}\) until we arrive at a bounding chain \(\mathbf{b}\) such that \((\Gamma,\mathbf{b})\) represents \(C\) over \(\mathcal{R}^{-}\). The immersed multicurve \(\Gamma\) will not change during this process, though in order to have enough intersection points we must assume that \(\Gamma\) begins in almost simple position rather than simple position. The construction in Section 7 gives curves in simple position, but these can easily be put in almost simple position by sliding endpoints along \(\partial_{L}\mathcal{S}\) and \(\partial_{R}\mathcal{S}\) to put them in the correct order and by applying finger moves to remove any immersed annuli; these changes do not affect the fact that \((\Gamma,\widehat{\mathbf{b}})\) represents \(C\) over \(\widehat{\mathcal{R}}\). 

Figure 36. \(CFK^{\infty}\) of the \((2,-1)\) cable of the left-handed trefoil. On the left is a representation of the bifiltered complex; on the right is the corresponding immersed curve. The two shaded bigons, which each cover one puncture, correspond to the diagonal arrows in the complex. 

A bigraded (pre)complex over \(\mathcal{R}^{-}\) is determined by a set of \(n\) generators with associated Alexander and Maslov gradings and an \(n\times n\) matrix with coefficients in \(\mathbb{F}\). We will order the \(n^{2}\) entries of this matrix and inductively construct collections of turning points \(\mathbf{b}_{N}\) such that the precomplex \(C(\Gamma,\mathbf{b}_{N})\) agrees with the complex \(C\) as a vector space and such that the differentials agree in the first \(N\) entries of this matrix. When \(N=n^{2}\) the matrices agree entirely, and so if we take \(\mathbf{b}=\mathbf{b}_{n^{2}}\) then \((\Gamma,\mathbf{b})\) represents \(C\) over \(\mathcal{R}^{-}\).
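Schematically (this merely restates the strategy just described), the construction produces a chain of decorations
\[
\widehat{\mathbf{b}}=\mathbf{b}_{0}\ \rightsquigarrow\ \mathbf{b}_{1}\ \rightsquigarrow\ \cdots\ \rightsquigarrow\ \mathbf{b}_{n^{2}}=\mathbf{b},
\]
where at stage \(N\) the precomplex \(C(\Gamma,\mathbf{b}_{N})\) agrees with \(C\) in the first \(N\) entries of the matrix of differential coefficients, with respect to the ordering on entries fixed below.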
Figure 37. For the complex shown, on the left is an immersed curve representing the \(UV=0\) quotient of the complex. Note that even if we consider bigons which fully cover the puncture, the pre-complex determined by the immersed curve does not agree with the full complex; the two diagonal arrows in the middle of the complex correspond to the shaded bigons on the left, but the remaining two diagonal arrows are missing. On the right is the curve decorated with two crossover arrows, each of which is weighted by \(UV\). The complex determined by this train track contains two additional arrows coming from the shaded bigons and determines the correct complex. 

It will be convenient to work with train tracks so that we can slide crossover arrows in the construction, so let \(\boldsymbol{\vartheta}_{N}\) denote the immersed train track corresponding to the pair \((\Gamma,\mathbf{b}_{N})\) (recall that this consists of the multicurve \(\Gamma\) along with left turn crossover arrows determined by \(\mathbf{b}_{N}\)). As usual we let \(C(\boldsymbol{\vartheta}_{N})\) denote the (pre)complex represented by this train track, which is equipped with the map \(\partial^{\boldsymbol{\vartheta}_{N}}\). As the base case of the induction, we define \(\mathbf{b}_{0}\) to be \(\widehat{\mathbf{b}}\). By assumption, the train track \(\boldsymbol{\vartheta}_{0}\) corresponding to \((\Gamma,\widehat{\mathbf{b}})\) represents \(C\) over \(\widehat{\mathcal{R}}\). Note that \(\widehat{\mathbf{b}}\) is a bounding chain over \(\widehat{\mathcal{R}}\) but not necessarily over \(\mathcal{R}^{-}\), so \(C(\boldsymbol{\vartheta}_{0})\) may be only a precomplex over \(\mathcal{R}^{-}\). The intersections of \(\Gamma\) with \(\mu\) specify an ordered basis \(\{x_{1},\ldots,x_{n}\}\) for \(C(\boldsymbol{\vartheta}_{0})\), where the ordering is given by height. We will use \(<\) to denote this ordering; that is, \(x_{i}<x_{j}\) if the point \(x_{i}\) occurs below \(x_{j}\). Note that this is a refinement of the partial ordering on generators given by the Alexander grading. Over \(\widehat{\mathcal{R}}\) the complex \(C(\boldsymbol{\vartheta}_{0})\) is isomorphic to \(C\), by assumption; by slight abuse of notation we will use \(\{x_{1},\ldots,x_{n}\}\) to refer to the corresponding basis of the complex \(C|_{\widehat{\mathcal{R}}}\), which can also be taken as a basis of \(C\). Let \(\{d_{i,j}\}_{1\leq i,j\leq n}\) be the matrix of coefficients (in \(\mathbb{F}\)) of the differential \(\partial\) on \(C\) with respect to this basis, so that 

\[\partial(x_{i})=\sum_{j=1}^{n}d_{i,j}U^{a_{i,j}}V^{b_{i,j}}x_{j}\] 

where \(a_{i,j}\) and \(b_{i,j}\) are defined by Equation (1). Note that \(d_{i,j}=0\) if \(M(x_{i})\) and \(M(x_{j})\) have the same parity or if \(a_{i,j}\) or \(b_{i,j}\) are negative. Similarly, let \(\{d^{\boldsymbol{\vartheta}_{N}}_{i,j}\}_{1\leq i,j\leq n}\) be the coefficients of the map \(\partial^{\boldsymbol{\vartheta}_{N}}\) for \(C(\boldsymbol{\vartheta}_{N})\). Let \(\mathcal{P}\) denote the set of pairs \((i,j)\) with \(1\leq i,j\leq n\); we now define an ordering \(\lessdot\) on \(\mathcal{P}\) given the gradings on the ordered basis \(\{x_{1},\ldots,x_{n}\}\). We first order by the parity of \(M(x_{i})-M(x_{j})\), with the pairs for which this parity is even coming first. The pairs for which \(M(x_{i})-M(x_{j})\) is even can be given any arbitrary order. For pairs with \(M(x_{i})-M(x_{j})\) odd, we next order by \(\min(a_{i,j},b_{i,j})\) and then by \(\max(a_{i,j},b_{i,j})\).
That is, we pick the ordering \(\lessdot\) such that \((i,j)\lessdot(i^{\prime},j^{\prime})\) if \(\min(a_{i,j},b_{i,j})<\min(a_{i^{\prime},j^{\prime}},b_{i^{\prime},j^{\prime}})\) or if \(\min(a_{i,j},b_{i,j})=\min(a_{i^{\prime},j^{\prime}},b_{i^{\prime},j^{\prime}})\) and \(\max(a_{i,j},b_{i,j})<\max(a_{i^{\prime},j^{\prime}},b_{i^{\prime},j^{\prime}})\). To refine the ordering further, we define a complexity on pairs \((i,j)\) based on how long the paths following \(\Gamma\) starting from \(x_{i}\) and \(x_{j}\) take to cross or diverge, where the paths begin moving rightward if \(i<j\) and they begin moving leftward if \(i>j\). More precisely, the complexity is zero if either path does not return to \(\mu\), if the two paths cross before returning to \(\mu\), or if the two paths return to \(\mu\) at different heights; otherwise, if the paths starting at \(x_{i}\) and \(x_{j}\) first return to \(\mu\) at points \(x_{i^{\prime}}\) and \(x_{j^{\prime}}\), respectively, then the complexity of \((i,j)\) is one more than the complexity of \((i^{\prime},j^{\prime})\). Among pairs with the same \(\min(a_{i,j},b_{i,j})\) and \(\max(a_{i,j},b_{i,j})\), we choose \(\lessdot\) so that pairs are ordered by increasing complexity. Finally, pairs with the same \(\min(a_{i,j},b_{i,j})\), \(\max(a_{i,j},b_{i,j})\), and complexity can be ordered arbitrarily, subject to the following constraint: if \(x_{i}\) is part of a grouping of generators with indices \(\{i_{0},\ldots,i_{r-1}\}\) on a non-primitive curve component of order \(r\) and \(x_{j}\) is part of a grouping of generators with indices \(\{j_{0},\ldots,j_{s-1}\}\) on a non-primitive curve of order \(s\), then the relative ordering on the \(rs\) pairs coming from an index in the first grouping and an index in the second grouping satisfies the following: \[(i_{0},j_{\ell\neq 0})\lessdot(i_{0},j_{0})\lessdot(i_{k\neq 0},j_{\ell\neq 0})\lessdot(i_{k\neq 0},j_{0}).\] If a train track \(\boldsymbol{\vartheta}\) represents the complex \(C\) over \(\widehat{\mathcal{R}}\) with respect to some basis of \(C\), we will say that it _represents \(C\) over \(\mathcal{R}^{-}\) to \(N\) entries_ if \(d_{i,j}=d^{\boldsymbol{\vartheta}}_{i,j}\) for the first \(N\) pairs \((i,j)\) in \(\mathcal{P}\). We will inductively construct the collections of turning points \(\mathbf{b}_{N}\) so that the train tracks \(\boldsymbol{\vartheta}_{N}\) represent \(C\) over \(\mathcal{R}^{-}\) to at least \(N\) entries. Note that for any train track \(\boldsymbol{\vartheta}\) representing \(C\) over \(\widehat{\mathcal{R}}\), \(d_{i,j}\) and \(d^{\boldsymbol{\vartheta}}_{i,j}\) are both zero if \(M(x_{i})-M(x_{j})\) is even or if \(\min(a_{i,j},b_{i,j})<0\). The entries with \(\min(a_{i,j},b_{i,j})=0\) record the horizontal and vertical arrows in the complex, so the fact that \(\boldsymbol{\vartheta}\) represents \(C\) over \(\widehat{\mathcal{R}}\) implies that \(d_{i,j}\) and \(d^{\boldsymbol{\vartheta}}_{i,j}\) also agree for these entries. Thus, if \(N_{0}\) is the number of pairs with \(M(x_{i})-M(x_{j})\) even or with \(\min(a_{i,j},b_{i,j})\leq 0\), then we can define \(\mathbf{b}_{N}\) to be \(\widehat{\mathbf{b}}\) for all \(N\leq N_{0}\), and the induction really begins at \(N=N_{0}\). To show that there is some \(\mathbf{b}=\mathbf{b}_{n^{2}}\) so that \(\boldsymbol{\vartheta}_{n^{2}}\) represents \(C\) over \(\mathcal{R}^{-}\), we need the following inductive step.
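Before stating it, here is how \(\lessdot\) compares two hypothetical odd-parity pairs (the exponents are illustrative only): a pair with \((a_{i,j},b_{i,j})=(3,1)\) precedes one with \((a_{i^{\prime},j^{\prime}},b_{i^{\prime},j^{\prime}})=(2,2)\), since
\[
\min(3,1)=1<2=\min(2,2),
\]
and it also precedes one with exponents \((1,4)\), since the minima tie at \(1\) while \(\max(3,1)=3<4=\max(1,4)\).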
**Proposition 9.1** (Main inductive step).: _Let \(C\) be a bigraded complex over \(\mathcal{R}^{-}\). Suppose \(\Gamma\) is a weighted and graded immersed multicurve in \(\mathcal{S}\) in almost simple position and \(\mathbf{b}_{N}\) is a collection of turning points such that \((\Gamma,\mathbf{b}_{N})\) represents \(C\) over \(\widehat{\mathcal{R}}\) and represents \(C\) over \(\mathcal{R}^{-}\) to \(N\) entries (with respect to some given basis of \(C\)), and suppose that the restriction \(\widehat{\mathbf{b}}_{N}\) of \(\mathbf{b}_{N}\) to degree zero intersection points is of local system type. There exists a new collection of turning points \(\mathbf{b}_{N+1}\), also with the property that \(\widehat{\mathbf{b}}_{N+1}\) is of local system type, such that \((\Gamma,\mathbf{b}_{N+1})\) represents \(C\) over \(\widehat{\mathcal{R}}\) and over \(\mathcal{R}^{-}\) to \(N+1\) entries (possibly with respect to a different basis of \(C\))._ 

Proof of Proposition 9.1.: Let \((i,j)\) be the \((N+1)\)st pair of indices in \(\mathcal{P}\) with respect to the ordering \(\lessdot\) defined above. Our aim is to show that \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}=d_{i,j}\) already or that this can be made true by modifying \(\boldsymbol{\vartheta}_{N}\) through the addition or removal of points in \(\mathbf{b}_{N}\), where this modification does not affect the fact that \(d^{\boldsymbol{\vartheta}_{N}}_{i^{\prime},j^{\prime}}=d_{i^{\prime},j^{\prime}}\) for all \((i^{\prime},j^{\prime})\lessdot(i,j)\). For simplicity we will assume that \(i<j\), i.e. that the potential arrow from \(x_{i}\) to \(x_{j}\) is horizontal type. The opposite case, with \(i>j\), is identical after rotating all pictures in the proof by 180 degrees. Let \(a=a_{i,j}\) and \(b=b_{i,j}\), so that a hypothetical arrow from \(x_{i}\) to \(x_{j}\) has coefficient \(cU^{a}V^{b}\) for \(c\in\mathbb{F}\). Since \(i<j\), we have that \(a-b=A(x_{j})-A(x_{i})\geq 0\). Note that since the coefficients \(d_{i^{\prime},j^{\prime}}\) and \(d^{\boldsymbol{\vartheta}_{N}}_{i^{\prime},j^{\prime}}\) agree for pairs \((i^{\prime},j^{\prime})\) with \((i^{\prime},j^{\prime})\lessdot(i,j)\), the inductive hypothesis implies that \((\Gamma,\mathbf{b}_{N})\) represents \(C\) over \(\mathcal{R}_{b}=\mathcal{R}^{-}/W^{b}\). On the other hand, since we do not care about pairs \((i^{\prime},j^{\prime})\) with \((i,j)\lessdot(i^{\prime},j^{\prime})\), to verify the conclusion of the proposition it is sufficient to do so over \(\mathcal{R}_{b+1}=\mathcal{R}^{-}/W^{b+1}\). Recall that the generators \(x_{i}\) and \(x_{j}\) correspond to intersection points of \(\Gamma\) with the vertical line \(\mu\). We can define an arc \(s_{i}\) by following \(\Gamma\), initially moving rightward from the point corresponding to \(x_{i}\), until it either returns to \(\mu\) or hits the right boundary of \(\mathcal{S}\); if the terminal endpoint of \(s_{i}\) lies on \(\mu\) let \(x_{i^{\prime}}\) be the corresponding generator of \(C(\boldsymbol{\vartheta}_{N})\), and otherwise we say \(x_{i^{\prime}}=\emptyset\). Define \(s_{j}\) and \(x_{j^{\prime}}\) similarly. Note that \(s_{i}\) and \(s_{j}\) can not be the same segment (that is, \(x_{i^{\prime}}\) can not be \(x_{j}\)), since then there would be a horizontal arrow from \(x_{i}\) to \(x_{j}\). We consider cases based on the relative values of \(x_{i}\), \(x_{j}\), \(x_{i^{\prime}}\), and \(x_{j^{\prime}}\). There are nineteen cases, which are depicted in Figure 38.
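Before analyzing the cases, note that the induction works one power of \(W\) at a time: writing \(\mathcal{R}_{k}=\mathcal{R}^{-}/W^{k}\), the inductive step upgrades the agreement of the differentials along the tower of quotients
\[
\mathcal{R}^{-}\ \twoheadrightarrow\ \cdots\ \twoheadrightarrow\ \mathcal{R}_{b+1}\ \twoheadrightarrow\ \mathcal{R}_{b}\ \twoheadrightarrow\ \cdots\ \twoheadrightarrow\ \mathcal{R}_{1},
\]
from \(\mathcal{R}_{b}\) (given by the inductive hypothesis) to \(\mathcal{R}_{b+1}\) (which suffices for the \((N+1)\)st entry).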
In each case, we will do one of three things: (1) we show that there is an intersection point at which we can add a left turn crossover arrow to \(\boldsymbol{\vartheta}_{N}\) which changes \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\) without changing any earlier coefficients, (2) we show that \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\) can be adjusted, without changing any earlier coefficients, by adding a crossover arrow to \(\boldsymbol{\vartheta}_{N}\) which is not a left-turn crossover arrow, and we show that using arrow slide moves we can remove this arrow or it can become a left-turn crossover arrow, or (3) we show that \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\) must already agree with \(d_{i,j}\). In cases (1) and (2) we define \(\mathbf{b}_{N+1}\) from \(\mathbf{b}_{N}\) by adding the intersection point corresponding to the left turn crossover arrow, and in case (3) we simply let \(\mathbf{b}_{N+1}=\mathbf{b}_{N}\). 

**Cases (c), (e), (f), and (g):** In each of these cases the segments \(s_{i}\) and \(s_{j}\) intersect. We will call the intersection point \(p\); it is straightforward to check that \(p\) has degree \(-2b\), so that the power of \(W\) in the weight of any left turn arrow at \(p\) must be \(b\). If \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}=d_{i,j}\) we are done; otherwise let \(c=d_{i,j}-d^{\boldsymbol{\vartheta}_{N}}_{i,j}\). We obtain \(\boldsymbol{\vartheta}_{N+1}\) from \(\boldsymbol{\vartheta}_{N}\) by adding a left turn crossover arrow weighted by \(cW^{b}\) at the point \(p\) as shown in Figure 39. The newly created bigon shaded in the figure contributes \(c(UV)^{b}U^{a-b}x_{j}=cU^{a}V^{b}x_{j}\) to \(\partial^{\boldsymbol{\vartheta}_{N+1}}(x_{i})\). This is precisely the change to \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\) needed to make it agree with \(d_{i,j}\); we only need to check that adding this crossover arrow does not have any unwanted side effects on other terms in the differential. 

Figure 38. The nineteen cases in the proof of Proposition 9.1. 

We can restrict our attention to bigons which lie entirely on the right side of \(\mu\), since if a bigon crosses to the left side of \(\mu\) and comes back, it must either enclose a puncture or make a left turn at an intersection point with negative degree, either of which would contribute at least one \(W\) to the weight; if this bigon also involves the crossover arrow at point \(p\), this contributes an additional \(W^{b}\) and the bigon can be ignored modulo \(W^{b+1}\). Similarly, we can ignore any bigons which involve another crossover arrow at a self intersection point of strictly negative degree. For simplicity, first assume that \(\mathbf{b}_{N}\) contains no degree zero intersection points. In this case it is clear that there are no other relevant bigons formed when the new crossover arrow is added. Such a bigon would have to cover the opposite quadrant near \(p\), and the \(\boldsymbol{\vartheta}_{N+1}\) part of its boundary would be the path from \(x_{i^{\prime}}\) to \(p\) to \(x_{j^{\prime}}\); since \(x_{i^{\prime}}\) is above \(x_{j^{\prime}}\) in case (c) and at least one of \(x_{i^{\prime}}\) or \(x_{j^{\prime}}\) is \(\emptyset\) in cases (e), (f), and (g), this can not be the right side of a bigon formed with a segment of \(\mu\). To complete these cases, we need to allow for the possibility that \(\mathbf{b}_{N}\) contains degree zero points which lead to unwanted side effect bigons when the crossover arrow is added.
Since \(\widehat{\mathbf{b}}_{N}\) is of local system type by assumption, any degree zero points must be local system intersection points. Clearly an unwanted bigon can only occur if the degree zero points in question occur along \(s_{i}\) or \(s_{j}\); that is, we must have that \(x_{i}\) or \(x_{j}\) lie on non-primitive curves and that \(s_{i}\) or \(s_{j}\) pass through the crossing region of the relevant curve between \(x_{i}\) or \(x_{j}\) and \(p\). For example, suppose \(x_{i}\) is \(x_{i_{0}}\) in a multiplicity 3 grouping of generators, and that the intersection between \(s_{i_{0}}\) and \(s_{i_{2}}\) is in \(\mathbf{b}\) and falls between \(x_{i_{0}}\) and \(p\), as on the left side of Figure 40. Adding the crossover arrow at \(p\) to introduce a bigon from \(x_{i_{0}}\) to \(x_{j}\) also introduces the bigon from \(x_{i_{2}}\) to \(x_{j}\) shaded in the figure. As another example, suppose \(x_{j}\) is \(x_{j_{1}}\) in a multiplicity 3 grouping of generators, and that \(\mathbf{b}\) contains the intersection between \(s_{j_{1}}\) and \(s_{j_{0}}\), as on the right side of Figure 40; adding the point \(p\) to \(\mathbf{b}\) introduces the shaded bigon from \(x_{i}\) to \(x_{j_{0}}\) in addition to the desired bigon from \(x_{i}\) to \(x_{j_{1}}\). From these examples it is clear that fixing the coefficient \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\) only affects other terms of the differential if \(i=i_{0}\) and/or \(j=j_{\ell}\) with \(\ell\neq 0\), and in these cases the indices of the other terms affected are obtained from \((i,j)\) by replacing \(i\) with \(i_{k}\) for \(k\neq 0\) and/or replacing \(j\) with \(j_{0}\). Any pair of this form is greater than \((i,j)\) with respect to the ordering \(\lessdot\), so we can ignore these side effects. 

Figure 39. To correct \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\) in cases (c), (e), (f), and (g) we add a multiple of the intersection point shown to \(\mathbf{b}\); this modifies \(\boldsymbol{\vartheta}\) by adding a crossover arrow in a neighborhood of this intersection point as pictured. This amounts to allowing left turns from \(s_{i}\) to \(s_{j}\) at this point. 

Figure 40. Possible side effect bigons in case (c), (e), (f), or (g) involving weight zero points in \(\mathbf{b}\). 

**Cases (j) and (l):** In these cases, we argue that \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\) must already agree with \(d_{i,j}\) by considering the coefficient of \(x_{j^{\prime}}\) in \((\partial^{\boldsymbol{\vartheta}_{N}})^{2}(x_{i})\), which up to a power of \(U\) and of \(V\) is given by the left hand sum below. While \(\partial^{\boldsymbol{\vartheta}_{N}}\) is not necessarily a differential, this term of \((\partial^{\boldsymbol{\vartheta}_{N}})^{2}\) is zero by Lemma 6.7.
The sum can be taken only over indices \(\ell\) for which \(M(x_{\ell})\) and \(M(x_{i})\) have opposite parity and for which \(0\leq a_{i,\ell}\leq a_{i,j^{\prime}}\) and \(0\leq b_{i,\ell}\leq b_{i,j^{\prime}}\). Note that because there is a horizontal arrow from \(x_{j}\) to \(x_{j}^{\prime}\), \(b_{i,j^{\prime}}=b_{i,j}=b\). We have that \(b_{i,\ell}\) and \(b_{\ell,j^{\prime}}\) are nonnegative and sum to \(b_{i,j^{\prime}}\), so both are less than \(b\) unless one of them is zero. If both are less than \(b\), then \(d_{i,\ell}^{\boldsymbol{\vartheta}_{N}}\) agrees with \(d_{i,\ell}\) and \(d_{\ell,j^{\prime}}^{\boldsymbol{\vartheta}_{N}}\) agrees with \(d_{\ell,j^{\prime}}\) since \(C^{\boldsymbol{\vartheta}_{N}}\) agrees with \(C\) mod \(W^{b}\). If \(b_{i,\ell}=0\) then \(d_{i,\ell}^{\boldsymbol{\vartheta}_{N}}=d_{i,\ell}\) must be zero, since there are no horizontal arrows starting at \(x_{i}\). If \(b_{\ell,j^{\prime}}=0\) and \(d_{\ell,j^{\prime}}^{\boldsymbol{\vartheta}_{N}}=d_{\ell,j^{\prime}}\) is nonzero, then there is a horizontal arrow from \(x_{\ell}\) to \(x_{j^{\prime}}\). This implies that \(x_{\ell}\) is \(x_{j}\) or possibly another generator in the same grouping as \(x_{j}\), if \(x_{j}\) lies on a non-primitive curve. In the later case, \(x_{j^{\prime}}\) must be the first generator in its grouping since only the first generator can have an incoming horizontal arrow, and thus \(x_{j}\) must be the first generator in its grouping; it follows that \((i,\ell)\lessdot(i,j)\) so we can assume that \(d_{i,\ell}^{\boldsymbol{\vartheta}_{N}}=d_{i,\ell}\). After deleting all terms which agree on both sides, Equation (6) reduces to \[d_{i,j}^{\boldsymbol{\vartheta}_{N}}d_{j,j^{\prime}}^{\boldsymbol{\vartheta}_{ N}}=d_{i,j}d_{j^{\prime},j^{\prime}}\] with \(d_{j,j^{\prime}}^{\boldsymbol{\vartheta}_{N}}=d_{j,j^{\prime}}\neq 0\). Thus \(d_{i,j}^{\boldsymbol{\vartheta}_{N}}=d_{i,j}\). **Case (s):** This is similar to cases (j) and (l) except that we consider the coefficient of the \(x_{j}\) in \((\partial^{\boldsymbol{\vartheta}_{N}})^{2}(x_{i^{\prime}})\), which is zero by Lemma 6.7(i). This leads to the equation \[\sum_{\ell=1}^{n}d_{i^{\prime},\ell}^{\boldsymbol{\vartheta}_{N}}d_{\ell,j}^{ \boldsymbol{\vartheta}_{N}}=0=\sum_{\ell=1}^{n}d_{i^{\prime},\ell}d_{\ell,j}. \tag{7}\] As in the previous case it is clear that the \(\ell\)th term is equal on both sides unless either \(b_{i^{\prime},\ell}\) or \(b_{\ell,j}\) are zero. If \(b_{\ell,j}=0\) then \(d_{\ell,j}^{\boldsymbol{\vartheta}_{N}}=d_{\ell,j}=0\), since \(x_{j}\) has no incoming horizontal arrows. If \(b_{i^{\prime},\ell}=0\) and \(d_{i^{\prime},\ell}^{\boldsymbol{\vartheta}_{N}}=d_{i^{\prime},\ell}\neq 0\) then there is a horizontal arrow from \(x_{i^{\prime}}\) to \(x_{\ell}\), which implies either \(\ell=i\) or that \(\ell=i_{0}\) where \(x_{i}\) is part of a grouping of generators with indices \(\{i_{0},\ldots,i_{r}\}\). If \(\ell=i_{0}\neq i\), then \((\ell,j)\lessdot(i,j)\) so we can assume that \(d_{\ell,j}^{\boldsymbol{\vartheta}_{N}}=d_{\ell,j}\). Equation (7) then reduces to \(d_{i^{\prime},i}^{\boldsymbol{\vartheta}_{N}}d_{i,j}^{\boldsymbol{\vartheta}_{ N}}=d_{i^{\prime},i}d_{i,j}\) and \(d_{i^{\prime},i}^{\boldsymbol{\vartheta}_{N}}=d_{i^{\prime},i}\neq 0\), so \(d_{i,j}^{\boldsymbol{\vartheta}_{N}}=d_{i,j}\). **Cases (a) and (d):** As in the cases (c), (e), (f), and (g) above, the segments \(s_{i}\) and \(s_{j}\) cross and their intersection point \(p\) has index \(b\). 
As before we can modify the coefficient \(d_{i,j}^{\boldsymbol{\vartheta}_{N}}\) as needed by adding a left turn crossover arrow at \(p\). The difference is that now making this change also introduces another bigon from \(x_{i^{\prime}}\) to \(x_{j^{\prime}}\). If \((i,j)\lessdot(i^{\prime},j^{\prime})\) then we can ignore the effect of this bigon (as well as the effect of any other bigon from a generator in the grouping of \(x_{i^{\prime}}\) to a generator in the grouping of \(x_{j^{\prime}}\), when there are non-primitive curves involved) and the proof proceeds as before. If instead \((i^{\prime},j^{\prime})\lessdot(i,j)\) we argue that \(d_{i,j}^{\boldsymbol{\vartheta}_{N}}\) must already agree with \(d_{i,j}\). In case (a), we consider the coefficient of \(x_{j^{\prime}}\) in \((\partial^{\boldsymbol{\vartheta}_{N}})^{2}(x_{i})\), as in cases (j) and (l) above. The proof is identical to those cases, except that we must consider values of \(\ell\) with \(b_{i,\ell}=0\) in addition to those with \(b_{\ell,j^{\prime}}=0\), since \(x_{i}\) has an outgoing horizontal arrow. However, the only horizontal arrow out of \(x_{i}\) goes to \(x_{i^{\prime}}\), unless \(x_{i}\) is in a grouping of generators with indices \(\{i_{0},\ldots,i_{r-1}\}\) and \(i\neq i_{0}\), in which case there may also be a horizontal arrow from \(x_{i}\) to \(x_{i_{0}^{\prime}}\). Thus if \(b_{i,\ell}=0\) and \(d_{i,\ell}^{\boldsymbol{\vartheta}_{N}}=d_{i,\ell}\neq 0\) then \(\ell\) is \(i^{\prime}\) or is \(i_{0}^{\prime}\), the first index in a grouping containing \(i^{\prime}\). We have assumed that \((i^{\prime},j^{\prime})\lessdot(i,j)\), and if \(\ell=i_{0}^{\prime}\neq i^{\prime}\) then \((\ell,j^{\prime})\lessdot(i^{\prime},j^{\prime})\lessdot(i,j)\), so by the inductive assumption \(d_{\ell,j^{\prime}}^{\boldsymbol{\vartheta}_{N}}=d_{\ell,j^{\prime}}\). As before, Equation (6) implies \(d_{i,j}^{\boldsymbol{\vartheta}_{N}}=d_{i,j}\). For case (d) we consider the coefficient of \(x_{j}\) in \((\partial^{\boldsymbol{\vartheta}_{N}})^{2}(x_{i^{\prime}})\) as in case (s) above. Since \(x_{j}\) has an incoming horizontal arrow we must consider terms \(d^{\boldsymbol{\vartheta}_{N}}_{i^{\prime},\ell}d^{\boldsymbol{\vartheta}_{N}}_{\ell,j}\) where \(b_{\ell,j}=0\), but for such terms if \(d^{\boldsymbol{\vartheta}_{N}}_{i^{\prime},\ell}\neq 0\) then either \(\ell=j^{\prime}\) or \(x_{j^{\prime}}\) is the first in a grouping of generators and \(\ell\) is the index of another generator in that grouping. In either case \((i^{\prime},\ell)\lessdot(i^{\prime},j^{\prime})\), so we can assume \(d^{\boldsymbol{\vartheta}_{N}}_{i^{\prime},\ell}=d_{i^{\prime},\ell}\). Then Equation (7) implies \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}=d_{i,j}\). 

**Cases (n) and (q):** These are similar to the previous few cases, with one additional consideration that we highlight here. For brevity we focus on case (n); the translation to case (q) is straightforward. Like in case (j), we consider the coefficient of \(x_{j^{\prime}}\) in \((\partial^{\boldsymbol{\vartheta}_{N}})^{2}(x_{i})\); this is zero by Lemma 6.7(ii). As before the relevant terms in Equation (6) are those for which either \(\ell\) is \(i^{\prime}\) (or the first index in the grouping containing \(i^{\prime}\)) or \(\ell\) is \(j\) (or another index in the grouping containing \(j\), where \(j\) is the first of that grouping).
We again argue that for such \(\ell\) the coefficients \(d^{\boldsymbol{\vartheta}_{N}}_{i,\ell}\) and \(d^{\boldsymbol{\vartheta}_{N}}_{\ell,j^{\prime}}\) already agree with their counterparts on the other side of the equation, except possibly \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\). The key observation to make is that \((i^{\prime},j^{\prime})\lessdot(i,j)\); the other cases follow as before from the way pairs coming from groupings of indices are ordered. Note that since there are horizontal arrows from \(i\) to \(i^{\prime}\) and from \(j\) to \(j^{\prime}\) we have \(b_{i^{\prime},j^{\prime}}=b_{i,j}=b\). We have that \(a_{i,j}\geq b_{i,j}\) since \(i<j\), while since \(i^{\prime}>j^{\prime}\) we have \(a_{i^{\prime},j^{\prime}}\leq b_{i^{\prime},j^{\prime}}\). It follows that \(\min(a_{i^{\prime},j^{\prime}},b_{i^{\prime},j^{\prime}})<\min(a_{i,j},b_{i,j})\), and hence \((i^{\prime},j^{\prime})\lessdot(i,j)\), unless \(A(x_{i})=A(x_{j})\) and \(A(x_{i^{\prime}})=A(x_{j^{\prime}})\). If this is the case, then the segments \(s_{i}\) and \(s_{j}\) are parallel, they neither cross nor diverge before returning to \(\mu\), and they return to \(\mu\) at the same height. It follows that the pair \((i,j)\) has complexity one higher than the pair \((i^{\prime},j^{\prime})\), and we have chosen the ordering \(\lessdot\) so that this implies \((i^{\prime},j^{\prime})\lessdot(i,j)\). 

**Cases (b), (i), (m), (o), (p) and (r):** In each of these cases, \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\) does not necessarily agree with \(d_{i,j}\), so if it does not we must modify the train track \(\boldsymbol{\vartheta}\) to adjust the coefficient \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\). However, in all but case (b) there is no intersection point at which to add a crossover arrow as we did before, while in case (b) there is an intersection between the segments \(s_{i}\) and \(s_{j}\) but it has degree \(2b>0\) and thus a crossover arrow added there must be a right turn crossover arrow. Instead, we will modify \(\boldsymbol{\vartheta}_{N}\) by adding a crossover arrow which is not at a self intersection point of \(\Gamma\), as pictured in Figure 41. Each arrow lies just to the right of \(\mu\), connects \(s_{i}\) to \(s_{j}\) near an endpoint of each of those segments, and is weighted by \(\pm cW^{b}\), where \(c=d_{i,j}-d^{\boldsymbol{\vartheta}_{N}}_{i,j}\). In cases (o) and (m) the arrow connects the \(x_{i}\) end of \(s_{i}\) to the \(x_{j^{\prime}}\) end of \(s_{j}\) and is weighted by \(cW^{b}\), in cases (p) and (r) the arrow connects the \(x_{i^{\prime}}\) end of \(s_{i}\) to the \(x_{j}\) end of \(s_{j}\) and is weighted by \(-cW^{b}\), and in cases (b) and (i) we can choose either of those two options. In each case there is an obvious bigon from \(x_{i}\) to \(x_{j}\) introduced by adding the crossover arrow. This bigon has weight \(c(UV)^{b}U^{a-b}\), and so it contributes exactly the desired change to \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\). We can check that there are no relevant side effects of this change exactly as in cases (c), (e), (f), and (g). First, we ignore bigons crossing \(\mu\) or involving other crossover arrows at intersection points of strictly negative degree, since these will not contribute mod \(W^{b+1}\).
We then observe that the only other possible bigons involving the new crossover arrow would connect a generator in the grouping containing \(x_{i}\) to a generator in the grouping containing \(x_{j}\), and we can check that any affected pairs come after \((i,j)\) with respect to the ordering \(\lessdot\). Thus we have modified \(\boldsymbol{\vartheta}_{N}\) to obtain \(\boldsymbol{\vartheta}_{N+1}\) such that the coefficients \(d^{\boldsymbol{\vartheta}_{N+1}}\) and \(d\) agree for all pairs up to and including \((i,j)\). That is, \(\boldsymbol{\vartheta}_{N+1}\) represents \(C\) correctly to \(N+1\) entries. The only problem is that now the train track \(\boldsymbol{\vartheta}_{N+1}\) does not have the form of a collection of immersed curves along with left-turn crossover arrows. To resolve this problem we will slide the new crossover arrow, much as we slid arrows in the proof of Proposition 7.1, until either it reaches a negative degree intersection point, at which point it becomes a left-turn arrow, or it can be removed. The argument is in fact much easier than the arrow sliding argument used to prove Proposition 7.1, because we will not resolve any crossings, so the immersed curves do not change, and because most new composition arrows formed by sliding one arrow past another can be immediately ignored. Initially, the arrow is just to the right of \(\mu\) and connects two horizontal segments; let \(x\) denote the generator corresponding to the segment at the tail of the crossover arrow and \(y\) denote the generator corresponding to the segment at the head (the pair \((x,y)\) is either \((x_{i},x_{j^{\prime}})\) or \((x_{i^{\prime}},x_{j})\), depending on the case). If \(A(y)\) is strictly greater than \(A(x)\) then the crossover arrow can be removed by performing a change of basis replacing \(x\) with \(x\pm U^{A(y)-A(x)}wy\), where \(w\) is the weight on the crossover arrow and the sign depends on the orientation on \(\Gamma\). Removing the arrow may modify some of the coefficients of \(\partial^{\boldsymbol{\vartheta}_{N+1}}\) which we have already arranged to agree with the corresponding coefficients of \(\partial\), but if we change our chosen basis for \(C\) as above then the coefficients of \(\partial\) will change as well, and Proposition 7.4 ensures that the changes to \(\partial^{\boldsymbol{\vartheta}_{N+1}}\) and \(\partial\) are the same (modulo \(W^{b+1}\)). Thus, after removing the arrow and changing the basis, it is still the case that \(d^{\boldsymbol{\vartheta}_{N+1}}\) and \(d\) agree for all pairs up to and including \((i,j)\). If \(A(y)=A(x)\), then we can slide the crossover arrow to the other side of \(\mu\). Once again, this may change some coefficients of \(\partial^{\boldsymbol{\vartheta}_{N+1}}\), but by Proposition 7.4 the change is exactly matched in \(\partial\) by replacing the basis element \(x\) with \(x\pm wy\), where \(w\) is the weight on the crossover arrow and the sign depends on the orientation on \(\Gamma\). Once the arrow passes through \(\mu\), one of four things can happen: (1) the segments connected by the arrow cross; (2) without crossing, the segments turn opposite directions (i.e., the segment from \(y\) returns to \(\mu\) above \(y\) while the segment from \(x\) returns to \(\mu\) below \(x\)) or one or both segments do not return to \(\mu\); (3) the segments turn the same direction but return to \(\mu\) at different heights; or (4) the segments return to \(\mu\) at the same height. Examples of these four cases are shown in Figure 42.
In the first case, we can slide the crossover arrow until it reaches the crossing and observe that it is now a left-turn crossover arrow (in particular, this intersection point has negative degree \(-2b\)). In the second case, it is easy to see that the arrow can be removed without affecting the differential \(\partial^{\boldsymbol{\vartheta}_{N+1}}\) modulo \(W^{b+1}\); since there is a path from the left side of the crossover arrow to \(\partial\mathcal{S}\) disjoint from the two arcs connected by the arrow, any bigon involving the crossover arrow must either involve another crossover arrow at a negative degree intersection point or it must cross to the other side of \(\mu\) and back, either of which would contribute at least one additional factor of \(W\). In the third case, we slide the crossover arrow to the other end of the two segments so that it points downward just to the left of \(\mu\), connecting the other endpoints of the segments beginning at \(x\) and \(y\) respectively. Applying Proposition 7.4 again, we can remove this arrow if we change the basis for \(C\). Finally, in the fourth case we again slide the arrow so that it points downward from \(x^{\prime}\) to \(y^{\prime}\), but then we slide the arrow across \(\mu\) while changing the basis as dictated by Proposition 7.4. We now have (a rotated version of) the same four cases to consider, and we repeat the argument. Clearly the arrow will eventually be removed or stop at a crossing unless the curves connected by the arrow are completely parallel. However, we have assumed that \(\Gamma\) is in almost simple position and no two curves bound an immersed annulus, so this is not possible. We have now corrected the coefficient \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\), though to do so we were forced to add a crossover arrow which is not a left turn crossover arrow. We have also shown that, at the expense of choosing a new basis, we can always remove this arrow or replace it with a left turn crossover arrow. The only thing remaining to check is that we can also deal with any new arrows that might be introduced while sliding the arrow as described above. Recall from Section 7.1 that if the head of this crossover arrow slides past the tail of another crossover arrow, or vice versa, we need to add a new crossover arrow which is the composition of the two. However, the weight of the new arrow will be the product of the weights of the two arrows; since the arrow we are sliding has weight \(cW^{b}\) and at this stage we only need to preserve \(C(\boldsymbol{\vartheta}_{N+1})\) modulo \(W^{b+1}\), we may immediately ignore any such compositions unless the arrow being passed is at a degree zero intersection point. By assumption, such arrows only occur in the crossing region of some non-primitive curve. If the crossover arrow we wish to remove connects segments \(s_{1}\) and \(s_{2}\), then the new arrows introduced will be the same except that they will connect a segment in the same grouping as \(s_{1}\) to a segment in the same grouping as \(s_{2}\). We can slide all of these arrows together as a group, and the segments they connect will always behave the same. Thus when we can remove the original arrow or move it to an intersection point, the same is true of all of the other arrows. 

**Cases (h) and (k):** These final two cases are similar to cases (b), (i), (o), (p), (m), and (r).
Like in those cases, we add a crossover arrow with appropriate weight to the right of \(\mu\) from \(x_{i^{\prime}}\) to \(x_{j}\) in case (h) or from \(x_{i}\) to \(x_{j^{\prime}}\) in case (k); this modifies the coefficient \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\) to make it agree with \(d_{i,j}\). The new crossover arrow is not at an intersection point, but we can slide it just as in the previous cases until it reaches an intersection point as a left-turn crossover arrow or it can be removed. The only difference is that in these cases, adding the crossover arrow has the additional effect of changing \(d^{\boldsymbol{\vartheta}_{N}}_{i^{\prime},j^{\prime}}\), as it also creates a new bigon from \(x_{i^{\prime}}\) to \(x_{j^{\prime}}\). We proceed as in cases \((a)\) and \((d)\). If \((i,j)\lessdot(i^{\prime},j^{\prime})\), then we can ignore any side effect bigons connecting \(x_{i^{\prime}}\) to \(x_{j^{\prime}}\). If on the other hand \((i^{\prime},j^{\prime})\lessdot(i,j)\), then we argue that \(d^{\boldsymbol{\vartheta}_{N}}_{i,j}\) must already agree with \(d_{i,j}\) without adding the crossover arrow. By assumption, \(d^{\boldsymbol{\vartheta}_{N}}_{i^{\prime},j^{\prime}}\) agrees with \(d_{i^{\prime},j^{\prime}}\). In case \((h)\), we consider the coefficient of \(x_{j^{\prime}}\) in \((\partial^{\boldsymbol{\vartheta}_{N}})^{2}(x_{i})\) and proceed exactly as in case \((a)\), while in case \((k)\) we consider the coefficient of \(x_{j}\) in \((\partial^{\boldsymbol{\vartheta}_{N}})^{2}(x_{i^{\prime}})\) and proceed as in case \((d)\).

We can now prove the existence part of Theorem 1.2 for complexes without flip maps. By Proposition 7.1, there exists a decorated curve \((\Gamma,\widehat{\mathbf{b}})\) in \(\mathcal{S}\) that represents \(C\) over \(\widehat{\mathcal{R}}\) and for which \(\widehat{\mathbf{b}}\) is of local system type. By perturbing \(\Gamma\), we may assume that it is in almost simple position. Defining \(\mathbf{b}_{0}\) to be \(\widehat{\mathbf{b}}\) and inducting using Proposition 9.1, we find \(\mathbf{b}=\mathbf{b}_{n^{2}}\) such that \((\Gamma,\mathbf{b})\) represents \(C\) over \(\mathcal{R}^{-}\).

### Enhanced curves in \(\mathcal{Z}\)

Let \(C\) be a bigraded complex over \(\mathcal{R}^{-}\) and let \(\Psi_{*}:H_{*}C^{h}\to H_{*}C^{v}\) be a flip isomorphism. To represent this data with a decorated curve in the marked cylinder \(\mathcal{Z}\), we first represent the corresponding \(UV=0\) complex \(\widehat{C}\) and flip isomorphism \(\widehat{\Psi}_{*}\) as in Section 8. Recall that we construct the decorated curve \((\Gamma,\widehat{\mathbf{b}})\) in \(\mathcal{Z}\) by gluing together a curve in \(\mathcal{S}\) representing \(C\) over \(\widehat{\mathcal{R}}\) and a curve in \(\mathcal{F}\) representing \(\widehat{\Psi}_{*}\). If necessary we perform the arrow sliding algorithm to ensure that the decoration takes the form of a bounding chain \(\widehat{\mathbf{b}}\) consisting of only local system intersection points. We take note of any basis changes required during the arrow sliding process, even if they do not affect the complex over \(\widehat{\mathcal{R}}\).
Once we have a nice representative for the \(UV=0\) data in \(\mathcal{Z}\), we apply a homotopy so that the curve restricted to the marked strip \(\mathcal{S}\) is in almost simple position; this entails pushing the curve up or down at each intersection with the boundaries of the strips to ensure these intersections are ordered correctly, as well as applying homotopies to remove any immersed annuli within \(\mathcal{S}\). We can now enhance the curves within \(\mathcal{S}\), as in Section 9.2, so that the restriction to \(\mathcal{S}\) represents the complex \(C\) over \(\mathcal{R}^{-}\).

Figure 41. Crossover arrows added to modify \(d^{\boldsymbol{\vartheta}}_{i,j}\); each crossover arrow is weighted by a multiple of \(W^{b}\) and adding it introduces a new bigon from \(x_{i}\) to \(x_{j}\).

Figure 42. Four possibilities after sliding a crossover arrow (to its left) across \(\mu\). We can either slide the arrow to a crossing where it becomes a left turn arrow and can be incorporated into \(\mathbf{b}\), remove the arrow (possibly after a change of basis), or slide the arrow between parallel segments until it crosses \(\mu\) again.

Because of the ordering of the intersection points with the boundaries of \(\mathcal{S}\), the decorated curve in the whole cylinder \(\mathcal{Z}\) also represents the complex \(C\); that is, all bigons in \(\mathcal{Z}\) contributing to the Floer homology of the curve with \(\mu\) are contained in \(\mathcal{S}\). Importantly, we build the enhanced curves in \(\mathcal{S}\) starting from the basis for \(C\) obtained at the end of the arrow sliding algorithm in the construction of the \(UV=0\) curves, which may be different from the original basis. We may perform further basis changes while enhancing the decorated curve in \(\mathcal{S}\) to represent \(C\) over \(\mathcal{R}^{-}\), but these basis changes have no effect modulo \(UV\). It follows that the decorated arcs in the \(\mathcal{F}\) portion of \(\mathcal{Z}\) still correctly represent the simplified flip isomorphism \(\widehat{\Psi}_{*}\). We now enhance these decorated arcs in \(\mathcal{F}\) to capture any missing information from \(\Psi_{*}\). Let \(\{x_{1},\ldots,x_{m}\}\) denote the intersections of \(\Gamma\) with \(\partial_{R}\mathcal{S}_{i}=\partial_{L}\mathcal{F}_{i}\); we identify these with a subset of the (new) basis for \(C\) which also forms a basis for the horizontal homology. Similarly, we let \(\{y_{1},\ldots,y_{m}\}\) denote the intersections of \(\Gamma\) with \(\partial_{L}\mathcal{S}_{i+1}=\partial_{R}\mathcal{F}_{i}\) and also the corresponding basis of the vertical homology of \(C\). With respect to this basis, the flip isomorphism takes the form \[\Psi_{*}(x_{i})=\sum_{j=1}^{m}c_{i,j}U^{\frac{\operatorname{gr}_{w}(y_{j})-\operatorname{gr}_{z}(x_{i})}{2}}V^{\frac{\operatorname{gr}_{z}(y_{j})-\operatorname{gr}_{w}(x_{i})}{2}}y_{j},\] where the coefficient \(c_{i,j}\) is zero if \(\operatorname{gr}_{w}(y_{j})-\operatorname{gr}_{z}(x_{i})\) is odd or negative. Restricting to terms for which the power of \(U\) is zero gives the simplified flip map \(\widehat{\Psi}_{*}\). Recall that \(\widehat{\Psi}_{*}\) restricts to an isomorphism at each grading level, using the grading \(\operatorname{gr}_{z}\) on the source and \(\operatorname{gr}_{w}\) on the target, and that the decorated curve in \(\mathcal{F}\) already contains a bundle of segments with a collection of turning points representing this isomorphism for each grading.
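As an illustration of this formula, with bigradings invented purely for the example: if \(\operatorname{gr}_{w}(x_{i})=\operatorname{gr}_{z}(x_{i})=0\) while \(\operatorname{gr}_{w}(y_{j})=2\) and \(\operatorname{gr}_{z}(y_{j})=0\), then the corresponding term of \(\Psi_{*}(x_{i})\) is \[c_{i,j}U^{\frac{2-0}{2}}V^{\frac{0-0}{2}}y_{j}=c_{i,j}Uy_{j}.\] Since the power of \(U\) is positive, this term does not appear in \(\widehat{\Psi}_{*}\); as described below, it is instead recorded by adding to the bounding chain an intersection point of degree \(\operatorname{gr}_{z}(x_{i})-\operatorname{gr}_{w}(y_{j})=-2\), weighted by a multiple of \(W\).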
Because the intersections of \(\Gamma\) on \(\partial_{L}\mathcal{F}=\partial_{R}\mathcal{S}\) are ordered so that \(\operatorname{gr}_{z}\) is non-increasing moving upward and the intersections of \(\Gamma\) with \(\partial_{R}\mathcal{F}=\partial_{L}\mathcal{S}\) are ordered so that \(\operatorname{gr}_{w}\) is non-decreasing moving upward, every bundle of arcs crosses every other bundle (see Figure 43). For any term of \(\Psi_{*}\) not included in \(\widehat{\Psi}_{*}\), the power of \(U\) is positive and so \(\operatorname{gr}_{w}(y_{j})>\operatorname{gr}_{z}(x_{i})\). In this case, we can add the intersection between the segment starting at \(x_{i}\) and the segment ending at \(y_{j}\) to the bounding chain, which introduces a polygonal path across \(\mathcal{F}\) from \(x_{i}\) to \(y_{j}\). This intersection point has degree \(\operatorname{gr}_{z}(x_{i})-\operatorname{gr}_{w}(y_{j})<0\), so adding it to \(\mathbf{b}\) corresponds to adding a left turn crossover arrow weighted by an appropriate multiple of \(W^{a}\), where \(a\) is the power of \(U\) in the relevant term of \(\Psi_{*}\). It is straightforward to add intersection points to \(\mathbf{b}\) in this way so that the decorated arcs across \(\mathcal{F}\) represent the map \(\Psi_{*}\). When this is accomplished, the decorated curves in \(\mathcal{Z}\) represent \(C\) and \(\Psi_{*}\) over \(\mathcal{R}^{-}\).

Figure 43. The \(\mathcal{F}\) portion of an immersed curve in \(\mathcal{Z}\) that has been perturbed so that the restriction to \(\mathcal{S}\) is in almost simple position. The bundles of strands with the same grading determine the map \(\widehat{\Psi}_{*}\). We can represent \(\Psi_{*}\) by adding turning points at the intersections between bundles oriented the same direction, weighted by a power of \(W\) determined by the difference in grading between the bundles as indicated.

The final step is to apply a homotopy to the curves so that they are in almost simple position as curves in \(\mathcal{Z}\) (note that as of now the restriction of the curves to \(\mathcal{S}\) is in almost simple position, but there may be unnecessary intersection points when the arcs in \(\mathcal{S}\) are glued with arcs in \(\mathcal{F}\) to form closed curves). When performing this homotopy, we take care to apply the local moves in Figure 8; in particular, homotopies that add or remove intersection points require corresponding modifications to the bounding chain. As long as these rules are observed, the resulting decorated curves still represent the complex \(C\) over \(\mathcal{R}^{-}\) and the flip isomorphism \(\Psi_{*}\). Moreover, the restriction of the bounding chain to degree zero intersection points is of local system type, since this was true of the bounding chain from the \(UV=0\) construction and later steps only add intersection points with strictly negative degree.

**Remark 9.2**.: When constructing enhanced curves in \(\mathcal{Z}\) representing \(C\) and \(\Psi_{*}\) over \(\mathcal{R}^{-}\), we first construct curves in \(\mathcal{Z}\) representing both the complex and the flip isomorphism over \(\widehat{\mathcal{R}}\) before adding minus information in \(\mathcal{S}\) and then in \(\mathcal{F}\) to get a representative over \(\mathcal{R}^{-}\). Another approach would be to first construct enhanced curves in \(\mathcal{S}\) representing \(C\) over \(\mathcal{R}^{-}\), and only then pass to the cylinder by adding decorated arcs in \(\mathcal{F}\) representing the flip isomorphism \(\Psi_{*}\).
The problem with this approach is that representing \(\Psi_{*}\) may then require adding crossover arrows at degree zero intersection points in \(\mathcal{F}\) that are not local system intersection points. To get a decorated curve of the desired form, we would need to slide these arrows to remove them. In practice it is usually clear how to do this, but there are significant difficulties in defining general arrow sliding rules in the minus setting. For this reason we want to do essentially all arrow sliding while working over \(\widehat{\mathcal{R}}\), and then observe that enhancing the curves does not introduce any crossover arrows at degree zero intersection points. We do use limited arrow sliding in the process of enhancing curves in \(\mathcal{S}\); this is mainly possible because the arrows being moved are weighted by \(W^{b}\) for some \(b\) and it is enough to work over \(\mathcal{R}_{b+1}\) at the time we slide the arrow.

**Example 9.3**.: Consider the complex \(C\) and flip isomorphism \(\Psi_{*}\) from Example 2.6 associated with \(+1\)-surgery on the left-handed trefoil. In Example 8.4 we constructed a curve \(\Gamma\) in \(\mathcal{Z}\) (with trivial bounding chain) representing this data over \(\widehat{\mathcal{R}}\). Inspection reveals that the same curve also represents \(C\) and \(\Psi_{*}\) over \(\mathcal{R}^{-}\); nevertheless, it is instructive to step through the process of enhancing curves more carefully in this case. We begin with the curve \(\Gamma_{\mathcal{S}}\) in \(\mathcal{S}\) representing \(C\) over \(\widehat{\mathcal{R}}\), with respect to the basis \(\{a,b,c,d,e\}\), which appears in Figure 35(a). Because \(C\) has no diagonal arrows, it happens that this curve also represents \(C\) over \(\mathcal{R}^{-}\); however, this is not the enhanced curve we wish to use in \(\mathcal{S}\). Instead, we should first add the flip map information and simplify the curve as a representative over \(\widehat{\mathcal{R}}\), as in Example 8.4. Recall that in this process we remove some arrows, which corresponds to changing the basis of \(C\), with the new basis given by \[a^{\prime}=-a,\quad b^{\prime}=b,\quad c^{\prime}=-c+Ua-Ve,\quad d^{\prime}=d,\quad\text{ and }\quad e^{\prime}=e.\] With respect to this basis, the differential on \(C\) is \[\partial(a^{\prime})=-Vb^{\prime},\quad\partial(b^{\prime})=0,\quad\partial(c^{\prime})=UVb^{\prime}-UVd^{\prime},\quad\partial(d^{\prime})=0,\quad\text{ and }\quad\partial(e^{\prime})=Ud^{\prime}.\] Since there are now diagonal arrows, enhancing the curves in \(\mathcal{S}\) to represent \(C\) over \(\mathcal{R}^{-}\) with respect to this basis requires adding a nontrivial bounding chain. In particular, putting the curve from Figure 35(a) in almost simple position introduces two intersections, and we include both of these intersection points in \(\mathbf{b}_{\mathcal{S}}\) with appropriate weights to recover the two diagonal arrows in \(C\), as shown in Figure 44(a). We now add a collection of arcs \(\Gamma_{\mathcal{F}}\) in \(\mathcal{F}\) representing \(\Psi_{*}\), which with respect to the new basis takes \(a^{\prime}\) to \(V^{-1}c\), \(b^{\prime}\) to \(d^{\prime}\), and \(c^{\prime}\) to \(Ue^{\prime}\). Since \(\Psi_{*}\) is simple with respect to this basis (it is a permutation of the generators) the bounding chain \(\mathbf{b}_{\mathcal{F}}\) in \(\mathcal{F}\) is trivial. Gluing the curves in \(\mathcal{S}\) and \(\mathcal{F}\) produces the decorated immersed curve shown in Figure 44(b).
Finally, we homotope the curve to have minimal self-intersection, noting that this is possible by move \((j)\) in Figure 8. The resulting curve in \(\mathcal{Z}\) is shown in Figure 44(c), which happens to be the same as the curve constructed to represent this data modulo \(UV\).

## 10. Morphisms and mapping cones

In the previous three sections we constructed several versions of decorated immersed curves associated to bigraded complexes and flip maps. We will now identify the Floer homology of two such curves with algebraic operations. In particular, we relate Floer homology of curves in the marked strip to morphism spaces between two complexes, and we relate Floer homology of curves in the marked cylinder to mapping cones of certain maps defined using the flip maps. Using these observations, we can address the question of whether the immersed curves we have constructed are unique.

### Floer homology in \(\mathcal{S}\) as morphism spaces

Given two bigraded complexes \(C_{1}\) and \(C_{2}\) over \(\mathcal{R}^{-}\), the space of morphisms from \(C_{1}\) to \(C_{2}\) is also a complex over \(\mathcal{R}^{-}\), which we denote \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})\). Fixing homogeneous bases \(\{x_{1},\dots,x_{n}\}\) for \(C_{1}\) and \(\{y_{1},\dots,y_{m}\}\) for \(C_{2}\), let \((x_{i}\!\to\!y_{j})\) denote the morphism \(f:C_{1}\to C_{2}\) with \(f(x_{i})=y_{j}\) and \(f(x_{k})=0\) for \(k\neq i\). The ring \(\mathcal{R}^{-}\) acts on morphisms in the obvious way: for \(cU^{a}V^{b}\in\mathcal{R}^{-}\), the morphism \(cU^{a}V^{b}(x_{i}\!\to\!y_{j})\) takes \(x_{i}\) to \(cU^{a}V^{b}y_{j}\) and takes \(x_{k}\) to \(0\) for \(k\neq i\). As a module over \(\mathcal{R}^{-}\), \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})\) is generated by the morphisms \((x_{i}\!\to\!y_{j})\) for \(1\leq i\leq n\) and \(1\leq j\leq m\). The bigrading \((\operatorname{gr}_{w},\operatorname{gr}_{z})\) on \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})\) records the change in bigrading under a given morphism; in particular, \(\operatorname{gr}_{w}((x_{i}\!\to\!y_{j}))=\operatorname{gr}_{w}(y_{j})-\operatorname{gr}_{w}(x_{i})\) and \(\operatorname{gr}_{z}((x_{i}\!\to\!y_{j}))=\operatorname{gr}_{z}(y_{j})-\operatorname{gr}_{z}(x_{i})\). Similarly, the Alexander grading \(A=\frac{1}{2}(\operatorname{gr}_{w}-\operatorname{gr}_{z})\) on \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})\) is given by the change in Alexander grading under a given morphism. As usual, multiplication by \(U\) and \(V\) shifts the bigrading by \((-2,0)\) and \((0,-2)\), respectively. The differential on \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})\) is defined by \[(\partial f)(x)=\partial_{2}(f(x))-(-1)^{\operatorname{gr}_{w}(f)}f(\partial_{1}x).\] Note that since the differential preserves the Alexander grading on \(C_{1}\) and \(C_{2}\), the same is true on \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})\); we let \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\) denote the direct summand of \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})\) in Alexander grading \(s\) for each \(s\) in \(\mathbb{Z}\). \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\) is a module over \(\mathbb{F}[W]\) generated by \(U^{A(y_{j})-A(x_{i})-s}(x_{i}\!\to\!y_{j})\) for \(1\leq i\leq n\) and \(1\leq j\leq m\) with \(A(y_{j})-A(x_{i})\geq s\) and \(V^{s+A(x_{i})-A(y_{j})}(x_{i}\!\to\!y_{j})\) for \(1\leq i\leq n\) and \(1\leq j\leq m\) with \(A(y_{j})-A(x_{i})<s\).
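For a toy illustration of these conventions, with generators and Alexander gradings invented purely for the example: suppose \(x_{1}\in C_{1}\) has \(A(x_{1})=0\) and \(y_{1}\in C_{2}\) has \(A(y_{1})=2\). For \(s=1\), since \(A(y_{1})-A(x_{1})=2\geq 1\), the corresponding \(\mathbb{F}[W]\)-module generator of \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{1}\) is \[U^{2-0-1}(x_{1}\!\to\!y_{1})=U(x_{1}\!\to\!y_{1}),\] while for \(s=3\), since \(A(y_{1})-A(x_{1})=2<3\), it is \(V^{3+0-2}(x_{1}\!\to\!y_{1})=V(x_{1}\!\to\!y_{1})\); in each case multiplying by \(U\) or \(V\) shifts the Alexander grading of \((x_{1}\!\to\!y_{1})\) from \(2\) to the required value \(s\).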
A similar construction holds in the \(UV=0\) setting: letting \(\widehat{C}_{1}\) and \(\widehat{C}_{2}\) denote the \(UV=0\) quotient of \(C_{1}\) and \(C_{2}\), respectively, \(\operatorname{Mor}_{\widehat{\mathcal{R}}}(\widehat{C}_{1},\widehat{C}_{2})\) is a bigraded complex over \(\widehat{\mathcal{R}}\). In particular, \(\operatorname{Mor}_{\widehat{\mathcal{R}}}(\widehat{C}_{1},\widehat{C}_{2})\) is the \(UV=0\) quotient of \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})\). For each \(s\), \(\operatorname{Mor}_{\widehat{\mathcal{R}}}(\widehat{C}_{1},\widehat{C}_{2})_{s}\) is a vector space over \(\mathbb{F}\). Our aim is to relate the homology of the morphism complexes described above to an appropriate version of Floer homology of immersed curves. For \(i\in\{1,2\}\), let \((\Gamma_{i},\mathbf{b}_{i})\) be the curves with bounding chains in the marked strip \(\mathcal{S}\) representing \(C_{i}\), and let \(\boldsymbol{\vartheta}_{i}\) denote the immersed train track in \(\mathcal{S}\) corresponding to this decorated curve. For each integer \(s\), let \(\boldsymbol{\vartheta}_{i}[s]\) denote the result of shifting \(\boldsymbol{\vartheta}_{i}\) upward by \(s\) units. We will relate \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})\) to the _wrapped Floer homology_ \(HF(p(\boldsymbol{\vartheta}_{2}),p(\boldsymbol{\vartheta}_{1}))\), where \(p\) is the projection map from \(\mathcal{S}\) to \(\mathcal{T}=\mathcal{S}/(x,y)\sim(x,y+1)\).

Figure 44. (a) An immersed curve in \(\mathcal{S}\) representing the complex in Example 9.3 with respect to the relevant basis; (b) the immersed curve in \(\mathcal{Z}\) representing the complex and flip map in the example; (c) the curves pulled tight and lifted to the plane.

By wrapped Floer homology we mean the Floer homology after we modify \(p(\boldsymbol{\vartheta}_{1})\) in a neighborhood of \(\partial\mathcal{T}\) so that it spirals around the boundary (following the boundary orientation) infinitely many times as it approaches the boundary. The wrapped Floer homology \(HF(p(\boldsymbol{\vartheta}_{2}),p(\boldsymbol{\vartheta}_{1}))\) in \(\mathcal{T}\) can be understood by considering the lifts in \(\mathcal{S}\), with each Alexander grading summand coming from a different lift of \(p(\boldsymbol{\vartheta}_{1})\). In particular, the Alexander grading \(s\) piece of \(HF(p(\boldsymbol{\vartheta}_{2}),p(\boldsymbol{\vartheta}_{1}))\) can be identified with the wrapped Floer homology \(HF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1}[s])\) in \(\mathcal{S}\), which is defined to be the Floer homology after the endpoints of \(\boldsymbol{\vartheta}_{1}[s]\) have been pushed upward on \(\partial_{R}\mathcal{S}\) and downwards on \(\partial_{L}\mathcal{S}\) past all endpoints of \(\boldsymbol{\vartheta}_{2}\).

**Proposition 10.1**.: _For \(i\in\{1,2\}\), let \(\boldsymbol{\vartheta}_{i}=(\Gamma_{i},\mathbf{b}_{i})\) be an immersed multicurve with bounding chain in \(\mathcal{S}\) representing a bigraded complex \(C_{i}\)._
_For every \(s\in\mathbb{Z}\), there is an isomorphism of graded \(\mathbb{F}[W]\)-modules_ \[H_{*}(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s})\cong HW(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1}[s]),\] _where the right hand side is wrapped Floer homology in the marked strip \(\mathcal{S}\)._

Proof.: We will homotope the immersed curves in \(\boldsymbol{\vartheta}_{1}[s]\) and \(\boldsymbol{\vartheta}_{2}\) into a particular form so that their wrapped Floer chain complex \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1}[s])\) is isomorphic to \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\) as a complex over \(\mathbb{F}[W]\). We first homotope both curves so that their intersection with the strip \([-\frac{1}{4},\frac{1}{4}]\times\mathbb{R}\) is a collection of horizontal segments; there is one horizontal segment for each intersection with \(\mu\), and these correspond to generators of the respective complexes. Note that the segment corresponding to \(y_{j}\in C_{2}\) appears at height \(A(y_{j})\) and the segment corresponding to \(x_{i}\in C_{1}\) appears at height \(A(x_{i})+s\). For each generator \(x_{i}\) of \(C_{1}\), let \(x_{i}^{L}\) and \(x_{i}^{R}\) denote the left and right endpoints, respectively, of the horizontal segment in \(\boldsymbol{\vartheta}_{1}[s]\) corresponding to \(x_{i}\); note that \(x_{i}^{L}\) lies on the line \(\mu_{-\frac{1}{4}}=\{-\frac{1}{4}\}\times\mathbb{R}\) and \(x_{i}^{R}\) lies on the line \(\mu_{\frac{1}{4}}=\{\frac{1}{4}\}\times\mathbb{R}\). Similarly, for each generator \(y_{j}\) of \(C_{2}\) let \(y_{j}^{L}\) and \(y_{j}^{R}\) denote the endpoints of the appropriate horizontal segment in \(\boldsymbol{\vartheta}_{2}\). We now choose some heights \(h_{min}\) and \(h_{max}\) so that all horizontal segments in both curve sets fall between these two heights, and we perturb \(\boldsymbol{\vartheta}_{1}[s]\) by sliding the portion in \([\frac{1}{4},\frac{1}{2}]\times\mathbb{R}\) upward until it is entirely above height \(h_{max}\); note that the endpoints \(x_{i}^{R}\) slide upward along \(\mu_{\frac{1}{4}}\) as well and we perturb the horizontal segments in the strip \([-\frac{1}{4},\frac{1}{4}]\times\mathbb{R}\) accordingly. Similarly, we slide the portion of \(\boldsymbol{\vartheta}_{1}[s]\) in \([-\frac{1}{2},-\frac{1}{4}]\times\mathbb{R}\) downward until it is entirely below the height \(h_{min}\) and perturb the horizontal segments accordingly. See Figure 45 for an example. With the curves in the position described above, all intersection points lie in the rectangle \([-\frac{1}{4},\frac{1}{4}]\times[h_{min},h_{max}]\) (the shaded rectangle in Figure 45). Each perturbed horizontal segment in \(\boldsymbol{\vartheta}_{1}[s]\) (which now runs from the bottom edge of the rectangle to the top edge) intersects each horizontal segment of \(\boldsymbol{\vartheta}_{2}\) once, so generators of \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1}[s])\) are in bijection with pairs \((x_{i},y_{j})\) for generators \(x_{i}\) of \(C_{1}\) and \(y_{j}\) of \(C_{2}\); we will refer to intersection points by these ordered pairs throughout the proof. These in turn are in bijection with the generators of \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\), where the pair \((x_{i},y_{j})\) corresponds to the morphism \((x_{i}\to y_{j})\) multiplied by the appropriate power of either \(U\) or \(V\) to give a morphism of Alexander grading \(s\).
The multiplier needed is \(U^{A(y_{j})-A(x_{i})-s}\) if \(A(y_{j})-A(x_{i})\geq s\) or \(V^{s+A(x_{i})-A(y_{j})}\) if \(A(y_{j})-A(x_{i})<s\). There is a convenient graphical interpretation of this multiplier: for each pair \((x_{i},y_{j})\) there is a triangle formed by the perturbed horizontal segment corresponding to \(x_{i}\), the horizontal segment corresponding to \(y_{j}\), and the vertical line \(\mu\), and if we place \(z\) and \(w\) basepoints just to the left and right of each marked point as usual then the multiplier has one \(U\) for each \(w\) basepoint covered by the triangle or one \(V\) for each \(z\) basepoint covered. This identifies the generators of \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1}[s])\) and \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\), and it is a straightforward exercise to check that the bigradings agree. We now identify the differential on \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\) with the Floer differential. The differential on \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})\) comes from combining two contributions: (1) for each generator \(x_{i}\) of \(C_{1}\) and each arrow from \(y_{j}\) to \(cU^{a}V^{b}y_{j^{\prime}}\) in \(C_{2}\), there is an arrow from \((x_{i}\!\rightarrow\!y_{j})\) to \(cU^{a}V^{b}(x_{i}\!\rightarrow\!y_{j^{\prime}})\), and (2) for each generator \(y_{j}\) of \(C_{2}\) and each arrow from \(x_{i}\) to \(cU^{a}V^{b}x_{i^{\prime}}\) in \(C_{1}\), there is an arrow from \((x_{i^{\prime}}\!\rightarrow\!y_{j})\) to \((-1)^{\star}cU^{a}V^{b}(x_{i}\!\rightarrow\!y_{j})\) where \(\star=1+\operatorname{gr}_{w}((x_{i}\!\rightarrow\!y_{j}))\). Consider a contribution of the first type. The arrow from \(y_{j}\) to \(cU^{a}V^{b}y_{j^{\prime}}\) in \(C_{2}\) corresponds to a bigon bounded by \(\boldsymbol{\vartheta}_{2}\) and \(\mu\) from \(y_{j}\) to \(y_{j^{\prime}}\) covering the \(w\) and \(z\) basepoints \(a\) and \(b\) times, respectively. We will first assume that \(y_{j}\) is above \(y_{j^{\prime}}\), so that this bigon begins to the left of \(\mu\); note that in this case \(b=a+A(y_{j})-A(y_{j^{\prime}})\geq a\). Removing the immersed rectangle with corners \(y_{j}\), \(y_{j^{\prime}}\), \(y_{j^{\prime}}^{L}\), and \(y_{j}^{L}\) clearly gives rise to a bigon bounded by \(\boldsymbol{\vartheta}_{2}\) and the line \(x=-\frac{1}{4}\) from \(y_{j}^{L}\) to \(y_{j^{\prime}}^{L}\). This new bigon covers both the \(w\) and \(z\) basepoints \(a\) times, since the rectangle contains \(A(y_{j})-A(y_{j^{\prime}})=b-a\) copies of the \(z\) basepoint. For each generator \(x_{i}\) of \(C_{1}\), the perturbed horizontal segment corresponding to \(x_{i}\) crosses both the horizontal segments corresponding to \(y_{j}\) and \(y_{j^{\prime}}\), forming a rectangle with these segments and the line \(x=-\frac{1}{4}\); this rectangle covers both the \(z\) and \(w\) basepoints \(\max(A(y_{j}),A(x_{i})+s)-\max(A(y_{j^{\prime}}),A(x_{i})+s)\) times. Adding this rectangle to the bigon described above gives a bigon from the intersection point \((x_{i},y_{j})\) to the intersection point \((x_{i},y_{j^{\prime}})\). This bigon covers the marked point \(a\) times if \(A(x_{i})+s\geq A(y_{j})\), \(a+A(y_{j})-A(x_{i})-s\) times if \(A(y_{j})>A(x_{i})+s\geq A(y_{j^{\prime}})\), and \(b\) times if \(A(y_{j^{\prime}})>A(x_{i})+s\).
If \(A(x_{i})+s\geq A(y_{j})\), the contribution to the differential on \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\) is the arrow \[V^{A(x_{i})+s-A(y_{j})}(x_{i}\!\rightarrow\!y_{j})\to V^{A(x_{i})+s-A(y_{j})}[cU^{a}V^{b}(x_{i}\!\rightarrow\!y_{j^{\prime}})]=c(UV)^{a}V^{A(x_{i})+s-A(y_{j^{\prime}})}(x_{i}\!\rightarrow\!y_{j^{\prime}}).\] Since in this case the intersection points \((x_{i},y_{j})\) and \((x_{i},y_{j^{\prime}})\) correspond to the morphisms \(V^{A(x_{i})+s-A(y_{j})}(x_{i}\!\rightarrow\!y_{j})\) and \(V^{A(x_{i})+s-A(y_{j^{\prime}})}(x_{i}\!\rightarrow\!y_{j^{\prime}})\), respectively, this corresponds to an arrow \[(x_{i},y_{j})\to c(UV)^{a}(x_{i},y_{j^{\prime}})\] in the differential of \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1}[s])\), which is precisely the arrow given by the bigon constructed above. If instead \(A(y_{j})>A(x_{i})+s\geq A(y_{j^{\prime}})\), the contribution to the differential on \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\) is the arrow \[U^{A(y_{j})-A(x_{i})-s}(x_{i}\!\rightarrow\!y_{j})\to U^{A(y_{j})-A(x_{i})-s}[cU^{a}V^{b}(x_{i}\!\rightarrow\!y_{j^{\prime}})]=c(UV)^{a+A(y_{j})-A(x_{i})-s}V^{A(x_{i})+s-A(y_{j^{\prime}})}(x_{i}\!\rightarrow\!y_{j^{\prime}}).\]

Figure 45. The curves \(\boldsymbol{\vartheta}_{1}[0]\) (blue) and \(\boldsymbol{\vartheta}_{2}\) (red), where the complexes \(C_{1}\) and \(C_{2}\) are the knot Floer homology of the left handed and right handed trefoil, respectively. The Floer homology of these curves computes the homology of \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{0}\). On the left the curves are in the position described in the proof of Proposition 10.1, and on the right they are in minimal position.

In this case the intersection points \((x_{i},y_{j})\) and \((x_{i},y_{j^{\prime}})\) correspond to the morphisms \(U^{A(y_{j})-A(x_{i})-s}(x_{i}\!\rightarrow\!y_{j})\) and \(V^{A(x_{i})+s-A(y_{j^{\prime}})}(x_{i}\!\rightarrow\!y_{j^{\prime}})\), respectively, and the bigon constructed above contributes the corresponding arrow \[(x_{i},y_{j})\to c(UV)^{a+A(y_{j})-A(x_{i})-s}(x_{i},y_{j^{\prime}})\] to the differential. Finally, if \(A(y_{j^{\prime}})>A(x_{i})+s\) then the contribution to the differential on \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\) is the arrow \[U^{A(y_{j})-A(x_{i})-s}(x_{i}\!\rightarrow\!y_{j})\to U^{A(y_{j})-A(x_{i})-s}[cU^{a}V^{b}(x_{i}\!\rightarrow\!y_{j^{\prime}})]=c(UV)^{b}U^{A(y_{j^{\prime}})-A(x_{i})-s}(x_{i}\!\rightarrow\!y_{j^{\prime}}).\] In this case the intersection points correspond to the morphisms \(U^{A(y_{j})-A(x_{i})-s}(x_{i}\!\rightarrow\!y_{j})\) and \(U^{A(y_{j^{\prime}})-A(x_{i})-s}(x_{i}\!\rightarrow\!y_{j^{\prime}})\), respectively, and the bigon constructed above contributes the corresponding arrow \[(x_{i},y_{j})\to c(UV)^{b}(x_{i},y_{j^{\prime}}).\] The case that \(y_{j}\) is below \(y_{j^{\prime}}\) is similar, except that the bigon representing the arrow from \(y_{j}\) to \(cU^{a}V^{b}y_{j^{\prime}}\) lies to the right of \(\mu\) near its \(\mu\) boundary. Removing an appropriate rectangle gives a bigon bounded by \(\boldsymbol{\vartheta}_{2}\) and the line \(x=\frac{1}{4}\) from \(y_{j}^{R}\) to \(y_{j^{\prime}}^{R}\), which covers the marked point \(b\) times; adding back on a different rectangle gives a bigon connecting \((x_{i},y_{j})\) to \((x_{i},y_{j^{\prime}})\).
To count the multiplicity with which this bigon covers the marked point, we can again consider cases depending on whether \(A(x_{i})+s\) is above \(A(y_{j^{\prime}})\), below \(A(y_{j})\), or in between them. In each case it is straightforward to check that the contribution of the bigon to the differential of \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1}[s])\) exactly matches the contribution to the differential of \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\). Contributions to the differential of \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\) of the second type, coming from a generator \(y_{j}\) of \(C_{2}\) and an arrow from \(x_{i}\) to \(cU^{a}V^{b}x_{i^{\prime}}\) in \(C_{1}\), can be dealt with similarly. We will only describe the case that \(x_{i}\) is above \(x_{i^{\prime}}\) as generators of \(C_{1}\) and that \(A(y_{j})\leq A(x_{i^{\prime}})+s\), leaving the remaining (similar) cases to the reader. Note that \(b=a+A(x_{i})-A(x_{i^{\prime}})\geq a\). The arrow from \(x_{i}\) to \(cU^{a}V^{b}x_{i^{\prime}}\) corresponds to a bigon between \(\boldsymbol{\vartheta}_{1}[s]\) and \(\mu\) which lies locally to the left of \(\mu\) near its \(\mu\) boundary. Removing an appropriate rectangle gives a bigon bounded by \(\boldsymbol{\vartheta}_{1}[s]\) and the line \(x=-\frac{1}{4}\) from \(x_{i}^{L}\) to \(x_{i^{\prime}}^{L}\), which covers the marked point \(a\) times. Given a generator \(y_{j}\), we can add a rectangle to produce a bigon connecting \((x_{i},y_{j})\) and \((x_{i^{\prime}},y_{j})\). Note that in \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1}[s])\) we count bigons with the right boundary on \(\boldsymbol{\vartheta}_{2}\), so this bigon connects \((x_{i^{\prime}},y_{j})\) to \((x_{i},y_{j})\). Since we have assumed \(A(y_{j})\leq A(x_{i^{\prime}})+s\), the rectangle we added contained no marked points and so this bigon contributes \[(x_{i^{\prime}},y_{j})\rightarrow(-1)^{\star}(UV)^{a}(x_{i},y_{j})\] to the differential. The sign term comes from the fact that the sign of the bigon depends on the orientation of the \(\boldsymbol{\vartheta}_{2}\) portion of the boundary while the sign of the arrow in \(C_{1}\) depends on the orientation of the \(\boldsymbol{\vartheta}_{1}[s]\) portion of the boundary, and these agree precisely when the intersection point \((x_{i},y_{j})\) has odd grading. Since \((x_{i},y_{j})\) and \((x_{i^{\prime}},y_{j})\) correspond to the morphisms \(V^{A(x_{i})+s-A(y_{j})}(x_{i}\!\rightarrow\!y_{j})\) and \(V^{A(x_{i^{\prime}})+s-A(y_{j})}(x_{i^{\prime}}\!\rightarrow\!y_{j})\), respectively, this matches the contribution of the arrow \[V^{A(x_{i^{\prime}})+s-A(y_{j})}(x_{i^{\prime}}\!\rightarrow\!y_{j})\to V^{A(x_{i^{\prime}})+s-A(y_{j})}[(-1)^{\star}cU^{a}V^{b}(x_{i}\!\rightarrow\!y_{j})]\] to the differential on \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})\). We have shown that for each term in the differential on \(\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})_{s}\), there is a bigon producing the analogous term in the differential of \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1}[s])\); it only remains to show that there are no other bigons contributing to the differential of \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1}[s])\). Suppose there is a bigon \(B\) whose initial corner is the intersection point \((x_{i},y_{j})\).
Starting from this initial point, we will follow the \(\boldsymbol{\vartheta}_{2}\) part of \(\partial B\), which initially follows the horizontal segment corresponding to \(y_{j}\). Suppose first that we are moving leftward along this horizontal segment. One of two things can happen: the \(\boldsymbol{\vartheta}_{2}\) part of \(\partial B\) can leave the strip or it can reach the final corner of the bigon within the strip. If it leaves the strip then it will return to the strip again without interacting with \(\boldsymbol{\vartheta}_{1}[s]\) and then cross the perturbed segment of \(\boldsymbol{\vartheta}_{1}[s]\) corresponding to \(x_{i}\), which necessarily completes a bigon of the first type described above, and if it reaches the terminal corner of \(B\) without leaving the strip then \(B\) is clearly of the second type described above.

**Remark 10.2**.: A version of Proposition 10.1 in the \(UV=0\) setting follows from the general pairing theorems for immersed curves representing type D structures, in particular Theorem 1.5 of [12], and the proof here is similar even when extending to the minus setting. In particular, this pairing is considered for type D structures over \(\widehat{\mathcal{R}}\) in Theorem 3 of [12], where the knot Floer complex associated with a connected sum, which comes from the morphism space of two complexes, is identified with the wrapped Floer homology in the doubly marked disk of corresponding immersed curves. Recall that, as mentioned in Section 1.3, the doubly punctured disk is analogous to the punctured cylinder (by switching punctures with boundary components), and the punctured infinite strip \(\mathcal{S}^{*}\) is a covering space of this.

### Floer homology in \(\mathcal{Z}\) as a mapping cone; an unshifted pairing

Consider two bigraded complexes \(C_{1}\) and \(C_{2}\) equipped with flip isomorphisms \(\Psi_{1,*}\) and \(\Psi_{2,*}\). Each set of data can be represented by a decorated immersed multicurve in the cylinder \(\mathcal{Z}\), and we can consider the Floer homology of these two curves. We will now define an algebraic pairing on the two sets of data and show that this agrees with the Floer homology of the corresponding curves. To define the algebraic pairing, we consider several maps between morphism spaces. In particular, consider the \(\mathbb{F}[W]\)-complexes \[A_{s}=\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})|_{A=s},\] \[B^{v}=\operatorname{Mor}_{\mathbb{F}[W]}(H_{*}C_{1}^{v},H_{*}C_{2}^{v}),\text{ and}\] \[B^{h}=\operatorname{Mor}_{\mathbb{F}[W]}(H_{*}C_{1}^{h},H_{*}C_{2}^{h}).\] Just as setting \(V=1\) and \(U=W\) gives inclusion maps \(C|_{A=s}\hookrightarrow C^{v}\), we also get inclusions \(A_{s}\hookrightarrow\operatorname{Mor}_{\mathbb{F}[W]}(C_{1}^{v},C_{2}^{v})\). Recall that the generators of \(A_{s}=\operatorname{Mor}_{\mathcal{R}^{-}}(C_{1},C_{2})|_{A=s}\) as an \(\mathbb{F}[W]\)-module are \((x_{i}\to y_{j})\) for generators \(x_{i}\) of \(C_{1}\) and \(y_{j}\) of \(C_{2}\) multiplied by either \(U^{A(y_{j})-A(x_{i})-s}\) if \(A(y_{j})-A(x_{i})\geq s\) or by \(V^{s+A(x_{i})-A(y_{j})}\) if \(A(y_{j})-A(x_{i})<s\), while the generators of \(\operatorname{Mor}_{\mathbb{F}[W]}(C_{1}^{v},C_{2}^{v})\) are simply \((x_{i}\to y_{j})\) for generators \(x_{i}\) of \(C_{1}\) and \(y_{j}\) of \(C_{2}\). It follows that the inclusion map \(A_{s}\hookrightarrow\operatorname{Mor}_{\mathbb{F}[W]}(C_{1}^{v},C_{2}^{v})\) is given by multiplying each generator by \(W^{a}\) where \(a=\max(A(y_{j})-A(x_{i})-s,0)\).
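Continuing the toy example from Section 10.1, with the same invented gradings \(A(x_{1})=0\) and \(A(y_{1})=2\): the generator \(U(x_{1}\!\to\!y_{1})\) of \(A_{1}\) has \(a=\max(2-0-1,0)=1\), so the inclusion \(A_{1}\hookrightarrow\operatorname{Mor}_{\mathbb{F}[W]}(C_{1}^{v},C_{2}^{v})\) sends it to \(W(x_{1}\!\to\!y_{1})\), while the generator \(V(x_{1}\!\to\!y_{1})\) of \(A_{3}\) has \(a=\max(2-0-3,0)=0\) and is sent to \((x_{1}\!\to\!y_{1})\) with no power of \(W\).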
The homology functor induces a map from \(\operatorname{Mor}_{\mathbb{F}[W]}(C_{1}^{v},C_{2}^{v})\) to \(B^{v}\). We define the composition of these two maps to be \(v_{s}:A_{s}\to B^{v}\). Similarly, setting \(U=1\) and \(V=W\) gives an inclusion map \(A_{s}\hookrightarrow\operatorname{Mor}_{\mathbb{F}[W]}(C_{1}^{h},C_{2}^{h})\), which after taking homology gives a map \(h_{s}:A_{s}\to B^{h}\). The flip isomorphisms \(\Psi_{1,*}\) and \(\Psi_{2,*}\) induce a map \(F_{\Psi_{1,*},\Psi_{2,*}}:B^{h}\to B^{v}\) taking a morphism \(f\) to \(\Psi_{2,*}\circ f\circ(\Psi_{1,*})^{-1}\). We will define \(h_{s}^{\Psi}:A_{s}\to B^{v}\) to be the composition \(F_{\Psi_{1,*},\Psi_{2,*}}\circ h_{s}\). We now consider the map \(D=v_{0}+h_{0}^{\Psi}\) from \(A_{0}\) to \(B^{v}\), and we define \(\mathbb{X}=\mathbb{X}(C_{1},\Psi_{1,*},C_{2},\Psi_{2,*})\) to be the mapping cone of \(D\). The homology of \(\mathbb{X}\) is a graded module over \(\mathbb{F}[W]\); this is what we take to be the algebraic pairing of \((C_{1},\Psi_{1,*})\) with \((C_{2},\Psi_{2,*})\).

**Proposition 10.3**.: _For \(i\in\{1,2\}\), let \(\boldsymbol{\vartheta}_{i}=(\Gamma_{i},\mathbf{b}_{i})\) be a decorated curve in the marked cylinder \(\mathcal{Z}\) representing the complex \(C_{i}\) and the flip isomorphism \(\Psi_{i,*}\), and let \(\mathbb{X}\) be the complex defined above. The Floer complex \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1})\) is quasi-isomorphic to \(\mathbb{X}\) as graded complexes over \(\mathbb{F}[W]\)._

Proof.: Recall that the complex \(\mathbb{X}=\operatorname{Cone}(D)\) can be realized as \(A_{0}\oplus B^{v}\) with differential \[\left(\begin{array}{cc}\partial_{A_{0}}&0\\ \widetilde{D}&\partial_{B^{v}}\end{array}\right),\] where throughout this proof for a map \(f\) we use \(\widetilde{f}\) to denote the function defined by \[\widetilde{f}(x)=(-1)^{\operatorname{gr}_{w}(x)}f(x).\] It is clear that the complex \(\mathbb{X}\) is also quasi-isomorphic to the complex \[\begin{array}{ccc}A_{0}&\stackrel{{\widetilde{h}_{0}}}{{\longrightarrow}}&B^{h}\\ {\scriptstyle\widetilde{v}_{0}}\,\big\downarrow&&\big\uparrow\,{\scriptstyle-\widetilde{\operatorname{Id}}}\\ B^{v}&\stackrel{{-\widetilde{F}_{\Psi_{1,*},\Psi_{2,*}}}}{{\longleftarrow}}&B^{h}\end{array}\] where \(\operatorname{Id}\) is the identity map on \(B^{h}\); indeed, cancelling the two copies of \(B^{h}\) along the isomorphism \(-\widetilde{\operatorname{Id}}\) adds the composition through \(B^{h}\) to \(\widetilde{v}_{0}\), which up to sign recovers the map \(\widetilde{D}\). We will perturb the curves \(\boldsymbol{\vartheta}_{1}\) and \(\boldsymbol{\vartheta}_{2}\) so that the Floer complex \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1})\) is isomorphic to this complex. An example is shown in Figure 46. We divide the marked cylinder \(\mathcal{Z}\) (identified with \((\mathbb{R}/\mathbb{Z})\times\mathbb{R}\)) into a marked strip \(\mathcal{S}_{0}=[-\frac{1}{8},\frac{1}{8}]\times\mathbb{R}\) and three unmarked strips \(\mathcal{F}_{a}=[a-\frac{1}{8},a+\frac{1}{8}]\times\mathbb{R}\) for \(a\in\{-\frac{1}{4},\frac{1}{4},\frac{1}{2}\}\), and we let \(\mathcal{F}\) denote \(\mathcal{F}_{\frac{1}{4}}\cup\mathcal{F}_{\frac{1}{2}}\cup\mathcal{F}_{-\frac{1}{4}}\). We first homotope each curve \(\boldsymbol{\vartheta}_{i}\) as in the proof of Proposition 10.1 so that the curves restricted to \(\mathcal{S}_{0}\) are in almost simple position and represent the complex \(C_{i}\) and the curves restricted to \(\mathcal{F}\) are a collection of arcs from one side of the strip to the other, possibly with left-turn crossover arrows between arcs oriented the same direction, representing the flip isomorphism \(\Psi_{i,*}\).
We perturb the curves in \(\mathcal{S}_{0}\) as in the proof of Proposition 10.1, so that the Floer complex of the curves restricted to \(\mathcal{S}_{0}\) agrees with the complex \(A_{0}\) exactly. Note that on the right the endpoints of \(\boldsymbol{\vartheta}_{1}\) are above the endpoints of \(\boldsymbol{\vartheta}_{2}\), while the opposite is true on the left. We perturb each curve in \(\mathcal{F}\) so that \(\boldsymbol{\vartheta}_{1}\) is below \(\boldsymbol{\vartheta}_{2}\) on \(\partial_{R}\mathcal{F}_{\frac{1}{4}}=\partial_{L}\mathcal{F}_{\frac{1}{2}}\) and above \(\boldsymbol{\vartheta}_{2}\) on \(\partial_{R}\mathcal{F}_{\frac{1}{2}}=\partial_{L}\mathcal{F}_{-\frac{1}{4}}\), and so that all crossings and crossover arrows between arcs in \(\boldsymbol{\vartheta}_{1}\) or between arcs in \(\boldsymbol{\vartheta}_{2}\) occur in \(\mathcal{F}_{\frac{1}{2}}\) to the right of all crossings between \(\boldsymbol{\vartheta}_{1}\) and \(\boldsymbol{\vartheta}_{2}\). The restriction of the Floer complex to generators from \(\mathcal{F}_{\frac{1}{2}}\) can be identified with \(B^{h}\), where each arc of \(\boldsymbol{\vartheta}_{i}\) in \(\mathcal{F}_{\frac{1}{2}}\) corresponds to a generator of \(H_{*}C_{i}^{h}\), and the intersection of arcs corresponding to generators \(x\) and \(y\) can be identified with the morphism \((x\to y)\). The only bigons connecting two intersection points in \(\mathcal{F}_{\frac{1}{2}}\) extend through \(\mathcal{F}_{\frac{1}{4}}\) into \(\mathcal{S}_{0}\) on either \(\boldsymbol{\vartheta}_{1}\) or \(\boldsymbol{\vartheta}_{2}\) and correspond to a term in the differential of either \(H_{*}C_{1}^{h}\) or \(H_{*}C_{2}^{h}\) along with a fixed generator in the other complex. These bigons precisely recover the differential on \(B^{h}\), so that the restriction of the Floer complex \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1})\) to generators in \(\mathcal{F}_{\frac{1}{2}}\) is the complex \(B^{h}\). The same is true for the restriction of the Floer complex to generators in \(\mathcal{F}_{\frac{1}{4}}\), but we need to introduce signs in the identification between intersection points and generators of \(B^{h}\) and a grading shift. The bigons connecting intersection points in \(\mathcal{F}_{\frac{1}{4}}\) are clearly the same as those connecting intersection points in \(\mathcal{F}_{\frac{1}{2}}\) after removing a small rectangular strip running between \(\mathcal{F}_{\frac{1}{4}}\) and \(\mathcal{F}_{\frac{1}{2}}\), but the sign of the bigons that extend into \(\mathcal{S}_{0}\) on the \(\boldsymbol{\vartheta}_{1}\) side is flipped. To correct for this, we identify the intersection point of segments corresponding to \(x\) and \(y\) with \((-1)^{\mathrm{gr}_{w}(x)}\) times the morphism \((x\to y)\) and observe that once again the bigons recover the differential on \(B^{h}\). We also note that there is a grading difference of \(1\) (for either grading) between intersection points in \(\mathcal{F}_{\frac{1}{2}}\) and the corresponding intersection points in \(\mathcal{F}_{\frac{1}{4}}\), so the Floer complex restricted to generators in \(\mathcal{F}_{\frac{1}{4}}\) is the complex \(B^{h}[-1]\). Similarly, the arcs in \(\boldsymbol{\vartheta}_{i}\) through \(\mathcal{F}_{-\frac{1}{4}}\) correspond to generators of \(H_{*}C_{i}^{v}\) and the intersections in \(\mathcal{F}_{-\frac{1}{4}}\) can be identified up to sign with the generators of \(B^{v}\).
If we identify the intersection point between segments corresponding to \(x\) in \(H_{*}C_{1}^{v}\) and \(y\) in \(H_{*}C_{2}^{v}\) with \((-1)^{\mathrm{gr}_{w}(x)+1}\) times the morphism \((x\to y)\), we observe that the Floer complex restricted to generators from \(\mathcal{F}_{-\frac{1}{4}}\) agrees with \(B^{v}[-1]\). The only bigons connecting intersection points from different strips connect intersection points in adjacent strips; more specifically they connect points in \(\mathcal{S}_{0}\) or \(\mathcal{F}_{\frac{1}{2}}\) to points in \(\mathcal{F}_{\frac{1}{4}}\) or \(\mathcal{F}_{-\frac{1}{4}}\). Counting bigons from \(\mathcal{F}_{\frac{1}{2}}\) to \(\mathcal{F}_{\frac{1}{4}}\) realizes the degree \(-1\) map \(-\widetilde{\mathrm{Id}}[-1]\) from \(B^{h}\) to \(B^{h}[-1]\). It is clear that there is exactly one bigon for each generator of \(B^{h}\). To check the signs, note that the bigon corresponding to the pair \((x,y)\) contributes to the Floer complex with a minus sign if and only if \(\mathrm{gr}_{w}(y)\) is even, and the intersection point on the \(\mathcal{F}_{\frac{1}{4}}\) end of the boundary represents the opposite of the generator \((x\to y)\) of \(B^{h}[-1]\) if and only if \(\mathrm{gr}_{w}(x)\) is odd; it follows that the map takes a generator of \(B^{h}\) to the corresponding generator of \(B^{h}[-1]\) with a minus sign if and only if \(\mathrm{gr}_{w}((x\to y))\) is even. By shifting the degree up by one, the map \(-\widetilde{\mathrm{Id}}[-1]:B^{h}\to B^{h}[-1]\) is equivalent to the degree zero map \(-\widetilde{\mathrm{Id}}:B^{h}\to B^{h}\). We next observe that counting the bigons from \(\mathcal{S}_{0}\) to \(\mathcal{F}_{\frac{1}{4}}\) defines the map \(\widetilde{h}_{0}[-1]\) from \(A_{0}\) to \(B^{h}[-1]\). Note that generators \((x,y)\) map to zero if either \(x\) or \(y\) is the end of a horizontal arrow in the corresponding complex, and if \(x\) and \(y\) survive in horizontal homology the generator \((x,y)\) maps to itself multiplied by \(W^{a}\) where \(a=\max(A(x)-A(y),0)\). The corresponding bigons are counted with a minus sign if and only if \(y\) has odd grading. Combining this sign with the minus sign on generators of \(B^{h}[-1]\) for which \(\mathrm{gr}_{w}(x)\) is odd gives a minus sign precisely when \((x\to y)\) has odd grading. Similarly, counting the bigons from \(\mathcal{S}_{0}\) to \(\mathcal{F}_{-\frac{1}{4}}\) realizes the map \(\widetilde{v}_{0}[-1]\) from \(A_{0}\) to \(B^{v}[-1]\). Finally, we check that counting bigons from \(\mathcal{F}_{\frac{1}{2}}\) to \(\mathcal{F}_{-\frac{1}{4}}\) realizes the degree \(-1\) map \(\widetilde{F}_{\Psi_{1,*},\Psi_{2,*}}[-1]\) from \(B^{h}\) to \(B^{v}[-1]\). The identification is obvious up to sign; to check the signs, note that a bigon starting at \((x\to y)\) contributes with sign \((-1)^{\mathrm{gr}_{w}(y)}\) and the identification between the terminal intersection point of the bigon and a generator of \(B^{v}[-1]\) contributes the sign \((-1)^{\mathrm{gr}_{w}(x)+1}\). Putting all these observations together, and shifting the grading of \(B^{h}[-1]\) and \(B^{v}[-1]\), we see that the Floer complex can be identified with the complex given at the beginning of the proof, which is quasi-isomorphic to \(\mathbb{X}\). An example of the unshifted pairing is shown in Figure 46, where \((C_{1},\Psi_{1,*})\) and \((C_{2},\Psi_{2,*})\) are both the complex and flip isomorphism associated with the dual knot in \(+1\)-surgery on the left handed trefoil from Example 2.6.
The curves have been perturbed as in the proof of Proposition 10.3 so that the Floer complex is identified with the complex quasi-isomorphic to \(\mathbb{X}\) given in the proof.

Figure 46. The simple pairing of the curve invariant for \(+1\)-surgery on the left handed trefoil from Example 2.6 with itself, in the form described in the proof of Proposition 10.3. The highlighted intersection points have a minus sign when identified with a generator of the corresponding complex.

We remark that in the proof of Proposition 10.3, it was not necessary to divide the unmarked strip \(\mathcal{F}\) into three strips and perturb \(\boldsymbol{\vartheta}_{1}\) to intersect \(\boldsymbol{\vartheta}_{2}\) in each of these. We could have instead assumed the collection of \(\boldsymbol{\vartheta}_{1}\) arcs crossed the collection of \(\boldsymbol{\vartheta}_{2}\) arcs once in \(\mathcal{F}\), corresponding to the intersection points in \(\mathcal{F}_{-\frac{1}{4}}\), and shown that counting the bigons moving rightward from \(\mathcal{S}\) to \(\mathcal{F}\) recovers the map \(\widetilde{h}_{0}^{\Psi}\). We thus chose to perform a finger move to add additional intersections and construct a larger chain complex. We feel this makes the argument more clear, since we can consider the maps \(\widetilde{h}_{0}\) and \(\widetilde{F}_{\Psi_{1,*},\Psi_{2,*}}\) separately, but we will adopt the simpler configuration in similar proofs moving forward.

### Uniqueness of curves

The algebraically defined pairing introduced above only depends on the chain complexes \(C_{1}\) and \(C_{2}\) up to bigraded chain homotopy equivalence and the flip isomorphisms up to isomorphism. It follows from Proposition 10.3 that the Floer homology of the corresponding decorated curves only depends on this data. In particular, any two decorated curves representing homotopy equivalent complexes equipped with equivalent flip isomorphisms are indistinguishable in the context of Floer homology in \(\mathcal{Z}\), and can thus be considered as equivalent objects of the Fukaya category of \(\mathcal{Z}\).

**Definition 10.4**.: Two objects in the Fukaya category of the marked cylinder \(\mathcal{Z}\) are said to be _equivalent_ if they have the same Floer homology with any other object in the Fukaya category.

We have thus shown the following:

**Proposition 10.5**.: _Any chain homotopy equivalence class of bigraded complexes over \(\mathcal{R}^{-}\) and any flip isomorphism from the horizontal homology of any of these complexes to the vertical homology of any of these complexes is represented by a unique decorated curve \((\Gamma,\mathbf{b})\) in \(\mathcal{Z}\) (up to equivalence as objects in the Fukaya category of \(\mathcal{Z}\))._

While uniqueness up to equivalence in the Fukaya category is the right notion abstractly, in practice it can be unsatisfying since it can be difficult to check if two decorated curves are equivalent objects in the Fukaya category. It has already been noted that a complex and flip map can have many naive immersed curve representatives that may bear little resemblance to the simplified representatives constructed in the previous sections, even though they are equivalent by Proposition 10.5. Fortunately, for representatives in simple position with bounding chains of local system type, we can give a stronger uniqueness statement about the curves \(\Gamma\) and the decoration \(\widehat{\mathbf{b}}\) obtained from \(\mathbf{b}\) by restricting to degree zero intersection points.
**Proposition 10.6**.: _Suppose \((\Gamma_{1},\mathbf{b}_{1})\) and \((\Gamma_{2},\mathbf{b}_{2})\) are equivalent objects in the Fukaya category of the marked cylinder \(\mathcal{Z}\), and suppose the restriction \(\widehat{\mathbf{b}}_{i}\) of \(\mathbf{b}_{i}\) to degree zero intersection points is of local system type for \(i\in\{1,2\}\). Then \(\Gamma_{1}\) and \(\Gamma_{2}\) are homotopic and \(\widehat{\mathbf{b}}_{1}\) and \(\widehat{\mathbf{b}}_{2}\) agree (under the natural identification between local system intersection points of \(\Gamma_{1}\) and \(\Gamma_{2}\))._

Proof.: This property, which holds more generally for compact objects in the Fukaya category of any marked surface, is an essential part of the uniqueness proof for the structure theorems for type D structures appearing in [HRW] and other generalizations of that work. For the sake of keeping the present paper self-contained we briefly summarize the argument. The approach described here, which is simpler than the uniqueness proof in [HRW], is modeled on the proof of [Zib20, Proposition 4.46]. We need to show that if \(\Gamma_{1}\) and \(\Gamma_{2}\) are not homotopic or if \(\widehat{\mathbf{b}}_{1}\) and \(\widehat{\mathbf{b}}_{2}\) do not agree, then \((\Gamma_{1},\widehat{\mathbf{b}}_{1})\) and \((\Gamma_{2},\widehat{\mathbf{b}}_{2})\) are not equivalent as elements of the Fukaya category. This means that there is some test curve \((\Gamma_{3},\widehat{\mathbf{b}}_{3})\) that pairs differently with \((\Gamma_{1},\widehat{\mathbf{b}}_{1})\) and \((\Gamma_{2},\widehat{\mathbf{b}}_{2})\). It is sufficient to consider the hat version of pairing, that is, Floer homology in the punctured cylinder \(\mathcal{Z}^{*}\). The key observation is that the dimension of the Floer homology of two connected decorated curves \((\gamma_{1},\widehat{\mathbf{b}}_{\gamma_{1}})\) and \((\gamma_{2},\widehat{\mathbf{b}}_{\gamma_{2}})\) is given by the minimal intersection number of \(\gamma_{1}\) and \(\gamma_{2}\) (since there are no bigons when the curves are in minimal position), and thus does not depend on the decorations, unless the curves are parallel, where by parallel we mean homotopic to multiples of the same primitive curve. If the curves are parallel then admissibility forces us to perturb the curves from minimal position. In this case, assuming without loss of generality that the orientations agree, we can check that the dimension of Floer homology differs from the minimal intersection number by \[2k_{1}k_{2}-\dim(\ker(A_{1}\otimes A_{2}^{-1}-\operatorname{Id}))\] where \(k_{i}\) is the multiplicity of the possibly non-primitive curve \(\gamma_{i}\), \(A_{i}\) is the \(k_{i}\)-dimensional local system determined by the decoration \(\widehat{\mathbf{b}}_{\gamma_{i}}\), and \(\operatorname{Id}\) is the identity map on \(\mathbb{F}^{k_{1}k_{2}}\). If \(\Gamma_{1}\) contains a component \(\gamma\) that is not parallel to any component of \(\Gamma_{2}\), we consider test curves with \(\Gamma_{3}\) a multiple of \(\gamma\) and consider different decorations \(\widehat{\mathbf{b}}_{3}\). As \(\widehat{\mathbf{b}}_{3}\) varies the pairing of \((\Gamma_{3},\widehat{\mathbf{b}}_{3})\) with \((\Gamma_{1},\widehat{\mathbf{b}}_{1})\) will change but the pairing with \((\Gamma_{2},\widehat{\mathbf{b}}_{2})\) will not, so for some choice of \(\widehat{\mathbf{b}}_{3}\) the test pairing distinguishes \((\Gamma_{1},\widehat{\mathbf{b}}_{1})\) from \((\Gamma_{2},\widehat{\mathbf{b}}_{2})\).
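As a simple illustration of this count, with one-dimensional local systems invented for the example: if \(k_{1}=k_{2}=1\) and the decorations determine local systems \(A_{1}=(\lambda_{1})\) and \(A_{2}=(\lambda_{2})\) with \(\lambda_{i}\in\mathbb{F}^{\times}\), then \(A_{1}\otimes A_{2}^{-1}-\operatorname{Id}\) is multiplication by \(\lambda_{1}\lambda_{2}^{-1}-1\) on \(\mathbb{F}\), so the correction term above is \(2-1=1\) when \(\lambda_{1}=\lambda_{2}\) and \(2-0=2\) otherwise; in particular, the dimension of the Floer homology detects whether the two local systems agree.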
A similar argument applies if there is some primitive curve \(\gamma\) for which the collection of components of \(\Gamma_{1}\) parallel to \(\gamma\) is not homotopic to the collection of components of \(\Gamma_{2}\) parallel to \(\gamma\), or the corresponding decorations do not agree. Let \(k_{i}\) be the total multiplicity of all components of \(\Gamma_{i}\) parallel to \(\gamma\), and let \(A_{i}\) be the \(k_{i}\)-dimensional local system determined by the decorations on these components. Note that the components of \(\Gamma_{i}\) parallel to \(\gamma\), and their decorations, are uniquely determined by the isomorphism type of \(A_{i}\) (we can construct the decorated curves from a matrix in rational canonical form representing \(A_{i}\)). Thus if these collections are not equivalent we must have that \(A_{1}\) is not isomorphic to \(A_{2}\). In this case we consider the test curve \(\Gamma_{3}=\gamma\) and once again vary the decoration \(\widehat{\mathbf{b}}_{3}\). The pairing with \((\Gamma_{i},\widehat{\mathbf{b}}_{i})\) depends on \[2k_{i}k_{3}-\dim(\ker(A_{i}\otimes A_{3}^{-1}-\operatorname{Id})),\] where here \(A_{3}\) is the local system of dimension \(k_{3}\) determined by \(\widehat{\mathbf{b}}_{3}\). A linear algebra exercise shows that if \(A_{1}\) is not isomorphic to \(A_{2}\) then there is some \(A_{3}\) for which this quantity differs between \(i=1\) and \(i=2\), so \((\Gamma_{1},\widehat{\mathbf{b}}_{1})\) and \((\Gamma_{2},\widehat{\mathbf{b}}_{2})\) are distinguished by pairing.

The proof above relies on the immersed curves having only closed components, since immersed arcs have no local system intersection points. However, we can apply this result to get uniqueness of curves in \(\mathcal{S}\) representing complexes by constructing closed curves in \(\mathcal{Z}\) from these. This is similar to the doubling argument used to show uniqueness of non-compact immersed curves in [12, Theorem 5.27].

**Proposition 10.7**.: _Suppose \((\Gamma_{1},\mathbf{b}_{1})\) and \((\Gamma_{2},\mathbf{b}_{2})\) are equivalent objects in the Fukaya category of the marked strip \(\mathcal{S}\), and suppose \(\Gamma_{i}\) is in almost simple position and the restriction \(\widehat{\mathbf{b}}_{i}\) of \(\mathbf{b}_{i}\) to degree zero intersection points is of local system type for \(i\in\{1,2\}\). Then \(\Gamma_{1}\) and \(\Gamma_{2}\) are homotopic and \(\widehat{\mathbf{b}}_{1}\) and \(\widehat{\mathbf{b}}_{2}\) agree (under the natural identification between local system intersection points of \(\Gamma_{1}\) and \(\Gamma_{2}\))._

Proof.: For \(i\in\{1,2\}\), let \((\Gamma_{i}^{\dagger},\mathbf{b}_{i}^{\dagger})\) denote the decorated immersed curve obtained by rotating \((\Gamma_{i},\mathbf{b}_{i})\) about the origin and interchanging the two gradings; note that the orientation on \(\Gamma_{i}^{\dagger}\), which is determined by the parity of the gradings, is the opposite of the image of the orientation on \(\Gamma_{i}\) under the rotation. For an integer \(n\) let \((\Gamma_{i},\mathbf{b}_{i})[n]\) be the decorated immersed curve obtained from \((\Gamma_{i},\mathbf{b}_{i})\) by translating upward by \(n\) and subtracting \(2n\) from the grading function \(\tilde{\tau}_{w}\), and let \((\Gamma_{i}^{\dagger},\mathbf{b}_{i}^{\dagger})[-n]\) denote the result of translating \((\Gamma_{i}^{\dagger},\mathbf{b}_{i}^{\dagger})\) down by \(n\) and subtracting \(2n\) from \(\tilde{\tau}_{z}\).
We now fix \(n\) sufficiently large so that \((\Gamma_{i},\mathbf{b}_{i})[n]\) lies entirely above height zero and \((\Gamma_{i}^{\dagger},\mathbf{b}_{i}^{\dagger})[-n]\) lies entirely below height zero, and we define \((\Gamma_{i}^{\prime},\mathbf{b}_{i}^{\prime})\) to be the decorated immersed curve in the cylinder \(\mathcal{Z}\) (viewed as the union of a marked strip \(\mathcal{S}\) and an unmarked strip \(\mathcal{F}\)) obtained from the union of the decorated curves \((\Gamma_{i},\mathbf{b}_{i})[n]\) and \((\Gamma_{i}^{\dagger},\mathbf{b}_{i}^{\dagger})[-n]\) in \(\mathcal{S}\) by adding arcs in \(\mathcal{F}\) connecting each right endpoint of \(\Gamma_{i}[n]\) on \(\partial_{R}\mathcal{S}\) with the corresponding left endpoint of \(\Gamma_{i}^{\dagger}[-n]\) on \(\partial_{L}\mathcal{S}\), and each right endpoint of \(\Gamma_{i}^{\dagger}[-n]\) on \(\partial_{R}\mathcal{S}\) with the corresponding left endpoint of \(\Gamma_{i}[n]\) on \(\partial_{L}\mathcal{S}\). Let \(C_{i}\), \(C_{i}[n]\) and \(C_{i}^{\dagger}[-n]\) denote the bigraded complexes represented by \((\Gamma_{i},\mathbf{b}_{i})\), \((\Gamma_{i},\mathbf{b}_{i})[n]\), and \((\Gamma_{i}^{\dagger},\mathbf{b}_{i}^{\dagger})[-n]\), respectively. Clearly \(C_{i}[n]\) is obtained from \(C_{i}\) by shifting the grading \(\mathrm{gr}_{w}\) up by \(2n\), and \(C_{i}^{\dagger}[-n]\) is obtained from \(C_{i}[n]\) by interchanging the two gradings, swapping the role of \(U\) and \(V\) and multiplying the differential \(\partial\) by \(-1\). Because the horizontal complex of \(C_{i}^{\dagger}[-n]\) is isomorphic by construction to the vertical complex of \(C_{i}[n]\) and vice versa, there is an obvious flip map \(\Psi_{i}^{\prime}\) on \(C_{i}[n]\oplus C_{i}^{\dagger}[-n]\) that takes each generator in \((C_{i}[n]\oplus C_{i}^{\dagger}[-n])^{h}\) to the corresponding generator of \((C_{i}[n]\oplus C_{i}^{\dagger}[-n])^{v}\). It is easy to see that \((\Gamma_{i}^{\prime},\mathbf{b}_{i}^{\prime})\) represents the pair \((C_{i}[n]\oplus C_{i}^{\dagger}[-n],\Psi_{i}^{\prime})\). Since \((\Gamma_{1},\mathbf{b}_{1})\) and \((\Gamma_{2},\mathbf{b}_{2})\) are equivalent objects, we have that \(C_{1}\) is chain homotopy equivalent to \(C_{2}\), and it is clear that the same is true for \(C_{1}[n]\) and \(C_{2}[n]\) and for \(C_{1}^{\dagger}[-n]\) and \(C_{2}^{\dagger}[-n]\), and that the pairs \((C_{1}[n]\oplus C_{1}^{\dagger}[-n],\Psi_{1}^{\prime})\) and \((C_{2}[n]\oplus C_{2}^{\dagger}[-n],\Psi_{2}^{\prime})\) are homotopy equivalent. It follows from Proposition 10.6 that \(\Gamma_{1}^{\prime}\) and \(\Gamma_{2}^{\prime}\) are homotopic and \(\widehat{\mathbf{b}}_{1}^{\prime}\) and \(\widehat{\mathbf{b}}_{2}^{\prime}\) agree. It follows that the union of decorated curves \((\Gamma_{i},\mathbf{b}_{i})[n]\cup(\Gamma_{i}^{\dagger},\mathbf{b}_{i}^{\dagger})[-n]\), which is the restriction to \(\mathcal{S}\) of \((\Gamma_{i}^{\prime},\mathbf{b}_{i}^{\prime})\) in \(\mathcal{Z}=\mathcal{S}\cup\mathcal{F}\), is the same up to homotopy for \(i=1,2\). We can uniquely recover the decomposition since \((\Gamma_{i},\mathbf{b}_{i})[n]\) consists of precisely the components of \((\Gamma_{i},\mathbf{b}_{i})[n]\cup(\Gamma_{i}^{\dagger},\mathbf{b}_{i}^{\dagger})[-n]\) above height zero. Finally, by translating down by \(n\) we see that \(\Gamma_{1}\) is homotopic to \(\Gamma_{2}\) and \(\widehat{\mathbf{b}}_{1}\) agrees with \(\widehat{\mathbf{b}}_{2}\).
Propositions 10.6 and 10.7 lead to an obvious question: is a similar uniqueness statement incorporating the whole bounding chain possible? That is, is there some normal form for decorated curves \((\Gamma,\mathbf{b})\) representing pairs \((C,\Psi)\) over \(\mathcal{R}^{-}\) such that every \((C,\Psi)\) is represented by a curve of this form and such that \(\mathbf{b}\) for such a representative is unique as a subset of the self-intersection points of \(\Gamma\)? We suspect that this is possible, but we do not undertake the task of proving it in the present paper. A possible strategy is to define an arrow sliding algorithm for the left-turn crossover arrows appearing at negative degree self-intersection points in \(\mathbf{b}\) and slide arrows until they are either removed, if possible, or placed in some preferred position if they cannot be removed. Unfortunately, as has been noted already, there are some technical difficulties defining general arrow sliding moves in the minus setting, so this approach requires more work; we hope to explore this in future work. In the meantime, we remark that in practice the uniqueness of the immersed curve is the most important part of the uniqueness result, since once an immersed curve \(\Gamma\) is fixed there are finitely many possible collections of turning points \(\mathbf{b}\). Usually a small number of these are valid bounding chains, and in practice it is not difficult to check when two different collections of turning points \(\mathbf{b}\) on \(\Gamma\) are equivalent and find a unique simplest representative (see for example Corollary 12.6).

### A shifted pairing in \(\mathcal{Z}\)

In the next section, we will need to consider more general ways of pairing complexes and their corresponding curves. Fixing complexes \(C_{1}\) and \(C_{2}\) equipped with flip isomorphisms \(\Psi_{1,*}\) and \(\Psi_{2,*}\), for each \(p/q\in\mathbb{Q}\) and for each \(i\in\mathbb{Z}/p\mathbb{Z}\) we will define an algebraic pairing by constructing a chain complex \(\mathbb{X}_{i;p/q}=\mathbb{X}_{i;p/q}(C_{1},\Psi_{1,*},C_{2},\Psi_{2,*})\) over \(\mathbb{F}[W]\) and taking its homology. More precisely, we will define a finitely generated complex \(\mathbb{X}_{i;p/q}^{N}=\mathbb{X}_{i;p/q}^{N}(C_{1},\Psi_{1,*},C_{2},\Psi_{2,*})\) for each sufficiently large \(N\), the homology of which does not depend on \(N\); it is possible to define a single complex \(\mathbb{X}_{i;p/q}\) without fixing an \(N\), but this complex is infinitely generated and we prefer to avoid it for technical reasons. The complex is simplest when \(p/q=0\), and in this case the complex is independent of \(N\). The complex \(\mathbb{X}_{0;0}\) is precisely the complex \(\mathbb{X}\) defined in Section 10.2, and for any \(s\in\mathbb{Z}\), the complex \(\mathbb{X}_{s;0}\) is defined the same way as \(\mathbb{X}_{0;0}\) with \(A_{0}\) replaced with \(A_{s}\) and the maps \(v_{0}\) and \(h_{0}^{\Psi}\) replaced with \(v_{s}\) and \(h_{s}^{\Psi}\). For \(p/q\neq 0\), we need to choose \(N\geq g_{1}+g_{2}\), where \(g_{i}\) is any integer such that \(|A(x)|\leq g_{i}\) for all generators \(x\) of \(C_{i}\) (if \(C_{i}\) is the knot Floer complex of a knot, we may take \(g_{i}\) to be the genus of the knot).
If \(p/q>0\) we then define, for each \(i\) in \(\mathbb{Z}/p\mathbb{Z}\), \[\mathbb{A}^{N}_{i;p/q}=\bigoplus_{n=n_{\min}}^{n_{\max}}A_{\left\lfloor\frac{i+np}{q}\right\rfloor}\qquad\text{and}\qquad\mathbb{B}^{N}_{i;p/q}=\bigoplus_{n=n_{\min}+1}^{n_{\max}}B^{v}, \tag{8}\] where \(n_{\min}\) is the smallest integer \(n\) for which \(\left\lfloor\frac{i+np}{q}\right\rfloor>-N\), and \(n_{\max}\) is the largest integer \(n\) such that \(\left\lfloor\frac{i+np}{q}\right\rfloor<N\). We define \(D^{N}_{i;p/q}:\mathbb{A}^{N}_{i;p/q}\to\mathbb{B}^{N}_{i;p/q}\) to be the map \[D^{N}_{i;p/q}=\left(\bigoplus_{n=n_{\min}}^{n_{\max}-1}h^{\Psi}_{\left\lfloor\frac{i+np}{q}\right\rfloor}\right)\oplus\left(\bigoplus_{n=n_{\min}+1}^{n_{\max}}v_{\left\lfloor\frac{i+np}{q}\right\rfloor}\right), \tag{9}\] where we understand \(v_{\left\lfloor\frac{i+np}{q}\right\rfloor}\) as taking the summand of \(\mathbb{A}^{N}_{i;p/q}\) corresponding to the index \(n\) to the summand of \(\mathbb{B}^{N}_{i;p/q}\) corresponding to the index \(n\) and \(h^{\Psi}_{\left\lfloor\frac{i+np}{q}\right\rfloor}\) as taking the summand of \(\mathbb{A}^{N}_{i;p/q}\) corresponding to the index \(n\) to the summand of \(\mathbb{B}^{N}_{i;p/q}\) corresponding to \(n+1\). We define \(\mathbb{X}^{N}_{i;p/q}\) to be the mapping cone of \(D^{N}_{i;p/q}\). When \(p/q<0\) the definition is similar, with slightly different ranges for the indices. In this case we define \[\mathbb{A}^{N}_{i;p/q}=\bigoplus_{n=n_{\min}}^{n_{\max}}A_{\left\lfloor\frac{i+np}{q}\right\rfloor}\qquad\text{and}\qquad\mathbb{B}^{N}_{i;p/q}=\bigoplus_{n=n_{\min}}^{n_{\max}+1}B^{v}, \tag{10}\] where \(n_{\min}\) is the smallest integer \(n\) for which \(\left\lfloor\frac{i+np}{q}\right\rfloor<N\), and \(n_{\max}\) is the largest integer \(n\) such that \(\left\lfloor\frac{i+np}{q}\right\rfloor>-N\), and we define \(D^{N}_{i;p/q}\) to be the map \[D^{N}_{i;p/q}=\left(\bigoplus_{n=n_{\min}}^{n_{\max}}h^{\Psi}_{\left\lfloor\frac{i+np}{q}\right\rfloor}\right)\oplus\left(\bigoplus_{n=n_{\min}}^{n_{\max}}v_{\left\lfloor\frac{i+np}{q}\right\rfloor}\right). \tag{11}\] Note that up to quasi-isomorphism the choice of \(N\) does not matter (provided \(N\) is at least \(g_{1}+g_{2}\)). A larger choice of \(N\) gives a bigger complex, with more copies of \(A_{s}\) with \(|s|\geq N\) and more corresponding copies of \(B^{v}\), but the homology is the same. This follows from the fact that \(h_{s}\) is an isomorphism for \(s\leq-N\) and \(v_{s}\) is an isomorphism for \(s\geq N\). It is also possible to define an infinitely generated complex over \(\mathbb{F}[W]\) by allowing \(n\) to range over all integers in Equations (8)-(11), but to make sense of these infinitely generated modules we need to work with completions with respect to the variable \(W\) and replace direct sums with direct products. This infinitely generated complex will be denoted \(\mathbb{X}_{i;p/q}\), though we will generally work with the truncated complexes \(\mathbb{X}^{N}_{i;p/q}\). We do not need to truncate \(\mathbb{X}_{s;0}\), as it is already finitely generated; for any \(N\), we will understand \(\mathbb{X}^{N}_{s;0}\) to mean \(\mathbb{X}_{s;0}\). The homology of \(\mathbb{X}^{N}_{i;p/q}\) (for any sufficiently large \(N\)) gives an algebraic pairing of \((C_{1},\Psi_{1,*})\) with \((C_{2},\Psi_{2,*})\) associated with \(i\) and \(p/q\). We will show that this agrees with the Floer homology of certain curves in the cylinder \(\mathcal{Z}\).
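To help unpack Equations (8) and (9), the following minimal Python sketch enumerates the summands of the truncated complex \(\mathbb{X}^{N}_{i;p/q}\) in the case \(p/q>0\); we assume \(q>0\) with \(p\) carrying the sign, and the function name is our own.

```python
def truncated_summands(i, p, q, N):
    """List the Alexander indices s_n = floor((i + n*p)/q) of the A-summands of
    X^N_{i;p/q} for p/q > 0 (with q > 0), together with the number of B^v
    summands, following Equation (8).  Found by a brute-force scan over n;
    Python's // is floor division, matching the floor function in the text."""
    ns = [n for n in range(-10 * q * N, 10 * q * N + 1) if -N < (i + n * p) // q < N]
    n_min, n_max = min(ns), max(ns)
    a_indices = [(i + n * p) // q for n in range(n_min, n_max + 1)]  # one A_s per n
    num_B = n_max - n_min  # one copy of B^v for each n from n_min+1 to n_max
    return a_indices, num_B

# Example: p/q = 3/2, i = 1, N = 3 gives A-summands A_{-1}, A_{0}, A_{2}
# and two copies of B^v
print(truncated_summands(1, 3, 2, 3))
```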
For \(i\in\{1,2\}\), let \(\boldsymbol{\vartheta}_{i}=\boldsymbol{\vartheta}(\Gamma_{i},\mathbf{b}_{i})\) be a train track in \(\mathcal{Z}\) representing \((C_{i},\Psi_{i,*})\). We will define a shifted version \(\boldsymbol{\vartheta}_{1}[i;p/q]\) of \(\boldsymbol{\vartheta}_{1}\). If \(p/q=0\), then for any \(i\in\mathbb{Z}\) the shifted \(\boldsymbol{\vartheta}_{1}[i;0]\) is simply the curve \(\boldsymbol{\vartheta}_{1}\) shifted upward by \(i\). For \(p/q\neq 0\), we construct a non-compact curve by cutting \(\boldsymbol{\vartheta}_{1}\) along the line \(\{\frac{1}{2}\}\times\mathbb{R}\) and gluing together infinitely many shifted copies of this cut open curve. Let \(\boldsymbol{\vartheta}_{1}^{cut}\) denote the decorated curve in \(\mathcal{S}\) obtained by cutting \(\boldsymbol{\vartheta}_{1}\), so that \(\boldsymbol{\vartheta}_{1}\) in \(\mathcal{Z}\) is recovered by gluing the opposite sides of \(\mathcal{S}\) and identifying endpoints of \(\boldsymbol{\vartheta}_{1}^{cut}\). For any integer \(s\), let \(\boldsymbol{\vartheta}_{1}^{cut}[s]\) denote the curve \(\boldsymbol{\vartheta}_{1}^{cut}\) shifted upward by \(s\) units. The curve \(\boldsymbol{\vartheta}_{1}[i;p/q]\) in \(\mathcal{Z}\) is constructed from \[\bigcup_{n\in\mathbb{Z}}\boldsymbol{\vartheta}_{1}^{cut}\left[\left\lfloor\frac{i+np}{q}\right\rfloor\right]\] by identifying the right endpoints of the copy of \(\boldsymbol{\vartheta}_{1}^{cut}\) corresponding to the index \(n\) with the left endpoints of the copy of \(\boldsymbol{\vartheta}_{1}^{cut}\) corresponding to the index \(n+1\). **Proposition 10.8**.: _For any \(p/q\in\mathbb{Q}\) and any \(i\in\mathbb{Z}/p\mathbb{Z}\), the Floer complex \(CF(\boldsymbol{\vartheta}_{2},\boldsymbol{\vartheta}_{1}[i;p/q])\) is quasi-isomorphic to the complex \(\mathbb{X}^{N}_{i;p/q}\) for any sufficiently large \(N\)._ Proof.: When \(p/q=0\), the proof is exactly the same as that of Proposition 10.3 except that \(\boldsymbol{\vartheta}_{1}\) is shifted upward by \(i\). For other values of \(p/q\) the proof is similar. We view the cylinder \(\mathcal{Z}\) as \(\mathcal{S}\cup\mathcal{F}\), with \(\mathcal{F}\) small enough that \(\boldsymbol{\vartheta}_{1}\) and \(\boldsymbol{\vartheta}_{2}\) both consist of parallel arcs when restricted to \(\mathcal{F}\). For each \(n\) from \(n_{\min}\) to \(n_{\max}\), letting \(s=\left\lfloor\frac{i+np}{q}\right\rfloor\), we perturb the corresponding shifted train track \(\boldsymbol{\vartheta}_{1}^{cut}[s]\) in \(\mathcal{S}\) as in the proof of Proposition 10.1 so that the Floer chain complex of \(\boldsymbol{\vartheta}_{2}\) with this train track in \(\mathcal{S}\) is precisely \(A_{s}\). We may assume that the endpoints of \(\boldsymbol{\vartheta}_{1}^{cut}[s]\) occur above height \(N\) on \(\partial_{R}\mathcal{S}\) and below height \(-N\) on \(\partial_{L}\mathcal{S}\). We do not perturb the copies of \(\boldsymbol{\vartheta}_{1}^{cut}\) corresponding to indices \(n>n_{\max}\) or \(n<n_{\min}\); these train tracks lie entirely above height \(N\) or below height \(-N\) and are thus disjoint from \(\boldsymbol{\vartheta}_{2}\).
Note that for any adjacent indices \(n\) and \(n+1\), the arcs in \(\mathcal{F}\) connecting the endpoints of the two corresponding shifted copies of \(\boldsymbol{\vartheta}_{1}\) intersect the arcs of \(\boldsymbol{\vartheta}_{2}\) in \(\mathcal{F}\) if \(n_{\min}\leq n<n_{\max}\) or if \(p/q<0\) and \(n=n_{\min}-1\) or \(n=n_{\max}\); we identify these intersection points with a copy of \(B^{v}\) indexed by \(n+1\), so that the Floer complex is identified with \(\mathbb{X}^{N}_{i;p/q}\) as a vector space. With the curves perturbed as above, counting bigons exactly recovers the map \(D^{N}_{i;p/q}\). The proof is essentially the same as the proof of Proposition 10.3 (though note that we have not introduced the additional intersection points in \(\mathcal{F}\) corresponding to the two copies of \(B^{h}\) connected by \(\widetilde{\text{Id}}\)). Bigons that do not contribute to the internal differential on one of the summands of \(\mathbb{X}^{N}_{i;p/q}\) can only start at intersection points on a perturbed copy of \(\boldsymbol{\vartheta}_{1}^{cut}[s]\) corresponding to some index \(n\), and all such bigons end at intersection points in \(\mathcal{F}\) corresponding to the copies of \(B^{v}\) with index either \(n\) or \(n+1\). Similar to the proof of Proposition 10.3, we can check that counting the bigons of these two types recovers the maps \(\widetilde{v}_{s}\) or \(\widetilde{h}_{s}^{\Psi}\), respectively. An example of a shifted pairing is shown in Figure 47, where \((C_{1},\Psi_{1,*})\) is the knot Floer invariant of the right handed trefoil, \((C_{2},\Psi_{2,*})\) is the knot Floer invariant of the dual knot in \(+1\)-surgery on the left handed trefoil, \(p/q=-1\), and \(i=0\). The figure shows the curves \(\boldsymbol{\vartheta}_{2}\) and \(\boldsymbol{\vartheta}_{1}[i;p/q]\), lifted to the covering space \(\widetilde{T}_{M}\) for clarity, with \(\boldsymbol{\vartheta}_{1}[i;p/q]\) perturbed as in the proof of Proposition 10.8 so that the Floer chain complex agrees exactly with \(\mathbb{X}^{N}_{i;p/q}\) with \(N=2\).

Figure 47. A shifted pairing of the knot Floer invariants of the right handed trefoil and the dual knot in \(+1\) surgery on the left handed trefoil.

## 11. Surgery formulas

In the previous section we related algebraic pairings of complexes and flip maps to geometric pairings of the corresponding curves, but we did not ascribe topological significance to either of these pairings. In this section we will show that these pairings compute the Heegaard Floer homology of Dehn surgeries.

### Rational surgery formula

Recall that Ozsváth and Szabó define a rational surgery formula for Heegaard Floer homology in [10]. This surgery formula realizes the Heegaard Floer homology of rational surgery on a knot \(K\) as the homology of a mapping cone complex constructed from certain subcomplexes of the knot Floer complex of \(K\). In fact, this mapping cone complex is a special case of the complex \(\mathbb{X}_{i;p/q}\) defined in the previous section. The surgery formula was originally stated for the plus version of Heegaard Floer homology, but an analogous formula holds for the minus version. In the minus version, for technical reasons we need to work with completions of the various modules involved and replace direct sums with direct products as described in [11], but this subtlety can be avoided by working with truncated mapping cone complexes (which are finitely generated), as discussed in the previous section. Fix a null-homologous knot \(K\) in a \(3\)-manifold \(Y\), let \(C\) be the complex \(CFK_{\mathcal{R}^{-}}(Y,K)\), and let \(\Psi_{*}\) be the flip isomorphism associated with \(K\).
We also consider the complex \(C_{triv}\) that has a single generator in bigrading \((0,0)\) equipped with the identity flip isomorphism \(\Psi_{triv,*}\); note that \((C_{triv},\Psi_{triv,*})\) is the knot Floer invariant associated to the unknot. The proof of the following proposition relies on observing that \(\mathbb{X}^{N}_{i;p/q}(C_{triv},\Psi_{triv},C,\Psi)\) is quasi-isomorphic to the mapping cone complex \(\mathbb{X}_{i;p/q}\) defined in [10]. **Proposition 11.1**.: _Let \(\boldsymbol{\vartheta}\) be the decorated curve in \(\mathcal{Z}\) associated with the knot \(K\). For any nonzero \(p/q\in\mathbb{Q}\) and any \(i\in\mathbb{Z}/p\mathbb{Z}\), let \(\ell_{i;p/q}\) be a line in \(\mathcal{Z}\) of slope \(p/q\) that intersects \(\mu\) just above height \(-\frac{1}{2}+\frac{i}{q}\). There is a relatively graded isomorphism of \(\mathbb{F}[W]\)-modules_ \[HF^{-}(Y_{p/q}(K),i)\cong HF(\boldsymbol{\vartheta},\ell_{i;p/q}),\] _where the right side is Floer homology in the marked cylinder \(\mathcal{Z}\)._ Proof.: It is easy to check that \(\ell_{i;p/q}\) is homotopic to \(\boldsymbol{\vartheta}_{triv}[i;p/q]\), where \(\boldsymbol{\vartheta}_{triv}\) is the decorated curve representing the pair \((C_{triv},\Psi_{triv})\); \(\boldsymbol{\vartheta}_{triv}\) is simply the horizontal simple closed curve in \(\mathcal{Z}\) at height zero. Thus by Proposition 10.8 the Floer complex on the right side is quasi-isomorphic to \(\mathbb{X}^{N}_{i;p/q}(C_{triv},\Psi_{triv},C,\Psi)\) for sufficiently large \(N\). We just need to show that this latter complex computes \(HF^{-}(Y_{p/q}(K),i)\); we do this by showing that \(\mathbb{X}^{N}_{i;p/q}(C_{triv},\Psi_{triv},C,\Psi)\) is quasi-isomorphic to the (truncated) mapping cone complex defined by Ozsváth and Szabó. Observe that in the construction of \(\mathbb{X}^{N}_{i;p/q}(C_{triv},\Psi_{triv},C,\Psi)\), \(A_{s}=\operatorname{Mor}(C_{triv},C)|_{A=s}\) is isomorphic to \(C|_{A=s}\), since every morphism is determined by where it takes the generator of \(C_{triv}\). Similarly, \(B^{v}=\operatorname{Mor}(H_{*}C^{v}_{triv},H_{*}C^{v})\) is simply \(H_{*}C^{v}\). We next note that the complex \(A_{s}=C|_{A=s}\) is isomorphic to the minus analog of the complex \(A_{s}^{+}\) in [10]. Recall that in the notation of [10], we view \(\mathit{CFK}^{\infty}(Y,K)\) as being generated over \(\mathbb{F}\) by triples \([x,i,j]\) with \(j-i=A(x)\) and \(A_{s}^{+}\) is defined to be the quotient complex generated by triples with \(\max(i,j-s)\geq 0\). The analogous \(A_{s}^{-}\) is the subcomplex generated over \(\mathbb{F}\) by triples with \(\max(i,j-s)\leq 0\). In our notation, this is isomorphic to the subcomplex of \(CFK_{\mathcal{R}^{-}}(Y,K)\otimes\mathbb{F}[V,V^{-1}]\) with Alexander grading zero generated by terms of the form \(U^{A(x)-s}V^{-s}x\) if \(A(x)\geq s\) or \(V^{-A(x)}x\) if \(A(x)<s\); multiplying by \(V^{s}\) gives an isomorphism between this and \(C|_{A=s}\). Similarly, the minus analog \(B_{s}^{-}\) of the \(B_{s}^{+}\) modules appearing in [10] can be identified with the Alexander grading zero summand of \(CFK_{\mathcal{R}^{-}}(Y,K)\otimes\mathbb{F}[V,V^{-1}]\), which by setting \(V=1\) is equivalent to \(C^{v}\) as a module over \(\mathbb{F}[W]\).
The maps \(v_{s}\) and \(h_{s}\) in the construction of \(\mathbb{X}_{i;p/q}^{N}(C_{triv},\Psi_{triv},C,\Psi)\) are simply the inclusion maps \(C|_{A=s}\hookrightarrow C^{v}\) and \(C|_{A=s}\hookrightarrow C^{h}\) obtained by setting \(V=1\) and \(U=1\), respectively, followed by taking homology of \(C^{v}\) or \(C^{h}\). The map \(h_{s}^{\Psi}\) is the composition of \(h_{s}\) with the map \(F_{\Psi_{triv,*},\Psi_{*}}\), which can be identified with \(\Psi_{*}\) since \(\Psi_{triv,*}\) is the identity map and the source \(B^{h}\) and target \(B^{v}\) are identified with \(H_{*}C^{h}\) and \(H_{*}C^{v}\), respectively. The map \(v_{s}^{-}:A_{s}^{-}\to B^{-}\) in the minus analog of the mapping cone construction from [10] is also the inclusion map \(C|_{A=s}\hookrightarrow C^{v}\), and the map \(h_{s}^{-}:A_{s}^{-}\to B^{-}\) is the inclusion map \(C|_{A=s}\hookrightarrow C^{h}\) followed by the flip map \(\Psi\). With these observations in place, we see that \(\mathbb{X}_{i;p/q}^{N}(C_{triv},\Psi_{triv},C,\Psi)\) is closely related to the complex \(\mathbb{X}_{i;p/q}^{-}\) from the minus analog of the construction in [10], with two differences. The first difference is that \(\mathbb{X}_{i;p/q}^{N}(C_{triv},\Psi_{triv},C,\Psi)\) is truncated; that this does not affect the complex up to quasi-isomorphism follows easily from the fact that \(v_{s}\) is an isomorphism for \(s\geq N\) and \(h_{s}\) is an isomorphism for \(s\leq-N\). The other difference is that in \(\mathbb{X}_{i;p/q}^{N}(C_{triv},\Psi_{triv},C,\Psi)\) we have already taken the homology of each copy of \(C^{v}\); this does not affect the homology of the complex. The result then follows from Theorem 1.1 of [10]. If we forget the spin\({}^{c}\) decomposition and project to the marked torus, we recover Theorem 1.3 from the introduction. Proof of Theorem 1.3.: Recall that the marked cylinder \(\mathcal{Z}\) can be identified with \(\overline{T}_{M}\), where \(M\) is the knot complement of \(K\subset Y\) and we identify the vertical direction with \(\mu\) and the horizontal direction with \(\lambda\). If we do not care about the spin\({}^{c}\) decomposition on \(Y_{p/q}(K)\) we can project to the marked torus \(T_{M}\). The curves \(\ell_{i;p/q}\) project to a single simple closed curve \(\ell_{p/q}\) of slope \(p/q\), and the Floer complex of \(p(\boldsymbol{\vartheta})\) with \(\ell_{p/q}\) is the direct sum over \(i\) of the Floer complexes of \(\boldsymbol{\vartheta}\) with \(\ell_{i;p/q}\) in \(\mathcal{Z}\). The result then follows from Proposition 11.1.

### Surgery formula for dual knots

When performing \(p/q\) surgery on a knot \(K\) in \(Y\), the core of the filling torus defines a dual knot \(K^{*}\subset Y_{p/q}(K)\). In [11], Hedden and Levine enhanced the surgery formula of Ozsváth and Szabó for nonzero integer surgeries to give a surgery formula for the knot Floer complex of the dual knot in a surgery. This enhancement also has a nice description in terms of Floer homology of curves, which we now describe. Recall that for an integral surgery \(n\) on a null-homologous knot the mapping cone complex \(\mathbb{X}\) is the mapping cone of the map \[\bigoplus_{s\in\mathbb{Z}}v_{s}^{-}+h_{s}^{-}:\bigoplus_{s\in\mathbb{Z}}A_{s}^{-}\to\bigoplus_{s\in\mathbb{Z}}B_{s}^{-},\] where \(v_{s}^{-}\) maps \(A_{s}^{-}\) to \(B_{s}^{-}\) and \(h_{s}^{-}\) maps \(A_{s}^{-}\) to \(B_{s+n}^{-}\). This complex splits into subcomplexes \(\mathbb{X}_{i;n}\) containing the \(A_{s}^{-}\) and \(B_{s}^{-}\) with \(s\) congruent to \(i\) mod \(n\).
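The splitting by residue mod \(n\) is easy to mirror in code; the following minimal Python sketch (our own encoding, not taken from any software package) groups Alexander indices into the subcomplexes \(\mathbb{X}_{i;n}\), using the fact that Python's % operator already returns representatives in \(\{0,\dots,|n|-1\}\).

```python
def spinc_split(indices, n):
    """Group Alexander indices s by residue mod n, mirroring the splitting of
    the integer-surgery mapping cone into the subcomplexes X_{i;n}."""
    split = {i: [] for i in range(abs(n))}
    for s in indices:
        split[s % abs(n)].append(s)
    return split

print(spinc_split(range(-3, 4), 3))  # {0: [-3, 0, 3], 1: [-2, 1], 2: [-1, 2]}
```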
For sufficiently large \(N\), each of these complexes can be truncated to include only the \(A_{s}\)'s with \(|s|<N\) and only the \(B_{s}\)'s with \(-N+n\leq s\leq N\) if \(n>0\) or only the \(B_{s}\)'s with \(-N\leq s\leq N-n\) if \(n<0\). We will view the mapping cone complex as a module over \(\mathbb{F}[W]\). The surgery formula in [11] adds a new rational Alexander filtration \(\mathcal{J}\) to the (truncated) mapping cone complex so that it is filtered chain homotopy equivalent to \(\mathit{CFK}^{-}(Y_{n}(K),K^{*})\). Note that there is already an integer filtration \(\mathcal{I}\) given by negative powers of \(W\). To describe the \(\mathcal{J}\) filtration, recall that we identify the complex \(A_{s}^{-}\) with the complex \(C|_{A=s}\), which has generators of the form \(V^{s-A(x)}x\) or \(U^{A(x)-s}x\) for generators \(x\) of \(C\). Each generator \(U^{A(x)-s}x\) of \(A_{s}^{-}\) with \(A(x)\geq s\) has \(\mathcal{J}\) filtration level \(\frac{2s+n-1}{2n}\), while every generator \(V^{s-A(x)}x\) of \(A_{s}^{-}\) with \(A(x)<s\) and each generator of \(B_{s}^{-}\) has \(\mathcal{J}\) filtration level \(\frac{2s+n-1}{2n}-1\). Although \(\mathcal{J}\) is a rational filtration, we are primarily interested in the relative integral filtration on each summand \(\mathbb{X}_{i;n}\). For each \(i\), we fix a rational shift \(s_{i}\) so that \(\mathcal{J}\) takes values in \(\mathbb{Z}+s_{i}\) on \(\mathbb{X}_{i;n}\) and \(\mathcal{J}-s_{i}\) is an integer filtration. A key observation is that when moving from index \(s\) to index \(s+n\) the filtration levels of generators increase by \(1\), and that generators \(U^{A(x)-s}x\) of \(A_{s}^{-}\) with \(A(x)\geq s\) are in the same filtration level as the generators of \(B_{s+n}^{-}\) and the generators \(V^{s+n-A(x)}x\) of \(A_{s+n}^{-}\) with \(A(x)<s+n\). Though [11] does not use the notation of bigraded complexes used in this paper, we can pass to a bigraded complex over \(\mathcal{R}^{-}\) by replacing the formal variable \(W\) with the pair of variables \(U\) and \(V\) and defining a bigrading \((\mathrm{gr}_{w}^{*},\mathrm{gr}_{z}^{*})\) so that \(\mathrm{gr}_{w}^{*}\) is the Maslov grading on the mapping cone complex and \(\mathrm{gr}_{z}^{*}\) is defined so that \(A^{*}=\frac{\mathrm{gr}_{w}^{*}-\mathrm{gr}_{z}^{*}}{2}\) gives the filtration level \(\mathcal{J}-s_{i}\). To get the differential on the bigraded complex from the differential on the complex over \(\mathbb{F}[W]\) we replace \(W\) with the product \(UV\) and then add additional factors of \(V\) as needed to be consistent with the bigrading; note that forgetting the new filtration by setting \(V=1\) recovers the original complex. We will realize this bigraded complex as the Floer complex of curves in the doubly marked cylinder. Let \(\boldsymbol{\vartheta}\) be the decorated curve in \(\mathcal{Z}\) representing the knot Floer homology of \(K\). For each \(i\in\mathbb{Z}/n\mathbb{Z}\), let \(\ell_{i;n}^{*}\) be a curve of slope \(n\) in \(\mathcal{Z}\) that passes through the marked points at height \(i+kn-\frac{1}{2}\) for integers \(k\) (note that in the doubly marked cylinder \(\mathcal{Z}^{z,w}\) the curve \(\ell_{i;n}^{*}\) passes between the pair of marked points at these heights). Recall that \(Y_{n}(K)\) has \(n\) spin\({}^{c}\) structures, which can be canonically identified with \(\mathbb{Z}/n\mathbb{Z}\).
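The arithmetic behind the key observation above can be checked mechanically. The following is a minimal Python sketch of the \(\mathcal{J}\) filtration levels, using exact rational arithmetic; the function names and the use of the right handed trefoil's Alexander gradings are our own illustrative choices.

```python
from fractions import Fraction

def A_s_with_J(alexander, s, n):
    """Generators of A_s^- = C|_{A=s} with their J filtration levels:
    U^{A(x)-s} x at level (2s+n-1)/(2n) when A(x) >= s, and
    V^{s-A(x)} x at level (2s+n-1)/(2n) - 1 when A(x) < s."""
    base = Fraction(2 * s + n - 1, 2 * n)
    return {x: (f"U^{A - s} {x}", base) if A >= s else (f"V^{s - A} {x}", base - 1)
            for x, A in alexander.items()}

def J_B(s, n):
    """J filtration level of every generator of B_s^-."""
    return Fraction(2 * s + n - 1, 2 * n) - 1

# Alexander gradings of the generators of CFK of the right handed trefoil
trefoil = {'a': 1, 'b': 0, 'c': -1}
print(A_s_with_J(trefoil, 0, n=2))

# the key observation: moving from index s to s + n raises J by exactly 1,
# and the U-side generators of A_s^- sit at the same level as B_{s+n}^-
n = 2
for s in range(-4, 4):
    assert Fraction(2 * (s + n) + n - 1, 2 * n) == Fraction(2 * s + n - 1, 2 * n) + 1
    assert Fraction(2 * s + n - 1, 2 * n) == J_B(s + n, n)
```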
**Proposition 11.2**.: _The bigraded complex \(CFK_{\mathcal{R}^{-}}(Y_{n}(K),K^{*};i)\) is given by the Floer complex of \(\boldsymbol{\vartheta}\) with \(\ell_{i;n}^{*}\) in the doubly marked cylinder \(\mathcal{Z}^{z,w}\)._ Proof.: We identify the truncated complex \(\mathbb{X}_{i;n}^{N}\) with the Floer complex of \(\boldsymbol{\vartheta}\) with \(\ell_{i;n}^{*}\) by perturbing \(\ell_{i;n}^{*}=\boldsymbol{\vartheta}_{triv}[i;n]\) as in the proof of Proposition 10.8. Recall that, cutting the cylinder into marked and unmarked strips \(\mathcal{S}\) and \(\mathcal{F}\), each copy of \(A_{s}\) in the mapping cone complex corresponds to the Floer complex of \(\boldsymbol{\vartheta}\) with a connected component of \(\boldsymbol{\vartheta}_{triv}[i;n]\) restricted to \(\mathcal{S}\). The piece of \(\boldsymbol{\vartheta}_{triv}[i;n]\) in question crosses \(\mu\) between the marked points at heights \(s-\frac{1}{2}\) and \(s+\frac{1}{2}\), lying to the left of \(\mu\) below this point and to the right of \(\mu\) above this point. If we assume that \(\boldsymbol{\vartheta}_{triv}[i;n]\) crosses \(\mu\) at height \(s-\frac{1}{2}+\epsilon\) for a sufficiently small \(\epsilon\), then the generators of \(A_{s}\) of the form \(U^{A(x)-s}x\) for \(A(x)\geq s\) are precisely those corresponding to intersection points on the right side of \(\mu\). Combining this with the observation above we note that, when moving along \(\boldsymbol{\vartheta}_{triv}[i;n]\), the \(\mathcal{J}\) filtration level of generators of the Floer complex increases by one each time \(\mu\) is crossed moving rightward. We claim that this behavior is reproduced in the Floer complex if we add a new \(z\) marked point on \(\mu\) at height \(2\epsilon\) above each existing \(w\) marked point. Now any bigon contributing to the Floer complex covers the same number of \(z\) and \(w\) marked points except that it covers one extra \(z\) marked point for each time the \(\ell_{i;n}^{*}\) part of the boundary crosses \(\mu\) moving rightward, and one extra \(w\) marked point for each time the \(\ell_{i;n}^{*}\) part of the boundary crosses \(\mu\) moving leftward. Since the differential preserves the new Alexander grading \(A^{*}\), it follows that if there is a bigon from \(x\) to \(y\) with weight \(U^{a}V^{b}\) then \(A^{*}(y)-A^{*}(x)=a-b\). We can of course shift the marked points and the intersection of \(\ell_{i;n}^{*}\) with \(\mu\) down by \(\epsilon\) without any effect on the complex, so that the \(z\) and \(w\) marked points occur a distance of \(\epsilon\) above and below the original marked point, and \(\ell_{i;n}^{*}\) passes through the original marked point. Having identified the knot Floer complex with the Floer homology of curves in their perturbed form, we know that homotopic curves will represent chain homotopic complexes. In particular, we can pull the curve \(\ell_{i;n}^{*}\) tight to a straight line of slope \(n\) (that is, a curve in the cylinder that lifts to a straight line of slope \(n\) in the universal cover). We remark that there is a slight subtlety coming from the fact that in this proof we place the \(z\) and \(w\) marked points above and below the original marked point, whereas we usually think of \(z\) and \(w\) in \(\mathcal{Z}^{z,w}\) lying to the left and right of the marked points in \(\mathcal{Z}\). If \(n>0\) it is clear that when \(\ell^{*}_{i;n}\) is pulled tight we can just as well place \(z\) and \(w\) to the left and right of the marked point.
If \(n<0\) then pulling \(\ell^{*}_{i;n}\) tight results in \(z\) being to the right of \(\ell^{*}_{i;n}\) and \(w\) being to the left of \(\ell^{*}_{i;n}\). This is seemingly a problem, but in fact it has no effect on the complex because of the symmetry of knot Floer homology: the curves are symmetric under 180 degree rotation, and this rotation interchanges the roles of \(z\) and \(w\). Theorem 1.4 in the introduction follows immediately from Proposition 11.2 by projecting from the marked cylinder to the marked torus. Proof of Theorem 1.4.: We identify the marked cylinder \(\mathcal{Z}\) with \(\overline{T}_{M}\) in the usual way and then project to the marked torus \(T_{M}\). The lines \(\ell^{*}_{i;n}\) all project to a single curve \(\ell^{*}_{n}\) of slope \(n\) through the marked point, and the Floer complex of the projection of \(\boldsymbol{\vartheta}\) with \(\ell^{*}_{n}\) in the doubly marked torus \(T^{z,w}_{M}\) is the direct sum of the Floer complexes in \(\mathcal{Z}\) of \(\boldsymbol{\vartheta}\) with \(\ell^{*}_{i;n}\), which by Proposition 11.2 are the spin\({}^{c}\) summands of \(CFK_{\mathcal{R}^{-}}(Y_{n}(K),K^{*})\). **Example 11.3**.: An example of the dual surgery formula is shown in Figure 48, where we consider \(+1\) surgery on the left handed trefoil. On the left of the figure we show the curves perturbed as in the proof of Proposition 10.8 so that the Floer complex realizes the mapping cone complex, and the new filtration is encoded by adding marked points \(z\) above each existing marked point. To more easily compute the knot Floer complex of the dual knot, we pull the curves tight as in the top right part of the figure. We see that there are five generators, \(a\), \(b\), \(c\), \(d\), and \(e\), with Alexander gradings \(1\), \(0\), \(0\), \(0\), and \(-1\). The differential is given by \[\partial(a)=Ub,\quad\partial(b)=0,\quad\partial(c)=-UVb+UVd,\quad\partial(d)=0,\quad\text{ and }\quad\partial(e)=-Vd,\] as stated in Section 1.4. We can find the immersed curve in the strip representing this new complex from the immersed curve for the left handed trefoil by applying the reparametrization that takes the line of slope \(+1\) to the vertical direction while fixing the horizontal direction, as shown in the bottom right of Figure 48. Pairing the resulting immersed curve with the vertical line \(\mu\) gives the same complex as pairing the trefoil curve with the line of slope \(1\), since applying an ambient diffeomorphism to the surface does not affect the Floer complex, so by definition this immersed curve represents the knot Floer complex of the dual knot. This is consistent with Conjecture 1.6, since the immersed curves for both the knot and the dual knot are the same when viewed as curves in the boundary of the knot complement and appear different in the strip only because they are expressed in terms of different parametrizations arising from different choices of meridian.

Figure 48. Computing \(\mathit{CFK}^{-}\) of the dual knot in \(+1\) surgery on the left handed trefoil.

## 12. Examples and further considerations

In this final section we will explore some concrete examples and also discuss a current limitation of the invariants \((\Gamma,\mathbf{b})\) we have described, namely that although \(\Gamma\) is well defined up to homotopy and the restriction \(\widehat{\mathbf{b}}\) of \(\mathbf{b}\) to degree zero self-intersection points is well-defined, the full bounding chain \(\mathbf{b}\) is not uniquely determined as a subset of the self-intersection points.
In other words, we do not have a satisfactory normal form for an equivalence class of bounding chains. That said, as we will see, in practice there is often an obvious simplest choice of \(\mathbf{b}\). In particular, for a large class of complexes arising frequently for knots in \(S^{3}\), all choices of bounding chain on the immersed curves are equivalent and the trivial bounding chain is a valid choice, so for these complexes we can take \(\mathbf{b}\) to be trivial. We discuss some of the challenges in defining a normal form more generally, and some potential solutions.

### Simplifying bounding chains

The non-uniqueness of the bounding chain in our construction as described so far is evident in the following example. For simplicity, we will consider this example with coefficients in \(\mathbb{Z}/2\mathbb{Z}\). **Example 12.1**.: Let \(C\) be the chain complex over \(\mathcal{R}^{-}\) pictured on the left of Figure 49. It has generators \(a\), \(b\), \(c\), \(d\), and \(e\), differential \[\partial(a)=Vb,\quad\partial(b)=0,\quad\partial(c)=Ua+Ve+UVd,\quad\partial(d)=0,\quad\partial(e)=Ub,\] and gradings as given in the table below: \[\begin{array}{c|ccccc}&a&b&c&d&e\\ \hline(\mathrm{gr}_{w},\mathrm{gr}_{z})&(0,-2)&(-1,-1)&(-1,-1)&(0,0)&(-2,0)\\ A&1&0&0&0&-1\end{array}\] Both the horizontal and vertical homology are generated by \(d\); let \(\Psi\) be the flip isomorphism taking \(d\) to itself. To compute the decorated immersed multicurve \((\Gamma,\mathbf{b})\) representing this data we first ignore the diagonal arrow and find \((\Gamma,\widehat{\mathbf{b}})\) representing the complex over \(\widehat{\mathcal{R}}\). Since the given basis is both horizontally and vertically simplified, we quickly arrive at the immersed curve in the middle of Figure 49 with no crossover arrows, so we do not need to remove arrows and \(\widehat{\mathbf{b}}\) is trivial. We then enhance the decorated curve to capture the diagonal arrow that was ignored; following the algorithm in Section 9, we see that this is accomplished by decorating \(\Gamma\) with the bounding chain \(\mathbf{b}\) consisting of one intersection point with coefficient \(W\) as pictured on the right of Figure 49. In this example, the procedure we have presented for computing the curve representative stops with the decorated curve \((\Gamma,\mathbf{b})\) since \(\Gamma\) is in almost simple position and \(\widehat{\mathbf{b}}\) contains only local system intersection points (in fact, \(\widehat{\mathbf{b}}\) is trivial). However, it is not hard to see that there is a more convenient representative for this chain homotopy equivalence class of curves. The change of basis replacing \(e\) with \(e+Ud\) has the effect of removing the diagonal arrow from the complex in Example 12.1. Clearly \(\Gamma\) represents this complex over \(\mathcal{R}^{-}\) with no need to decorate with a bounding chain. In other words, there are two different bounding chains \(\mathbf{b}\) (consisting of the one intersection point in Figure 49) and \(\mathbf{b}^{\prime}\) (which is trivial) on \(\Gamma\) such that \((\Gamma,\mathbf{b})\) and \((\Gamma,\mathbf{b}^{\prime})\) both represent the same homotopy equivalence class of chain complexes.
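For readers who want to verify Example 12.1 by machine, here is a minimal sympy sketch checking that \(\partial^{2}=0\) over \(\mathbb{Z}/2\mathbb{Z}\) and that the change of basis \(e\mapsto e+Ud\) removes the diagonal arrow; the dictionary encoding of the differential is our own.

```python
from sympy import symbols, Poly

U, V = symbols('U V')

# differential of the complex in Example 12.1, as a dictionary of coefficients
d = {'a': {'b': V}, 'b': {}, 'c': {'a': U, 'e': V, 'd': U * V}, 'd': {}, 'e': {'b': U}}

def zero_mod2(expr):
    # is the polynomial zero over F_2[U, V]?
    return Poly(expr, U, V, modulus=2).is_zero

# d^2 = 0: compose the differential with itself and reduce mod 2
for x in d:
    square = {}
    for y, c1 in d[x].items():
        for z, c2 in d[y].items():
            square[z] = square.get(z, 0) + c1 * c2
    assert all(zero_mod2(c) for c in square.values())

# after the change of basis e' = e + U*d, the coefficient of d in d(c) becomes
# UV + V*U = 2UV = 0 mod 2, so the diagonal arrow disappears
assert zero_mod2(d['c']['d'] + d['c']['e'] * U)
```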
The decorated curves \((\Gamma,\mathbf{b})\) and \((\Gamma,\mathbf{b}^{\prime})\) are equivalent as elements of the Fukaya category, so this disparity does not violate the uniqueness statement of Theorem 1.2, but we would hope to have a preferred representative for any equivalence class of decorated curves. In the above example, there is an obvious choice for the preferred representative: it makes sense to remove the decoration entirely if possible and choose \((\Gamma,\mathbf{b}^{\prime})\) as the representative of this equivalence class. In general we would like to choose the simplest option for the bounding chain representing a given equivalence class, but it is not always clear what "simplest" means when the trivial bounding chain is not an option. We ask the following: **Question 12.2**.: Is there a normal form for bounding chains on immersed curves such that every complex over \(\mathcal{R}^{-}\) is represented by some \((\Gamma,\mathbf{b})\) of this form, and for any \(\mathbf{b}\) and \(\mathbf{b}^{\prime}\) of this form, if \((\Gamma,\mathbf{b})\) and \((\Gamma,\mathbf{b}^{\prime})\) are equivalent objects then \(\mathbf{b}\) and \(\mathbf{b}^{\prime}\) agree as linear combinations of self-intersection points of \(\Gamma\)? One way of describing such a normal form would be to extend the arrow sliding algorithm used to simplify \(\widehat{\mathbf{b}}\). Recall that we systematically remove all arrows that can be removed, leaving only left-turn arrows at local system intersection points. In the same way, we could start with a decorated curve \((\Gamma,\mathbf{b})\), consider the corresponding train track in which \(\mathbf{b}\) is represented by a collection of left-turn crossover arrows, and then systematically slide crossover arrows to remove them when possible or, if not, put them in a preferred position. This strategy works for Example 12.1, as shown in Figure 50. The bounding chain \(\mathbf{b}\) can be interpreted as a single crossover arrow with weight \(W\). We can slide this arrow rightward until it is parallel with \(\mu\), pointing from the segment containing \(e\) to the segment containing \(d\). We can then slide it past \(\mu\) (and over a marked point) at the expense of changing the weight from \(W\) to \(1\); an argument similar to the proof of Proposition 7.4 shows this move has the effect of the change of basis replacing \(e\) with \(e+Ud\). Finally, we continue sliding the arrow until it is removable. We suspect that this strategy will work in general, but it turns out that sliding arrows is fairly subtle in the minus setting. For example, in Proposition 7.4 we proved that sliding an arrow across \(\mu\) corresponds to a change of basis, but we did this in the \(UV=0\) setting. The natural generalization of the statement to the minus setting does not always hold (it only holds if the arrow in question is unobstructed). Indeed, the main reason that arrow sliding in the minus setting is more difficult is that monogons are plentiful and sliding arrows often gives rise to arrows that are not unobstructed.

Figure 49. An immersed curve representative for the knot Floer complex of the figure-eight knot. The curve with trivial bounding chain (middle) represents the complex over \(\widehat{\mathcal{R}}\), and adding the indicated intersection point to the bounding chain (right) encodes the complex over \(\mathcal{R}^{-}\).
Figure 50. Simplifying the bounding chain from Example 12.1 by arrow sliding.

We avoided these subtleties in the constructions earlier in this paper by doing the majority of the arrow simplification in the \(UV=0\) setting, and when constructing the enhanced curves over \(\mathcal{R}^{-}\) we really worked over a quotient \(\mathcal{R}_{k+1}=\mathcal{R}^{-}/W^{k+1}\) at each step and only manipulated arrows weighted by the maximal power \(W^{k}\). Another requirement for defining a normal form by arrow sliding is giving a clear description of when the process should stop; that is, we need to understand which arrows are removable. The next example is a variation on the previous one, but the different choices of bounding chain are not equivalent, and in particular these crossover arrows are not removable. **Example 12.3**.: Consider the complex with five generators and differential \[\partial(a)=V^{2}b,\quad\partial(b)=0,\quad\partial(c)=U^{2}a+V^{2}e+k_{1}UVd,\quad\partial(d)=k_{2}UVb,\quad\partial(e)=U^{2}b,\] where \(k_{1}\) and \(k_{2}\) are either \(0\) or \(1\) and they are not both \(1\) (with coefficients in \(\mathbb{Z}/2\mathbb{Z}\) as in Example 12.1, we have \(\partial^{2}(c)=(2+k_{1}k_{2})U^{2}V^{2}b\), so the condition that \(k_{1}\) and \(k_{2}\) are not both \(1\) is exactly what makes \(\partial^{2}=0\)). The decorated curves representing the three complexes arising from the choice of \(k_{1}\) and \(k_{2}\) are shown in Figure 51. The complexes agree over \(\widehat{\mathcal{R}}\), so the underlying curve \(\Gamma\) is the same in each case and we have three different choices of bounding chain on \(\Gamma\). These three complexes are not homotopy equivalent to each other, so these three decorated curves are not equivalent objects in the Fukaya category of the marked cylinder. In particular, there is no way to simplify either of the nontrivial bounding chains to obtain the trivial bounding chain, so the crossover arrows shown are not removable. We see that the arrow slide used in the previous example fails because it requires sliding the crossover arrow over two marked points, but the arrow is only weighted by \(W\) and the power of \(W\) must decrease by one for each marked point the arrow crosses. If there is a normal form for bounding chains on an immersed multicurve, all three of these bounding chains must satisfy the conditions of being in normal form.

### Some curves with only trivial bounding chains

Having discussed some of the challenges with simplifying the bounding chain decoration to a normal form in general, we now point out that in practice it is often much easier to find a simplest representative. In fact, we will describe a family of immersed multicurves for which every possible bounding chain is equivalent to the trivial one. For these curves we can always take the bounding chain to be trivial, and a complex which is represented over \(\widehat{\mathcal{R}}\) by some curve \(\Gamma\) of this form is also represented over \(\mathcal{R}^{-}\) by \(\Gamma\). The multicurves in question contain components which are figure eight shaped curves enclosing two adjacent marked points like the closed curves in Figure 49; we call curves of this form _simple figure eight curves_. A curve of this form represents a simple complex which we will denote \(C_{box}\).

Figure 51. Decorated immersed curves representing the complex shown at the left with no diagonal arrows or with either one of the dotted diagonal arrows included.
This complex has generators \(x_{1}\), \(x_{2}\), \(x_{3}\), and \(x_{4}\) and differential \[\partial(x_{1})=Ux_{2}+Vx_{3},\quad\partial(x_{2})=Vx_{4},\quad\partial(x_{3})=-Ux_{4},\quad\partial(x_{4})=0;\] the only nontrivial instance of \(\partial^{2}=0\) here is \(\partial^{2}(x_{1})=UVx_{4}-VUx_{4}=0\). We will show that for a multicurve with simple figure eight components, any bounding chain can be simplified to avoid these components. To avoid the complications with sliding arrows in the minus setting described in the previous section, we will deduce this from an algebraic fact about bigraded chain complexes (though we remark that an arrow sliding proof would be desirable, since it would likely generalize to other families of curves for which proving the simplification algebraically would be difficult). **Lemma 12.4**.: _Suppose a reduced bigraded complex \(C\) over \(\widehat{\mathcal{R}}\) has a basis containing four generators \(x_{1}\), \(x_{2}\), \(x_{3}\), and \(x_{4}\) such that there are length one horizontal arrows from \(x_{1}\) to \(x_{2}\) and from \(x_{3}\) to \(x_{4}\), length one vertical arrows from \(x_{1}\) to \(x_{3}\) and from \(x_{2}\) to \(x_{4}\), and no other horizontal or vertical arrows into or out of these generators. Then possibly after a change of basis \(C\) splits as a direct sum \(C^{\prime}\oplus C_{box}\)._ Proof.: By scaling the generators \(x_{1}\), \(x_{2}\), \(x_{3}\), and \(x_{4}\) by constants we can arrange that the horizontal arrow from \(x_{1}\) to \(x_{2}\) has weight \(U\) and the vertical arrows from \(x_{1}\) to \(x_{3}\) and from \(x_{2}\) to \(x_{4}\) have weight \(V\); it then follows from \(\partial^{2}=0\) that the weight of the horizontal arrow from \(x_{3}\) to \(x_{4}\) is \(-U\). These four generators and the arrows between them thus form a copy of \(C_{box}\), so we just need to remove any other arrows in or out of these four generators. Any such arrows are diagonal by assumption. We will first arrange that there are no other arrows into the generator \(x_{4}\). Suppose there is another generator \(y\) with an arrow of weight \(cU^{a}V^{b}\) from \(y\) to \(x_{4}\); this is a diagonal arrow so \(a\) and \(b\) are both positive. We perform the change of basis replacing \(y\) with \(y^{\prime}=y-cU^{a}V^{b-1}x_{2}\) and note that the coefficient of \(x_{4}\) in \(\partial(y^{\prime})\) is zero. The only other change involving the generators in the box complex is that for any generator \(z\) with an arrow into \(y\) there is a new arrow from \(z\) to \(x_{2}\). Note that if \(b=1\) and the arrow from \(z\) to \(y\) is a horizontal arrow, then the new arrow from \(z\) to \(x_{2}\) will be horizontal, so the complex may no longer be horizontally simplified. However, we can say that if the new arrow from \(z\) to \(x_{2}\) is horizontal it must have length at least two. If we repeat this for all generators \(y\) other than \(x_{2}\) and \(x_{3}\) with arrows into \(x_{4}\), we arrive at a complex with no unwanted arrows into \(x_{4}\). We next eliminate any unwanted arrows into \(x_{2}\) in a similar way. Suppose there is a generator \(y\) other than \(x_{1}\) with an arrow weighted by \(cU^{a}V^{b}\) from \(y\) to \(x_{2}\). We must have that \(a>0\) since the basis is still vertically simplified, and either \(b>0\) or \(a>1\) by the observation in the previous paragraph. We perform the change of basis replacing \(y\) with \(y^{\prime}=y-cU^{a-1}V^{b}x_{1}\), and note that the coefficient of \(x_{2}\) in \(\partial(y^{\prime})\) is zero.
The basis change may also introduce new arrows into \(x_{1}\) and \(x_{3}\), but otherwise arrows in or out of \(x_{1}\), \(x_{2}\), \(x_{3}\), and \(x_{4}\) are unaffected. Repeating this for all generators \(y\) other than \(x_{1}\) with arrows into \(x_{2}\) results in a complex with no unwanted arrows into \(x_{2}\). We can now deduce that there are no arrows into \(x_{1}\) and no arrows into \(x_{3}\) except the vertical arrow from \(x_{1}\). For the first claim, note that for any \(y\) the coefficient of \(x_{2}\) in \(\partial^{2}(y)\), which must be zero, is simply \(U\) times the coefficient of \(x_{1}\) in \(\partial(y)\) since the only arrow into \(x_{2}\) is from \(x_{1}\). For the second claim, note that for any \(y\) other than \(x_{1}\) the coefficient of \(x_{4}\) in \(\partial^{2}(y)\) is, up to sign, \(U\) times the coefficient of \(x_{3}\) in \(\partial(y)\) since the only arrows into \(x_{4}\) are from \(x_{2}\) and \(x_{3}\) and the only arrow into \(x_{2}\) is from \(x_{1}\). Thus there are now no extraneous arrows into the four generators \(x_{1}\), \(x_{2}\), \(x_{3}\), and \(x_{4}\). Removing extra arrows out of these four generators is similar. We first remove unwanted arrows that start at \(x_{1}\): if there is a generator \(y\) with an arrow from \(x_{1}\) to \(y\) we perform a basis change that adds an appropriate multiple of \(y\) to \(x_{2}\). Each such basis change might add a new arrow out of \(x_{2}\), which must be either diagonal or vertical with length at least two. We then remove arrows out of \(x_{2}\) other than the vertical arrow to \(x_{4}\) by adding an appropriate multiple of \(y\) to \(x_{4}\) for any \(y\neq x_{4}\) with an arrow from \(x_{2}\) to \(y\). Finally, we use \(\partial^{2}=0\) to deduce that there are no unwanted arrows out of \(x_{3}\) or \(x_{4}\).

If an immersed multicurve \(\Gamma\) contains a simple figure eight curve then for any bounding chain \(\mathbf{b}\) on \(\Gamma\) the corresponding complex \(C(\Gamma,\mathbf{b})\) over \(\widehat{\mathcal{R}}\) contains a copy of \(C_{box}\). By splitting off \(C_{box}\) summands from the corresponding complex over \(\mathcal{R}^{-}\), we can show that a bounding chain can be chosen with no intersection points on the simple figure eight components. **Proposition 12.5**.: _Let \(\Gamma\) be an immersed multicurve in the marked strip \(\mathcal{S}\) or the marked cylinder \(\mathcal{Z}\) in almost simple position. Any bounding chain on \(\Gamma\) is equivalent to one that contains no self-intersection points on any simple figure eight component of \(\Gamma\)._ Proof.: We may assume that the initial bounding chain \(\mathbf{b}\) has \(\widehat{\mathbf{b}}\) of local system type, since we have shown that every bounding chain is equivalent to one of this form. We now consider the complex \(C=C(\Gamma,\mathbf{b})\) and, fixing a simple figure eight component of \(\Gamma\), we consider the four generators of \(C\) arising from this component. Because \(\widehat{\mathbf{b}}\) has local system type and there are no local system self-intersection points on a simple figure eight curve, the corresponding connected component of the complex \(\widehat{C}=C|_{UV=0}\) over \(\widehat{\mathcal{R}}\) does not depend on the bounding chain. It follows that if we ignore diagonal arrows, the four generators of \(C\) coming from the specified simple figure eight component generate a copy of \(C_{box}\) with no other horizontal or vertical arrows in or out of these generators.
By Lemma 12.4 we can change basis to remove any diagonal arrows in or out of these four generators, realizing \(C\) as a direct sum \(C^{\prime}\oplus C_{box}\) for some smaller complex \(C^{\prime}\). It is clear from the proof of Lemma 12.4 that the gradings of the generators on the \(C_{box}\) summand are the same as the gradings of the four relevant generators of \(C\). We now construct decorated immersed multicurves \((\Gamma^{\prime},\mathbf{b}^{\prime})\) and \((\Gamma_{box},\mathbf{b}_{box})\) representing \(C^{\prime}\) and \(C_{box}\), respectively, and note that \((\Gamma^{\prime}\sqcup\Gamma_{box},\mathbf{b}^{\prime}+\mathbf{b}_{box})\) represents \(C=C^{\prime}\oplus C_{box}\) (with respect to the new basis). It is clear that \(\Gamma_{box}\) is a simple figure eight curve with the same bigradings as the simple figure eight component of \(\Gamma\) we singled out, and \(\mathbf{b}_{box}\) is trivial. We have that \(\Gamma^{\prime}\sqcup\Gamma_{box}\) is homotopic to \(\Gamma\), by the uniqueness of the immersed curve representing \(C\), so \(\mathbf{b}^{\prime}\) may be viewed as a bounding chain on \(\Gamma\) that does not include self-intersection points on the specified simple figure eight component. We can repeat this argument for all simple figure eight components, modifying \(\mathbf{b}\) to avoid self-intersection points on any of them. **Corollary 12.6**.: _If \(\Gamma\) is an immersed curve in the strip \(\mathcal{S}\) or cylinder \(\mathcal{Z}\) containing one embedded component and some number of simple figure eight components, then any bounding chain on \(\Gamma\) is equivalent to the trivial bounding chain._ Proof.: By Proposition 12.5 any bounding chain is equivalent to a linear combination of the self-intersection points of \(\Gamma\) that are not on a simple figure eight component, but if the only component that is not a simple figure eight is embedded then there are no such self-intersection points. In particular, if \(\Gamma\) contains one embedded component and some number of simple figure eight components then the minus invariant \((\Gamma,\mathbf{b})\) is determined by the weaker hat version \((\Gamma,\widehat{\mathbf{b}})\); equivalently, the full complex over \(\mathcal{R}^{-}\) is uniquely determined up to homotopy equivalence by the \(UV=0\) complex. This is relevant because computing the full knot Floer complex of a knot is hard, but we have powerful computational tools for computing the \(UV=0\) complex. Knots whose knot Floer complexes satisfy the conditions of Corollary 12.6 are quite common in practice: using computer computations the author has found the immersed curves associated with all prime knots in \(S^{3}\) up to 15 crossings, and of these 313,230 knots all but one have knot Floer complexes with this property. Thus, although the computational techniques used only compute the complex \(CFK_{\widehat{\mathcal{R}}}\), we can in fact say that we have computed \(CFK_{\mathcal{R}^{-}}\) of these knots. Even when complexes do not take the form of an embedded curve along with simple figure eight components, it is common for small examples that the bounding chain \(\mathbf{b}\) is determined up to equivalence by \(\widehat{\mathbf{b}}\) (though the trivial bounding chain may not be an option). Another example is the complex in Figure 37 discussed in Section 9.1.
There the bounding chain decoration is essential, since this curve without decoration is obstructed (in particular, the trivial linear combination of self-intersection points is not a valid bounding chain). However, there is only one linear combination of self-intersection points for which \((\Gamma,\mathbf{b})\) satisfies the Maurer-Cartan equations, so in this case again \((\Gamma,\mathbf{b})\) is determined by \(\Gamma\). This is a geometric version of the statement that, up to homotopy, there is a unique way to add diagonal arrows to the \(UV=0\) complex such that \(\partial^{2}=0\). Although as Example 12.3 demonstrates it is not hard to construct complexes over \(\mathcal{R}^{-}\) that are not determined by their quotients over \(\widehat{\mathcal{R}}\), these do not seem to arise often in practice. In fact, at the time of writing the author has not yet found an example of a knot for which \(CFK_{\mathcal{R}^{-}}(K)\) is not determined by \(CFK_{\widehat{\mathcal{R}}}(K)\).

### An example with nontrivial \(\widehat{\mathbf{b}}\)

None of the examples presented so far has required a bounding chain decoration to represent the \(UV=0\) complex; in other words, \(\widehat{\mathbf{b}}\) has been trivial. This is very common in practice, and in fact it is an open question whether the decoration \(\widehat{\mathbf{b}}\) is needed to represent the complex \(CFK_{\widehat{\mathcal{R}}}(Y,K)\) for any knot \(K\subset Y\). But it is not difficult to construct a complex for which \(\widehat{\mathbf{b}}\) is nontrivial, such as the example below. **Example 12.7**.: The bigraded complex on the left of Figure 52 is represented by the decorated immersed curve in the infinite strip \(\mathcal{S}\) shown on the right side of the figure. For simplicity we use \(\mathbb{Z}/2\mathbb{Z}\) coefficients for this example.

Figure 52. The complex on the left is represented by the decorated curve on the right. Even as a complex over \(\widehat{\mathcal{R}}\) (ignoring diagonal arrows), a nontrivial bounding chain is required.

Note that the bounding chain has nonzero coefficients for six self-intersection points: five of these are weighted by \(W\) and are ignored when representing the complex over \(\widehat{\mathcal{R}}\), while one is a local system intersection point of the non-primitive closed component and is included in \(\widehat{\mathbf{b}}\) as well. This chain complex cannot be \(CFK_{\mathcal{R}^{-}}\) for a knot in \(S^{3}\), since the \(\infty\)-filling has two-dimensional \(\widehat{HF}\), but it satisfies all known constraints of being the knot Floer complex for a knot in some \(Y\). In particular, the decorated immersed curve is symmetric under the action of the elliptic involution, although this is not obvious since the symmetry holds only up to homotopy and the non-uniqueness of the bounding chain is relevant here. Applying the elliptic involution gives the decorated immersed curve in the middle of Figure 53, and homotoping the curves to their original position results in a different bounding chain as shown on the right of Figure 53. We leave it as an exercise to the motivated reader to check that this bounding chain is equivalent to the original one by adding a pair of crossover arrows from the lower horizontal arc to the higher horizontal arc and sliding them to opposite boundaries of the strip.

Figure 53. Realizing the symmetry on the complex from Example 12.7. On the left is the decorated curve from Example 12.7, in the middle is the result of rotating this decorated curve by half a rotation, and on the right is the result of homotoping the curve to agree with the curve on the left, keeping track of resulting changes to the bounding chains following the usual local moves. The bounding chains on the left and right are different as subsets of the self-intersection points but the decorated curves are equivalent as elements of the Fukaya category of the marked strip.

This example speaks to the difficulty of defining a normal form for bounding chains as a subset of self-intersection points in the minus theory, as we have two
### Remarks on \(\mathbb{Z}\)-coefficients Throughout this paper we have assumed field coefficients, and this is essential in some areas, but many arguments work with \(\mathbb{Z}\)-coefficients as well. We end with a few words about what works and what still needs to be done to have useful immersed curve invariants over \(\mathbb{Z}\). We first observe that the definition of Lagrangian Floer homology of decorated immersed curves is the same using \(\mathbb{Z}\) coefficients, except that to ensure invariance of Floer homology under homotopies we must require the weights associated with basepoints on curves to be \(\pm 1\). This is because sliding one curve past a basepoint with weight \(c\) on the other curve (as in moves (c) or (d) in Figure 8) changes the Floer complex by replacing a generator \(x\) with \(cx\), and this is only a change of basis if \(c\) is a unit. The rest of the proof of homotopy invariance is unaffected, and self-intersection points can be decorated with arbitrary elements of \(\mathbb{Z}[W]\) (or \(\mathbb{Z}[U,V]\) in the doubly marked case). By taking Floer homology with \(\mu\) in the doubly marked cylinder \(\mathcal{S}^{z,w}\), any decorated curve in \(\mathcal{S}\) determines a bigraded complex over \(\mathbb{Z}[U,V]\). Similarly, a decorated curve in the infinite marked cylinder \(\mathcal{Z}\) determines a bigraded complex over \(\mathbb{Z}[U,V]\) along with a flip isomorphism. Given two decorated immersed curves in \(\mathcal{Z}\) representing complexes over \(\mathbb{Z}[U,V]\), the shifted pairing of curves still agrees up to homotopy with the mapping cone of morphism complexes as described in Section 10; in particular, if a decorated immersed curve represents the knot Floer complex of a knot \(K\subset Y\) with \(\mathbb{Z}\) coefficients along with its flip isomorphism then taking Floer homology with a line of slope \(\frac{p}{q}\) computes \(\mathit{HF}^{-}\) of \(\frac{p}{q}\)-surgery on \(K\) with \(\mathbb{Z}\) coefficients. Moreover, it is still true that any bigraded complex over \(\mathbb{Z}[U,V]\) can be represented by some decorated immersed curve in \(\mathcal{S}\) (and similarly that any complex with a flip isomorphism can be represented by some decorated curve in \(\mathcal{Z}\)), namely the naive curve representative described in Section 5.4. The main difference in the case of \(\mathbb{Z}\) coefficients is that these representatives cannot be simplified as fully. The crucial failing of the arrow sliding algorithm when working with \(\mathbb{Z}\) coefficients is the local move at the bottom of Figure 18, in which a self-intersection point next to a crossover arrow is resolved and the crossing is replaced by a new crossover arrow. Because this move introduces an arrow and a basepoint whose weight is the inverse of the weight of the original arrow, this move is only possible when the original arrow is weighted by \(\pm 1\). Figure 53. Realizing the symmetry on the complex from Example 12.7. On the left is the decorated curve from Figure 52, in the middle is the result of rotating this decorated curve by half a rotation, and on the right is the result of homotoping the curve to agree with the curve on the left, keeping track of resulting changes to the bounding chains following the usual local moves. The bounding chains on the left and right are different as subsets of the self-intersection points, but the decorated curves are equivalent as elements of the Fukaya category of the marked strip. 
Having said this, we observe that all the other \(n\)-strand arrow configuration replacements in Figure 18 are still valid with \(\mathbb{Z}\) coefficients. Without the crossing resolving move, we cannot run the arrow sliding algorithm in full. However, we can still follow a modified version of the algorithm, keeping the overall strategy of systematically removing each crossover arrow except for those that are not removable. Recall that the strategy for removing a single arrow with field coefficients is to slide it one direction until the curve segments it connects diverge, and then if the arrow points from the left segment to the right segment it can be removed. If it points from the right segment to the left segment we slide the arrow the other direction until the strands diverge and remove it if possible. If the arrow is not removable on either end, then the curve segments must cross; in this case we slide the arrow to the crossing, resolve the crossing, and then the two resulting arrows will be removable when pushed as far as possible in opposite directions. In this way we can remove all crossover arrows except those for which the curve segments they connect never diverge, and such arrows can be moved to be left-turn crossover arrows at local-system intersection points of non-primitive curve components. With \(\mathbb{Z}\) coefficients we can follow the same strategy, except that when an arrow is not removable at both ends we cannot resolve the crossing if the weight on the crossover arrow is not \(\pm 1\). In this case, we still slide the crossover arrow to the crossing, where it will necessarily be a left-turn crossover arrow. This arrow is not removable, so we will simply include this self-intersection point with the appropriate coefficient in the bounding chain \(\widehat{\mathbf{b}}\). In other words, a more complicated bounding chain, using more than just the local system intersection points, is required even when setting \(UV=0\). Note that in the original curve sliding algorithm, some work was required to show that this strategy for removing a single arrow could be performed repeatedly to remove many arrows such that the total process would eventually terminate; this is slightly more difficult in the \(\mathbb{Z}\) coefficient setting because there are more unremovable arrows present to interact with the arrow being removed, but the modifications to the algorithm are fairly routine. **Example 12.8**.: Consider the chain complex over \(\mathbb{Z}[U,V]\) shown in Figure 54. One can check that the first immersed curve in the figure represents this complex (with respect to the given basis). Note that even if we restrict to \(UV=0\) we still require a non-trivial bounding chain in order to capture the arrows labeled by \(3U\). We can pass to finite field coefficients in multiple ways by taking different quotients of \(\mathbb{Z}\) and we observe that for this complex different finite field coefficients result in different immersed curves. For \(\mathbb{Z}/3\mathbb{Z}\) coefficients we simply ignore the arrows labelled by \(3U\) and remove the bounding chain decoration from the curve representing the complex with \(\mathbb{Z}\) coefficients, but the underlying curve is unchanged. 
For \(\mathbb{Z}/2\mathbb{Z}\) coefficients, on the other hand, the two marked intersection points remain in the bounding chain with coefficient \(1\). We interpret the bounding chain as giving left-turn crossover arrows at these intersection points and run the arrow sliding algorithm; we leave it as an exercise to see that this gives the rightmost curve in the figure. Note that if we keep track of the basis changes corresponding to the arrow slides, these introduce diagonal arrows. Following the algorithm for enhancing curves to capture diagonal arrows, we include the indicated intersection points with coefficient \(W\) in \(\mathbf{b}\); alternatively, once we find the curve representing the \(UV=0\) complex we can observe that the bounding chain decoration \(\mathbf{b}\) is forced and these intersection points must appear in \(\mathbf{b}\) in order for the monogons enclosing two punctures to cancel with something. By following a modified arrow sliding algorithm, we can represent any bigraded complex over \(\mathbb{Z}[U,V]/(UV=0)\) by a graded immersed curve \(\Gamma\) decorated with a bounding chain \(\widehat{\mathbf{b}}\) which is a linear combination of degree zero self-intersection points of \(\Gamma\) such that the coefficient of any intersection point that is not a local system intersection point is not \(\pm 1\). Moreover, by construction we have made some effort to remove the crossover arrows so it is reasonable to hope that this algorithm produces a representative that is as simple as possible in some sense. However, the notion of "as simple as possible" is not clear in this case, since non-removable crossover arrows can be moved between different self-intersection points and there may be clever basis changes not suggested by the arrow sliding algorithm to replace a collection of crossover arrows with a simpler one. In the field coefficient case the fact that \(\widehat{\mathbf{b}}\) can be taken to be trivial except on local system intersection points is powerful because for immersed curves decorated with such a bounding chain the dimension of Lagrangian Floer homology is simply the minimal intersection number of the curves except when the curves being paired have parallel components (in which case there is an additional term that is not hard to understand). This fact enabled us to prove that the decorated immersed curves are unique by observing that two different curves would have different pairing with some third curve. When \(\widehat{\mathbf{b}}\) is more complicated the Lagrangian Floer homology no longer reduces to the minimal intersection number, so arguments of this form are more difficult. To summarize, bigraded complexes over \(\mathbb{Z}[U,V]\) can be represented by decorated immersed curves in \(\mathcal{S}\), and in practice we can choose a fairly nice representative, but even when considering the simpler \(UV=0\) complex these objects are more complicated than immersed curves with local systems. Unlike the case of field coefficients, it is unclear to what extent the decorated curve representing the \(UV=0\) complex is unique, and in fact the \(UV=0\) case with \(\mathbb{Z}\) coefficients exhibits the same subtleties concerning uniqueness of the bounding chain decoration that arise in the case of minus invariants with field coefficients. Despite this added complexity, we expect that immersed curves will be a valuable tool for studying knot Floer homology with \(\mathbb{Z}\) coefficients and this is a topic for further exploration.
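The role played by unit weights can be illustrated with a toy linear-algebra model. The sketch below is our own illustration, not the complex of Figure 54: the two generators and the single weight-3 arrow (a stand-in for the \(3U\) arrows) are hypothetical. It checks that no basis change over \(\mathbb{Z}\) turns the weight into a unit, while reducing mod 3 kills the arrow and reducing mod 2 makes it a unit-weight arrow of the kind the sliding algorithm can remove.

```python
import numpy as np

# Toy differential over Z: generators x0, x1 with d(x1) = 3*x0,
# i.e. a single arrow of weight 3 (convention: D[target, source]).
D = np.array([[0, 3],
              [0, 0]])
assert (D @ D == 0).all()        # the differential squares to zero

# A change of basis x0 -> u*x0 rescales the arrow weight to 3u; over Z
# the only units are u = +-1, so the weight can flip sign but never
# become a unit (indeed the homology Z/3 obstructs removing it).
for u in (1, -1):
    P = np.diag([u, 1])          # P is its own inverse since u*u = 1
    print(u, P @ D @ P)

# Reducing coefficients changes the picture, as in Example 12.8:
print("mod 3:\n", D % 3)         # the weight-3 arrow vanishes entirely
print("mod 2:\n", D % 2)         # it survives with unit weight 1, so over
                                 # Z/2 a basis change can remove it
```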
2305.06062
Controlled State Reconstruction and Quantum Secret Sharing
In this article, we present a benchmark for resource characterization in the process of controlled quantum state reconstruction and secret sharing for general three-qubit states. This is achieved by providing a closed expression for the reconstruction fidelity, which relies on the genuine tripartite correlation and the bipartite channel between the dealer and the reconstructor, characterized by the respective correlation parameters. We formulate the idea of quantum advantage in approximate state reconstruction as surpassing the classical limit set at 2/3. This article brings out a new interoperability between teleportation and state reconstruction. This is detailed through a case-by-case analysis of relevant correlation matrices. We reformulate the idea of quantum secret sharing by setting up additional constraints on the teleportation capacity of the bipartite channels between the dealer and the shareholders, ensuring that, individually, the shareholders cannot reconstruct the secret. We believe that this will give us the ideal picture of how quantum secret sharing should be.
Pahulpreet Singh, Indranil Chakrabarty
2023-05-10T11:30:30Z
http://arxiv.org/abs/2305.06062v4
# A New Quantum Advantage in Quantum Secret Sharing ###### Abstract In this letter, we benchmark the process of resource characterisation for quantum secret sharing by obtaining the classical limit for the tripartite situation and then giving a closed expression for the reconstruction fidelity. It depends on both the genuine tripartite correlation and the bipartite channel between the dealer and the reconstructor. This helps us to predict any quantum advantage we can have with tripartite resource states. Another paramount contribution of this paper is finding a new interoperability between teleportation and secret sharing, which opens up new research avenues. **Introduction:** Quantum Entanglement [1] acts as a cardinal resource in facilitating a plethora of information processing tasks like teleportation [2; 3; 4; 5], super-dense coding [6; 7; 8], remote preparation of states [9], key generation [10], secret sharing [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30], establishing quantum networks [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42], etc. Quantum secret sharing (**QSS**) is the process of sharing a secret between different parties such that none of them can reveal the secret without collaborating with others [11]. It is also known as controlled quantum teleportation (**CQT**) as it is an extended version of quantum teleportation with the extra share acting as the control qubit [48]. QSS is sometimes used as an umbrella term for sharing of both classical and quantum secrets using a quantum entangled state as a resource, which is shared among the parties. It has mainly been studied with tripartite/multipartite entangled states as resources [20; 21; 22], but it is also possible to share quantum secrets using bipartite entangled states [23; 24]. Semi-quantum secrets can be shared with entangled states as resources as well [46]. In the most simplistic scenario with a tripartite entangled resource, consider the setting where Alice is the **dealer** (possesses the secret initially), Bob is the **assistant** (helps in the construction) and Charlie is the **reconstructor** (reconstructs the secret with the help of the assistant). As the dealer, Alice's aim is to share the secret between Bob and Charlie in such a way that neither of them can reconstruct it on their own. It is well-known that the GHZ state, which is genuinely tripartite entangled, allows _perfect_ reconstruction of the secret [11]. However, this is not true for, say, the W state. This letter addresses the following question: if not perfect, what is the threshold limit of reconstruction, above which there is a quantum advantage? By quantum advantage, we refer to a situation where the reconstruction fidelity is better than what can be achieved classically without having any shared quantum resource. Such discussions have appeared earlier for other quantum information processing tasks. In quantum teleportation, for example, this threshold is 2/3, which helps us distinguish entangled states with or without quantum advantage [3; 4; 5]. In the case of super-dense coding, quantum advantage can be defined in terms of negative conditional entropy [7]. We find the classical limit of reconstructing the secret, which happens to be equal to 2/3 as well. Subsequently, we find an expression for the reconstruction fidelity of the secret in terms of the Bloch parameters of the resource state. It eventually enables us to find conditions under which a three-qubit entangled state will have a quantum advantage in the process of QSS. 
It turns out that the reconstruction fidelity is dependent on the correlation tensor between the three parties and the correlation matrix of the dealer-reconstructor pair. It is intriguing to note that it does not depend on the correlation matrix of the assistant-reconstructor pair. In a sense, this fidelity just quantifies how much quantum information from the initial state is transferred to the final location. There can be situations with quantum advantage despite the absence of tripartite correlation. This leads us to believe that this fidelity does not originate solely from QSS. Rather, there is also a contribution from the teleportation capacity of the bipartite channel between the dealer and the reconstructor. We not only prove this but consider several cases based on the correlation matrices that appear in our expression. This helps us analyze the interoperability between the reconstruction fidelity and the teleportation fidelity for three-qubit resources in a holistic manner. **Classical Limit of QSS:** We define the _Classical Limit_ as the expected fidelity score obtained if only classical channels are used to share a qubit. This score comes out to be 2/3 (see [50] for the derivation). This allows us to define the threshold above which a quantum advantage can be claimed, since any resource with score exceeding 2/3 performs better than the classical alternatives. **Approximate QSS and Quantum Advantage:** We start with a three qubit resource state \(\rho_{ABC}\) in the space \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{C}\). This can be written in parametric form as \[\rho_{ABC}=\frac{1}{8}[I^{\otimes 3}+\sum_{i=1}^{3}a_{i}.\sigma_{i}\otimes I^{\otimes 2}+\sum_{j=1}^{3}I\otimes b_{j}.\sigma_{j}\otimes I+\sum_{k=1}^{3}I^{\otimes 2}\otimes c_{k}.\sigma_{k}+\sum_{i,j=1}^{3}q_{ij}\sigma_{i}\otimes\sigma_{j}\otimes I+\sum_{i,k=1}^{3}r_{ik}\sigma_{i}\otimes I\otimes\sigma_{k}+\sum_{j,k=1}^{3}s_{jk}I\otimes\sigma_{j}\otimes\sigma_{k}+\sum_{i,j,k=1}^{3}t_{ijk}\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{k}]. \tag{1}\] Here \(a_{i},b_{j},c_{k}\) are local Bloch vectors, and \(Q=\{q_{ij}\}=Tr(\rho_{ABC}(\sigma_{i}\otimes\sigma_{j}\otimes I))\), \(R=\{r_{ik}\}=Tr(\rho_{ABC}(\sigma_{i}\otimes I\otimes\sigma_{k}))\) and \(S=\{s_{jk}\}=Tr(\rho_{ABC}(I\otimes\sigma_{j}\otimes\sigma_{k}))\) are the correlation matrices of order \(3\times 3\). Here \(\tau=t_{ijk}=Tr(\rho_{ABC}(\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{k}))\) is the correlation tensor. Now we find the secret reconstruction fidelity in terms of the Bloch parameters of the three qubit resource state \(\rho_{ABC}\). The secret qubit \(\mathbb{S}\) on Alice's side, parameterized by the Bloch vector \(\mathbf{\phi}\), is given by \[\rho_{\mathbb{S}}=\frac{1}{2}(I+\sum_{i}\phi_{i}.\sigma_{i}). \tag{2}\] In the standard QSS scheme, measurements take place at two stages of the protocol. First, at Alice's side on the secret qubit (\(\mathbb{S}\)) along with Alice's share of the resource (\(A\)), with projectors \(P_{l}=|\Psi_{l}\rangle\langle\Psi_{l}|\) (\(l=0,1,2,3\)) and second, on Bob's qubit, with projectors \(P_{x}=|x\rangle\langle x|\) (\(x=+,-\)). Here, the Bell states and Hadamard states are given as \(|\Psi_{3(0)}\rangle=\frac{1}{\sqrt{2}}(|01\rangle\pm|10\rangle)\), \(|\Psi_{2(1)}\rangle=\frac{1}{\sqrt{2}}(|00\rangle\pm|11\rangle)\) and \(|x_{\pm}\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm|1\rangle)\) respectively. 
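To make these quantities concrete, here is a small numerical sketch (our own helper code, not part of the letter) that extracts \(Q\), \(R\), \(S\) and the tensor \(\tau\) from an arbitrary three-qubit density matrix via Pauli expectation values:

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3
SIG = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def correlations(rho):
    """Correlation matrices Q, R, S and tensor tau of a 3-qubit state rho."""
    Q = np.array([[np.trace(rho @ kron3(SIG[i], SIG[j], I2)).real
                   for j in range(3)] for i in range(3)])
    R = np.array([[np.trace(rho @ kron3(SIG[i], I2, SIG[k])).real
                   for k in range(3)] for i in range(3)])
    S = np.array([[np.trace(rho @ kron3(I2, SIG[j], SIG[k])).real
                   for k in range(3)] for j in range(3)])
    tau = np.array([[[np.trace(rho @ kron3(SIG[i], SIG[j], SIG[k])).real
                      for k in range(3)] for j in range(3)] for i in range(3)])
    return Q, R, S, tau

# Example: for the GHZ state, R comes out as diag(0, 0, 1)
ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 1 / np.sqrt(2)
Q, R, S, tau = correlations(np.outer(ghz, ghz.conj()))
print(np.round(R, 6))
```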
The Bell state projectors can be written in the form \[P_{l}=\frac{1}{2}(I^{\otimes 2}+\sum_{ij}t_{ij}\sigma_{i}\otimes\sigma_{j}). \tag{3}\] The coefficients \(t_{ij}\) form a correlation matrix \(T_{l}\) (\(l=0,1,2,3\) for different projectors). These are given by \(T_{0}=\text{diag(-1, -1, -1)}\), \(T_{1}=\text{diag(-1, +1, +1)}\), \(T_{2}=\text{diag(+1, -1, +1)}\) and \(T_{3}=\text{diag(+1, +1, -1)}\). The second set of Hadamard projectors on Bob's side is given by \(P_{x}=\frac{1}{2}(I+\mathbf{x}.\sigma)\). Here, \(\mathbf{x}=(\pm 1,0,0)\). Now we find the output state of Charlie's qubit, after the two measurements followed by applying appropriate unitaries. \[\varrho_{\alpha}=\frac{1}{p_{\alpha}}\text{Tr}_{123}\left[(P_{l}\otimes P_{x}\otimes U_{\alpha})(\rho_{\mathbb{S}}\otimes\varrho)(P_{l}\otimes P_{x}\otimes U^{\dagger}_{\alpha})\right]\] The trace is taken over the secret qubit, Alice's share and Bob's share. \(\alpha\) acts as a combined index for \((l,x)\). Here, \(p_{\alpha}=\text{Tr}\left((P_{l}\otimes P_{x}\otimes I)(\rho_{\mathbb{S}}\otimes\varrho)\right)\) is the probability of getting the measurement outcome corresponding to the combination \((P_{l},P_{x})\). Finally, \(U_{\alpha}\) is the unitary operator chosen to reconstruct (a close approximation of) the secret at Charlie's side. Substituting the expressions for the states and the projection operators, we obtain: \[p_{\alpha}\varrho_{\alpha}=\frac{1}{16}\Bigg{(}\bigg{[}1+\frac{1}{2}\sum_{i}(T_{l})_{ii}A_{i}\phi_{i}+\sum_{i}B_{i}x_{i}+\frac{1}{2}\sum_{i,j}(T_{l})_{ii}Q_{ij}\phi_{i}x_{j}\bigg{]}I+\sum_{jk}\Omega_{jk}\bigg{[}C_{j}+\sum_{i}S_{ij}x_{i}+\frac{1}{2}\sum_{i}(T_{l})_{ii}R_{ij}\phi_{i}+\frac{1}{2}\sum_{i,m}(T_{l})_{mm}t_{mij}x_{i}\phi_{m}\bigg{]}\sigma_{k}\Bigg{)}. \tag{4}\] Here, \(\{\Omega_{\alpha}\}\) are rotations in \(\mathbb{R}^{3}\) obtained from the unitaries \(\{U_{\alpha}\}\), given by the relation: \[U_{\alpha}\hat{n}\cdot\sigma U^{\dagger}_{\alpha}=(\Omega^{\dagger}\hat{n})\cdot\sigma=\sum_{ij}\Omega_{ij}n_{i}\sigma_{j}. \tag{5}\] Now, the expected fidelity of reconstruction, i.e. the "closeness" of Charlie's qubit to the original secret, is given by the following integral over the Bloch sphere with uniform distribution \(M\): \[\mathcal{F}=\oint dM(\phi)\sum_{\alpha}p_{\alpha}\text{Tr}\left(\varrho_{\alpha}\rho_{\mathbb{S}}\right). \tag{6}\] After substituting expressions from eq (4) and (2), followed by omitting the terms that do not contribute to the integral, and using the relation \[\oint\langle\phi,Y\phi\rangle dM(\phi)=\frac{1}{3}\text{Tr}\left(Y\right), \tag{7}\] the integral in (6) reduces to \[\mathcal{F}=\frac{1}{16}\sum_{\alpha}\bigg{[}1+\mathbf{B}\cdot\mathbf{x}+\frac{1}{3}\text{Tr}\left(\Omega^{\dagger}_{\alpha}R^{\dagger}T_{l}\right)+\frac{1}{3}\text{Tr}\left(\Omega^{\dagger}_{\alpha}(\tau_{\lambda\mu\nu}x^{\mu})^{\dagger}T_{l}\right)\bigg{]}. \tag{8}\] This is being summed up over all \(\alpha\), i.e. all the \((l,x)\) possibilities of the two measurements. Note that \(\sum_{\alpha}\mathbf{B}\cdot\mathbf{x}=\sum_{l}\sum_{x}\mathbf{B}\cdot\mathbf{x}=0\). Let \(T\) be the matrix formed by the elements \(\{\sum_{j}t_{ijk}x_{j}\}\), or in tensor notation, \[T=\tau_{\lambda\mu\nu}x^{\mu}, \tag{9}\] for \(\mathbf{x}=(+1,0,0)\). Then, for \(\mathbf{x}=(-1,0,0)\), we have \(\tau_{\lambda\mu\nu}x^{\mu}=-T\). 
Thus, the summation can be split into two, based on \(x\), \[\mathcal{F}=\frac{1}{2}+\frac{1}{16}\frac{1}{3}\sum_{l}\mathrm{Tr}\left(T_{l}^{\dagger}(R+T)\Omega_{(l,+)}\right)+\frac{1}{16}\frac{1}{3}\sum_{l}\mathrm{Tr}\left(T_{l}^{\dagger}(R-T)\Omega_{(l,-)}\right). \tag{10}\] Now, we want to choose the rotations in order to maximize \(\mathcal{F}\). As \(-T_{l}^{\dagger}\) is also a rotation, \(\Omega_{\alpha}\) can be chosen in order to maximize each term, and the resulting expression is independent of \(l\): \[\mathcal{F}_{\mathrm{max}}=\max_{\Omega,\Omega^{\prime}}\frac{1}{2}\big{(}1-\frac{1}{6}\mathrm{Tr}\left((R+T)\Omega\right)-\frac{1}{6}\mathrm{Tr}\left((R-T)\Omega^{\prime}\right)\big{)} \tag{11}\] where the maximum is taken over all rotations \(\Omega,\Omega^{\prime}\). Note that \(\Omega\) and \(\Omega^{\prime}\) can be independent of each other. \[\mathcal{F}_{\mathrm{max}}=\frac{1}{2}\bigg{(}1+\frac{1}{6}\mathrm{Tr}\left(\sqrt{(R+T)^{\dagger}(R+T)}\right)+\frac{1}{6}\mathrm{Tr}\left(\sqrt{(R-T)^{\dagger}(R-T)}\right)\bigg{)}. \tag{12}\] As discussed earlier, the state \(\varrho\) is useful for reconstruction of the secret only when \(\mathcal{F}_{\mathrm{max}}>2/3\), or when \(\vartheta(\varrho)>1\), where \(\vartheta(\varrho)\) is defined as: \[\vartheta(\varrho):=\frac{1}{2}\bigg{(}\|R+T\|_{1}+\|R-T\|_{1}\bigg{)}, \tag{13}\] such that \(\mathcal{F}_{\mathrm{max}}=\frac{1}{2}(1+\frac{1}{3}\vartheta(\varrho))\), and \(\|\cdot\|_{1}\) denotes the trace norm of a matrix, given by \(\|Z\|_{1}=\mathrm{Tr}\sqrt{Z^{\dagger}Z}\). Since \(R\) shows up in the expression, we conclude that the reconstruction of the secret is not entirely because of the tripartite correlation. There can be a situation when \(T=O\) (\(O\) is the null matrix) but the value of \(\mathcal{F}\) is greater than \(2/3\). Here, as there is no tripartite correlation [49], there is no involvement of Bob. In that situation, this cannot be called QSS. In a sense, this fidelity quantifies the information that can be retrieved as a result of this process. In other words, both the secret sharing capacity of the three-qubit resource and the teleportation capacity of the two-qubit channel between the dealer and the receiver contribute to the reconstruction fidelity. In order to differentiate the two, we also find the teleportation fidelity of the dealer-reconstructor subsystem of the resource \(\rho_{ABC}\) (from eq (1)). Tracing out Bob, we get \[\rho_{AC}=\mathrm{Tr}_{B}(\rho_{ABC})=\frac{1}{4}[I^{\otimes 2}+\sum_{i}a_{i}.\sigma_{i}\otimes I+\sum_{k}I\otimes c_{k}.\sigma_{k}+\sum_{ik}r_{ik}\sigma_{i}\otimes\sigma_{k}].\] Using the result from [3], the teleportation fidelity of \(\rho_{AC}\) is given by: \[\mathcal{F}^{\prime}=\frac{1}{2}\big{(}1+\frac{1}{3}\mathrm{Tr}\sqrt{R^{\dagger}R}\big{)}. \tag{14}\] This analysis is for the case when Alice is the dealer, and the final qubit is being reconstructed at Charlie's end with the assistance of Bob. But there can be other cases of QSS with the same resource state, when the roles are interchanged. This gives us an ordered triplet of (**dealer, assistant, reconstructor**) which we call the _setting_. In some cases, like the GHZ state, all six settings are equivalent as the state is symmetric over all three parties. However, this cannot be generalised, as \(\mathcal{F}\) for all the settings need not be the same. 
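For readers who want to experiment with eqs (13) and (14), the following self-contained sketch (our own helper code, not from the letter) evaluates \(\vartheta\), \(\mathcal{F}_{\mathrm{max}}\) and \(\mathcal{F}^{\prime}\) from given correlation matrices \(R\) and \(T\); the GHZ matrices used in the sanity check are the ones quoted in Example 1 below:

```python
import numpy as np

def trace_norm(M):
    # ||M||_1 = Tr sqrt(M^dagger M) = sum of singular values
    return np.linalg.svd(M, compute_uv=False).sum()

def theta(R, T):
    # eq (13): theta = (||R+T||_1 + ||R-T||_1) / 2
    return 0.5 * (trace_norm(R + T) + trace_norm(R - T))

def reconstruction_fidelity(R, T):
    # eqs (12)-(13): F_max = (1 + theta/3) / 2
    return 0.5 * (1 + theta(R, T) / 3)

def teleportation_fidelity(R):
    # eq (14): F' of the dealer-reconstructor subsystem
    return 0.5 * (1 + trace_norm(R) / 3)

R = np.diag([0.0, 0.0, 1.0])     # GHZ state
T = np.diag([1.0, -1.0, 0.0])
# expected: theta = 3, F_max = 1, F' = 2/3
print(theta(R, T), reconstruction_fidelity(R, T), teleportation_fidelity(R))
```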
We define three different \(T\) matrices: \(T_{AB}=\{Tr(\rho_{ABC}(\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{x}))\}_{ij}\), \(T_{AC}=\{Tr(\rho_{ABC}(\sigma_{i}\otimes\sigma_{x}\otimes\sigma_{j}))\}_{ij}\) and \(T_{BC}=\{Tr(\rho_{ABC}(\sigma_{x}\otimes\sigma_{i}\otimes\sigma_{j}))\}_{ij}\). Here, the subscripts are used to denote the subsystems which contribute to the matrix. For example in \(T_{AB}\), the indices of the matrix correspond to the first and second subsystems of the three-qubit resource, as can be seen above. We can, hence, re-write eq (13) for the six settings, as shown in table 1. As we see in the table, the expression varies only based on the assistant, and is symmetric under swapping the dealer and the reconstructor. This symmetry is shared with the expression for the fidelity of teleportation [3]. \begin{table} \begin{tabular}{|c|l|c|c|} \hline **S. No.** & **Setting** & **Expression for \(\vartheta(\rho)\)** \\ \hline 1 & (Alice, Bob, Charlie) & \(\vartheta_{AC}(\rho)=\frac{1}{2}\big{(}\|R+T_{AC}\|_{1}\) \\ 2 & (Charlie, Bob, Alice) & \(+\|R-T_{AC}\|_{1}\big{)}\) \\ \hline 3 & (Alice, Charlie, Bob) & \(\vartheta_{AB}(\rho)=\frac{1}{2}\big{(}\|Q+T_{AB}\|_{1}\) \\ 4 & (Bob, Charlie, Alice) & \(+\|Q-T_{AB}\|_{1}\big{)}\) \\ \hline 5 & (Bob, Alice, Charlie) & \(\vartheta_{BC}(\rho)=\frac{1}{2}\big{(}\|S+T_{BC}\|_{1}\) \\ 6 & (Charlie, Alice, Bob) & \(+\|S-T_{BC}\|_{1}\big{)}\) \\ \hline \end{tabular} \end{table} Table 1: Table showing the expression of \(\vartheta(\rho)\) for different settings of (dealer, assistant, reconstructor). For simplicity, we will henceforth use the setting (Alice, Bob, Charlie) by default, and follow the representation in eq (13), i.e. \(T=T_{AC}\) and \(\vartheta(\varrho)=\vartheta_{AC}(\varrho)\), unless specified otherwise. We have seen that the correlation matrix \(S\) is not present in eq (13). It is important to note that this does not rule out the role of Bob in the construction of the secret. It only tells us that the prior correlation between Bob and Charlie does not affect the reconstruction fidelity. The only factors determining it are, first, the genuine correlation between Alice, Bob and Charlie, captured by \(T\), and secondly, the correlation matrix between the dealer and the reconstructor, denoted by \(R\). Hence, we study different cases based on \(R,T\) which give an overview of how this score captures both the secret reconstruction fidelity and the teleportation fidelity of the channel between the dealer and the reconstructor. These also present us with some conditions on quantum advantage, in terms of \(R,T\). **Case 1: \(\mathbf{R}\neq\mathbf{O}\), \(\mathbf{T}\neq\mathbf{O}\)** This is the most general scenario, when both \(R\) and \(T\) can take any value. In this case it is not directly evident whether the fidelity score is entirely because of the secret sharing capacity of the entire three-qubit resource, because of the teleportation capacity of the dealer-reconstructor channel, or because of both. As a consequence, we cannot predict the main reason behind the quantum advantage directly. The only way to address this is to look into the teleportation fidelity of the subsystem of the dealer and the reconstructor in this situation. If the teleportation fidelity is \(\leq 2/3\) and the reconstruction fidelity is \(>2/3\) then we can claim that there is a quantum advantage because of the secret sharing resource. 
We discuss different cases with the help of the following examples: _Example 1:_ As the first simple case we consider the standard GHZ state \(|GHZ\rangle=\dfrac{1}{\sqrt{2}}(|000\rangle+|111\rangle)\) [43; 44]. The matrices \(R\) and \(T\) for the GHZ state can be found after writing the corresponding density state in its Bloch form: \[R=\left(\begin{array}{ccc}0&0&0\\ 0&0&0\\ 0&0&1\end{array}\right),T=\left(\begin{array}{ccc}1&0&0\\ 0&-1&0\\ 0&0&0\end{array}\right).\] This gives \[\vartheta(\rho_{GHZ})=3,\quad\mathcal{F}_{\max}=1, \tag{15}\] which is expected since it is already known that the GHZ state is used for perfect QSS. Note that the teleportation fidelity (from eq (14)) for the dealer-reconstructor subsystem of this state is \(\mathcal{F}^{\prime}=\frac{1}{2}(1+\frac{1}{3})=\dfrac{2}{3}\). This clearly gives us a case with a quantum advantage arising from the secret sharing ability of three-qubit resource states. _Example 2:_ In a case where both of the fidelities \(\mathcal{F}^{\prime}\) and \(\mathcal{F}\) are greater than \(2/3\), we cannot be sure whether the quantum advantage in the reconstruction fidelity \(\mathcal{F}\) is entirely because of the secret sharing resource. The W state, known to exhibit a different nature of entanglement from the GHZ state [43; 44], is given by \(\left|W\right\rangle=\frac{1}{\sqrt{3}}(|001\rangle+|010\rangle+|100\rangle)\). \(R\) and \(T\) for the W state, from the Bloch representation of \(\rho_{W}=\left|W\right\rangle\left\langle W\right|\), are: \[R=\left(\begin{array}{ccc}2/3&0&0\\ 0&2/3&0\\ 0&0&-1/3\end{array}\right),T=\left(\begin{array}{ccc}0&0&2/3\\ 0&0&0\\ 2/3&0&0\end{array}\right),\] which gives \[\vartheta(\rho_{W})=\dfrac{7}{3},\quad\mathcal{F}_{\max}=\dfrac{8}{9}\approx 0.89. \tag{16}\] We see that \(\mathcal{F}_{\max}\neq 1\), which is expected since it is already known that W states cannot be used for perfect QSS [45], but \(\vartheta(\rho_{W})>1\) (equivalently, \(\mathcal{F}_{\max}>2/3\)). In this case, the subsystem-teleportation fidelity for the dealer-reconstructor channel is found to be \(\mathcal{F}^{\prime}=\frac{1}{2}(1+\frac{5}{9})=\dfrac{7}{9}\). Since in this case \(\mathcal{F}^{\prime}>2/3\), we cannot claim that the quantum advantage in the reconstruction fidelity is purely because of QSS. _Example 3:_ Next we consider another example where we show the existence of a state within the paradigm of \(R\neq O,T\neq O\), for which the reconstruction fidelity is greater than \(2/3\), whereas the teleportation fidelity of the dealer-reconstructor subsystem is \(\leq 2/3\). This is a clear example of a state (other than the well-known GHZ state) for which the quantum advantage is because of the secret sharing resource. In this context, let us consider a generalised W class of states. These states can be expressed as [47]: \[\left|\psi_{W}\right\rangle=\lambda_{0}\left|000\right\rangle+\lambda_{1}\left|100\right\rangle+\lambda_{2}\left|101\right\rangle+\lambda_{3}\left|110\right\rangle \tag{17}\] where \(\lambda_{i}\in\mathbb{R}\), \(\lambda_{i}\geq 0\) and \(\sum_{i}\lambda_{i}^{2}=1\). 
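The numbers quoted in Examples 1 and 2 can be reproduced end-to-end from the state vectors; the following self-contained sketch (our own code, not from the letter) builds the GHZ and W states, extracts \(R\) and \(T=T_{AC}\) by Pauli traces, and prints \((\mathcal{F}_{\max},\mathcal{F}^{\prime})\):

```python
import numpy as np

SIG = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def R_and_T(rho):
    """Dealer-reconstructor matrix R and T_AC = Tr(rho sigma_i x sigma_x x sigma_k)."""
    R = np.array([[np.trace(rho @ kron3(SIG[i], I2, SIG[k])).real
                   for k in range(3)] for i in range(3)])
    T = np.array([[np.trace(rho @ kron3(SIG[i], SIG[0], SIG[k])).real
                   for k in range(3)] for i in range(3)])
    return R, T

def tnorm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

def fidelities(rho):
    R, T = R_and_T(rho)
    F = 0.5 * (1 + (tnorm(R + T) + tnorm(R - T)) / 6)   # eq (13)
    Fp = 0.5 * (1 + tnorm(R) / 3)                       # eq (14)
    return F, Fp

ghz = np.zeros(8, dtype=complex); ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)
w = np.zeros(8, dtype=complex);   w[0b001] = w[0b010] = w[0b100] = 1 / np.sqrt(3)
for name, psi in [("GHZ", ghz), ("W", w)]:
    F, Fp = fidelities(np.outer(psi, psi.conj()))
    print(name, round(F, 4), round(Fp, 4))   # GHZ: 1.0, 0.6667; W: 0.8889, 0.7778
```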
For \(\rho_{\tilde{W}}=\left|\psi_{W}\right\rangle\left\langle\psi_{W}\right|\), the matrices of interest are: \[R_{\tilde{W}}=2\left(\begin{array}{ccc}\lambda_{0}\lambda_{2}&0&\lambda_{0}\lambda_{1}\\ 0&-\lambda_{0}\lambda_{2}&0\\ -\lambda_{1}\lambda_{2}&0&\frac{1}{2}-\lambda_{1}^{2}-\lambda_{3}^{2}\end{array}\right),\] \[T_{\tilde{W}}=2\left(\begin{array}{ccc}0&0&\lambda_{0}\lambda_{3}\\ 0&0&0\\ -\lambda_{2}\lambda_{3}&0&-\lambda_{1}\lambda_{3}\end{array}\right).\] Now, we plot the teleportation fidelity of the dealer-reconstructor subsystem (given by eq (14)) against the reconstruction fidelity (given by eq (13)) and get Fig. 1. Not all the states in the special states region mentioned in the figure fall under the paradigm of \(R\neq O,T\neq O\). The state described by parameters \(\lambda_{0}=\lambda_{1}=0.7,\lambda_{2}\approx 0.09,\lambda_{3}\approx 0.11\) specifies a special state which shows a quantum advantage within the paradigm of \(R\neq O,T\neq O\). **Case 2: R = O, T \(\neq\) O** In this case, since the correlation matrix \(R=O\), it can be said with certainty that there is no correlation between the dealer and the reconstructor. This means there will be no teleportation capacity of the channel between them and hence it will not affect the total reconstruction fidelity. In a sense, if there is a quantum advantage in this region, we can surely claim that this is because of the secret sharing capacity of the three-qubit state. However, the converse is not true. There can be states with \(R\neq O\) for which the channel between the dealer and the reconstructor offers no teleportation advantage. In other words, there can be quantum states whose quantum advantage is solely because of \(T\) even with \(R\neq O\). **Theorem 1:** If R = O, T \(\neq\) O, then the observed quantum advantage is only because of secret sharing. **Proof:** Putting \(R=O\) in the expression for the subsystem-teleportation fidelity (from eq (14)) gives \(\mathcal{F}^{\prime}=\frac{1}{2}\). Hence, no information flow can happen from the dealer to the reconstructor through teleportation. Then, the contribution to the reconstruction fidelity comes from the secret sharing resource. \(\blacksquare\) _Example:_ Consider the following states: \[\left|\gamma_{\pm}\right\rangle=\frac{\left|000\right\rangle\pm\left|100\right\rangle\pm\left|110\right\rangle+\left|111\right\rangle}{2},\] Both these states fall into Case 1, but for their equal mixture \(\rho_{\gamma}=\frac{1}{2}(\left|\gamma_{-}\right\rangle\left\langle\gamma_{-}\right|+\left|\gamma_{+}\right\rangle\left\langle\gamma_{+}\right|)\), \(R\) is found to be \(O\). Moreover, we get \(\mathcal{F}=3/4>2/3\), giving us a quantum advantage. Since \(R=O\) and teleportation cannot contribute to the reconstruction fidelity, we can definitely say that this advantage arises purely from secret sharing. **Case 3: R \(\neq\) O, T = O** If in equation (13) \(R\neq O\) but \(T=O\), then it can be said that there is no tripartite correlation and hence there is no involvement of Bob. So in principle, there is no question of QSS in this case. In such a case, if the reconstruction fidelity is greater than \(2/3\), that is purely because of the teleportation capacity of the subsystem. But the converse is not true, as there can be tripartite states with \(T\neq O\) but not having genuine quantum correlation. **Theorem 2:** If \(R\neq O\) and \(T=O\), then the quantum advantage in the reconstruction is entirely because of the teleportation capacity of the subsystem between the dealer and the reconstructor. 
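A rough numerical reproduction of the Fig. 1 scan can be obtained from the closed forms above. The following sketch is our own code: the sampling scheme (normalized absolute Gaussians, and far fewer samples than the \(4\times 10^{6}\) of Fig. 1) is an assumption; it plugs the matrices \(R_{\tilde{W}}\), \(T_{\tilde{W}}\) into eqs (13) and (14) and counts states landing in the special region:

```python
import numpy as np

def RT_W_class(l0, l1, l2, l3):
    """R and T for |psi_W> = l0|000> + l1|100> + l2|101> + l3|110>, as in the text."""
    R = 2 * np.array([[ l0 * l2, 0.0, l0 * l1],
                      [ 0.0, -l0 * l2, 0.0],
                      [-l1 * l2, 0.0, 0.5 - l1**2 - l3**2]])
    T = 2 * np.array([[ 0.0, 0.0, l0 * l3],
                      [ 0.0, 0.0, 0.0],
                      [-l2 * l3, 0.0, -l1 * l3]])
    return R, T

def tnorm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

rng = np.random.default_rng(1)
special, n = 0, 100_000
for _ in range(n):
    lam = np.abs(rng.normal(size=4))
    lam /= np.linalg.norm(lam)          # uniform direction on the positive orthant
    R, T = RT_W_class(*lam)
    F = 0.5 * (1 + (tnorm(R + T) + tnorm(R - T)) / 6)   # eq (13)
    Fp = 0.5 * (1 + tnorm(R) / 3)                        # eq (14)
    special += (F > 2 / 3) and (Fp <= 2 / 3)
print(f"states in the special region: {special}/{n}")
```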
**Proof:** Putting \(T=O\) in eq (13), we get the expression \[\vartheta(\rho_{ABC})=\|R\|_{1}. \tag{18}\] The expression in eq (14) now matches the expression for the reconstruction fidelity, given by \[\mathcal{F}_{\max}=\frac{1}{2}(1+\frac{1}{3}\vartheta(\rho_{ABC}))=\frac{1}{2}(1+\frac{1}{3}\|R\|_{1}).\] Hence, it can be concluded that the quantum advantage in this case is solely because of the teleportation channel between the dealer and the reconstructor. \(\blacksquare\) _Example:_ Consider the following states: \[\left|\delta_{\pm}\right\rangle=\frac{\left|000\right\rangle+\left|100\right\rangle+\left|101\right\rangle\pm\left|110\right\rangle}{2},\] Both of these have \(R\neq O\neq T\), but when their equal mixture \(\rho_{\delta}=\frac{1}{2}(\left|\delta_{-}\right\rangle\left\langle\delta_{-}\right|+\left|\delta_{+}\right\rangle\left\langle\delta_{+}\right|)\) is considered, we find \(T=O\). In this case as well, we get \(\mathcal{F}=3/4>2/3\). But since \(T\) is \(O\), we argue that this fidelity arises solely from the teleportation capacity of the channel between the sender and the reconstructor, and hence the quantum advantage. Figure 1: Plot comparing the reconstruction fidelity and subsystem-teleportation fidelity for \(4\times 10^{6}\) uniformly sampled states belonging to the W class. The highlighted region (labelled “special states”) is the one with \(\mathcal{F}^{\prime}<2/3\) and \(\mathcal{F}>2/3\), implying that secret sharing is successful. One such state is labelled separately as an example, with \(R\neq O\) and \(T\neq O\). **Case 4: R = O, T = O** Here, \(\vartheta(\rho)=0\), giving us \(\mathcal{F}_{\rm max}=1/2\), which is no better than a random guess. In this case, as there is no flow of information from the sender to the reconstructor, there cannot be any quantum advantage. **Discussion :** This letter establishes the classical limit for reconstructing the secret in the standard setting. We give an expression for the reconstruction fidelity in terms of the Bloch parameters. Interestingly, the fidelity, which depends on both the tripartite correlation tensor and the correlation matrix between the dealer and the reconstructor, is an artifact of both the secret sharing capacity of the tripartite state as well as the teleportation capacity of the dealer-reconstructor subsystem. All possible cases associated with a quantum advantage in the reconstruction fidelity are investigated. We give examples where the advantage is only because of the teleportation capacity of the subsystem and those where the advantage is mainly because of the secret sharing capacity of the tripartite resource. We also discuss cases which are ambiguous about the cause of the quantum advantage. Our result introduces an interoperability between teleportation and secret sharing and provides a benchmark for the identification of resources suitable for secret sharing and/or teleportation. By reporting this, it opens up new avenues of research related to the interoperability that can exist in different quantum information processing tasks.
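As a closing numerical check (our own sketch, not part of the letter), the two mixture examples can be verified directly: the code below builds \(\rho_{\gamma}\) and \(\rho_{\delta}\), extracts \(R\) and \(T\), and evaluates the reconstruction fidelity; the text reports \(R=O\) with \(\mathcal{F}=3/4\) for the first and \(T=O\) with \(\mathcal{F}=3/4\) for the second.

```python
import numpy as np

SIG = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
kron3 = lambda a, b, c: np.kron(np.kron(a, b), c)

def ket(bits, signs):
    """Equal-weight superposition of computational basis states with signs."""
    v = np.zeros(8, dtype=complex)
    for b, s in zip(bits, signs):
        v[int(b, 2)] += s
    return v / 2.0

def R_T_F(rho):
    R = np.array([[np.trace(rho @ kron3(SIG[i], I2, SIG[k])).real
                   for k in range(3)] for i in range(3)])
    T = np.array([[np.trace(rho @ kron3(SIG[i], SIG[0], SIG[k])).real
                   for k in range(3)] for i in range(3)])
    tn = lambda M: np.linalg.svd(M, compute_uv=False).sum()
    return R, T, 0.5 * (1 + (tn(R + T) + tn(R - T)) / 6)

# rho_gamma: equal mixture of |gamma_+-> (text reports R = O, F = 3/4)
gp = ket(["000", "100", "110", "111"], [1,  1,  1, 1])
gm = ket(["000", "100", "110", "111"], [1, -1, -1, 1])
R, T, F = R_T_F(0.5 * (np.outer(gp, gp.conj()) + np.outer(gm, gm.conj())))
print(np.allclose(R, 0), round(F, 4))

# rho_delta: equal mixture of |delta_+-> (text reports T = O, F = 3/4)
dp = ket(["000", "100", "101", "110"], [1, 1, 1,  1])
dm = ket(["000", "100", "101", "110"], [1, 1, 1, -1])
R, T, F = R_T_F(0.5 * (np.outer(dp, dp.conj()) + np.outer(dm, dm.conj())))
print(np.allclose(T, 0), round(F, 4))
```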
2304.11637
Toy model for the correlation of qudit bipartite states with maximally mixed marginals
In this paper, we consider the local unitary classification of the class of qudit bipartite mixed states for which no information can be obtained locally. These states are represented by symmetrical density matrices in which both tracial states are maximally mixed. Interestingly, this symmetry facilitates the local unitary classification of two-qubit states. However, the same formalism fails in the case of systems of higher dimensions. We consider a broader set of states by introducing a family of qudit bipartite mixed states with maximally mixed marginals. For this family of states, we determine several constants which are invariant under local unitary transformations and so can be used for entanglement classification. Finally, we consider the two-qutrit case and in particular, a two-parameter family of states for which the local unitary classification is complete. We relate this classification to known entanglement measures such as purity and negativity.
Constantino Rodriguez-Ramos, Colin M. Wilmott
2023-04-23T12:41:22Z
http://arxiv.org/abs/2304.11637v1
# Toy model for the correlation of qudit bipartite states with maximally mixed marginals ###### Abstract In this paper, we consider the local unitary classification of the class of qudit bipartite mixed states for which no information can be obtained locally. These states are represented by symmetrical density matrices in which both tracial states are maximally mixed. Interestingly, this symmetry facilitates the local unitary classification of two-qubit states. However, the same formalism fails in the case of systems of higher dimensions. We consider a broader set of states by introducing a family of qudit bipartite mixed states with maximally mixed marginals. For this family of states, we determine several constants which are invariant under local unitary transformations and so can be used for entanglement classification. Finally, we consider the two-qutrit case and in particular, a two-parameter family of states for which the local unitary classification is complete. We relate this classification to known entanglement measures such as purity and negativity. ## 1 Introduction Entanglement describes a particular type of correlation unique to composite quantum systems [1, 2]. One property of entangled systems is that the state of the system cannot be described by knowing the state of its constituent subsystems. Since the development of quantum information, entanglement has been identified as a valuable resource in different scenarios. For example, entanglement is a crucial resource in applications such as teleportation [3], dense coding [4], quantum cryptography [5], quantum computing [5], and more recently in applications such as quantum sensing [6] and quantum internet [7]. For this reason, tremendous efforts have been put into characterising the entanglement of quantum systems. The simplest form of entanglement is bipartite entanglement. In this setup, the system is constituted by two parties, usually denoted as Alice and Bob. A unitary operation acting locally, only on Alice or only on Bob, preserves the entanglement of the joint system [8, 9]. Establishing a complete set of local unitary (LU) equivalence classes is usually the first approach to characterise a bipartite system in terms of its entanglement. However, the complete LU classification of a general quantum system is usually difficult, and only partial results exist. For pure bipartite states, the complete LU classification is given by the set of Schmidt coefficients of the quantum state, which can be computed by evaluating the spectrum of the marginals of the original state [1]. For mixed states, establishing the LU classification of a general bipartite state is still an open problem. However, some remarkable contributions were made in this direction. In [10], a complete characterization of the LU classes of the two-qubit system is obtained in terms of 18 parameters. In [11, 12], a complete set of invariant scalars of the LU classes of arbitrary-dimension systems was obtained. However, this classification cannot be performed over states which are not full-rank. Zhang et al. presented a criterion to discriminate states that are not LU equivalent which is based on realignment and partial transposition of the states [13]. In this paper, we investigate the LU classification of bipartite states with maximally mixed marginals. To do this, we adapt the formalism introduced in [14] for quantum channel construction. This formalism allows us to consider a parameterized family of bipartite states with fixed rank. 
To establish the LU classification for this family of states we evaluate sets of scalars that are invariant under local unitary transformations in terms of the defining parameters of the family. We consider the particular bipartite system composed of two qutrits. For this quantum system, we evaluate explicitly the sets of LU invariant scalars in terms of the defining parameters of the family. This allows us to obtain analytic expressions for known quantum state measures such as purity and negativity. The outline of this paper is as follows. In section 2, we introduce the tools required for LU classification of bipartite states. In particular, we employ LU invariant scalars derived from the spectrum of the partially transposed matrix and the correlation matrix. In section 3, we provide the parameterised family of quantum states generalizing the anti-symmetric Werner state and maximally entangled pure states. In section 4, we provide the explicit evaluation of the sets of LU invariant scalars. We find that for this particular family, the computation of the spectra of the relevant matrices is simpler than for a generic state. In section 5, we consider the qutrit-qutrit set-up. As an example, we evaluate explicitly the invariant scalars for a bi-parametric family of states and associate them with two known quantum measures of the states: purity and negativity. ## 2 Preliminaries ### Quantum states A \(d\)-dimensional quantum system can be described by pure states \(|\psi\rangle\in\mathcal{H}_{d}\) where \(\mathcal{H}_{d}\) corresponds to the Hilbert space endowed with the inner product \(\langle\psi_{1}|\psi_{2}\rangle\in\mathbb{C}\) which is linear in the first argument and anti-linear in the second. We denote by \(\mathcal{B}(\mathcal{H}_{d})\) the set of bounded linear operators acting on the pure states. Probabilistic mixtures of quantum states are given by the class of bounded linear operators called density operators \(\rho\in\mathcal{B}(\mathcal{H}_{d})\). In particular, these density operators correspond to positive semi-definite operators with unit trace; we denote the set of density operators by \(\mathcal{D}_{d}\). A pure bipartite state \(|\psi^{AB}\rangle\) is separable if it can be expressed as a tensor product of pure states, \(|\psi^{AB}\rangle=|\psi^{A}\rangle\otimes|\psi^{B}\rangle\). In the case that a pure bipartite state cannot be expressed as a tensor product of pure states, we say that the state is entangled. We have that \(\kappa\in\mathbb{C}\) is a local unitary (LU) invariant of \(\rho_{AB}\in\mathcal{D}_{d}\) if \[\kappa(\rho_{AB})=\kappa(\rho^{\prime}_{AB}) \tag{1}\] where \[\rho^{\prime}_{AB}=(U\otimes V)\rho_{AB}(U\otimes V)^{\dagger} \tag{2}\] and \(U,V\in SU(d)\), where \(SU(d)\) corresponds to the set of \(d\)-dimensional special unitary matrices. ### Quantum channels Consider a pair of quantum systems represented by the Hilbert spaces \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\). A quantum channel from system \(A\) to system \(B\), denoted by \(\mathcal{E}\), represents a linear mapping between the state spaces of both systems \(\mathcal{E}:\mathcal{B}(\mathcal{H}_{A})\rightarrow\mathcal{B}(\mathcal{H}_{B})\). While \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\) may have different dimensions, let us consider quantum channels between systems with the same size \(\dim(\mathcal{H}_{A})=\dim(\mathcal{H}_{B})=d\). Quantum channels can be used to describe the evolution of a quantum system if we impose two extra conditions on their operator representation. 
These conditions are complete positivity and trace preservation. A quantum channel \(\mathcal{E}\) is positive if, for any positive semidefinite operator \(X\), \(\mathcal{E}(X)\) is also positive semidefinite. Furthermore, \(\mathcal{E}\) is completely positive if \(\mathcal{E}\otimes\mathbb{1}_{d}\) is positive for all possible \(d\in\mathbb{N}\) where \(\mathbb{1}_{d}\) represents the identity operator acting on \(\mathcal{H}_{d}\). The second condition is trace preservation, which corresponds to the condition \(\operatorname{Tr}(\mathcal{E}(X))=\operatorname{Tr}(X)\;\forall\;X\in \mathcal{D}_{d}\). Finally, we may consider the particular class of unital quantum channels, which are those channels mapping the maximally mixed state to itself, \(\mathcal{E}(\mathbb{1}_{d})=\mathbb{1}_{d}\). Consider the states of the joint system of \(A\) and \(B\) which are represented by bipartite states \(\rho_{AB}\in\mathcal{B}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})\). Despite the apparent differences, the mathematical representations of bipartite systems and quantum channels can be related through the Choi-Jamiołkowski isomorphism [15]. In particular, the Choi-Jamiołkowski isomorphism establishes that, for every quantum channel \(\mathcal{E}:\mathcal{B}(\mathcal{H}_{A})\rightarrow\mathcal{B}(\mathcal{H}_{B})\), we can associate a bipartite state given by \[\rho_{\mathcal{E}}=(\mathbb{1}_{d}\otimes\mathcal{E})(\ket{\psi}\bra{\psi}), \tag{3}\] where \(\ket{\psi}=\frac{1}{\sqrt{d}}\sum_{j=1}^{d}\ket{j}\ket{j}\) is a maximally entangled state of dimension \(d\). Using the relation given by (3), we obtain that unital quantum channels correspond to bipartite states with maximally mixed partial traces \[\operatorname{Tr}_{A}(\rho_{\mathcal{E}})=\operatorname{Tr}_{B}(\rho_{\mathcal{E}})=\frac{\mathbb{1}_{d}}{d}. \tag{4}\] ## 3 Constructing locally maximally mixed bipartite states We now construct parameterised families of bipartite states with maximally mixed marginals adapting the methodology introduced by Rodriguez-Ramos and Wilmott in [14] for channel construction. Therein unital quantum channels were defined by means of a map sending the elements of a complex matrix into the matrix representation of the quantum channel. Using the same methodology, we construct families of bipartite states which are maximally mixed from a local point of view. This is achieved by defining the set of complex matrices \(\mathcal{A}_{d}:=(\alpha_{ij})_{i,j\in\mathbb{Z}_{d}}\in\mathbb{C}^{d\times d}\) such that \[\sum_{i,j=0}^{d-1}\alpha_{ij}\alpha_{ij}^{*}=1 \tag{5}\] and \[\left\{\begin{array}{c}\sum_{i,j=0}^{d-1}\alpha_{ij+l}\alpha_{ij}^{*}=0\\ \sum_{i,j=0}^{d-1}\alpha_{ij+l}\alpha_{ij}^{*}\omega^{-il}=0\end{array}\right. \tag{6}\] for \(l=1,...,d-1\) with \(\omega=\exp{(i\frac{2\pi}{d})}\). Next, we map the elements of the set of matrices \(\mathcal{A}_{d}\) to the space of bipartite states. Let \(A=\{(a_{m},b_{m})\}_{m\in\mathbb{Z}}\) denote the set of ordered pairs such that \(a_{m}\neq a_{m^{\prime}}\) and \(b_{m}\neq b_{m^{\prime}}\) if and only if \(m\neq m^{\prime}\). Consider now the map \(\mathcal{P}(a,b):\mathbb{Z}_{d}\times\mathbb{Z}_{d}\rightarrow\mathbb{Z}_{d^{2}}\) given by \(\mathcal{P}(a,b)=a+d(b\mod d)\). We can always find a set \(S_{r}=\{A_{0},\ldots,A_{r-1}\}\) such that \(A_{i}\in\mathcal{A}_{d}\) and \(A_{i}\neq A_{j}\) if \(i\neq j\) for which \[\bigcup_{i=0}^{r-1}\mathcal{P}(A_{i})\equiv\mathbb{Z}_{d^{2}}. 
\tag{7}\] We construct families of bipartite states with maximally mixed marginals with the use of the map defined as follows. **Definition 1**.: _Let \(r,d\in\mathbb{N}\) such that \(r\geq d\) and let \(S_{r}=\{A_{0},\ldots,A_{r-1}\}\) such that equation (7) is satisfied. Then, we define \(\rho_{AB}(\alpha_{ij})\in\mathcal{B}(\mathcal{H}_{d}\otimes\mathcal{H}_{d})\) where \((\alpha_{ij})\in\mathcal{A}_{d}\) as the following state_ \[\rho_{AB}(\alpha_{ij})=\frac{1}{d}\sum_{n=0}^{r-1}\sum_{\begin{subarray}{c}(k,h)\in A_{n}\\ (i,l)\in A_{n}\end{subarray}}\left(\sum_{j=0}^{d-1}\alpha_{hj}\omega^{jk}\right)\left(\sum_{j=0}^{d-1}\alpha_{ij}^{*}\omega^{-jl}\right)\left|\mathcal{P}(k,k+h)\right\rangle\left\langle\mathcal{P}(l,l+i)\right|. \tag{8}\] Definition 1 establishes a general method to construct parameterised families of bipartite qudit states with maximally mixed marginals and rank \(r\). Here, we will consider the particular family of states for which \(r=d\). For this particular family, the bipartite states as given by (8) can be expressed as \[\rho_{AB}(\alpha_{ij})=\frac{1}{d}\sum_{i,k,l=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{ij}\omega^{jk}\right)\left(\sum_{j=0}^{d-1}\alpha_{ij}^{*}\omega^{-jl}\right)\left|k+i\right\rangle\left|k\right\rangle\left\langle l+i\right|\left\langle l\right|. \tag{9}\] To see that the states \(\rho_{AB}(\alpha_{ij})\) with \((\alpha_{ij})\in\mathcal{A}_{d}\) are indeed states with maximally mixed marginals, we evaluate their partial traces, which are given by \[\operatorname{Tr}_{A}\left(\rho_{AB}\right) =\frac{1}{d}\operatorname{Tr}_{A}\left(\sum_{i,k,l=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{ij}\omega^{jk}\right)\left(\sum_{j=0}^{d-1}\alpha_{ij}^{*}\omega^{-jl}\right)\left|k+i\right\rangle\left|k\right\rangle\left\langle l+i\right|\left\langle l\right|\right)\] \[=\frac{1}{d}\sum_{i,k=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{ij}\omega^{jk}\right)\left(\sum_{j=0}^{d-1}\alpha_{ij}^{*}\omega^{-jk}\right)\left|k\right\rangle\left\langle k\right|\] \[=\frac{1}{d}\sum_{k,l=0}^{d-1}\left(\sum_{i,j=0}^{d-1}\alpha_{ij+l}\alpha_{ij}^{*}\omega^{lk}\right)\left|k\right\rangle\left\langle k\right| \tag{10}\] By (6), all the elements of the last sum in (10) with \(l\neq 0\) cancel out and consequently we obtain that \[\operatorname{Tr}_{A}\left(\rho_{AB}\right)=\frac{1}{d}\sum_{i,k=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{ij}\alpha_{ij}^{*}\right)\left|k\right\rangle\left\langle k\right| \tag{11}\] and by (5) we obtain that \[\operatorname{Tr}_{A}\left(\rho_{AB}\right)=\frac{1}{d}\sum_{k=0}^{d-1}\left|k\right\rangle\left\langle k\right|. 
\tag{12}\] We may also evaluate the second partial trace, which is given by \[\mathrm{Tr}_{B}\left(\rho_{AB}\right) =\frac{1}{d}\,\mathrm{Tr}_{B}\left(\sum_{i,k,l=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{ij}\omega^{jk}\right)\left(\sum_{j=0}^{d-1}\alpha_{ij}^{*}\omega^{-jl}\right)\left|k+i\right\rangle\left|k\right\rangle\left\langle l+i\right|\left\langle l\right|\right)\] \[=\frac{1}{d}\sum_{i,k=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{ij}\omega^{jk}\right)\left(\sum_{j=0}^{d-1}\alpha_{ij}^{*}\omega^{-jk}\right)\left|k+i\right\rangle\left\langle k+i\right|\] \[=\frac{1}{d}\sum_{i,k=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{ij}\omega^{j(k-i)}\right)\left(\sum_{j=0}^{d-1}\alpha_{ij}^{*}\omega^{-j(k-i)}\right)\left|k\right\rangle\left\langle k\right|\] \[=\frac{1}{d}\sum_{k,l=0}^{d-1}\left(\sum_{i,j=0}^{d-1}\alpha_{ij+l}\alpha_{ij}^{*}\omega^{l(k-i)}\right)\left|k\right\rangle\left\langle k\right|\] \[=\frac{1}{d}\sum_{k,l=0}^{d-1}\left(\sum_{i,j=0}^{d-1}\alpha_{ij+l}\alpha_{ij}^{*}\omega^{-il}\right)\omega^{kl}\left|k\right\rangle\left\langle k\right|. \tag{13}\] By (6), we get that all the elements of the sum in (13) with \(l\neq 0\) cancel out and consequently we obtain that \[\mathrm{Tr}_{B}\left(\rho_{AB}\right)=\frac{1}{d}\sum_{i,k=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{ij}\alpha_{ij}^{*}\right)\left|k\right\rangle\left\langle k\right| \tag{14}\] and by (5), consequently \[\mathrm{Tr}_{B}\left(\rho_{AB}\right)=\frac{1}{d}\sum_{k=0}^{d-1}\left|k\right\rangle\left\langle k\right|. \tag{15}\] ## 4 Local unitary classification of \(\rho_{AB}\big{(}\alpha_{ij}\big{)}\) In this section, we consider the classification of the states \(\rho_{AB}(\alpha_{ij})\) as given by (9) in terms of the parameters \((\alpha_{ij})\in\mathcal{A}_{d}\). In particular, we consider their classification in terms of their entanglement properties. We identify sets of scalars which are invariant under local unitary (LU) operations acting on \(\rho_{AB}\) and consequently determine equivalence classes of states in terms of entanglement. ### Spectrum of \(\rho_{AB}\) The spectrum of \(\rho_{AB}\) is invariant under global unitary operations and consequently, it is also invariant under local unitary (LU) operations. To obtain the explicit set of eigenvalues for \(\rho_{AB}\), we apply a unitary conjugation which block-diagonalises the operator as \[\tau_{U}(\rho_{AB}) =U\rho_{AB}U^{\dagger}=\frac{1}{d}\sum_{i=0}^{d-1}\left(\sum_{j,k=0}^{d-1}\alpha_{ij}\omega^{jk}\left|i\right\rangle\left|k\right\rangle\right)\left(\sum_{j,k=0}^{d-1}\alpha_{ij}^{*}\omega^{-jk}\left\langle i\right|\left\langle k\right|\right). \tag{16}\] The matrix \(\tau_{U}(\rho_{AB})\) is a block diagonal matrix and its blocks \(P_{0},\ldots,P_{d-1}\) are given by \[P_{i}=\frac{1}{d}\left(\sum_{k=0}^{d-1}\sum_{j=0}^{d-1}\alpha_{ij}\omega^{jk}\left|k\right\rangle\right)\left(\sum_{k=0}^{d-1}\sum_{j=0}^{d-1}\alpha_{ij}^{*}\omega^{-jk}\left\langle k\right|\right). \tag{17}\] The matrices \(P_{0},\ldots,P_{d-1}\) are rank one and can be expressed as \(P_{i}=\left|p_{i}\right\rangle\left\langle p_{i}\right|\). 
The norm of each \(\left|p_{i}\right\rangle\) is an eigenvalue of \(\rho_{AB}\), which is given by \[\left\langle p_{i}|p_{i}\right\rangle =\frac{1}{d}\sum_{k=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{ij}\omega^{jk}\right)\left(\sum_{j=0}^{d-1}\alpha_{ij}^{*}\omega^{-jk}\right)\] \[=\frac{1}{d}\sum_{k,j,l=0}^{d-1}\alpha_{i,j+l}\alpha_{ij}^{*}\omega^{lk}\] \[=\frac{1}{d}\sum_{k,j=0}^{d-1}\left(\alpha_{ij}\alpha_{ij}^{*}+\sum_{l=1}^{d-1}\alpha_{i,j+l}\alpha_{ij}^{*}\omega^{lk}\right)\] \[=\sum_{j=0}^{d-1}|\alpha_{ij}|^{2}. \tag{18}\] Note that in the last equality, we used the properties of \((\alpha_{ij})\in\mathcal{A}_{d}\) as given by (5). The constants \[\kappa_{i}^{(1)}=\sum_{j=0}^{d-1}|\alpha_{ij}|^{2} \tag{19}\] determine local unitary equivalence classes for the states \(\rho_{AB}\). ### Spectrum of \(\rho_{AB}^{T_{B}}\) The spectrum of the partial transpose of the density matrix is invariant under local unitary operations acting on the original state [13]. The matrix \(\rho_{AB}^{T_{B}}\) is expressed as \[\rho_{AB}^{T_{B}}=\frac{1}{d}\sum_{i,k,l=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{ij}\omega^{jk}\right)\left(\sum_{j=0}^{d-1}\alpha_{ij}^{*}\omega^{-jl}\right)\left|k+i\right\rangle\left|l\right\rangle\left\langle l+i\right|\left\langle k\right|. \tag{20}\] We can always find a unitary conjugation \(\tau_{U}\) on the partially transposed matrix which block-diagonalises the matrix \(\rho_{AB}^{T_{B}}\) as \[\tau_{U}(\rho_{AB}^{T_{B}}) =U\rho_{AB}^{T_{B}}U^{-1}=\frac{1}{d}\sum_{i,k,l=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{ij}\omega^{jk}\right)\left(\sum_{j=0}^{d-1}\alpha_{ij}^{*}\omega^{-jl}\right)\left|k+l+i\right\rangle\left|l\right\rangle\left\langle l+k+i\right|\left\langle k\right| \tag{21}\] and the blocks \(Q_{0},\ldots,Q_{d-1}\) are given by \[Q_{i}=\frac{1}{d}\sum_{k,l=0}^{d-1}\left(\sum_{j=0}^{d-1}\alpha_{i-l-k,j}\omega^{jk}\right)\left(\sum_{j=0}^{d-1}\alpha_{i-l-k,j}^{*}\omega^{-jl}\right)\left|l\right\rangle\left\langle k\right|. \tag{22}\] The spectrum of \(\rho_{AB}^{T_{B}}\) is given by the union of the spectra of the blocks \(Q_{i}\), and we denote by \(\kappa_{0}^{(2)},\ldots,\kappa_{d^{2}-1}^{(2)}\) the set of local unitary invariant scalars given by \[\kappa_{i}^{(2)}\in\{\lambda(Q_{0})\cup\ldots\cup\lambda(Q_{d-1})\}, \tag{23}\] where \(\lambda(X)\) denotes the set of eigenvalues of the matrix \(X\). ### Singular values of the correlation matrix of \(\rho_{AB}\) Bipartite density operators \(\rho_{AB}\in\mathcal{B}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})\) can be expressed in terms of the Fano decomposition as \[\rho_{AB}=\frac{1}{d^{2}}\left(\mathbb{1}_{d}\otimes\mathbb{1}_{d}+\sum_{i}^{d^{2}-1}s_{i}\lambda_{i}\otimes\mathbb{1}_{d}+\sum_{i}^{d^{2}-1}t_{i}\mathbb{1}_{d}\otimes\lambda_{i}+\sum_{i,j}^{d^{2}-1}r_{ij}\lambda_{i}\otimes\lambda_{j}\right), \tag{24}\] where \(s_{i}=\operatorname{Tr}\left(\rho_{AB}\,\lambda_{i}\otimes\mathbb{1}_{d}\right)\), \(t_{i}=\operatorname{Tr}\left(\rho_{AB}\,\mathbb{1}_{d}\otimes\lambda_{i}\right)\), \(r_{ij}=\operatorname{Tr}\left(\rho_{AB}\,\lambda_{i}\otimes\lambda_{j}\right)\) and \(\{\lambda_{i}\}_{i\in\mathbb{Z}_{d^{2}-1}}\) is a basis of traceless matrices. 
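Before specialising the Fano decomposition, here is a numerical sketch (our own helper code) that instantiates the construction of Definition 1 for \(d=2\) and checks the invariants of sections 4.1-4.2. A diagonal \(\alpha\) satisfies conditions (5)-(6), since every shifted product \(\alpha_{i,j+l}\alpha_{ij}^{*}\) with \(l\neq 0\) vanishes:

```python
import numpy as np

def rho_from_alpha(alpha):
    """Build rho_AB(alpha) of eq (9); |x>|y> is indexed as x*d + y."""
    d = alpha.shape[0]
    w = np.exp(2j * np.pi / d)
    c = np.array([[sum(alpha[i, j] * w**(j * k) for j in range(d))
                   for k in range(d)] for i in range(d)])
    rho = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for k in range(d):
            for l in range(d):
                row = ((k + i) % d) * d + k      # |k+i>|k>
                col = ((l + i) % d) * d + l      # <l+i|<l|
                rho[row, col] += c[i, k] * np.conj(c[i, l]) / d
    return rho

def partial_trace(rho, d, keep):
    r = rho.reshape(d, d, d, d)                  # indices (A, B, A', B')
    return (np.trace(r, axis1=1, axis2=3) if keep == 0
            else np.trace(r, axis1=0, axis2=2))

# d = 2, alpha = diag(a, b): the resulting state is a mixture of Bell states
a, b = np.sqrt(0.7), np.sqrt(0.3)
rho = rho_from_alpha(np.diag([a, b]).astype(complex))
print(np.round(partial_trace(rho, 2, 0), 6))     # should be I/2 (eq (12))
print(np.round(np.linalg.eigvalsh(rho), 6))      # nonzero eigenvalues are
                                                 # kappa^(1) = {0.3, 0.7}

# kappa^(2): spectrum of the partial transpose on B
rT = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
print(np.round(np.linalg.eigvalsh(rT), 6))
```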
For states with maximally mixed marginals, we have \(\operatorname{Tr}_{A}(\rho_{AB})=\operatorname{Tr}_{B}(\rho_{AB})=\frac{\mathbb{1}_{d}}{d}\), so that \(s_{i}=0\) and \(t_{i}=0\) in (24), and \(\rho_{AB}\) admits the expression \[\rho_{AB}=\frac{1}{d^{2}}\left(\mathbb{1}_{d}\otimes\mathbb{1}_{d}+\sum_{i,j}^{d^{2}-1}r_{ij}\lambda_{i}\otimes\lambda_{j}\right). \tag{25}\] The matrix \(R=(r_{ij})_{i,j\in\mathbb{Z}_{d^{2}-1}}\) is called the correlation matrix of \(\rho_{AB}\) and it encodes non-local information about the state. The singular values of the correlation matrix of \(\rho_{AB}\) are invariant under local unitary operations and we will use them for the entanglement classification of \(\rho_{AB}(\alpha_{ij})\) as given by (9). To evaluate the correlation matrix \(R\), we select a particular basis \(\lambda_{1},\ldots,\lambda_{d^{2}-1}\) in which \(R\) is block-diagonal. We define this basis as follows. First, we define the diagonal elements of the basis as \[\lambda_{i}^{(0)}=\sum_{m=0}^{d-1}\omega^{im}\left|m\right\rangle\left\langle m\right|, \tag{26}\] where \(i=1,\ldots,d-1\) and \(\omega\) is a \(d\)th root of unity. Second, we define the off-diagonal elements of the basis by \[\lambda_{i}^{(k)}=\left|i+k\right\rangle\left\langle i\right|\quad\text{where}\quad i=0,\ldots,d-1\quad\text{and}\quad k=1,\ldots,d-1. \tag{27}\] For this particular choice of basis, we find that the correlation matrix \(R\) of \(\rho_{AB}\) can be expressed in block-diagonal form, where the blocks \(R_{0},\ldots,R_{d-1}\) are given by \[R_{0}=(r_{i,j}^{(0)})_{i,j=1,\ldots,d-1}, \tag{28}\] where \[r_{i,j}^{(0)} =\left\langle\rho_{AB},\lambda_{i}^{(0)}\otimes\lambda_{j}^{(0)}\right\rangle\] \[=\left\langle\rho_{AB},\sum_{m=0}^{d-1}\omega^{mi}\left|m\right\rangle\left\langle m\right|\otimes\sum_{p=0}^{d-1}\omega^{pj}\left|p\right\rangle\left\langle p\right|\right\rangle\] \[=\sum_{m,p=0}^{d-1}\omega^{mi+pj}\left\langle\rho_{AB},\left|m\right\rangle\left\langle m\right|\otimes\left|p\right\rangle\left\langle p\right|\right\rangle\] \[=\frac{1}{d}\sum_{m,p=0}^{d-1}\omega^{mi+pj}\left(\sum_{s=0}^{d-1}\alpha_{m-p,s}\omega^{sp}\right)\left(\sum_{s=0}^{d-1}\alpha_{m-p,s}^{*}\omega^{-sp}\right), \tag{29}\] and \[R_{k}=(r_{i,j}^{(k)})_{i,j=0,\ldots,d-1}, \tag{30}\] where \[r_{i,j}^{(k)} =\left\langle\rho_{AB},\lambda_{i}^{(k)}\otimes\lambda_{j}^{(k)}\right\rangle\] \[=\left\langle\rho_{AB},\left|i+k\right\rangle\left\langle i\right|\otimes\left|j+k\right\rangle\left\langle j\right|\right\rangle\] \[=\frac{1}{d}\left(\sum_{s=0}^{d-1}\alpha_{i-j,s}\omega^{sj}\right)\left(\sum_{s=0}^{d-1}\alpha_{i-j,s}^{*}\omega^{-s(j+k)}\right). \tag{31}\] We denote by \(\kappa_{1}^{(3)},\ldots,\kappa_{d^{2}-1}^{(3)}\) the singular values of the correlation matrix \(R\) of \(\rho_{AB}(\alpha_{ij})\), which are given by \[\kappa_{i}^{(3)}\in\{\sigma(R_{0})\cup\ldots\cup\sigma(R_{d-1})\}, \tag{32}\] where \(\sigma(X)\) denotes the set of singular values of the matrix \(X\). We summarize all the local unitary invariant scalars of \(\rho_{AB}\) in Table 1. The values of \(\kappa^{(i)}\) are related to well-known quantum state measures. For example, the scalars \(\kappa_{i}^{(1)}\), derived from the spectrum of the density matrix, determine the purity of \(\rho_{AB}\). The purity of a density matrix is defined as \(\mathcal{P}(\rho_{AB})=\mathrm{Tr}(\rho_{AB}^{2})\).
Equivalently, we can express the purity of \(\rho_{AB}\) in terms of \(\kappa_{i}^{(1)}\) by \[\mathcal{P}(\rho_{AB})=\sum_{i=1}^{d}\left(\kappa_{i}^{(1)}\right)^{2}. \tag{33}\] Similarly, the sets \(\kappa_{i}^{(2)}\) and \(\kappa_{i}^{(3)}\) are related to other entanglement measures for \(\rho_{AB}\). In particular, the set \(\kappa_{i}^{(2)}\) relates to the negativity of the quantum state, which is defined in terms of the partially transposed state \(\rho^{T_{B}}\) as \[\mathcal{N}(\rho_{AB})=\frac{||\rho^{T_{B}}||_{1}-1}{2}, \tag{34}\] where \(||X||_{1}\) denotes the trace norm of the operator \(X\). Equally, we can express the negativity in terms of the eigenvalues of \(\rho_{AB}^{T_{B}}\) and, since these eigenvalues sum to one, we can express the negativity of \(\rho_{AB}\) in terms of \(\kappa_{i}^{(2)}\) as \[\mathcal{N}(\rho_{AB})=\sum_{i=1}^{d^{2}}\frac{|\kappa_{i}^{(2)}|-\kappa_{i}^{(2)}}{2}. \tag{35}\] Finally, the set \(\kappa_{i}^{(3)}\) relates to the quantum discord of the state [16]. \begin{table} \begin{tabular}{|c|c|c|} \hline Scalar & Definition & Cardinality \\ \hline \(\kappa_{i}^{(1)}\) & \(\sum_{j=0}^{d-1}|\alpha_{ij}|^{2}\) & \(i\in\mathbb{Z}_{d}\) \\ \(\kappa_{i}^{(2)}\) & \(\lambda(Q_{0})\cup\ldots\cup\lambda(Q_{d-1})\) & \(i\in\mathbb{Z}_{d^{2}}\) \\ \(\kappa_{i}^{(3)}\) & \(\sigma(R_{0})\cup\ldots\cup\sigma(R_{d-1})\) & \(i\in\mathbb{Z}_{d^{2}-1}\) \\ \hline \end{tabular} \end{table} Table 1: Summary of the sets of scalars which are invariant under local unitary operations acting on \(\rho_{AB}\) and which we use for entanglement classification. ## 5 Qutrit case In Section 4, we obtained sets of local unitary invariant scalars for the family of bipartite states in (8). Now, we consider the particular case of qutrit states. For \(d=3\), the local unitary invariant scalars of \(\rho_{AB}\) are given by \[\kappa_{i}^{(1)}=\sum_{j=0}^{2}\alpha_{ij}\alpha_{ij}^{*}, \tag{36}\] for \(i=0,1,2\). We can also obtain \(\kappa_{1}^{(2)},\ldots,\kappa_{9}^{(2)}\), which are given by the eigenvalues of the matrices \[Q_{0} =\begin{pmatrix}c_{0,0}c_{0,0}^{*}&c_{2,0}c_{2,1}^{*}&c_{1,0}c_{1,2}^{*}\\ c_{2,1}c_{2,0}^{*}&c_{1,1}c_{1,1}^{*}&c_{0,1}c_{0,2}^{*}\\ c_{1,2}c_{1,0}^{*}&c_{0,2}c_{0,1}^{*}&c_{2,2}c_{2,2}^{*}\end{pmatrix},\] \[Q_{1} =\begin{pmatrix}c_{1,0}c_{1,2}^{*}&c_{0,1}c_{0,0}^{*}&c_{2,2}c_{2,1}^{*}\\ c_{0,0}c_{0,2}^{*}&c_{2,1}c_{2,0}^{*}&c_{1,2}c_{1,1}^{*}\\ c_{2,0}c_{2,2}^{*}&c_{1,1}c_{1,0}^{*}&c_{0,2}c_{0,1}^{*}\end{pmatrix},\] \[Q_{2} =\begin{pmatrix}c_{2,0}c_{2,2}^{*}&c_{1,1}c_{1,0}^{*}&c_{0,2}c_{0,1}^{*}\\ c_{1,0}c_{1,2}^{*}&c_{0,1}c_{0,0}^{*}&c_{2,2}c_{2,1}^{*}\\ c_{0,0}c_{0,2}^{*}&c_{2,1}c_{2,0}^{*}&c_{1,2}c_{1,1}^{*}\end{pmatrix}; \tag{37}\] where \(c_{i,k}=\frac{1}{\sqrt{d}}\left(\sum_{j=0}^{d-1}\alpha_{ij}\omega^{jk}\right)\). For qutrits, the LU invariant scalars \(\kappa_{1}^{(3)},\ldots,\kappa_{8}^{(3)}\) are given by the singular values of the matrices \[R_{0} =\sum_{m,p=0}^{2}c_{m-p,p}c_{m-p,p}^{*}\begin{pmatrix}\omega^{m+p}&\omega^{2m+p}\\ \omega^{m+2p}&\omega^{2m+2p}\end{pmatrix},\] \[R_{1} =\begin{pmatrix}c_{0,0}c_{0,1}^{*}&c_{2,1}c_{2,2}^{*}&c_{1,2}c_{1,0}^{*}\\ c_{1,0}c_{1,1}^{*}&c_{0,1}c_{0,2}^{*}&c_{2,2}c_{2,0}^{*}\\ c_{2,0}c_{2,1}^{*}&c_{1,1}c_{1,2}^{*}&c_{0,2}c_{0,0}^{*}\end{pmatrix},\] \[R_{2} =\begin{pmatrix}c_{0,0}c_{0,2}^{*}&c_{2,1}c_{2,0}^{*}&c_{1,2}c_{1,1}^{*}\\ c_{1,0}c_{1,2}^{*}&c_{0,1}c_{0,0}^{*}&c_{2,2}c_{2,1}^{*}\\ c_{2,0}c_{2,2}^{*}&c_{1,1}c_{1,0}^{*}&c_{0,2}c_{0,1}^{*}\end{pmatrix}.
\tag{38}\] We note that the evaluation of \(\kappa_{i}^{(1)}\), \(\kappa_{i}^{(2)}\), \(\kappa_{i}^{(3)}\) depends on the specific choice of coefficients \(\{\alpha_{ij}\}_{i,j\in\mathbb{Z}_{3}}\), and we present an example of a bi-parametric family of bipartite states for which we obtain a complete LU classification. ### Example: A 2-parameter family of bipartite states The set of complex matrices given by \[(\alpha_{ij})_{i,j\in\mathbb{Z}_{3}}=\frac{\sqrt{2}}{6}\begin{pmatrix}2&e^{i\theta}&e^{i(\theta+\phi)}\\ 2&e^{i\theta-\frac{2\pi}{3}}&e^{-i(\theta+\phi-\frac{4\pi}{3})}\\ 2&e^{i\theta-\frac{4\pi}{3}}&e^{-i(\theta+\phi-\frac{2\pi}{3})}\end{pmatrix} \tag{39}\] determines a 2-dimensional family of bipartite qutrit states with maximally mixed marginals. We can check that the matrices given in (39) satisfy \((\alpha_{ij})_{i,j\in\mathbb{Z}_{3}}\in\mathcal{A}_{3}\) and, consequently, this set of matrices can be mapped to a family of bipartite states with maximally mixed marginals. For this family of states, we can evaluate the sets of local unitary invariant scalars \(\kappa^{(1)},\kappa^{(2)},\kappa^{(3)}\) derived in the previous section. For this particular family, the first set of LU invariant scalars, corresponding to the spectrum of the density matrix, is given by \[\kappa_{1}^{(1)}=\kappa_{2}^{(1)}=\kappa_{3}^{(1)}=\frac{1}{3}. \tag{40}\] By equation (33), we evaluate the purity of the family of states given by (39) as \[\mathcal{P}(\rho_{AB})=\frac{1}{3}. \tag{41}\] For this particular family, the set \(\kappa_{i}^{(2)}\) consists of nine scalars given by the spectrum of the partially transposed matrix. We have that \[\kappa_{1+i}^{(2)} =\frac{1}{18}\left(4+2\cos(\phi+\frac{2i\pi}{3})\right)\] \[\kappa_{4+i}^{(2)} =\frac{1}{18}\left(1+4\cos(\theta+\frac{2i\pi}{3})\right)\] \[\kappa_{7+i}^{(2)} =\frac{1}{18}\left(1+4\cos(\theta+\phi+\frac{4i\pi}{3})\right) \tag{42}\] for \(i=0,1,2\). By equation (35), we can evaluate the negativity of the family of states represented by (39). Figure 1 represents the negativity of this family of states as a function of the parameters \(\theta\) and \(\phi\). We observe that the negativity is upper bounded by \(\frac{1}{3}\).

Figure 1: Color plot of \(\mathcal{N}(\rho_{AB})\) in terms of the two parameters spanning the family of states represented by (39).

Finally, we can evaluate the set of LU-invariant scalars \(\kappa_{i}^{(3)}\) corresponding to the singular values of the correlation matrix. For the particular family of states represented by (39), we obtain the following list of scalars \[\kappa_{1+i}^{(3)} =\frac{1}{6}\] \[\kappa_{3+i}^{(3)} =\frac{1}{18}\sqrt{9-4\cos{(\theta-\phi)}-8\cos{(-2\theta-\phi)}-4\cos{(\theta+2\phi)}}\] \[\kappa_{5+i}^{(3)} =\frac{1}{18}\sqrt{9-4\cos{(\theta-\phi+\frac{\pi}{3})}-8\cos{(-2\theta-\phi+\frac{\pi}{3})}-4\cos{(\theta+2\phi+\frac{\pi}{3})}}\] \[\kappa_{7+i}^{(3)} =\frac{1}{18}\sqrt{9-4\cos{(\theta-\phi-\frac{\pi}{3})}-8\cos{(-2\theta-\phi-\frac{\pi}{3})}-4\cos{(\theta+2\phi-\frac{\pi}{3})}}, \tag{43}\] for \(i=0,1\). For this particular family of states, the entanglement classification provided by \(\kappa_{i}^{(1)}\), \(\kappa_{i}^{(2)}\) and \(\kappa_{i}^{(3)}\) is complete. ## 6 Conclusion In this work, we considered the entanglement classification of bipartite states with maximally mixed marginals. First, we constructed a parameterized family of bipartite states which generalises the anti-symmetric Werner states.
Second, we obtained a set of scalars which remain constant under local unitary operations and, consequently, can be used for entanglement classification. In particular, we used the eigenvalues of the density matrix and of the partially transposed matrix, as well as the singular values of the correlation matrix, to classify the elements of our family of bipartite states. Finally, we considered the qutrit scenario, which is the smallest set-up for which a complete entanglement classification is missing. For bipartite qutrit states, we evaluated the sets of local unitary invariant scalars and related them to known measures of entanglement. As an example, we showed that this set achieves a complete classification of a bi-parametric family of qutrit states. We believe that this work serves as an intermediate step towards better quantum state classification.
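As a closing numerical cross-check of the qutrit example, the following sketch compares the negativity obtained from the closed-form eigenvalues (42) via (35) against a direct partial-transpose computation. This is a sketch under stated assumptions: it assumes NumPy, the state construction follows the expansion in (12), the exponents in (39) are read as \(e^{i(\theta-2\pi/3)}\) and so on, and the sample point \((\theta,\phi)\) is arbitrary; if (42) is transcribed correctly, the two values should agree.

```
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)

def rho_from_alpha(alpha):
    # rho_AB = (1/d) sum_i |psi_i><psi_i|,
    # |psi_i> = sum_k (sum_j alpha[i,j] w^{jk}) |k+i>|k>  (mod d)
    rho = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        psi = np.zeros(d * d, dtype=complex)
        for k in range(d):
            c = sum(alpha[i, j] * w ** (j * k) for j in range(d))
            psi[((k + i) % d) * d + k] = c
        rho += np.outer(psi, psi.conj())
    return rho / d

def alpha_family(theta, phi):
    # The 2-parameter family of eq. (39)
    e = np.exp
    return (np.sqrt(2) / 6) * np.array([
        [2, e(1j * theta),                   e(1j * (theta + phi))],
        [2, e(1j * (theta - 2 * np.pi / 3)), e(-1j * (theta + phi - 4 * np.pi / 3))],
        [2, e(1j * (theta - 4 * np.pi / 3)), e(-1j * (theta + phi - 2 * np.pi / 3))],
    ])

theta, phi = 0.7, 1.9  # arbitrary sample point
rho = rho_from_alpha(alpha_family(theta, phi))
rho_tb = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
ev = np.linalg.eigvalsh(rho_tb)
neg_direct = (np.abs(ev) - ev).sum() / 2          # eq. (35) on the true spectrum

k2 = np.concatenate([                             # closed forms of eq. (42)
    [(4 + 2 * np.cos(phi + 2 * i * np.pi / 3)) / 18 for i in range(3)],
    [(1 + 4 * np.cos(theta + 2 * i * np.pi / 3)) / 18 for i in range(3)],
    [(1 + 4 * np.cos(theta + phi + 4 * i * np.pi / 3)) / 18 for i in range(3)],
])
neg_closed = ((np.abs(k2) - k2) / 2).sum()
print(np.isclose(neg_direct, neg_closed))
```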
2305.09351
Flexible remote attestation of pre-SNP SEV VMs using SGX enclaves
We propose a protocol that explores a synergy between two TEE implementations: it brings SGX-like remote attestation to SEV VMs. We use the notion of a \emph{trusted guest owner}, implemented as an SGX enclave, to deploy, attest, and provision a SEV VM. This machine can, in turn, rely on the trusted owner to generate SGX-like attestation proofs on its behalf. Our protocol combines the application portability of SEV with the flexible remote attestation of SGX. We formalise our protocol and prove that it achieves the intended guarantees using the Tamarin prover. Moreover, we develop an implementation for our trusted guest owner together with example SEV machines, and put those together to demonstrate how our protocol can be used in practice; we use this implementation to evaluate our protocol in the context of creating \emph{accountable machine-learning models}. We also discuss how our protocol can be extended to provide a simple remote attestation mechanism for a heterogeneous infrastructure of trusted components.
Pedro Antonino, Ante Derek, Wojciech Aleksander Wołoszyn
2023-05-16T11:15:53Z
http://arxiv.org/abs/2305.09351v1
# Flexible remote attestation of pre-SNP SEV VMs using SGX enclaves ###### Abstract We propose a protocol that explores a synergy between two TEE implementations: it brings SGX-like remote attestation to SEV VMs. We use the notion of a _trusted guest owner_, implemented as an SGX enclave, to deploy, attest, and provision a SEV VM. This machine can, in turn, rely on the trusted owner to generate SGX-like attestation proofs on its behalf. Our protocol combines the application portability of SEV with the flexible remote attestation of SGX. We formalise our protocol and prove that it achieves the intended guarantees using the Tamarin prover. Moreover, we develop an implementation for our trusted guest owner together with example SEV machines, and put those together to demonstrate how our protocol can be used in practice; we use this implementation to evaluate our protocol in the context of creating _accountable machine-learning models_. We also discuss how our protocol can be extended to provide a simple remote attestation mechanism for a heterogeneous infrastructure of trusted components. _Keywords--_ remote attestation, trusted execution environments, SGX, SEV, security ## 1 Introduction Primitives to implement a Trusted Execution Environment (TEE) [33] are becoming a common feature of modern processors. Such an environment typically allows a program to execute confidentially, whereby not even the operator can tell what instructions and data are being used; we refer generically to such a protected execution as an _isolated computation_. Intel's Software Guard Extensions (SGX) [16, 24], AMD's Secure Encrypted Virtualization (SEV) [4, 28], and ARM's TrustZone [42] are examples of available TEE implementations. They are designed to address different application scenarios, but they all share similar core capabilities. Intel's SGX and AMD's SEV provide competing TEE architectures that isolate computations at different levels of granularity. While SGX was designed to isolate (part of) an operating system process (an _enclave_ in SGX terminology), SEV isolates an entire virtual machine (VM). Given these design choices, SGX does not offer the same level of application portability that SEV does. An application has to be redesigned to be made SGX-aware, whereas SEV allows it to be seamlessly executed within a confidential machine. This portability comes at the price of having a typically larger _trusted computing base_. While SGX allows developers to finely tune which functions and data are part of the enclave, a SEV VM would usually contain an entire operating system (OS) together with the relevant applications to be executed. The larger the trusted computing base, the more prone to bugs and vulnerabilities it is. _Remote attestation_ is the process that establishes trust in an isolated computation. It consists of a protocol that produces evidence that a given computation has been properly isolated and, typically, provides a way to establish a secure channel with the isolated computation. While SGX provides a very flexible mechanism to attest enclaves, SEV (pre-SNP\({}^{1}\)) relies on a very restrictive scheme for that. While SGX's attestation is _undirected_, namely, any third party can establish trust in a given enclave, SEV proposes a mechanism by which only a designated party, called the _guest owner_, can meaningfully attest (and provision) its SEV VM.
Footnote 1: We call _SEV pre-SNP_ the SEV implementations predating SEV SNP (Secure Nested Paging) [50], i.e., the original SEV implementation [28] and SEV-ES (Encrypted State) [27]. We propose, formalise, verify, implement and evaluate a new protocol that provides _SGX-like remote attestation to a SEV VM_. Broadly speaking, it relies on a special enclave that we design, the _trusted guest owner_, that is responsible for deploying, attesting, and provisioning the SEV VM it owns. Moreover, while operating, this VM can request the generation of attestation reports, on its behalf, from the trusted guest owner -- in a similar way to how an enclave can create an attestation report in the SGX architecture. Our innovative combination of TEE implementations brings together the best of both worlds, namely, the application portability of SEV and the flexible attestation of SGX. However, our protocol requires two separate platforms: a SGX-capable machine to run the trusted guest owner and a SEV-capable one for the confidential VM. Therefore, the flexibility comes at the price of a larger trusted computing base. A composition of systems does not necessarily yield a scheme that inherits the security properties of the components -- for instance, composing secure protocols does not automatically yield a secure scheme. Finding a protocol design that ensures the desired attestation properties was therefore challenging, and that is also why we formally analyse our protocol. We use the Tamarin prover [34] to model our protocol and to verify that it indeed achieves the desired goal of authenticity and integrity of attestation proofs. Additionally, we verify security properties of SGX and SEV attestation as used in our protocol -- the authenticity of the SGX attestation proofs and secrecy of SEV provisioned secrets, respectively. All results hold in a general setting with an unbounded number of participants and sessions, assuming a Dolev-Yao attacker [18] and a fine-grained threat model that, for example, allows the attacker to run enclaves of its choice alongside the trusted guest owner and compromise some TEE platforms. To demonstrate the protocol, we implement the protocol participants -- namely the trusted guest owner, the SEV guest VM attestation library and several sample SEV guest VMs. Furthermore, we evaluate our protocol by harnessing it to implement a notion of accountability for machine learning models -- i.e. creating a cryptographic report that ties a model to the technique and data used to generate it. Our evaluation demonstrates that our protocol incurs a negligible overhead while delivering on its security promises. Some recent TEE implementations such as SEV SNP (Secure Nested Paging) [50] and Intel's TDX (Trust Domain eXtensions) [25] were designed to provide a combination of remote attestation flexibility and application portability that is similar to what our protocol achieves with the proposed pairing of SGX and SEV. However, these technologies are still not widely available and the underlying attestation mechanisms and primitives have not yet been fully scrutinized by the research community. Since Q1 2023, only a limited number of Intel CPU models supporting TDX has been available on the market [15]. However, at the time of writing (May 2023) the general availability of TDX remains planned for future Intel Xeon family releases and no major cloud provider offers TDX-capable CPUs.
Hardware support for SEV SNP was launched two years ago (Q2 2021), but software support is somewhat lagging, and SNP patches were still being merged into the Linux kernel in Q3 2022. While some cloud providers do offer SEV SNP-enabled hardware, we found that no major provider exposes the flexible attestation interface to the end user. Microsoft Azure, for example, only allows their pre-approved VMs to be launched as SEV SNP guests, and exposes attestation only through Azure-issued JWT (JSON Web Token) tokens [35]. Our protocol, on the other hand, is based upon TEE implementations that are reasonably mature and have been available for quite a few years. Even when these new technologies catch up, our protocol will still be relevant for platforms, legacy or not, that do not support SEV SNP or TDX but support SEV pre-SNP. Our protocol sheds light on a new line of research, that is, finding synergies between TEE implementations. In our case, we create a protocol that brings together a pairing of a SGX enclave and a SEV VM in a way that offers better features than both elements individually. Moreover, it can be extended to handle a related problem, namely, how to attest a heterogeneous infrastructure of trusted components. Our protocol can be seen as a degenerate case of this problem where the trusted guest owner deploys a simple trusted infrastructure consisting of a single SEV VM. However, our ideas could be carried over to the context of a generic _trusted deployer_ that could deploy, attest and provision a complex composition of trusted components. We sum up our contributions in the following: * We propose a protocol that brings SGX-like remote attestation to SEV VMs, creating a synergy that combines the application portability of SEV with the flexible remote attestation of SGX. * We formalise our protocol and verify it achieves the desired guarantees/goals using the Tamarin prover. * We created implementations for our trusted owner and several protocol-compatible SEV VMs.\({}^{2}\) Footnote 2: We make the protocol implementation, the sample systems used for evaluation, as well as the formal model and proofs publicly available [2] under a permissive open source license. * We carried out an evaluation that demonstrates how our protocol can be used to implement a notion of accountability for machine learning models. It also shows that it delivers its guarantees with negligible overhead. * The proposal of our protocol sheds light on a new line of research consisting of exploring synergies between different TEE implementations. * We discuss how our protocol can be extended to provide a simple way to remotely attest an infrastructure involving heterogeneous trusted components. Outline. In Section 2, we introduce relevant background. Section 3 introduces our protocol, together with minimalist and abstract versions of the SEV and SGX attestation protocols, presents the formalisation of our protocol, discusses the properties that we were able to verify using Tamarin, and demonstrates an application of our protocol together with an evaluation of how it fares in practice. Section 4 discusses some of the works related to ours, whereas in Section 5, we present our concluding remarks. ## 2 Background In this section, we introduce the background elements that are necessary for understanding the rest of our paper.
### SGX Intel's SGX (Software Guard eXtensions) [16] allows an untrusted host process to create a protected virtual-memory range where integrity-protected and confidential code and data are hosted; this protected area is called an _enclave_. SGX extends Intel's traditional instruction set with privileged instructions to create, initialise, and dispose of this protected memory range, and with non-privileged instructions to execute enclave code [22]. A number of hardware and software components take part in enforcing the integrity and confidentiality of an enclave's execution and in attesting these properties. These elements together with the enclave code itself form the _trusted computing base_ (TCB) of that enclave, which is depicted in Figure 1; green elements are trusted, the others are not. At the lowest level, we have the trusted SGX hardware, comprising the CPU package and Memory Encryption Engine [19], and low-level code; they ensure the integrity, confidentiality and freshness of the enclave's protected memory area. Privileged code is _untrusted_: privileged instructions cannot be executed in enclave mode. Hence, an enclave has to delegate to untrusted code, in the form of the OS/hypervisor, the execution of system calls, for instance. An enclave does not automatically trust other enclaves; they are isolated from one another. There are, however, some special _architectural enclaves_ which are trusted. They play a fundamental part in the _attestation process_, namely, in the protocol by which an enclave provides to a counterpart evidence that it is indeed a valid isolated computation executing on an authentic platform. This process attests, in fact, the entire TCB: it provides the _digest_ (or _measurement_) of the code loaded into the enclave, and information about the version of the architectural enclaves used and the SGX hardware and low-level code. We elaborate on this process/protocol later. Applications in user-space are also not trusted by the enclave. We refer generically to the untrusted components around an enclave in a SGX platform as the _SGX host_.

Figure 1: SGX enclave trusted computing base in green.

### SEV AMD's SEV (Secure Encrypted Virtualization) [50, 28] proposes an architecture to support _confidential_ virtual machines (VMs), which we refer to as _SEV (guest) VMs_. This TEE implementation was designed so that even if the host (hypervisor included) is untrusted, it is unable to peek into the execution of a SEV guest VM. As with SGX, AMD's instruction set was extended with directives to manage SEV VMs [4]. The TCB of a SEV guest machine is illustrated in Figure 2. It consists of its own code plus SEV hardware and firmware, especially in the form of the Secure Processor - also known as Platform Security Processor, or PSP. Note that other SEV VMs are not trusted; they are isolated from one another. Other non-SEV VMs are untrusted as well. Similarly to what we do for enclaves, we refer, generically, to the untrusted elements surrounding a SEV VM in a platform as the _SEV host_. The SEV architecture has evolved from (original) SEV [28], to SEV-ES (Encrypted State) [27], and recently to SEV-SNP (Secure Nested Paging) [50]. SEV-ES brings extra confidentiality guarantees when a switch from a trusted to an untrusted execution takes place, namely, the contents of the registers storing the state of the confidential VM are protected/encrypted before the switch occurs.
SEV-SNP brings integrity guarantees that are not offered by the former two SEV versions. It also brings a form of remote attestation that is more flexible than that of SEV and SEV-ES. We discuss an abstract version of the pre-SNP attestation protocol later. The difference in the level of granularity of the isolated computations between SGX and SEV has relevant practical consequences. In SGX, a simple (part of a) process is isolated, as opposed to an entire VM in SEV. Therefore, the TCB for a SEV isolated computation tends to be much larger than that of a SGX computation, making it potentially more vulnerable to bugs and design flaws. However, the fact that an entire OS (and its privileged instructions) is part of the trusted world makes this architecture more attractive in terms of application portability. An application that was not designed specifically to target a SEV VM can seamlessly (i.e. without modification) execute inside one. The same cannot be said of SGX: typically, applications have to be significantly redesigned to fit their enclave model.

Figure 2: SEV VM trusted computing base in green.

### Tamarin prover The Tamarin prover [34] is a tool for modeling security protocols and reasoning about their properties in the symbolic model of cryptography. Protocols are specified using _multiset rewriting rules_, while the security properties are specified either as guarded first-order logic formulas over execution traces or as observational equivalences. Proofs can be carried out manually using the interactive mode or in an automated fashion, where the procedure can be further tuned by supplying a _proof oracle_ that prioritises available proof steps. The Tamarin prover has been successfully used to analyse, discover vulnerabilities and provide machine-verifiable proofs of various security properties for real-world protocols such as TLS v1.3 [17], smartcard payment protocols [7], 5G authentication protocols [6], and many others. In the area of trusted hardware, the tool has been used for analysis of a Direct Anonymous Attestation protocol based on the Trusted Platform Module (TPM) technology [58, 59]. ## 3 Flexible SEV pre-SNP remote attestation using SGX In this section, we introduce a protocol that combines the SGX and SEV attestation protocols in a way that enables the flexible attestation of SEV machines. We begin by describing abstract versions of the SGX and SEV attestation protocols, which we later combine to create our flexible SEV attestation protocol. We formalise these concepts using Tamarin and use this prover to verify that our protocol gives the desired security guarantees. Moreover, we present a concrete implementation (and execution) of our protocol, and close this section with a discussion on some interesting extensions to our protocol and its limitations. In this paper, we assume that side-channel attacks are possible and that the attacker can corrupt and extract secrets from arbitrary SGX/SEV platforms, enclaves, and VMs, except for the _specific_ platforms, enclaves, and VMs used in the protocol sessions. We claim (and formally verify) that the proposed protocol provides a level of robustness to those attacks. ### Remote attestation for SGX enclaves Intel has proposed two mechanisms to perform the remote attestation of an enclave: Enhanced Privacy ID (EPID) [26, 29] and Data Center Attestation Primitives (DCAP) [48].
We present a minimalist protocol for remote attestation inspired by DCAP but that abstracts away its complexity and details, focusing on its broad trust guarantees and functionality. It should be straightforward to adapt our protocol to work with the fully-fledged DCAP or EPID. Our SGX attestation protocol involves four parties: the attested enclave \(E\), the quoting enclave of the attested platform _QE_, Intel's Root of Trust service _Intel RoT_, and a relying party _RP_. Broadly speaking, _QE_ is a trusted architectural enclave that runs on the same platform as \(E\) and is certified by _Intel RoT_, and it creates proofs to attest \(E\) to _RP_. Note that our italicised notation here denotes the _name_ of the participants in our protocol. So, _QE_ is not an abbreviation for quoting enclave in general but an _identifier_ denoting the attested quoting enclave that participates in our protocol. We adopt this notation consistently for the participants involved in the protocols that we describe in this paper. #### Protocol goal. The protocol produces an attestation proof for \(E\) consisting of a _quote_ in SGX terminology and a SGX platform certificate. It authenticates _E_'s TCB. The platform certificate also contains the Platform Provisioning ID (PPID) uniquely identifying the platform instance. The quote also contains a piece of data \(D\) that is provided by \(E\). Any relying party can, then, cryptographically validate this proof and be convinced that this quote was generated on a platform identified by PPID using the given TCB and that \(E\) provided \(D\) when the protocol was executed. #### Threat model and trust assumptions. We assume that the platform in which \(E\) is deployed has not been compromised but the attacker controls the SGX host, i.e. untrusted platform elements, and the network. So, it can arbitrarily influence communications and computations executed by these elements, and create other enclave instances. The attacker has access to compromised SGX platforms to which it can deploy enclaves. A compromised platform would allow the attacker to have access to the cryptographic keys managed by the quoting enclave and, hence, to construct arbitrary quotes that validate as correct quotes from that particular platform. The enclave itself is known, and the attacker can deploy it at will on any platform of its choice. However, the entire attested TCB, including \(E\) and _QE_, and _Intel RoT_ are trusted. Hence, the attacker can only interact with them in the ways prescribed by their implementation. We assume that the attacker cannot perform fork attacks or rollback attacks on our enclave. This is a reasonable assumption since the enclave's state will be entirely in-memory with no persisted data. #### Cryptographic schemes. Our protocol relies on the following cryptographic schemes: * _Intel RoT_ uses an asymmetric signature scheme with key-pair generation function \(agen_{IR}()\), signing function \(asign_{IR}(m,k)\), and verification function \(averi_{IR}(m,s,k_{pb})\), where \(m\) is a message, \(s\) is a signature, \(k\) is a private key, and \(k_{pb}\) a public one.
We use the same notation with a similar meaning when defining other asymmetric signature schemes; * _Intel RoT_'s long-term key pair \((IntelLtk_{pb},IntelLtk)\), public and private elements, respectively, is generated using \(agen_{IR}()\) and used by it to issue SGX platform certificates; * _QE_ uses the asymmetric signature scheme with functions \(agen_{QE}()\), \(asign_{QE}(m,k)\) and \(averi_{QE}(m,s,k_{pb})\); * _QE_'s key pair \((Qek_{pb},Qek)\), public and private elements, respectively, is generated using \(agen_{QE}()\) and used by the quoting enclave to issue attestation quotes. We assume throughout the paper that all cryptographic payloads are tagged with labels describing the payload structure and intent of the message. For example, the payload in the certificate \(C_{QE}\) below is \(\langle\)'sgx_platform_certificate', \(Qek_{pb},ppid\rangle\). However, we leave the type tags out of the protocol description to simplify notation. Of course, we include the tags in the formal model and in the protocol implementation. ### Protocol. We split the attestation protocol into the setup, quote generation, and quote verification phases. The protocol is depicted in Figure 3. The platform setup phase establishes and ensures the existence of a chain of trust that extends from Intel's root of trust to the attestation proof. During the setup phase, the SGX platform interacts with _Intel RoT_. Using a secret shared in the manufacturing process, the platform can attest itself and the quoting enclave to the root of trust. Once this attestation is successfully carried out, the root of trust certifies the quoting enclave, that is, the root of trust produces a certificate \(C_{QE}=(Qek_{pb},ppid,asign_{IR}(\langle Qek_{pb},ppid\rangle,IntelLtk))\). We assume that this phase happens successfully as the SGX platform is being set up, so that \(C_{QE}\) is made publicly available. The quote generation phase, if successfully executed, produces a quote, which is a tuple \((msr,plat,data,sig)\) where \(msr\) is the measurement of the enclave being attested, \(plat\) is a data structure containing information about the SGX platform, \(data\) is a vector of "free" data generated by the enclave being attested, and \(sig\equiv asign_{QE}(\langle msr,plat,data\rangle,Qek)\) is the signature of the quoting enclave on these other quote elements - the notation \(\langle e_{1},\dots,e_{n}\rangle\) denotes the ordered concatenation of elements \(e_{1}\) through \(e_{n}\). It is a statement that an enclave with measurement \(msr\) was running on an authentic SGX platform with characteristics given by \(plat\) and it provided data \(data\) when taking part in the attestation protocol. The quote is only produced if \(E\) provides a _local attestation report_. When the enclave with measurement \(msr\) invokes the SGX instruction EREPORT passing \(data\) as an argument, it creates such a report, with which the quoting enclave can verify the integrity of \(data\) and its provenance from enclave \(msr\).

Figure 3: DCAP protocol sequence diagram.

Given the expected enclave measurement \(msr_{exp}\), the expected data \(data_{exp}\),
a quote \(Q=(msr,plat,data,sig_{QE})\), and a certificate \(C_{QE}=(Qek_{pb},ppid,sig_{IR})\), _RP_ can execute the quote verification process, which consists of: (i) verifying the signature \(sig_{IR}\) using \(averi_{IR}(\langle Qek_{pb},ppid\rangle,sig_{IR},IntelLtk_{pb})\), (ii) verifying \(sig_{QE}\) using \(averi_{QE}(\langle msr,plat,data\rangle,sig_{QE},Qek_{pb})\), (iii) checking that \(msr\) and \(data\) correspond to the expected enclave measurement \(msr_{exp}\) and \(data_{exp}\). Optionally, in some usage scenarios the relying party may also verify that the \(ppid\) and \(plat\) match expected values or satisfy some other criteria. We use the function VerifyQuote\((msr_{exp},data_{exp},Q,C)\) to capture the validations (i-iii) of the quote verification phase. Our simplified protocol abstracts away the details and complexity of DCAP while focusing on its essential behaviour. The fully-fledged DCAP protocol relies on another architectural enclave (the Provisioning Certification Enclave) in the setup phase, and the certification of the quoting enclave is given by a certificate chain, whereas our protocol abstracts that chain by a single certificate. We do not detail what is in the \(plat\) structure as the goal of this paper is not to discuss the practical intricacies of an SGX platform. Despite its simplicity, our protocol still achieves the protocol's goal given the threat model and trust assumptions defined, as demonstrated by our formal analysis. Note that a quote is not _directed_ at a specific verifier: any relying party possessing Intel's root of trust key can verify the quote and SGX platform certificate. ### Remote attestation for SEV machines Compared to SGX, SEV's attestation primitives are not as flexible, giving rise to an attestation protocol that is arguably more restrictive and intricate. The attestation protocol takes place as the SEV guest VM is being created, and includes a provisioning step. In this paper, we are concerned with the attestation protocol and infrastructure of SEV pre-SNP. As for SGX, we propose an abstracted protocol that focuses on the relevant functionality implemented by the fully-fledged SEV protocol. The protocol involves the following parties: AMD's secure processor of the attested platform _SP_, AMD's root of trust service _AMD RoT_, the guest VM owner _GO_, and its attested guest VM _SVM_. _AMD RoT_ is in charge of certifying the platform's _SP_, while _GO_ interacts with _SP_ to attest, provision, and create _SVM_. #### Protocol goal. The protocol produces a _GO_-directed attestation proof, a _measurement_ in SEV terminology\({}^{3}\) and a _SEV platform certificate_, and provisions _SVM_ with a _GO_-generated secret \(S\). Once the protocol is completed, _GO_ is convinced of the authenticity of _SVM_'s TCB, and that \(S\) could only have been provisioned to _SVM_. #### Threat model and trust assumptions. The same threat model and trust assumptions used for the SGX protocol are used in the analysis of the SEV protocol, with the exception that, here, we consider the SEV TCB and platform and the AMD RoT service as trusted elements, as opposed to their SGX and Intel counterparts. Here, a compromised platform would allow the attacker to obtain any information that _SP_ knows, including the cryptographic keys it manages. We do not allow SEV VM migration. We do not consider memory-remapping, rollback, or fork attacks; we assume integrity-checking mechanisms can be put in place to prevent those.
Moreover, our trust in the attested SEV TCB is intended to prevent all architectural attacks -- including the ones affecting attestation primitives [10, 61]. This assumption allows us to analyse the security properties of the protocol itself, as opposed to weaknesses linked to the bad design/implementation of the underlying primitives. #### Cryptographic schemes. The protocol involves the following cryptographic schemes: * _AMD RoT_ uses an asymmetric signature scheme defined by functions \(agen_{AR}()\), \(asign_{AR}(m,k)\), and \(averi_{AR}(m,s,k_{pb})\); * _AMD RoT_'s key pair \((AmdLtk_{pb},AmdLtk)\), public and private elements, respectively, is generated using \(agen_{AR}()\) and used by the root of trust to issue SEV platform certificates; * _SP_ and _GO_ rely on the asymmetric secret-negotiation scheme with key-generation function \(sngen()\) and secret computation function \(snsec(K_{pb},K)\), where \(K_{pb}\) and \(K\) are public and private key elements of the scheme. The Diffie-Hellman key-sharing scheme is an instantiation of such a scheme. * _GO_ generates the key pair \((GoSn_{pb},GoSn)\) using \(sngen()\). * _SP_ generates a key pair \((PspSn_{pb},PspSn)\) using \(sngen()\). * _SP_ and _GO_ rely on a key-derivation function \(sder(Sd)\), where \(Sd\) is a derivation seed. * _SP_ and _GO_ rely on the symmetric encryption scheme defined by key-generation function \(sgen_{E}()\), encryption function \(senc(m,k)\), and decryption function \(sdec(m,k)\), where \(m\) is a message and \(k\) is a scheme's key. This scheme is used for encrypting key-wrapping interactions and transported messages between them. * _SP_ and _GO_ rely on the message authentication code (MAC) scheme defined by key-generation function \(sgen_{I}()\), signing function \(ssign(m,k)\), and verification function \(sveri(m,c,k)\), where \(m\) is a message, \(c\) is an authentication code, and \(k\) is a scheme's key. This scheme is used for integrity-protecting key-wrapping interactions and transported messages between them. Here and in our protocol description, we rely on a single symmetric encryption scheme and a single MAC scheme for the sake of simplicity. However, one could use multiple schemes, one for each different application, without affecting the protocols' guarantees. ### Protocol. We divide the protocol execution into three phases: SEV platform setup, secure-channel establishment, and VM validation & provisioning, all of which we detail next. The protocol is depicted in Figure 4.

Figure 4: SEV remote attestation protocol sequence diagram.

The platform setup phase for the SEV protocol is very similar to the one that we presented for SGX. It involves only _SP_ and AMD's root of trust service. It establishes a similar chain of trust, providing similar guarantees, and it also relies on a fused pre-shared secret for platform authentication. So, when successfully executed, this phase produces the SEV platform certificate \(C_{Psp}=(PspSn_{pb},asign_{AR}(PspSn_{pb},AmdLtk))\). We assume that this phase is successfully completed at the time the platform is set up and that this certificate is made publicly available. Notice that, unlike SGX platform certificates, SEV certificates (by AMD's design) do not contain a platform identifier. In our protocol, we will use _SP_'s public key \(PspSn_{pb}\) to uniquely identify a particular SEV platform. During the secure-channel establishment phase, _SP_ and _GO_ interact to set up a communication channel.
_GO_ obtains the PSP certificate \(C_{Psp}=(PspSn_{pb},sig)\) for the platform and verifies it using \(averi_{AR}(PspSn_{pb},sig,AmdLtk_{pb})\). At this point, _GO_ generates the (shared) secret \(Ss=snsec(PspSn_{pb},GoSn)\), which is used in turn to generate keys \(Kek\) and \(Kik\) via the key derivation function \(sder\). These two _key-wrapping keys_ (as per SEV terminology) are then used to transmit the pair of freshly generated transport keys \(Tek=sgen_{E}()\) and \(Tik=sgen_{I}()\) generated by _GO_. It creates the _deploy package message_ (\(GoSn_{pb}\), \(blob_{D}\), \(mac_{D}\), \(vmc\)) to be transmitted to _SP_, where \(vmc\) is _SVM_'s firmware code, \(blob_{D}=senc(\langle Tek,Tik\rangle,Kek)\) is the encrypted-keys blob, and \(mac_{D}=ssign(blob_{D},Kik)\) its authentication code. Note that _SVM_'s code is transmitted in the clear without any integrity protection. Upon receiving the message (\(GoSn_{pb},blob,mac,vmc\)), _SP_ can derive the same secret \(Ss\) using \(snsec(GoSn_{pb},PspSn)\), and use it to derive keys \(Kek\) and \(Kik\) by the same key derivation process as _GO_. These keys can be, in turn, used to decrypt the received blob and recover the transport keys, i.e. \(\langle Tek,Tik\rangle=sdec(blob,Kek)\), and to authenticate and integrity-check them with \(sveri(blob,mac,Kik)\). Therefore, at the end of this phase, _SP_ and _GO_ have set up a secure communication channel by sharing \(Tik\) and \(Tek\). The VM attestation & provisioning phase proceeds as follows. _SP_ prepares _SVM_ with code \(vmc\) for launch and calculates the corresponding code digest \(dig\). Then, it creates the measurement \(msr=\langle plat_{sev},launch_{sev},dig,nonce\rangle\), where \(nonce\) is a freshly generated random value. Structures \(plat_{sev}\) and \(launch_{sev}\) abstract information related to _SVM_'s TCB and launch policies, respectively. _SP_ constructs the _measurement package message_ (\(msr\), \(mac_{TI}\)), where \(mac_{TI}=ssign(msr,Tik)\), which is transmitted to _GO_. Upon receiving message (\(msr,mac\)), _GO_ validates the measurement by checking \(sveri(msr,mac,Tik)\) and that the measurement \(msr\) elements are as expected; this includes checking \(digest(msr)=dig_{exp}\), where \(digest(m)\) gives the code digest element of the measurement \(m\), and \(dig_{exp}\) is the digest independently computed by _GO_ using \(vmc\). If this measurement validation succeeds, _GO_ proceeds to provision _SVM_. It generates the secret \(S\), and creates the encrypted blob \(blob_{P}=senc(S,Tek)\) and the corresponding authentication code \(mac_{P}=ssign(\langle blob_{P},msr\rangle,Tik)\). Note that \(mac_{P}\) takes into account _SVM_'s measurement \(msr\). The _secret package message_ (\(blob_{P},mac_{P}\)) is then sent to _SP_. Upon receiving message (\(blob,mac\)), _SP_ recovers the secret by decrypting the encrypted blob, \(S=sdec(blob,Tek)\), and it checks \(sveri(\langle blob,msr\rangle,mac,Tik)\) to verify the secret blob's authenticity and integrity, and that it is provisioning the machine with the correct \(msr\). If this verification does not succeed, this provisioning step is aborted. Otherwise, _SP_ places the secret \(S\) in an encrypted page of _SVM_'s memory. Once this step is completed, _SVM_ is allowed to start its execution. Our protocol focuses on the essential functionality required to prove that it achieves the desired goal given the threat model and trust assumptions defined.
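To make the guest owner's side of this exchange concrete, here is a minimal sketch of the channel-establishment and key-wrapping computation. The primitive choices are illustrative assumptions only (a toy finite-field Diffie-Hellman group for \(sngen\)/\(snsec\), SHA-256 for \(sder\), HMAC-SHA256 for \(ssign\), and a hash-derived keystream for \(senc\)); they are not the schemes mandated by SEV.

```
import hashlib, hmac, secrets

P = 2**127 - 1   # toy DH group modulus (illustrative only -- far too small)
G = 3

def sngen():
    k = secrets.randbelow(P - 2) + 2
    return pow(G, k, P), k          # (public, private)

def snsec(pub, priv):
    return pow(pub, priv, P)        # shared secret

def sder(seed: bytes, label: bytes) -> bytes:
    return hashlib.sha256(label + seed).digest()

def senc(msg: bytes, key: bytes) -> bytes:
    stream = hashlib.sha256(key + b'stream').digest()
    return bytes(m ^ s for m, s in zip(msg, stream))  # msg <= 32 bytes here

def ssign(msg: bytes, key: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# GO: after verifying C_Psp, derive the key-wrapping keys and wrap the
# freshly generated transport keys Tek, Tik into the deploy package.
psp_sn_pb, _psp_priv = sngen()      # stands in for the key carried by C_Psp
go_sn_pb, go_sn = sngen()
ss = snsec(psp_sn_pb, go_sn).to_bytes(16, 'big')
kek, kik = sder(ss, b'sev_kek'), sder(ss, b'sev_kik')
tek, tik = secrets.token_bytes(16), secrets.token_bytes(16)
blob_d = senc(tek + tik, kek)
mac_d = ssign(blob_d, kik)
# deploy package = (go_sn_pb, blob_d, mac_d, vm_code); SP re-derives ss
# from go_sn_pb and its own private key, and unwraps Tek and Tik.
```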
In line with this focus, we simplify and abstract away elements as long as the intended guarantees can be delivered. For instance, the fully-fledged SEV protocol relies on a certificate chain, which we "flatten" to a single platform certificate. Moreover, we abstract platform and launch details by relying on opaque structures. Our model could rely on predicates over these opaque structures to identify "desirable" platform and launch settings. There are also many implementation details, related, for instance, to identifying memory ranges in the messages exchanged with _SP_, that we leave out. Unlike the SGX protocol, the SEV attestation (and provisioning) is directed at the guest owner, and it does not contain any SEV-VM-provided data. Hence, a relying party cannot independently and convincingly establish an authenticated channel with a SEV VM -- the guest owner alone has this capability.

Figure 5: Flexible SEV attestation protocol sequence diagram outline.

### Our protocol Our protocol is built upon the notion of a _trusted guest owner_: an entity that deploys and provisions a SEV guest VM and is trusted to provide attestation reports on the deployed SEV VM's behalf. Our protocol involves the parties in both SGX and SEV attestation protocols. However, the enclave in the SGX attestation coincides with the guest owner of the SEV attestation. So, the parties are: the trusted guest owner _TO_, the SEV guest VM _SVM_, the quoting enclave _QE_, AMD's secure processor _SP_, Intel's root of trust service _Intel RoT_, AMD's root of trust service _AMD RoT_, and the relying party _RP_. #### Protocol goal. The protocol produces an attestation proof consisting of a quote and both SGX and SEV platform certificates. It authenticates both _SVM_'s and _TO_'s TCBs. The SGX platform certificate contains the Platform Provisioning ID (PPID) uniquely identifying the SGX platform instance where _TO_ was running, while the quote itself contains a digest of \(PspSn_{pb}\) -- this public key uniquely identifies the SEV platform instance where _SP_ and _SVM_ were running. Finally, the quote contains the digest of a piece of data \(D\) that is provided by _SVM_. Any relying party can, then, cryptographically validate this proof and be convinced that this quote was generated using the SGX platform identified by PPID and the SEV platform identified by \(PspSn_{pb}\) with the corresponding SGX and SEV TCBs, and that _SVM_ provided \(D\) when the protocol was executed. #### Threat model and trust assumptions. We combine the models and assumptions of the two SGX and SEV attestation sub-protocols we use; the assumptions on _TO_ are the same as the ones made about the attested enclave \(E\) in the SGX attestation protocol. Moreover, _SVM_ is trusted not to expose the provisioned secret, which is, in our protocol, a secret key shared between _TO_ and _SVM_ - we call such a machine _compliant_. #### Cryptographic schemes. We rely on the cryptographic schemes that are required by both SGX and SEV attestation protocols, which we do not restate here for the sake of brevity, plus the cryptographic hash function \(hash_{TO}\) used by _TO_ in emitting reports for _SVM_. #### Protocol. We split our protocol into five phases: setup, secure channel establishment, VM attestation & provisioning, VM report generation, and verification by the relying party. The protocol is depicted in Figure 5; we omit the setup phase from the diagram for conciseness. A sketch of the proof that the protocol ultimately produces and checks is given below, before we detail the phases.
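The sketch shows how _TO_ binds the SEV platform id, the VM measurement, and the VM-provided data into a single report digest, and how the relying party recomputes and compares it. This is an illustrative sketch only: SHA-256 over a length-prefixed encoding stands in for \(hash_{TO}\) (the encoding is our assumption), and the certificate and quote signature checks of VerifyQuote are stubbed out, since in the real protocol they are backed by _QE_'s and the roots of trust's keys.

```
import hashlib

def hash_to(*fields: bytes) -> bytes:
    # hash_TO: SHA-256 over a length-prefixed (unambiguous) field encoding
    h = hashlib.sha256()
    for f in fields:
        h.update(len(f).to_bytes(4, 'big') + f)
    return h.digest()

# -- TO, when serving a report request (after checking the MAC under Cik) --
psp_id = b'psp-public-key-bytes'     # PspId, fixed when the VM was deployed
vm_msr = b'sev-measurement-bytes'    # Msr, fixed when the VM was provisioned
vmdata = b'vm-provided-data'
quote = {'msr': 'const_TO', 'data': hash_to(psp_id, vm_msr, vmdata)}
# ... the quote is then signed by QE and shipped with both certificates.

# -- RP, validating the attestation proof --
def verify_vm_quote(quote, pspid, vmmsr, vmdata, msr_to_exp):
    signatures_ok = True  # stands in for the signature checks of VerifyQuote
    return (signatures_ok
            and quote['msr'] == msr_to_exp           # quoted enclave is TO
            and quote['data'] == hash_to(pspid, vmmsr, vmdata))

assert verify_vm_quote(quote, psp_id, vm_msr, vmdata, 'const_TO')
# Changing any claimed element breaks the check:
assert not verify_vm_quote(quote, psp_id, b'other-msr', vmdata, 'const_TO')
```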
The setup phase successfully carries out the setup phases of both SGX and SEV attestation protocols for the attested platforms, and it precedes the other phases of our protocol. As a result, it produces the _SP_ and _QE_ certificates \((PspSn_{pb},asign_{AR}(PspSn_{pb},AmdLtk))\) and \((Qek_{pb},ppid,asign_{IR}(\langle Qek_{pb},ppid\rangle,IntelLtk))\), respectively. _TO_ plays a central role in the remaining phases of our protocol. Its code is presented in Algorithm 1. The global variables, stored in protected memory, define the enclave's state; they are listed after the keyword **vars**. The AMD root of trust public key is the only enclave constant, and is listed after the keyword **consts**. The functions describe the _trusted_ behaviour it can engage in. The input arguments for such a function are transmitted from unprotected to protected memory before its execution starts, output ones move in the opposite direction at the end of its execution, and its execution is confidential and integrity-protected. Note that, for a given instance of our trusted owner enclave, the implementation of our trusted functions ensures that DeployVm and ProvisionVm can only be meaningfully (without returning None) executed once and in this order. Function GenerateReportForVm can be meaningfully executed multiple times but only after the other two have meaningfully executed. We do not address the possibility of replayed calls to function GenerateReportForVm. For the sort of usage we envision, that possibility does not seem too problematic, but we could address it in future versions of our protocol. The secure-channel establishment and the VM attestation & provisioning phases correspond to the homonymous phases of the SEV attestation protocol, presented in Section 3.2, with _TO_ playing the part of _GO_. The function DeployVm implements the guest owner's behaviour in the former phase. Given a PSP certificate and a SEV VM code digest as input, this function carries out all the necessary certificate verification, secret negotiation, and key derivations and generations on its way to creating and returning _TO_'s secret-negotiation public key \(\textsc{goSn}_{pb}\), the encrypted blob \(\textsc{blob}_{D}\), and the authentication code \(\textsc{mac}_{D}\) for the generated transport keys. These keys are stored in the enclave global variables Tek and Tik. This function also fixes the expected code digest of the SEV VM being deployed, which is stored in the global enclave variable VmDig. Note that this function is only concerned with the digest of the VM code -- the code itself can be stored and communicated by untrusted components. The elements returned by this function together with the VM code itself are combined to create the deployment package message. This message is relayed to _SP_, who carries out the rest of this phase as described in Section 3.2. The VM attestation & provisioning phase starts with _SP_ constructing the measurement package message as per Section 3.2. The function ProvisionVm, which implements the behaviour of the guest owner in this phase, takes as input the measurement and authentication code in that message. The function carries out the verification of the input measurement, generates a MAC-scheme key stored in Cik, and produces the secret encrypted blob and authentication code. The blob and code are used to create the secret package message, which is sent to _SP_, which carries out the secret package verification and provisioning, bringing this phase to an end, as per Section 3.2.
**Algorithm 1**: _TO_'s code. We use the schemes as defined in the text, and the well-known _Option_ type. The enclave global variables and constants start with an uppercase letter whereas the local ones start with a lowercase one. Their types are not explicitly annotated but they can be inferred from their usage. The constants hold the values of the corresponding public keys, and the global variables are initialised with _None_. As for the types of our functions, we use \(\mathrm{PUB}_{x}\) to denote the public-key type of the scheme identified by \(x\), \(\mathrm{SIG}_{x}\) is a signature type, \(\mathrm{CYP}_{x}\) a cyphertext type, \(\mathrm{DIG}_{sev}\) the SEV code digest type, \(\mathrm{MSR}_{sev}\) the SEV measurement type, \(\mathrm{REP}_{sgx}\) the SGX local attestation report type, and DAT the VM report _data_ type.

```
vars   PspId, Tik, Tek, VmDig, Msr, Cik <- None
consts AmdLtk_pb

function DeployVm((PspSn_pb, sig): PUB_Sn x SIG_AR, dig: DIG_sev)
        : Option(PUB_Sn x CYP_kek x SIG_kik)
    if VmDig = None and averi_AR(PspSn_pb, sig, AmdLtk_pb) then
        PspId <- Some(PspSn_pb);  VmDig <- Some(dig)
        (goSn_pb, goSn) <- sngen()
        sd <- snsec(PspSn_pb, goSn)
        kek, kik <- sder(<sd, 'sev_kek'>), sder(<sd, 'sev_kik'>)
        Tek, Tik <- Some(sgen_E()), Some(sgen_I())
        blob_D <- senc(<Tek, Tik>, kek)
        mac_D  <- ssign(blob_D, kik)
        return Some(goSn_pb, blob_D, mac_D)
    end if
    return None
end function

function ProvisionVm(msr: MSR_sev, mac: SIG_Tik): Option(CYP_Tek x SIG_Tik)
    if VmDig != None and Cik = None and sveri(msr, mac, Tik)
            and digest(msr) = VmDig then
        Msr <- Some(msr);  Cik <- Some(sgen_I())
        blob_P <- senc(Cik, Tek)
        mac_P  <- ssign(<msr, blob_P>, Tik)
        Tek, Tik <- None, None
        return Some(blob_P, mac_P)
    end if
    return None
end function

function GenerateReportForVm(vmdata: DAT, mac: SIG_CI): Option(REP_sgx)
    if Cik != None and sveri_CI(vmdata, mac, Cik) then
        rpdata <- hash_TO(<PspId, Msr, vmdata>)
        return Some(EREPORT(rpdata))
    end if
    return None
end function
```

The sharing of the Cik key via this provisioning step establishes an authenticated (but not confidential) channel between _TO_ and _SVM_. The VM quote generation and verification phases involve the execution of the SGX attestation protocol, presented in Section 3.1. These phases of the protocol take place after the initial three have successfully completed and _SVM_ has started. The VM quote generation starts with _SVM_ creating a report request \((vmdata,mac)\), where _vmdata_ is a piece of data generated by it, and \(mac=ssign_{CI}(vmdata,\textsc{Cik})\). This report request is then communicated to _TO_ by invoking GenerateReportForVm with _vmdata_ and \(mac\) as inputs. Upon successful verification of \(mac\), this function creates a SGX report addressed to _QE_ containing _TO_'s enclave measurement \(msr_{TO}\) and the digest \(rpdata=hash_{TO}(\langle\textsc{PspId},\textsc{Msr},vmdata\rangle)\), which covers the public key \(PspSn_{pb}\) identifying the attested SEV platform, _SVM_'s measurement Msr, and _vmdata_.
This report is transmitted to _QE_, which generates the corresponding quote \((msr_{TO},rpdata,asign_{QE}(\langle msr_{TO},rpdata\rangle,Qek))\). _RP_ verifies the VM quote using the function VerifyQuote in Section 3.1. Let \(Q\) be the VM quote received, \(msr_{TO}\) the enclave measurement for _TO_, \(C_{QE}\) the quoting enclave certificate, \(vmdata\) the VM piece of data, \(vmmsr\) the VM measurement, and \(pspid\) the attested SEV platform id. _RP_ calculates the expected report data \(rpdata_{exp}=hash_{TO}(\langle pspid,vmmsr,vmdata\rangle)\), and checks VerifyQuote\((msr_{TO},rpdata_{exp},Q,C_{QE})\). This validation convinces _RP_ that the protocol's goal has been achieved, namely, that \(vmdata\) was generated by a SEV VM with measurement \(vmmsr\). ### Formal specification and verification To validate our proposal, we give a formal model of the flexible attestation protocol, and use the Tamarin prover to provide machine-verifiable proofs that it has the desired security properties. Hence, the protocol meets its stated goals in a setting with an unbounded number of sessions, assuming a Dolev-Yao attacker and the threat model described in Section 3.3. We make the formal model as well as the proofs and the proof oracle needed to replicate the results publicly available at [2]. #### Protocol model We model the protocol by specifying all participants using multiset rewriting rules as in [34]. Each rule is of the form \(id:[l]\) --\([a]\)-> \([r]\): when facts matching the premises \(l\) are available, the rule can fire, consuming them, adding the conclusion facts \(r\) to the state, and recording the action facts \(a\) in the execution trace. For example, Figure 6 shows one of the rules modeling _TO_ (the \(\sim\) prefix marks a fresh _value_); it creates a Diffie-Hellman private key as well as the transport keys. In the rule conclusions, _TO_ sends the request for guest creation and stores the necessary information in its session state. Action facts are later used to specify security properties. In addition to the five protocol participants from Figure 5 (_SVM_, _SP_, _TO_, _QE_, _RP_), we explicitly model the Intel and AMD root-of-trust services. The functional part of the formal model consists of 21 rules given in Table 1. The rules are almost in one-to-one correspondence with the description of protocol steps given in Section 3.3. The exceptions are the attacker rules that we introduced to faithfully capture the threat model and allow the corruption of parts of the system.
#### Attacker model

The Dolev-Yao attacker rules are automatically embedded in the model by the Tamarin tool, but we need to add additional attacker actions to be faithful to the desired threat model. In particular, we add rules that disclose quoting enclaves' and PSPs' long-term private keys to the attacker, corresponding to corruptions of arbitrary SGX and SEV platforms; these rules do not apply to non-compromised platforms. We also add rules to corrupt both roots of trust as a means to sanity check our model. We list and discuss the attacker rules related to SEV here; the rules related to SGX are similar.

The Compromise_AMD_RoT rule allows the adversary to compromise the _AMD RoT_ and extract the AmdLtk private key. This rule was added purely for sanity-checking purposes and, indeed, the main results as well as the lemmas related to SEV are falsified unless we assume the adversary did not use this rule.

```
rule Compromise_AMD_RoT:
    [ !AMD_RoT_Ltk(~amd_rot_ltk) ]
  --[ Compromise_AMD_RoT() ]->
    [ Out(~amd_rot_ltk) ]
```

The Compromise_SEV_PSP rule allows the adversary to compromise one specific _SP_ and extract the PspSn private key of that platform. This rule models platform compromise (e.g., by side-channel attacks). We show that the main results hold even if the adversary can compromise arbitrary platforms, as long as the _specific SP_ used in the protocol execution is not compromised.

```
rule Compromise_SEV_PSP:
    [ !PSP_Ltk(~cpu_id, ~psp_sn), !PSP_Pk(~cpu_id, psp_pk) ]
  --[ Compromise_SEV_PSP(psp_pk) ]->
    [ Out(~psp_sn) ]
```

One of the modeling challenges was formalising the relationship between a measurement and the behaviour of the measured code. Using SGX as an example, we need to be able to combine the fact that the quoting enclave produced a quote with measurement \(msr_{E}\) and data \(data_{E}\) with the fact that measurement \(msr_{E}\) corresponds to specific enclave code \(E\) with certain behaviour when executed on trusted hardware (e.g., \(E\) only provides attestation reports in which \(data_{E}\) is in a specific format). To address this challenge in general, the framework has to support higher-order reasoning about the building blocks of protocol specification -- e.g., we need to use those building blocks both as programs that can be executed and as data that can be hashed or sent over the network (perhaps to be executed on the other end). To the best of our knowledge, no protocol verification framework currently allows reasoning about such constructions.
As our scope in this paper is limited to modeling and verifying the proposed protocol, we overcome this challenge by using a simple over-approximation of the attacker's capabilities. In the SGX setting, we assign a fixed measurement \(const_{TO}\) to the enclave _TO_ is running. Furthermore, we allow the attacker to obtain valid quotes with arbitrary data for any measurement _except_ for \(const_{TO}\). Hence, we hardcode the relationship between _TO_ and the measurement of its enclave in our model, and assume enclaves corresponding to all other measurements are under the control of the attacker. We take a similar approach with SEV -- we hardcode the launch digest \(const_{SVM}\) of our guest VM and allow the attacker to extract secrets provisioned by the PSP from any SEV VM whose launch digest is _different_ from \(const_{SVM}\). We list the rule and give more details for SEV here. The Adversary_Extract_SEV_Secret rule allows the adversary to extract a provisioned secret from a VM running on an arbitrary _SP_. This rule models the fact that the adversary can launch and control arbitrary VMs on an arbitrary _SP_. The only thing we disallow (via the _Neq restriction_) is that the adversary extracts the secret from our specific _SVM_, whose digest is the constant \(const_{SVM}\) (the string burrito_guest_vm in the Tamarin model).

```
rule Adversary_Extract_SEV_Secret:
    [ !SEV_PSP_Guest_Running(~cpu_id, psp_sn_pk, $vm_dig, ~guest_secret) ]
  --[ Neq($vm_dig, 'burrito_guest_vm')
    , Adversary_Extract_SEV_Secret($vm_dig, ~guest_secret) ]->
    [ Out(~guest_secret) ]
```

#### Security properties and proofs

The main security property we are interested in verifying is the authenticity and integrity of the resulting VM quotes. As helper lemmas, but also as results of their own merit, we verify the security properties of both SGX attestation and SEV secure guest deployment as used in our system. The most important verified properties are informally described next, and they are followed by the corresponding Tamarin lemmas.

**SGX quote authenticity**: If _RP_ verifies an SGX quote with the measurement \(const_{TO}\), with a certificate identifying the \(ppid\) SGX platform, and quote data \(rpdata\), then _TO_ has executed the GenerateReportForVm function on an SGX platform identified by \(ppid\) and \(rpdata\) is equal to \(hash_{TO}(\langle\text{PspId}, \text{Msr}, vmdata\rangle)\) for some PspId, Msr and vmdata. The claim holds unless the attacker has compromised the Intel root of trust or _QE_, the quoting enclave running on platform \(ppid\).

**Secrecy of SEV guest secrets**: If _TO_ executes ProvisionVm with the \(const_{SVM}\) parameter and a specific PspId value, then the provisioned secret Cik is never known to the attacker. The claim holds unless the attacker has compromised _AMD RoT_ or \(SP\), the specific PSP whose public key is PspId.

**VM quote authenticity**: If _RP_ verifies an SGX quote with the measurement \(const_{TO}\), with a certificate identifying the \(ppid\) SGX platform, and quote data that is equal to \(hash_{TO}(\langle\text{PspId}, Msr, vmdata\rangle)\) for some PspId and vmdata, with the digest in measurement \(Msr\) being \(const_{SVM}\), then the SEV VM has requested the report (handled by GenerateReportForVm) while running on a SEV platform identified by PspId, with the data in the request equal to vmdata.
The claim holds unless one of the following is true: the attacker has compromised the Intel root of trust; the attacker has compromised _QE_, i.e., the specific QE corresponding to platform \(ppid\); the attacker has compromised the AMD root of trust; the attacker has compromised _SP_, i.e., the specific PSP whose public key is PspId.

We present formal statements of the main results as well as the most important auxiliary lemmas in Tamarin notation. This notation is somewhat different compared to the informal statements above, so we give clarifications when needed. In the **SGX quote authenticity** lemma below, the informal statements "_RP_ verifies an SGX quote" and "_TO_ has executed the GenerateReportForVm function" are modelled as Tamarin _action facts_ (respectively, RP_Verify_Quote and TO_Enclave_Generate_Report_For_VM). These action facts hold at the timestamps when the corresponding rules are executed. The variables ppid and rd correspond to \(ppid\) and \(rpdata\) in the informal statement, while k, d and v correspond to the report hash payload -- \(PspId\), \(Msr\) and \(vmdata\). Note that these are untyped in the lemma statement below and are, hence, quantified over all possible messages. Variables #i and #j are typed as timestamps. The constant SGX measurement of the _TO_ is simply the string burrito_enclave_sgx_measurement.

```
lemma lm_sgx_quote_authenticity:
  "All ppid #i rd.
     RP_Verify_Quote(<'sgx_quote',
       'burrito_enclave_sgx_measurement', ppid, rd>) @ i
   ==>
   ( (Ex v d k #j. rd = h(<'report_data', k, d, v>)
       & TO_Enclave_Generate_Report_For_VM(ppid, k, d, v) @ j)
   | (Ex #j. Compromise_Intel_RoT() @ j)
   | (Ex #j. Compromise_SGX_QE(ppid) @ j) )"
```

In the **Secrecy of SEV guest secrets** lemma below, the constant launch digest of the _SVM_ is simply the string burrito_guest_vm. The action fact KU models the attacker knowledge, while s is the secret being provisioned to the _SVM_.

```
lemma lm_sev_guest_secret_secrecy:
  "All k s #i.
     TO_Enclave_Provision_VM(k, s, 'burrito_guest_vm') @ i
   ==>
   ( (not Ex #j. KU(s) @ j)
   | (Ex #j. Compromise_AMD_RoT() @ j)
   | (Ex #j. Compromise_SEV_PSP(k) @ j) )"
```

In the **VM quote authenticity** lemma below, the notation is the same as in the previous two lemmas. Note that we do not include the platform and the policy metadata \(plat\_sev\) and \(launch\_sev\) in the SEV measurement, as they do not play a security-related role at the level of abstraction used in our model. Instead, the SEV measurement is just a pair consisting of a nonce (modelled by variable m) and the launch digest of the _SVM_.

```
lemma lm_burrito_quote_integrity_strong:
  "All ppid d k m #i.
     RP_Verify_Quote(<'sgx_quote',
       'burrito_enclave_sgx_measurement', ppid,
       h(<'report_data', k, <m, 'burrito_guest_vm'>, d>)>) @ i
   ==>
   ( (Ex ts #j. d = <'burrito_report', ts>
       & Guest_VM_Request_Report(k, ts) @ j)
   | (Ex #j. Compromise_Intel_RoT() @ j)
   | (Ex #j. Compromise_SGX_QE(ppid) @ j)
   | (Ex #j. Compromise_AMD_RoT() @ j)
   | (Ex #j. Compromise_SEV_PSP(k) @ j) )"
```

We prove all results using the Tamarin prover's automated procedure with a custom proof oracle that was necessary to achieve proof termination. In addition to the main results stated above, we prove weaker variants of the claims above where we disallow the attacker from compromising any SGX or SEV platform. We also prove a number of helper lemmas and a number of sanity-checking lemmas in order to test the model itself.
Most notably, we show that all the premises of the main lemmas are indeed necessary by demonstrating the existence of an attack when any of the premises is removed.

### Implementation and Evaluation

To demonstrate how our protocol works in practice, we have created an implementation of our trusted guest owner, which can be applied to any compliant SEV VM -- we have published our code [2]. Our prototype relies on (i.e., instantiates the abstract SEV and SGX protocols we present with) the fully-fledged versions of the SEV pre-SNP and SGX DCAP attestation protocols. Our trusted owner enclave implementation uses the SGX SDK [23] to capture the behaviour described in Algorithm 1. The SGX SDK provides two main abstractions for the development of enclaves: trusted functions, which are called _ecalls_, and untrusted ones, which are called _ocalls_. The enclave functions are described by ecalls, which can, in turn, rely on ocalls to execute untrusted privileged code. Our functions DeployVm, ProvisionVm, and GenerateReportForVm are all implemented as ecalls, and they take into account the fully-fledged SEV attestation operations and data formats. So, for instance, DeployVm checks the SEV certificate chain to authenticate the secret negotiation key, as opposed to our single-certificate abstraction. Our implementation uses the code of the _SEV-Tool_ [1] as a library to carry out a number of operations related to the SEV attestation protocol -- this standalone tool has been created to help developers operate SEV VMs and platforms.

As a proof of concept and to evaluate how our protocol fares in practice, we applied it to the generation (i.e., training) of machine learning (ML) models. We use our protocol as a way to create a notion of _model accountability_, in the sense that VM quotes can link a specific model with the training algorithm and data set that were used to create it. This sort of quote could be used, for instance, in the context of regulated ML, where one could _a posteriori_ be interested in analysing whether a model was created in an unbiased/fair way. The SEV VM that we create runs a single service, called _tf_service_, at startup and shuts itself down after the service execution has finished. This service executes (via a Docker container) a Tensorflow [3] script that creates a ML model and exports it into the file model.tar.gz, and we capture the standard output of this script into the file stdout. After creating these files, it produces a VM quote containing a hash of these two files as the VM quote report data. Thus, a relying party can verify that a given model was generated with a given data set and script. Note that the data could even be kept private up until the point it needs to be divulged to a regulator/auditor to ensure the appropriate generation of the associated model.
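The following Python sketch illustrates how such a report-data value could be derived from the two output files. The hash function, the hashing order, and the function name are illustrative assumptions of ours -- the exact format is fixed by the published code [2].

```python
# Sketch: derive the VM quote report data for the accountable-ML use case
# by hashing the exported model and the captured training log together.
import hashlib

def ml_report_data(model_path: str = "model.tar.gz",
                   stdout_path: str = "stdout") -> bytes:
    h = hashlib.sha256()
    for path in (model_path, stdout_path):
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
    return h.digest()  # placed in the quote; recomputed by the relying party
```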
Our VM is based upon the Alpine Linux distribution. It relies on a modified SEV-ready kernel, an initial ramdisk that includes a root filesystem (containing the _tf_service_ and its dependencies), and a fixed kernel command line -- these are the elements necessary to boot a Linux VM. The hashes of these three pieces of information are recorded in the initial VM firmware and are, hence, part of the VM measurement that can be verified by the relying party. The root filesystem is set up in main memory as opposed to disk. We point out that our VM _does not_ rely on the typical attestation scenario that is suggested by AMD, i.e., using a guest-owner-encrypted disk for which the key is provisioned using the SEV attestation protocol. Of course, once a VM has been set up using our protocol (and an initial root filesystem in main memory like we do), it could include a routine to create an encrypted disk whose key would remain protected in main memory. So, our protocol and example VM could still accommodate disk encryption seamlessly.

Our evaluation takes into account 12 Tensorflow scripts. For each of them, we create corresponding VMs as explained and carry out deployment, provisioning, and report generation using our trusted owner, as per our protocol. The results of executing these VMs are presented in Table 2.

| Name | Deploy | Provision | GenReport | VmLife | Over. (%) |
|---|---|---|---|---|---|
| advanced.py | 0.118 | 0.088 | 0.139 | 198.268 | 0.174 |
| bidirectional.py | 0.121 | 0.0911 | 0.132 | 532.250 | 0.065 |
| knowledge.py | 0.122 | 0.103 | 0.132 | 1140.456 | 0.031 |
| beginner.py | 0.128 | 0.087 | 0.123 | 92.217 | 0.367 |
| text.py | 0.118 | 0.089 | 0.134 | 98.184 | 0.347 |
| text.trans.py | 0.129 | 0.081 | 0.130 | 1648.881 | 0.021 |
| cnn.py | 0.122 | 0.089 | 0.135 | 339.739 | 0.101 |
| keras.py | 0.121 | 0.094 | 0.134 | 117.956 | 0.295 |
| preprocessing.py | 0.120 | 0.101 | 0.136 | 98.965 | 0.360 |
| classification.py | 0.114 | 0.097 | 0.143 | 889.031 | 0.039 |
| imbalanced.py | 0.118 | 0.086 | 0.149 | 310.548 | 0.113 |
| word2vec.py | 0.116 | 0.093 | 0.131 | 117.545 | 0.289 |

Table 2: Accountable ML evaluation results.

We use an AMD machine with an EPYC 7402P 24-Core processor to run the VMs, and an Intel machine with an Intel(R) Xeon(R) E-2288G CPU @ 3.70GHz processor. In this evaluation, we measure the times taken to perform each of the trusted owner functions -- they include network latency, as we use a remote trusted owner. The overhead is calculated as (Deploy + Provision + GenReport) * 100 / VmLife; it gives the percentage of time taken by the trusted owner operations with respect to the entire VM execution (VmLife). As expected, the timings for executing trusted owner operations are fairly constant and independent of the VM lifetime (and execution complexity). Note that trusted owner operations are of fixed type and size, so they are independent of the type of the VM being run. Moreover, the overhead imposed by our protocol is minimal: in all cases it came under 0.5% of the VM execution time. Therefore, unsurprisingly, our protocol delivers its guarantees without incurring significant VM-execution overheads.
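As a quick arithmetic check of the metric, the overhead column of Table 2 can be reproduced directly from the other columns (a sketch; the values below are taken from the table's first row):

```python
# Overhead metric from Table 2: trusted-owner time as a percentage of the
# total VM lifetime.
def overhead_pct(deploy: float, provision: float,
                 genreport: float, vmlife: float) -> float:
    return (deploy + provision + genreport) * 100 / vmlife

print(round(overhead_pct(0.118, 0.088, 0.139, 198.268), 3))  # 0.174 (advanced.py)
```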
### Discussion

Our protocol can be extended to accommodate a more generic and ambitious application. Instead of a single SEV VM, we could use the same principles to create a _trusted deployer_ that sets up and attests an entire _trusted (and possibly heterogeneous) infrastructure_. Instead of having to attest the components of that infrastructure individually, possibly using different protocols with varied levels of flexibility depending on the heterogeneity of the trusted components, the extended version of our protocol would allow a trusted deployer, with a flexible attestation mechanism and the capacity to deploy all the other components, to generate a single attestation report on the infrastructure's behalf. A relying party would, therefore, enjoy a simple and flexible protocol to attest the infrastructure.

Our work creates and promotes a new line of research, namely, exploring _synergies between TEE implementations_. SGX provides a flexible and simple attestation mechanism and, arguably, subpar application portability, whereas SEV pre-SNP offers application portability and an overly-rigid attestation protocol. Our protocol confers SGX-like attestation to a SEV VM, thereby bringing out the best combination of application portability and attestation flexibility. Intel and AMD have recently proposed TEE architectures and implementations, in the form of SEV SNP [50] and TDX [25], that offer both of these qualities. However, these architectures are still immature in comparison to SGX and (pre-SNP) SEV. At the time of writing (May 2023), hardware supporting TDX is not generally available, software support for SEV SNP is immature, and no cloud providers expose the flexible attestation interface of SEV SNP. To illustrate more concretely the lack of maturity of SEV SNP as of now, the AMD-designed SEV software stack disables the VM firmware recording of kernel, initial ramdisk, and kernel command line measurements5. The current absence of this feature prevents the sort of attested boot that is so useful in establishing a chain of trust on a SEV VM; we use, for instance, this attested boot in our implementation. As for TDX, inconsistencies have been outlined [44, 46] in the specifications proposed by Intel,6 illustrating even its theoretical immaturity. Our protocol could be adapted to use SEV SNP or TDX as the technologies behind the guest VMs; in the context of a heterogeneous infrastructure, for example. Thus, our protocol can offer similar guarantees predicated on the trustworthiness of more mature TEE implementations. In any case, our work demonstrates the validity of this type of research by proposing an example of such a synergistic TEE combination. Moreover, even when these new technologies become mature, our protocol will still be relevant, as it will provide application portability and attestation flexibility for platforms that support SEV pre-SNP but do not support SEV SNP or TDX.

Footnote 5: [https://github.com/AMDESE/qemu/blob/3b6a2b6b7466f6dea53243900b7516c3f29027b7/target/i386/sev.c#L1830](https://github.com/AMDESE/qemu/blob/3b6a2b6b7466f6dea53243900b7516c3f29027b7/target/i386/sev.c#L1830)

Footnote 6: [https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html)

We could also extend our protocol in different practical ways to allow the trusted owner and SEV VM to exchange other types of information. Our protocol creates an authenticated channel between the trusted owner and the SEV VM by sharing a MAC key. We could extend our protocol to create an authenticated _and confidential_ channel between them by additionally passing a shared encryption key. The SEV VM and trusted owner could also have their APIs extended to exchange other pieces of verifiable information. For instance, they could both offer a remote function to provide a verifiable hardware-generated random string of bits. They could combine this string with a locally generated one to create a "stronger" source of randomness, as sketched below.
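One simple way to realise such a combination is to hash the two strings together; the following sketch is illustrative only, and the choice of SHA-256 as the combiner is our assumption.

```python
# Sketch: combine a hardware-generated random string with a locally generated
# one; the result is unpredictable as long as either source is.
import hashlib, os

def combined_random(hw_random: bytes) -> bytes:
    local = os.urandom(32)              # locally generated randomness
    return hashlib.sha256(hw_random + local).digest()
```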
Our protocol and implementation also have some limitations. A flaw in either of the TEE implementations that we rely upon can thwart the guarantees/goals of our protocol, as we assume both the SGX and SEV TCBs to be trusted. That limitation is inherent to any combination of TEE implementations that makes this assumption. Moreover, in terms of our implementation, the SEV version that we use does not offer integrity protection; only SEV SNP gives integrity guarantees. We could implement our protocol using any SEV-like TEE implementation, with or without integrity protection, provided that the required attestation primitives are available.

## 4 Related Work

In this section, we examine papers that focus on hardware-based TEEs and remote attestation protocols involving them. A number of applications of and extensions to the SGX attestation protocols have been proposed: from incorporating attestation information into the TLS protocol [29], to proposing flexible attestation verification infrastructures [14], to proposing flexible mutual attestation protocols [13]. Kucab _et al._ [30] propose a protocol that involves similar parties but is very different in many ways from ours. They use SGX attestation to perform an integrity check on the filesystem of (non-SEV) VMs at startup.

Another line of research consists of identifying vulnerabilities and attacks specifically targeting attestation primitives [10, 52, 61]. Swami [52] has shown that some of the privacy guarantees are thwarted by Intel's EPID design. Buhren _et al._ [10] have shown how the PSP firmware can be updated to a version that allows the extraction of the cryptographic keys managed by the PSP. Wilke _et al._ [61] have shown how the memory-permutation insensitivity of the SEV launch measurement can be exploited in a way that allows the VM to execute arbitrary code while its original launch measurement remains unchanged. We regard these works as complementary to ours. The findings about SGX's EPID can improve its privacy guarantees and, as a consequence, the benefits it could bring if it were used as part of our protocol. The other two SEV attacks are prevented by our protocol assumptions requiring the attested SEV TCB to be trusted and the platform to not be compromised; we focus on the analysis of the cryptographic protocol itself by assuming that the underlying primitives are trusted. These papers provide, then, guidelines to harden attestation primitives so that our assumptions are validated and our protocol can deliver on its guarantees.

Studies have compared TEE implementations and their attestation protocols [40, 41, 20, 36]. They limit themselves to pointing out the different characteristics of such protocols without identifying and exploring interesting synergies like we do. Some papers have used formal techniques to describe and analyse attestation protocols involving trusted hardware. For instance, the Direct Anonymous Attestation scheme, proposed as an attestation mechanism for Trusted Platform Modules (TPMs), has been formally described [9] and analysed using Tamarin [58]. SGX's EPID, DCAP, and TDX attestation mechanisms have been formally analysed using ProVerif [45, 46, 47]. While these works focus on the detailed/concrete versions of SGX's schemes, our protocol and formalisation are based upon an abstract and minimalist SGX scheme, as our focus is on the interplay of SGX and SEV attestation as opposed to any of those individually.
Hence, there is a degree of overlap between our work and theirs, but there is also a degree of complementarity: showing that concrete versions of these protocols achieve the desired goals demonstrates that we can instantiate our abstract SGX-like subprotocol with a concrete instance and achieve the goals and guarantees of our protocol as expected. Arfaoui _et al._ have proposed a new scheme to remotely attest a hypervisor and its (non-SEV) VMs, with a formal proof of their _authorized linked attestation_ protocol [5]. Their protocol design, trust assumptions, threat model, and protocol goals are completely different from ours.

We have found only one other work that combines different TEE architectures. Zhao _et al._ [62] propose a framework, called _vSGX_, by which one can emulate the behaviour of SGX enclaves inside a SEV VM. The main purpose of that work is to allow unmodified SGX enclave binaries to run on SEV hardware. Thus, they do not combine TEE implementations like we do; rather, they implement the execution model specific to one TEE architecture on top of another. The scheme that they propose for remote attestation relies on a _provider_ to provision vSGX enclaves with "fused secrets." Note that secretly providing this "fused secret" requires the _directed_, rigid SEV remote attestation. That framework could move away from such a _directed_ and provider-centric attestation scheme to a more flexible one by employing our protocol to carry out the remote attestation of their virtual enclaves.

Many papers have analysed TEE implementations more generally [16, 21, 49, 51, 42], and a considerable number of works have identified vulnerabilities and attacks on SEV [31, 32, 37, 38, 39, 43, 57, 60] and SGX [11, 12, 53, 54, 55, 56]. These papers provide either insight to designers of TEEs so that they can improve their platforms' security, or guidelines to TEE operators so that they can put in place appropriate mitigation strategies to ensure their TCBs can be trusted. So they are, arguably, complementary to ours in the sense that they help establish in practice the assumptions that we make in formalising and analysing our protocol.

## 5 Conclusion

We propose a cryptographic protocol that explores a synergy between SGX and SEV: it brings together the flexibility of SGX's remote attestation and the application portability of SEV -- neither of these two TEE implementations offers this combination of features independently. Our protocol relies on the notion of a _trusted guest owner_, implemented in an SGX enclave, that is in charge of deploying, attesting, and provisioning a SEV VM. The latter can rely on the former to generate attestation reports on its behalf. Moreover, we formally demonstrate, using Tamarin, that our protocol enforces security properties related to the authenticity of quotes and the confidentiality of provisioned secrets. Furthermore, we demonstrate, with an application to machine-learning model accountability, how it can be used in practice while incurring negligible overheads. We plan to further explore the extensions to our protocol that are required to apply it to the remote attestation of an infrastructure of heterogeneous trusted components.
2306.06532
Composed solutions of synchronized patterns in multiplex networks of Kuramoto oscillators
Networks with different levels of interactions, including multilayer and multiplex networks, can display a rich diversity of dynamical behaviors and can be used to model and study a wide range of systems. Despite numerous efforts to investigate these networks, obtaining mathematical descriptions for the dynamics of multilayer and multiplex systems is still an open problem. Here, we combine ideas and concepts from linear algebra and graph theory with nonlinear dynamics to offer a novel approach to study multiplex networks of Kuramoto oscillators. Our approach allows us to study the dynamics of a large, multiplex network by decomposing it into two smaller systems: one representing the connection scheme within layers (intra-layer), and the other representing the connections between layers (inter-layer). Particularly, we use this approach to compose solutions for multiplex networks of Kuramoto oscillators. These solutions are given by a combination of solutions for the smaller systems given by the intra and inter-layer system and, in addition, our approach allows us to study the linear stability of these solutions.
Priya B. Jain, Tung T. Nguyen, Ján Mináč, Lyle E. Muller, Roberto C. Budzinski
2023-06-10T21:59:53Z
http://arxiv.org/abs/2306.06532v3
# Synchronization patterns and stability of solutions in multiplex networks of nonlinear oscillators

###### Abstract

Networks with different levels of interactions, including multilayer and multiplex networks, can display a rich diversity of dynamical behaviors and can be used to model and study a wide range of systems. Despite numerous efforts to investigate these networks, obtaining mathematical descriptions for the dynamics of multilayer and multiplex systems is still an open problem. Here, we combine ideas and concepts from linear algebra and graph theory with nonlinear dynamics to offer a novel approach to study multiplex networks of Kuramoto oscillators. Our approach allows us to study the dynamics of a large, multiplex network by decomposing it into two smaller systems: one representing the connection scheme within layers, and the other representing the connections between layers. With this, we can study synchronization patterns and the linear stability of the solutions that emerge in multiplex networks of nonlinear oscillators.

**Networks of nonlinear oscillators offer a possibility to model and study many natural systems. The pattern of connections and the coupling structure play a crucial role in the emergent dynamics in these systems. In this context, multilayer and multiplex networks of nonlinear oscillators display a rich diversity of synchronization patterns. At the same time, the sophisticated connectivity patterns in these systems bring an intrinsic difficulty to the mathematical analyses of the dynamics. Here, we introduce a mathematical approach for multiplex networks of nonlinear oscillators where we can compose solutions with nontrivial patterns of oscillations and study their linear stability.**

## I Introduction

Systems composed of coupled units have been used to model and study a diversity of phenomena in nature, spanning from physics [1; 2] and engineering [3; 4], to social science [5; 6], to biology [7; 8] and neuroscience [9; 10]. In this context, many systems have different levels of interactions, which can be understood as multilayer networks [11; 12]. In this case, the whole system can be understood as the composition of an internal level, within each layer, and an external level, between layers. This class of systems can be visualized as a network of networks, and it has many direct applications [13; 14; 15; 16; 17; 18; 19]. A particular example of this kind of network is given by multiplex networks, which have received great attention in the past years [20; 21; 22; 23; 24]. A multiplex network can be understood as a network with many layers, where each layer has the same number of nodes connected through a given internal connection scheme. The connection between nodes in different layers is given by a one-to-one scheme, where a node in a given layer is connected to nodes in neighboring layers that are in the same relative position within the layer. Multiplex networks have been studied in many different contexts, where rich dynamics have been found [25; 26; 27; 28; 29]. However, despite the efforts in past years and the advances in the investigation of this class of networks, there are many open questions, mainly regarding mathematical and analytical approaches to study the dynamics of multilayer and multiplex networks.

Multilayer and multiplex networks can display a great diversity of synchronization phenomena. For instance, first-order transitions, or explosive synchronization, have been reported in these networks [30; 31; 32].
Furthermore, different synchronization patterns, including chimera states, have been observed [33; 34; 35; 36]. In this paper, we introduce an approach to study multiplex networks, where we leverage recent results from graph theory and linear algebra [37]. We recently proposed a mathematical approach to study the dynamical behavior of oscillators on multilayer networks where each node in a given layer is connected to all other oscillators in the neighboring layers [38]. In this paper, we extend these ideas to multiplex networks. The approach that we explore here considers a multiplex network as a decomposition into the intra-layer and the inter-layer structures. We remark that similar ideas have been proposed for different multi-level systems [11; 39; 40]. For instance, a related approach in this context is explored in [41], where multilayer networks are decomposed, which allows the study of the master stability function of these systems, with application to spiking neural networks.

Our article focuses on multiplex networks of nonlinear oscillators. The dynamics on these networks are described by the Kuramoto model, a traditional dynamical system used to study many synchronization phenomena [42; 43; 44]. Multilevel systems of Kuramoto oscillators have been extensively studied in the past years, where a rich diversity of dynamics has been observed [45; 46; 47; 48; 49]. The mathematical framework we introduce here gives us novel insights into the dynamics of multiplex Kuramoto networks, which can now be studied in simpler terms. Here, multiplex networks are composed of \(M\) layers with \(N\) oscillators in each one. In this case, our approach indicates that, instead of studying a large system composed of \(MN\) units with different levels of interaction, we can decompose the multiplex network into two smaller systems: the "intra-layer" system composed of \(N\) units, and the "inter-layer" system composed of \(M\) units. With this, we can study the dynamics of these smaller systems to obtain insights into the dynamics of the whole system. Particularly, our framework allows us to obtain the trajectories of Kuramoto oscillators on a multiplex network by only studying the smaller systems. Further, our approach allows us to use the solutions for the intra- and inter-layer systems to compose solutions for the multiplex one, which offers a new perspective on the equilibrium points of multiplex networks of nonlinear oscillators. We can also obtain the collective dynamics of the multiplex network, i.e. the Kuramoto order parameter, by using the same idea. Lastly, this approach allows us to obtain insights into the linear stability of the solutions on the multiplex network by analyzing the spectral properties of the matrices related to the smaller systems.

Here, we first introduce a new perspective on multiplex networks using certain constructions in graph theory (Sec. II). We then discuss the Kuramoto model and our approach for Kuramoto oscillators on multiplex networks (Sec. III), which allows us to study the dynamics of multiplex networks in simpler terms (Sec. IV). We extend this approach to the analysis of the stability of equilibrium points in multiplex networks (Sec. V). Lastly, we display several numerical simulations of Kuramoto oscillators on multiplex networks, highlighting the diversity of dynamical behavior they show and the applicability of our approach (Sec. VI). The discussion and conclusions are in Sec. VII, and all computational details can be found in the appendix.
## II Kronecker sum and representation of a multiplex network

Our approach focuses on multiplex networks, i.e. the connection between nodes in different layers is given by a one-to-one scheme, where the node \(i\) in layer \(l\) is connected to node \(i\) in layer \(k\). A schematic example is shown in Fig. 1, where two layers are considered.

Figure 1: **Representation of a multiplex network.** In this kind of system, different layers are interconnected. Within a given layer, the oscillators have an “intra-layer” connection scheme; between layers, the oscillators are connected in a one-to-one scheme, which characterizes the “inter-layer” connection structure.

A key observation here is that we can represent a multiplex network as the Cartesian product of two graphs. Let us first recall this concept from graph theory. Let \(G\) and \(H\) be two graphs. The Cartesian product operation forms a graph \(G\boxtimes H\) of order \(MN\) from a graph \(G=(U,E)\) of order \(M\) and a graph \(H=(V,F)\) of order \(N\). The vertices of \(G\boxtimes H\) are ordered pairs \((u,v)\), where \(u\in U\) and \(v\in V\); the vertices \((u,v)\) and \((u^{\prime},v)\) are connected when \(u\) and \(u^{\prime}\) are connected in \(G\), and \((u,v)\) and \((u,v^{\prime})\) are connected when \(v\) and \(v^{\prime}\) are connected in \(H\). In other words, \(G\boxtimes H\) is formed by replacing each vertex of \(G\) by a copy of \(H\), and replacing each edge of \(G\) by edges between corresponding vertices of the appropriate copies. With this perspective, we can represent the adjacency matrix of \(G\boxtimes H\) in a rather explicit way using some concepts from linear algebra and matrix theory, which we now recall. For any positive integers \(p,q,r,s\) we define the Kronecker product of two matrices \(\mathbf{A}\in\mathbb{R}^{p\times q}\) and \(\mathbf{B}\in\mathbb{R}^{r\times s}\) as a matrix \(\mathbf{C}\in\mathbb{R}^{pr\times qs}\) given in block form as \[\mathbf{C}=\left(\begin{array}{cccc}\mathbf{A}b_{11}&\mathbf{A}b_{12}&\cdots&\mathbf{A}b_{1s}\\ \mathbf{A}b_{21}&\mathbf{A}b_{22}&\cdots&\mathbf{A}b_{2s}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{A}b_{r1}&\mathbf{A}b_{r2}&\cdots&\mathbf{A}b_{rs}\end{array}\right),\] (II.1) where \(\mathbf{A}=[a_{ij}]\) and \(\mathbf{B}=[b_{ij}]\). We denote the Kronecker product of \(\mathbf{A}\) and \(\mathbf{B}\) by \(\mathbf{C}=\mathbf{A}\otimes\mathbf{B}\). For positive integers \(r,s,k\), let \(\mathbf{A}\in\mathbb{R}^{r\times r}\), \(\mathbf{B}\in\mathbb{R}^{s\times s}\), and \(\mathbf{I}_{k}\) be the identity matrix of order \(k\). The sum \(\mathbf{A}\otimes\mathbf{I}_{s}+\mathbf{I}_{r}\otimes\mathbf{B}\) is known as the Kronecker sum of \(\mathbf{A}\) and \(\mathbf{B}\). We denote the Kronecker sum of \(\mathbf{A}\) and \(\mathbf{B}\) by \(\mathbf{A}\bigoplus\mathbf{B}\). By definition, the adjacency matrix of \(G\boxtimes H\) is \(\mathbf{A}\bigoplus\mathbf{B}\), where \(\mathbf{A}\) is the adjacency matrix of \(G\) and \(\mathbf{B}\) is the adjacency matrix of \(H\) [50]. It is well-known that graph spectra behave well under the Cartesian product. Specifically, the spectrum of \(\mathbf{A}\bigoplus\mathbf{B}\) is \(\{\alpha_{i}+\beta_{j}\}_{1\leq i\leq r,1\leq j\leq s}\), where \(\{\alpha_{i}\}_{i=1}^{r}\) is the spectrum of \(\mathbf{A}\) and \(\{\beta_{j}\}_{j=1}^{s}\) is the spectrum of \(\mathbf{B}\). Furthermore, the eigenvectors of \(\mathbf{A}\bigoplus\mathbf{B}\) associated with \(\alpha_{i}+\beta_{j}\) are Kronecker products of the corresponding eigenvectors of \(\mathbf{A}\) and \(\mathbf{B}\) [50].
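These spectral facts are straightforward to verify numerically. The following sketch (NumPy, written in the standard Kronecker-product convention, with an arbitrary small ring/all-to-all example of our choosing) checks that the spectrum of the Kronecker sum consists of all pairwise sums of eigenvalues:

```python
# Spectrum of a Kronecker sum = all pairwise sums of the two spectra.
import numpy as np

def kronecker_sum(A, B):
    # With the block layout used in this paper (copies of the N-node graph
    # on the diagonal), this is I_M (x) A + B (x) I_N in NumPy's standard
    # convention, where A is N x N and B is M x M.
    return np.kron(np.eye(B.shape[0]), A) + np.kron(B, np.eye(A.shape[0]))

A = np.roll(np.eye(5), 1, 1) + np.roll(np.eye(5), -1, 1)   # ring of 5 nodes
B = np.ones((3, 3)) - np.eye(3)                            # all-to-all, 3 nodes
alpha, beta = np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)
assert np.allclose(np.sort(np.linalg.eigvalsh(kronecker_sum(A, B))),
                   np.sort(np.add.outer(alpha, beta).ravel()))
```

## III Kuramoto oscillators on multiplex networks

In this paper, we focus on networks of nonlinear oscillators, where the dynamical behavior of each node is given by the Kuramoto model [42; 43] \[\frac{d\theta_{i}(t)}{dt}=\omega_{i}+\sum_{j=1}^{\mathcal{N}}A_{ij}\sin(\theta_{j}(t)-\theta_{i}(t)).\] (III.1) Here, \(\theta_{i}(t)\in[-\pi,\,\pi]\) is the phase of the \(i^{\text{th}}\) oscillator at time \(t\), \(\omega_{i}\) is its natural frequency, \(\mathcal{N}\) is the number of oscillators in the system, and \(A_{ij}\) represents the elements of the adjacency matrix, which encode the connections in the system. Particularly, for multiplex networks, we can divide the connection structure into two different categories: intra-layer (within each layer) and inter-layer (between layers). With this in mind, we can now write the equation for the dynamics of each node as: \[\frac{d(\mathbf{\theta}_{l})_{i}}{dt}=\omega_{l}+\underbrace{\sum_{j=1}^{N}(\mathbf{A}_{ll})_{ij}\sin((\mathbf{\theta}_{l})_{j}-(\mathbf{\theta}_{l})_{i})}_{\text{intra-layer}}+\underbrace{\sum_{k=1,k\neq l}^{M}\sum_{j=1}^{N}(\mathbf{A}_{lk})_{ij}\sin((\mathbf{\theta}_{k})_{j}-(\mathbf{\theta}_{l})_{i})}_{\text{inter-layer}},\] (III.2) where we consider a system with \(M\) layers and \(N\) oscillators in each one. Here, \(l,k\in[1,\,M]\) represent the layer number, and \(i,j\in[1,\,N]\) represent the index of the oscillator within each layer. In this case, the matrices \(\mathbf{A}_{lk}\) represent the connections within each layer, when \(l=k\), and between layers, when \(l\neq k\); the coupling strengths connecting the nodes can be absorbed into the elements of these matrices, leading to weighted adjacency matrices. We can also represent the multiplex network in a form similar to Eq. (III.1), where the information about the intra-layer and inter-layer couplings is collected in a single matrix: \[\frac{d\theta_{i}}{dt}=\omega+\sum_{j=1}^{NM}K_{ij}\sin(\theta_{j}-\theta_{i}),\] (III.3) where now \(i\in[1,\,NM]\), such that: \[\mathbf{\theta}=(\underbrace{\theta_{1},\theta_{2},\cdots,\theta_{N}}_{1^{\text{st}}\text{ layer}},\underbrace{\theta_{N+1},\theta_{N+2},\cdots,\theta_{2N}}_{2^{\text{nd}}\text{ layer}},\cdots,\underbrace{\theta_{N(M-1)+1},\theta_{N(M-1)+2},\cdots,\theta_{NM}}_{M^{\text{th}}\text{ layer}}).\] (III.4) The adjacency matrix describing the multiplex system is the \(NM\times NM\) matrix: \[\mathbf{K}=\left(\begin{array}{cccc}\mathbf{A}_{11}&\mathbf{A}_{12}&\cdots&\mathbf{A}_{1M}\\ \mathbf{A}_{21}&\mathbf{A}_{22}&\cdots&\mathbf{A}_{2M}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{A}_{M1}&\mathbf{A}_{M2}&\cdots&\mathbf{A}_{MM}\end{array}\right),\] (III.5) where the matrices \(\mathbf{A}_{lk}\) are the same as introduced in Eq. (III.2). In this paper, we focus on Kuramoto oscillators on multiplex networks that satisfy the condition that the off-diagonal blocks \(\mathbf{A}_{lk}\), for \(l\neq k\), are scalar matrices. This assumption means that, for any pair of layers, each oscillator in one layer is connected with exactly one oscillator of the other layer. Using the ideas explained in the previous sections, combined with results in graph theory [50; 37], we can compose solutions for multiplex networks of Kuramoto oscillators.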
This is valid for equilibrium points and also for the transient behavior. To do so, we consider a multiplex network composed of \(M\) layers with \(N\) nodes each. In this case, we can study the representations of the intra-layer and inter-layer connections, which can be described, respectively, as \[\frac{d\psi_{i}(t)}{dt}=\omega_{\text{intra}}+\sum_{j=1}^{N}(\mathbf{A}_{\text{intra}})_{ij}\sin(\psi_{j}(t)-\psi_{i}(t)),\] (III.6) and \[\frac{d\phi_{i}(t)}{dt}=\omega_{\text{inter}}+\sum_{j=1}^{M}(\mathbf{A}_{\text{inter}})_{ij}\sin(\phi_{j}(t)-\phi_{i}(t)).\] (III.7) The adjacency matrix for each case can be described as: \[\mathbf{A}_{\text{intra}}=\begin{pmatrix}0&a_{12}&\cdots&a_{1N}\\ a_{21}&0&\cdots&a_{2N}\\ \vdots&\vdots&\ddots&\vdots\\ a_{N1}&a_{N2}&\cdots&0\end{pmatrix},\] (III.8) for the intra-layer connection, where \(a_{ij}\) represents the weight of the connection between oscillators \(i\) and \(j\), and no self-connections are considered. The inter-layer connections can be described by: \[\mathbf{A}_{\text{inter}}=\begin{pmatrix}0&\epsilon_{12}&\cdots&\epsilon_{1M}\\ \epsilon_{21}&0&\cdots&\epsilon_{2M}\\ \vdots&\vdots&\ddots&\vdots\\ \epsilon_{M1}&\epsilon_{M2}&\cdots&0\end{pmatrix},\] (III.9) where \(\epsilon_{lk}\) represents the coupling strength between oscillators in layers \(l\) and \(k\). For the multiplex networks we consider in this paper, the inter-layer connection structure can be understood as a first-neighbors connection scheme, where the node \(i\) in layer \(l\) is connected to the node \(i\) in layers \(l-1\) and \(l+1\). As explained in the previous section, we can now use the Kronecker sum of these two matrices and represent all connections between oscillators in the multiplex system as: \[\mathbf{A}=\mathbf{A}_{\text{intra}}\bigoplus\mathbf{A}_{\text{inter}}=\left(\begin{array}{cccc}\mathbf{A}_{\text{intra}}&\epsilon_{12}\mathbf{I}_{N}&\cdots&\epsilon_{1M}\mathbf{I}_{N}\\ \epsilon_{21}\mathbf{I}_{N}&\mathbf{A}_{\text{intra}}&\cdots&\epsilon_{2M}\mathbf{I}_{N}\\ \vdots&\vdots&\ddots&\vdots\\ \epsilon_{M1}\mathbf{I}_{N}&\epsilon_{M2}\mathbf{I}_{N}&\cdots&\mathbf{A}_{\text{intra}}\end{array}\right).\] (III.10) In addition, throughout the paper, we consider two different coupling strengths that act as scalars multiplying the matrices \(\mathbf{A}_{\text{intra}}\) and \(\mathbf{A}_{\text{inter}}\), namely \(\epsilon_{\text{intra}}\) and \(\epsilon_{\text{inter}}\), respectively. These parameters can be interpreted as the coupling strengths for the intra-layer and inter-layer systems, and can always be absorbed into the adjacency matrices.

## IV Composition of solutions in multiplex networks

With the perspective described in the previous section, we can study a multiplex network composed of \(M\) layers, each one with \(N\) oscillators. In this framework, we can consider the case where the connectivity of each layer is identical and arbitrary. So, we can study the large and sophisticated multiplex system with \(NM\) oscillators by studying the behavior of simpler, smaller systems: a network with \(N\) oscillators and connectivity given by the intra-layer connection scheme \(\mathbf{A}_{\text{intra}}\), and a network with \(M\) oscillators and connectivity given by the inter-layer connection scheme \(\mathbf{A}_{\text{inter}}\). With this, we can obtain solutions and the transient dynamical behavior of the multiplex network by studying the behavior of the intra-layer and inter-layer representations.
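Before stating this formally, a minimal numerical sketch conveys the idea: integrating the two small systems and summing their phases reproduces the full multiplex trajectory started from the composed initial condition. The graph sizes, weights, and the Euler integration below are arbitrary illustrative choices of ours, not the simulations reported in Sec. VI.

```python
# Sketch: trajectories of the multiplex network can be composed from the
# trajectories of the intra-layer (N nodes) and inter-layer (M nodes) systems.
import numpy as np

def rhs(theta, A):                       # Kuramoto vector field, omega = 0
    return (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)

def integrate(theta0, A, dt=1e-3, steps=5000):
    theta = theta0.copy()
    for _ in range(steps):
        theta = theta + dt * rhs(theta, A)
    return theta

rng = np.random.default_rng(0)
N, M = 10, 3
A_intra = rng.random((N, N)); A_intra = (A_intra + A_intra.T) / 2
np.fill_diagonal(A_intra, 0.0)
A_inter = np.ones((M, M)) - np.eye(M)                  # uniform eps_lk = 1
A = np.kron(np.eye(M), A_intra) + np.kron(A_inter, np.eye(N))  # Eq. (III.10)

psi0 = rng.uniform(-np.pi, np.pi, N)
phi0 = rng.uniform(-np.pi, np.pi, M)
theta0 = (phi0[:, None] + psi0[None, :]).ravel()       # composed state, Eq. (IV.3)

composed = (integrate(phi0, A_inter)[:, None]
            + integrate(psi0, A_intra)[None, :]).ravel()
direct = integrate(theta0, A)
print(np.max(np.abs(direct - composed)))  # ~0 (exact up to floating point)
```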
This composition procedure can be summarized in the following proposition:

**Proposition IV.1**.: _Let_ \[\mathbf{\psi}^{*}=(\psi_{1}^{*},\psi_{2}^{*},\cdots,\psi_{N}^{*}),\] (IV.1) _and_ \[\mathbf{\phi}^{*}=(\phi_{1}^{*},\phi_{2}^{*},\cdots,\phi_{M}^{*}),\] (IV.2) _be solutions of two single-layered networks given by matrices \(\mathbf{A}_{\rm intra}\) and \(\mathbf{A}_{\rm inter}\), whose dynamics are represented by Eqs. (III.6) and (III.7), respectively. Then_ \[\mathbf{\theta}^{*}=(\underbrace{\psi_{1}^{*}+\phi_{1}^{*},\psi_{2}^{*}+\phi_{1}^{*},\cdots,\psi_{N}^{*}+\phi_{1}^{*}}_{1^{\rm st}\text{ layer}},\underbrace{\psi_{1}^{*}+\phi_{2}^{*},\psi_{2}^{*}+\phi_{2}^{*},\cdots,\psi_{N}^{*}+\phi_{2}^{*}}_{2^{\rm nd}\text{ layer}},\cdots,\underbrace{\psi_{1}^{*}+\phi_{M}^{*},\psi_{2}^{*}+\phi_{M}^{*},\cdots,\psi_{N}^{*}+\phi_{M}^{*}}_{M^{\rm th}\text{ layer}})\] (IV.3) _is a corresponding solution of the multiplex network given by the matrix \(\mathbf{A}=\mathbf{A}_{\rm intra}\bigoplus\mathbf{A}_{\rm inter}\)._

Proof.: We are given that \[\frac{d\psi_{i}^{*}}{dt}=\sum_{j=1}^{N}(\mathbf{A}_{\rm intra})_{ij}\sin(\psi_{j}^{*}-\psi_{i}^{*}),\] (IV.4) and \[\frac{d\phi_{l}^{*}}{dt}=\sum_{k=1}^{M}(\mathbf{A}_{\rm inter})_{lk}\sin(\phi_{k}^{*}-\phi_{l}^{*}).\] (IV.5) We can now write \[\frac{d(\psi_{i}^{*}+\phi_{l}^{*})}{dt}=\frac{d\psi_{i}^{*}}{dt}+\frac{d\phi_{l}^{*}}{dt}.\] (IV.6) Here, we can use Eqs. (IV.4) and (IV.5), which leads to \[\frac{d(\psi_{i}^{*}+\phi_{l}^{*})}{dt}=\sum_{j=1}^{N}(\mathbf{A}_{\rm intra})_{ij}\sin\left(\psi_{j}^{*}-\psi_{i}^{*}\right)+\sum_{k=1}^{M}(\mathbf{A}_{\rm inter})_{lk}\sin\left(\phi_{k}^{*}-\phi_{l}^{*}\right),\] (IV.7) or even \[\frac{d(\psi_{i}^{*}+\phi_{l}^{*})}{dt}=\sum_{j=1}^{N}(\mathbf{A}_{\rm intra})_{ij}\sin\left((\psi_{j}^{*}-\psi_{i}^{*})+(\phi_{l}^{*}-\phi_{l}^{*})\right)+\sum_{k=1,k\neq l}^{M}\sum_{j=1}^{N}((\mathbf{A}_{\rm inter})_{lk}\mathbf{I})_{ij}\sin\left((\phi_{k}^{*}-\phi_{l}^{*})+(\psi_{j}^{*}-\psi_{j}^{*})\right),\] (IV.8) which leads to \[\frac{d(\psi_{i}^{*}+\phi_{l}^{*})}{dt}=\underbrace{\sum_{j=1}^{N}(\mathbf{A}_{\rm intra})_{ij}\sin\left((\psi_{j}^{*}+\phi_{l}^{*})-(\psi_{i}^{*}+\phi_{l}^{*})\right)}_{\text{intra-layer}}+\underbrace{\sum_{k=1,k\neq l}^{M}\sum_{j=1}^{N}((\mathbf{A}_{\rm inter})_{lk}\mathbf{I})_{ij}\sin\left((\psi_{j}^{*}+\phi_{k}^{*})-(\psi_{j}^{*}+\phi_{l}^{*})\right)}_{\text{inter-layer}}.\] (IV.9) With this, we prove that the composed state \(\mathbf{\theta}^{*}\) satisfies the equations of motion of the multiplex network, i.e., the solution of the multiplex network is equivalent to the sum of the solutions of the intra- and inter-layer systems.

Furthermore, we can extend this analysis and characterize the dynamical behavior of a multiplex network using the Kuramoto order parameter, which quantifies in a single number the level of phase synchronization a given network has [42; 43]. The Kuramoto order parameter for the multiplex system is defined as \[R(t)=\frac{1}{NM}\left|\sum_{j=1}^{NM}\exp\left({\rm i}\theta_{j}(t)\right)\right|,\] (IV.10) where \(\mathbf{\theta}(t)\) is given by Eq. (III.3). Here, \(R(t)=1\) means that all oscillators in all the layers have the same phase at a given time \(t\), which is defined as phase synchronization. For asynchronous behavior, \(R\) assumes residual values.
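In code, the order parameter is a one-liner; a small sketch (NumPy):

```python
# Kuramoto order parameter, Eq. (IV.10): modulus of the mean unit phase vector.
import numpy as np

def order_parameter(theta):
    return float(np.abs(np.exp(1j * np.asarray(theta)).mean()))
```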
Using the same idea, we also measure the level of synchronization of the Kuramoto network on the intra-layer and inter-layer representations: \[R_{\mathrm{intra}}(t)=\frac{1}{N}\left|\sum_{j=1}^{N}\exp{(\mathrm{i}\psi_{j}(t))}\right|,\] (IV.11) and \[R_{\mathrm{inter}}(t)=\frac{1}{M}\left|\sum_{j=1}^{M}\exp{(\mathrm{i}\phi_{j}(t))}\right|.\] (IV.12) By the definition of the composed solution and direct calculations, we have the following proposition.

**Proposition IV.2**.: _Suppose that \(\mathbf{\psi}\) represents the behavior of the Kuramoto model on \(\mathbf{A}_{\mathrm{intra}}\) -- given by Eq. (III.6) -- and \(\mathbf{\phi}\) represents the behavior of the Kuramoto model on \(\mathbf{A}_{\mathrm{inter}}\) -- given by Eq. (III.7). Further, \(\mathbf{\theta}\) represents the dynamical behavior of the Kuramoto model on \(\mathbf{A}\) -- the multiplex network, which is represented by Eq. (III.3). Then the relation between their Kuramoto order parameters is as follows:_ \[R(t)=R_{\mathrm{intra}}(t)*R_{\mathrm{inter}}(t).\] (IV.13) _Here, \(R_{\mathrm{intra}}(t)\) and \(R_{\mathrm{inter}}(t)\) can be directly obtained through Eqs. (IV.11) and (IV.12), respectively, and \(*\) indicates scalar multiplication._

Indeed, for the composed solution \(\theta_{(l-1)N+j}=\psi_{j}+\phi_{l}\), the double sum in Eq. (IV.10) factorizes: \[R(t)=\frac{1}{NM}\left|\sum_{l=1}^{M}\sum_{j=1}^{N}e^{\mathrm{i}(\psi_{j}(t)+\phi_{l}(t))}\right|=\frac{1}{N}\left|\sum_{j=1}^{N}e^{\mathrm{i}\psi_{j}(t)}\right|\frac{1}{M}\left|\sum_{l=1}^{M}e^{\mathrm{i}\phi_{l}(t)}\right|=R_{\mathrm{intra}}(t)\,R_{\mathrm{inter}}(t).\]

## V Stability of composed solutions in multiplex networks

In order to perform a stability analysis for a given solution, one needs a more in-depth analysis of the Jacobian matrix. In this section, we study the Jacobian for the intra-layer network in order to obtain information about the Jacobian of the multiplex network. Here, we denote the Jacobian for the multiplex system at the equilibrium point \(\mathbf{\theta}^{*}\) as \(\mathbf{J}(\mathbf{\theta}^{*})=\mathbf{J}_{\mathbf{A}}(\mathbf{\theta}^{*})\), and the Jacobian for a single layer at the equilibrium point \(\mathbf{\psi}^{*}\) as \(\mathbf{J}(\mathbf{\psi}^{*})=\mathbf{J}_{\mathbf{A}_{\mathrm{intra}}}(\mathbf{\psi}^{*})\). We first recall the matrix \(\mathbf{A}_{\mathrm{intra}}\), which is defined in Eq. (III.8). Based on this matrix, we can now write the Jacobian for a single layer as \[\mathbf{J}_{\mathbf{A}_{\mathrm{intra}}}(\mathbf{\psi}^{*})=\begin{pmatrix}-\lambda_{1}&a_{12}\cos(\psi_{2}^{*}-\psi_{1}^{*})&\cdots&a_{1N}\cos(\psi_{N}^{*}-\psi_{1}^{*})\\ a_{21}\cos(\psi_{1}^{*}-\psi_{2}^{*})&-\lambda_{2}&\cdots&a_{2N}\cos(\psi_{N}^{*}-\psi_{2}^{*})\\ \vdots&\vdots&\ddots&\vdots\\ a_{N1}\cos(\psi_{1}^{*}-\psi_{N}^{*})&a_{N2}\cos(\psi_{2}^{*}-\psi_{N}^{*})&\cdots&-\lambda_{N}\end{pmatrix},\] (V.1) where \[\lambda_{i}=\sum_{j=1,j\neq i}^{N}a_{ij}\cos(\psi_{j}^{*}-\psi_{i}^{*}).\] (V.2) We now compute the Jacobian \(\mathbf{J}(\mathbf{\theta}^{*})\). Let \(G_{l}\) be the graph representing the \(l^{\mathrm{th}}\) layer, and let \(\epsilon_{lk}\) be the coupling strength between nodes in layers \(l\) and \(k\). We emphasize that, for the stability analysis, the coupling between layers is considered uniform, i.e., oscillators in layer \(l\) are connected to oscillators in layer \(k\) with the same coupling strength.
By definition, for two indices \((i,j)\) such that \(i\in V(G_{l})\), \(j\in V(G_{k})\) with \(l\neq k\), we have \[[\mathbf{J}(\mathbf{\theta}^{*})]_{i,j}=(\mathbf{A}_{lk})_{i,j}\cos\left(\psi_{i\ (\mathrm{mod}\ N)}^{*}+\phi_{k}^{*}-\psi_{i\ (\mathrm{mod}\ N)}^{*}-\phi_{l}^{*}\right)=(\epsilon_{lk}\mathbf{I}_{N})_{i\ (\mathrm{mod}\ N),\,j\ (\mathrm{mod}\ N)}\cos\left(\phi_{k}^{*}-\phi_{l}^{*}\right).\] (V.3) If \(i,j\in V(G_{l})\) and \(i\neq j\), then \[[\mathbf{J}(\mathbf{\theta}^{*})]_{i,j}=(\mathbf{A}_{ll})_{i\ (\mathrm{mod}\ N),\,j\ (\mathrm{mod}\ N)}\cos\left(\psi_{j\ (\mathrm{mod}\ N)}^{*}+\phi_{l}^{*}-\psi_{i\ (\mathrm{mod}\ N)}^{*}-\phi_{l}^{*}\right)\] (V.4) \[[\mathbf{J}(\mathbf{\theta}^{*})]_{i,j}=(\mathbf{A}_{\mathrm{intra}})_{i\ (\mathrm{mod}\ N),\,j\ (\mathrm{mod}\ N)}\cos\left(\psi_{j\ (\mathrm{mod}\ N)}^{*}-\psi_{i\ (\mathrm{mod}\ N)}^{*}\right).\] (V.5) Finally, we need to consider the case \(i=j\) with \(i\in V(G_{l})\). For this part, we observe that \(\mathbf{J}(\mathbf{\theta}^{*})\) is a semi-magic square matrix with line sums equal to zero, so each diagonal entry is minus the sum of the off-diagonal entries in its row: \[[\mathbf{J}(\mathbf{\theta}^{*})]_{i,i}=-\lambda_{i\ (\mathrm{mod}\ N)}-c_{l},\qquad c_{l}=\sum_{k=1,k\neq l}^{M}\epsilon_{lk}\cos(\phi_{k}^{*}-\phi_{l}^{*}).\] (V.6) By combining these facts, we can write the Jacobian for the multiplex system at the equilibrium point \(\mathbf{\theta}^{*}\) as: \[\mathbf{J}(\mathbf{\theta}^{*})=\mathbf{J}_{\mathbf{A}}(\mathbf{\theta}^{*})=\left(\begin{array}{cccc}\mathbf{J}(\mathbf{\psi}^{*})-c_{1}\mathbf{I}&(\epsilon_{12}\mathbf{I})\cos(\phi_{2}^{*}-\phi_{1}^{*})&\cdots&(\epsilon_{1M}\mathbf{I})\cos(\phi_{M}^{*}-\phi_{1}^{*})\\ (\epsilon_{21}\mathbf{I})\cos(\phi_{1}^{*}-\phi_{2}^{*})&\mathbf{J}(\mathbf{\psi}^{*})-c_{2}\mathbf{I}&\cdots&(\epsilon_{2M}\mathbf{I})\cos(\phi_{M}^{*}-\phi_{2}^{*})\\ \vdots&\vdots&\ddots&\vdots\\ (\epsilon_{M1}\mathbf{I})\cos(\phi_{1}^{*}-\phi_{M}^{*})&(\epsilon_{M2}\mathbf{I})\cos(\phi_{2}^{*}-\phi_{M}^{*})&\cdots&\mathbf{J}(\mathbf{\psi}^{*})-c_{M}\mathbf{I}\end{array}\right),\] (V.7) where we recall that \(\epsilon_{lk}\) is the coupling between oscillators in layers \(l\) and \(k\), and \(c_{l}\) is defined in Eq. (V.6).

**Proposition V.1**.: _The spectrum of \(\mathbf{J}_{\mathbf{A}}(\mathbf{\theta}^{*})\) can be defined in terms of:_

* _the spectrum of the Jacobian_ \(\mathbf{J}_{\mathbf{A}_{\rm inter}}(\mathbf{\phi}^{*})\)_;_
* _the spectrum of the Jacobian_ \(\mathbf{J}_{\mathbf{A}_{\rm intra}}(\mathbf{\psi}^{*})\)_._

_Namely,_ \[\mathrm{Spec}(\mathbf{J}_{\mathbf{A}}(\mathbf{\theta}^{*}))=\mathrm{Spec}(\mathbf{J}_{\mathbf{A}_{\rm intra}}(\mathbf{\psi}^{*}))+\mathrm{Spec}(\mathbf{J}_{\mathbf{A}_{\rm inter}}(\mathbf{\phi}^{*})),\] (V.8) _where the sum denotes the set of all pairwise sums of eigenvalues._

This is clear, as \(\mathbf{J}_{\mathbf{A}}(\mathbf{\theta}^{*})=\mathbf{J}_{\mathbf{A}_{\rm intra}}(\mathbf{\psi}^{*})\bigoplus\mathbf{J}_{\mathbf{A}_{\rm inter}}(\mathbf{\phi}^{*})\).

**Proposition V.2**.: _The solution \(\mathbf{\theta}^{*}\) for the multiplex network obtained through Eq. (IV.3) is linearly stable if and only if the solutions for the intra- and inter-layer systems \(\mathbf{\psi}^{*}\) and \(\mathbf{\phi}^{*}\) are linearly stable._

Proof.: Assume that \(\mathbf{\psi}^{*}\) and \(\mathbf{\phi}^{*}\) are linearly stable; then \(\mathbf{J}_{\mathbf{A}_{\rm intra}}(\mathbf{\psi}^{*})\) and \(\mathbf{J}_{\mathbf{A}_{\rm inter}}(\mathbf{\phi}^{*})\) are symmetric negative-semidefinite. That is, all the eigenvalues of \(\mathbf{J}_{\mathbf{A}_{\rm intra}}(\mathbf{\psi}^{*})\) and \(\mathbf{J}_{\mathbf{A}_{\rm inter}}(\mathbf{\phi}^{*})\), except \(0\), are negative. So, all the eigenvalues of \(\mathbf{J}_{\mathbf{A}}(\mathbf{\theta}^{*})\), except \(0\), are negative, as the sum of negative numbers is negative. Hence, \(\mathbf{\theta}^{*}\) is linearly stable. Conversely, if the composed solution \(\mathbf{\theta}^{*}\) is linearly stable, then \(0\) is an eigenvalue of \(\mathbf{J}_{\mathbf{A}}(\mathbf{\theta}^{*})\) with multiplicity \(1\) and all other eigenvalues must be negative. From this, we can conclude that all the eigenvalues of \(\mathbf{J}_{\mathbf{A}_{\rm intra}}(\mathbf{\psi}^{*})\) and \(\mathbf{J}_{\mathbf{A}_{\rm inter}}(\mathbf{\phi}^{*})\), except \(0\), are negative. This shows that \(\mathbf{\psi}^{*}\) and \(\mathbf{\phi}^{*}\) are linearly stable.
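Proposition V.1 can likewise be checked numerically. The sketch below builds the multiplex Jacobian directly from the full adjacency matrix at the composed synchronized solution \(\psi^{*}=0\), \(\phi^{*}=0\) (an arbitrary illustrative choice of ours) and compares its spectrum with the pairwise sums of the small Jacobians' eigenvalues:

```python
# Numerical check of Proposition V.1 at the composed solution psi*=0, phi*=0.
import numpy as np

def jacobian(A, x):
    """Kuramoto Jacobian at state x for a symmetric adjacency matrix A."""
    J = A * np.cos(x[None, :] - x[:, None])
    np.fill_diagonal(J, 0.0)
    return J - np.diag(J.sum(axis=1))    # rows sum to zero, cf. Eq. (V.6)

rng = np.random.default_rng(1)
N, M = 6, 3
A_intra = rng.random((N, N)); A_intra = (A_intra + A_intra.T) / 2
np.fill_diagonal(A_intra, 0.0)
A_inter = np.ones((M, M)) - np.eye(M)
A_full = np.kron(np.eye(M), A_intra) + np.kron(A_inter, np.eye(N))

mu = np.linalg.eigvalsh(jacobian(A_intra, np.zeros(N)))
nu = np.linalg.eigvalsh(jacobian(A_inter, np.zeros(M)))
spec_full = np.linalg.eigvalsh(jacobian(A_full, np.zeros(N * M)))
assert np.allclose(np.sort(spec_full), np.sort(np.add.outer(mu, nu).ravel()))
```

## VI Numerical simulations and applications

In order to highlight the applicability of the mathematical approach we introduce in this paper to study multiplex networks of nonlinear oscillators, we present here several examples and numerical simulations. We first consider a multiplex system composed of \(M=3\) layers with \(N=20\) oscillators each. In this simple example, we consider all oscillators with the same natural frequency and represent their motion in the rotating frame, i.e., \(\omega=0\). The initial state for the multiplex network is given by random phases \(\theta_{i}(0)\in\mathcal{U}(-\pi,\pi)\), \(i\in[1,NM]\). We then consider the intra-layer and the inter-layer systems, as described before, and use Prop. IV.1 through Eq. (IV.3) to compose the initial conditions for the multiplex network. With this, we can study the dynamics of these two smaller systems to learn about the dynamical behavior of the multiplex network. Figure 2a shows the trajectories of the oscillators in the multiplex system (black solid lines) in each layer, which are obtained directly through the numerical integration of Eq. (III.3) representing the multiplex system. We observe that these trajectories perfectly match the trajectories given by the composed solution, or equivalent system (colored dashed lines), which are obtained through Eqs. (III.6), (III.7), and (IV.3). In this case, we show the trajectories of oscillators in each layer separately to help with the visualization, but we highlight that Eq. (IV.3) defining the composed solution is general and establishes the correspondence with the multiplex system. Here, we observe that the oscillators in all layers start at random phases in an asynchronous state; due to the intra-layer and inter-layer coupling, though, the oscillators evolve to a common phase, characterizing phase synchronization. We also analyze the dynamics of the multiplex network by using the Kuramoto order parameter, which gives us the level of phase synchronization of the system. We first use Eq. (IV.10) to obtain the order parameter directly through the phases \(\theta_{i}(t)\), which are obtained through the integration of Eq. (III.3). This is represented by the gray solid line (multiplex system) in Fig. 2b. We also obtain the Kuramoto order parameter with the composed solution (black dashed line). In this case, we can use Prop. IV.2 through Eq. (IV.13) to obtain the synchronization level of the whole system using only information about the smaller ones, i.e., the intra- and inter-layer systems.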
To further explore the dynamical behavior of multiplex networks and the use of the composed solution, we now consider the case where the natural frequency of the oscillators is no longer zero. Figure 3a shows the case where all oscillators have the same natural frequency. We then use the same procedure introduced in the previous figure and start the system with random initial conditions. We evaluate the synchronization level of the network, which is obtained directly through Eq. (IV.10) (gray solid line) and also using the composed solution Eq. (IV.13) (black dashed line), where we observe that the network is asynchronous at the beginning, but it evolves to phase synchronization. We also analyze the spatiotemporal dynamics of the system, which is represented by the phases \(\theta_{i}(t)\) plotted in color-code as a function of time and node index. We observe that, at first, each layer is desynchronized, but as time evolves the emergence of phase synchronization is reflected in the horizontal lines.

Figure 3b shows the case where the oscillators in different layers have distinct natural frequencies. In this more complicated case, our approach described in Prop. IV.1 through Eq. (IV.3) is still valid, and we study the Kuramoto order parameter of the multiplex network (gray solid line) using the composed solution (black dashed line). Again, due to random initial conditions, the system starts in an asynchronous state. Due to the intra and inter-layer couplings, however, the network evolves to higher levels of coherence and synchronization. In this case, though, because the natural frequencies are distinct, the order parameter does not reach one, and the multiplex network as a whole does not reach phase synchronization. We observe the spatiotemporal dynamics in this case and notice that each layer evolves to an individual state of phase synchronization. Because each layer is oscillating at a different frequency, the system as a whole does not evolve to a common phase. It is important to emphasize that our approach offers a perfect match in this case as well.

Figure 3: **Oscillation patterns in multiplex networks of Kuramoto oscillators.** **(a)** We first consider the case where all oscillators have the same, but non-zero natural frequency. We can measure the synchronization level using the order parameter obtained directly through Eq. (IV.10) – gray line – and also using the composed solution through Eq. (IV.13) – black dashed line. The network starts in random initial phases and evolves to phase synchronization, which can be also observed in the spatiotemporal patterns given by the emergence of horizontal lines. **(b)** We also consider the case where oscillators in different layers have distinct natural frequencies. We observe that our approach produces a perfect match with the direct analysis of the multiplex network. Also, in this case, each layer is phase synchronized internally, but the multiplex system as a whole does not reach phase synchronization due to the distinct natural frequencies, as observed in the spatiotemporal dynamics.

Multiplex networks composed of Kuramoto oscillators with heterogeneous natural frequencies can display a rich diversity of synchronization patterns, and the approach we introduce in this paper is able to offer a simplified way to analyze them. To exemplify this point, we consider a larger multiplex network composed of \(M=10\) layers, each one with \(N=100\) oscillators. In this case, oscillators within a given layer have the same natural frequency, but are distinct from oscillators in other layers. As considered before, we can use Prop. IV.1 to obtain the initial conditions for the multiplex, intra-layer and inter-layer networks in a way that we can use the smaller systems to obtain information about the multiplex one. Here, we use this proposition with random initial conditions \(\mathcal{U}(-\pi,\pi)\). We then numerically integrate the dynamics of the multiplex network using Eq. (III.3), which allows us to obtain the Kuramoto order parameter for the whole system directly through Eq. (IV.10) (Fig. 4a, gray solid line). As we discussed before, we can use Prop. IV.2 through Eq. (IV.13) to obtain the level of synchronization of the multiplex network through the composed solution (Fig. 4a, black dashed line), which perfectly matches the result obtained directly through the integration of the large multiplex system. In this case, we observe that, due to the interplay between the intra and inter-layer couplings and distinct natural frequencies, the dynamical behavior of the multiplex network is quite rich, and the order parameter does not increase monotonically to one. Instead, the synchronization level of the system is in constant change, increasing and decreasing in a sophisticated manner. Further, Fig. 4b shows the spatiotemporal dynamics of the multiplex system, where we can observe that each layer evolves to a phase synchronized state, but due to the distinct natural frequencies of oscillation, the layers transition between higher and lower levels of coherence. This can be better appreciated in Figs. 4c and 4d, where we plot the phases of oscillators in each layer and consider two different moments. First, when the synchronization level of the multiplex network is high (pink arrow), we can observe that the layers have a high degree of coherence (Fig. 4c). However, when the level of synchronization of the whole system is low (yellow arrow), we can observe that the layers are not synchronized (Fig. 4d).

Figure 4: **Multiplex networks of oscillators with heterogeneous natural frequency display rich dynamics.** We consider a large multiplex network composed of \(M=10\) layers each one with \(N=100\) oscillators. In this case, the oscillators in different layers have distinct natural frequencies. **(a)** We observe that the synchronization level of the multiplex system no longer monotonically transitions to unity, but rather exhibits sophisticated dynamics, where the system transitions between different levels of coherence. Further, the order parameter obtained directly through Eq. (IV.10) depicts a perfect match with the one obtained using Prop. IV.2 through Eq. (IV.13). **(b)** We observe that each layer transitions to phase synchronization individually, but no phase synchronization is observed in the multiplex network as a whole. In this case, the network transitions between lower and higher levels of coherence among layers, where at specific points **(c)** the layers are in sync (pink arrow) **(d)** but it quickly transitions to a state with a low level of coherence (yellow arrow).

So far, we observe that by using the intra and inter-layer systems, we can study the dynamics of the respective multiplex network. In the cases presented in the previous figures, we start with random initial conditions and study the emergence of patterns of oscillations and synchronization. At the same time, however, we can use Prop. IV.1 through Eq. (IV.3) to find possible solutions for the multiplex network of Kuramoto oscillators. To do so, we can find solutions for the systems described by Kuramoto oscillators on the inter-layer and intra-layer matrices, and then use Eq. (IV.3) to compose a solution for the multiplex network. Further, we can then use Prop. V.2 to study the linear stability of these solutions, by considering the linear stability of the solutions for the inter and intra-layer networks. To exemplify this point, we consider multiplex networks where the intra-layer connection is given by a ring graph with periodic boundary conditions. In this case, possible solutions for the inter and intra-layer systems can be expressed as: \[\mathbf{\theta}^{(p)}=\left(0,\frac{-2\pi p}{\mathcal{N}},\cdots,\frac{-2\pi p(\mathcal{N}-1)}{\mathcal{N}}\right),\] (VI.1) where \(\mathcal{N}\) is the number of oscillators, with \(\mathcal{N}=M\) for the inter-layer network and \(\mathcal{N}=N\) for the intra-layer network, and \(p\) defines different solutions, where \(p=0\) represents phase synchronization, and \(0<p\leq\nicefrac{{\mathcal{N}}}{{2}}\) represents different phase-locking or twisted states.

As an example, we consider the solutions for the inter and intra-layer networks using Eq. (VI.1) with \(p=1\) and \(p=2\), respectively. In this case, we consider a multiplex network with \(M=3\) layers each one composed of \(N=100\) oscillators with the same natural frequency. With this, we have the solutions for the inter and intra-layer networks, \(\mathbf{\phi}^{*}\) and \(\mathbf{\psi}^{*}\), respectively. We then use Eq. (IV.3) to compose the solution for the multiplex network and use it as the initial condition for the numerical simulation. Figure 5a shows the numerical results for this system, where we observe that the spatiotemporal dynamics (left) stay the same as time evolves, since the obtained phase configuration (upper right) is a solution for the multiplex network. This can be also appreciated by the evaluation of the Kuramoto order parameter for each network (lower right), where it is zero for the whole simulation. Here, the order parameter for the multiplex network is obtained directly through Eq. (IV.10), the order parameter for the composed solution is given by Prop. IV.2 through Eq. (IV.13), the order parameter for the inter-layer system is given by Eq. (IV.12), and for the intra-layer network by Eq. (IV.11).

Figure 5: **Composing solutions for multiplex networks.** We use Prop. IV.1 through Eq. (IV.3) to compose solutions for the multiplex system with \(M=3\) layers each one with \(N=100\) oscillators. Here, we consider individual solutions for the inter and intra-layer networks given by Eq. (VI.1) with \(p=1\) and \(p=2\), respectively. **(a)** In the first example, without any perturbation, the system does not change its pattern when we use this solution as the initial state. **(b)** We then consider perturbation on the inter-layer solution. In this case, the multiplex network transitions to a different twisted state, which is still a solution for the system. The inter-layer network transitions to phase synchronization due to the perturbation. **(c)** When we perturb the solution of the intra-layer system, we observe a different transition in the spatiotemporal dynamics. In this case, each layer transitions to phase synchronization, but on a different phase, which characterizes a new solution for the multiplex network with an order parameter equal to zero. **(d)** At last, we consider perturbation on the solutions for the inter and intra-layer systems. In this case, the multiplex network as a whole transitions to phase synchronization, and the order parameter reaches one.

We can now consider perturbations over these solutions and study the response of the system. The perturbations are applied to each element of the vector representing the solution in Eq. (VI.1). Mathematically, the perturbation is given by \(\eta\mathcal{U}(-\pi,\pi)\), where \(\eta=0.025\) is the amplitude. After adding the perturbation to the solution given by Eq. (VI.1), we wrap the phase pattern again in the interval \([-\pi,\pi)\) to facilitate the visualization. We again consider the solutions for the inter and intra-layer networks using Eq. (VI.1) with \(p=1\) and \(p=2\), respectively, with \(M=3\) layers each one composed of \(N=100\) oscillators. In Fig. 5b, we consider the case where we apply the perturbation to the solution of the inter-layer network. In a similar way as observed before, the spatiotemporal dynamics (left) starts with each layer in distinct phase-locking states, but due to the perturbation on the inter-layer system, the multiplex network transitions to a different phase-locking state (upper right). This new state is still a solution for the multiplex system, and the order parameter for the multiplex network and for the composed solution does not change and is zero for the whole simulation (lower right). Interestingly, the order parameter for the inter-layer network transitions to one due to the perturbation, since the twisted state for this system is not stable (\(M=3\)). The order parameter for the intra-layer networks, on the other hand, does not change.

Going further, we consider the case with perturbation on the intra-layer system (Fig. 5c). In this case, we observe a different transition in the spatiotemporal dynamics (left). Due to the perturbation in the solution for the intra-layer system, we can appreciate a small deviation in the initial state in comparison to the previous cases (upper right). Interestingly, because the perturbation occurs at the intra-layer level, each layer transitions to phase synchronization, and the order parameter for the intra-layer system transitions to one (lower right). At the same time, because the solution for the multiplex network is composed in combination with the inter-layer solution, the multiplex network as a whole does not transition to a common phase, and the order parameter of the multiplex network, as well as of the composed solution, remains zero (lower right). In this case, the phase pattern for the whole system is now given by individual phase-synchronized layers in different phases (upper right).

At last, we consider the case where the perturbation is applied to the inter and intra-layer solutions (Fig. 5d). Under these conditions, the system starts
The framework and results introduced in this paper contribute to the study of the dynamical behavior of networks with different levels of connection. Particularly, we have recently introduced a similar approach to multilayer networks [38], which allows us to study the dynamics of multilayer networks using a reduced representation. Moreover, a similar approach has been introduced recently for the study of multilayer networks, where the reduced representations of multilevel networks are used to study the master stability function of multilayer networks with application in neural networks [41]. Here, we have used similar ideas to extend the study of Kuramoto oscillators on multiplex networks, an important dynamical system that has extensively studied [45; 46; 47; 48; 49]. The framework we introduce in this paper can be applied to oscillators with different instinct frequency (Figs. 3 and 4), which allows for the study of a diversity of oscillation patterns in multiplex networks. Further, our approach can be extended in the future in light of a new perspective for nonlinear oscillator networks [52; 53; 54], where analytical and geometric insights are available for the emergence of synchronization phenomena. The development of different mathematical approaches for networked systems is of great importance due to its direct applicability to study many problems. For instance, many problems in neuroscience can be modeled and studied using multilayer networks [55; 56; 57], where the emergence of sophisticated spatiotemporal patterns plays an important role [55; 58]. The study of these networks thus opens the possibility to develop further analyses of computations performed with these systems. ###### Acknowledgements. We thank Alex Busch for helping with the illustrations. This work was supported by BrainSCAN at Western University through the Canada First Research Excellence Fund (CFREF), the NSF through a NeuroNex award (#2015276), the Natural Sciences and Engineering Research Council of Canada (NSERC) grant R0370A01, Compute Ontario (computeontaro.ca), Digital Research Alliance of Canada (alliancecan.ca), and the Western Academy for Advanced Research. R.C.B gratefully acknowledges the Western Institute for Neuroscience Clinical Research Postdoctoral Fellowship. ## Code availability An open-source code repository for this work is available on GitHub: [http://mullerlab.github.io](http://mullerlab.github.io). ## Appendix - Computational details Numerical integration was performed with the Euler method with a time step given by \(dt=0.001\). The network structure is given as follows (if not stated otherwise): the connection scheme within each layer is given by an undirected Erdos-Renyi graph with probability of \(0.25\); the connection scheme between layers is given by a \(k\)-regular graph with \(k=1\). The parameters used in the simulations depicted in each figure are listed in Table 1.
2303.03294
Involutions on K3 surfaces and derived equivalence
We study involutions on K3 surfaces under conjugation by derived equivalence and more general relations, together with applications to equivariant birational geometry.
Brendan Hassett, Yuri Tschinkel
2023-03-06T17:12:02Z
http://arxiv.org/abs/2303.03294v2
# Involutions on K3 surfaces and derived equivalence

###### Abstract.

We study involutions on K3 surfaces under conjugation by derived equivalence and more general relations, together with applications to equivariant birational geometry.

## 1. Introduction

The structure of \(\operatorname{Aut}D^{b}(X)\), the group of autoequivalences of the bounded derived category \(D^{b}(X)\) of a K3 surface \(X\), is very rich but well-understood only when the Picard group \(\operatorname{Pic}(X)\) has rank one [1]. The automorphism group \(\operatorname{Aut}(X)\) of \(X\) lifts to \(\operatorname{Aut}D^{b}(X)\), and one may consider the problem of classification of finite subgroups \(G\subset\operatorname{Aut}(X)\) up to conjugation - either by automorphisms, derived equivalence, or even larger groups. This problem is already interesting for cyclic \(G\), and even for involutions, e.g., Enriques or Nikulin involutions. There is an extensive literature classifying these involutions on a given K3 surface \(X\): topological types, moduli spaces of polarized K3 surfaces with involution, and the involutions on a single \(X\) up to automorphisms, see, e.g., [1], [2], [3], [4]. Here we investigate involutions up to derived equivalence, i.e., derived equivalences respecting involutions. Our interest in "derived" phenomena was sparked by a result in [20] - there exist complex conjugate, derived equivalent, nonisomorphic K3 surfaces - as well as our investigations of arithmetic problems on K3 surfaces [13]. One large class of involutions \(\sigma:X\to X\) consists of those whose quotient \(Q=X/\sigma\) is rational. Examples include \(Q\) a del Pezzo surface and \(X\to Q\) a double cover branched along a smooth curve \(B\in|-2K_{Q}|\). We may allow \(Q\) to have ADE surface singularities away from \(B\), or \(B\) to have ADE curve singularities; then we take \(X\) as the minimal resolution of the resulting double cover of \(Q\). These were studied by Alexeev and Nikulin in connection with classification questions concerning singular del Pezzo surfaces [1]. Our principal result here (see Section 5) is that

* equivariant derived equivalences of such \((X,\sigma)\) are in fact equivariant isomorphisms (see Corollary 11).

Our study of stable equivalence of lattices with involution leads us to a notion of _skew equivalence_, presented in Section 6. Here, duality interacts with the involution, which is reflected in a functional equation for the Fourier-Mukai kernel. Explicit examples, for anti-symplectic actions with quotients equal to \(\mathbb{P}^{2}\), are presented in Section 7. Next, we focus on _Nikulin_ involutions \(\iota:X\to X\), i.e., involutions preserving the symplectic form, so that the resolution of singularities \(Y\) of the resulting quotient \(X/\iota\) is a K3 surface. A detailed study of such involutions can be found in [13]. In addition to the polarization class, the Picard group \(\operatorname{Pic}(X)\) contains the lattice \(\operatorname{E}_{8}(-2)\); van Geemen and Sarti describe the moduli and the geometry in the case of minimal Picard rank \(\operatorname{rk}\operatorname{Pic}(X)=9\). In Section 8, we extend their results to higher ranks, and

* exhibit nontrivial derived equivalences between Nikulin involutions (Proposition 21).

These, in turn, allow us to construct in Section 9 examples of equivariant birational isomorphisms \(\phi:\mathbb{P}^{4}\dashrightarrow\mathbb{P}^{4}\) with nonvanishing invariant \(C_{G}(\phi)\), introduced in [11] and extended to the equivariant context in [12].
The case of _Enriques_ involutions \(\epsilon:X\to X\), i.e., fixed-point free involutions, so that the resulting quotient \(X/\epsilon\) is an Enriques surface, has also received considerable attention. There is a parametrization of such involutions in terms of the Mukai lattice \(\tilde{\operatorname{H}}(X)\), and an explicit description of conjugacy classes, up to automorphisms \(\operatorname{Aut}(X)\), in interesting special cases, e.g., for K3 surfaces of Picard rank 11, Kummer surfaces of product type, general Kummer surfaces, or singular K3 surfaces [10], [14], [15], [16]. In Section 10 we observe that

* the existence of an Enriques involution on a K3 surface \(X\) implies that every derived equivalent surface is equivariantly isomorphic to \(X\) (Propositions 28 and 29);
* while there are _no_ nontrivial equivariant derived autoequivalences, we exhibit nontrivial _orientation reversing_ (i.e., skew) equivalences, e.g., on singular K3 surfaces.

**Acknowledgments:** The first author was partially supported by Simons Foundation Award 546235 and NSF grant 1701659, the second author by NSF grant 2000099. We are grateful to Nicolas Addington, Sarah Frei, Lisa Marquand, and Barry Mazur for helpful suggestions.

## 2. Lattice results

We recall basic terminology and results concerning lattices: torsion-free finite-rank abelian groups \(\mathrm{L}\) together with a nondegenerate integral quadratic form \((\cdot,\cdot)\), which we assume to be even. Basic examples are \[\mathrm{U}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\] and positive definite lattices associated with Dynkin diagrams (denoted by the same letter). We write \(\mathrm{L}(2)\) when the form is multiplied by \(2\). We let \[d(\mathrm{L}):=\mathrm{L}^{*}/\mathrm{L}\] be the discriminant group and \[q_{\mathrm{L}}:d(\mathrm{L})\to\mathbb{Q}/2\mathbb{Z}\] the induced discriminant quadratic form.

**Nikulin's form of Witt cancellation:**

**Proposition 1**.: _[_15_, Cor. 1.13.4]_ _Given an even lattice \(\mathrm{L}\), \(\mathrm{L}\oplus\mathrm{U}\) is the unique lattice with its signature and discriminant quadratic form._

If lattices \(\mathrm{L}_{1}\) and \(\mathrm{L}_{2}\) are stably isomorphic - become isomorphic after adding unimodular lattices of the same signature - then \[\mathrm{L}_{1}\oplus\mathrm{U}\simeq\mathrm{L}_{2}\oplus\mathrm{U}.\]

**Nikulin stabilization result:** Given a lattice \(\mathrm{L}\), write \(\mathrm{L}_{p}=\mathrm{L}\otimes_{\mathbb{Z}}\mathbb{Z}_{p}\) for the induced \(p\)-adic bilinear form. The \(p\)-primary part of \(d(\mathrm{L})\) depends only on \(\mathrm{L}_{p}\) and is written \(d(\mathrm{L}_{p})\). We use \(q_{\mathrm{L}_{p}}\) for the induced discriminant quadratic form on \(d(\mathrm{L}_{p})\). For a finitely generated abelian group \(A\), let \(\ell(A)\) denote the minimal number of generators.

**Proposition 2**.: _[_15_, Thm. 1.14.2]_ _Let \(\mathrm{L}\) be an even indefinite lattice satisfying_

* \(\mathrm{rank}(\mathrm{L})\geq\ell(d(\mathrm{L}_{p}))+2\) _for all_ \(p\neq 2\)_;_
* _if_ \(\operatorname{rank}(L)=\ell(d(L_{2}))\) _then_ \(q_{L_{2}}\) _contains_ \(u_{+}^{(2)}(2)\) _or_ \(v_{+}^{(2)}(2)\) _as a summand, i.e., the discriminant quadratic forms of_ \[\operatorname{U}^{(2)}(2)=\begin{pmatrix}0&2\\ 2&0\end{pmatrix},\quad\operatorname{V}^{(2)}(2)=\begin{pmatrix}4&2\\ 2&4\end{pmatrix}.\]

_Then the genus of \(L\) admits a unique class and \(\operatorname{O}(L)\to\operatorname{O}(q_{L})\) is surjective._

**Remark 3**.: [20, Rem.
1.14.5] The \(2\)-adic condition can be achieved whenever the discriminant group \(d(L)\) has \((\mathbb{Z}/2\mathbb{Z})^{3}\) as a summand. Thus given a lattice \(L\), any automorphism of \((d(L),q_{L})\) may be achieved via an automorphism of \(L\oplus U\). More precisely, given two lattices \(L_{1}\) and \(L_{2}\) of the same rank and signature and an isomorphism \[\varrho:(d(L_{1}),q_{L_{1}})\stackrel{{\sim}}{{\longrightarrow}}(d(L_{2}),q_{L_{2}})\] there exists an isomorphism \[\rho:L_{1}\oplus U\stackrel{{\sim}}{{\longrightarrow}}L_{2}\oplus U\] inducing \(\varrho\).

**Nikulin imbedding result:**

**Proposition 4**.: _[_20_, Cor. 1.12.3, Thm. 1.14.4]_ _Let \(L\) be an even lattice of signature \((t_{+},t_{-})\) and discriminant group \(d(L)\). Then \(L\) admits a primitive embedding into a unimodular even lattice of signature \((\ell_{+},\ell_{-})\) if_

* \(\ell_{+}-\ell_{-}\equiv 0\mod 8\)_;_
* \(\ell_{+}\geq t_{+}\) _and_ \(\ell_{-}\geq t_{-}\)_;_
* \(\ell_{+}+\ell_{-}-t_{+}-t_{-}>\ell(d(L))\)_, the rank of_ \(d(L)\)_._

_This embedding is unique up to automorphisms if_

* \(\ell_{+}>t_{+}\) _and_ \(\ell_{-}>t_{-}\)_;_
* \(\ell_{+}+\ell_{-}-t_{+}-t_{-}\geq 2+\ell(d(L))\)_._

In particular, _any_ even nondegenerate lattice of signature \((1,9)\) admits a unique embedding into the K3 lattice \(\operatorname{U}^{\oplus 3}\oplus E_{8}(-1)^{\oplus 2}\).

## 3. Mukai lattices and derived automorphisms

Throughout, we work over the complex numbers \(\mathbb{C}\). Let \(X\) be a complex K3 surface and \[\operatorname{Pic}(X)\subset\operatorname{H}^{2}(X,\mathbb{Z})\simeq\operatorname{E}_{8}(-1)^{\oplus 2}\oplus\operatorname{U}^{3}\] its Picard lattice, a sublattice of a lattice of signature \((3,19)\), with respect to the intersection pairing. The Picard lattice determines the automorphisms of \(X\): the natural map \[\operatorname{Aut}(X)\to\operatorname{O}(\operatorname{Pic}(X))/\langle\text{reflections by $(-2)$-classes}\rangle,\] to the quotient of the orthogonal group of the Picard lattice, has finite kernel and cokernel. All possible finite \(G\subset\operatorname{Aut}(X)\) have been classified, see [1]. Classification of \(\operatorname{Aut}(X)\)-conjugacy classes of elements or subgroups boils down to lattice theory of \(\operatorname{Pic}(X)\); we will revisit it in special cases below. The _transcendental lattice_ of \(X\) is the orthogonal complement \[T(X):=\operatorname{Pic}(X)^{\perp}\subset\operatorname{H}^{2}(X,\mathbb{Z}).\] This lattice plays a special role: two K3 surfaces \(X_{1}\), \(X_{2}\) are _derived equivalent_ if and only if there exists an isomorphism of lattices \[T(X_{1})\stackrel{{\sim}}{{\longrightarrow}}T(X_{2}),\] compatible with Hodge structures [10]. Derived equivalence also implies that the lattices \(\operatorname{Pic}(X_{1})\) and \(\operatorname{Pic}(X_{2})\) belong to the same genus. Over nonclosed fields, or in equivariant contexts, derived equivalence is a subtle property, see, e.g., [11], [12]. We recall standard examples of Picard lattices of derived equivalent but not isomorphic K3 surfaces.

**Remark 5**.: In Picard rank one: the number of nonisomorphic derived equivalent surfaces is governed by the number of prime divisors of the polarization degree \(2d\); see [10, Cor. 2.7]. The isomorphism classes correspond to solutions of the congruence \[x^{2}\equiv 1\pmod{4d} \tag{3.1}\] modulo \(\pm 1\).
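For concreteness, here is a minimal sketch enumerating the solutions of (3.1) for a few illustrative values of \(d\); the precise identification of these solutions with isomorphism classes is as in [10].

```python
# Solutions of the congruence (3.1), x^2 = 1 (mod 4d), listed up to x ~ -x;
# the sample values of d are illustrative.
def square_roots_of_one(d):
    modulus = 4 * d
    sols = [x for x in range(modulus) if (x * x) % modulus == 1]
    return sorted({min(x, modulus - x) for x in sols})

for d in (1, 2, 5, 6, 15):
    print(f"d = {d:2d}: {square_roots_of_one(d)}")
```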
When \(d>1\) the number of derived equivalent K3 surfaces is \(2^{\tau(d)-1}\), where \(\tau(d)\) is the number of distinct prime factors of \(d\). In Picard rank two: derived equivalences among lattice-polarized K3 surfaces of square-free discriminant are governed by the genera in the class group of the corresponding real quadratic field [10, Sect. 3]. Here are instances where derived equivalence is trivial:

**Proposition 6**.: _[_10_, Cor. 2.6, 2.7]_ _Derived equivalence implies isomorphism in each of the following cases:_

* _if the Picard rank is_ \(\geq 12\)_;_
* _if the surface admits an elliptic fibration with a section;_
* _if the Picard rank is_ \(\geq 3\) _and the discriminant group of the Picard group is cyclic._

We give a further example in Proposition 21. Let \[\tilde{\mathrm{H}}(X):=\mathrm{H}^{0}(X,\mathbb{Z})(-1)\oplus\mathrm{H}^{2}(X,\mathbb{Z})\oplus\mathrm{H}^{4}(X,\mathbb{Z})(1)\] be the _Mukai lattice_ of \(X\), a lattice of signature \((4,20)\), with respect to the Mukai pairing. There is a surjective homomorphism [10, Cor. 3] \[\mathrm{Aut}\,D^{b}(X)\to\mathrm{O}^{+}(\tilde{\mathrm{H}}(X))\subset\mathrm{O}(\tilde{\mathrm{H}}(X))\] onto the group of _signed_ Hodge isometries, a subgroup of the orthogonal group of the Mukai lattice preserving orientations on the positive \(4\)-planes. We retain the notation from [11, Sect. 2], where we discussed the notion and basic properties of equivariant derived equivalences between K3 surfaces. We recall: Let \(X_{1}\) and \(X_{2}\) be K3 surfaces equipped with a generically free action of a finite cyclic group \(G\). Then \(X_{1}\) and \(X_{2}\) are \(G\)-equivariantly derived equivalent if and only if there exists a \(G\)-equivariant isomorphism of their Mukai lattices \[\tilde{\mathrm{H}}(X_{1})\stackrel{{\sim}}{{\longrightarrow}}\tilde{\mathrm{H}}(X_{2})\] respecting the Hodge structures. Note that the \(G\)-action is necessarily trivial on \[\mathrm{H}^{0}(X,\mathbb{Z})(-1)\oplus\mathrm{H}^{4}(X,\mathbb{Z})(1).\] Even in the event of an isomorphism \(X_{1}\simeq X_{2}\), equivariant derived equivalences are interesting: indeed, there are actions of finite groups \(G\) that are not conjugate in \(\mathrm{Aut}(X)\) but are conjugate via \(\mathrm{Aut}\,D^{b}(X)\), as the latter group is visibly larger. Let \(G\) be a finite group and \(X_{1}\) and \(X_{2}\) K3 surfaces with \(G\)-actions. For simplicity, assume that \(G\) acts on \(T(X_{i})\) via \(\pm\mathrm{I}\). (This is the case if the transcendental cohomology is simple.) Given a \(G\)-equivariant isomorphism \(T(X_{1})\simeq T(X_{2})\), can we lift it to a \(G\)-equivariant isomorphism of Mukai lattices \[\tilde{\mathrm{H}}(X_{1},\mathbb{Z})\simeq\tilde{\mathrm{H}}(X_{2},\mathbb{Z}),\] where \(G\) acts trivially on the hyperbolic summand \[\mathrm{U}=\mathrm{H}^{0}\oplus\mathrm{H}^{4}?\] Clearly the answer is NO. Suppose that \(G=C_{2}=\langle\epsilon\rangle\) and the \(\epsilon=-1\) eigenspaces are stably isomorphic but not isomorphic. Adding \(\mathrm{U}\) does nothing to achieve the desired stabilization. In other words, \(\mathrm{U}\) is "too small". We need to add summands where \(G\) acts nontrivially to achieve stabilization across all the various isotypic components.

## 4. Cohomological Fourier-Mukai transforms

Let \(X_{1}\) and \(X_{2}\) be smooth projective complex K3 surfaces.
A fundamental result of Orlov [10] shows that any equivalence \[\Phi:D^{b}(X_{1})\to D^{b}(X_{2})\] arises from a kernel \(\mathcal{K}\in D^{b}(X_{1}\times X_{2})\) through a Fourier-Mukai transform \[\begin{array}{rcl}\Phi_{\mathcal{K}}:D^{b}(X_{1})&\to&D^{b}(X_{2})\\ \mathcal{E}&\mapsto&{\pi_{2}}_{*}(\pi_{1}^{*}\mathcal{E}\otimes\mathcal{K}).\end{array}\] All the indicated functors are taken in their derived senses. Given such a kernel, there is also a Fourier-Mukai transform in the opposite direction \[\begin{array}{rcl}\Psi_{\mathcal{K}}:D^{b}(X_{2})&\to&D^{b}(X_{1})\\ \mathcal{E}&\mapsto&{\pi_{1}}_{*}(\pi_{2}^{*}\mathcal{E}\otimes\mathcal{K}).\end{array}\] Mukai has computed the kernel of the inverse \[\Phi_{\mathcal{K}}^{-1}=\Psi_{\mathcal{K}^{\vee}[2]},\] i.e., a shift of the dual of our original kernel. See [11, 4.10], [12, § 4.3], and [14, p. 133] for details. The computation relies on Grothendieck-Serre Duality, so the appearance of the dualizing complex is natural. This machinery [14, § 3.4] also allows us to analyze how Fourier-Mukai transforms interact with taking duals: \[\begin{array}{rcl}\Phi_{\mathcal{K}}(\mathcal{E}^{\vee})&={\pi_{2}}_{*}(\mathcal{K}\otimes\pi_{1}^{*}(\mathcal{E}^{\vee}))\\ &=(({\pi_{2}}_{*}(\mathcal{K}^{\vee}\otimes\pi_{1}^{*}\mathcal{E}))^{\vee})[-2]\\ &=((\Phi_{\mathcal{K}^{\vee}}\mathcal{E})[2])^{\vee}\\ &=(\Phi_{\mathcal{K}^{\vee}[2]}\mathcal{E})^{\vee}\end{array}\] Suppose that \(X_{1}\) and \(X_{2}\) are equivalent through an isomorphism \[X_{2}=M_{v}(X_{1}),\] i.e., the moduli space of simple sheaves \(\mathcal{E}_{p},p\in X_{2}\), on \(X_{1}\) with Mukai vector \[v(\mathcal{E}_{p})=(r,D,s)\in\tilde{\mathrm{H}}(X_{1},\mathbb{Z}).\] Here \(r\) is the rank of \(\mathcal{E}_{p}\), \(D=c_{1}(\mathcal{E}_{p})\), and \(s=\chi(\mathcal{E}_{p})-r\). We assume there exists another Hodge class \(v^{\prime}\in\tilde{\mathrm{H}}(X_{1},\mathbb{Z})\) such that \(\langle v,v^{\prime}\rangle=1\); in particular, \(v\) is primitive. Let \(\mathcal{E}\to X_{1}\times X_{2}\) denote a universal sheaf; by simplicity of the sheaves, \(\mathcal{E}\) is unique up to tensoring by a line bundle from \(X_{2}\). We may use \(\mathcal{E}\) as a kernel inducing a derived equivalence between \(X_{1}\) and \(X_{2}\) [10, 10.25]. Our formulas for inverses are compatible with tensoring the kernel by line bundles from one of the factors. In searching for Fourier-Mukai kernels, cohomological Fourier-Mukai transforms play a crucial role. Let \(\omega_{i}\in\mathrm{H}^{4}(X_{i},\mathbb{Z})\) denote the point class and set [11, §1], [12] \[Z_{\mathcal{K}}:=\pi_{1}^{*}(1+\omega_{1})\operatorname{ch}(\mathcal{K})\pi_{2}^{*}(1+\omega_{2})\in\mathrm{H}^{*}(X_{1}\times X_{2},\mathbb{Z}),\] where the middle term is the Chern character. Then \(Z_{\mathcal{K}}\) induces an integral isomorphism of Hodge structures \[\phi_{\mathcal{K}}:\tilde{\mathrm{H}}(X_{1},\mathbb{Z})\stackrel{{\sim}}{{\longrightarrow}}\tilde{\mathrm{H}}(X_{2},\mathbb{Z})\] compatible with Mukai pairings; this is called the _cohomological Fourier-Mukai transform_. For \(\mathcal{E}\in D^{b}(X_{1})\), we have the identity \[\phi_{\mathcal{K}}(v(\mathcal{E}))=v(\Phi_{\mathcal{K}}(\mathcal{E})).\] We use \(\psi_{\mathcal{K}}\) to denote the cohomological transform of \(\Psi_{\mathcal{K}}\).
Most cohomological Fourier-Mukai transforms are induced by kernels:

**Proposition 7**.: _[_1_, 12_]_ _Given an orientation-preserving integral Hodge isometry_ \[\phi:\tilde{\mathrm{H}}(X_{1},\mathbb{Z})\to\tilde{\mathrm{H}}(X_{2},\mathbb{Z})\] _there exists a derived equivalence_ \[\Phi_{\mathcal{K}}:D^{b}(X_{1})\to D^{b}(X_{2})\] _such that \(\phi\) is the cohomological Fourier-Mukai transform of \(\Phi_{\mathcal{K}}\)._

Suppose that \((X_{1},f_{1})\) is a polarized K3 surface of degree \(2r_{0}s\), where \(r_{0}\) and \(s\) are relatively prime positive integers. Let \(d_{0}\) be an integer prime to \(r_{0}\) and fix the isotropic Mukai vector \[v_{0}=(r_{0},d_{0}f_{1},d_{0}^{2}s)\in\tilde{\mathrm{H}}(X_{1},\mathbb{Z}).\] Since \(r_{0}\) and \(d_{0}^{2}s\) are relatively prime, there exists a Mukai vector \(v^{\prime}=(m,0,n)\) such that \(\langle v_{0},v^{\prime}\rangle=1\). Let \(X_{2}=M_{v_{0}}(X_{1})\), also a K3 surface, and choose a universal sheaf \(\mathcal{E}\to X_{1}\times X_{2}\). Our goal is to describe the induced isomorphism \[\phi_{\mathcal{E}}:\tilde{\mathrm{H}}(X_{1},\mathbb{Z})\stackrel{{\sim}}{{\to}}\tilde{\mathrm{H}}(X_{2},\mathbb{Z}).\] Following [10, Ch. 8] and [21, §2], the polarization on \(X_{2}\) is given by \[\det(\pi_{2*}(\mathcal{E}\otimes\mathcal{O}_{H}(s(r_{0}-2d_{0}))))^{\vee},\quad H\in|f_{1}|,\] a primitive ample divisor \(f_{2}\) on \(X_{2}\). More generally, we have an isomorphism of Hodge structures \[\mathrm{H}^{2}(X_{2},\mathbb{Z})=(v_{0}^{\vee})^{\perp}/\mathbb{Z}v_{0}^{\vee},\] where the perpendicular subspace is taken with respect to the Mukai pairing.

**Proposition 8**.: _[_21_]_ _Choose integers \(d_{1}\) and \(\ell\) such that \(sd_{0}d_{1}-r_{0}\ell=1\) and take \(\mathcal{K}=\mathcal{E}\otimes\pi_{2}^{*}L\) for some line bundle \(L\) on \(X_{2}\). With respect to the bases_ \[(1,0,0),(0,f_{i},0),(0,0,1)\in\tilde{\mathrm{H}}(X_{i},\mathbb{Z})\] _the matrix of the cohomological Fourier-Mukai transform takes the form_ \[\phi_{\mathcal{K}}:=\begin{pmatrix}d_{0}^{2}s&2d_{0}sr_{0}&r_{0}\\ d_{0}\ell&2d_{0}d_{1}s-1&d_{1}\\ \ell^{2}r_{0}&2d_{1}s\ell r_{0}&d_{1}^{2}s\end{pmatrix}.\]

The inverse is obtained by reversing the sign of the middle basis vector and interchanging the roles of \(d_{0}\) and \(d_{1}\): \[\begin{pmatrix}d_{0}^{2}s&2d_{0}sr_{0}&r_{0}\\ d_{0}\ell&2d_{0}d_{1}s-1&d_{1}\\ \ell^{2}r_{0}&2d_{1}s\ell r_{0}&d_{1}^{2}s\end{pmatrix}\begin{pmatrix}d_{1}^{2}s&-2d_{1}sr_{0}&r_{0}\\ -d_{1}\ell&2d_{0}d_{1}s-1&-d_{0}\\ \ell^{2}r_{0}&-2d_{0}s\ell r_{0}&d_{0}^{2}s\end{pmatrix}=I.\] The formula \[\phi_{\mathcal{K}}\psi_{\mathcal{K}^{\vee}}=I\] is the cohomological realization of the identity \[\Phi_{\mathcal{K}}\Psi_{\mathcal{K}^{\vee}[2]}=I.\] The third column of \(\phi_{\mathcal{K}}^{-1}\) is the Mukai vector \(v_{0}^{\vee}\), as \[\Phi_{\mathcal{K}}^{-1}(\mathcal{O}_{p})=\mathcal{E}_{p}^{\vee},\quad p=[\mathcal{E}_{p}]\in X_{2}=M_{v_{0}}(X_{1}).\]

**Example 9**.: Suppose that \((X_{1},f_{1})\) is a degree 12 K3 surface. Consider the isotropic Mukai vector \(v=(2,f_{1},3)\) so that \[X_{2}:=M_{v}(X_{1})\] is also a K3 surface derived equivalent to \(X_{1}\). Taking \[r_{0}=2,\ s=3,\ d_{0}=1,\ d_{1}=\ell=1\] we obtain \[(1,0,0) \mapsto(3,f_{2},2)\] \[(0,f_{1},0) \mapsto(12,5f_{2},12)\] \[(0,0,1) \mapsto(2,f_{2},3)\] with matrix \[\varphi:=\begin{pmatrix}3&12&2\\ 1&5&1\\ 2&12&3\end{pmatrix}. \tag{4.1}\] The determinant is 1, and \((1,0,-1)\) is an eigenvector with eigenvalue 1; thus this is orientation preserving.
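The arithmetic in Proposition 8 and Example 9 is easy to verify by machine; the following is a minimal sketch of such a check, with the parameter values of Example 9.

```python
import numpy as np

def fm_matrix(r0, s, d0, d1, ell):
    # Cohomological Fourier-Mukai matrix of Proposition 8 in the basis
    # (1,0,0), (0,f_i,0), (0,0,1).
    assert s * d0 * d1 - r0 * ell == 1
    return np.array([[d0**2 * s,   2 * d0 * s * r0,       r0],
                     [d0 * ell,    2 * d0 * d1 * s - 1,   d1],
                     [ell**2 * r0, 2 * d1 * s * ell * r0, d1**2 * s]])

phi = fm_matrix(r0=2, s=3, d0=1, d1=1, ell=1)            # matrix (4.1)
inv = np.array([[3, -12, 2], [-1, 5, -1], [2, -12, 3]])  # swap d0, d1; flip middle sign

assert round(np.linalg.det(phi)) == 1                    # orientation preserving
assert np.array_equal(phi @ inv, np.eye(3, dtype=int))   # inverse formula of Prop. 8
assert np.array_equal(phi @ [1, 0, -1], [1, 0, -1])      # eigenvector, eigenvalue 1
assert np.array_equal(phi @ [2, -1, 3], [0, 0, 1])       # v = (2, -f_1, 3), used below
print("Example 9 checks out")
```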
Note that \[(2,-f_{1},3)\mapsto(0,0,1)\] whence \[X_{1}=M_{(2,f_{2},3)}(X_{2}),\quad X_{2}=M_{(2,-f_{1},3)}(X_{1}).\] The fact that \((1,0,-1)\) has eigenvalue 1 gives \[X_{1}^{[2]}\stackrel{{\sim}}{{\longrightarrow}}X_{2}^{[2]}.\]

## 5. Generalities concerning involutions on K3 surfaces

Let \(i:X\to X\) be an involution on a complex projective K3 surface, which acts faithfully on \(\mathrm{H}^{2}(X,\mathbb{Z})\) by the Torelli Theorem. It is _symplectic_ (resp. _anti-symplectic_) if \[i^{*}\omega=\omega\quad(\text{resp. }\ -\omega),\] where \(\omega\) is a holomorphic two-form. Nikulin [20] showed that any symplectic involution fixes eight isolated points and that all such involutions are topologically conjugate; these are the _Nikulin involutions_ studied in Section 8. An involution without fixed points was classically known to be an _Enriques involution_, arising from a double cover \(X\to S\) of an Enriques surface. The case of anti-symplectic involutions with fixed points is more complicated. Nikulin enumerated 74 cases beyond the Enriques case; see [1, 2, 1, 10] for details of the various cases. Given an anti-symplectic involution \(i:X\to X\) on a K3 surface, we recall the Nikulin invariants \((r,a,\delta)\) [1, §2]: Let \(r\) denote the rank of the lattice \[S=\operatorname{H}^{2}(X,\mathbb{Z})^{i=1},\] which is indefinite if \(r>1\). We are using the fact that transcendental classes of \(X\) are anti-invariant under \(i\), as the quotient \(X/i\) admits no holomorphic two-form. We write \[T=\operatorname{H}^{2}(X,\mathbb{Z})^{i=-1}=S^{\perp}\] for the complementary lattice with signature \((2,20-r)\), which is indefinite if \(r<20\). The discriminant group \(d(S)\simeq d(T)\) is a 2-elementary group; its rank is denoted by \(a\). This group comes with a quadratic form \[q_{S}:d(S)\to\mathbb{Q}/2\mathbb{Z}.\] The _coparity_ \(\delta\) equals 0 if \(q_{S}(x)\in\mathbb{Z}\) for each \(x\in d(S)\) and equals 1 otherwise. We relate this to geometric invariants. For an anti-symplectic involution, there are no isolated fixed points, so the fixed locus \(R=X^{i}\) is of pure dimension one or empty. Suppose there are \(k+1\) irreducible components, with genera summing to \(g\). Then we have, cf. [1, p. 5], \[g=11-(r+a)/2,\quad k=(r-a)/2,\] excluding the Enriques case \((r,a,\delta)=(10,10,0)\). Nikulin classifies even indefinite 2-elementary lattices \(\operatorname{L}\). They are determined uniquely by \((r,a,\delta)\) and \(\operatorname{O}(\operatorname{L})\to\operatorname{Aut}(d(\operatorname{L}))\) is surjective. In the definite case, _a priori_ there are multiple classes in each genus but this is not relevant for our applications. Indeed, the possibilities include

* \(r=a=1\): \(X\) is a double cover of \(\mathbb{P}^{2}\) branched along a sextic plane curve.
* the case where \(T\) is definite (\(r=20,a=2,g=0,k=9\)); here \(d(T)=\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}\), so \(T\) is equal to \[\begin{pmatrix}2&0\\ 0&2\end{pmatrix}.\] Even in this case, automorphisms of the discriminant group are realized by automorphisms of the lattice.

**Theorem 10** (Alexeev-Nikulin).: _For each admissible set of invariants \((r,a,\delta)\), there is a unique orthogonal pair of lattices \((S,T)\) embedded in the K3 lattice \(\Lambda\), up to automorphisms of \(\Lambda\)._
There are 75 such cases._ **Corollary 11**.: _Any equivariant derived equivalence of K3 surfaces with anti-symplectic involutions induces an equivariant isomorphism between the underlying K3 surfaces._ Proof.: Suppose that \((X_{1},i_{1})\) and \((X_{2},i_{2})\) are derived equivalent, compatibly with their anti-symplectic involutions. Indeed, derived equivalence shows that the invariant (resp. anti-invariant) sublattices of the Picard group are stably equivalent (resp. equivalent): \[\operatorname{Pic}(X_{1})^{i_{1}=1}\oplus U\simeq\operatorname{Pic}(X_{2})^{ i_{2}=1}\oplus U,\quad\operatorname{Pic}(X_{1})^{i_{1}=-1}\simeq \operatorname{Pic}(X_{2})^{i_{2}=-1}.\] Since the possibilities for the invariant sublattices are characterized by their 2-adic invariants, we have \[\operatorname{Pic}(X_{1})^{i_{1}=1}\simeq\operatorname{Pic}(X_{2})^{i_{2}=1}.\] We have already observed that all the possible isomorphisms between their discriminants \[\left(d(\operatorname{Pic}(X_{1})^{i_{1}=1}),q_{1}\right)\simeq\left(d( \operatorname{Pic}(X_{2})^{i_{2}=1}),q_{2}\right)\] are realized by isomorphisms of the lattices. In particular, there exists a choice compatible with the isomorphism \[\operatorname{H}^{2}(X_{1},\mathbb{Z})^{i_{1}=-1}\overset{\sim}{\to} \operatorname{H}^{2}(X_{2},\mathbb{Z})^{i_{2}=-1}\] induced by the derived equivalence. Thus we obtain isomorphisms on middle cohomology, compatible with the involutions. The Torelli Theorem gives an isomorphism \(X_{1}\overset{\sim}{\to}X_{2}\) respecting the involutions. **Corollary 12**.: _Let \((X_{1},\sigma_{1})\) and \((X_{2},\sigma_{2})\) denote K3 surfaces with involutions that are \(C_{2}\)-equivariantly derived equivalent. If \(X_{1}/\sigma_{1}\) is rational then \(X_{2}/\sigma_{2}\) is rational as well._ Indeed, the rationality of the quotient forces the involution to be anti-symplectic. **Example 13**.: Having an anti-symmetric involution is _not_ generally a derived property. For example, consider Picard lattices \[A_{1}=\begin{pmatrix}2&13\\ 13&12\end{pmatrix}\quad A_{2}=\begin{pmatrix}8&15\\ 15&10\end{pmatrix}.\] These forms are stably equivalent but not isomorphic. As in Remark 5 - see [17, Sec. 2.3] for details - choose derived equivalent K3 surfaces \(X_{1}\) and \(X_{2}\) with \(\operatorname{Pic}(X_{1})=A_{1}\) and \(\operatorname{Pic}(X_{2})=A_{2}\). Note that \(A_{2}\) does not represent two and admits no involution acting via \(\pm 1\) on \(d(A_{2})\); thus \(X_{2}\) does not admit an involution. This should be compared with Proposition 28: Having an Enriques involution is a derived invariant. ## 6. Orientation reversing conjugation We continue to assume that \(i\) is an anti-symplectic involution on a K3 surfaces \(X\). As we have seen, \[T(X)\subset\operatorname{H}^{2}(X,\mathbb{Z})^{i=-1}\] with complement \(\operatorname{Pic}(X)^{i=-1}\), which is negative definite by the Hodge index theorem. Recall that Orlov's Theorem [14, SS3] asserts that for K3 surfaces (without group action) isomorphisms of transcendental cohomology lift to derived equivalences. Given K3 surfaces \((X_{1},i_{1})\) and \((X_{2},i_{2})\) with anti-symplectic involutions of the same type in the sense of Alexeev-Nikulin, the existence of an isomorphism \[T(X_{1})\stackrel{{\sim}}{{\to}}T(X_{2})\] seldom induces an equivariant derived equivalence; a notable exception is the case where the anti-invariant Picard group has rank zero or one. 
We only have that \[\operatorname{Pic}(X_{1})^{i_{1}=-1},\quad\operatorname{Pic}(X_{2})^{i_{2}=-1}\] are stably equivalent - compatibly with the isomorphism on the discriminant groups of the transcendental lattices - but not necessarily isomorphic. In light of this, we propose an orientation reversing conjugation of actions, with a view toward realizing isomorphisms of transcendental cohomology. Assume that \(\operatorname{Pic}(X_{1})^{i_{1}=-1}\) and \(\operatorname{Pic}(X_{2})^{i_{2}=-1}\) are not isomorphic, so there is no \(C_{2}\)-equivariant derived equivalence \[D^{b}(X_{1})\stackrel{{\sim}}{{\to}}D^{b}(X_{2})\] taking \(i_{1}\) to \(i_{2}\), by Corollary 11. However, let \[\operatorname{dual}_{k}:D^{b}(X_{k})\stackrel{{\sim}}{{\to}}D^{b}(X_{k}),\quad k=1,2,\] denote the involution \[\mathcal{E}_{*}\mapsto\mathcal{E}_{*}^{\vee}.\] Note that shift and duality commute with each other and with any automorphism of the K3 surface. The action of \(\operatorname{dual}_{k}\) on the Mukai lattice \(\tilde{\operatorname{H}}(X_{k},\mathbb{Z})\) is trivial in degrees \(0\) and \(4\) and multiplication by \(-1\) in degree two. Recall that shift acts via \(-1\) in all degrees, so composition with \(\operatorname{dual}_{k}\) is trivial in degree \(2\) and multiplication by \(-1\) in degrees \(0\) and \(4\). We propose a general definition and then explain how it is related to our analysis of quadratic forms with involution:

**Definition 14**.: Let \((X_{1},i_{1})\) and \((X_{2},i_{2})\) be smooth projective varieties with involution, of dimension \(n\) with trivial canonical class. They are _skew equivalent_ if there is a kernel \(\mathcal{K}\) on \(X_{1}\times X_{2}\), inducing an equivalence between \(X_{1}\) and \(X_{2}\), such that \[(i_{1}^{*},i_{2}^{*})\mathcal{K}=\mathcal{K}^{\vee}[n]. \tag{6.1}\]

Note that this dualization coincides with the relative dualizing complex for both projections \(\pi_{1}\) and \(\pi_{2}\). Suppose again that \(X_{1}\) and \(X_{2}\) are K3 surfaces and \(\mathcal{K}=\mathcal{E}[1]\) for a universal sheaf \[\mathcal{E}\to X_{1}\times X_{2}\] associated with an isomorphism \(X_{2}=M_{v}(X_{1})\). Then relation (6.1) (with \(n=2\)) translates into \[i_{1}^{*}\mathcal{E}_{i_{2}(x_{2})}\simeq(\mathcal{E}_{x_{2}})^{\vee}. \tag{6.2}\]

**Proposition 15**.: _Let \((X_{1},i_{1})\) and \((X_{2},i_{2})\) be K3 surfaces with involutions. Then the following are equivalent:_

* \((X_{1},i_{1})\) _and_ \((X_{2},i_{2})\) _are skew derived equivalent;_
* _there exists an orientation-preserving equivalence of Mukai lattices_ \[\phi:\tilde{\operatorname{H}}(X_{1},\mathbb{Z})\longrightarrow\tilde{\operatorname{H}}(X_{2},\mathbb{Z}),\] _satisfying_ \[\phi(i_{1}^{*}(v^{\vee}))=(i_{2}^{*}\phi(v))^{\vee}. \tag{6.3}\]

As duality and pullback commute with each other, the order of these operations in (6.3) is immaterial. Furthermore, if \(\phi\) satisfies this relation then so does \(-\phi\).

Proof.: The forward implication is clear. Indeed, the cohomological Fourier-Mukai transform \(\phi_{\mathcal{K}}\) of a skew equivalence satisfies \[(i_{1},i_{2})^{*}\phi_{\mathcal{K}}=\phi_{\mathcal{K}^{\vee}},\] but \(\phi_{\mathcal{K}^{\vee}}\) differs from \(\phi_{\mathcal{K}}\) by the involution acting via \(+1\) on \(\mathrm{H}^{0}\) and \(\mathrm{H}^{4}\) and \(-1\) on \(\mathrm{H}^{2}\). Thus \[\phi_{\mathcal{K}}:\tilde{\mathrm{H}}(X_{1},\mathbb{Z})\longrightarrow\tilde{\mathrm{H}}(X_{2},\mathbb{Z})\] is an isomorphism equivariant under the prescribed "skew" involutions.
For the reverse implication, we consider the cohomological Fourier-Mukai transform \[\phi:\tilde{\mathrm{H}}(X_{1},\mathbb{Z})\longrightarrow\tilde{\mathrm{H}}(X_{2},\mathbb{Z}).\] Proposition 7 yields a kernel \(\mathcal{K}\) such that \(\phi=\phi_{\mathcal{K}}\). We make this more explicit. Set \[v=(\phi^{-1}(0,0,1))^{\vee}\] with a view toward relating \(\mathcal{K}\) to a universal bundle \[\mathcal{E}\to X_{1}\times X_{2},\] where \(X_{2}\) is a moduli space of bundles on \(X_{1}\). Write \(v=(r,f_{1},s)\); if \(r<0\), replace \(\phi\) by \(-\phi\). This leaves (6.3) unchanged and corresponds to replacing \(\Phi_{\mathcal{K}}\) by \(\Phi_{\mathcal{K}}\) pre-composed (or post-composed) by a shift on \(X_{1}\) (or \(X_{2}\)). Thus we may assume \(v_{0}=(r_{0},f_{1},s)\) with \(r_{0}>0\) and consider a universal bundle \[\mathcal{E}\to X_{1}\times M_{v_{0}}(X_{1})\simeq X_{1}\times X_{2}.\] We therefore have (see Proposition 8 for the formula and our basis conventions) \[\phi_{\mathcal{E}}=\begin{pmatrix}d_{0}^{2}s&2d_{0}sr_{0}&r_{0}\\ d_{0}\ell&2d_{0}d_{1}s-1&d_{1}\\ \ell^{2}r_{0}&2d_{1}s\ell r_{0}&d_{1}^{2}s\end{pmatrix}\] and \[\phi_{\mathcal{K}}=\begin{pmatrix}d_{0}^{2}s&2d_{0}sr_{0}&r_{0}\\ d_{0}\hat{\ell}&2d_{0}\hat{d}_{1}s-1&\hat{d}_{1}\\ \hat{\ell}^{2}r_{0}&2\hat{d}_{1}s\hat{\ell}r_{0}&\hat{d}_{1}^{2}s\end{pmatrix}.\] Here we have \[sd_{0}d_{1}-r_{0}\ell=sd_{0}\hat{d}_{1}-r_{0}\hat{\ell}=1,\] whence \[\hat{\ell}=\ell+Nsd_{0},\quad\hat{d}_{1}=d_{1}+Nr_{0}.\] Consider the autoequivalence on \(X_{2}\) obtained by tensoring with the invertible sheaf \(\mathcal{O}_{X_{2}}(Nf_{2})\). This has cohomological Fourier-Mukai transform with matrix \[t^{N}:=\begin{pmatrix}1&0&0\\ N&1&0\\ N^{2}r_{0}s&2Nr_{0}s&1\end{pmatrix}.\] Note however that \[t^{N}\phi_{\mathcal{E}}=\phi_{\mathcal{K}}\] and we can renormalize \(\mathcal{E}\) so that it has cohomological Fourier-Mukai transform \(\phi\). Specifically, there exists an isomorphism \[X_{2}\stackrel{{\sim}}{{\longrightarrow}}M_{v_{0}}(X_{1})\] such that the pullback of the universal sheaf \(\mathcal{E}\) to \(X_{1}\times X_{2}\) induces \(\phi\). We analyze how \(i_{1}\) and \(i_{2}\) act on \(\mathcal{E}\), keeping in mind the functional relation. We have that \[\mathcal{E},\quad(i_{1},i_{2})^{*}\mathcal{E}^{\vee}\] are both universal bundles on \(X_{1}\times X_{2}\) with the same numerical invariants. The uniqueness of such bundles gives an isomorphism \[\xi:\mathcal{E}\stackrel{{\sim}}{{\longrightarrow}}(i_{1},i_{2})^{*}\mathcal{E}^{\vee}\] over \(X_{1}\times X_{2}\), unique up to a scalar. Note there are two distinguished normalizations of this scalar, for which the composition \[(i_{1},i_{2})^{*}\xi^{\vee}\circ\xi:\mathcal{E}\to\mathcal{E}\] is the identity. For purposes of establishing derived equivalences, the choice of normalization is immaterial. 

**Corollary 16**.: _Under the assumptions above, the functors \(\operatorname{dual}_{1}\circ i_{1}\) and \(\operatorname{dual}_{2}\circ i_{2}\) are \(C_{2}\)-equivariantly derived equivalent._

**Remark 17**.: As we recalled in Section 3, derived equivalences respect orientations on the Mukai lattice [10]. The orientation reversing conjugation violates the orientation condition, in a prescribed way. Duality is the archetypal orientation-reversing Hodge isometry. In Sections 7 and 10 we give examples of such equivalences.

## 7. Rational quotients and skew equivalence

Our first task is to give examples of skew equivalences.
Proposition 15 and the discussion preceding it reduce this to exhibiting lattice-polarized K3 surfaces with involution \((X_{1},i_{1})\) and \((X_{2},i_{2})\), such that the anti-invariant Picard groups are stably equivalent but inequivalent. Specifically, we assume \(X_{1}\) and \(X_{2}\) are degree two K3 surfaces with \[\operatorname{Pic}(X_{j})=\mathbb{Z}h_{j}\oplus A_{j}(-1),\quad h_{j}^{2}=2,\] where the involutions fix the \(h_{j}\) and reverse signs on the \(A_{j}\)'s. If \(A_{1}\) and \(A_{2}\) are stably equivalent, inequivalent positive definite lattices then \((X_{1},i_{1})\) and \((X_{2},i_{2})\) are skew equivalent. In contrast to ordinary equivalences (see Corollary 11) we do have anti-symplectic actions with nontrivial _skew_ equivalences. The resulting quotients are rational surfaces, indeed, \(\mathbb{P}^{2}\).

**Example 18** (Explicit matrices).: The matrices, in the basis \(p_{j},q_{j}\), for \(j=1,2\), are given by \[A_{1}:=\begin{pmatrix}4&1\\ 1&12\end{pmatrix},\quad A_{2}:=\begin{pmatrix}6&1\\ 1&8\end{pmatrix}.\] We extract a stable isomorphism \[A_{1}\oplus\operatorname{U}\simeq A_{2}\oplus\operatorname{U},\quad\operatorname{U}=\left\langle u_{j},v_{j}\right\rangle,\text{ with matrix }\begin{pmatrix}0&1\\ 1&0\end{pmatrix}.\] First, we give an isomorphism \[A_{1}\oplus\left\langle e_{1}\right\rangle\simeq A_{2}\oplus\left\langle e_{2}\right\rangle,\quad e_{1}^{2}=e_{2}^{2}=-2.\] We put \[p_{1}\mapsto p_{2}+e_{2},\] and claim that the orthogonal complements to these are equivalent indefinite lattices. Indeed, \[p_{1}^{\perp} =\left\langle p_{1}-4q_{1},e_{1}\right\rangle=\begin{pmatrix}188&0\\ 0&-2\end{pmatrix},\] \[(p_{2}+e_{2})^{\perp} =\left\langle p_{2}-6q_{2},2q_{2}+e_{2}\right\rangle=\begin{pmatrix}282&-94\\ -94&30\end{pmatrix}\] \[=\left\langle p_{2}+3e_{2},2q_{2}+e_{2}\right\rangle=\begin{pmatrix}-12&-4\\ -4&30\end{pmatrix}.\] These are equivalent via a Gaussian cycle of reduced forms, recorded below with the diagonal values of successive forms on the bottom row and the off-diagonal coefficients between them: \[\begin{array}{ccccccccc}&0&&18&&8&&4&\\ 188&&-2&&26&&-12&&30\end{array}\] where the indicated basis elements are \[p_{1}-4q_{1},\quad e_{1},\quad p_{1}-4q_{1}-9e_{1},\quad p_{1}-4q_{1}-10e_{1},\quad 2(p_{1}-4q_{1})-19e_{1}.\] The composed isomorphism is \[p_{1}-4q_{1}-10e_{1} \mapsto p_{2}+3e_{2},\] \[2(p_{1}-4q_{1})-19e_{1} \mapsto 2q_{2}+e_{2}\] \[p_{1} \mapsto p_{2}+e_{2}\] \[e_{1} \mapsto(2q_{2}+e_{2})-2(p_{2}+3e_{2})=2(q_{2}-p_{2})-5e_{2}\] \[q_{1} \mapsto 5(p_{2}-q_{2})+12e_{2}.\] We extend the isomorphism above, where \(e_{i}=u_{i}-v_{i}\): \[u_{1}+v_{1} \mapsto u_{2}+v_{2}\] \[u_{1}-v_{1} \mapsto 2(q_{2}-p_{2})-5(u_{2}-v_{2})\] \[p_{1} \mapsto p_{2}+(u_{2}-v_{2})\] \[q_{1} \mapsto 5(p_{2}-q_{2})+12(u_{2}-v_{2})\] whence we have \[u_{1} \mapsto(q_{2}-p_{2})-2u_{2}+3v_{2}\] \[v_{1} \mapsto(p_{2}-q_{2})+3u_{2}-2v_{2}.\]

## 8. Nikulin involutions

### General properties

An involution \(\iota\) on a K3 surface \(X\) over \(\mathbb{C}\) preserving the symplectic form is called a _Nikulin_ involution.
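Returning briefly to Example 18: the stable isomorphism constructed there can be verified by machine. The following is a minimal sketch, encoding the final change of basis as an integer matrix \(T\) (columns: images of \(p_{1},q_{1},u_{1},v_{1}\) in the basis \(p_{2},q_{2},u_{2},v_{2}\)) and checking that it is an isometry from \(A_{1}\oplus\operatorname{U}\) to \(A_{2}\oplus\operatorname{U}\).

```python
import numpy as np

# Gram matrices of A1 + U and A2 + U in the bases (p, q, u, v).
U = np.array([[0, 1], [1, 0]])
Z = np.zeros((2, 2), dtype=int)
G1 = np.block([[np.array([[4, 1], [1, 12]]), Z], [Z, U]])
G2 = np.block([[np.array([[6, 1], [1, 8]]), Z], [Z, U]])

# Change of basis read off from the final display of Example 18.
T = np.array([[ 1,   5, -1,  1],
              [ 0,  -5,  1, -1],
              [ 1,  12, -2,  3],
              [-1, -12,  3, -2]])

assert abs(round(np.linalg.det(T))) == 1     # invertible over the integers
assert np.array_equal(T.T @ G2 @ T, G1)      # isometry: T^t G2 T = G1
print("Example 18 checks out")
```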
We recall basic facts concerning such involutions, following [20]:

* \(\iota\) has 8 isolated fixed points;
* the resolution of singularities \(Y\to X/\iota\) is a K3 surface fitting into a diagram \[\begin{matrix}X&\stackrel{{\beta}}{{\leftarrow}}&\widetilde{X}\\ \downarrow&&\downarrow\pi\\ X/\iota&\leftarrow&Y\end{matrix}\] where \(\beta\) blows up the fixed points and the vertical arrows have degree two;
* the action of \(\iota\) on \(\mathrm{H}^{2}(X,\mathbb{Z})\) is uniquely determined, and there is a decomposition \[\mathrm{H}^{2}(X,\mathbb{Z})=(\mathrm{U}^{\oplus 3})_{1}\oplus(\mathrm{E}_{8}(-1)\oplus\mathrm{E}_{8}(-1))_{P},\] where the first term is invariant and the second is a permutation module for \(\iota\);
* the invariant and the anti-invariant parts of \(\mathrm{H}^{2}\) take the form \[\mathrm{H}^{2}(X,\mathbb{Z})^{\iota=1}\simeq\mathrm{U}^{3}\oplus\mathrm{E}_{8}(-2),\quad\mathrm{H}^{2}(X,\mathbb{Z})^{\iota=-1}=\mathrm{E}_{8}(-2).\]

Let \(E_{1},\ldots,E_{8}\) denote the exceptional divisors of \(\beta\) and \(N_{1},\ldots,N_{8}\) the corresponding \((-2)\)-curves on \(Y\). The union \(\cup N_{i}\) is the branch locus of \(\pi\), so there is a divisor \[\hat{N}=(N_{1}+\cdots+N_{8})/2\] saturating \(\langle N_{1},\ldots,N_{8}\rangle\subset\mathrm{Pic}(Y)\); the minimal primitive sublattice containing these divisors is called the _Nikulin_ lattice, and is denoted by \(\mathrm{N}\). We have [13, Prop. 1.8] \[\pi_{*}:\mathrm{H}^{2}(\widetilde{X},\mathbb{Z})\to\mathrm{H}^{2}(Y,\mathbb{Z})\] \[\mathrm{U}^{3}\oplus\mathrm{E}_{8}(-1)\oplus\mathrm{E}_{8}(-1)\oplus\langle-1\rangle^{8}\to\mathrm{U}(2)^{3}\oplus\mathrm{N}\oplus\mathrm{E}_{8}(-1)\] \[(u,x,y,z)\mapsto(u,z,x+y)\] and \[\pi^{*}:\mathrm{H}^{2}(Y,\mathbb{Z})\to\mathrm{H}^{2}(\widetilde{X},\mathbb{Z})\] \[\mathrm{U}(2)^{3}\oplus\mathrm{N}\oplus\mathrm{E}_{8}(-1)\to\mathrm{U}^{3}\oplus\mathrm{E}_{8}(-1)\oplus\mathrm{E}_{8}(-1)\oplus\langle-1\rangle^{8}\] \[(u,n,x)\mapsto(2u,x,x,2\tilde{n})\] where if \(n=\sum n_{i}N_{i}\) then \(\tilde{n}=\sum n_{i}E_{i}\). Thus we obtain a distinguished saturated sublattice \[\mathrm{E}_{8}(-2)\subset\mathrm{Pic}(X)\] that coincides with the \(\iota=-1\) piece.

**Proposition 19**.: _Fix a lattice \(\mathrm{L}\) containing \(\mathrm{E}_{8}(-2)\) as a primitive sublattice; assume \(\mathrm{L}\) arises as the Picard lattice of a projective K3 surface. Then there exists a K3 surface \(X\) with Nikulin involution \(\iota\) such that_ \[\mathrm{L}=\mathrm{Pic}(X)\supset\mathrm{Pic}(X)^{\iota=-1}=\mathrm{E}_{8}(-2).\]

Proof.: Let \(\mathrm{A}\) denote the orthogonal complement of \(\mathrm{E}_{8}(-2)\) in \(\mathrm{L}\). There is a unique involution \(\iota\) on \(\mathrm{L}\) with \[\mathrm{L}^{\iota=1}=\mathrm{A},\quad\mathrm{L}^{\iota=-1}=\mathrm{E}_{8}(-2).\] Now \(\iota\) acts trivially on \(d(\mathrm{L})\) - keep in mind \(d(\mathrm{E}_{8}(-2))\) is a two-elementary group - so we may naturally extend \(\iota\) to the full K3 lattice. (It acts trivially on \(\mathrm{L}^{\perp}\).) These lattice-polarized K3 surfaces form our family. Nikulin [20, §4] explains how to get involutions for generic K3 surfaces with lattice polarization \(\mathrm{L}\). Choose such a surface \(X\) such that \(\mathrm{Pic}(X)=\mathrm{L}\) - a very general member of the family has this property. Clearly \(X\) is projective - it admits divisors with positive self-intersection. We claim there is an ample divisor \(H\in\mathrm{A}\).
Indeed, the ample cone of \(X\) is a chamber of the decomposition of the cone of positive divisors under the group generated by reflections associated with indecomposable \((-2)\)-classes \(E\) of positive degree [10]. Each \((-2)\)-class \(E\) is perpendicular to a unique ray in \[\mathrm{A}\otimes\mathbb{R}\cap\{\text{ cone of positive divisors }\}\] generated by an element \(a_{E}\in\mathrm{A}\). Note that \(\mathrm{A}\) cannot be contained in \(E^{\perp}\) as \(\mathrm{E}_{8}(-2)\) has no \((-2)\)-classes. We conclude that \(\mathrm{A}\) meets each chamber in the decomposition of the positive cone - it cannot be separated from the ample cone by any of the \(E^{\perp}\). Once we have the ample cone, we can extract the automorphism group of \(X\) via the Torelli Theorem: it consists of the Hodge isometries taking the ample cone to itself. In particular, any Hodge isometry fixing \(H\) is induced by an automorphism. Thus \(\iota\) is an automorphism of \(X\).

**Proposition 20**.: _Let \(\mathrm{L}\) be an even hyperbolic lattice containing \(\mathrm{E}_{8}(-2)\) as a saturated sublattice. Assume that \(d(\mathrm{L})\) has rank at most \(11\). Then \(\mathrm{L}\) is unique in its genus and the homomorphism_ \[\mathrm{O}(\mathrm{L})\to\mathrm{O}(q_{\mathrm{L}})\] _is surjective._

The condition on the rank of \(d(\mathrm{L})\) is satisfied for Picard lattices of K3 surfaces \(X\). We have \[\mathrm{Pic}(X)\subset\mathrm{U}^{\oplus 3}\oplus\mathrm{E}_{8}(-1)^{\oplus 2},\] which has rank \(22\); \(d(\mathrm{Pic}(X))\simeq d(T(X))\), so both groups are generated by \(\min(\rho(X),22-\rho(X))\leq 11\) elements.

Proof.: We apply Proposition 2. For odd primes \(p\), the conditions are easily checked as the rank \(r\) of \(\mathrm{L}\) exceeds the rank of the \(p\)-primary part of \(d(\mathrm{L})\). If \(r\geq 12\) then the discriminant group is generated by \(\leq 10\) elements and we are done. Thus we focus on the \(p=2\) case with \(r=9,10\), or \(11\). Let \(\mathrm{A}\) denote the orthogonal complement to \(\mathrm{E}_{8}(-2)\) in \(\mathrm{L}\). The overlattice \[\mathrm{L}\supset\mathrm{A}\oplus\mathrm{E}_{8}(-2)\] corresponds to an isotropic subgroup \[H\subset d(\mathrm{A})\oplus d(\mathrm{E}_{8}(-2))\] with respect to \(q_{\mathrm{A}}\oplus q_{\mathrm{E}_{8}(-2)}\). Projection maps \(H\) injectively into each summand - we may interpret these projections as kernels of the natural maps \[d(\mathrm{A})\to d(\mathrm{L}),\quad d(\mathrm{E}_{8}(-2))\to d(\mathrm{L}).\] Thus \(H\) is a \(2\)-elementary group, of rank at most three. It follows that \(d(\mathrm{L})\) contains at least five copies of \(\mathbb{Z}/2\mathbb{Z}\). Remark 3 shows this validates the hypothesis of Proposition 2.

The assumption on the _rank_ of the discriminant groups can be replaced by bounds on their _orders_ [13, Cor. 22, p. 395] - at least for purposes of showing there is one class in each genus.

### Rank nine examples

We focus on examples with Picard rank nine, following [11, Prop. 2.2] which lists the possible lattices. Suppose that \(\mathrm{Pic}(X)^{\iota=1}=\mathbb{Z}f\) with \(f^{2}=2d\), which is necessarily ample as there are no \((-2)\)-classes in \[\mathrm{Pic}(X)^{\iota=-1}=\mathrm{E}_{8}(-2).\] We have the lattice \[\Lambda:=(2d)\oplus\mathrm{E}_{8}(-2),\] for all \(d\).
For even \(d\) we have the index-two overlattice \(\widetilde{\Lambda}\supset\Lambda\), generated by \[\frac{f+e}{2},\] where \(f\) is a generator of \((2d)\) and \(e\in\mathrm{E}_{8}(-2)\) is a primitive element with \[(e,e)=\begin{cases}-4&\text{ if }d=4m+2\\ -8&\text{ if }d=4m.\end{cases}\] We are using the fact that the lattice \(\mathrm{E}_{8}\) has primitive vectors of norms \(2\) and \(4\). Using the shorthand \[q(v)=q_{\mathrm{E}_{8}(-2)}(v)\pmod{2\mathbb{Z}},\] the elements \(0\neq v\in e_{8}(-2):=d(\mathrm{E}_{8}(-2))\) are of two types:

* \(120\) elements \(v\) with \(q(v)=1\) (\(A_{1}+E_{7}\) type);
* \(135\) elements \(v\) with \(q(v)=0\) (\(D_{8}\) type).

Note that \(\widetilde{\Lambda}\) is the unique such overlattice in which \(\mathrm{E}_{8}(-2)\) remains saturated.

**Proposition 21**.: _Let \((X_{1},f_{1})\) and \((X_{2},f_{2})\) be polarized K3 surfaces of degree \(2d\), derived equivalent via specialization of the construction in Remark 5. If \(X_{1}\) admits a Nikulin involution fixing \(f_{1}\) then_

* \(X_{2}\) _admits a Nikulin involution fixing_ \(f_{2}\)_;_
* _there is an isomorphism_ \[\varphi:X_{1}\stackrel{{\sim}}{{\to}}X_{2}.\]

Proof.: The derived equivalence induces an isomorphism of lattices with Hodge structure \[\operatorname{H}^{2}(X_{1},\mathbb{Z})\supset f_{1}^{\perp}\simeq f_{2}^{\perp}\subset\operatorname{H}^{2}(X_{2},\mathbb{Z}),\] which means that \(f_{2}^{\perp}\cap\operatorname{Pic}(X_{2})\) contains a sublattice isomorphic to \(\mathrm{E}_{8}(-2)\). Thus there exists a Hodge involution \[\iota_{2}^{*}:\operatorname{H}^{2}(X_{2},\mathbb{Z})\to\operatorname{H}^{2}(X_{2},\mathbb{Z})\] with anti-invariant summand equal to this copy of \(\mathrm{E}_{8}(-2)\). The Torelli Theorem - see [22, Prop. 2.3] - shows that \(X_{2}\) admits an involution \(\iota_{2}:X_{2}\to X_{2}\). Isomorphisms of K3 surfaces specialize in families [13, ch. I]. This reduces us to proving the result when the \(X_{k}\) have Picard rank nine, putting us in the case of Proposition 20. The Counting Formula of [10, §2] - using the conclusions of Proposition 20 - implies that all Fourier-Mukai partners of \(X_{1}\) are isomorphic to \(X_{1}\).

**Remark 22**.: We are _not_ asserting that \(\varphi^{*}f_{2}=f_{1}\)! Suppose that \(X_{1}\) and \(X_{2}\) have Picard rank nine, the minimal possible rank. Then \[\varphi^{*}f_{2}\equiv\alpha f_{1}\pmod{\mathrm{E}_{8}(-2)}\] where \(\alpha\pmod{4d}\) is the corresponding solution to congruence (3.1). Thus we obtain nontrivial derived equivalences among Nikulin surfaces even in rank nine!

### Rank ten examples

Turning to rank ten, we offer a generalization of [22, Prop. 2.3]:

**Proposition 23**.: _Fix a rank two indefinite even lattice \(\mathrm{A}\) and an even extension_ \[\mathrm{L}\supset\mathrm{A}\oplus\mathrm{E}_{8}(-2)\] _invariant under \(\iota\); here \(\iota\) fixes \(\mathrm{A}\) and acts by multiplication by \(-1\) on \(\mathrm{E}_{8}(-2)\). Then there exists a K3 surface \(X\) with Nikulin involution \(\iota\) such that_ \[\mathrm{A}=\mathrm{Pic}(X)^{\iota=1}\subset\mathrm{Pic}(X)=\mathrm{L}\supset\mathrm{Pic}(X)^{\iota=-1}=\mathrm{E}_{8}(-2).\]

Proof.: The lattice \(\mathrm{L}\) embeds uniquely into the K3 lattice by Proposition 4. Proposition 19 gives the desired K3 surface with involution. We observed in Proposition 20 that the lattices \(\mathrm{L}\) are unique in their genus and admit automorphisms realizing the full group \(\mathrm{O}(d(\mathrm{L}))\).
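In passing, the counts of elements of \(e_{8}(-2)\) used above - 120 classes with \(q(v)=1\) and 135 nonzero classes with \(q(v)=0\) - can be reproduced by brute force. Since \(\mathrm{E}_{8}\) is unimodular, \(d(\mathrm{E}_{8}(-2))=\tfrac{1}{2}\mathrm{E}_{8}(-2)/\mathrm{E}_{8}(-2)\), and for \(v=x/2\) the value \(q(v)=-(x,x)/2\bmod 2\mathbb{Z}\) depends only on \((x,x)\bmod 4\). A minimal Python sketch (our addition; the root basis is the standard \(\mathrm{E}_{8}\) Cartan matrix):

```python
import itertools
import numpy as np

# Gram matrix of the positive definite E8 lattice in a root basis:
# chain 1-3-4-5-6-7-8 with node 2 attached to node 4 (Bourbaki numbering).
E8 = np.array([
    [ 2,  0, -1,  0,  0,  0,  0,  0],
    [ 0,  2,  0, -1,  0,  0,  0,  0],
    [-1,  0,  2, -1,  0,  0,  0,  0],
    [ 0, -1, -1,  2, -1,  0,  0,  0],
    [ 0,  0,  0, -1,  2, -1,  0,  0],
    [ 0,  0,  0,  0, -1,  2, -1,  0],
    [ 0,  0,  0,  0,  0, -1,  2, -1],
    [ 0,  0,  0,  0,  0,  0, -1,  2],
])

# (x, x) mod 4 is well defined on E8/2E8, so 0/1 coefficients suffice.
norms = [int(np.array(x) @ E8 @ np.array(x)) % 4
         for x in itertools.product((0, 1), repeat=8)]
print(sum(n == 2 for n in norms))      # 120 classes with q(v) = 1
print(sum(n == 0 for n in norms) - 1)  # 135 nonzero classes with q(v) = 0
```

Together with the zero class, these account for all \(2^{8}=256\) residues.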
Repeating the reasoning for Proposition 21 we find:

**Proposition 24**.: _A K3 surface \(X\) with involution \(\iota_{1}\), produced following Proposition 23 applied to \(\mathrm{A}_{1}\), will have a second involution \(\iota_{2}\) associated with \(\mathrm{A}_{2}\). Moreover \((X,\iota_{1})\) and \((X,\iota_{2})\) are not equivariantly derived equivalent._

We elaborate on the overlattices \(\mathrm{L}\) arising in the assumptions of Proposition 23. What lattices may arise from a given \(\mathrm{A}\)? Each \(\mathrm{L}\) arises from a 2-elementary \[H\subset d(\mathrm{A})\oplus e_{8}(-2)\] isotropic with respect to \(q_{\mathrm{A}}\oplus q_{\mathrm{E}_{8}(-2)}\). We consider the orbits of \[H\simeq(\mathbb{Z}/2\mathbb{Z})^{2}\subset e_{8}(-2)\] under automorphisms of the lattice. These reflect possible quadratic forms on \((\mathbb{Z}/2\mathbb{Z})^{2}\). We enumerate the possibilities, relying on the description of maximal subgroups of the simple group \(\mathrm{O}_{8}^{+}(2)\) (automorphisms of \(e_{8}(-2)\)) [CCN\({}^{+}\)85, p. 85] and subgroups of \(W(\mathrm{E}_{8})\) (a closely related group) associated with reflections [DPR13, Table 5]. For the reader's reference, we list the root systems associated with the subgroups in parentheses:

1. isotropic subspaces, where \(q|H\) is trivial - 1575 elements (\(\mathrm{D}_{4}+\mathrm{D}_{4}\) type);
2. rank one subspaces, where \(q|H\) has a kernel, e.g., \(q(x,y)=x^{2}\) - \(3780=28\times 135\) elements (\(\mathrm{A}_{1}+\mathrm{A}_{1}+\mathrm{D}_{6}\) type);
3. "minus planes", i.e., full rank non-split subspaces, e.g., \(q(x,y)=x^{2}+xy+y^{2}\) - \(1120=28\cdot 120/3\) elements (\(\mathrm{A}_{2}+\mathrm{E}_{6}\) type);
4. full rank split subspaces, e.g., \(q(x,y)=xy\) - 4320 elements.

As a check, the Grassmannian \(\mathrm{Gr}(2,8)\) has Betti numbers given by the coefficients of the Gaussian binomial \(\binom{8}{2}_{q}\) and thus, by the Weil conjectures, 10795 points over \(\mathbb{F}_{2}\). Note that \[10795=1575+3780+1120+4320.\]

**What about arbitrary rank?** Let \(\mathrm{A}_{1}\) and \(\mathrm{A}_{2}\) be indefinite lattices of rank \(r\geq 2\) in the same genus. Consider overlattices \[\mathrm{L}_{1}\supset\mathrm{A}_{1}\oplus\mathrm{E}_{8}(-2),\quad\mathrm{L}_{2}\supset\mathrm{A}_{2}\oplus\mathrm{E}_{8}(-2)\] associated with subspaces \(H\subset e_{8}(-2)\) in the same orbit, so we have \(d(\mathrm{L}_{1})\simeq d(\mathrm{L}_{2})\). It follows that \(\mathrm{L}_{1}\simeq\mathrm{L}_{2}\) provided the \(d(\mathrm{L}_{i})\) have rank at most 11 (see Proposition 20); this holds for Picard lattices of K3 surfaces. Assuming \(\mathrm{L}_{1}\) and \(\mathrm{L}_{2}\) arise as Picard lattices of K3 surfaces, we obtain results as in Propositions 21 and 24. We conclude with one last observation:

**Proposition 25**.: _The existence of a Nikulin structure for one member of a derived equivalence class induces Nikulin structures on all K3 surfaces in the equivalence class._

Suppose \(X_{1}\) and \(X_{2}\) are derived equivalent and \(X_{1}\) admits a Nikulin involution. Proposition 20 implies \[\mathrm{Pic}(X_{1})\simeq\mathrm{Pic}(X_{2})\] and we obtain a copy of \(\mathrm{E}_{8}(-2)\subset\mathrm{Pic}(X_{2})\). Proposition 19 guarantees \(X_{2}\) admits a Nikulin involution as well.

## 9. Geometric application

In this section, we present a geometric application of the study of Nikulin involutions, up to derived equivalence.
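First, though, a footnote to the numerical check at the end of the previous section (our addition, not part of the original text): the Grassmannian point count used there is the Gaussian binomial coefficient \(\binom{8}{2}_{q}\) evaluated at \(q=2\).

```python
def gaussian_binomial(n: int, k: int, q: int) -> int:
    """Number of k-dimensional subspaces of an n-dimensional F_q vector space."""
    num, den = 1, 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

assert gaussian_binomial(8, 2, 2) == 10795 == 1575 + 3780 + 1120 + 4320
```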
Let \((X_{1},f_{1})\) and \((X_{2},f_{2})\) denote derived equivalent K3 surfaces of degree 12, admitting Nikulin involutions \(\iota_{j}:X_{j}\to X_{j}\) with \(\iota_{j}^{*}f_{j}=f_{j}\) for \(j=1,2\). We assume Picard groups \[\mathrm{Pic}(X_{j})=\mathbb{Z}f_{j}\oplus\mathrm{E}_{8}(-2).\] Note that the derived equivalence induces natural identifications between the \(\mathrm{E}_{8}(-2)\) summands of \(\mathrm{Pic}(X_{1})\) and \(\mathrm{Pic}(X_{2})\). In particular, we obtain bijections between the fixed-point loci \[X_{1}^{\iota_{1}}=X_{2}^{\iota_{2}}.\] Let \(Z_{j}\subset X_{j}\) denote triples of fixed points compatible with these bijections. Assuming the \(X_{j}\) are generic, i.e. defined by quadratic equations in \(\mathbb{P}^{7}\), the fixed points are not collinear. Projection from the \(Z_{j}\) gives surfaces \[\operatorname{Bl}_{Z_{j}}(X_{j})\to Y_{j}\subset\mathbb{P}^{4},\] where the blowup normalizes the image of the projection. These constructions are compatible with the involutions on each side. We claim that the construction of [11] gives a Cremona transform \[\phi:\mathbb{P}^{4}\stackrel{{\sim}}{{-\to}}\mathbb{P}^{4}\] such that

* the indeterminacy of \(\phi\) is \(Y_{1}\);
* the indeterminacy of \(\phi^{-1}\) is \(Y_{2}\);
* \(\phi\) is compatible with the involutions \(\iota_{1}\) and \(\iota_{2}\) induced on the \(\mathbb{P}^{4}\)'s.

Indeed, the construction gives an isogeny between \(\operatorname{H}^{2}(X_{1},\mathbb{Z})\) and \(\operatorname{H}^{2}(X_{2},\mathbb{Z})\) induced by \(\phi\), restricting to an isomorphism of the primitive cohomology \[f_{1}^{\perp}\stackrel{{\sim}}{{\to}}f_{2}^{\perp}.\] The construction entails designating projection loci \(Z_{j}^{\prime}\in X_{j}^{[3]}\) compatible with the associated \[X_{1}^{[3]}\stackrel{{\sim}}{{-\to}}X_{2}^{[3]};\] our stipulation that the \(Z_{j}\) consist of suitable fixed points gives compatible projection loci.

Suppose that \(\phi:\mathbb{P}^{n}\stackrel{{\sim}}{{-\to}}\mathbb{P}^{n}\) is birational and equivariant for the action of a finite group \(G\). In this case, [12, Thm. 1] introduces a well-defined invariant \[C_{G}(\phi):=\sum_{\begin{subarray}{c}E\in\operatorname{Ex}_{G}(\phi^{-1})\\ \operatorname{gen.stab}(E)=\{1\}\end{subarray}}[E\lhd G]-\sum_{\begin{subarray}{c}D\in\operatorname{Ex}_{G}(\phi)\\ \operatorname{gen.stab}(D)=\{1\}\end{subarray}}[D\lhd G]\in\mathbb{Z}[\operatorname{Bir}_{G,n-1}], \tag{9.1}\] taking values in the free abelian group on \(G\)-birational isomorphism classes of algebraic varieties of dimension \(n-1\). In our situation, the terms are the projectivized normal bundles of \(Y_{1}\) and \(Y_{2}\), taken with opposite signs. It is worth mentioning that the underlying K3 surfaces \(X_{1}\) and \(X_{2}\) are isomorphic by Proposition 21, and the group actions are conjugate under derived equivalences but not under automorphisms. The difference of classes of exceptional loci in (9.1) is _nonzero_ due to Proposition 26 below. This gives an instance where the refinement of the invariant \(c(\phi)\) in [10] using group actions yields new information.

**Proposition 26** (cf. Thm. 2, [10]).: _Let \(X_{1}\) and \(X_{2}\) be smooth projective \(G\)-varieties that are not uniruled.
Then any \(G\)-equivariant stable birational equivalence_ \[X_{1}\times\mathbb{P}^{r}\stackrel{{\sim}}{{\dashrightarrow}}X_{2}\times\mathbb{P}^{s},\] _with trivial \(G\)-action on the second factors, arises from a \(G\)-equivariant birational equivalence_ \[X_{1}\stackrel{{\sim}}{{\dashrightarrow}}X_{2}.\]

Proof.: Our assumption - that \(X_{1}\) and \(X_{2}\) are not uniruled - means that \[X_{1}\times\mathbb{P}^{r}\to X_{1},\quad X_{2}\times\mathbb{P}^{s}\to X_{2}\] are maximal rationally-connected (MRC) fibrations. Since \(X_{1}\times\mathbb{P}^{r}\) and \(X_{2}\times\mathbb{P}^{s}\) are birational, the functoriality of MRC fibrations [11, IV.5.5] gives a natural birational map \[X_{1}\stackrel{{\sim}}{{\dashrightarrow}}X_{2}.\] When the varieties admit \(G\)-actions, the induced birational map is compatible with these actions.

## 10. Enriques involutions

Let \(S\) be an Enriques surface over \(\mathbb{C}\). Its universal cover is a K3 surface \(X\) with covering involution \(\epsilon:X\to X\), a fixed-point-free automorphism of order two, called an _Enriques involution_. The classification of Enriques surfaces \(S\) up to derived equivalence boils down to the classification of pairs \((X,\epsilon)\) up to \(C_{2}\)-equivariant derived equivalence [1, §6] (and [1] more generally). Derived equivalent Enriques surfaces are isomorphic [1, Prop. 6.1]. A number of authors have classified Enriques involutions on a given K3 surface \(X\), modulo its automorphisms \(\operatorname{Aut}(X)\):

* Kondo gave the first examples with finite \(\operatorname{Aut}(S)\) [11].
* Ohashi showed that there are finitely many \(\operatorname{Aut}(X)\)-orbits of such involutions. In the Kummer case, the possible quotients are classified by nontrivial elements of the discriminant group of the Néron-Severi group \(\operatorname{NS}(X)\). There are 15 on general Kummer surfaces of product type, 31 on a general Jacobian Kummer surface, but the number is generally unbounded [1].
* Shimada and Veniani consider _singular_ (i.e. rank 20) K3 surfaces; one of their results is a parametrization of \(\operatorname{Aut}(X)\)-orbits on the set of Enriques involutions; the number of such orbits depends only on the genus of the transcendental lattice \(T(X)\) [16, Thm. 3.19].

These results are based on lattice theory: two Enriques involutions on a K3 surface \(X\) are conjugate via \(\operatorname{Aut}(X)\) if and only if the corresponding Enriques quotients are isomorphic [12, Prop. 2.1]. Let \[\operatorname{M}:=\operatorname{U}\oplus\operatorname{E}_{8}(-1)\] be the unique even unimodular hyperbolic lattice of rank \(10\); we have \[\operatorname{Pic}(S)/\text{torsion}\simeq\operatorname{M}\] and \[\operatorname{Pic}(X)\supseteq\operatorname{M}(2)\] as a primitive sublattice. This coincides with the invariant sublattice \[\operatorname{Pic}(X)^{\epsilon=1}\subset\operatorname{Pic}(X)\] under the involution \(\epsilon\). Let \(\operatorname{N}\) denote the orthogonal complement to \(\operatorname{M}(2)\) in \(\operatorname{H}^{2}(X,\mathbb{Z})\), which coincides with \(\operatorname{H}^{2}(X,\mathbb{Z})^{\epsilon=-1}\); note that \(T(X)\subset\operatorname{N}\). We have \[\operatorname{N}\simeq\operatorname{U}\oplus\operatorname{U}(2)\oplus\operatorname{E}_{8}(-2),\] which has signature \((2,10)\). Thus \[\operatorname{Pic}(X)^{\epsilon=-1}=T(X)^{\perp}\subset\operatorname{N}\] has negative definite intersection form. The following result gives a criterion for the existence of Enriques involutions
[16, Thm. 1], [12, Thm. 2.2], [16, Thm. 3.1.1]:

**Proposition 27**.: _Let \(X\) be a K3 surface. Enriques involutions on \(X\) correspond to the following data: primitive embeddings_ \[T(X)\subset\operatorname{N}\subset\operatorname{H}^{2}(X,\mathbb{Z})\] _such that the orthogonal complement to \(T(X)\) in \(\operatorname{N}\) does not contain \((-2)\)-classes._

In particular, let \(X\) be a K3 surface with an Enriques involution. Then:

* \(\operatorname{rk}\operatorname{Pic}(X)\geq 10\),
* if \(\operatorname{rk}\operatorname{Pic}(X)=10\) then there is a unique such involution,
* if \(\operatorname{rk}\operatorname{Pic}(X)=11\) then \(\operatorname{Pic}(X)\) is isomorphic to [12, Prop. 3.5]
  * \(\operatorname{U}(2)\oplus\operatorname{E}_{8}(-1)\oplus\langle-2n\rangle\), \(n\geq 2\), or
  * \(\operatorname{U}\oplus\operatorname{E}_{8}(-2)\oplus\langle-4n\rangle\), \(n\geq 1\).

**Proposition 28**.: _Let \(X\) and \(Y\) be derived equivalent K3 surfaces. Assume that \(X\) admits an Enriques involution. Then \(X\) is isomorphic to \(Y\). In particular, the existence of an Enriques involution is a derived invariant._

Proof.: In Picard rank \(\geq 12\), derived equivalence implies isomorphism. If \(X\) and \(Y\) are derived equivalent of rank \(10\) and \(X\) admits an Enriques involution, then \(T(X)\simeq T(Y)\) and \(\operatorname{Pic}(X)\) and \(\operatorname{Pic}(Y)\) are _stably isomorphic_. In Picard ranks \(10\) and \(11\), it suffices to show that the lattice \(\operatorname{M}(2)\) is unique in its genus and all automorphisms of the discriminant group \((d(\operatorname{M}(2)),q_{\operatorname{M}(2)})\) lift to automorphisms of \(\operatorname{M}(2)\). This is implied by [16, Thm. 1.14.2]. Indeed, [11, Lem. 3.1.7] shows that \(\operatorname{Pic}(X)\) satisfies these two conditions whenever \(X\) admits an Enriques involution.

Corollary 11 implies (cf. [1, §6]):

**Proposition 29**.: _Any \(C_{2}\)-equivariant derived autoequivalence_ \[(X,\epsilon_{1})\sim(X,\epsilon_{2})\] _arises from an automorphism of \(X\)._

We observe a corollary of Proposition 2: Let \((X_{1},\epsilon_{1})\) and \((X_{2},\epsilon_{2})\) denote K3 surfaces with Enriques involutions. They are orientation reversing (i.e. skew) conjugate if

* \(\tau:T(X_{1})\overset{\sim}{\to}T(X_{2})\) as lattices, with compatible Hodge structures;
* \(\operatorname{Pic}(X_{1})^{\epsilon_{1}=-1}\) and \(\operatorname{Pic}(X_{2})^{\epsilon_{2}=-1}\) have the same discriminant quadratic form.

We explore this in more detail in the case of singular (rank \(20\)) K3 surfaces. The existence of involutions on singular K3 surfaces is governed by:

**Proposition 30**.: _[_11_]_ _Let \(X\) be a singular K3 surface with transcendental lattice \(T(X)\) of discriminant \(d\). There is no Enriques involution on \(X\) if and only if \(d\equiv 3\pmod{8}\) or_ \[T(X)=\begin{pmatrix}2&0\\ 0&2\end{pmatrix},\quad\begin{pmatrix}2&0\\ 0&4\end{pmatrix},\text{ or }\begin{pmatrix}2&0\\ 0&8\end{pmatrix}.\]

The "most algebraic example", i.e. the smallest discriminant admitting an Enriques involution, has \[T(X)\simeq\begin{pmatrix}2&1\\ 1&4\end{pmatrix}.\] In this situation there are two possibilities. We write down the maximal sublattices \[\mathrm{N}\subset\mathrm{Pic}(X)\] such that the involution \(\epsilon\) acts via \(-1\). We follow the notation of [20, Table 3.1].
We consider lattices \[N_{10,7}^{144}(2),\quad N_{10,7}^{242}(2)\] where \[N_{10,7}^{242}(-1)\simeq\begin{pmatrix}2&1\\ 1&4\end{pmatrix}\oplus\mathrm{E}_{8}\] with \(\mathrm{E}_{8}\) positive definite and \[N_{10,7}^{144}(2)(-1)\simeq\begin{pmatrix}2&1&1&0&1&0&0&0&0&0\\ 1&2&0&0&0&0&0&0&0&0\\ 1&0&2&1&0&0&0&0&0&0\\ 0&0&1&4&0&0&0&0&0&0\\ 1&0&0&0&2&1&0&0&0&0\\ 0&0&0&0&1&2&1&0&0&0\\ 0&0&0&0&0&1&2&1&0&0\\ 0&0&0&0&0&0&1&2&1&0\\ 0&0&0&0&0&0&0&1&2&1\\ 0&0&0&0&0&0&0&0&1&2\end{pmatrix}.\] According to Magma, these two lattices are inequivalent but are in the same spinor genus, thus are stably equivalent. These involutions are not equivariantly derived equivalent. Indeed, passing to Mukai lattices adds a hyperbolic summand \(\mathrm{U}\) on which the involution acts trivially. However, in the case at hand we are stabilizing the \((-1)\)-eigenspace. Thus these involutions are "skew equivalent" in the sense of Section 6.

## 11. Postscript on involutions in higher dimensions

There are many papers addressing the structure of involutions of higher-dimensional irreducible holomorphic symplectic varieties.

* Symplectic involutions of varieties of \(K3^{[n]}\)-type and their fixed loci are classified in [13].
* For generalized Kummer varieties - arising from an abelian surface \(A\) - involutions associated with \(\pm 1\) on \(A\) are analyzed in [11, Th. 4.4] and [12, Th. 1.3].
* Anti-symplectic involutions on varieties of \(K3^{[n]}\)-type of degree two are studied in [10].
* Higher-dimensional analogs of Enriques involutions are studied in [13].
* Involutions - both symplectic (see [10] and [11]) and anti-symplectic - are studied in [14]. The corresponding actions on lattices are described explicitly.
* Involutions on O'Grady type examples are considered in [15].

It is natural to consider whether derived equivalences of involutions on K3 surfaces \(X_{1}\) and \(X_{2}\) may be understood via equivalences of the induced involutions on punctual Hilbert schemes and other moduli spaces.
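As a numerical footnote to Section 10 (our addition, not part of the original text): both Gram matrices displayed there are positive definite of determinant 7, consistent with the two lattices lying in a single genus. A minimal numpy sketch, reusing the \(\mathrm{E}_{8}\) Gram matrix from the earlier snippet:

```python
import numpy as np

A = np.array([[2, 1], [1, 4]])
E8 = np.array([  # positive definite E8 Gram matrix, as before
    [ 2,  0, -1,  0,  0,  0,  0,  0],
    [ 0,  2,  0, -1,  0,  0,  0,  0],
    [-1,  0,  2, -1,  0,  0,  0,  0],
    [ 0, -1, -1,  2, -1,  0,  0,  0],
    [ 0,  0,  0, -1,  2, -1,  0,  0],
    [ 0,  0,  0,  0, -1,  2, -1,  0],
    [ 0,  0,  0,  0,  0, -1,  2, -1],
    [ 0,  0,  0,  0,  0,  0, -1,  2],
])
L1 = np.block([[A, np.zeros((2, 8), int)],
               [np.zeros((8, 2), int), E8]])

# The 10x10 Gram matrix from Section 10 (row 2 completed by symmetry).
L2 = np.array([
    [2, 1, 1, 0, 1, 0, 0, 0, 0, 0],
    [1, 2, 0, 0, 0, 0, 0, 0, 0, 0],
    [1, 0, 2, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 4, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 2, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 2, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 2, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 2, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 1, 2, 1],
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 2],
])
for L in (L1, L2):
    assert np.all(np.linalg.eigvalsh(L) > 0)  # positive definite
    assert round(np.linalg.det(L)) == 7       # discriminant 7
```

Of course, equality of determinants does not distinguish the genus from the class; the inequivalence of the two lattices is the Magma computation cited above.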
2304.08366
Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI Collaboration in Data Storytelling
Data storytelling plays an important role in data workers' daily jobs since it boosts team collaboration and public communication. However, to make an appealing data story, data workers spend tremendous efforts on various tasks, including outlining and styling the story. Recently, a growing research trend has been exploring how to assist data storytelling with advanced artificial intelligence (AI). However, existing studies may focus on individual tasks in the workflow of data storytelling and do not reveal a complete picture of humans' preference for collaborating with AI. To better understand real-world needs, we interviewed eighteen data workers from both industry and academia to learn where and how they would like to collaborate with AI. Surprisingly, though the participants showed excitement about collaborating with AI, many of them also expressed reluctance and pointed out nuanced reasons. Based on their responses, we first characterize stages and tasks in the practical data storytelling workflows and the desired roles of AI. Then the preferred collaboration patterns in different tasks are identified. Next, we summarize the interviewees' reasons why and why not they would like to collaborate with AI. Finally, we provide suggestions for human-AI collaborative data storytelling to hopefully shed light on future related research.
Haotian Li, Yun Wang, Q. Vera Liao, Huamin Qu
2023-04-17T15:30:05Z
http://arxiv.org/abs/2304.08366v1
# Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI Collaboration in Data Storytelling

###### Abstract

Data storytelling plays an important role in data workers' daily jobs since it boosts team collaboration and public communication. However, to make an appealing data story, data workers spend tremendous efforts on various tasks, including outlining and styling the story. Recently, a growing research trend has been exploring how to assist data storytelling with advanced artificial intelligence (AI). However, existing studies may focus on individual tasks in the workflow of data storytelling and do not reveal a complete picture of humans' preference for collaborating with AI. To better understand real-world needs, we interviewed eighteen data workers from both industry and academia to learn where and how they would like to collaborate with AI. Surprisingly, though the participants showed excitement about collaborating with AI, many of them also expressed reluctance and pointed out nuanced reasons. Based on their responses, we first characterize stages and tasks in the practical data storytelling workflows and the desired roles of AI. Then the preferred collaboration patterns in different tasks are identified. Next, we summarize the interviewees' reasons why and why not they would like to collaborate with AI. Finally, we provide suggestions for human-AI collaborative data storytelling to hopefully shed light on future related research.

Data storytelling, human-AI collaboration, interview study

## 1 Introduction

In the era of big data, data analysis has become a routine task for many data workers, such as business analysts, data journalists, and researchers. According to previous research [1, 26, 65], the workflow of data analysis involves multiple steps, including data collection, cleaning, exploration, modeling, and communication1. Among these steps, communicating data findings has a crucial role in boosting collaboration in teams [8], conveying messages to clients [23], raising public awareness [31], and so on. Footnote 1: Data storytelling and data communication are interchangeable in this paper. However, communicating data findings is challenging for data workers. They have to spend substantial effort preparing clear, coherent, and engaging data stories to effectively communicate their results to the target audience. Informed by the advance of artificial intelligence (AI), including the recent development of large-scale AI models (_e.g._, DALL-E [38] and ChatGPT [39]), researchers have attempted to tackle the challenge by introducing AI-powered data storytelling tools. However, most of the existing tools are limited to studying human-AI collaboration in specific steps. For example, Calliope [48] and Erato [53] help humans to create story outlines. InfoColorizer [68] and ColorCook [49] utilize AI models to suggest color usage. These studies failed to present an overall picture of where and how humans would like to collaborate with AI in the entire storytelling workflow. To better leverage the powerful AI capabilities in intelligent tools, the philosophy of human-centered design [12] informs the necessity of a comprehensive understanding of data workers' needs, workflows, and attitudes toward AI, rather than making assumptions about where, how, and even whether these tools should be developed. With this in mind, we conducted an interview study involving data workers, aiming to illuminate future research directions for intelligent tools, particularly in the context of supporting data storytelling.
We focused on three research questions in our interview study. The first question is **where would humans like to collaborate with AI in the data storytelling workflow?** The answers to this question can potentially guide the selection of tasks that the tools target. After identifying the tasks, a follow-up point of consideration is how to design the functionality of AI models in these tools. To support it, we proposed the second research question, **how would humans like to collaborate with AI?** To answer the research questions, we conducted interviews with 18 data workers with diverse backgrounds. Overall, our interviewees demonstrated a positive attitude in collaborating with AI in data storytelling. However, they also expressed their deep concerns that slowed their pace of introducing AI to their entire storytelling workflow. As a result, we proposed our third research question, **why and why not do humans prefer to collaborate with AI?** The responses collected in interviews first confirm and refine the theoretical workflow of data storytelling [31] with empirical results. The workflow involves three stages, _planning, execution_, and _communication_, and eight tasks. Then we outline four different roles of AI collaborators and analyze them with the "agency vs. automation" framework [21]. These roles are _creator, optimizer, reviewer_, and _assistant_ following the order of decreasing AI automation. Based on the results, participants' preferences for collaborative tasks and AI roles are summarized. Furthermore, we report the participants' reasons why or why not AI is desired in their data storytelling workflows. For example, AI was appreciated due to its quick turnover time but criticized for its limited ability to understand humans' contexts. To conclude our research, we discuss the implications for future human-AI collaborative data storytelling tool design. Furthermore, we examine the generalizability of our findings and propose future directions. The contributions of our research include the following: * Characterization of stages and tasks in the data storytelling workflow based on empirical evidence; * Summarization of where, how, and why (not) users would like to collaborate with AI in their daily data work; * Suggestions for future AI-powered data storytelling research. ## 2 Related work Our work is situated within HCI research that conducts need-finding studies for AI-powered tools (_e.g._, [24, 41, 52, 70]). These studies emphasize involving targeted users and impacted stakeholders early on, understanding their workflow and social contexts, eliciting their desired ways to use AI, and critically examining their concerns and perceived risks of AI. While these broader works on human-AI collaboration informed our interview methods, below we focus on reviewing existing human-AI collaborative storytelling tools and empirical research in the data science domain. ### Empirical research on human-AI collaboration in data science Currently, data science tasks are still labor intensive and require deep domain expertise. A typical data science workflow includes preparing data, exploring data, modeling data, communicating data insights, and deploying models [69, 1, 26, 36]. To assist these labor-intensive tasks, most relevant to our work is prior research exploring and developing intelligent tools that augment the data science workflow. For example, prior work produced tools such as conversational agents to assist the insight exploration task [45, 46]. 
More recently, AutoML technologies that leverage advanced ML capabilities to automate data preparation and modeling have received much attention in academia and industry. Besides developing new tools for AutoML [59, 64], HCI researchers have conducted empirical studies to examine data scientists' needs and concerns around AutoML [58, 57, 13, 5, 66]. For example, Wang _et al._[58] conducted an interview study with 20 data scientists about their perception of AutoML tools and discovered their concerns around job security and desire for human-AI collaboration, in which both automation and human expertise of data science are indispensable. Focusing on data workers' trust in AutoML systems, Drozdal _et al._'s [15] study revealed a strong need for transparency features such as performance metrics and visualization to establish trust in AutoML. By interviewing 29 enterprise data scientists, Crisan and Fiore-Gartland [13] identified common usage scenarios of AutoML systems and provided a framework summarizing the level of automation desired by data workers. Besides data preparation and modeling, communicating data, or data storytelling, is an important phase in the data science pipeline, usually for data team cooperation [8] and communicating with clients [23]. To reduce the considerable effort in telling a data story, AI-powered storytelling tools have been studied in recent years (see Sec. 2.2). However, the previous empirical studies about human-AI collaboration in data science mainly focus on the data preparation and modeling phase (AutoML), and thus limited their scope to certain steps. Human-AI collaborative data storytelling in the data science pipeline has not been covered by them. To better design such tools, it is crucial to systematically understand data workers' needs and attitudes toward this part. We conducted an interview study to investigate where, how, and why (not) data workers would like to collaborate with AI in data storytelling.

### AI-powered data storytelling tools

Data storytelling and narrative visualization tools have become increasingly popular in recent years. These tools aim to help users create compelling data stories by providing data-driven insights and suggestions for visualization. AI in these tools has the potential to significantly enhance the storytelling process by automating certain tasks and providing users with new ways of analyzing and visualizing data. To enhance the workflow of data storytelling, researchers have used AI techniques for different parts. Researchers have used vision techniques to analyze the visual structure and semantics of infographics [60, 33]. They are also used to help with different dimensions of design to make data storytelling more effective and engaging for communicating the intended message. The dimensions include color [68], layout [42], graphics [63, 28], and motions and animations [60, 18, 54]. Researchers also try to understand the design space of data storytelling [43, 44, 3, 5, 9], and to automate or semi-automate the process of creating and recommending data stories in the form of infographics [44, 2, 50], data comics [27], etc., from various sources of data. To produce better results, data storytelling researchers also try to evaluate and compare different infographic designs and methods using quantitative and qualitative measures, from the perspectives of memorability [7], cognition [6], or aesthetics [17, 20].
These studies have considered the human-AI collaboration from separate perspectives of the whole data communication pipeline. There is also research that considers the whole process of data storytelling. For example, Samuel _et al._[19], Chen _et al._[10], and Li _et al._[32] consider data exploration and storytelling as an integrated workflow. Data practitioners may come back to data exploration when they create data stories, and the workflow should be bridged to ease these iterations. Taking one step further, we conduct an interview study from the data analysts' perspectives to understand their daily work. Further, we dig deeper into how users think of AI collaborators' jobs and roles when they do data storytelling with the help of recent AI advances.

## 3 Methodology

Our study was conducted in the first quarter of 2023 when multiple breakthrough AI systems appeared, such as ChatGPT [39], new Bing [35], and GPT-4 [40]. These systems might lead to data workers' changing attitudes toward AI and encouraged us to investigate their needs and ideas about human-AI collaboration. This section introduces how we conducted the research, including the recruitment of participants (Sec. 3.1), the procedure of interviews (Sec. 3.2), and the analysis of results (Sec. 3.3).

### Participant recruitment

In our study, we recruited data workers from both academia and industry. We adopted multiple approaches to recruiting participants, including posting ads on social media or in special interest groups through communication software and sending invitations leveraging our professional network. The participants were required to have experience in data storytelling, such as creating slides for presentations or writing data articles. Ultimately, we recruited 18 participants (12 males and 6 females, 9 from academia and 9 from industry, average experience in data analysis = 5.39, standard deviation of experience = 3.10). P6 and P17 are now postgraduate researchers, but they mainly talked about their data storytelling-related experiences from when they took full-time duties in the industry. Therefore, we only reported their industrial backgrounds. All participants have AI-related experiences, such as using chatbots or applying AI algorithms. Their detailed demographic information is in Table I.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline ID & Gender & Age & Job & Domain & Exp. \\ \hline 1 & Male & 25-30 & Researcher & Data Science & 6 \\ 2 & Male & 40-45 & Software Engineer & Data Science & 10 \\ 3 & Male & 25-30 & Researcher & Chemistry & 4 \\ 4 & Male & 25-30 & Researcher & Data Science & 6 \\ 5 & Female & 25-30 & Researcher & Algorithm & 3 \\ 6 & Male & 25-30 & Data Scientist & Consulting & 4.5 \\ 7 & Male & 25-30 & Researcher & Data Science & 6 \\ 8 & Female & 25-30 & Researcher & Data Science & 5 \\ 9 & Female & 25-30 & Researcher & Data Science & 3 \\ 10 & Female & 25-30 & Data Analyst & Finance & 4 \\ 11 & Male & 25-30 & Business Analyst & Finance & 1.5 \\ 12 & Male & 25-30 & Business Analyst & Consulting & 7 \\ 13 & Male & 25-30 & Researcher & HCI & 3 \\ 14 & Male & 25-30 & Student & Economics & 5 \\ 15 & Male & 25-30 & Researcher & Medical Imaging & 3 \\ 16 & Female & 30-35 & Business Analyst & Finance & 8 \\ 17 & Female & 25-30 & Journalist & Journalism & 3 \\ 18 & Male & 40-45 & Applied Researcher & Software & 15 \\ \hline \hline \end{tabular} \end{table} Table I: The table records our interviewees' demographic information, including genders, ranges of ages, jobs, domains, and experiences of data work in years (denoted as Exp. in the table).

### Interview procedure

In our study, we conducted semi-structured interviews with the participants through online meetings. Each interview involved one or two authors and the participant. Before the interview, we first introduced the purpose of our study. Then we collected participants' consent to participate in the study and recorded the entire meeting. All interviews started with a query about the participants' data storytelling-related background, such as the nature of their jobs. Then we introduced two main parts of our interview: (1) understanding participants' practice of data storytelling and (2) brainstorming where and how they would like to collaborate with AI when creating data stories. In the first part, we asked the participants about their familiar format of data story and then invited them to share a recent piece of their stories with us if convenient. Then we created mind maps with the participants to help them recall and organize their workflow of data storytelling. After the participants finished the introduction of their workflow, we moved on to the second part. The second part began with examples of AI applications in data storytelling to help brainstorm. We encouraged them to think beyond the existing advanced AI models (_e.g._, ChatGPT). Then the participants freely proposed where and how they would prefer to collaborate with AI for data storytelling. We also further probed the reasons behind their preferences. When the participants were satisfied with their lists of ideas, we moved to the last part of the interview. In the last part, we asked about overall perceptions of human-AI collaboration in data storytelling, such as the difference between collaboration with AI and with human designers. Each interview lasted for around 0.5-1 hour. All interviews were recorded and transcribed.

### _Result analysis_

Following the prior practice [26], we adopted an iterative coding approach. At the beginning of our study, two leading authors attended the first six interviews together to ensure a common understanding of the protocol and its coverage of the research questions. After the interviews, the leading authors derived a preliminary version of the frequently desired collaboration patterns with AI and related reasons through open coding and iterative discussions. As more interviews were conducted, the collaboration patterns and reasons were examined, refined, merged, and enriched. In the next section, we present the interview results supported by participant quotes.

## 4 Results

This section answers the research questions using the results from the interviews. Sec. 4.1 illustrates the interviewees' common workflow of data storytelling to ground results for our research questions. Then we attempt to answer the first research question by summarizing the roles of AI collaborators and where AI can take the roles according to interviewees' feedback (Sec. 4.2). Fig. 1 shows the summary of results in Sec. 4.1 and Sec. 4.2. Next, Sec. 4.3 and Sec. 4.4 present interviewees' reasons why and why not they would like to collaborate with AI, which are outlined in Fig. 2.
In this section, we label participants' IDs for opinions they expressed (P1-P18).

Fig. 1: This figure summarizes participants' opinions about where and how they would like to collaborate with AI. (a) demonstrates AI collaborators' roles against AI automation and human agency. (b) shows tasks in the existing workflows of our interviewees and (c) illustrates the expected AI collaborators' roles and their tasks. In (a), we use fully manual story creation and Heer's ideal "agency plus automation" collaboration mode [21] as two anchor points and derive the relative rough positions of other roles. Both y-axes in (b) and (c) are the tasks in the data storytelling workflow. In (c), the heatmap presents the breakdown frequency of task-role tuples. The bar chart on the top counts the frequency of roles, while the one on the right shows the frequency of tasks. Notably, since each participant can propose multiple roles of AI collaborators for one task, the count of tasks can be larger than the number of participants.

### _What is the general workflow of data storytelling?_

In the first part of the interviews, we invite the participants to share their workflow of data storytelling. Most of them frequently use slide decks as the format of their data stories. Some other formats of data stories include reports and articles. According to their feedback, we summarize the general workflow as three major stages and seven tasks. Fig. 1(b) presents the frequency of tasks mentioned by our interviewees. The first stage in the workflow is **planning**, where the interviewees comprehend their results from data analysis and brainstorm the overall picture of a data story. In this stage, the interviewees commonly first _decide on the core message of the data story_ based on their data findings and the broader context of their projects, such as the background and the target. After having a clearer core idea of the story, they move on to _collecting supportive data facts_. These data facts mainly come from their analysis and sometimes are collected in related materials such as news. When the supportive information is ready, the next task is to _compile the story outline_, where the interviewees organize the related information according to their topics and logical relationship. After the three tasks, the interviewees have a plan for their data story. In the second stage, the interviewees **execute** the plans created in the first stage. They commonly first _prepare story pieces_ using the collected information in the first stage. Unlike the task of collecting related information, this task transforms collected facts into a presentable format. For example, they draft the introduction to their background and methods. They also design and make charts to facilitate the delivery of their findings. The next task is to _integrate story pieces_, where the interviewees may make presentation slides or reports to accommodate the story pieces according to the outline. In this step, some supportive information that is irrelevant to the data may be added to the data story. For example, when making slides, it is common to add title, section, and outline pages to facilitate the audience's understanding. To make the data story more effective and appealing, at the end of the execution stage, the interviewees often _style the data story_. Some examples of styling the story include adjusting the layout of visual elements, unifying the color usage, annotating the charts, and applying animations. Notably, the interviewees may review the output at the end of each task and improve it iteratively. At the end of this stage, the interviewees have complete data stories that can be shared with other stakeholders. In the final stage, the interviewees **communicate** their data stories with others. After finishing and checking the story, the authors will _share the story_ informally or formally. When sharing the story informally with the team, the authors often want to seek advice on improving the story quality, such as the clarity of the core message and the completeness of the content. Then the authors may perform the previous tasks again to refine their story. When the stories are shared formally, multiple actions can be taken according to the sharing format. For example, if the sharing is a live talk or video recording, it is possibly necessary to write scripts and rehearse the presentation. When the story is shared as a document, the authors may need to upload it to the cloud or send it to others through communication software. Though we attempt to divide the workflow into stages and tasks according to their chronological order, the procedure of data storytelling is not always linear. Most of the time, the interviewees may move back and forth among several tasks to polish their data stories. For example, it is quite common that in the styling task, the authors can go back to prepare story pieces according to a more unified style. Another example is that the authors may share the story with their teams several times. They can review the story outline before execution to improve the story iteratively in an agile manner. For example, P2 mentioned that he often communicated with team members to review his stories in different stages, including planning and execution: _"The first thing is to really decide what you want to focus on, right?... Or which data you want to focus on showing... I actually use paper to take some notes or some ideas. And then I prefer to discuss with someone else."_ _"and trying to make [findings] visual... But then you still need to see if they work into the framework that you want to explain. Sometimes, maybe there is some step missing or sometimes some of the visuals are not clear enough. So I also tried to discuss with someone and got some feedback."_ Reviewing and improving data stories was also considered an opportunity to gain new ideas or identify potential problems in their data analysis by some participants. For example, P13 mentioned: _"When I am reviewing [the data story], I can have some new insights. Then I need to verify these ideas with some new experiments."_ With these steps, his understanding of the problem could be enhanced. We also notice that some participants, under certain scenarios, may not conduct some tasks, as shown in Fig. 1(b). They may skip some steps according to the nature of their stories, such as the target audience's backgrounds and whether the story is communicated on a formal occasion. For example, P5 mentioned that she might not integrate story pieces into slides or reports when discussing findings with her supervisor. She instead showed several charts and presented the messages directly. The reason was that they discussed a familiar project in an informal meeting.

### Where and how do humans want to collaborate with AI?
To answer our first research question, we summarize the roles of AI collaborators proposed by interviewees to illustrate how data workers would like to collaborate with AI (Sec. 4.2.1). Then we take one step further to examine our interviewees' preferences towards the collaboration approach in different tasks of their workflow (Sec. 4.2.2).

#### 4.2.1 What are AI collaborators' roles?

According to our interviews, the desired roles of AI can be categorized into four types: _creator_, _optimizer_, _assistant_, and _reviewer_. The characterization of roles reflects how humans would like to divide the work in data storytelling between AI collaborators and themselves. The first role of AI is the **creator**. When AI collaborators are assigned to a task in the workflow of data storytelling, they finish the entire task independently. The responsibility of humans is limited to providing the raw materials as input and reviewing the AI-created output. In other words, humans are not involved in the creation process. The second role of AI collaborators is the **optimizer**. When AI collaborators act as optimizers, they do not handle the entire task but take the duty of fine-tuning humans' created content. AI optimizers need to understand the input content and then improve it. In this collaboration mode, humans bear more workload and outsource optimization to AI. The third role of AI collaborators is the **reviewer**. Under this role, AI evaluates the task performance of humans and suggests potential issues or improvements. Compared to AI optimizers, reviewers do not need to improve humans' created content directly. They only need to point out the problems or provide suggestions based on understanding humans' input content. Then humans can decide how to fix the problem or which suggestion is adopted. The fourth role of AI collaborators is the **assistant**. Compared to the previous roles, AI collaborators take even less work when acting as assistants. They are not authorized to modify anything created by humans and do not need to understand the content. Instead, they provide assistance on specific tasks according to humans' requirements. We further assess the four roles using the "_agency vs. automation_" framework [21]. The framework is derived from the ideal goal of human-AI collaborative tools, achieving "_agency plus automation_", where AI automates the tasks (_i.e._, _automation_) under humans' control (_i.e._, _agency_). According to Heer [21], tool designers may need to estimate human-AI collaborative tools' levels of agency and automation and examine if they fit users' tasks and requirements. Following this idea, we characterize the four roles of AI summarized from the interviews using the agency and automation levels when humans collaborate with different AI collaborators. Fig. 1(a) shows the estimated levels of human agency and AI automation of the four roles and two anchor points, _i.e._, fully manual data storytelling without AI and the ideal "agency plus automation". When AI collaborators serve as creators, humans have little control over how AI performs tasks and AI fully automates the task. Therefore, it has the lowest level of agency among the four roles, while the level of automation is the highest. When working with AI optimizers, humans have higher agency but take on more workload. AI optimizers only automate content improvement based on understanding humans' input. AI assistants have a lower level of automation since they only react to specific requirements given by humans.
We consider that when AI collaborators serve as reviewers, both their level of automation and human collaborators' level of agency are between those of assistants and creators because AI reviewers need to understand the input by themselves and may not execute an optimization plan automatically. Moreover, reviewers do not directly follow humans' instructions but give humans control over whether and how they follow the advice.

#### 4.2.2 What are AI collaborators' expected tasks?

This section reports concrete desired collaboration patterns, _i.e._, tasks and roles of AI collaborators, which were coded based on participants' comments following the definitions in Sec. 4.1 and Sec. 4.2.1. Fig. 1(c) illustrates the frequency of preferred collaboration patterns. In Fig. 1(c), it is apparent that the desired collaboration concentrates on the planning and execution stages, while fewer participants hoped to receive help from AI in the communication stage. At the task level, the most frequently mentioned tasks include preparing story pieces (23 times), styling the story (23 times), and collecting data facts (12 times). The number of times may exceed the number of participants since some participants mentioned multiple types of collaboration patterns in one task. If a participant mentioned one collaboration pattern multiple times, we count it only once in Fig. 1(c). More specifically, most of the participants would like to have AI collaborators as creators in creating story pieces and styles and as assistants in collecting data facts. The distribution indicates that participants have different preferences for AI collaborators' roles at different stages. It is clear that the participants prefer AI collaborators as assistants in the planning stage so that they can control the core idea and the outline of the data story. Next, they prefer to let AI automate the execution stage as creators or optimizers to offload their efforts while respecting their defined outline. Finally, most of the participants prefer to share data stories with their audience without collaborating with AI. In this way, their story can be conveyed more faithfully with sufficient interactions with the audience. We hypothesized that it is more likely to achieve the goal of high human agency and high AI automation by combining different collaboration patterns at different stages of the workflow. The rest of this section presents the most frequently mentioned task in each stage. Since preparing and styling story pieces share the highest frequency in the execution stage, both of them are discussed.

**Collect data facts.** Collecting supportive data facts is the most popular task for participants to collaborate with AI in the planning stage. It was mentioned by 12 participants (P1-P4, P7, P9, P11-P13, P16-P18), where AI collaborators were expected to be assistants by 10 of them (P1-P4, P9, P11-P13, P16, P17). To utilize AI assistants in this step, participants would like to ask them to collect and organize data facts from various sources. Such work was often considered repetitive and time-consuming. P12, as a business analyst, spoke about his expected AI assistant in collecting numbers from multiple documents: _"When I make a report for a company now, I may need to search for [its annual reports] with search engines, let's say, annual reports of 2021, 2022, and 2020. After collecting these annual reports, I may find each year's income... Then I need to input each number manually to build the chart. This is a very cumbersome task...
[The information] is very fragmented so you cannot get the numbers that you want at once. If AI can do something like collecting accurate income data in the past five years for me... I feel I can save lots of time collecting the data."_ The opinion was echoed by P8, who thought AI was good at summarizing existing information. Besides collecting data facts, participants also wished to have AI search for background information to enrich the data story. P1 expressed his need for an AI assistant to collect public opinions to illustrate the importance of his data story: _"When I design the foreshadowing [for data stories], maybe it is possible to let [AI] write some stories. The stories are not fiction stories. I feel they are more like explaining why the topic [of a data story] is important... What [AI] does is more like collecting some public opinions or focuses."_ The help of AI can offload the effort of collecting background information and let humans focus on the core of their data story.

**Prepare story pieces.** When preparing data story pieces, it is necessary to plot charts and write explanatory texts, which introduces a considerable burden to data workers. Therefore, almost all participants mentioned that they would like to receive help with this task. The largest group of participants preferred to have AI create story pieces, such as writing texts or plotting charts (P5, P7, P8, P10-P12, P14-P16, P18). They appreciated the collaboration mode where they input the outline and data facts, and AI generates story pieces automatically. Such collaboration was described with a metaphor by P14: _"it is like that I prepare all ingredients for [AI] and ask them to cook dishes."_ P16 further explained her opinion: _"I want to have AI to write texts... It can read the information in charts and analyze it. It is simple. For example, if the chart writes the average value of years or the yearly compound growth rate, then I don't need to calculate it, and AI can write [the related findings] itself based on the chart."_ Two other groups of participants preferred to let AI optimize (P5, P13, P16-P18) or review their accomplished story pieces (P1, P12, P13) to improve the quality of the story pieces or customize them according to the context of data stories. For example, P13 sometimes felt unsure whether a story piece would be hard for the audience to understand. Therefore, he hoped that AI could provide suggestions: _"If we discuss the [research] problem with many lay people, their expected and data scientists' expected slide contents must be different. Under such scenarios, AI can provide more advice. For example, it can suggest that this chart is too complex, so you should break it into multiple charts to explain."_ P12 hoped that AI could review his story pieces using professional knowledge, such as statistical theories. Some participants (P2, P3, P12, P13, P16) thought AI could assist story piece creation, for example by enabling voice-based input (P12), auto-completing chart plotting code (P13), and one-click translation (P16).

**Style the data story.** Styling data stories often does not improve the stories' content but enhances their appearance and facilitates understanding. According to our interviews, common styling actions included adjusting the layout, changing color palettes, applying designated templates, and adding animations. These actions were often cumbersome and could be non-trivial for data workers with a weak background in graphical design. Therefore, aid from AI is welcome.
Six participants preferred to have AI collaborators as assistants in styling (P1-P3, P6, P8, P9). Potential assistance provided by AI included unified color palette management and suggesting layouts, as the _design idea_ function in Microsoft PowerPoint [34] does. P3 indicated that color usage in his domain was semantically meaningful since he often dealt with optical phenomena in chemistry experiments. Therefore, he expected that an AI assistant could help him manage color palettes considering both semantic meanings and visual effectiveness. Furthermore, seven participants expected that AI could create styles for their data stories (P7-P9, P11, P14, P16, P18). Several participants (P1, P8, P17) implied that they frequently used the design idea function in Microsoft PowerPoint to suggest alternative styles for their slides. However, its design suggestions were _"limited"_ (P17) and sometimes _"outdated"_ (P8). As a result, P8 would like to further extend the function to transferring the styles of well-designed online slides to her own data story, with AI collaborators as style creators. P7 also mentioned a similar function. Other examples of creating styles with AI included animating the presentation slides automatically (P9, P16). Furthermore, P8 expected that generative AI could create decorative or background figures for her data stories. Lastly, AI might also optimize the existing style, according to eight participants (P2, P3, P7, P8, P14-P17). They hoped that AI could beautify the charts, tables, or the entire story automatically.

**Share the data story.** In the communication stage, the desired collaboration patterns focused on _one-way_ communication, where storytellers might spread their data stories using videos or slides. Five participants expressed their willingness to ask AI collaborators to share the story on their behalf in one-way communication (P4, P7, P8, P10, P16). For example, as a researcher on data storytelling strategies herself, P8 observed that _"it is necessary to have multilingual videos when I want to spread my videos on worldwide platforms."_ She thought AI could facilitate spreading data stories by automatically creating multilingual videos based on the original data story. Similarly, P16 thought AI could create videos that would be delivered to the public instead of her. Furthermore, P10 and P16 preferred to have AI collaborators as assistants to upload their slides to the internal server or send them in an email to all attendees. In this way, they could save effort on these auxiliary tasks. Unlike one-way communication, the participants often thought AI had no role in _two-way_ data story communication, such as meetings. The reasons will be explained in Sec. 4.4.3 and Sec. 4.4.4.

### Why is AI desired in data storytelling?

Our interviews showed that almost all participants expressed a need to collaborate with AI on some tasks. This section summarizes the participants' four common reasons why AI is desired. We note that these comments reflect the participants' general beliefs about AI technologies and should not be taken as absolute facts.

#### 4.3.1 AI reduces the workload of repetitive and auxiliary tasks.

A frequently mentioned reason for having AI collaborators is that they can reduce the workload of repetitive and auxiliary tasks. This reason was mentioned by 14 participants (P1-P5, P7-P13, P16, P18). Our interviewees often complained that they spent tremendous effort on repetitive and auxiliary tasks when creating appealing data stories.
P11 described such tasks as _"dirty work."_ Example tasks included plotting charts, collecting data facts, and adjusting the layout. P16 complained that she spent _"most of the time"_ adjusting the charts' or tables' styles according to her company's presentation template in the execution stage. However, this auxiliary task did not add much value to her story. She hoped that AI collaborators could eliminate this effort: _"I finish my own work well and leave PPT [slides] to the professionals, like AI... Why do I need to spend much time styling PPT [slides]?"_ Another example was mentioned by P7. As a researcher, he sometimes gave talks of different lengths, ranging from 5 minutes to 15 minutes depending on the occasion, and therefore needed several versions of his slides. If AI automatically created slides according to the time requirement, P7 could avoid making slides several times. Besides acting as creators, AI collaborators could also serve as optimizers and assistants to save effort. Some examples can be found in Sec. 4.2.2.

#### 4.3.2 AI models are believed to have a higher level of capability than humans in some tasks.

Among the 18 participants, 14 expressed that they would like to collaborate with AI since they believed AI might have a higher capability than humans in specific tasks. The participants often mentioned two merits that they believed AI had and that might potentially benefit data storytelling.

The first potential advantage of AI over humans is that it can possibly _have broader knowledge coverage_, enabled by the large quantity of training data (P7-P10, P12-P14, P18). P12 mentioned that humans had a limited capability of searching and collecting information, but AI could be more powerful. Therefore, he expected that AI could give him more complete results when he asked it to collect data facts. Similarly, P9 mentioned a case where AI might provide explanations for data facts leveraging additional knowledge outside the dataset, such as explaining _"the reasons why the sales in one year increase or decrease suddenly."_ P8 proposed that she could ask AI to generate presentation transcripts in different styles, such as humorous or formal ones.

The second potential advantage of AI is that its high computational power is likely to _enable a fast and accurate search over the data or design space_ (P1, P3, P4, P6, P10, P12, P14-P16, P18). For example, P18 thought that AI could quickly create several data stories with different styles. Then he could conveniently pick one style for sharing. P6 mentioned AI's potential in brainstorming ideas: _"AI might help us to find interesting combinations between these ideas or it may be able to show us some possible spaces which are not really explored by our ideas... AI should be really good at this kind of task."_ To utilize these advantages, other than having AI collaborators as creators, a frequently proposed way was to ask them to provide inspiration as assistants in different tasks, such as selecting supportive data facts and choosing styles. P1 further hoped that AI could help him check for problematic content, which might be caused by data bias or incompleteness, or at least provide a checklist. His motivation was that humans could make mistakes since they were sometimes forgetful or lacked related knowledge. AI could compensate for these issues using its knowledge and computational power.

#### 4.3.3 Collaborating with AI may cost less than collaborating with humans.
When the participants talked about collaborating with AI and designers, an advantage of AI was its low cost in time and money. First, they often appreciated that AI could have _a quick response time_ (P7-P12, P15, P17), especially _"when the schedule is tight"_ (P7). P17 described the collaboration with AI as a _"What You See Is What You Get"_ experience. P15 liked that AI could possibly allow him to iterate the data story quickly. In contrast, participants commented that working with designers or other human collaborators in data storytelling might take hours or days to finish tasks. P9 further said that human collaborators could sometimes _"miss their deadlines"_. When working with AI, she did not have such concerns. Furthermore, several participants (P8, P9, P11) thought AI could be cheaper than human collaborators. Therefore, P8 believed AI might be more accessible to data workers than professional designers for less important tasks, such as preparing data stories for her regular team meetings.

#### 4.3.4 AI collaborators bear tasks faithfully and tirelessly.

According to our participants, it was common to polish the story pieces or styles several times until the authors felt satisfied with them. Several participants thought it was much easier to ask AI to handle repetitive tasks or to revise its outputs many times until data workers felt that the results totally matched their expectations (P4, P5, P7, P9, P14, P16). The reason was that AI collaborators were more like _"machines"_ (P4). Therefore, human users did not need to consider whether their collaborators were willing and available to handle the tasks. For example, P4 commented that he could continuously ask AI to do some repetitive work without any worries. When asking human collaborators to polish their work, the participants might feel reluctant or inconvenienced since they needed to consider their collaborators' willingness and availability. P16 compared the experience with human and AI collaborators: _"You cannot continuously trouble others. If they did something incorrectly, you may ask them, 'Can you modify it here?' But when they continuously do something incorrectly, you may sometimes give up [asking them to revise] since you have already made several requirements or they become busy. If [working with] AI, you can ask them whenever you have new requirements."_

Fig. 2: This figure summarizes (1) the interviewees' opinions about why and why not AI collaborators are preferred and (2) our suggestions for future AI-powered data storytelling tools.

Furthermore, the participants thought that AI collaborators could follow their ideas faithfully. When collaborating with human collaborators, the participants might need to respect and compromise with others' preferences and opinions. This point stood out when creating data stories since people tended to have their personal styles of storytelling. P14 said: _"When talking about styles, everyone may have different preferences. Therefore, I will respect [human collaborators' opinions] more. If I have some professionals, I believe that their designs can be appreciated more."_ When collaborating with AI, he could ask AI to make the story as close as possible to his intent. P7 and P16 even thought that future AI could learn their personal preferences and create content as they did.

### Why is AI not a panacea for data storytelling?

When the research was launched, we anticipated that AI collaboration would be desired by data workers.
However, to our surprise, almost all participants were skeptical about whether AI could be a qualified collaborator across the entire workflow. Furthermore, participants provided various reasons for not collaborating with AI throughout the interviews. This section summarizes these reasons why AI collaborators may not be favorable. Again, these concerns reflect participants' general beliefs about AI and, as we observed, are heavily influenced by their experience with recent generative large-scale AI models.

#### 4.4.1 AI's capability of creation requires further improvement.

13 participants had a deep concern regarding the performance of AI in creating data stories (P1-P3, P5-P8, P12-P17). The potentially unsatisfactory performance has become an obstacle to introducing AI into the production of data stories. For example, P13 said: _"If [AI's] output can only achieve 40 to 50 marks [over 100 marks], I prefer to make drafts [of data stories] by myself or collaborate with other team members."_ Our participants mentioned their prior experience with AI's poor performance. P15 distrusted AI-generated content since he had seen some misleading academic paper explanations created by ChatGPT. He believed that such misleading content could be dangerous to users with limited expertise. P8 expressed her opinion on AI's capability by saying, _"some AI-created figures may have counter-intuitive mistakes, such as drawing a person with one ear missing."_ P5 and P8 worried that AI could only produce a limited variety of voices when narrating data stories and that the stories might not be engaging or interesting enough for their audience.

#### 4.4.2 AI may lack an understanding of data story contexts.

According to our participants, understanding the contexts of data stories is another challenging issue for AI. Here, contexts include both the _project and data background_ of the data story and the _authors' and audiences' preferences_. It is a common viewpoint among our participants that it is hard for AI to take contexts into consideration.

Five participants considered that AI could not understand the project and data background (P1, P3, P6, P10, P12). Among them, P1 mentioned the importance of the background of analytical projects and doubted whether AI could capture it: _"When you analyze a dataset, I feel somehow the important things are outside the dataset. I feel the background is actually very important. When you only analyze the dataset about cars2, [the findings] are very boring. However, under some specific scenarios, that dataset can be very important... I think AI may not capture [the background] well."_

Footnote 2: The _Car_ dataset is a widely used example dataset. It can be found at [https://github.com/vega/vega-datasets/blob/main/data/cars.json](https://github.com/vega/vega-datasets/blob/main/data/cars.json).

P6 thought that AI might fail to understand the semantic meaning of domain-specific data, which was a challenging part of his consulting work. He mentioned that he needed to meet his clients several times to understand the meanings of the data with domain experts' help. Regarding understanding the authors' and audiences' preferences, several participants expressed their concerns (P3, P8, P9, P11, P12, P14, P16, P18). P3, P12, and P14 did not think AI could understand their intent from their gathered data facts or story pieces. P16 further discussed her difficulty in understanding her data story co-authors' intent when their feedback was unclear.
It was necessary to leverage her previous experience of working with them and to discuss her understanding with the collaborators to align their ideas. She considered that AI could not handle this case since AI required a _"very clear idea"_ as input. Another issue, pointed out by P9 and P18, was that AI might produce the same output regardless of the user, which might limit the expression of the authors' personal styles. Due to this missing consideration of contexts, our participants thought that AI collaborators could hardly work as creators in the planning stage (see Fig. 1(c)), for tasks such as deciding the core message or outlining the story. Alternatively, AI collaborators could serve as assistants in this stage to execute humans' instructions and save effort.

#### 4.4.3 AI models are hard to communicate with as collaborators.

Our participants frequently mentioned that they found AI not as easy to interact with as humans (P2, P8, P9, P11-P13, P15, P16, P18). Therefore, they had doubts about AI collaborators in data storytelling. The first issue is that the participants believe AI needs _precise instructions_ when taking action (P9, P15, P18). However, giving such instructions in data storytelling might not always be easy. For example, P9 thought it was impossible to tell AI the concrete RGB values or the name of a color, so AI could hardly help her with the color adjustment task. P15 expressed a similar idea about general style adjustment. A more critical issue is that most current AI models are limited to _communicating with humans in specific modalities_ (P8, P9). For example, ChatGPT, an AI system frequently mentioned by interviewees, only communicates with humans using natural language. P8 considered multi-modal communication to be important in data storytelling: _"When communicating with humans, many modalities can be applied. Despite using natural language to describe what you want, you can also show them pictures or your sketch. You can even use gestures to describe [your idea]. However, the current AI algorithms may only have one way of input. Then we possibly need to learn how to write prompts. Furthermore, we are hard to estimate whether AI understands what we want to do."_ She further pointed out that the lack of communication modalities might lead to miscommunication. Even worse, humans could hardly realize such misunderstandings between humans and AI models. Besides P8, P11 and P16 also expressed their concerns about miscommunication. The last issue is that participants consider _AI to have limited capabilities to communicate with humans to reach a consensus when there is disagreement_ (P8, P16). Our participants thought this issue hindered AI's application in two-way communication. P8 mentioned that AI could not have an insightful discussion with humans when _"[presenters] need to argue with others based on understanding the problem and the project requirements."_ She felt that AI lacked the ability to conduct _"iterative"_ discussions with humans and might not understand the goal of the conversation well. Such discussions could happen in pitch talks or project meetings.

#### 4.4.4 AI storytellers can hardly enhance humans' relationships.

Another issue is that AI might not be able to help build relationships between storytellers and their audiences (P11, P12). According to P11, a business analyst, data stories can play an important role in enhancing human relationships.
He mentioned that the communication of data stories between his team and clients helped to build trust: _"I think AI cannot replace the [communication] between humans and humans... It is more important to let others feel that your team has its own insights, a proper way to work, great characters, and good communication skills in our pitches."_ He indicated that the usage of AI in communication might not help create a good image of his team and could therefore affect building partnerships with his clients. P12 further thought the application of AI might risk impairing the relationship between him and his clients. He considered that his data stories demonstrated his value as a professional to clients. If his stories were made and communicated completely by AI, the clients might _"question your value"_ and have a lower willingness to work with his team.

#### 4.4.5 The overhead of applying AI is considerable.

Our participants pointed out that various overheads could hinder the wide application of AI collaborators in data storytelling. The first type of overhead is the _learning cost of effective communication with AI_. According to P13, the quality of AI's output depended on the quality of the communication between humans and AI models. Therefore, it was critical to learn proper ways of communication, which echoes P8's comment above. P13 considered it necessary to have a _"checklist"_ to assist the interaction with AI.

The second type of overhead is _configuring AI models_. It might require tremendous effort to configure AI models before involving them as collaborators. P3, as an amateur user of generative AI models, complained about the overhead of deploying them: _"if the AI model requires setting up the deployment environment, it is quite annoying."_ P12 described configuring AI's functionalities as _"teaching"_ AI and said, _"If AI is not smart enough, I need to spend much time on teaching them. Then I prefer to do it myself or ask someone else to do it."_ In saying this, P12 indicated that he needed to balance the overhead and the benefit when collaborating with AI. Similarly, P5 thought that it required some effort to configure AI to mimic her voice when asking AI to record a data story for her. Therefore, she preferred to present the story herself.

The third type of overhead stems from _AI's potential mistakes_ (P3, P9). Our participants mentioned that humans needed to spend effort correcting AI's mistakes. When the task itself was trivial, it might even be easier for humans to do it directly. Therefore, the participants preferred to finish simple tasks, such as adding annotations and moving the positions of visual elements, by themselves. The unclear capability of AI models could also lead to an unnecessary _overhead of trial-and-error_. P15 and P16 indicated their concern that they might spend unnecessary time exploring whether AI could help them and receive no assistance in the end. Their time and effort in exploring AI's ability could become an overhead to the task.

#### 4.4.6 The reliance on training data may limit AI's usage.

Our participants expressed their concerns about AI's reliance on training data. They thought the performance of most existing AI models often relied on large-scale training datasets built with public data. First, general AI models might only be trained on public _domain-agnostic data_. As a result, several participants worried that general AI models could fail when dealing with domain problems (P3, P4, P6, P7, P11, P12, P16).
Our participants mentioned that some domain-specific data could hardly be collected, processed, or accessed due to compliance issues. Therefore, most commercial AI systems might not have access to such data and would lack the ability to handle it. Second, AI cannot _perform with the latest data_ since its training data can be outdated and AI may not have the ability to collect new data. P12 expressed his concern that AI trained on outdated data might be useless in his time-sensitive work. P9 had a similar concern. P17, as a journalist, further pointed out: _"[The top journalists] need to conduct in-depth interviews. Each interview can last one hour. Each article can consist of ten to twenty interviews. Then they will organize the viewpoints and write an article with 10,000 words... If you do not interview, there will be nothing in the database. How could you ask AI to generate?"_ She thought that AI was greatly limited by its inability to talk with humans and collect first-hand data to write compelling stories. P7 also thought he would have to collect data for AI before collaboration. Finally, since AI models can be trained with data collected from the web, _the created content's reliability is questionable_. P12 mentioned that most of the data on the web was not provided by authoritative sources. Since AI-created content could be based on such data, it was hard to judge the content's accuracy and authority. As a result, he could not use AI-created content in his data stories.

#### 4.4.7 The application of AI can lead to ethical concerns.

When collaborating with AI, many participants thought that the potential ethical problems were not easy to address. Multiple participants (P1, P8, P10-P12, P14) raised concerns about the _responsibility_ for the story content. They might need to be responsible for AI collaborators' mistakes caused by insufficient capability or unreliable training data (see Sec. 4.4.6). If the responsibility issues were not well addressed, they would not feel confident collaborating with AI. P3 also worried about how to handle the _copyright_ of human-AI co-created content. Another potential issue is data _security_ (P3, P12). P12 thought that the data security problem hindered the broad usage of commercial AI systems. He mentioned that his company limited the usage of AI due to the risk of data leakage. He could only use the AI systems provided by his company even though their performance was unsatisfactory. The final concern is the _transparency_ of AI models. P8 and P10 thought that they might have trouble knowing whether AI really understood their intent in making data stories. P7 and P18 proposed that AI models should explain their rationales when humans feel skeptical about certain decisions.

## 5 What should future human-AI collaborative tools look like?

Our interviewees showed their enthusiasm for introducing AI into their workflow. As discussed in Sec. 4.2.2, they commonly preferred collaborating with AI over asking AI to create a story from data in an end-to-end manner. Based on this observation, we consider that more human-AI collaborative tools should be studied and offered to data workers. In this section, we propose recommendations for future data storytelling tools according to our findings from the interviews.
2306.16721
AoA-based Position and Orientation Estimation Using Lens MIMO in Cooperative Vehicle-to-Vehicle Systems
Positioning accuracy is a critical requirement for vehicle-to-everything (V2X) use cases. Therefore, this paper derives the theoretical limits of estimation for the position and orientation of vehicles in a cooperative vehicle-to-vehicle (V2V) scenario, using a lens-based multiple-input multiple-output (lens-MIMO) system. Following this, we analyze the Cramér-Rao lower bounds (CRLBs) of the position and orientation estimation and explore a received signal model of a lens-MIMO for the particular angle of arrival (AoA) estimation with a V2V geometric model. Further, we propose a lower complexity AoA estimation technique exploiting the unique characteristics of the lens-MIMO for a single target vehicle; as a result, its estimation scheme is effectively extended by the successive interference cancellation (SIC) method for multiple target vehicles. Given these AoAs, we investigate the lens-MIMO estimation capability for the positions and orientations of vehicles. Subsequently, we prove that the lens-MIMO outperforms a conventional uniform linear array (ULA) in a certain configuration of a lens's structure. Finally, we confirm that the proposed localization algorithm is superior to ULA's CRLB as the resolution of the lens increases in spite of the lower complexity.
Joo-Hyun Jo, Jae-Nam Shim, Byoungnam Kim, Chan-Byoung Chae, Dong Ku Kim
2023-06-29T06:44:29Z
http://arxiv.org/abs/2306.16721v1
AoA-based Position and Orientation Estimation Using Lens MIMO in Cooperative Vehicle-to-Vehicle Systems

###### Abstract

Positioning accuracy is a critical requirement for vehicle-to-everything (V2X) use cases. Therefore, this paper derives the theoretical limits of estimation for the position and orientation of vehicles in a cooperative vehicle-to-vehicle (V2V) scenario, using a lens-based multiple-input multiple-output (lens-MIMO) system. Following this, we analyze the Cramér-Rao lower bounds (CRLBs) of the position and orientation estimation and explore a received signal model of a lens-MIMO for the particular angle of arrival (AoA) estimation with a V2V geometric model. Further, we propose a lower-complexity AoA estimation technique exploiting the unique characteristics of the lens-MIMO for a single target vehicle; as a result, its estimation scheme is effectively extended by the successive interference cancellation (SIC) method for multiple target vehicles. Given these AoAs, we investigate the lens-MIMO estimation capability for the positions and orientations of vehicles. Subsequently, we prove that the lens-MIMO outperforms a conventional uniform linear array (ULA) for a certain configuration of the lens's structure. Finally, we confirm that the proposed localization algorithm is superior to the ULA's CRLB as the resolution of the lens increases, in spite of its lower complexity.

Index Terms: Position and orientation, Cramér-Rao lower bound (CRLB), lens-based multiple-input multiple-output (lens-MIMO), cooperative localization, AoA estimation.

## I Introduction

Higher-accuracy positioning is one of the essential requirements for various 5G vehicle-to-everything (V2X) advanced driving use cases--for instance, in location-aware communications at the street intersection depicted in the 3rd generation partnership project (3GPP) Rel-16 [1, 2, 3]. Interestingly, the 5G automotive association (5GAA) highlighted highly accurate localization requirements as one of the key indicators for the autonomous vertical industries [4]. Moreover, acquiring the orientation information of surrounding vehicles helps to predict a vehicle's maneuvering efficiently, making autonomous driving vehicles safer in terms of the planned movements of all surrounding vehicles [5]. As a result, cooperative localization based on millimeter-wave (mmWave) multiple-input multiple-output (MIMO) in vehicle-to-vehicle (V2V) systems has been investigated to provide better estimates of the position and orientation of vehicles [6]. Specifically, mmWave signals can propagate along the line-of-sight (LoS) path with little reflection and scattering, which is beneficial to higher-precision positioning. However, this requires a significant computational burden and increased energy consumption in the case of a large number of antennas and higher radio frequencies (RFs), which are widely implemented with a hybrid arrangement of analog and digital precoders. Hence, to overcome these inherent challenges, the lens-based multiple-input multiple-output (lens-MIMO) has been explored in vehicular applications [7, 8, 9].

Meanwhile, several positioning techniques within the radio access network (RAN) have been standardized to meet the requirements of V2X services. Specifically, the infrastructure plays a central role in determining vehicular locations in terms of the measurement of angles and distances from other vehicles [10].
However, due to the limited availability of real estate for base stations (BSs) or road side units (RSUs) and, more importantly, the potential path blockage between gNodeBs and vehicles, cooperative localization among vehicles has received extensive interest in V2X positioning use cases [11, 12, 13]. Furthermore, cooperative positioning schemes using direct channels among vehicles, such as the sidelink, which is currently being examined in the 3GPP Rel-18 work item [14], will help meet the requirements of localization accuracy. Unlike the conventional uniform linear array (ULA), whose error bounds and localization performance have been widely investigated [15, 16], the lens-MIMO still requires further examination to determine the feasibility of its localization capability, especially with lower complexity in V2V scenarios. Furthermore, the accurate estimation of channel parameters between vehicles is crucial to enhancing the overall performance of vehicle position and orientation estimation.

In lens-MIMO, the lens is inherently capable of focusing the energy of all wavefronts that are incident on the lens surface with a single angle of arrival (AoA) into one focal point. Therefore, each focal point represents a specific angle of arrival, and the lens-MIMO spatially samples the received signal according to its antenna placement. This capability allows for the physical separation of received signals corresponding to different incident angles across the antenna elements. This facilitates a certain level of inter-path interference suppression among these signals, where the extent of the suppression gain depends on the disparities in their AoAs. Consequently, this feature contributes to the enhancement of multiple-AoA estimation. It should be noted that placing the antenna elements in the focal region associates each element with a unique arrival direction at which it receives the maximum signal power, which facilitates a simpler AoA estimation implementation [17].

In [18], the authors investigated the maximum likelihood (ML) technique, exploiting the received signal of the lens-MIMO to estimate the AoA. Additionally, to reduce the computational complexity of AoA estimation, the authors in [19], [20], and [21] have exploited the sparse structure of the mmWave channel and the energy-focusing property of the lens-MIMO. Precisely, the key idea of these schemes is to efficiently utilize the sparsity of the mmWave MIMO channel by using only the antenna elements with strongly received energy. However, the aforementioned works still impose a high computational burden, either in calculating the covariance matrix of the reduced spatial channel or in searching for the AoA with a large dictionary of precomputed received array response vectors. Therefore, to the best of our knowledge, lens-MIMO AoA estimation still requires further complexity reduction to be useful for practical applications. This concern has motivated us to further explore the lens-MIMO localization capability in terms of not only a simpler AoA estimation implementation but also its performance superiority over the ULA.

In this paper, we propose a lower-complexity AoA estimation algorithm that exploits the inherent structure of the lens-MIMO. Specifically, its estimates are utilized in the analysis of the Cramér-Rao lower bounds (CRLBs) for position and orientation estimation.
Additionally, we investigate the cooperative V2V localization performance using the lens-MIMO with lower hardware complexity. More precisely, we summarize our contributions as follows.

* First, we investigate the received signal of the lens-MIMO for cooperative V2V localization and derive the CRLBs of the position and orientation of the vehicles for a given CRLB of the AoA. With the considered system model, we confirm that the lens-MIMO is superior to the conventional ULA under a specific condition on the lens's design parameters, which is expressed in terms of the focal length and lens aperture.
* We propose a low-complexity lens-MIMO AoA estimation algorithm exploiting the ratio of only the two most strongly received signals at the antenna elements (R2SA).
* We subsequently employ the successive interference cancellation (SIC) technique with the R2SA in order to estimate multiple AoAs. It is shown that the inherent energy-focusing property of the lens-MIMO helps suppress the interference among multiple incoming paths, especially in the intersection scenario. Consequently, this indicates that the lens-MIMO with SIC can further improve the estimation accuracy of multiple AoAs.
* With the estimated AoAs, we explore the feasibility of a relative localization method that solves sensing equations (SE) associated with AoA-based geometric models, whose performance asymptotically approaches that of maximum likelihood (ML) localization.
* Through simulations, we demonstrate that the lens-MIMO considerably outperforms the conventional ULA. Furthermore, in both position and orientation estimation, the performance of the proposed relative localization scheme approaches the derived CRLB and satisfies the target requirements for 5G-V2X positioning services, particularly in high signal-to-noise ratio (SNR) regions and with a larger number of antennas.

**Notation:** For a matrix \(\mathbf{A}\), \(\mathbf{A}^{\mathsf{T}}\), \(\mathbf{A}^{\mathsf{H}}\), \(\mathbf{A}^{-1}\), and \(\mathrm{Tr}(\mathbf{A})\) denote the transpose, conjugate transpose, inverse, and trace operations, respectively. \(\mathbb{E}\) is the expectation of a random variable.

## II System Model

In this section, we comprehensively describe the received signal model of lens-MIMO systems in terms of wave optics on the geometric model of a street intersection, where each road has a different direction, as shown in Fig. 1(a). Specifically, each vehicle in a set of vehicles \(\mathcal{V}=\{1,\ldots,N_{v}\}\) is approaching or crossing the intersection. Their communication range is assumed to be \(R\) in two-dimensional space. Fig. 1(b) shows that each vehicle is equipped with a lens-MIMO, which possesses an energy-focusing characteristic that facilitates simplified AoA estimation using a subset of the array. Fig. 1(c) illustrates that the position of the \(k\)-th vehicle is \(\mathbf{p}_{k}=\left[x_{k},y_{k}\right]^{\mathsf{T}}\in\mathbb{R}^{2}\) with its relative orientation \(\omega_{k,j}\in[0,2\pi)\) along the \(X\)-axis, while the \(j\)-th connected vehicle has a position of \(\mathbf{p}_{j}\). Further, we assume that the \(j\)-th vehicle is known to be aligned with the X-axis, but any angle can be assumed because it does not affect the estimation of the relative orientation.

Fig. 1: Cooperative V2V scenario with lens-MIMO.

### _Signal model_

We consider a mmWave lens-MIMO system, as depicted in Fig. 1(b), where vehicles are equipped with \(N\) receive antenna elements.
Precisely, the received signal at the \(n\)-th antenna element of the \(k\)-th vehicle can be defined for an AoA \(\theta_{k,j}\) from the \(j\)-th to the \(k\)-th vehicle as follows [22]:

\[[\mathbf{y}_{k,j}]_{n}=\frac{h_{k,j}L}{\sqrt{\rho_{\mathrm{o}}^{k,j}x}}\text{sinc}\left[\frac{L}{\lambda}\left\{\sin\theta_{n}-\sin\theta_{k,j}\right\}\right]e^{-j\Phi_{0}}+[\mathbf{n}]_{n}, \tag{1}\]

where \(h_{k,j}\) and \(\rho_{\mathrm{o}}^{k,j}\) are the complex channel gain and path-loss coefficient between the \(k\)-th and \(j\)-th vehicles, respectively; and \(L\), \(\lambda\), and \(x\) are the lens aperture, the wavelength of the operating frequency, and the distance from the rear surface of the lens to the antenna array, respectively. The constant phase term, which is represented as \(\Phi_{0}=2\pi x/\lambda\), depends only on \(x\). Further, the antenna elements on the arc array are placed at the respective focal points corresponding to each angle of \(\theta_{n}\in\left\{\sin^{-1}\left(\frac{d_{\mathrm{lens}}}{x}n\right):n=-\frac{N-1}{2},\ldots,\frac{N-1}{2}\right\}\), where \(d_{\mathrm{lens}}\) represents the antenna spacing, which is equal to \(\lambda f/L\). Additionally, \(\mathbf{n}\in\mathbb{C}^{N\times 1}\) is the additive white Gaussian noise (AWGN) vector following the distribution \(\mathcal{CN}(0,\sigma^{2}\mathbf{I}_{N})\). It can be seen in (1) that the amplitude depends on the AoA, while the received phase is not affected by the AoA and depends only on the distance from the lens to the antenna array.

### _Geometric model of V2V use cases_

We consider the sensing-based scheduling employed in 3GPP V2V unicast communication. In this scheme, each target link is assigned to a separate sub-channel [23, 24]. Based on the geometric structure illustrated in Fig. 1(c), the measurement model for the AoA of the line-of-sight (LoS) path from the \(j\)-th to the \(k\)-th vehicle can be expressed as

\[\tilde{\theta}_{k,j}=g(\mathbf{p}_{k},\mathbf{p}_{j},\omega_{k,j})+n(\theta_{k,j}), \tag{2}\]

where \(\tilde{\theta}_{k,j}\) and \(\omega_{k,j},\forall k,j\in\mathcal{V}\), are the measured AoA at the \(k\)-th vehicle and the relative orientation of the \(k\)-th vehicle with respect to the \(j\)-th vehicle's orientation, respectively. While the measurement noise model in (2) can differ depending on how the incident angles are sensed, we assume that the measured AoA's noise \(n(\theta_{k,j})\) follows a Gaussian distribution with zero mean and variance \(\sigma_{n}^{2}(\theta_{k,j})\), which is widely adopted in [11, 12]. Recalling the sensing-based scheduling, it allows the measured AoAs to be independent and identically distributed (i.i.d.), as formulated in (2). The actual AoA of the LoS path from the \(j\)-th vehicle to the \(k\)-th vehicle, denoted by \(g(\cdot)\), can be expressed in terms of the position and orientation parameters as follows:

\[g(\mathbf{p}_{k},\mathbf{p}_{j},\omega_{k,j})=\tan^{-1}\left(\frac{y_{j}-y_{k}}{x_{j}-x_{k}}\right)-\omega_{k,j}. \tag{3}\]

The model in (3) depends on the positions (\(\mathbf{p}_{k}\), \(\mathbf{p}_{j}\)) and orientation (\(\omega_{k,j}\)). While this model has the potential to result in multiple solutions for the localization parameters, we will explore the conditions required to ensure a unique solution in Section V. Furthermore, we assume that the variance of the AoA measurement, denoted by \(\sigma_{n}^{2}(\theta_{k,j})\), has a lower bound which is equal to the CRLB of the AoA \(\theta_{k,j}\).
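To make the signal model concrete, the following minimal sketch (ours, not from the paper) generates the noiseless amplitude pattern of (1) with \(x=f\). It assumes numpy's normalized sinc convention, and all numeric values (wavelength, aperture, number of elements) are illustrative.

```python
# Minimal sketch of the noiseless lens-MIMO amplitudes in (1) with x = f.
import numpy as np

def lens_rx_amplitudes(theta, N, L, f, lam):
    """Amplitude a_n(theta) at each antenna element, per (1).

    Antennas sit at the critical angles theta_n = asin(n * lam / L),
    n = -(N-1)/2, ..., (N-1)/2, i.e. spacing d_lens = lam * f / L.
    """
    n = np.arange(N) - (N - 1) / 2
    sin_theta_n = n * lam / L                      # sin(theta_n) on the focal arc
    return (L / np.sqrt(f)) * np.sinc((L / lam) * (sin_theta_n - np.sin(theta)))

# Example: lam = 0.01 m (~30 GHz), L = 20*lam, f = L/2, N = 41 elements.
lam, L, f, N = 0.01, 20 * 0.01, 10 * 0.01, 41
a = lens_rx_amplitudes(np.deg2rad(17.0), N, L, f, lam)
print(np.argsort(np.abs(a))[-2:])                  # two strongest elements
```

Running this illustrates the energy-focusing property: for any AoA, only one or two elements carry most of the power, which is exactly what the R2SA estimator of Section IV exploits.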
In the cases of the conventional ULA and lens-MIMO systems, their CRLBs of the AoA can be readily derived as follows [22]:

\[\text{CRLB}_{\text{ULA}}(\theta)=\frac{6\sigma^{2}}{N(N^{2}-1)d_{\text{ULA}}^{2}\cos^{2}\theta}, \tag{4}\]

\[\text{CRLB}_{\text{Lens}}(\theta)=\frac{\sigma^{2}}{2}\sum_{n}\mathbf{a}_{n}^{2}(\theta)\Bigg{/}\Bigg{[}\left\{\sum_{n}\mathbf{a}_{n}^{2}(\theta)\right\}\left\{\sum_{n}\left(\frac{\partial}{\partial\theta}\mathbf{a}_{n}(\theta)\right)^{2}\right\}-\left\{\sum_{n}\mathbf{a}_{n}(\theta)\frac{\partial}{\partial\theta}\mathbf{a}_{n}(\theta)\right\}^{2}\Bigg{]}, \tag{5}\]

where \(d_{\text{ULA}}\) is the antenna spacing of the ULA, and \(\mathbf{a}_{n}(\theta)=\frac{L}{\sqrt{x}}\text{sinc}\left[\frac{L}{\lambda}\left\{\sin\theta_{n}-\sin\theta\right\}\right]\) is the \(n\)-th element of the steering vector, i.e., the amplitude of the received signal in (1). As the distance \(x\) between the antenna array and the rear of the lens increases, the received amplitude is reduced, resulting in potential inaccuracies in AoA estimation.

## III Theoretical Bound for Localization

In this section, we investigate the CRLBs for AoA estimation in both the lens-MIMO and the ULA. Furthermore, we derive the theoretical lower bound for the estimation of position and orientation.

### _CRLB Analysis for AoA Estimation_

Accurate estimation of the AoA is crucial for achieving precise positioning. Now, we compare the CRLBs of the AoA for the lens-MIMO and the ULA. However, the direct comparison of (4) and (5) is difficult to interpret because the AoA \(\theta\) enters the squared amplitudes in (5) in a non-linear fashion; thus, we require a more insightful form of (5) to make the two CRLBs easier to compare. Hence, we assume that all antenna elements of the lens-MIMO are placed in the focal region with critical antenna spacing. In other words, the distance from the rear of the lens to the array is the focal length (i.e., \(x=f\)), and the antenna placement can be represented by the critical angular set, defined by \(\theta_{n}\in\mathcal{S}_{\theta_{n}}=\left\{\sin^{-1}\left(\frac{d_{\mathrm{lens}}n}{f}\right):n=-\frac{N-1}{2},\ldots,\frac{N-1}{2}\right\}\) with \(d_{\mathrm{lens}}=\frac{f}{L}\lambda\), as given in [25]. Now, to obtain the derivative of the amplitude in (5), let us define \(\boldsymbol{\mu}_{1}\) and \(\boldsymbol{\mu}_{2}\) as

\[\left[\boldsymbol{\mu}_{1}\right]_{n}=\text{sinc}\left(\frac{Ld_{\mathrm{lens}}n}{f\lambda}-\frac{L}{\lambda}\sin\theta\right), \tag{6}\]

\[\left[\boldsymbol{\mu}_{2}\right]_{n}=\frac{\partial}{\partial\theta}\text{sinc}\left(\frac{Ld_{\mathrm{lens}}n}{f\lambda}-\frac{L}{\lambda}\sin\theta\right). \tag{7}\]

It may be readily verified that \(\mathbf{a}_{n}=\frac{L}{\sqrt{f}}[\boldsymbol{\mu}_{1}]_{n}\) and \(\frac{\partial}{\partial\theta}\mathbf{a}_{n}=\frac{L^{2}}{\lambda\sqrt{f}}\cos\theta\,[\boldsymbol{\mu}_{2}]_{n}\).
Then, the CRLB of the lens-MIMO in (5) can be represented by

\[\text{CRLB}_{\text{Lens}}(\theta)=\frac{f\lambda^{2}\sigma^{2}}{2\pi^{2}L^{4}\cos^{2}\theta}\cdot\frac{\boldsymbol{\mu}_{1}^{\intercal}\boldsymbol{\mu}_{1}}{[\boldsymbol{\mu}_{1}^{\intercal}\boldsymbol{\mu}_{1}][\boldsymbol{\mu}_{2}^{\intercal}\boldsymbol{\mu}_{2}]-[\boldsymbol{\mu}_{1}^{\intercal}\boldsymbol{\mu}_{2}]^{2}}. \tag{8}\]

By the property of the lens-MIMO, which works like a discrete Fourier transform (DFT) beamformer, we subsequently obtain the following relations:

\[\begin{split}\boldsymbol{\mu}_{1}^{\mathsf{T}}\boldsymbol{\mu}_{1}&=1,\\ \boldsymbol{\mu}_{1}^{\mathsf{T}}\boldsymbol{\mu}_{2}&=0,\\ 1\leq\boldsymbol{\mu}_{2}^{\mathsf{T}}\boldsymbol{\mu}_{2}&\leq 2.\end{split} \tag{9}\]

By these relations, the CRLB of the lens-MIMO can be simplified as follows:

\[\text{CRLB}_{\text{Lens}}(\theta)=\frac{f\lambda^{2}\sigma^{2}}{2\pi^{2}L^{4}\cos^{2}\theta}\cdot\frac{1}{[\boldsymbol{\mu}_{2}^{\mathsf{T}}\boldsymbol{\mu}_{2}]}. \tag{10}\]

The condition under which \(\text{CRLB}_{\text{Lens}}\leq\text{CRLB}_{\text{ULA}}\), i.e., under which the performance of the lens-MIMO prevails over that of the ULA, is readily derived in terms of the lens aperture and focal length as follows:

\[\text{CRLB}_{\text{Lens}}\leq\text{CRLB}_{\text{ULA}}\text{ when }f\leq\frac{12L^{3}}{(2L+1)(L+1)\lambda^{4}}. \tag{11}\]

\(Proof\): See Appendix A.

Suppose that the focal length is set to the minimum \(f=\frac{L}{2}\), which is proven in [26]; then the condition in (11) can be expressed in terms of the lens aperture \(L\) as follows:

\[\frac{12L^{3}}{(2L+1)(L+1)\lambda^{4}}>\frac{L}{2}. \tag{12}\]

Given that LTE-V2X and NR-V2X operate at frequencies in the GHz range, the wavelength \(\lambda\) is much smaller than 1 meter (e.g., \(\lambda=0.1\) m and \(0.01\) m for frequencies of 2.4 GHz and 28 GHz, respectively). Consequently, (12) can be readily proved to hold for \(L=k\lambda\), where \(k\) is an integer and is practically greater than 10 in order to efficiently concentrate the received signal energy on the lens's surface [7, 22, 26]. For the given system parameters at operating frequencies in the GHz spectrum, the inequality is always satisfied within the feasible region of the lens's aperture size. When we consider values of \(\lambda\) equal to 0.1 m and 0.01 m, we find that (12) is satisfied if the value of \(k\) is greater than \(10^{-2}\) and \(10^{-3}\), respectively. However, a lens aperture that contradicts these conditions (i.e., \(L<10^{-3}\) m and \(L<10^{-5}\) m for frequencies of 2.4 GHz and 28 GHz, respectively) is impractical for the lens-MIMO system. It can be noted that the AoA CRLB of a lens-MIMO with a feasible aperture size is always less than that of the ULA in NR-V2X communication systems.

In (11) and (12), we highlighted that a lens whose focal length is half of its aperture has advantages in both performance and the implementation of a small form factor. However, such a lens gets thicker for higher permittivity and could suffer higher lens transmission loss [27]. Meanwhile, as the focal length increases, the amplitude gain at the antenna array suffers a little more path loss between the rear of the lens and the antenna array, as in (5). Therefore, one could also consider a trade-off between the lens's transmission loss and this path loss in choosing the focal length, but this is beyond our scope and will be considered in future research.
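As a quick numeric cross-check of this comparison, the sketch below (ours, not the authors' code) evaluates the lens CRLB directly from (5) with finite-difference derivatives and compares it with (4). The SNR, averaging grid, and the literal use of \(d_{\text{ULA}}=\lambda/2\) in (4) (whose normalization follows the formula as printed) are illustrative assumptions.

```python
# Hedged numeric check of the CRLB comparison: evaluate (5) by finite
# differences and compare with (4) for the same number of elements.
import numpy as np

lam, L = 0.01, 20 * 0.01
f = L / 2                                   # minimum focal length, as in (12)
N = 41
sigma2 = 10 ** (-5 / 10)                    # noise power at 5 dB SNR (assumed unit signal)

def a_vec(theta):                           # lens amplitudes, (1) with x = f
    n = np.arange(N) - (N - 1) / 2
    return (L / np.sqrt(f)) * np.sinc((L / lam) * (n * lam / L - np.sin(theta)))

def crlb_lens(theta, h=1e-6):               # direct evaluation of (5)
    a = a_vec(theta)
    da = (a_vec(theta + h) - a_vec(theta - h)) / (2 * h)
    return (sigma2 / 2) * (a @ a) / ((a @ a) * (da @ da) - (a @ da) ** 2)

def crlb_ula(theta, d_ula=lam / 2):         # (4) as printed, d_ULA = lambda/2
    return 6 * sigma2 / (N * (N**2 - 1) * d_ula**2 * np.cos(theta) ** 2)

thetas = np.deg2rad(np.linspace(-60, 60, 121))   # grid avoids the cos(theta) -> 0 endpoints
print(np.mean([crlb_lens(t) for t in thetas]),
      np.mean([crlb_ula(t) for t in thetas]))
```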
Subsequently, Fig. 2 compares the upper bound of the lens-MIMO's CRLB, which is derived as (A.5) in Appendix A, and that of the conventional ULA in (4) as the focal length increases for a given lens aperture. For a fair comparison with the same number of antenna elements, recall that we placed \((2L+1)\) antennas for both the lens-MIMO and the ULA, where the signal-to-noise ratio (SNR) is assumed to be 5 dB and the antenna spacing is \(f/L\) for the lens-MIMO and \(\lambda/2\) for the ULA, respectively. Further, the \(Y\)-axis and \(X\)-axis in Fig. 2 represent the CRLB averaged over the AoA \(\theta\in[-\frac{\pi}{2},\frac{\pi}{2}]\) and the focal length \(f\) in units of wavelength, respectively. For both the \(10\lambda\) and \(20\lambda\) lens apertures, the lens-MIMO performs much better at the minimum focal length (i.e., \(f=5\lambda\) for \(L=10\lambda\) and \(f=10\lambda\) for \(L=20\lambda\)), and this performance is sustained until the focal length is about five times the lens aperture (i.e., \(f=50\lambda\) for \(L=10\lambda\) and \(f=100\lambda\) for \(L=20\lambda\)). Fig. 2 confirms that the lens-MIMO is superior to the conventional ULA for a certain range of focal lengths, from the minimum distance \(L/2\) to five times the lens aperture, \(5L\). It should be noted that a suitable focal length design for the configuration of the lens aperture is essential to enhance the positioning and orientation estimation performance. It is important to highlight that the lens-MIMO can achieve higher accuracy in estimating the position and orientation than the ULA, given the specific design parameters of the lens.

Fig. 2: Error bound as a function of the focal length.

### _CRLB derivation for position & orientation_

We now derive the theoretical bounds of the position and orientation estimates in order to compare the localization performance. Initially, we assume that all vehicles communicate without interference by assigning each pair of vehicles to a different sub-channel in terms of the sensing-based scheduling of 3GPP V2V communication. We also consider a sparse mmWave channel with a single LoS path, which is assumed to be the prevalent link state in V2V unicast channels at short distances [28, 29, 30]. We will provide further discussion on this topic in Section IV. Let \(\mathcal{V}_{k}\) be the set of neighboring vehicles within the communication range of the \(k\)-th vehicle. Assuming that the AoAs in (2) are independent and identically distributed (i.i.d.) Gaussian random variables, the joint probability density function of the vector of AoAs \(\mathbf{\theta}\) is given by

\[\begin{split} f(\mathbf{\theta}|\mathbf{p},\mathbf{\omega})&=\prod_{k\in\mathcal{V}}\prod_{j\in\mathcal{V}_{k}}f(\tilde{\theta}_{k,j}|\mathbf{p}_{k},\mathbf{p}_{j},\omega_{k,j})\\ &=\prod_{k\in\mathcal{V}}\prod_{j\in\mathcal{V}_{k}}\frac{1}{\sqrt{2\pi\sigma_{n_{k,j}}^{2}}}\text{exp}\left(-\frac{(\tilde{\theta}_{k,j}-\alpha_{k,j})^{2}}{2\sigma_{n_{k,j}}^{2}}\right),\end{split} \tag{13}\]

where \(\mathbf{\theta}\), \(\mathbf{p}\), and \(\mathbf{\omega}\) are vectors consisting of all AoA measurements, positions, and orientations, respectively; \(\alpha_{k,j}\) is an abbreviated form of \(g(\mathbf{p}_{k},\mathbf{p}_{j},\omega_{k,j})\) in (3). The variance of the AoA measurement noise, denoted by \(\sigma_{n_{k,j}}^{2}\), depends on the AoA of the LoS path between the \(j\)-th and \(k\)-th vehicles, as noted in (5).
To investigate the lower bounds of the position and orientation accuracy, we define a vector consisting of the unknown location parameters as

\[\mathbf{\eta}=\left[\mathbf{\eta}_{1}^{\mathsf{T}},\ldots,\mathbf{\eta}_{N_{v}}^{\mathsf{T}}\right]^{\mathsf{T}}\in\mathbb{R}^{3N_{v}\times 1}, \tag{14}\]

in which \(\mathbf{\eta}_{k}\) consists of the unknown parameters (position \(\mathbf{p}_{k}\) and orientation \(\omega_{k,j}\)) of the \(k\)-th vehicle. Defining \(\hat{\mathbf{\eta}}\) as an unbiased estimator of \(\mathbf{\eta}\), the error variance satisfies the inequality [31]

\[\mathbb{E}_{\mathbf{\theta}|\mathbf{\eta}}[(\mathbf{\eta}-\hat{\mathbf{\eta}})(\mathbf{\eta}-\hat{\mathbf{\eta}})^{\mathsf{H}}]\geq F^{-1}(\mathbf{\eta}), \tag{15}\]

where \(\mathbb{E}_{\mathbf{\theta}|\mathbf{\eta}}[\cdot]\) denotes the expectation parameterized by the unknown parameter \(\mathbf{\eta}\), and the Fisher information matrix (FIM) \(F(\mathbf{\eta})\) is defined by

\[F(\mathbf{\eta})=-\mathbb{E}_{\mathbf{\theta}|\mathbf{\eta}}\left[\frac{\partial^{2}\ln f(\mathbf{\theta}|\mathbf{\eta})}{\partial\mathbf{\eta}\,\partial\mathbf{\eta}^{\mathsf{T}}}\right]. \tag{16}\]

The \(3N_{v}\times 3N_{v}\) FIM \(F(\mathbf{\eta})\) may be re-constructed in blocks as \(N_{v}\times N_{v}\) sub-block matrices for each unknown parameter. It is formed as

\[F(\mathbf{\eta})=\left[\begin{array}{ccc}F_{\mathbf{x}\mathbf{x}}&F_{\mathbf{x}\mathbf{y}}&F_{\mathbf{x}\mathbf{\omega}}\\ F_{\mathbf{y}\mathbf{x}}&F_{\mathbf{y}\mathbf{y}}&F_{\mathbf{y}\mathbf{\omega}}\\ F_{\mathbf{\omega}\mathbf{x}}&F_{\mathbf{\omega}\mathbf{y}}&F_{\mathbf{\omega}\mathbf{\omega}}\end{array}\right]. \tag{17}\]

The diagonal elements of the matrix \(F_{\mathbf{x}\mathbf{x}}\in\mathbb{C}^{N_{v}\times N_{v}}\) may be readily obtained from the geometric model in (3) as

\[[F_{\mathbf{x}\mathbf{x}}]_{k,k}=\sum_{j\in\mathcal{V}_{k}}\frac{1}{\sqrt{2\pi}\sigma_{n_{k,j}}^{3}}\frac{(y_{j}-y_{k})^{2}}{d_{j,k}^{4}}, \tag{18}\]

where \(d_{j,k}\) is the distance from the \(k\)-th to the \(j\)-th vehicle, and all off-diagonal entries of \(F_{\mathbf{x}\mathbf{x}}\) are zero. The details are proved in Appendix B. The diagonal terms of the other sub-matrices may also be readily derived as

\[[F_{\mathbf{y}\mathbf{y}}]_{k,k}=\sum_{j\in\mathcal{V}_{k}}\frac{1}{\sqrt{2\pi}\sigma_{n_{k,j}}^{3}}\frac{(x_{j}-x_{k})^{2}}{d_{j,k}^{4}}, \tag{19}\]

\[[F_{\mathbf{x}\mathbf{y}}]_{k,k}=[F_{\mathbf{y}\mathbf{x}}]_{k,k}=\sum_{j\in\mathcal{V}_{k}}\frac{1}{\sqrt{2\pi}\sigma_{n_{k,j}}^{3}}\frac{(x_{j}-x_{k})(y_{j}-y_{k})}{d_{j,k}^{4}}, \tag{20}\]

\[[F_{\mathbf{x}\mathbf{\omega}}]_{k,k}=[F_{\mathbf{\omega}\mathbf{x}}]_{k,k}=-\sum_{j\in\mathcal{V}_{k}}\frac{1}{\sqrt{2\pi}\sigma_{n_{k,j}}^{3}}\frac{(y_{j}-y_{k})}{d_{j,k}^{2}}, \tag{21}\]

\[[F_{\mathbf{y}\mathbf{\omega}}]_{k,k}=[F_{\mathbf{\omega}\mathbf{y}}]_{k,k} \tag{22}\]

\[=-\sum_{j\in\mathcal{V}_{k}}\frac{1}{\sqrt{2\pi}\sigma_{n_{k,j}}^{3}}\frac{(x_{j}-x_{k})}{d_{j,k}^{2}}, \tag{23}\]

\[[F_{\mathbf{\omega}\mathbf{\omega}}]_{k,k}=\sum_{j\in\mathcal{V}_{k}}\frac{1}{\sqrt{2\pi}\sigma_{n_{k,j}}^{3}}, \tag{24}\]

and the rest of the entries (i.e., the off-diagonal terms of each sub-block) are zero.
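To make the block structure of (17)-(24) concrete, here is a sketch (ours, not the authors' code) that assembles \(F(\mathbf{\eta})\) from vehicle positions and per-link AoA noise variances. For simplicity it uses the standard Gaussian-likelihood weight \(1/\sigma^{2}\) per AoA measurement; the exact constants and signs printed in (18)-(24) can be substituted for the weight `w` and the signed terms below.

```python
# Sketch of assembling the FIM of (17)-(24), with global parameter ordering
# [x_1..x_Nv, y_1..y_Nv, w_1..w_Nv] matching the sub-block layout of (17).
import numpy as np

def build_fim(positions, links, var):
    """positions: (Nv, 2) array; links[k]: neighbors of k; var[(k, j)]: AoA variance."""
    Nv = len(positions)
    F = np.zeros((3 * Nv, 3 * Nv))
    for k in range(Nv):
        for j in links[k]:
            dx, dy = positions[j] - positions[k]
            d2 = dx * dx + dy * dy                    # squared distance d_{j,k}^2
            w = 1.0 / var[(k, j)]                     # information of one AoA measurement
            xk, yk, wk = k, Nv + k, 2 * Nv + k        # indices of x_k, y_k, omega_k
            F[xk, xk] += w * dy * dy / d2**2          # cf. (18)
            F[yk, yk] += w * dx * dx / d2**2          # cf. (19)
            F[xk, yk] += w * dx * dy / d2**2          # cf. (20), symmetric
            F[yk, xk] += w * dx * dy / d2**2
            F[xk, wk] += -w * dy / d2                 # cf. (21), symmetric
            F[wk, xk] += -w * dy / d2
            F[yk, wk] += -w * dx / d2                 # cf. (22)-(23), symmetric
            F[wk, yk] += -w * dx / d2
            F[wk, wk] += w                            # cf. (24)
    return F
```

Given \(F\), the position and orientation error bounds follow by inverting it and taking the traces of the corresponding sub-blocks, as formalized next.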
By exploiting the derived FIM, we can obtain the vehicle's position and orientation error bounds, denoted by \(\text{PEB}_{p}\) and \(\text{PEB}_{\omega}\), as follows:

\[\text{PEB}_{p}\geq\sqrt{\text{Tr}\left\{[F(\mathbf{\eta})^{-1}]_{1:2N_{v},1:2N_{v}}\right\}}, \tag{25}\]

\[\text{PEB}_{\omega}\geq\sqrt{\text{Tr}\left\{[F(\mathbf{\eta})^{-1}]_{2N_{v}+1:3N_{v},2N_{v}+1:3N_{v}}\right\}}, \tag{26}\]

where the operation \([\cdot]_{i:j,i:j}\) denotes the selection of the sub-matrix from the \(i\)-th to the \(j\)-th entries. It is worth noting that in the FIM derived in (18)-(24), the diagonal elements are associated with the AoA error variance \(\sigma_{n_{k,j}}^{2}\) and the specific values of the positioning parameters, such as the positions of and distances between vehicles. Additionally, we can infer that a smaller CRLB for the AoA leads to improved accuracy in estimating the position and orientation; thus, the lens-MIMO can show much better performance than the ULA, as elaborated in Section III-A.

## IV AoA Estimation in Lens-MIMO

We now investigate the characteristics of the received signal at the lens-MIMO and propose a lower-complexity AoA estimation scheme taking full advantage of the unique features of the lens-MIMO.

### _Characteristics of the received signal at lens-MIMO_

The essential characteristic of the received signal of the lens-MIMO is depicted in Fig. 1(b). More specifically, in the case of a single AoA, most of the energy arriving on the lens surface is focused on a few antennas. We will introduce the ratio of two adjacent antenna signals and demonstrate how the AoA may be expressed by those ratios, which is the main idea of the proposed lower-complexity AoA estimation technique.

Consider \(\theta_{k,j}\) in (1), the AoA of a single path, and let \(\theta_{n_{k,j}}\) be the critical angle in \(\mathcal{S}_{\theta_{n}}\) closest to it, whose antenna element collects the strongest received power. The AoA may then be expressed as \(\theta_{k,j}=\sin^{-1}\left(\frac{\lambda}{L}\left(n_{k,j}+e_{k,j}\right)\right)\), where \(e_{k,j}\) is the error between the angular sample \(\theta_{n_{k,j}}\) and the actual AoA \(\theta_{k,j}\). Hence, the amplitude of the received signal at the \(n\)-th antenna can be represented as

\[\mathbf{a}_{n}(\theta_{k,j})=\frac{L}{\sqrt{f}}\text{sinc}\left(n-(n_{k,j}+e_{k,j})\right),\]

where \(e_{k,j}\) lies in the range \(-0.5\leq e_{k,j}\leq 0.5\). The ratio of the signals at the \(n\)-th and \((n+1)\)-th elements, denoted by \(R_{n}(\theta_{k,j})=\mathbf{a}_{n}(\theta_{k,j})/\mathbf{a}_{n+1}(\theta_{k,j})\), can be reformulated by a trigonometric identity as

\[\begin{split} R_{n}(\theta_{k,j})&=\frac{\sin(\pi(n-n_{k,j}-e_{k,j}))}{\sin(\pi(n+1-n_{k,j}-e_{k,j}))}\,\frac{n+1-n_{k,j}-e_{k,j}}{n-n_{k,j}-e_{k,j}}\\ &=\frac{\cos(\pi(n-n_{k,j}))}{\cos(\pi(n+1-n_{k,j}))}\,\frac{n+1-n_{k,j}-e_{k,j}}{n-n_{k,j}-e_{k,j}}\\ &=-1-\frac{1}{n-n_{k,j}-e_{k,j}},\end{split} \tag{27}\]

which can be rearranged as

\[n-n_{k,j}-e_{k,j}=-\frac{1}{R_{n}(\theta_{k,j})+1}. \tag{28}\]

By summing (28) over all antenna indices, the AoA \(\theta_{k,j}\) can be represented through

\[n_{k,j}+e_{k,j}=\left[\frac{1}{N-1}\sum_{n=-\frac{N-1}{2}}^{\frac{N-3}{2}}\frac{1}{R_{n}(\theta_{k,j})+1}\right]-\frac{1}{2}. \tag{29}\]

It should be noted that an estimate of the AoA can be represented by a function of the ratios between the amplitudes of adjacent antennas. In the following section, we introduce an AoA estimation scheme inspired by (29).
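As a quick sanity check of the ratio identity, the snippet below (ours, with arbitrary test values) builds the noiseless amplitudes \(\text{sinc}(n-(n_{0}+e))\), applies (28) to each pair of adjacent elements, and recovers \(n_{0}+e\) via the average in (29).

```python
# Numeric sanity check of (27)-(29) in the noiseless case.
import numpy as np

N, n0, e = 11, 2, 0.37                       # arbitrary test values, |e| <= 0.5
n = np.arange(N) - (N - 1) // 2              # antenna indices -(N-1)/2 .. (N-1)/2
a = np.sinc(n - (n0 + e))                    # amplitudes up to a common scale

R = a[:-1] / a[1:]                           # R_n = a_n / a_{n+1}, cf. (27)
est = np.mean(1.0 / (R + 1.0)) - 0.5         # (29): estimate of n0 + e
print(est, n0 + e)                           # the two values agree to float precision
```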
### _A single target vehicle per V2X sub-channel_

Prior to localizing the vehicles, we need to estimate the AoA, as indicated by the AoA measurement model in (2). Furthermore, it is worth mentioning that the lens-MIMO system exhibits a peak-power feature: the received signal power arriving at the specific critical angles \(\mathcal{S}_{\theta_{n}}\) is concentrated on a single antenna element. When the AoA deviates from the critical angles, this peak-power feature fades away because the received power spreads out over the antenna elements, which is called the power leakage problem [32, 33]. The leakage is worst when the AoA falls midway between adjacent critical angles, which degrades the AoA estimation performance. To improve the AoA estimation while minimizing complexity in the presence of the power leakage problem, we propose an AoA estimation algorithm called R2SA (Ratio of the 2 Strongest received powers at the antenna elements). This algorithm leverages the ratio of the two strongest received powers at the lens-MIMO. Precisely, the algorithm first selects the antenna with the strongest received power and its adjacent antenna. The index of the strongest antenna provides a rough estimate of the AoA, and the received amplitude ratio of the two selected antennas is then used to refine this rough estimate. We assume V2V unicast communications as in the 3GPP V2X standardization, where each sub-channel is allocated to a different V2V link, so that the i.i.d. V2V channel assumption of Section III remains valid. Following this, the \(k\)-th vehicle is connected with the \(j\)-th target vehicle in \(\mathcal{V}_{k}\) in a sub-channel; it measures the amplitudes over the whole antenna array and determines the antenna element \(n_{k,j}^{*}\) with the strongest received power. Then, we have \[n_{k,j}^{*}=\operatorname*{arg\,max}_{n\in\mathcal{S}_{N}}\left|\mathbf{y}_{k,j}\right|, \tag{30}\] where \(\mathcal{S}_{N}=\{-\frac{N-1}{2},\ldots,\frac{N-1}{2}\}\) is the set of antenna indices. By setting \(n_{k,j}\) to \(n_{k,j}^{*}\) in (28), we can estimate the error term \(e_{k,j}\). Next, we propose the R2SA for the AoA estimation as follows: \[\hat{\theta}_{k,j}=\begin{cases}\sin^{-1}\left(\frac{\lambda}{L}(n_{k,j}^{*}+e_{k,j})\right),&\text{for}\,-1\leq\frac{\lambda}{L}(n_{k,j}^{*}+e_{k,j})\leq 1\\ \sin^{-1}\left(\frac{\lambda}{L}n_{k,j}^{*}\right),&\text{otherwise},\end{cases} \tag{31}\] where \(e_{k,j}=1/(R_{n_{k,j}^{*}}(\theta_{k,j})+1)\), computed from (28), refines the rough estimate of the AoA. It should be noted that (31) without the adjustment \(e_{k,j}\) is a rough estimate of the AoA, called maximum antenna selection (MS). The rough MS estimate is limited by the angular resolution, which is set by the total number of antennas. The proposed R2SA algorithm in (31) does not require a correlation process or exhaustive searching, exploiting only two adjacent antennas.
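A minimal sketch of the single-target estimator described above: maximum selection as in (30), the two-element ratio refinement of (28), and the fallback to the coarse MS estimate when the argument leaves \([-1,1]\), cf. (31). The response model, noise level, and test angle are illustrative assumptions, and the strongest element is assumed not to sit at the array edge.

```python
import numpy as np

def lens_response(theta, N, L_over_lam):
    """Idealised lens-MIMO amplitude pattern: a sinc centred at
    (L/lambda)*sin(theta) on the centred antenna grid, cf. (5) and (26)."""
    n = np.arange(N) - (N - 1) // 2
    return np.sinc(n - L_over_lam * np.sin(theta))

def r2sa_single(y, N, L_over_lam):
    """R2SA: maximum selection (30) plus ratio refinement, cf. (31); assumes
    the strongest element is not at the array edge."""
    n = np.arange(N) - (N - 1) // 2
    idx = int(np.argmax(np.abs(y)))
    R = y[idx] / y[idx + 1]                 # ratio of the two selected elements
    e = 1.0 / (R + 1.0)                     # offset from (28) with n = n*
    u = (n[idx] + e) / L_over_lam           # candidate sin(theta)
    if not -1.0 <= u <= 1.0:                # fall back to the coarse MS estimate
        u = n[idx] / L_over_lam
    return float(np.arcsin(u))

rng = np.random.default_rng(0)
N, LoL = 31, (31 - 1) / 2                   # full +/-90 deg coverage: L = 15*lambda
theta = np.deg2rad(23.7)                    # illustrative true AoA
y = lens_response(theta, N, LoL) + 0.01 * rng.standard_normal(N)
print(np.rad2deg(r2sa_single(y, N, LoL)))   # close to 23.7
```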
### _Multiple target vehicles per V2X sub-channel_

We now assume that the \(k\)-th vehicle can be simultaneously connected with multiple vehicles within the communication range in the same sub-channel to improve spectral efficiency and latency. It is then possible to assume that the \(k\)-th vehicle can receive positioning reference signals from multiple target vehicles for localization purposes. However, an issue arises regarding the scheduling of simultaneous connections of multiple target vehicles on the same sub-channel. Here, we expect collisions to be most likely at the front and rear of the subject vehicle within its proximity region; hence, when moving into the intersection, we can define high-priority vehicles as those moving toward us in the opposite lane or on the crossroads. The specific implementation of scheduling, however, is beyond the scope of this paper. Specifically, we will look into the performance of the R2SA for multi-AoA estimation in the V2V environment, where the \(k\)-th vehicle is assumed to be capable of suppressing the interference of multiple vehicles through the successive interference cancellation (SIC) method. Since signals with different incident angles are focused on different focal points at the antenna array, they tend to be physically separated over different antenna elements. As a result, inter-path interference is suppressed, which helps the SIC processing further improve the multi-AoA estimation. Further, to estimate all the AoAs \(\theta_{k,j}\,\forall j\in\mathcal{V}_{k}\) in a sub-channel, the antenna index of the strongest received power at the \(k\)-th vehicle is first selected as in (30), and the selected antenna is used to estimate the first AoA with the single-AoA estimation scheme in (31). Next, the SIC subtracts the estimated strongest path from the received signal. Subsequently, we select the antenna index of the strongest received power at each SIC step and run the R2SA. The SIC continues until the AoA estimation of all the target vehicles is finished. For each iteration of the SIC, the signal for estimating the \(j\)-th AoA is defined as follows: \[\tilde{\mathbf{y}}_{k}^{j}=\tilde{\mathbf{y}}_{k}^{j-1}-\tilde{\mathbf{y}}_{k}^{j-1}\circ\mathbf{\bar{a}}(\hat{\theta}_{k,j-1}), \tag{32}\] where \([\mathbf{\bar{a}}(\hat{\theta}_{k,j-1})]_{n}=\frac{L}{\sqrt{J}}\text{sinc}\left(\frac{L}{\lambda}(\sin\theta_{n}-\sin\hat{\theta}_{k,j-1})\right)\) is the known pattern of the lens-MIMO's received signal at the estimated AoA \(\hat{\theta}_{k,j-1}\) in (5). The signal in (32) is the input to the next SIC iteration. Based on (32), we select the antenna index of the strongest received power for the \(j\)-th AoA estimation. This way, each AoA from a different vehicle is estimated sequentially at every stage of the SIC, and the process is repeated until all the AoAs have been estimated. Algorithm 1 summarizes the process. ``` Initialize: received signal at the \(k\)-th vehicle, \(\tilde{\mathbf{y}}_{k}^{1}=\mathbf{y}_{k}\). for\(j\gets 1\)to\(|\mathcal{V}_{k}|\)do Select the antenna index \(n_{k,j}^{*}\) using (30). Stage 1Ratio calculation Measure \(\left|[\tilde{\mathbf{y}}_{k}^{j}]_{n_{k,j}^{*}}\right|\) and \(\left|[\tilde{\mathbf{y}}_{k}^{j}]_{(n_{k,j}^{*}+1)}\right|\). Calculate \(R_{n_{k,j}^{*}}\) using (27) and obtain \(e_{k,j}=1/\left(R_{n_{k,j}^{*}}+1\right)\). Stage 2Fine-tuning of the AoA if\(-1\leq\frac{\lambda}{L}(n_{k,j}^{*}+e_{k,j})\leq 1\)then \(\hat{\theta}_{k,j}=\sin^{-1}\left(\frac{\lambda}{L}\left(n_{k,j}^{*}+e_{k,j}\right)\right)\), else \(\hat{\theta}_{k,j}=\sin^{-1}\left(\frac{\lambda}{L}n_{k,j}^{*}\right)\). end if Calculate the SIC signal \(\tilde{\mathbf{y}}_{k}^{j+1}\) using (32). end for Output:\(\hat{\theta}_{k}=[\hat{\theta}_{k,1},\ldots,\hat{\theta}_{k,J}]\). ``` **Algorithm 1**AoA estimation algorithm
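Continuing the previous sketch (reusing `lens_response`, `r2sa_single`, `rng`, `N`, and `LoL`), the loop below mirrors Algorithm 1 for multiple targets. For the cancellation step it substitutes a least-squares-scaled subtraction of the known sinc pattern, a simple stand-in for the element-wise form in (32) rather than a faithful reproduction of it.

```python
import numpy as np
# reuses lens_response, r2sa_single, rng, N, LoL from the previous sketch

def r2sa_sic(y, K, N, L_over_lam):
    """Sequential multi-target estimation in the spirit of Algorithm 1: run the
    single-path R2SA, cancel the estimated path, repeat K times. The SIC step
    subtracts a least-squares-scaled copy of the known sinc pattern (a simple
    stand-in for the element-wise cancellation in (32))."""
    est, resid = [], np.array(y, dtype=float)
    for _ in range(K):
        th = r2sa_single(resid, N, L_over_lam)     # Stages 1-2 of Algorithm 1
        a = lens_response(th, N, L_over_lam)
        resid = resid - (a @ resid) / (a @ a) * a  # cancellation, cf. (32)
        est.append(th)
    return np.rad2deg(est)

y2 = (lens_response(np.deg2rad(23.7), N, LoL)
      + 0.6 * lens_response(np.deg2rad(-41.0), N, LoL)
      + 0.01 * rng.standard_normal(N))
print(r2sa_sic(y2, K=2, N=N, L_over_lam=LoL))      # ~[23.7, -41.0]
```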
### _Analysis of R2SA_

#### IV-D1 Performance

We first evaluate the R2SA for different numbers of target AoAs and compare it with its CRLBs as the signal-to-noise ratio (SNR) increases. Fig. 3 shows the error variances of the R2SA estimate with and without SIC for three cases: (a) a single target vehicle per sub-channel, (b) two paths of two target vehicles in the same sub-channel that are angularly separated by more than the lens-MIMO resolution, and (c) two paths in the same sub-channel within the lens-MIMO resolution, where the lens's resolution \(\frac{1}{N}\) equals \(\frac{1}{2L+1}\) and the number of antennas is set to 31.

Fig. 3: Error variance versus SNR for a comparison of different scenarios.

The difference between (b) and (c) is whether the two paths are so close that they end up with the same antenna index of the strongest received power. For single-AoA estimation in (a), we can see that the difference between the lens's CRLB in (5) and its derived upper bound in (A.5) is about 2.5 dB at the same performance. Additionally, R2SA converges to the upper bound of the lens-MIMO at about 7.5 dB, while MS in (30) is limited by the lens-MIMO resolution. The R2SA with SIC in (b) further improves the performance by \(7.5\) dB at an error variance of \(10^{-4}\). Its SIC gain in (c), where two paths focus on the same antenna, is most pronounced at an error variance of \(2\times 10^{-4}\), because the ratio in (27) is then determined by the same antenna index, which results in a great deal of interference for the R2SA without SIC. It should be noted that the R2SA without SIC in (b) is better than that in (c) because the inherent capability of the lens-MIMO to focus energy at different focal points allows for the physical suppression of interference between two paths with different AoAs. However, in both cases (b) and (c), an error floor is observed in the high-SNR region due to the residual interference of the SIC, while case (c) shows an earlier error floor due to the lack of physical separation of the two paths' energies over the array elements. To overcome the error floor, a larger number of antenna elements is required, which will be addressed in Section VI. In the use case of the street intersection depicted in Fig. 1, we expect the (b) case to happen more often than the (c) case if the V2X MAC scheduler connects, with higher priority, vehicles on the opposite lane and the crossroad lane. Thus, we statistically evaluate the probability that the difference between two angles is greater than the resolution of the lens-MIMO, where vehicles are dropped on the road as a Poisson point process (PPP) with different vehicle densities (i.e., the expected number of vehicles per unit road length). For computational convenience, we introduce a sign function \(\text{sgn}(x)\), which equals 1 when \(x\geq 0\) and 0 otherwise. In addition, we define the angle difference as \(\theta_{k}^{j,i}=|\theta_{k,j}-\theta_{k,i}|\); then the probability that the AoAs of two paths are separated by more than \(1/N\) is defined as \[p_{\text{sep}}=\mathbb{E}_{\mathbf{\theta}}\left[\text{sgn}\left(\theta_{k}^{j,i}-\frac{1}{N}\right)\right], \tag{33}\] where \(\mathbf{\theta}\) consists of all differences in each AoA pair.
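The separation probability (33) is straightforward to estimate by Monte Carlo. The sketch below drops vehicles on a single straight lane as a PPP and tests whether two randomly chosen AoAs at a subject vehicle differ by more than \(1/N\); the lane geometry is heavily simplified relative to the intersection of Fig. 1, so the numbers are only indicative of the trend in Table I below, not a reproduction of it.

```python
import numpy as np

def p_sep_mc(N, density_per_km, road_len_m=100.0, trials=20000, seed=0):
    """Monte-Carlo sketch of (33) on a simplified single-lane geometry."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        n_veh = 2 + rng.poisson(density_per_km * road_len_m / 1000.0)
        x = rng.uniform(-road_len_m / 2, road_len_m / 2, n_veh)  # along-lane positions
        th = np.arctan2(x, 5.0)               # AoAs at a subject 5 m off the lane
        i, j = rng.choice(n_veh, 2, replace=False)
        hits += abs(th[i] - th[j]) > 1.0 / N
    return hits / trials

for N in (15, 31, 61):
    print(N, p_sep_mc(N, density_per_km=10))
```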
\begin{table} \begin{tabular}{l|l l l} \hline Vehicle density & \multicolumn{3}{c}{The number of antennas, \(N\)} \\ (cars/km/lane) & 15 & 31 & 61 \\ \hline \hline 10 & 0.7513 & 0.8749 & 0.9355 \\ \hline 20 & 0.7493 & 0.8650 & 0.9394 \\ \hline 40 & 0.7311 & 0.8544 & 0.9251 \\ \hline \end{tabular} \end{table} Table I: Separation probability, \(p_{\text{sep}}\)

Table I shows the separation probability as the density of vehicles increases from 10 to 40 cars/km/lane; \(p_{\text{sep}}\) increases markedly with the number of antennas. It should be noted that \(p_{\text{sep}}\) depends more on the resolution of the array than on the vehicle density. Based on Table I, the (b) channel scenario occurs about 90\(\%\) of the time and (c) about 10\(\%\). In channel (b), much of the received energy of the two paths is likely to be physically separated at the antenna array by the energy-focusing property of the lens-MIMO. Thus, the inter-path interference prior to the SIC is effectively suppressed, which improves the performance. Additionally, we can see that the R2SA is a feasible solution for AoA estimation in the considered scenario with a practical number of antennas.

#### IV-D2 Computational complexity

A key advantage of the R2SA is its lower computational complexity compared with conventional schemes. Specifically, the total number of multiplication operations for the R2SA with SIC in Algorithm 1 is \(T_{\text{R2SA}}=K(2N+2)\). This complexity comes from the steps of Algorithm 1: finding the antenna index of the strongest received power in an \(N\times 1\) vector (\(N\)); calculating the ratio of two adjacent antennas and the error term (\(2\)); computing the SIC signal via the element-wise product of two \(N\times 1\) vectors (\(N\)); and repeating these steps over the \(K\) SIC iterations. In contrast, conventional methods like MUltiple SIgnal Classification (MUSIC) and maximum likelihood (ML) require matrix multiplications for the correlation matrix and eigenvalue decomposition. They also involve additional computations for evaluating the pseudo-spectrum and performing exhaustive searching in a preset dictionary [18, 34]. In the big-\(\mathcal{O}\) sense, the MUSIC and ML methods require approximately \(\mathcal{O}(d_{\text{dic}}N^{4})\) and \(\mathcal{O}(d_{\text{dic}}N^{3})\) multiplications, respectively, where \(d_{\text{dic}}\) represents the size of the pre-defined dictionary. The complexity of the R2SA is thus significantly lower than that of conventional methods, and the limited computing resources of vehicles make the R2SA method all the more advantageous compared with previous estimation schemes in V2V communications.

## V AoA-based Localization

In this section, we first explore maximum likelihood (ML) localization and then propose an alternative method that solves the sensing equations (SEs) of the AoA-based geometric function for relative localization in the considered intersection scenario. Regarding the position \(\mathbf{p}\) and orientation \(\mathbf{\omega}\) vectors, we focus on relative localization using the AoA estimates \(\hat{\theta}_{k,j}\,\forall k,j\in\mathcal{V}\), where the probability density function (pdf) of the AoA measurement model is known, as given in (13), while the variance of the estimated AoA at each vehicle is unknown.
Further, applying Bayes' theorem to (13) and dropping the constant terms, which correspond to the prior and marginal probabilities of the AoA and localization parameters, respectively, the conditional joint density function of \(\mathbf{p}\) and \(\mathbf{\omega}\) given \(\hat{\mathbf{\theta}}\) can be re-written as \[f(\mathbf{p},\mathbf{\omega}|\hat{\mathbf{\theta}})=\prod_{k\in\mathcal{V}}\prod_{j\in\mathcal{V}_{k}}f(\mathbf{p}_{k},\mathbf{p}_{j},\omega_{k,j}|\hat{\theta}_{k,j})=\prod_{k\in\mathcal{V}}\prod_{j\in\mathcal{V}_{k}}\frac{1}{\sqrt{2\pi\sigma_{\text{AoA}}^{2}(\hat{\theta}_{k,j})}}\text{exp}\Bigg{(}-\frac{(\alpha_{k,j}-\hat{\theta}_{k,j})^{2}}{2\sigma_{\text{AoA}}^{2}(\hat{\theta}_{k,j})}\Bigg{)}, \tag{34}\] where \(\sigma_{\text{AoA}}^{2}(\hat{\theta}_{k,j})\) is the CRLB at \(\hat{\theta}_{k,j}\) in (5). Eq. (34) holds when each target vehicle is assigned to a different sub-channel, such that all vehicle communication channels are independent. Following this, we localize the vehicles by maximizing the joint pdf as follows: \[[\hat{\mathbf{p}},\hat{\mathbf{\omega}}]=\operatorname*{arg\,max}_{\mathbf{p},\mathbf{\omega}}\sum_{k\in\mathcal{V}}\sum_{j\in\mathcal{V}_{k}}\text{log}f(\mathbf{p}_{k},\mathbf{p}_{j},\omega_{k,j}|\hat{\theta}_{k,j}). \tag{35}\] In practice, however, the variance of the estimated AoA in (34) is often unavailable when estimating the localization parameters. An alternative is therefore to solve a set of SEs, defined as \(\mathcal{L}_{k,j}=\Big{|}g(\mathbf{p}_{k},\mathbf{p}_{j},\omega_{k,j})-\hat{\theta}_{k,j}\Big{|}^{2}\,\forall k,j\in\mathcal{V}\), assuming all AoA estimates are accurate (i.e., \(\mathcal{L}_{k,j}=0\) for all \(k\) and \(j\)). Consequently, the localization parameters can be obtained as follows: \[[\hat{\mathbf{p}},\,\hat{\mathbf{\omega}}]=\operatorname*{arg\,min}_{\mathbf{p},\,\mathbf{\omega}}\sum_{k\in\mathcal{V}}\sum_{j\in\mathcal{V}_{k}}\Big{|}g(\mathbf{p}_{k},\mathbf{p}_{j},\omega_{k,j})-\hat{\theta}_{k,j}\Big{|}^{2}\,. \tag{36}\] The solution of (36) approaches that of ML asymptotically as the SNR increases [35]. As the SNR increases, \(\sigma_{\text{AoA}}^{2}\) decreases and each estimate \(\hat{\theta}_{k,j}\) gets close to the actual geometric function \(g(\mathbf{p}_{k},\mathbf{p}_{j},\omega_{k,j})\). Consequently, the SEs \(\mathcal{L}_{k,j}\,\forall k,j\) in (36) tend to approach zero. Thus, all of the \(\mathcal{L}_{k,j}\) are linearly dependent, since the sum of the SEs weighted with arbitrary non-zero values \(v_{k,j}\) can readily be shown to be zero (i.e., \(\sum_{k\in\mathcal{V}}\sum_{j\in\mathcal{V}_{k}}v_{k,j}\mathcal{L}_{k,j}=0\) for any non-zero \(v_{k,j}\)). Hence, the set of sensing equations is not over-determined and possesses a unique solution. Moreover, in the fully-connected vehicular channel case, we have the \(3N_{v}\) unknown parameters (\(x_{k}\), \(y_{k}\), and \(\omega_{k,j}\,\forall k\)) and the
\(N_{v}(N_{v}-1)\) equations following \(\tan\left(\hat{\theta}_{k,j}+\omega_{k,j}\right)=(y_{j}-y_{k})/(x_{j}-x_{k})\,\forall j,k\in\mathcal{V}\). To obtain a feasible solution of (36) without encountering an under-determined case, we then require \(N_{v}(N_{v}-1)\geq 3N_{v}\), which means that at least four vehicles are required, i.e., \(N_{v}\geq 4\). For the relative localization in (36), the localization parameters may be determined in a centralized manner, where each vehicle estimates the AoAs of its target vehicles and sends them to a mobile edge cloud (MEC) placed near the roadside unit (RSU). Due to the MEC's ability to process high-volume data in real time, as indicated in [36, 37], the MEC can estimate the position and orientation of all vehicles with negligible processing delay. Considering the millisecond network latency and the computing power of the MEC, the service latency would be kept below 10 milliseconds, which is significantly shorter than the time scale of variations in vehicle mobility, particularly for short-range V2V scenarios. Each vehicle then has relative localization knowledge about the target vehicles. If the \(k\)-th vehicle can acquire its absolute localization through its own sensing capability, the relative localization information can be directly transformed into absolute information. Fig. 4 shows the proposed localization method solving the SEs in (36) for the AoA estimates \(\hat{\theta}_{k,j}\,\forall j\) of the R2SA when SNR \(=5\) dB and \(N=61\). To visualize the solution obtained from (36) in a two-dimensional figure, we further assume that each vehicle is aware of its own location. Thus, the solution for the subject vehicle's position and orientation is equivalent to that of (36) under the feasible conditions where neither over-determination nor under-determination occurs. Fig. 4(a) shows the sum of the logarithmic SEs for each vehicle's position, where the dotted lines connecting each pair of vehicles represent the SEs \(\mathcal{L}_{k,j}\). We can observe points with infinitely negative values at the intersections of the equations \((g(\mathbf{p}_{k},\omega_{k,j}|\mathbf{p}_{j})-\hat{\theta}_{k,j})\,\forall j\in\mathcal{V}_{k}\). In Fig. 4(b), the points with the minimum squared error occur at the true orientation of each vehicle, where those values are the same for every connected vehicle because the AoA variance is ignored. It is noted that the solution of (36) is equivalent to that of ML localization and is adopted as a feasible solution for cooperative localization without over- or under-determination problems. It is also noted that the position and orientation can be jointly estimated by solving the sensing equations in (36), without relying on channel quality information or other measurements such as the time of arrival (ToA) and the phase difference of arrival (PDoA).

Fig. 4: Sum of sensing equations.

**Remark 1**.: _(Absence of line-of-sight (LoS) path) In the scenario of a street intersection, short-distance V2V communications often benefit from the availability of a LoS path [28, 29, 30]. However, it is possible for the LoS path to be obstructed by nearby vehicles. There have been papers addressing the challenge of localizing hidden vehicles, such as [16], where the positions of scatterers and hidden vehicles are jointly estimated under the assumption of sufficiently strong multipath signals. While this approach can be adapted for lens-MIMO systems, it comes with the drawback of increased computational complexity for the vehicles. Hence, we can explore alternative approaches that are more suitable and applicable in the intersection scenario, offering potential improvements over existing methods. In situations where the LoS channels of certain target vehicles are obstructed by nearby vehicles in close proximity, the signal power received over those channels experiences significant attenuation due to scattering and absorption losses. This attenuation of the received signal power provides a valuable indicator of whether the V2V link is blocked. As a result, we can exclude the sensing equations that involve channels with low signal power from (36). As long as the number of remaining sensing equations is larger than the number of unknown parameters (i.e., \(N_{v}(N_{v}-1)-N_{\text{disc}}\geq 3N_{v}\), where \(N_{\text{disc}}\) represents the number of discarded sensing equations), the localization method in (36) can still find a feasible solution. The specific scheme for eliminating blocked LoS channels is an interesting direction for future research._
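To make the sensing-equation formulation concrete, the sketch below solves (36) for a single subject vehicle whose neighbours' positions are treated as known, mirroring the simplification used for Fig. 4 (this also removes the gauge ambiguity of purely relative angle-only localization). One shared heading \(\omega\) per vehicle is assumed instead of a per-link \(\omega_{k,j}\), and all numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def se_residuals(params, anchors, aoa):
    """Sensing equations of (36) for one subject vehicle: the geometric model
    g = atan2(yj - y, xj - x) - w, compared with the measured AoAs."""
    x, y, w = params
    g = np.arctan2(anchors[:, 1] - y, anchors[:, 0] - x) - w
    return np.arctan2(np.sin(g - aoa), np.cos(g - aoa))   # wrap to (-pi, pi]

rng = np.random.default_rng(1)
anchors = rng.uniform(-25, 25, (5, 2))                    # known target vehicles
truth = np.array([3.0, -4.0, np.deg2rad(12.0)])           # subject (x, y, w)
aoa = (np.arctan2(anchors[:, 1] - truth[1], anchors[:, 0] - truth[0]) - truth[2]
       + np.deg2rad(0.2) * rng.standard_normal(5))        # noisy AoA measurements
sol = least_squares(se_residuals, x0=np.zeros(3), args=(anchors, aoa))
print(sol.x - truth)               # errors on the order of the AoA noise
```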
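For reference, a small sketch of this setup: a PPP vehicle drop on parallel lanes and the LoS path gain of (37). The paper specifies \(\zeta^{2}(d)\) only via [42], so an exponential attenuation with an assumed coefficient is used here as a placeholder.

```python
import numpy as np

LAM = 299_792_458.0 / 28e9                 # wavelength at 28 GHz (~10.7 mm)

def path_gain(d, kappa=0.0016):
    """LoS gain of (37): free-space term times a placeholder atmospheric
    attenuation zeta^2(d) = exp(-kappa*d); kappa is an assumed value."""
    return np.exp(-kappa * d) * (LAM / (4.0 * np.pi * d)) ** 2

def drop_vehicles(density_per_km, lane_len_m, n_lanes=2, seed=0):
    """PPP drop of vehicles on parallel 5 m lanes, as in the parameter setup."""
    rng = np.random.default_rng(seed)
    pts = []
    for lane in range(n_lanes):
        n = rng.poisson(density_per_km * lane_len_m / 1000.0)
        xs = rng.uniform(0.0, lane_len_m, n)
        pts += [(x, 2.5 + 5.0 * lane) for x in xs]
    return np.array(pts)

veh = drop_vehicles(density_per_km=10, lane_len_m=1000.0)
d = np.hypot(veh[0, 0] - veh[1:, 0], veh[0, 1] - veh[1:, 1])
print(np.round(10 * np.log10(path_gain(d)), 1))    # per-link gains in dB
```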
### _Simulation results_

We evaluate the proposed AoA-based localization algorithm in two cases: (a) single-target AoA estimation as in Section IV-B, and (b) multi-target AoA estimation as in Section IV-C. In case (a), we confirm that the proposed algorithm approaches the CRLBs of position and orientation, and compare it with the target requirements for 5G positioning services. In case (b), we analyze the feasibility of lens-MIMO-based localization for multi-target AoA estimation in the V2V street intersection scenario.

#### VI-B1 A single target in a sub-channel

We consider the unicast of each vehicle in a different sub-channel. To evaluate the performance of the proposed scheme, we adopt the root mean squared errors (RMSEs) of the position and orientation estimates, denoted by RMSE\({}_{p}\) and RMSE\({}_{\omega}\), respectively. These metrics are defined as follows: \[\text{RMSE}_{p}=\frac{1}{N_{v}}\sum_{k\in\mathcal{V}}\sqrt{||\mathbf{p}_{k}-\hat{\mathbf{p}}_{k}||^{2}}, \tag{38}\] \[\text{RMSE}_{\omega}=\frac{1}{N_{v}}\sum_{k\in\mathcal{V}}\sqrt{|\omega_{k,j}-\hat{\omega}_{k,j}|^{2}}. \tag{39}\] Fig. 5 presents the localization error variance as the SNR increases, and compares the proposed R2SA with existing methods such as MUSIC and ML estimation, which involve exhaustive searching within a dictionary range of \([-90^{\circ},90^{\circ}]\) with a resolution of \(0.1^{\circ}\), where the positions and orientations are estimated by (36). The numbers of antennas and vehicles are set to 121 and 4, respectively. The focal length is set to \(30\lambda\), which is the minimum focal length for the lens aperture \(L=60\lambda\) to satisfy the condition \(f\geq L/2\). The AoA CRLBs of the lens-MIMO and ULA in (5) and (4) are depicted as dashed red and greenish lines, respectively. The MUSIC method is represented by dotted black lines for the lens-MIMO and ULA, while the ML method is represented by dotted magenta lines. The proposed R2SA in (31) is indicated by the dashed blue line.

Fig. 5: Comparison of localization errors among different methods for AoA estimation.

Fig. 5 also verifies that the ML method for both position and orientation approaches the derived CRLBs of position and orientation in (24) and (25), which supports the validity of the derived CRLBs. In the low-SNR region (\(\text{SNR}\leq-4\) dB), the performance of all estimators utilizing the lens-MIMO is adversely affected, since its received peak-power feature at the array is no longer visible in the strong noise. However, as the SNR increases, the lens-MIMO surpasses the ULA in performance for both the MUSIC and ML methods. For SNR above \(-1\) dB, the proposed R2SA algorithm outperforms the CRLB of the ULA, and its performance gets close to that of the MUSIC method. From the computational complexity perspective discussed in Section IV-D, the R2SA requires approximately \(2\times 10^{2}\) multiplications (\(\mathcal{O}(2N)\)) per target vehicle. In contrast, the MUSIC and ML methods require approximately \(4\times 10^{11}\) (\(\mathcal{O}(d_{\text{dic}}N^{4})\)) and \(3\times 10^{9}\) (\(\mathcal{O}(d_{\text{dic}}N^{3})\)) multiplications, respectively, for each vehicle. Hence, the R2SA can be considered more suitable than the other methods for V2V systems with limited computing resources, as it offers real-time signal processing and low latency.
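These operation counts are easy to reproduce, assuming \(N=121\), one target (\(K=1\)), and a \(0.1^{\circ}\) dictionary over \([-90^{\circ},90^{\circ}]\) (so \(d_{\text{dic}}=1801\)):

```python
N, K = 121, 1
d_dic = 1801                        # 0.1-degree grid over [-90, 90] degrees
print(f"R2SA : {K * (2 * N + 2):.1e} multiplications")   # ~2.4e2
print(f"MUSIC: {d_dic * N**4:.1e}")                      # ~3.9e11
print(f"ML   : {d_dic * N**3:.1e}")                      # ~3.2e9
```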
In Fig. 6, we compare the CRLBs of the localization parameters (position \(\mathbf{p}\) and orientation \(\boldsymbol{\omega}\)) for the lens-MIMO and ULA as the SNR increases, with the same simulation parameters as in Fig. 5. The dashed red and dotted greenish lines are the CRLBs of the lens-MIMO and ULA, respectively. The dotted black lines are the derived upper and lower bounds, and the curves labelled R2SA and MS show the localization performance with the AoA estimates of (31) and (30), respectively. We also compare these performances with the positioning requirements of 0.2 m position and \(2^{\circ}\) orientation accuracy at 95\(\%\) confidence, which are shown as horizontal black dash-dotted lines.

Fig. 6: Comparison of localization errors among different derived CRLBs and target requirements.

Fig. 6 verifies that the position and orientation CRLBs of the lens-MIMO in (24) and (25) are always superior to the CRLBs of the conventional ULA by about 13 dB, as shown in (11). The performance of both position and orientation with the R2SA gets close to the upper bound of the lens-MIMO's CRLB within a marginal gap of about 1.5 dB as the SNR increases. The R2SA also satisfies the target requirements of position and orientation at 5 dB, and its performance is about 10 dB better than the ULA's CRLB, despite its lower complexity. Meanwhile, the CRLBs of the ULA satisfy the requirements at 15 dB. The MS scheme estimates the AoA by quantizing the incoming directions to those represented by the antenna elements; it thus acts as a spatial quantizer for the AoA and consequently exhibits an error floor in the high-SNR region due to the spatial quantization error. It can be noted that the proposed R2SA is feasible for 5G positioning services, particularly in the SNR region above 5 dB, which can be reliably achieved in mmWave-based V2V communication systems [28, 43]. Fig. 7 compares the same performance curves as Fig. 6 as the number of antennas increases, where the SNR is 10 dB and the focal length is the same. For a mmWave lens-MIMO, the number of antennas would normally be large; we therefore consider up to 60 antennas, which can be fit into normal-size vehicles, as demonstrated in previous validation studies [32, 44].

Fig. 7: Localization error as the number of antennas increases.

The CRLBs of the lens-MIMO and ULA are close for a small number of antennas, such as \(N=13\), but diverge as the number of antennas increases, as shown in Fig. 7(a) and (b). This is because a larger lens-MIMO can receive more energy and achieve a higher spatial sampling resolution. In contrast, when the lens-MIMO has fewer than 13 antenna elements, the spatial energy cannot be sufficiently collected for all directions, since the lens-MIMO can collect only as many spatial samples as the number of antennas. With more than 20 antennas, the position and orientation accuracy of the R2SA is better than the CRLBs of the ULA, but does not converge to the lens-MIMO's CRLB. This loss is first due to the fact that the R2SA uses only the ratio of two adjacent antennas, and second because the sensing equations ignore the measurement noise variance. Notwithstanding, the position accuracy with the R2SA satisfies the target accuracy requirement, and its orientation estimate achieves more than a 90\(\%\) confidence level at \(N=61\), which can be further enhanced by a larger number of vehicles involved in the cooperative localization. It is important to note that the proposed R2SA method approaches the target requirements for position and orientation estimation while maintaining a lower complexity compared to utilizing the entire antenna array.
Fig. 8 compares the same set of performance curves as Fig. 7 for larger numbers of vehicles participating in the localization, where \(N\) is 31 and the SNR is 10 dB. To increase the total number of vehicle pairs, we raise the average number of vehicles (the mean of the PPP), resulting in a higher vehicle density; each vehicle is then sequentially connected to its adjacent vehicles. The results confirm that the position and orientation performance improves with the number of vehicles. The R2SA satisfies the target requirements of position and orientation for vehicle densities of 10 and 8, respectively, whereas the CRLBs of the ULA fail to meet the demands even when many vehicles participate in the cooperative localization. This shows that the R2SA can satisfy the requirements even with a smaller number of vehicles.

Fig. 8: Localization error as the number of vehicles increases.

Fig. 8(a) and (b) verify that the CRLBs of the lens-MIMO lie between the derived upper and lower bounds in (A.5); the differences between the CRLBs and the upper bounds in the number of vehicles required to meet the target demands are insignificant, about 1.5 vehicles for position and 0.8 vehicles for orientation. The R2SA requires 3 and 2 more vehicles than the CRLB of the lens-MIMO for the position and orientation accuracy, respectively. In the single-target case, Figs. 6-8 show that the accuracy of the AoA estimation is critical to the localization performance. From the perspective of the target requirements for the 5G positioning service, the position accuracy of the R2SA is robust to fewer antenna elements (Fig. 7), and its orientation accuracy is suitable for sparsely connected vehicles (Fig. 8).

#### VI-B2 Multiple vehicle connections in a sub-channel

Next, we consider multi-target AoA estimation in Fig. 1(a), where the vehicles in the street intersection multicast to each other in the same sub-channel simultaneously. Two or more vehicles are sequentially allocated to the sub-channel until the AoAs of all surrounding vehicles have been estimated. The vehicle density on each lane is assumed to be 10 cars/km/lane with a PPP. For multiple-vehicle localization, we explore the RMSE localization error \[\text{RMSE}_{l}=\frac{1}{N_{v}}\sum_{k\in\mathcal{V}}\sqrt{||\mathbf{\eta}_{k}-\hat{\mathbf{\eta}}_{k}||^{2}}, \tag{40}\] where \(\mathbf{\eta}_{k}=[x_{k},y_{k},\omega_{k,j}]\). Fig. 9 shows the localization error variance of the multi-AoA estimation, where eight vehicles participate in the localization and the SNR is 10 dB. The dashed and dotted bluish lines show the localization error variance for the R2SA with and without SIC, where we consider the cases in which the number of connected target vehicles among the eight vehicles in the same sub-channel is one, two, or four.

Fig. 9: Localization error versus the number of antennas for multi-target \(\in\{1,2,4\}\) vehicles in one sub-channel.

For any number of antennas, the localization error variances with SIC are reduced to approximately 58\(\%\) and 77\(\%\) of those without SIC for two and four target vehicles, respectively. For two target vehicles, it is worth noting that the multi-AoA estimation for the R2SA with SIC approaches the CRLB of a single AoA for the ULA.
However, when the number of connected target vehicles is four and \(\text{SNR}=10\) dB, the proposed localization with the R2SA and SIC cannot meet the required mean squared error (i.e., \(10^{-2}\) for position and \(3\times 10^{-4}\) for orientation), even for a very large number of antennas. Considering the mean squared error requirements, we statistically evaluate the feasibility of the R2SA with SIC in terms of the outage probability that either the position or the orientation requirement is not met for a given target. It is defined as \[p_{\text{out}}=\mathbb{E}_{\mathbf{\eta}}\bigg{[}\text{sgn}\left(\sqrt{||[\mathbf{\eta}_{k}]_{1:2}-[\hat{\mathbf{\eta}}_{k}]_{1:2}||^{2}}-\gamma_{p}\right)\oplus\text{sgn}\left(\sqrt{|[\mathbf{\eta}_{k}]_{3}-[\hat{\mathbf{\eta}}_{k}]_{3}|^{2}}-\gamma_{\omega}\right)\bigg{]}, \tag{41}\] where \(A\oplus B\) is a Boolean addition of \(A\) and \(B\), which is zero only when both \(A\) and \(B\) are zero; \(\gamma_{p}\) and \(\gamma_{\omega}\) are the target thresholds of position and orientation, respectively, determined by \(1.96\sigma_{p}\) and \(1.96\sigma_{\omega}\) with accuracies \(\sigma_{p}=20\) cm and \(\sigma_{\omega}=2^{\circ}\). The sign function \(\text{sgn}(x)\) equals 1 when \(x\geq 0\) and 0 otherwise. To assess whether these requirements can be satisfied, Fig. 10 investigates the proposed localization method with the R2SA in massive lens-MIMO systems with \(N=121\) and \(N=161\). We consider different numbers of target vehicles in the same sub-channel and varying vehicle densities per lane, while keeping the parameters of the intersection scenario fixed.

Fig. 10: Outage probability versus SNR for multi-target \(\in\{1,2,4\}\) vehicles and vehicle density \(\in\{10,20\}\) cars/km/lane with massive lens-MIMO.

Fig. 10 shows that the R2SA with and without SIC reach the same outage probability in the high-SNR region, owing to the ability of the massive lens-MIMO to effectively suppress inter-path interference through energy focusing, as discussed in Section IV-D. We also observe that the performance difference between vehicle densities of 10 and 20 cars/km/lane is negligible. The R2SA with SIC for two target vehicles shows an outage probability floor in Fig. 10(a) for \(N=121\). Meanwhile, it achieves the 5\(\%\) outage probability for 20 cm and \(2^{\circ}\) at \(\text{SNR}=10\) dB in Fig. 10(b) for ultra-dense antennas (\(N=161\)), where the SIC gain is about 20 dB. On the other hand, for the R2SA with four target vehicles in the same sub-channel, it is difficult to maintain the 95\(\%\) confidence level in the street intersection scenario. We remark that the localization with the R2SA is affected by the number of vehicles allocated to one sub-channel and the number of antennas, rather than by the density of vehicles. In Fig. 11, unlike the massive-MIMO setting, we consider more practical lens-MIMO arrays of 31 and 61 antennas in the 28 GHz mmWave spectrum, where the form factor of the lens-MIMO (aperture \(\times\) focal length) is \(15\times 7.5\) cm and \(30\times 15\) cm, respectively. We assume that lens-MIMO arrays of these form factors can be mounted on the front and rear bumpers of vehicles, whose length and width are 5 m and 2 m, respectively, as depicted in [40], because collisions are more likely to occur at the front and back.
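The outage criterion (41) can be evaluated directly from error samples. In the sketch below the position and orientation errors are drawn from stand-in Gaussian distributions purely to exercise the definition; in the paper they would come from R2SA(+SIC) localization runs.

```python
import numpy as np

def outage(err_pos, err_ori, sigma_p=0.20, sigma_o=np.deg2rad(2.0)):
    """Empirical form of (41): a sample is in outage when either the position
    error exceeds gamma_p = 1.96*sigma_p or the orientation error exceeds
    gamma_omega = 1.96*sigma_o (the 95%-confidence thresholds)."""
    bad = (err_pos > 1.96 * sigma_p) | (err_ori > 1.96 * sigma_o)
    return float(bad.mean())

rng = np.random.default_rng(2)
# stand-in error samples; assumed Gaussian as in the confidence analysis
err_pos = np.abs(rng.normal(0.0, 0.12, 100_000))             # metres
err_ori = np.abs(rng.normal(0.0, np.deg2rad(1.5), 100_000))  # radians
print(outage(err_pos, err_ori))      # ~0.01, within the 5% outage target
```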
Further, to exploit the fundamental capability of the lens-MIMO, which has better AoA estimation performance at boresight, we limit the received angular view to \(20^{\circ}\), \(40^{\circ}\), and \(60^{\circ}\). Fig. 11 presents a comparison of the performance of the R2SA with SIC for different angular views and illustrates the RMSEs of the estimated localization parameters, as defined in (40). The performance is shown for two scenarios: (a) a smaller number of antennas (\(N=31\)), and (b) a larger number of antennas (\(N=61\)). The curves labelled R2SA and MS correspond to a single target in a sub-channel, and the black lines are the R2SA for multiple targets in a sub-channel with different view angles, where the total number of vehicles is 8 and the number of target AoAs in a sub-channel is 2. Fig. 11(a) illustrates that the SE-based localization with the R2SA achieves the best performance with the \(20^{\circ}\) view in the SNR range below 5 dB, but its accuracy shows an error floor as the SNR increases. This is because the two received spatial signals are more likely to be focused on a single antenna, making it hard to separate them even with the SIC processing. The R2SA with a \(60^{\circ}\) view angle shows the best performance in the high-SNR region (SNR \(\geq 0\) dB). Meanwhile, in Fig. 11(b), with more antenna elements, the R2SA with a \(20^{\circ}\) view angle and SIC converges to the single-target R2SA at SNR \(=5\) dB, where the performance of all three angular views is within a 3 dB difference at a localization error variance of \(10^{-3}\). As shown in the results of Section VI-B1, where the R2SA for a single target satisfied the target requirements (i.e., 0.2 m for position and \(2^{\circ}\) for orientation at the \(1.96\sigma\) confidence level), the R2SA with SIC and the larger number of antennas (\(N=61\)) guarantees the localization accuracy in V2V street intersection scenarios for any angle of view of the lens-MIMO in the SNR region above 10 dB. It is noted that the SIC-combined R2SA with a narrower view angle assures the requirements of the positioning services.

Fig. 11: Localization error versus SNR for an angle of view \(\in\{20,60,140\}\) degrees with a practical size of the array.

In the multiple-target case, Figs. 9-11 show that the R2SA with SIC, using a larger number of antennas, can offer precise V2V positioning services for two target vehicles in the street intersection. This can be achieved with significantly lower complexity compared to conventional methods in ULA systems, as discussed in Section IV-D.

## VII Conclusion

This paper first presented the theoretical limit of the localization of a lens-MIMO for cooperative vehicle-to-vehicle (V2V) communication in the street intersection, and compared the results with the conventional uniform linear array (ULA). For the lens-MIMO, we further investigated the characteristics of the received signal in order to estimate the angle of arrival (AoA) with low complexity. As a result, we proposed the R2SA AoA estimation scheme, which exploits the ratio of the two strongest received signals at the antenna elements. For the given AoA estimates, we presented the feasibility of a localization algorithm that solves the sensing equations relating the AoAs and the geometric model. Furthermore, we confirmed that the bounds of position and orientation are governed by the CRLB of the AoA, and showed that the localization performance of the lens-MIMO is better than that of the conventional ULA under specific conditions on the lens design parameters, such as the focal length and the lens aperture.
The simulation results verified that the localization performance using the R2SA approaches the derived upper bound of the lens-MIMO for a single target vehicle. Furthermore, the R2SA outperforms conventional methods such as MUSIC and ML in ULA systems, despite its lower complexity. It was also confirmed that the proposed localization method, which solves the sensing equations, satisfies the target requirements for the 5G positioning service. In the street intersection scenario with multiple target vehicles, we found that a lens-MIMO with a larger size and a narrower angle of view can effectively suppress multi-path interference. Hence, the localization and AoA estimation methods proposed for the street intersection demonstrate promising potential as a framework for 5G/B5G localization use cases that require higher accuracy and lower complexity. As future work, it is promising to extend the proposed method to accommodate multi-path and non-line-of-sight channel parameters in practical environments.

## VIII Appendix

### _Appendix A_

The first derivative of the \(\mathrm{sinc}\) function can be represented as \[\frac{d}{dx}\left(\frac{\sin\pi x}{\pi x}\right)=\begin{cases}0,&\text{for}\,x=0\\ \frac{\pi x\cos\left(\pi x\right)-\sin\left(\pi x\right)}{\pi x^{2}},&\text{otherwise.}\end{cases}\] (A.1) Suppose \(x=Z\), where \(Z\in\mathbb{Z}\) is an arbitrary integer. Then, \(\sin\pi Z=0\) and \[\frac{d}{dx}\left(\frac{\sin\pi x}{\pi x}\right)\Big{|}_{x=Z}=\begin{cases}0,&\text{for}\,Z=0\\ -\frac{1}{Z},&Z\text{ is odd}\\ \frac{1}{Z},&Z\text{ is even.}\end{cases}\] (A.2) Then, the upper and lower bounds of \(\mathbf{\mu}_{2}^{\mathsf{T}}\mathbf{\mu}_{2}\) are simply derived as follows: \[\sum_{\ell=1}^{\frac{N-1}{2}}\frac{1}{\ell^{2}}<1+\sum_{\ell=2}^{\frac{N-1}{2}}\frac{1}{\ell(\ell-1)}=2-\frac{2}{N-1}<2,\] (A.3) \[\mathbf{\mu}_{2}^{\mathsf{T}}\mathbf{\mu}_{2}\geq\sum_{\ell=1}^{\frac{N-1}{2}}\frac{1}{\ell^{2}}\geq 1.\] (A.4) By substituting the above inequalities into (10), the upper and lower bounds of the lens-MIMO's CRLB can be determined as \[\frac{f\lambda^{2}\sigma^{2}}{4L^{4}\cos^{2}\theta}<\text{CRLB}_{\text{Lens}}(\theta)<\frac{f\lambda^{2}\sigma^{2}}{2L^{4}\cos^{2}\theta}.\] (A.5) For a fair comparison with the CRLB of the ULA, let the number of antennas \(N\) in (4) be \(\left(2\frac{L}{\lambda}+1\right)\). Then, the CRLB of the ULA can be reformulated as a function of the lens aperture: \[\text{CRLB}_{\text{ULA}}(\theta)=\frac{3L^{2}\sigma^{2}}{L(2L+\lambda)(L+\lambda)d_{\text{ULA}}\cos^{2}\theta}.\] (A.6) Using (A.5) and (A.6), we can readily compare the bounds with and without a lens. Suppose that \(\text{CRLB}_{\text{Lens}}\leq\text{CRLB}_{\text{ULA}}\); this inequality may be expressed as a condition on the focal length of the lens. Considering the upper bound of the lens's CRLB in (A.5), it is given by \[f\leq\frac{12L^{3}}{(2L+1)(L+1)\lambda^{4}}.\] (A.7) This completes the proof of (11).
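The summation bounds underlying (A.5) are easy to verify numerically; the check below uses the corrected telescoping index (the sum \(\sum 1/(\ell(\ell-1))\) starting at \(\ell=2\)):

```python
import numpy as np

for N in (15, 31, 61, 121):
    M = (N - 1) // 2
    s = np.sum(1.0 / np.arange(1, M + 1) ** 2)   # the sum bounded in (A.3)-(A.4)
    upper = 2.0 - 2.0 / (N - 1)                  # telescoping bound of (A.3)
    assert 1.0 <= s < upper < 2.0
    print(N, round(float(s), 4), round(upper, 4))
```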
### _Appendix B_

This section derives the elements of the Fisher information matrix (FIM). The sub-block matrix \([F_{\mathbf{xx}}]_{i,j}\) of the FIM is written as \[[F_{\mathbf{xx}}]_{i,j}=\mathbb{E}\left[\frac{\partial\ln\mathbf{f}(\mathbf{\theta}|\mathbf{\eta})}{\partial x_{i}}\frac{\partial\ln\mathbf{f}(\mathbf{\theta}|\mathbf{\eta})}{\partial x_{j}}\right].\] (B.1) Note that \(\theta_{k,j}\) is independent of \(\theta_{k,i}\) for \(i\neq j\) under the mmWave assumption, so the entries are zero for \(i\neq j\). Using the equivalent second-derivative form of the FIM, we have \[[F_{\mathbf{xx}}]_{i,j}=\begin{cases}-\sum_{j\in\mathcal{V}_{k}}\mathbb{E}\left[\frac{\partial^{2}\ln\mathbf{f}(\theta_{k,j}|\mathbf{p},\omega)}{\partial x_{k}^{2}}\right],&\text{for}\,i=j,\\ 0,&\text{otherwise.}\end{cases}\] (B.2) Considering the second derivative, we can derive the diagonal entries of the FIM as follows: \[\frac{\partial^{2}\ln\mathbf{f}(\theta_{k,j}|\mathbf{p}_{k},\mathbf{p}_{j},\omega_{k,j})}{\partial x_{k}^{2}}\] (B.3) \[=\frac{1}{\sqrt{2\pi\sigma_{n_{k,j}}^{2}}}\frac{\partial}{\partial x_{k}}\left[\frac{\partial}{\partial x_{k}}\left(-\frac{(\theta_{k,j}-\alpha_{k,j})^{2}}{2\sigma_{n_{k,j}}^{2}}\right)\right]\] (B.4) \[=\frac{1}{\sqrt{2\pi\sigma_{n_{k,j}}^{2}}}\frac{\partial}{\partial x_{k}}\left[\left(\frac{\theta_{k,j}-\alpha_{k,j}}{\sigma_{n_{k,j}}^{2}}\right)\frac{\partial\alpha_{k,j}}{\partial x_{k}}\right]\] (B.5) \[=\frac{1}{\sqrt{2\pi\sigma_{n_{k,j}}^{2}}}\frac{\partial}{\partial x_{k}}\left[\left(\frac{\theta_{k,j}-\alpha_{k,j}}{\sigma_{n_{k,j}}^{2}}\right)\frac{(y_{j}-y_{k})}{d_{k,j}^{2}}\right]\] (B.6) \[=\mathbf{A}\left[-\frac{(y_{j}-y_{k})^{2}}{d_{k,j}^{4}}+(\theta_{k,j}-\alpha_{k,j})\frac{\partial}{\partial x_{k}}\frac{(y_{j}-y_{k})}{d_{j,k}^{2}}\right],\] (B.7) where \(\mathbf{A}=\frac{1}{\sigma_{n_{k,j}}^{2}\sqrt{2\pi\sigma_{n_{k,j}}^{2}}}\) is the constant term. In (B.5), the derivative of the geometric form \(\alpha_{k,j}\) is given by \[\begin{split}\frac{\partial\alpha_{k,j}}{\partial x_{k}}&=\frac{\partial}{\partial x_{k}}\left(\tan^{-1}\left(\frac{y_{j}-y_{k}}{x_{j}-x_{k}}\right)-\omega_{k,j}\right)\\ &=\frac{y_{j}-y_{k}}{(y_{j}-y_{k})^{2}+(x_{j}-x_{k})^{2}}=\frac{y_{j}-y_{k}}{d_{k,j}^{2}}.\end{split}\] (B.8) Since \(\mathbb{E}[\theta_{k,j}]=\alpha_{k,j}\), the second term in (B.7) vanishes under the expectation, and substituting (B.7) into (B.2) yields the \(k\)-th diagonal entry \[[F_{\mathbf{xx}}]_{k,k}=\sum_{j\in\mathcal{V}_{k}}\frac{1}{\sqrt{2\pi\sigma_{n_{k,j}}^{2}}}\frac{1}{\sigma_{n_{k,j}}^{2}}\frac{(y_{j}-y_{k})^{2}}{d_{k,j}^{4}}.\] (B.9) This completes the proof of (18).
2306.02754
PULSAR: Pre-training with Extracted Healthcare Terms for Summarising Patients' Problems and Data Augmentation with Black-box Large Language Models
Medical progress notes play a crucial role in documenting a patient's hospital journey, including his or her condition, treatment plan, and any updates for healthcare providers. Automatic summarisation of a patient's problems in the form of a problem list can aid stakeholders in understanding a patient's condition, reducing workload and cognitive bias. BioNLP 2023 Shared Task 1A focuses on generating a list of diagnoses and problems from the provider's progress notes during hospitalisation. In this paper, we introduce our proposed approach to this task, which integrates two complementary components. One component employs large language models (LLMs) for data augmentation; the other is an abstractive summarisation LLM with a novel pre-training objective for generating the patients' problems summarised as a list. Our approach was ranked second among all submissions to the shared task. The performance of our model on the development and test datasets shows that our approach is more robust on unknown data, with an improvement of up to 3.1 points over a baseline model of the same size.
Hao Li, Yuping Wu, Viktor Schlegel, Riza Batista-Navarro, Thanh-Tung Nguyen, Abhinav Ramesh Kashyap, Xiaojun Zeng, Daniel Beck, Stefan Winkler, Goran Nenadic
2023-06-05T10:17:50Z
http://arxiv.org/abs/2306.02754v1
PULSAR: Pre-training with Extracted Healthcare Terms for Summarising Patients' Problems and Data Augmentation with Black-box Large Language Models

###### Abstract

Medical progress notes play a crucial role in documenting a patient's hospital journey, including his or her condition, treatment plan, and any updates for healthcare providers. Automatic summarisation of a patient's problems in the form of a "problem list" can aid stakeholders in understanding a patient's condition, reducing workload and cognitive bias. BioNLP 2023 Shared Task 1A focuses on generating a list of diagnoses and problems from the provider's progress notes during hospitalisation. In this paper, we introduce our proposed approach to this task, which integrates two complementary components2. One component employs large language models (LLMs) for data augmentation; the other is an abstractive summarisation LLM with a novel pre-training objective for generating the patients' problems summarised as a list. Our approach was ranked second among all submissions to the shared task. The performance of our model on the development and test datasets shows that our approach is more robust on unknown data, with an improvement of up to 3.1 points over a baseline model of the same size.

Footnote 2: Corresponding email: [email protected]

## 1 Introduction

Medical progress notes are used to document a patient's course in a hospital, including their current condition, treatment plan, and any updates to the plan Li et al. (2022). Automated identification of treated problems from the assessment sections of a progress note, in the form of a "problem list", can help healthcare stakeholders to gain an accurate understanding of the patient's condition, reducing workload and cognitive bias Gao et al. (2022). This problem list is then used to outline and pursue a detailed treatment plan. The majority of studies on clinical summarisation have focused on clinical notes: radiology reports Zhang et al. (2018); MacAvaney et al. (2019); Gharebagh et al. (2020); Kondadadi et al. (2021); Dai et al. (2021) and progress notes Moen et al. (2016); Liang et al. (2019); Adams et al. (2021); Gao et al. (2022). In contrast, some studies have focused on dialogues Yim and Yetisgen-Yildiz (2021); Manas et al. (2021); Zhang et al. (2021). Recently, Gao et al. (2022) proposed the task of "progress note understanding", where the goal is to generate problem lists given the assessment sections of a progress note. They further explored the performance of T5 Raffel et al. (2020) and BART Kondadadi et al. (2021), based on pre-training tasks with masked healthcare concepts Gao et al. (2022). To draw further attention to this task, the BioNLP 2023 Shared Task 1A Gao et al. (2023) invited external participants to develop approaches to advance the state of the art on the proposed task. The main contribution of this work is a novel framework for data augmentation and summarisation of diagnoses/problems. In our approach, first, we optimise a domain-specific Language Model (LM) using a combination of different pre-training objectives, depicted in Figure 1; this model significantly outperforms the state of the art, even when optimised on a limited number of manually annotated progress notes. Second, we instruct Large Language Models (LLMs) to generate synthetic data, in order to reduce the reliance on large, high-quality annotated datasets.
Finally, we use the generated data to fine-tune the domain-specific LM on the task of problem list generation, given the appropriate progress note sections. Our approach ranked second among all submissions to the shared task without using additional annotated data. The results of our evaluation suggest that our pre-training objectives are aligned with the downstream task of summarisation and can significantly improve performance.

## 2 Methodology

Figure 1 shows the two components of our framework: first, we pre-train an encoder-decoder model on MIMIC-III progress notes Johnson et al. (2016) using three different concept-masking pre-training objectives. Then we employ data augmentation when fine-tuning our model for the summarisation task.

### Pre-training Model

The items on the problem list are not necessarily extracted verbatim from the original progress notes, and hence we cast the problem as abstractive summarisation. Drawing inspiration from PEGASUS Zhang et al. (2020), we used an objective which closely resembles the abstractive summarisation objective, to gain better and faster fine-tuning performance. Following the success obtained through masking words and contiguous spans Joshi et al. (2020); Raffel et al. (2020), we propose to select and mask text spans or whole sentences from input documents. We concatenate these "gap text spans (sentences)" into a pseudo-summary. Gap text spans were selected with QuickUMLS entity linking Soldaini and Goharian (2016) and an NER model trained on the i2b2-2010 challenge Uzuner et al. (2011). Similar to the T5 pre-training procedure Raffel et al. (2020), these text spans were replaced by "sentinel" mask tokens \(<extra\_id\_i>\) to inform the model that the input was masked. Here, \(i\) indicates the index of the mask (from left to right). The output sequence thus consists of the dropped-out text spans, delimited by the sentinel tokens, with the last \(<extra\_id\_i>\) marking the end of the output. Figure 2 illustrates our pre-training objective. Specifically, we considered three masking policies in our pre-training objective. For each sentence, when both tools identified entities, we selected UMLS terms with probability \(0.7\) and i2b2 terms with probability \(0.3\). When only one tool identified entities, those entities were selected. Finally, when no entities were identified, the entire sentence was masked with probability \(0.15\). In order to provide the model with the necessary medical knowledge and reduce domain barriers Pandey et al. (2022), we leverage all progress notes from MIMIC-III Johnson et al. (2016) to train Flan-T5 Chung et al. (2022) on this objective. The processed pre-training corpus had 2.08M rows of data, of which 2.2k contained no UMLS terms, 23k contained no i2b2 entities, and 797 contained no entities recognised by either tool.

Figure 1: Overview of PULSAR. The left component represents the pre-training process with three different mask policies depicted in different colours. Both Gap Sentences Generation (GSG) and Masked Language Modelling (MLM) are applied simultaneously to this example as pre-training objectives. The right component shows the workflow for data augmentation, where the three labels \(\{1,0.5,0\}\) represent the same thing, somewhat similar, and completely different topics, respectively. PT Instances and DA Instances stand for Pre-training Instances and Data Augmentation Instances, respectively.
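For concreteness, the snippet below shows how a single (input, target) pre-training pair can be built once spans have been selected; the span-selection step itself (QuickUMLS/i2b2 NER or whole-sentence masking) is assumed to have happened upstream, and the example note and character offsets are illustrative.

```python
def mask_with_sentinels(text, spans):
    """Build a T5-style (input, target) pair: each selected healthcare span is
    replaced by <extra_id_i> in the input, and the target lists the dropped
    spans delimited by the sentinels, ending with one final sentinel."""
    inp, tgt, last = [], [], 0
    for i, (start, end) in enumerate(spans):
        inp.append(text[last:start])
        inp.append(f"<extra_id_{i}>")
        tgt.append(f"<extra_id_{i}> {text[start:end]} ")
        last = end
    inp.append(text[last:])
    tgt.append(f"<extra_id_{len(spans)}>")
    return "".join(inp), "".join(tgt)

note = "Pt remains on CPAP overnight with occasional sat drifts to 88%."
spans = [(14, 18), (45, 55)]        # character offsets of "CPAP" and "sat drifts"
x, y = mask_with_sentinels(note, spans)
print(x)  # Pt remains on <extra_id_0> overnight with occasional <extra_id_1> to 88%.
print(y)  # <extra_id_0> CPAP <extra_id_1> sat drifts <extra_id_2>
```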
Figure 2: Our pre-training objective. The terms "CPAP" and "sat drifts" are identified by the NER models and each replaced by a unique sentinel token. The objective is to predict these masked-out spans.

### Data Augmentation (DA)

The lack of high-quality annotated data is a bottleneck that inhibits supervised learning methods in the healthcare field. For example, BioNLP Task 1A Gao et al. (2023) has only 764 annotated training examples. Therefore, we rely on data augmentation techniques to obtain more training samples. Specifically, we propose a novel healthcare data generation (DG) framework based on DINO Schick and Schutze (2021); Li et al. (2023), which exploits the generative abilities of LLMs by relying on instruction following rather than model training. Our instructions to the LLMs include task-specific descriptions (i.e., "_Write two sentences that mean the same thing but keep these two healthcare terms \([Term1],[Term2]\). Sentence 1: \([Source]\) Sentence 2: _") to make the model generate a paraphrase of \([Source]\), which is selected from the annotated training data. The instruction to keep terms aims to preserve the relevant terms from \([Source]\) which also appear in the problem list (i.e., the output). In addition, we expect the text generated by the LLM to fit well only its corresponding instruction, and not to be an equally reasonable output for other instructions. For example, in Figure 1 (where label \(\{1\}\) is the expected label in blue and label \(\{0\}\) is the counter label in red), the generated text is expected to have the same meaning as \([Source]\), but at the same time not to have a completely different meaning from \([Source]\). Following previous work, we employ the self-debiasing algorithm Schick et al. (2021) to achieve this objective: when predicting the next token, not only the probability under the corresponding label is considered, but the counter label is also taken into account. We then use BERTScore Zhang et al. (2020) and BLEURT Sellam et al. (2020) to assess the similarity between each generated sample and its source, removing the 85% lowest-scoring generated sentences. The backbone of the framework can be any generative LLM, such as the GPT-3.5, GPT-3 Brown et al. (2020) and GPT-2 Radford et al. (2019) series of models. Limited by the data use agreement, we used BioMedLM Bolton et al. (2022), an open-source GPT-style model pre-trained on biomedical abstracts and papers.

### Implementation Details

**Pre-training**: We choose FlanT5-3B and FlanT5-11B Chung et al. (2022) as our LMs. PULSAR-3B and PULSAR-11B are pre-trained on two and four NVIDIA Tesla A100 80GB GPUs, respectively, for 1 epoch6. During pre-training, we rely on Fully Sharded Data Parallel (FSDP) with CPU offloading Baines et al. (2021) to fit the LLMs into GPU memory.

Footnote 6: The official test set result for PULSAR-11B was obtained by fine-tuning after 0.33 of a pre-training epoch.
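A minimal sketch of the instruction-based generation with a DINO-style self-debiasing step is given below. It uses the public GPT-2 checkpoint as a stand-in for BioMedLM, greedy decoding, and one possible rendition of the debiasing penalty (penalising tokens whose counter-instruction likelihood exceeds their task likelihood); the instruction strings, penalty strength, and stopping rule are all assumptions for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                 # stand-in for BioMedLM
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

TASK = ('Write two sentences that mean the same thing but keep these two '
        'healthcare terms "{t1}", "{t2}". Sentence 1: "{src}" Sentence 2: "')
COUNTER = ('Write two sentences about completely different topics. '
           'Sentence 1: "{src}" Sentence 2: "')

def debiased_generate(src, t1, t2, max_new=30, alpha=10.0):
    """Greedy decoding with a self-debiasing penalty: tokens that the counter
    instruction predicts more strongly than the task instruction are
    down-weighted, discouraging off-topic continuations."""
    ids = tok(TASK.format(t1=t1, t2=t2, src=src), return_tensors="pt").input_ids
    cids = tok(COUNTER.format(src=src), return_tensors="pt").input_ids
    out = []
    for _ in range(max_new):
        with torch.no_grad():
            lp = model(ids).logits[0, -1].log_softmax(-1)   # task logits
            lc = model(cids).logits[0, -1].log_softmax(-1)  # counter logits
        nxt = int((lp - alpha * (lc - lp).clamp(min=0)).argmax())
        if tok.decode([nxt]).strip() == '"':                # closing quote ends it
            break
        new = torch.tensor([[nxt]])
        ids, cids = torch.cat([ids, new], 1), torch.cat([cids, new], 1)
        out.append(nxt)
    return tok.decode(out).strip()

print(debiased_generate("Patient admitted with acute dyspnea and chest pain.",
                        "dyspnea", "chest pain"))
```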
\begin{table} \begin{tabular}{l c c} \hline \hline & Dev set & Test set \\ \cline{2-3} Approach (Setting) & R-1/R-2/R-L & R-F1/R-P/R-R \\ \hline PULSAR\({}_{\text{3B}}(DA)\) & 36.27/16.78/33.83 & 30.48/38.02/29.72 \\ PULSAR\({}_{\text{11B}}(DA)\) & 35.92/15.87/33.14 & 31.15/44.90/32/8.78 \\ PULSAR\({}_{\text{3B}}\) & 33.60/13.70/13.32 & 31.14/44.50/27.18 \\ PULSAR\({}_{\text{11B}}\) & 33.38/13.14/30.63 & 30.44/42.68/27.12 \\ FlanT5\({}_{\text{11B}}(DA)\) & 32.57/13.07/29.98 & \\ FlanT5\({}_{\text{11B}}(DA)\) & 29.44/11.28/25.8 & 30.64/40.61/27.25 \\ FlanT5\({}_{\text{3B}}(DA)\) & 29.46/09.85/26.15 & 30.47/83.01/29.72 \\ ClinicalT5\({}_{\text{LARGE}}(DA)\) & 28.60/11.13/26.11 & 25.43/25.67/32.05 \\ FlanT5\({}_{\text{3B}}\) & 28.90/93.92/5.26 & 30.60/401.09/28.85 \\ PULSAR\({}_{\text{3B}}(-A)\) & 27.70/16.09/24.34 & 28.29/38.24/26.54 \\ ClinicalT5\({}_{\text{LARGE}}\) & 31.09/12.85/82.15 & 19.92/18.93/28.89 \\ FlanT5\({}_{\text{LARGE}}\) & 29.86/10.192/70.8 & \\ \hline \hline \end{tabular} \end{table}
Table 1: Performance of evaluated models on the development set, measured in terms of Rouge-1/2/LCS, and on the test set, measured in terms of Rouge-F1/Precision/Recall. The composition of the input is Assessment + Subjective + Objective, except where only the Assessment section was used, indicated by -A. DA means that data augmentation was employed. The Rouge-L score on the development set was used for the official ranking. Colours indicate the highest to lowest performance.

\begin{table} \begin{tabular}{l c} \hline \hline Approach (MaxLen) & R-1/R-2/R-L \\ \hline **Baselines** & \\ T5\({}_{\text{LARGE}}(512)\) & 29.901/10.81/28.21 \\ FlanT5\({}_{\text{BASE}}(512)\) & 27.16/8.9435/24.90 \\ ClinicalT5\({}_{\text{SCRATCH}}(512)\) & 26.68/9.51/23.94 \\ T5\({}_{\text{BASE}}(512)\) & 25.07/7.72/23.36 \\ FlanT5\({}_{\text{BASE}}(1024)\) & 25.51/7.96/23.07 \\ ClinicalT5\({}_{\text{BASE}}(512)\) & 22.27/7.61/20.49 \\ PEGASUS\({}_{\text{XSUM}}(512)\) & 22.39/6.86/20.36 \\ ClinicalT5\({}_{\text{BASE}}(1024)\) & 21.13/7.19/19.55 \\ ClinicalT5\({}_{\text{SCI}}(512)\) & 14.12/4.61/13.22 \\ \hline \hline \end{tabular} \end{table}
Table 2: Performance of baseline models on the development set, measured in terms of Rouge-1/2/LCS. The composition of the input is Assessment + Subjective + Objective. The same colour represents the same model with different input lengths.

**Data Augmentation**: We employ BioMedLM (Bolton et al., 2022) as the data augmentation model with default settings, setting the maximum output length to \(40\). Finally, the generated data are matched with the corresponding summaries and the subjective and objective sections to create a training set of \(1\)k instances. The DA model (Schick and Schütze, 2021) is run on a single NVIDIA Tesla V100 32G GPU, with each run taking up to twelve hours. Example templates and the full dataset description can be seen in Appendix A.
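The sketch below illustrates the generate-then-filter loop of the DA framework, under simplifying assumptions: a generic `transformers` text-generation pipeline (with `gpt2` standing in for BioMedLM), BERTScore alone for filtering (BLEURT and the self-debiasing step are omitted), and illustrative prompt wording and thresholds. None of the identifiers come from a released PULSAR implementation.

```python
# A hedged sketch of the DINO-style augmentation loop; gpt2 stands in for
# BioMedLM, and the self-debiasing step of Schick et al. (2021) is omitted.
from transformers import pipeline
from bert_score import score as bert_score

generator = pipeline("text-generation", model="gpt2")

def augment(source, term1, term2, n=8, keep_frac=0.15):
    prompt = (f"Write two sentences that mean the same thing but keep these "
              f"two healthcare terms {term1}, {term2}. "
              f"Sentence 1: \"{source}\" Sentence 2: \"")
    outs = generator(prompt, max_new_tokens=40, do_sample=True,
                     num_return_sequences=n)
    # keep only the continuation, cut at the closing quote if present
    cands = [o["generated_text"][len(prompt):].split('"')[0].strip()
             for o in outs]
    cands = [c for c in cands if c]
    # rank candidates by similarity to the source; drop the lowest-scoring 85%
    _, _, f1 = bert_score(cands, [source] * len(cands), lang="en")
    ranked = sorted(zip(cands, f1.tolist()), key=lambda p: -p[1])
    return [c for c, _ in ranked[:max(1, int(keep_frac * len(ranked)))]]

print(augment("Pt reports intermittent chest pain on exertion.",
              "chest pain", "exertion"))
```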
## 3 Experimental Setup

**Baselines**: We have chosen to adapt T5-base as one of our baselines, similar to the approach taken by Gao et al. (2022). Additionally, we have incorporated various state-of-the-art models such as FlanT5 (Chung et al., 2022), ClinicalT5 (Lehman and Johnson, 2023) and PEGASUS (Zhang et al., 2020). FlanT5 is an enhanced version of T5 that has been fine-tuned on a mixture of tasks (Chung et al., 2022), while ClinicalT5 was pre-trained on MIMIC-III (Johnson et al., 2016). PEGASUS is an abstractive summarisation model with Gap Sentences Generation and Masked Language Modelling (Devlin et al., 2019) as pre-training tasks.

**Evaluation metrics**: We calculate ROUGE (Lin, 2004) scores on the test set by comparing the generated summaries to their corresponding references, averaging over all generated samples. For all experiments, the data set was divided into a "train" and a "dev" set with a ratio of 8:2 for training and evaluation, respectively. The results are presented in Table 1, left column, and Table 2. Table 1, right column, shows the performance of the models on the official withheld test set; in this case, both the train and dev splits were used for training.

## 4 Results and Analysis

**Pre-training helps**: Both Table 1 and Table 2 demonstrate that the pre-training objective improves task performance (compare the 3B and 11B PULSAR models to the corresponding FlanT5 models). The best performance of PULSAR was 3.1 points higher than FlanT5-11B on the development set and 11.2 points higher than ClinicalT5-large on the official test set. The small difference in performance between PULSAR-11B and PULSAR-3B is primarily because the former completed only 1/3 of the first pre-training epoch, potentially resulting in a lack of relevant medical knowledge and familiarity with downstream task patterns.

**Data augmentation is effective when the data distribution is consistent; it is significantly more helpful for small models on a random data split**: Table 1 shows that data augmentation improves performance (3 points on average, compared to not using DA). This shows that the proposed DA approach can effectively alleviate the lack of annotated healthcare data when the distributions of the training and test data are consistent. From Table 1, it becomes evident that smaller models (ClinicalT5-large) can improve by up to 6 points with the help of data augmentation, but the effect diminishes as model size increases, reaching at most 2.5 points for the LLMs. A potential reason is that the test set of the shared task differs significantly from the training set in the variation of summary lengths.

**The model is capable of discriminating irrelevant information, but longer input lengths may result in decreased performance**: We conducted ablation experiments on PULSAR-3B to verify the impact of the input text type. In contrast to Gao et al. (2022)'s findings on the small model, the results (PULSAR-3B vs. PULSAR-3B-A) in Table 1 show that if the input is Assessment + Subjective + Objective, the model performs better (by 2.9 points on the official test set and by 7 points on the development set) compared with only using the Assessment as input. This indicates that while most of the relevant information can be inferred from the Assessment section alone, additional input can be beneficial. However, increasing the input length appears not to be useful: Table 2 shows that models trained with longer input lengths (1024 tokens) do not improve over models that were trained on 512-token-long inputs.

## 5 Conclusion

This paper contributed to the development of methods for summarising patients' problems. Firstly, we proposed a novel task-specific pre-training objective for LLMs; compared with other submissions, we ranked 2nd in the official shared task without using additional manually annotated training samples. Secondly, we proposed a new data augmentation framework and demonstrated its effectiveness in the healthcare domain.
In the future, we will explore the applicability of our approach to other domain-specific generative tasks and conduct a deeper analysis of factors that contribute to overall model performance.

### Limitations

The proposed model is computationally demanding. Recent work on parameter-efficient fine-tuning methods, such as LoRA (Hu et al., 2021), suggests that they can significantly reduce the number of trainable parameters at a minimal performance cost, which may help further democratise the development of domain- and task-specific models. In addition, since the \(\mathsf{PULSAR}\) models were obtained through continued pre-training, their tokenizer was inherited from the corresponding \(\mathsf{Flan}\)-T5 model. It therefore does not contain domain-specific terminology, which may be a limitation in terms of representation density (i.e., frequent clinical terms may be split into multiple rare sub-tokens).

### Ethics Statement

For the present work, we used an existing anonymised dataset from the BioNLP 2023 Shared Task 1A without any data protection issues. In addition, data augmentation only uses an open-source, offline model, so no data covered by the data use agreement is shared with a third party.

## Acknowledgements

We thank the anonymous reviewers from the BioNLP 2023 Shared Task for their valuable feedback. We would also like to acknowledge the use of the Computational Shared Facility at The University of Manchester.
2302.01972
DCA: Delayed Charging Attack on the Electric Shared Mobility System
An efficient operation of the electric shared mobility system (ESMS) relies heavily on seamless interconnections among shared electric vehicles (SEV), electric vehicle supply equipment (EVSE), and the grid. Nevertheless, this interconnectivity also makes the ESMS vulnerable to cyberattacks that may cause short-term breakdowns or long-term degradation of the ESMS. This study focuses on one such attack with long-lasting effects, the Delayed Charge Attack (DCA), that stealthily delays the charging service by exploiting the physical and communication vulnerabilities. To begin, we present the ESMS threat model by highlighting the assets, information flow, and access points. We next identify a linked sequence of vulnerabilities as a viable attack vector for launching DCA. Then, we detail the implementation of DCA, which can effectively bypass the detection in the SEV's battery management system and the cross-verification in the cloud environment. We test the DCA model against various Anomaly Detection (AD) algorithms by simulating the DCA dynamics in a Susceptible-Infectious-Removed-Susceptible process, where the EVSE can be compromised by the DCA or detected for repair. Using real-world taxi trip data and EVSE locations in New York City, the DCA model allows us to explore the long-term impacts and validate the system consequences. The results show that a 10-min delay results in 12-min longer queuing times and 8% more unfulfilled requests, leading to a 10.7% (\$311.7) weekly revenue loss per driver. With the AD algorithms, the weekly revenue loss remains at least 3.8% (\$111.8) with increased repair costs of \$36,000, suggesting the DCA's robustness against the AD.
Shuocheng Guo, Hanlin Chen, Mizanur Rahman, Xinwu Qian
2023-02-03T19:46:03Z
http://arxiv.org/abs/2302.01972v2
# DCA: Delayed Charging Attack on the Electric Shared Mobility System

###### Abstract

An efficient operation of the electric shared mobility system (ESMS) relies heavily on seamless interconnections among shared electric vehicles (SEV), electric vehicle supply equipment (EVSE), and the grid. Nevertheless, this interconnectivity also makes the ESMS vulnerable to cyberattacks that may cause short-term breakdowns or long-term degradation of the ESMS. This study focuses on one such attack with long-lasting effects, the Delayed Charging Attack (DCA), which stealthily delays the charging service by exploiting physical and communication vulnerabilities. To begin, we present the ESMS threat model by highlighting the assets, information flow, and access points. We next identify a linked sequence of vulnerabilities as a viable attack vector for launching the DCA. Then, we detail the implementation of the DCA, which can effectively bypass the detection in the SEV's battery management system and the cross-verification in the cloud environment. We test the DCA model against various Anomaly Detection (AD) algorithms by simulating the DCA dynamics in a Susceptible-Infectious-Removed-Susceptible (SIRS) process, where the EVSE can be compromised by the DCA or detected for repair. Using real-world taxi trip data and EVSE locations in New York City, the DCA model allows us to explore the long-term impacts and validate the system consequences. The results show that a 10-min delay will result in 12-min longer queuing times and 8% more unfulfilled requests, leading to a 10.7% ($311.7) weekly revenue loss per driver. With the AD algorithms, the weekly revenue loss remains at 3.8% ($111.8), suggesting the robustness of the DCA.

Delayed charging attack, false data injection attack, electric shared mobility system, shared electric vehicle, cybersecurity.

## I Introduction

Electrifying the fleet for shared mobility service is a promising direction to lower operation costs and reduce greenhouse gas (GHG) emissions [1, 2]. As an example, Shenzhen, China achieved a fully-electrified taxi fleet by the end of 2019 [3]. Moreover, New York City (NYC) will embrace 100% electrification of its for-hire vehicle fleet by 2030 [4], which further requires the deployment of over 1,750 DC-Fast charging ports [5]. For large-scale electric shared mobility systems (ESMSs), the essence of efficient operation is the optimal scheduling of charging and mobility services, which requires seamless interconnections among the major assets in the ESMS (see Fig. 1), including the shared electric vehicles (SEVs), EV supply equipment (EVSE), the charging station management system (CSMS), and the EVSE and SEV operators, facilitated by multiple communication protocols, e.g., the Open Charge Point Protocol (OCPP) [6] and the Open Charge Point Interface (OCPI) [7]. However, these communication pathways also present vulnerabilities that can be exploited to compromise the SEVs and EVSE [8, 9, 10], resulting in battery charging controller malfunctions and inconsistent charging outcomes for SEVs, thus disrupting the coordination of charging schedules across the entire fleet. This will translate into local congestion at EVSEs, excessive downtime of vehicle supply, and degradation of system performance or even catastrophic failure of the entire mobility system.
Considering the vulnerabilities and significant consequences above, this study takes the first step toward investigating an attack model that can disrupt the ESMS, and toward understanding its long-term impacts on operational dynamics. Cybersecurity issues have been extensively studied in the fields of the smart grid [11], the Internet-of-Things [12], mobility-as-a-service systems [13], traffic signal control systems [14], and, more recently, the ESMS [15, 16, 17, 18]. While the ESMS offers improved efficiency with coordinated charging and dispatching, it also inherits the cyber threats of its assets, including the SEVs, the EVSE, and the communication interfaces with the CSMS and the SEV operator. As such, the ESMS can be subject to new forms of cyberattacks due to the increased system dependencies. As depicted in Fig. 1, we showcase a potential attack vector (represented by red lines) for a distinct type of False Data Injection Attack (FDIA), known as the Delayed Charging Attack (DCA). The DCA's objective is to stealthily delay the charging process of the SEV fleet while maintaining the safety requirements of the SEV (e.g., avoiding excessive current).

Fig. 1: Communication framework in the ESMS (DSO: Distribution System Operators, EMS: Energy Management System)

This attack is accomplished by compromising the EVSE via a physical access point (such as a USB port) and intercepting the communication between the SEV and EVSE. By injecting falsified State-of-Charge (SoC) data into the SEV, the SEV is spoofed into accepting a reduced charging rate (e.g., a lower current or voltage), thereby requiring an extended time to reach the target SoC. Meanwhile, consistent charging log information is uploaded to the cloud environment (i.e., the CSMS, the SEV operator, and the OCPI in between) by the compromised EVSE and the SEV, which can effectively bypass the cross-verification. Unlike other types of attacks (e.g., denial-of-service) which cause an entire breakdown in one shot, the DCA is designed as a stealthy cyberattack with long-term consequences. In this regard, the DCA is likely to be overlooked for two reasons: (1) the DCA will not lead to an immediate collapse of the ESMS, but instead a gradual degradation over time, and (2) the anomalies resulting from the DCA are challenging to detect, particularly in the ESMS, where outliers (e.g., extended charging durations) can be indistinguishable due to the large variation in charging duration (which can range from several minutes to 1-2 hours) across all charging activities. To the best of our knowledge, no prior studies have investigated the DCA on a large-scale ESMS. This study aims to address this gap by exploring the system-level consequences of the DCA and evaluating the DCA's efficacy and robustness under different lengths of charging delay and various detection strategies. This study begins by presenting the ESMS threat model and identifying a linked sequence of potential vulnerabilities, which forms a viable attack vector for launching the DCA. Then, we model the DCA dynamics based on the Susceptible-Infectious-Removed-Susceptible (SIRS) process. Detection strategies are incorporated by comparing widely adopted Anomaly Detection (AD) techniques, which detect malfunctioning EVSE based on deviations from normal system performance. Malfunctioning EVSE revert to the susceptible state upon repair.
The major contributions of this study are summarized as follows: (1) we present a threat model for the ESMS and conduct the corresponding cybersecurity analyses; (2) we identify the vulnerabilities in the ESMS, e.g., physical entry points (e.g., the USB port) and communication interfaces (e.g., signal exchanges between the EVSE and SEV, software updates), which are further exploited as access points for launching the DCA, and we adopt different AD strategies to demonstrate the robustness of the DCA; and (3) we develop a high-fidelity simulation platform using real-world data to assess the long-term effects of the DCA on a large-scale ESMS and validate the system consequences of this attack.

The rest of the paper is organized as follows. Section II describes the ESMS threat model and examines the viability of the DCA model. Section III proposes the SIRS-based DCA model and the AD algorithms, and Section IV introduces the key components of our high-fidelity simulation platform. Section V presents the scenario design and parameter assumptions, followed by numerical experiments in Section VI. Section VII concludes the paper.

## II Threat Model for the ESMS

In this section, we introduce the threat model of the ESMS by highlighting the major assets, information flow, and access points. This allows us to outline the potential cyber-physical threats and understand their impacts on the ESMS. Our threat model, shown in Fig. 2, provides a minimal implementation of the electric shared mobility service operated by a single EVSE and SEV service provider. It includes four types of trust boundaries around the major assets, the SEV fleet, and the SEV service provider. Information flows are detailed between the major assets and interfaces, including the EVSE and SEV control & communication interfaces, the connector, the SEV battery management system (BMS), and the external entities (e.g., the SEV and EVSE service providers).

### _Major Assets_

We summarize the major assets in the ESMS threat model as follows:

* **CSMS**: The CSMS enables SEV drivers and EVSE operators to control and monitor the EVSE remotely, including charging record-keeping, scheduling, and user authentication [20].
* **EVSE**: The EVSE consists of four major components: the DC Fast Charging (DCFC) EVSE controller, the EVSE communication interface, the EVSE contactor, and the local control interface. The main function of the EVSE is to transfer electric power from the power grid to the SEV battery.
* **DCFC EVSE controller**: The DCFC EVSE controller processes the exchanged data from the local control interface (e.g., payment information) and the EVSE communication interface (e.g., SEV battery data). It also processes the real-time information exchanged between the EVSE and the EVSE vendor controller in the CSMS, e.g., availability status. In addition, the EVSE controller supports maintenance service, which can be conducted either by physical access (e.g., USB) or over the air via OCPP from the CSMS (e.g., patches and software updates).
* **EVSE communication interface**: The EVSE communication interface communicates with the EVSE contactor for power supply, obtains charging requirements from the EVSE control & communication interface, and sends them to the DCFC EVSE controller [21].
* **EVSE control & communication interface**: The EVSE control & communication interface is typically the only physical link between the EVSE and SEV, and is embedded in the charging connector. The most popular types of DC fast charging connectors in the U.S. include CHAdeMO, the Combined Charging System (CCS), and the Tesla charger [22].
The connector consists of power lines, control lines, and control pilots (PLC communication in CCS) or CAN buses (in CHAdeMO), enabling power supply, analog control, and data communication, respectively [23].

* **BMS**: The BMS keeps the SEV battery within its specified safe operating conditions [24]. For instance, it monitors the real-time voltage, current, and battery temperature to avoid excessive current or overheating [23].
* **SEV control & communication interface**: The SEV control & communication interface collects battery data from the BMS and sends the EVSE's configuration to the BMS for a compatibility check [23].

### _Information Flows in SEV Charging Process_

The ESMS relies heavily on seamless communication for the efficient coordination of dispatching and charging needs. In particular, a full cycle of charging service, from charging reservation to completion, requires various communication exchanges between the SEV driver, the EVSE, and the CSMS. For instance, the SEV driver must initiate a charging request with the desired state of charge (SoC) or charging duration, which can be accomplished through a charging app (e.g., EVgo [25] and EVmatch [26]) or a human-machine interface (HMI). Before the charging starts, several rounds of confirmation proceed via the analog control lines for a compatibility check (e.g., of the SEV battery and charger parameters) [23]. During the charging process, numerous communication exchanges take place between the EVSE and SEV regarding power supply and battery conditions [23] (e.g., the maximum voltage to stop charging, target voltage, battery capacity, and maximum admissible current of the EVSE and SEV). Furthermore, the BMS continuously calculates the optimal charging current based on the current SoC, battery condition, and temperature. Upon reaching the target SoC or charging duration, the BMS sends a signal to the EVSE to end the charging process.

### _Access Points_

The wireless communication and physical entries in the ESMS open a wide attack surface, which can be exploited as access points to disrupt the charging process. We briefly summarize three types of access points in the ESMS as follows (for a comprehensive review, see Johnson et al. [18]).

* **Controller Area Network (CAN)**: CAN was originally designed to ensure communication performance in a complex electromagnetic environment, without consideration for cybersecurity [27]. CAN communication relies on broadcasting, which enables eavesdropping attacks. As seen in Fig. 2, the CAN buses cover the major components in the EVSE and SEV through the EVSE control & communication interface, where transmitted messages can be modified and broadcast to all covered electronic control units (ECUs) without discretion [27].
* **OCPP**: Nasr et al. [20] reported 13 types of vulnerabilities (e.g., missing authentication, hard-coded credentials, and missing rate limits) in 16 real-world CSMSs. These vulnerabilities can be further exploited to compromise the lower-level EVSE by embedding malware into patches, thus disrupting the charging process and manipulating the default settings of the EVSE.
* **USB port on EVSE**: Although potential vulnerabilities of USB ports were demonstrated in several studies [28, 17], the first attack via an external interface, e.g., a USB or serial interface, was reported by the Idaho National Laboratory (INL) [29, 30].
After obtaining physical and remote access to the EVSE, researchers at INL successfully manipulated the modular power electronics in EVSE ports equipped with the J1772 CCS and CHAdeMO protocols, thus disrupting the charging process.

Fig. 2: A minimal implementation of the ESMS, adapted from the STRIDE threat model proposed by the Sandia National Laboratories [19]

### _Possible Attacks on ESMS_

In the ESMS, the vulnerabilities identified during the charging process create a _linked sequence_ from the external USB port in the maintenance interface, via the connector, to the BMS and ECUs in the SEV. These vulnerabilities open a wide attack surface for different malicious attacks, including EV-EVSE interface tampering, the Distributed Denial-of-Service (DDoS) attack, the Man-in-the-Middle (MitM) attack, and electronic control manipulation. We summarize these attacks and their corresponding impacts on the charging service in Table I. Specifically, EV-EVSE interface tampering is only possible by physically deploying an off-the-shelf radio near the EVSE, which incurs high attack costs for large-scale impacts. Moreover, the charging process terminates completely unless the connector is manually reconnected, making the attack easily detectable through manual reporting of malfunctions. Similarly, the consequences of DDoS and electronic control manipulation are mainly explicit and easier to detect, e.g., delayed responses from the server and the disruption of web service for hours [33], and thus have little impact on the ESMS in the long run. As for the MitM attack, its primary target is data privacy, and few impacts on the system performance of the ESMS have been reported. In summary, we note that the above attacks target either one-time breakdowns or data privacy issues, which are not aligned with our research goal. Our proposed DCA, on the other hand, is a special type of FDIA that stealthily spoofs the SEV into accepting a lower charging rate, leading to delayed charging service and long-term degradation of the ESMS. In the following sections, we present the stage-by-stage DCA development and the AD techniques for DCA detection.

## III DCA Development and Anomaly Detection

Having outlined the threat model and potential attacks on the ESMS, we now move on to the DCA model, including the attack vector, the consequences for the ESMS, and the implementation details. To evaluate the effectiveness and robustness of the DCA, we model the DCA dynamics in the SIRS process, incorporating both attack and detection models, where five types of AD techniques are introduced for DCA detection.

### _DCA Modeling_

The DCA is a distinct form of FDIA that aims to disrupt the charging service for the SEV fleet by injecting falsified SoC information into the SEV. The attacker exploits vulnerabilities in physical entries (e.g., USB ports) and communication interfaces and protocols (e.g., the CAN bus and OCPP) to manipulate the SEV into accepting a reduced charging rate while still ensuring the SEV battery's safety (e.g., avoiding excessive current or voltage). The DCA first compromises a set of EVSE via the USB port and targets the delay of charging services for the SEV fleet. This deviation from normal operation can hardly be detected by SEV drivers or standard AD techniques due to the wide range of charging durations, from several minutes to 1-2 hours [3].
These minor delays in individual charging activities result in local congestion at EVSEs and unavailability of SEVs, eventually leading to a cascading failure in the ESMS. It is worth noting that a straightforward DCA could be executed by directly charging the SEV at a lower rate. However, the BMS in the SEV continuously monitors the input current via the CAN bus and ECUs, and it will raise an alarm if the deviation from the optimal charging current surpasses a set threshold. Therefore, special care is required to devise a DCA model that can stealthily slow down the charging process. We next outline the full steps of the DCA implementation (also illustrated in Fig. 3):

1. Compromise the EVSE via the USB port.
2. Before the charging starts, the compromised EVSE sends true configuration data to the SEV for the compatibility check.
3. During the charging process:
   (a) The DCFC EVSE controller collects the ground-truth information of the SEV battery via CAN and reports it to the CSMS via OCPP.
   (b) The DCFC EVSE controller sends falsified SoC information to the BMS in the SEV through the EVSE control & communication interface (the CAN bus in CHAdeMO, and power line communication (PLC) in the CCS protocol).

Step 3(b) serves as the core of our DCA for delaying the charging service. It ensures that the battery safety requirements, e.g., on maximum current and voltage, are met while bypassing the BMS's monitoring of the optimal charging current. Furthermore, our DCA guarantees that the charging log information uploaded to the cloud (e.g., the CSMS and SEV operator) remains unchanged, allowing for successful cross-verification in the cloud.

Fig. 3: A block diagram of the DCA (Ctrl&Comm.: Control & Communication)

To gain a better understanding of the SEV battery's recharging and monitoring mechanisms, we present the charging profiles under normal operation and under the DCA in Fig. 4. We assume that the BMS adopts the constant-current-constant-voltage (CC-CV) recharging scheme, which is widely used for Lithium-ion batteries. Under a proper design of the falsified SoC information, we show that the charging service can be delayed within safe limits. The CC-CV recharging scheme, shown in Fig. 4a, maps the charging duration (or SoC information) and the charging rate (i.e., charge voltage and current) in a one-to-one relationship. During the constant current (CC) phase, the charge voltage increases gradually to reach its maximum. At a certain SoC tipping point (e.g., 80%), the charge current begins to decrease gradually to zero in the constant voltage (CV) phase. This relationship allows the BMS to detect a change in the charging rate if the SoC information and the charge voltage (or current) do not match. In light of this, our DCA leverages this characteristic and manipulates the SoC reported to the BMS, leading to a reduction in the optimal charging current. The difference between the original and delayed charging currents is shown in Fig. 4a, indicating that the optimal charging current under the DCA is always less than or equal to the original charging rate. As illustrated in Fig. 4b, the lower charging rate produces a smoother charging profile and leads to a longer charging duration to reach the target SoC, while still respecting the safety limits on charge voltage and current.
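To illustrate the mechanism numerically, the sketch below simulates a simplified CC-CV policy in which the commanded current is constant below the 80% tipping point and tapers linearly to zero above it, with the compromised EVSE inflating the SoC reported to the BMS. The rate constants and the spoofing function are our own illustrative assumptions, not parameters from the paper; note that the spoofed current never exceeds the legitimate current at the same true SoC, mirroring Fig. 4a.

```python
def charge_minutes(start_soc, target_soc, spoof=0.0,
                   cc_rate=2.5, tip=0.8, dt=0.01):
    """Minutes to reach target_soc under a toy CC-CV policy.
    cc_rate: CC-phase charging rate in % SoC per minute.
    spoof:   fraction by which the reported SoC is inflated toward 100%."""
    soc, t = start_soc, 0.0
    while soc < target_soc:
        reported = soc + spoof * (1.0 - soc)   # falsified SoC seen by the BMS
        if reported < tip:                     # CC phase
            rate = cc_rate
        else:                                  # CV phase: current tapers to 0
            rate = cc_rate * (1.0 - reported) / (1.0 - tip)
        soc += (rate / 100.0) * dt
        t += dt
    return round(t, 1)

print(charge_minutes(0.2, 0.95))               # ~35 min under normal operation
print(charge_minutes(0.2, 0.95, spoof=0.3))    # ~40 min under the DCA
```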
### _DCA Dynamics in SIRS Process_

We model the DCA dynamics in an SIRS process. First, the DCA is launched at a set of infectious EVSE ports. As discussed in Sec. III-A, the infected EVSE controller sends falsified SoC information to the SEV's BMS, resulting in a lower optimal charging current and an extended time to reach the same target SoC. We present the SIRS model in the DCA context in Fig. 5, where \(\beta\) and \(\gamma\) denote the transmission rate and recovery rate, respectively, and \(\tau\) is the time for repair, indicating the time required for an EVSE to return to the susceptible state. The implementation is outlined in Algorithm 1.

```
Require: at time step t, the set of SEVs V^t and the set of EVSE S^t = S_S^t ∪ S_I^t ∪ S_R^t
Ensure: the set of EVSE S^{t+1} with updated statuses
 1: for i ∈ S^t do                                  /* loop over all EVSE */
 2:   if i is susceptible then
 3:     draw x_i^{S-I} ← B(1, β)                    /* B: Binomial distribution */
 4:     if x_i^{S-I} = 1 then
 5:       update the status to infectious
 6:       S_I^{t+1} ← S_I^t ∪ {i};  S_S^{t+1} ← S_S^t \ {i}
 7:       record the time stamp t_i^I ← t
 8:     end if
 9:   else if i is removed and t - t_i^R ≥ τ then   /* repair is finished */
10:     update the status back to susceptible
11:     S_S^{t+1} ← S_S^t ∪ {i};  S_R^{t+1} ← S_R^t \ {i}
12:   end if
13: end for
14: for i ∈ S_I^t do                                /* loop over all infectious EVSE */
15:   x_i^{I-R} ← AnomalyDetection(d_i, t_i, c_i)
16:   if x_i^{I-R} = 1 then                         /* EVSE identified as an anomaly */
17:     conduct alarm validation
18:     if the alarm is a true positive then
19:       update the status to removed
20:       record the time stamp t_i^R ← t
21:       S_R^{t+1} ← S_R^t ∪ {i};  S_I^{t+1} ← S_I^t \ {i}
22:     end if
23:   end if
24: end for
```
**Algorithm 1** SIRS model in the ESMS

#### Iii-B1 S-I transmission

In the S-I transmission process, an EVSE is compromised either via a wireless communication interface (e.g., between the CSMS and EVSE under OCPP) or via an external physical entry (e.g., the USB port). For this study, we make a conservative assumption such that the _EVSE is only infected by physical access via the USB port_, as this approach has been validated to disrupt the SEV's charging service by the Idaho National Laboratory [29, 30]. We also note that the malware may also spread through the EVSE communication network and the SEV-EVSE connection [15]. These stronger assumptions would contribute to a higher transmission rate \(\beta\); however, there is currently a lack of real-world evidence within the ESMS to support them. As such, these types of malware infection will be the focus of future research as more vulnerabilities in the ESMS are identified.

#### Iii-B2 I-R transmission

The I-R transmission relies on the AD algorithm applied to the charging log information. Upon a positive alarm, the EVSE operator first verifies it through the EVSE communication network (assumed to take within one minute) and sends out a technician for repair service. If the alarm is confirmed to be a true positive (i.e., the EVSE is infectious), the EVSE is isolated and the technicians inspect and repair it. Therefore, the transmission rate (\(\gamma\)) strongly depends on the effectiveness of the AD algorithm, which will be introduced in Section III-C.
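A minimal sketch of one time step of Algorithm 1 is given below. The per-minute detection probability `p_detect` is a stand-in for the AD techniques of Section III-C, and all numbers (including `beta` and `tau`) are illustrative rather than the calibrated parameters of the paper.

```python
import random

def sirs_step(status, removed_at, t, beta=1e-4, p_detect=1e-3, tau=180):
    """One update of the EVSE status dict ('S'/'I'/'R') at minute t."""
    for evse, s in status.items():
        if s == 'S' and random.random() < beta:         # S -> I: compromise
            status[evse] = 'I'
        elif s == 'R' and t - removed_at[evse] >= tau:  # R -> S: repaired
            status[evse] = 'S'
    for evse, s in status.items():
        if s == 'I' and random.random() < p_detect:     # I -> R: true alarm
            status[evse], removed_at[evse] = 'R', t

status = {i: 'I' for i in range(215)}    # all 215 ports infected at launch
removed_at = {i: -10**9 for i in range(215)}
for t in range(7 * 24 * 60):             # one simulated week at 1-min steps
    sirs_step(status, removed_at, t)
print({s: sum(v == s for v in status.values()) for s in 'SIR'})
```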
#### Iii-B3 R-S transmission

The R-S transmission occurs after the completion of the repair process for the infected EVSE. The duration of this process is determined by the mean-time-to-repair (MTTR), denoted by \(\tau\). During this period, the EVSE is out of service, and the queued SEVs relocate to other available EVSE ports.

Fig. 4: Charging profiles under normal operation and DCA: (a) CC-CV charging process and (b) SoC profile

Fig. 5: SIRS model under DCA

### _Anomaly Detection for DCA_

To examine the robustness of the DCA model, we incorporate AD algorithms into the ESMS that proactively monitor the system's charging performance and protect against potential threats. The core purpose of the AD algorithms is to prevent the SEV fleet from experiencing delays in charging service, thereby mitigating the potential for cascading failures in the entire system due to local congestion and excessive downtime of SEV supply, especially during peak hours. In particular, we focus on five AD techniques, chosen in view of (1) the features of the anomalies and (2) the ESMS operation. For the former, the anomalies in our study are assumed to be contextual [34]. For instance, a short charging duration (e.g., \(20\,\mathrm{min}\)) may be considered anomalous during the nighttime but acceptable during peak hours [3]. For the latter, we treat the historical charging performance as the baseline (i.e., the training set), assuming that it only includes data instances collected during normal operation before the DCA was launched. We next introduce the five AD techniques as follows:

#### Iii-C1 Isolation Forest (IF)

The IF algorithm was first proposed by Liu et al. [35] for the purpose of AD and was later used for cyberattack detection [36]. The IF-based detection algorithm proceeds as follows. We first train the IF model using the benign historical charging log data to capture the properties of normal operation. The charging log data consist of the charging duration, time of day, and initial SoC, denoted by \((d_{i},t_{i},c_{i})\). With a pre-defined false alarm rate \(\alpha_{I}\) and number of estimators \(n\), we train an IF model consisting of \(n\) proper binary trees, where a fraction \(\alpha_{I}\) of the training samples is considered anomalous. Next, we collect batches of charging log data online as testing data. We denote the anomaly score of sample \(j\) in the testing set by \(s_{j}\in[0,1]\), which takes the form: \[s_{j}=2^{-\frac{\bar{h}_{j}}{\bar{h}}} \tag{1}\] where \(\bar{h}_{j}\) denotes the average path length of sample \(j\) over the collection of isolation trees, and \(\bar{h}\) is the average path length of an unsuccessful search, which is used to normalize \(\bar{h}_{j}\). The anomaly score can thus be understood as the effort required to isolate a sample in the trees, where anomalies are isolated at an early stage of exploration. Incorporating the false alarm rate \(\alpha_{I}\), the anomaly is defined by a binary indicator \(y_{j}\), where \(y_{j}=1\) if \(s_{j}\) falls within the top \(\alpha_{I}\) fraction of anomaly scores (sample \(j\) is identified as an anomaly) and \(y_{j}=0\) otherwise.
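As a concrete illustration of the IF-based detector, the sketch below uses scikit-learn's `IsolationForest` on synthetic \((d_{i},t_{i},c_{i})\) triples; the data, the contamination level, and the 10-minute shift injected into the online batch are all illustrative assumptions rather than values from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

def logs(n, mean_duration):
    """Synthetic (duration [min], hour of day, initial SoC) triples."""
    return np.column_stack([rng.normal(mean_duration, 10, n),
                            rng.uniform(0, 24, n),
                            rng.uniform(0.1, 0.6, n)])

history = logs(2000, 35)                     # benign charging logs
clf = IsolationForest(n_estimators=100, contamination=0.05,
                      random_state=0).fit(history)

batch = logs(50, 45)                         # online batch: ~10 min delayed
flagged = (clf.predict(batch) == -1).mean()  # -1 marks an anomalous event
print(f"{flagged:.0%} of this EVSE's events flagged as anomalous")
```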
#### Iii-C2 Kullback-Leibler Divergence (KLD)

The KLD [37] measures the difference between two probability distributions \(p(x)\) and \(q(x)\) over events \(x\in\mathcal{X}\), denoted by \(D(p||q)\), which can be expressed as: \[D(p||q)=\sum_{x\in\mathcal{X}}p(x)\log\frac{p(x)}{q(x)} \tag{2}\] where \(p(x)\) and \(q(x)\) represent the charging-log distributions of one EVSE under normal operation and under potential DCA, respectively. Specifically, \(p(x)\) is obtained from the historical charging log data, and \(q(x)\) represents the real-time charging logs. Each charging log sample \(x\) is expressed as a triple \((d_{i},t_{i},c_{i})\). An anomaly is identified if the samples are associated with a KLD \(D(p||q)\) larger than a pre-defined threshold \(\alpha_{KLD}\).

#### Iii-C3 K-Means clustering (KMeans)

The KMeans clustering method [38] is a cluster-based algorithm that groups the input samples into \(K\) disjoint clusters based on the sample features. Specifically, we first obtain \(K\) clusters based on the historical charging log data \((d_{i},t_{i},c_{i})\). Next, we measure the distance between each online charging log \((d_{j},t_{j},c_{j})\) and the nearest centroid of the \(K\) clusters. Given a sensitivity level \(\alpha_{KM}\), we report EVSE \(i\in\mathcal{S}\) as an anomaly if more than \(\alpha_{KM}\%\) of the real-time samples are associated with a distance larger than a pre-defined threshold \(D_{KM}\).

#### Iii-C4 Gaussian Mixture Model (GMM)

The GMM is a model consisting of \(K\) separate multivariate normal distributions \(\mathcal{N}(\cdot)\), known as mixture components. Each mixture component is parameterized by \(\theta_{k}:=\{\mu_{k},\Sigma_{k}\}\), where \(\mu_{k}\) and \(\Sigma_{k}\) represent the mean vector and covariance matrix of the \(k\)-th mixture component. Let \(\pi_{k}\geq 0\) be the associated weight of the \(k\)-th mixture component, where \(\sum_{k=1}^{K}\pi_{k}=1\). We consider the probability density function of the GMM as a weighted sum of the mixture components: \[p(\mathbf{x}|\theta)=\sum_{k=1}^{K}\pi_{k}\mathcal{N}(\mathbf{x}|\mu_{k},\Sigma_{k}) \tag{3}\] where \(\mathbf{x}\) is a collection of tuples \((d_{i},t_{i},c_{i})\), each of dimension 3. For the GMM-based AD technique, we define anomalies as the samples \(x_{i}\) with \(p(x_{i}|\theta)<c_{G}\), where \(c_{G}\) denotes the significance level of the GMM. An anomaly is reported if more than \(\alpha_{G}\%\) of the samples for an EVSE are identified as outliers. Note that there is no analytical solution for the parameters \(\theta:=\{\theta_{k}:k=1,\cdots,K\}\). Instead, we can estimate the parameters \(\theta\) using the Expectation-Maximization (EM) algorithm [39], which starts from a random set of parameters \(\hat{\theta}\) and iteratively estimates the optimal set \(\hat{\theta}^{*}\) that maximizes the average log-likelihood of the training set \(\mathbf{x}\). For the detailed implementation of the EM algorithm, we refer interested readers to Reynolds [40].
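A corresponding sketch of the GMM-based detector is shown below, with scikit-learn's `GaussianMixture` fitting the mixture via EM. Setting the likelihood cutoff \(c_{G}\) from an empirical quantile of the training scores is our own simplification, and all data and thresholds are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

def logs(n, mean_duration):
    return np.column_stack([rng.normal(mean_duration, 10, n),
                            rng.uniform(0, 24, n),
                            rng.uniform(0.1, 0.6, n)])

history = logs(2000, 35)
gmm = GaussianMixture(n_components=3, random_state=0).fit(history)  # EM fit

c_G = np.quantile(gmm.score_samples(history), 0.05)  # log-likelihood cutoff
batch = logs(200, 45)                                # delayed online logs
outlier_frac = (gmm.score_samples(batch) < c_G).mean()
alpha_G = 0.2                                        # EVSE-level sensitivity
print("EVSE flagged as anomaly:", outlier_frac > alpha_G)
```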
#### Iii-C5 Principal component classifier (PCC)

The PCC [41] first obtains the principal components from the covariance matrix of the training data (assumed to be the historical charging log data under normal operation). Next, an anomaly score is assigned to each online charging event based on its deviation from the principal components. The anomaly score incorporates the _major_ and _minor_ components. Specifically, the former detects extreme observations (charging events with large variances), while the minor components help to detect values that are not outliers but are inconsistent with the correlation structure of normal operation. For each EVSE \(i\in\mathcal{S}\), we define EVSE \(i\) as an anomaly if over \(\alpha_{P}\%\) of the batch of real-time charging events are significant at level \(c_{P}\) for either the major or minor components.

## IV Agent-based Simulation Platform

We develop an agent-based simulator to characterize the interactions between the agents (SEVs) and the environment (passenger demand and EVSE ports). Our simulator has five components: matching, dispatching, repositioning, charging, and the DCA unit. Specifically, the DCA unit includes the SIRS-based DCA model and the AD techniques used to demonstrate the robustness of the DCA. In addition, we design utility-based heuristic algorithms that guide unoccupied SEVs to under-supplied areas and match SEVs to available EVSE. Key features of our simulator are listed below (for more detailed settings, see Qian et al. [42]).

### _DCA Unit_

#### Iv-A1 SIRS-based DCA Model

We first assume all EVSEs are infectious after the warm-up period. The SEVs using the infectious EVSE ports encounter a delayed charging service following Gaussian noise \(\Delta d\sim\mathcal{N}\left(\mu_{d},\sigma_{d}\right)\), such that the charging duration becomes \(d_{i}\gets d_{i}+\Delta d\). If an infectious EVSE is detected, it undergoes the Infectious-Removed-Susceptible process, such that the EVSE can be infected again after being repaired.

#### Iv-A2 AD Techniques for DCA Detection

An infectious EVSE can be identified as an anomaly by comparing the real-time and historical charging log information (e.g., charging duration, time of day, and initial SoC); see the details in Section III-C and Algorithm 1.

### _Charging_

#### Iv-B1 Matching with EVSE Ports

We consider a utility-based heuristic such that each SEV is assigned to the EVSE with the highest utility scaled by a soft-max function; for each EVSE port \(j\in\mathcal{S}\), a shorter queuing time \(q_{j}\) and travel time \(t_{ij}\) lead to a higher utility. Specifically, for an SEV at location \(i\), the probability of selecting EVSE \(j\), \(P^{c}(j|i)\), is: \[P^{c}(j|i)=\frac{\exp\left(-q_{j}t_{ij}\right)}{\sum\limits_{j\in\mathcal{S}}\exp\left(-q_{j}t_{ij}\right)} \tag{4}\] where \(\sum\limits_{j\in\mathcal{S}}P^{c}(j|i)=1\) and \(P^{c}(j|i)>0\) for each location \(i\).

#### Iv-B2 Charging Service and Queuing

The charging service follows the first-come-first-served rule.

#### Iv-B3 SIRS Process

Only EVSE in the susceptible and infectious statuses can serve SEVs. If an infectious EVSE is detected for repair, the SEVs in its queue relocate to a nearby EVSE with the highest utility following Eq. (4).
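The sketch below implements the soft-max choice of Eq. (4); the max-shift inside the exponential is a standard numerical-stability trick, and the queue and travel times are illustrative. With minute-scale inputs the distribution is sharply peaked, so the SEV almost always picks the port with the smallest queue-travel product.

```python
import numpy as np

def choose_evse(queue_min, travel_min, rng):
    """Sample an EVSE port following the soft-max in Eq. (4)."""
    cost = np.asarray(queue_min, float) * np.asarray(travel_min, float)
    u = np.exp(-(cost - cost.min()))   # shift by the min for stability
    p = u / u.sum()                    # selection probabilities
    return rng.choice(len(p), p=p), p

rng = np.random.default_rng(3)
port, probs = choose_evse(queue_min=[12, 3, 0], travel_min=[5, 8, 15], rng=rng)
print(port, np.round(probs, 3))
```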
### _Repositioning_

To better describe the status-quo scenario [43], we assume that idled SEVs reposition to under-supplied areas. Specifically, we use a utility-based heuristic analogous to the EVSE matching (see Eq. (4)), where the utility score is calculated based on the supply-demand gap and the travel time. At location \(j\), the supply-demand gap is defined as the gap between the number of fulfilled orders (\(N_{j}^{\text{order}}\)) and idled SEVs (\(N_{j}^{\text{idle}}\)) in the past \(15\min\). For an empty SEV at location \(i\), the probability of selecting location \(j\in\mathcal{Z}\), \(P^{r}(j|i)\), takes the form of a soft-max function: \[P^{r}(j|i)=\frac{\exp\left(\frac{N_{j}^{\text{order}}-N_{j}^{\text{idle}}}{t_{ij}}\right)}{\sum\limits_{j\in\mathcal{Z}}\exp\left(\frac{N_{j}^{\text{order}}-N_{j}^{\text{idle}}}{t_{ij}}\right)} \tag{5}\] where \(\mathcal{Z}\) denotes the set of all taxi zones, and \(t_{ij}\) is the travel time between locations \(i\) and \(j\).

### _Matching with Requests_

We conduct a greedy matching strategy, which sequentially assigns idle vehicles to open orders to achieve the shortest travel time within the permissible waiting time threshold. The available vehicles include idling SEVs and cruising SEVs that are traveling to a relocation destination. A cruising SEV is considered available for matching at its real-time location; after being matched, it stops cruising and moves to the assigned pick-up location.

### _Dispatching_

SEVs with assigned requests are dispatched to the pick-up location, while idled SEVs are guided to the target location with the highest utility considering the supply-demand pattern and travel time. Routes are generated and processed with the Open Source Routing Machine (OSRM) engine [44], which provides the detailed real-time location of each vehicle at each simulation tick.

## V Numerical Experiments

### _Case Study Area_

We conduct a real-world case study in NYC to demonstrate the effectiveness of the DCA model on the ESMS. In particular, we develop a high-fidelity simulation platform extended from our previous study [42]. The simulation environment is updated every minute and covers 1,347 hexagon cells (each covering an area of 0.14 square miles). We use NYC taxi data from May 2016 as the input [45], and all trips are assumed to originate from and head to the centroids of the hexagon cells. In addition, we obtain the real-world locations of EVSE ports from the Alternative Fuels Data Center (AFDC) [46]. Note that we only consider DCFC ports, given the system efficiency requirements and the poor accessibility of home charging for taxi drivers in NYC [5]. The travel time between each pair of centroids is obtained by overlaying the actual road network and querying the OSRM engine. We perform a downscaled status-quo experiment of the taxi system by randomly sampling 25% of the historical trip records (around 130,000 trips). As existing EVSE facilities can hardly support the charging demand of a 100% electrified SEV fleet [5], we also consider enlarging the number of charging piles at the real-world charging stations [46]. To find the best combination of SEV fleet size, demand, and number of EVSE ports, we conduct a cross-validation procedure by fixing 25% of the taxi trip demand, tuning the fleet size from 1,300 to 1,500 in increments of 50, and scaling the number of EVSE by 1.5 to 3.0 times in steps of 0.5. The best combination consists of 1,400 SEVs and 215 DC Fast EVSE ports to serve 25% of real-world daily taxi trips. The baseline scenario is justified by real-world evidence (see Figs. 6(a)-6(d)). Unless otherwise specified, available SEVs are dispatched to passengers that can be reached within \(20\min\), and the maximum waiting time for passengers is \(10\min\). Finally, we consider a 4-week simulation using three random seeds, where the first week is regarded as a warm-up and the other three weeks are the validation period.
The DCA is launched at the end of the warm-up.

### _Parameter Setting_

We summarize in Table II the parameter settings in this study, covering the SEV, the mobility service, and the SIRS process. Specifically, we adopt the Tesla Model 3 as the prototype vehicle for the electric mobility service [54], which is supported by both Tesla Supercharging and CCS EVSE ports. Based on the field experiment by Moloughney [50], it took \(32\,\mathrm{min}\) for a Tesla Model 3 to reach \(80\%\) SoC and another \(31\,\mathrm{min}\) for a full charge, where the charging rate for the first \(80\%\) is observed to be approximately linear. Such a linear charging profile is also validated by the in-vehicular database [51], which reports an average charging rate of approximately \(3.3\%\) SoC per minute.

## VI Results

Without the AD techniques, the queuing time at the EVSE ports under the DCA quickly exceeds the baseline threshold, which accelerates the snowballing of the excessive queuing time. With the AD techniques, the queuing time is greatly reduced, by over \(10\,\mathrm{min}\). However, there still exists space for improvement during the daytime (9 AM - 5 PM). This is because a proportion of EVSE is removed for repair and out of service for at least \(\tau=3\) hours; in this regard, the SEVs in the queue have to relocate to other EVSE ports, resulting in local congestion at the EVSE ports in the S and I statuses. As seen in Fig. 6(e), the charging duration can be reduced from over \(30\,\mathrm{min}\) to relatively the same levels as the baseline under all five AD techniques. Compared with the early morning and evening periods, the average delays in charging service are observed to be longer at around 6 AM and 2 PM, potentially due to a higher proportion of infectious EVSE ports.

### _System Performance for SEV Fleet_

This subsection shows the SEV driver's weekly revenue loss under different lengths of charging delay, with and without the AD models. To better understand the degradation of the mobility service, we also present two major dynamics: the order fulfillment rate and the SEV occupancy rate. Fig. 7 displays the system revenue loss during the last week of our simulation, capturing the long-term impact of the DCA. One immediate observation is the strong linear relationship between the delayed charging time and the revenue loss. Specifically, without the AD, we report a system revenue loss of at least $172.9 (5.9%), $311.7 (10.7%), and up to $404.0 (13.9%) under delays of 5, 10, and \(15\,\mathrm{min}\), respectively. With a delay of \(15\,\mathrm{min}\), the system revenue loss can be reduced to about $140.1 (4.8%) comparing all five AD techniques. Also, under the IF and GMM, the lowest revenue losses are observed in the cases with a \(10\,\mathrm{min}\) delay. This implies a trade-off between the extended charging duration and the robustness of the DCA: compared with the \(5\,\mathrm{min}\) delay, the \(10\,\mathrm{min}\) delay is more likely to be detected, resulting in less disruption of the charging process, while the \(15\,\mathrm{min}\) delay can be effectively identified by the AD techniques. However, there always exists a proportion of infectious EVSE ports that cause significant charging delays. Finally, despite the improvement in system revenue loss of up to 8.6% (from 13.9% to 5.9%), the underlying repair cost may grow exponentially with longer charging delays, as evidenced in Fig. 9. Figs. 8(a)-8(b) illustrate the order fulfillment and SEV fleet's occupancy rates under the baseline and under the DCA with and without AD.
For the scenarios under AD, we only show the results of the KLD, which is observed to be one of the most effective AD techniques in Fig. 7. In Fig. 8(a), distinct differences between the baseline and the scenarios without AD are observed during the morning and evening peaks (e.g., 8 to 10 AM and 6 to 11 PM). In this case, we report a reduction in fulfillment rates of about 20% during the morning peak and 12% during the evening peak, yielding fulfillment rates of 80% and 67%, respectively. Similarly, the reduction in SEV occupancy rates varies from 2% to 6% under the delays of \(5\,\mathrm{min}\) and \(15\,\mathrm{min}\) from 6 PM to 11 PM. Under the PCC, we report an improvement in the fulfillment rate of up to 8% and in the SEV occupancy rate of as much as 5% under the delay of \(15\,\mathrm{min}\).

Fig. 6: (a)-(c) are three major metrics in the baseline scenario: (a) SEV occupancy rate, (b) order fulfillment rate, and (c) EVSE occupancy rate; (d)-(e) compare the charging dynamics under the baseline scenario and under \(10\,\mathrm{min}\) delay: (d) SEV queuing time at EVSE (min), and (e) SEV charging time (min)

Fig. 7: Comparison of total revenue during the last (4th) week in simulation

Fig. 8: Dynamics of electric mobility service during the last (4th) week in simulation: (a) order fulfillment rate and (b) SEV fleet's occupancy rate.

### _Impacts of EVSE_

We illustrate in Fig. 10 the SIR proportions of the EVSE under different delays and AD techniques during the 4th week. In general, the numbers of susceptible (status S) EVSE under the \(5\,\mathrm{min}\) delay are lower than those under the \(10\,\mathrm{min}\) and \(15\,\mathrm{min}\) delays. In particular, under the \(5\,\mathrm{min}\) delay, the IF and GMM are associated with the lowest numbers of susceptible EVSE, which explains the relatively higher revenue losses in Fig. 7. At the same time, significantly higher proportions of infectious EVSE and lower proportions of removed EVSE are observed under the IF and GMM, especially with the delay of \(5\,\mathrm{min}\). This can be explained by the stealthiness of the DCA, which can hardly be detected by the aforementioned AD techniques. In addition, the numbers of removed EVSE under the \(15\,\mathrm{min}\) delay are higher than those under the \(10\,\mathrm{min}\) delay. In this case, the higher proportion of removed EVSE results in a higher revenue loss, as observed in Fig. 7, and lower fulfillment and occupancy rates, as shown in Fig. 8.

### _Performance of AD Techniques_

Table IV compares the performance of the five AD techniques in terms of accuracy, precision, recall, and F1 score. We note that the KLD and PCC outperform the other AD techniques under different charging delays, with accuracy and precision over 0.86 and F1 scores of at least 0.92. In particular, the recall can reach up to 0.99, which suggests a significantly low miss-detection rate (as low as 1%). The superior performance of the KLD and PCC can be explained by their non-parametric nature, where no prior assumptions are made on the sample distribution. In particular, the low-rank approximation in the PCC exhibits great potential for identifying anomalies, especially in large-scale problems. In addition to the PCC, we also report good performance for the KMeans: all metrics are over 0.80 under the \(10\,\mathrm{min}\) and \(15\,\mathrm{min}\) delays, and the precision and recall exceed 0.80 in all cases, indicating good performance regarding the false-positive and miss-detection rates.
### _Sensitivity Analysis_

To better understand the trade-off between repair cost and system revenue loss, we conduct sensitivity analyses on the AD techniques with respect to \(\alpha_{(\cdot)}\) (the ranges for the cross-validation procedure were detailed in Table III). We summarize the repair cost and revenue loss under the \(10\,\mathrm{min}\) delay in Fig. 9. We first observe that higher sensitivity levels lead to linearly or even exponentially increasing repair costs, ranging from about $6,000 (KMeans) to nearly $40,000 (KLD and PCC). This is because the AD models perform more sensitive DCA detection by flagging more charging events as anomalies, resulting in higher repair costs for dispatching technicians. At the same time, the revenue loss (red dotted lines) shows an overall reduction at the price of significantly higher repair costs due to the more sensitive detection. However, the revenue loss remains at least 4% in all five AD models regardless of the repair cost, which demonstrates the robustness of the DCA model. Observing the trade-off between repair cost and revenue loss, we note that a slight compromise on the revenue loss can result in a significantly lower repair cost, e.g., \(\alpha_{I}=0.05,0.075\), \(\alpha_{KLD}=1,2\), and \(\alpha_{P}=0.3,0.4\). Such trade-off decisions will shed light on the coordinated management of SEV and EVSE for commercial purposes [58].

Fig. 10: Proportion of EVSE in statuses S, I, and R under five AD techniques.

Fig. 9: Trade-off between revenue loss and repair cost under a delay of \(10\,\mathrm{min}\) over a three-week period.

We highlight that there will always be a proportion of infectious EVSEs serving the SEV fleet, resulting in substantial revenue loss regardless of the sensitivity levels of the DCA detection models. Meanwhile, more sensitive detection may not contribute to more successful detection but instead lead to exponentially increasing repair costs and higher miss-detection rates (proportion of false-positive and false-negative alarms). In this regard, we call for more tailored AD models that can best balance the trade-off between system revenue and repair cost.

## VII Conclusion

This study presents the DCA model that can stealthily impede the ESMS by delaying the charging service of the SEV fleet. Leveraging the NYC taxi trip data and real-world EVSE locations, we evaluate the system impacts of the DCA on the ESMS in NYC using a self-developed high-fidelity simulation platform. The results indicate a long-term degradation of the ESMS caused by the DCA. For instance, a \(10\,\mathrm{min}\) delay in charging yields up to \(29\,\mathrm{min}\) of queuing time at EVSE and 8% more unfulfilled requests, leading to a 10.7% ($311.7) weekly revenue loss per SEV driver. While the AD techniques reduce the revenue loss, they also lead to increased repair costs, reaching up to $36,000 per week; even then, the weekly revenue loss remains at a minimum of 3.8% ($111.8). Therefore, our DCA model highlights a realistic and stealthy cyberattack approach that can chronically harm the ESMS. Future research should focus on incorporating a more realistic EVSE choice model to better understand the impacts of the DCA and its cascading failures.
For instance, an SEV may detour to a relatively distant EVSE due to driver preference [59]. Additionally, tailored defense strategies should be developed to effectively identify malfunctioning EVSEs, even under minimal disruptions in charging service (e.g., a \(2\,\mathrm{min}\) delay). This is especially relevant given the increasing prevalence of high-wattage EVSEs (e.g., extreme fast chargers [22]) and heavy-duty electric trucks in the ESMS. Finally, we will take into account battery degradation, which adds another layer of variability to charging duration beyond human-involved activities, thus making the DCA even more robust.
2307.15101
Detection of Children Abuse by Voice and Audio Classification by Short-Time Fourier Transform Machine Learning implemented on Nvidia Edge GPU device
The safety of children in children's homes has become an increasing social concern, and the purpose of this experiment is to apply machine learning to detect scenarios of child abuse and thereby increase the safety of children. This experiment uses machine learning to classify and recognize a child's voice and predict whether the sound currently made by the child is crying, screaming, or laughing. If a child is found to be crying or screaming, an alert is immediately sent to the relevant personnel so that they can perceive what the child may be experiencing in a surveillance blind spot and respond in a timely manner. Together with a hybrid use of video image classification, the accuracy of child abuse detection can be significantly increased. This greatly reduces the likelihood that a child will suffer violent abuse in the nursery and allows personnel to stop an imminent or incipient child abuse incident in time. The dataset collected for this experiment comes entirely from sounds recorded on site at the children's home, including crying, laughing, and screaming sounds as well as background noise. These sound files are transformed into spectrograms using the Short-Time Fourier Transform, the resulting image data are fed into a CNN for classification, and the final trained model achieves an accuracy of about 92% for sound detection.
Jiuqi Yan, Yingxian Chen, W. W. T. Fok
2023-07-27T16:48:19Z
http://arxiv.org/abs/2307.15101v1
Detection of Children Abuse by Voice and Audio Classification by Short-Time Fourier Transform Machine Learning implemented on Nvidia Edge GPU device

###### Abstract

The safety of children in children's homes has become an increasing social concern, and the purpose of this experiment is to apply machine learning to detect scenarios of child abuse and thereby increase the safety of children. This experiment uses machine learning to classify and recognize a child's voice and predict whether the sound currently made by the child is crying, screaming, or laughing. If a child is found to be crying or screaming, an alert is immediately sent to the relevant personnel so that they can perceive what the child may be experiencing in a surveillance blind spot and respond in a timely manner. Together with a hybrid use of video image classification, the accuracy of child abuse detection can be significantly increased. This greatly reduces the likelihood that a child will suffer violent abuse in the nursery and allows personnel to stop an imminent or incipient child abuse incident in time. The dataset collected for this experiment comes entirely from sounds recorded on site at the children's home, including crying, laughing, and screaming sounds as well as background noise. These sound files are transformed into spectrograms using the Short-Time Fourier Transform, the resulting image data are fed into a CNN for classification, and the final trained model achieves an accuracy of about 92% for sound detection.

## 1 Introduction

Children in schools or children's homes face risks of home accidents, fights among children, and abuse. There have been incidents in Hong Kong in which children in a children's home were abused by their caretakers, requiring police investigation. The government has called for using the latest artificial intelligence to strengthen the monitoring and supervision of children's homes and to generate real-time alerts to supervisors if any violent or abnormal behavior is detected. Apart from CCTV video images, the sound in the children's home can also provide signals reflecting the situation on site. This research project develops algorithms and builds a light-weight AI model for the analysis of audio waveforms, classifying the type of sound to assist the classification of the caretakers' and children's behavior.

## 2 Related works

### Machine Learning

Machine learning has achieved great success in the past decades in areas such as image classification [1, 2], face recognition, autonomous vehicles, and speech recognition [3]. Linnainmaa [4] conceived and proposed the model underlying BP neural networks, i.e., the reverse mode of automatic differentiation. However, it did not create an academic wave at the time, and the field stagnated for a decade; the backpropagation algorithm was later described in detail by Werbos [5]. A few years later, algorithms combining it with training emerged, and many scholars involved in neural network research at the time proposed the idea of combining MLPs with BP training [6, 7].

### Neural Networks

Neural networks are an important machine learning technique whose overall structure is loosely modeled on the nerves of the human brain and which are designed to imitate some of its functions. The most famous convolutional neural network in deep learning was proposed by LeCun et al. [8].
It was the first true multilayer structured learning algorithm, using spatial locality to reduce the number of parameters and thereby improve training performance. On top of the original multilayer neural network, a feature learning part was added, which mimics the hierarchical signal processing of the human brain.
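To illustrate the pipeline described in the abstract, the following is a minimal sketch (not the authors' code) of the two stages: converting an audio clip to an STFT spectrogram and classifying it with a small CNN. The library choices (librosa, PyTorch) and all hyperparameters are assumptions for illustration only.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def audio_to_spectrogram(path, sr=22050, n_fft=1024, hop=256):
    """Load an audio clip and return a log-magnitude STFT spectrogram."""
    y, _ = librosa.load(path, sr=sr, duration=3.0)
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))
    return librosa.amplitude_to_db(S, ref=np.max)  # (freq_bins, time_frames)

class SoundCNN(nn.Module):
    """Small CNN classifying spectrograms into crying/screaming/laughing/background."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage sketch:
# spec = audio_to_spectrogram("cry.wav")                 # hypothetical file name
# x = torch.from_numpy(spec).float()[None, None]         # (batch, channel, freq, time)
# logits = SoundCNN()(x)
```

The adaptive pooling layer makes the classifier independent of the exact spectrogram size, which is convenient when clip lengths vary; a lightweight model of this kind is also what an edge GPU deployment would favor.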
2301.06595
PtyLab.m/py/jl: a cross-platform, open-source inverse modeling toolbox for conventional and Fourier ptychography
Conventional (CP) and Fourier (FP) ptychography have emerged as versatile quantitative phase imaging techniques. While the main application cases for each technique are different, namely lens-less short wavelength imaging for CP and lens-based visible light imaging for FP, both methods share a common algorithmic ground. CP and FP have in part independently evolved to include experimentally robust forward models and inversion techniques. This separation has resulted in a plethora of algorithmic extensions, some of which have not crossed the boundary from one modality to the other. Here, we present an open source, cross-platform software, called PtyLab, enabling both CP and FP data analysis in a unified framework. With this framework, we aim to facilitate and accelerate cross-pollination between the two techniques. Moreover, the availability in Matlab, Python, and Julia will set a low barrier to enter each field.
Lars Loetgering, Mengqi Du, Dirk Boonzajer Flaes, Tomas Aidukas, Felix Wechsler, Daniel S. Penagos Molina, Max Rose, Antonios Pelekanidis, Wilhelm Eschen, Jürgen Hess, Thomas Wilhein, Rainer Heintzmann, Jan Rothhardt, Stefan Witte
2023-01-16T20:26:28Z
http://arxiv.org/abs/2301.06595v1
PtyLab.m/py/jl: a cross-platform, open-source inverse modeling toolbox for conventional and Fourier ptychography

###### Abstract

Conventional (CP) and Fourier (FP) ptychography have emerged as versatile quantitative phase imaging techniques. While the main application cases for each technique are different, namely lens-less short wavelength imaging for CP and lens-based visible light imaging for FP, both methods share a common algorithmic ground. CP and FP have in part independently evolved to include experimentally robust forward models and inversion techniques. This separation has resulted in a plethora of algorithmic extensions, some of which have not crossed the boundary from one modality to the other. Here, we present an open source, cross-platform software, called _PtyLab_, enabling both CP and FP data analysis in a unified framework. With this framework, we aim to facilitate and accelerate cross-pollination between the two techniques. Moreover, the availability in Matlab, Python, and Julia will set a low barrier to enter each field.

## 1 Introduction

Ptychography [1, 2] has grown into a mature technique for x-ray, extreme ultraviolet (EUV), and electron microscopy. It has revolutionized synchrotron-based x-ray microscopy, where it improves upon previously existing scanning transmission x-ray microscopy (STXM) data analysis techniques [3, 4, 5, 6]. Three major benefits of ptychography over STXM are: (1) decoupling of the illumination spot size from the achievable lateral resolution, (2) quantitative amplitude and phase contrast, and (3) access to wavefront diagnostics [7, 8, 9, 10, 11]. Similar benefits have subsequently been demonstrated for scanning transmission electron microscopes (STEMs) [12, 13, 14], where it recently produced micrographs at record-breaking resolution [15, 16]. A parallel line of development is EUV laboratory-scale microscopy, where ptychography is a promising candidate for actinic metrology in lithography applications [17, 18, 19] and a tool for chemically-resolved microscopy [20, 21]. In ptychography, a specimen is laterally scanned through a localized illumination, referred to as the probe. A detector downstream of the specimen records a sequence of diffraction patterns. These observations lack phase information, preventing direct inversion. Ptychography solves this problem by recording data from laterally overlapping specimen regions of interest. This acquisition scheme opens up the possibility of phase retrieval and simultaneous deconvolution of illumination and specimen information. Beyond operation with x-ray and electron radiation, ptychography has been implemented with extreme ultraviolet, visible, near-infrared, and terahertz radiation [22, 23, 24, 25, 19]. Fourier ptychography [26] follows a similar operational principle as (conventional) ptychography, denoted FP and CP, respectively, throughout this paper. In FP, a specimen is illuminated from different directions, typically steered by means of an LED array, which serves as a controllable condenser. A sequence of low-resolution bright and dark field images is recorded in a lens-based microscope. Changing the illumination direction amounts to shifting the object spectrum with respect to the pupil of the optical system. If the illumination direction is changed in such a way that two recorded images share information in the Fourier domain, phase retrieval techniques can be applied to separately reconstruct the object spectrum and the pupil of the optical system.
Thus FP has three attractive features: (1) The low-resolution data can be stitched together into a large synthetic numerical aperture (NA), resulting in both a large field of view and high resolution; in contrast to most wide-field systems, FP thus does not trade off resolution against field of view. (2) After conversion to real space, the recovered object spectrum gives quantitative amplitude and phase maps of the sample. (3) The reconstructed pupil function enables aberration diagnostics of the optical system at hand [27, 28, 29]. While FP has mostly found applications in the visible domain, recent implementations using infrared radiation and x-rays have been reported [30, 31].

### Contribution

In both CP and FP, the recorded data jointly sample real and reciprocal space [32, 33, 34, 35, 36]. In CP, the probe serves as a real-space window that selects local spatial frequency content. In FP, the pupil selects a low-resolution real-space image from a localized Fourier-space bandpass. In fact, the forward models of CP and FP are mathematically equivalent, and the measured data cubes may be regarded as rotations of one another in phase space [35]. Although this equivalence is well known, CP and FP have evolved into two separate communities with different algorithmic approaches and self-calibration techniques. Here, we report on a numerical data analysis toolbox, named _PtyLab_ (code available online [37]), which places the equivalence of CP and FP at the center of its logical structure, resulting in three main contributions of this work:

1. Cross-modal: _PtyLab_ allows one not only to analyze CP and FP data, but also to convert the same data set between the two domains. This flexible conversion between CP and FP leads to both physical insights and algorithmic cross-pollination of the two domains. To our knowledge, _PtyLab_ is the first ptychography code designed to be cross-modal, unifying the data analysis frameworks of CP and FP.
2. Multi-lingual: _PtyLab_ is the first cross-platform ptychography code available in three programming languages, namely Matlab, Python, and Julia. Thus, it enables researchers with different programming backgrounds to communicate and exchange ideas based on a unified terminology and code structure.
3. Open access: _PtyLab_ is released together with various experimental data sets and accompanying hands-on tutorials, where the user is trained in practical data analysis. We hope that this contributes to standardized data analysis in both CP and FP.

In addition, _PtyLab_ features a variety of algorithmic extensions as compared to currently available ptychography software packages. Some of these were previously reported by us and are now provided open source. This includes axial (zPIE) [38] as well as angle (aPIE) [39] correction engines, code to analyze ptychographic optical coherence tomography (POCT) data, efficient wave propagation algorithms valid for both high NA and polychromatic radiation, and detector subsampling (sPIE) [38, 39, 24, 40, 41]. Other novelties are reported here for the first time, such as external reference wave ptychography. In addition, previously reported algorithms developed by other groups are included, such as the extended ptychographic iterative engine (ePIE) [42], multislice (e3PIE) [43], mixed states [44], information multiplexing (PIM) [45], momentum acceleration (mPIE) [46], Tikhonov and total variation regularization [47, 48, 20], correlation-based lateral position correction [49], and orthogonal probe relaxation (OPR) [50].
In writing this manuscript, we pursued the goal of providing a concise overview of the various engines available to date in ptychography.

### Related work

Most CP packages reported to date have focused on high-performance computing, which is key for day-to-day user operation at large-scale facilities, where beamtime is scarce and experimental feedback is needed quickly [51, 52, 53, 54, 55, 56, 57, 58]. Another line of research has investigated the capabilities opened up by modern automatic differentiation (AD) and machine learning (ML) toolboxes [59, 60]. AD frameworks offer flexibility, as they simply require the user to specify a desired forward model, typically in the form of a suitable loss function and possible regularizers. This approach is convenient for the user as it dispenses with the need to derive challenging gradient expressions, the latter of which oftentimes involve complex-valued (Wirtinger) derivatives [61, 62]. It is thus, for instance, straightforward to switch from one regularizer to another without analytically deriving and programming the underlying gradient expressions into the software. ML approaches, in particular those based on neural networks, have been used to significantly speed up the reconstruction process, lower the sampling requirements on the raw data, and embed denoising priors [63, 64]. However, neural network approaches need to be trained on data sets that have already been solved for the corresponding real-space images. In reference [63] the neural network was trained based on the solution of an iterative solver. Moreover, training a neural network capable of solving ptychography problems is a memory-consuming, large-scale computational challenge that cannot be performed on small hardware architectures. We thus believe the need for memory-efficient but possibly slower iterative algorithms remains, despite the exciting possibilities opened up by neural networks [65]. From the above-referenced work, we briefly describe some of the features of two prominent code projects, namely _PtychoShelves_ [56] and _PtyPy_ [53], to illustrate some of the different design choices made in _PtyLab_. _PtychoShelves_ [56] is a Matlab-based software package for ptychography, designed with large-scale synchrotron facilities in mind. _Shelves_ refers to the modular coding framework resembling bookshelves, from which desired _books_ (e.g., detector module, reconstruction engine, etc.) can be taken out and inserted into the processing pipeline. To provide data handling across synchrotron facilities worldwide, _PtychoShelves_ supports commonly used X-ray detectors as well as instrument control software. Reconstructions are highly optimized for Matlab-based GPU acceleration as well as CPU processing through reconstruction engines written in binary C++ code and parallelized through the OpenMP multiprocessing interface. The C++ code supports difference map [8] and maximum likelihood [47] engines, together with other features such as mixed-state [44] and multi-slice ptychography [43]. A wider range of reconstruction features is available through Matlab-based GPU engines, including an iterative least-squares solver for maximum-likelihood ptychography [54], orthogonal probe relaxation [50], near-field ptychography [66], and position correction [67]. _PtyPy_ [53] is an open-source ptychography framework written in Python. It follows the Python coding style and is therefore modularized and object-oriented.
The physical models are abstracted, which results in readable and concise code interfaces at the user level. The key element is the so-called POD class (probe, object, diffraction), which holds the access rule for a single position, object mode, and probe mode. For parallelization of the reconstructions, _PtyPy_ uses a message passing interface (MPI), which allows for flexible usage on standard office PCs, workstations, and clusters. MPI favors non-sequential reconstruction algorithms that can be parallelized (e.g., difference map [8] and maximum likelihood [47]). So far, a broad range of forward models is implemented (e.g., mixed-state ptychography [44], near-field ptychography [66], and lateral position correction [67]). The _PtyPy_ framework is actively developed, and novel features (e.g., GPU acceleration) are constantly added. Both _PtychoShelves_ and _PtyPy_ are powerful ptychography data analysis platforms, but their design for high-performance computing poses an entry barrier for simple, one-off reconstructions in an academic-lab setting. In such cases, rapid code prototyping and ease of use can be more desirable than highly optimized data handling and reconstructions. Unlike CP, FP has not seen the same widespread use within research institutions that would have resulted in well-developed and maintained coding platforms. The existing open-source FP codes [68, 69, 70, 71, 72, 73] exist mainly to supplement publications, providing only minimal working examples with limited functionality. Recently an attempt has been made to provide a Matlab-based FP reconstruction platform [74], which among other features provides raw data denoising, GPU processing, and LED misalignment correction. Our goal here is to bridge the gap between CP and FP, thereby allowing for a cross-pollination of the two domains and a unified algorithmic framework. As compared to the software packages highlighted above, _PtyLab_ is less focused on high-performance and distributed computing, but puts emphasis on providing a ptychography ecosystem for researchers interested in rapid prototyping and exchanging algorithms - across modalities and programming languages.

### Outline

In Section 2 we revisit the idea of reciprocity, which formalizes the equivalence between CP and FP - the central idea for the unified software design in _PtyLab_. Section 3 details language-specific implementation choices in Matlab, Python, and Julia. Section 4 serves as a comprehensive overview of the available forward models and the corresponding optimization algorithms. Practical features for scan grid optimization are described in section 6. _PtyLab_ is released with various data sets and hands-on tutorials, which are described in section 7.

## 2 Implications of reciprocity for ptychography

One may think of the data sets recorded in ptychography in analogy to a musical score, where frequency information is prescribed at particular signatures in time. Once such a time-frequency, or phase-space, representation is given in the form of a musical score, we can convert this information into either the time or the frequency domain. For example, we can digitally record a concert and Fourier transform the resulting signal. These processing steps would involve the temporal waveform and its frequency spectrum, respectively. Likewise, ptychography jointly samples real- and reciprocal-space representations of a signal, where for simplicity we ignore the additional complication of phase retrieval for the moment.
The goal of ptychography is to convert partial phase-space information of a signal into a pure space or a pure spatial frequency representation. Physically, the phase-space description of ptychography [32] is intimately connected to the principle of reciprocity [76], which states that interchanging the illumination and detection directions in an optical system yields identical data sets. We would like to distinguish two types of reciprocity in ptychography. Type-I reciprocity refers to the ability to design separate CP and FP optical systems, both of which produce 4D data cubes that are essentially related by a phase-space rotation [35]. In addition, we define type-II reciprocity, which refers to the ability to algorithmically convert a 4D data cube from one domain to the other. Thus type-I reciprocity is essentially a statement about the ability to design different CP and FP hardware embodiments producing the same 4D experimental data cube. Type-II reciprocity is a matter of data processing: once a 4D data cube is measured in either a CP or an FP representation, it can be converted into the other respective domain and subsequently reconstructed.

Figure 1: Illustration of the operation principle and equivalence of conventional and Fourier ptychography. (a) An object is laterally translated against a localized illumination profile. (b) In CP, the recorded data cube is a sequence of diffraction patterns, providing spatial frequency (\(\mathbf{u}\)) information for each scan position (\(\mathbf{s}\)). Each detector pixel alone contains a sequence of real-space information that may be reshaped into a low-resolution real-space image of the sample. (c) In FP, the recorded data cube is a sequence of low-resolution bright and dark field image plane (\(\mathbf{s}\)) data corresponding to bandpass-filtered versions of the object spectrum. The shifts with respect to the pupil are controlled by shifting the illumination direction (\(\mathbf{u}\)). (d) Single-lens FP experimental configuration. Data in panels (b) and (c) from [75].

Figure 1 illustrates this idea of reciprocity in the context of ptychography. In CP (Fig. 1a), an object is laterally translated against a typically focused beam. This probe localizes the origin of the measured signal in real space (scan coordinates \(\mathbf{s}\)). A sequence of diffraction patterns is captured on a pixelated detector, which is assumed here to be located in the far field (spatial frequency \(\mathbf{u}\)). Hence the data cube in CP consists of a sequence of angular scattering maps, each corresponding to a particular real-space specimen region localized to the spatial extent of the incident probe (Fig. 1b). In FP, an object is held at a fixed location under variation of the illumination angle (Fig. 1d). Thus the angular spectrum emanating from the sample is shifted over the finite-sized lens aperture. The pupil serves to bandpass filter the object's spatial frequency spectrum, resulting in a data cube consisting of dark and bright field images (Fig. 1c). Thus the data cube in FP consists of a sequence of real-space images, each corresponding to a particular reciprocal-space portion of the object spectrum localized to the passband admitted by the pupil of the imaging system. In summary, both flavors of ptychography sample real and reciprocal space. We illustrate the aforementioned types of reciprocity in two ways: First, consider replacing the detector in the FP setup (Fig. 1d) with a pixelated light source.
Turning on a single point source at a time and placing the detector in the far field of the sample, we can record a CP data set by scanning the point source location, as pointed out in a recent review article [2]. Of course, this hardware conversion ability faces practical limits imposed by the lens, which may cause a space-variant probe, but this complication is ignored here. Thus via hardware modification we can convert a CP experimental setup into an FP system, which is a type-I reciprocity. Type-II reciprocity concerns the captured data itself and does not require any hardware modifications. It is possible to convert the measured data cube from one modality to the other. Suppose in Fig. 1a we scan the sample (for conceptual simplicity) on a regular raster grid and record a sequence of diffraction patterns. Then the data recorded at each pixel of the detector can be regarded as a traditional (single-pixel) scanning microscope data set. The data on each individual pixel may directly be reshaped into a two-dimensional real-space image, simply by permuting its dimensions in correspondence with the scan trajectory. Practically, aperiodic translation trajectories and high-NA effects require interpolation techniques. However, at low NA and using raster scan grids, the data reshaping operation can be implemented with a single line of code (namely, a permute operation; see the sketch below), converting for instance a CP data set into a sequence of low-resolution bright and dark field images, the latter of which constitutes the raw data for FP. While we described type-II reciprocity phenomenologically in this section, a mathematical proof of this conversion ability is provided in the appendix. The mathematical details also elucidate the correspondence between reconstructed quantities in CP and FP. We provide online tutorials that illustrate the conversion between CP and FP [37]. The ability to convert CP and FP data has a bearing on the computational complexity of inversion algorithms underlying ptychography. Suppose we are given a CP data cube consisting of diffraction patterns with \(U^{2}\) pixels at \(S^{2}\) scan points. A single iteration of a parallelized ptychography solver (for example difference map [8]) requires us to numerically propagate exit waves from all scan positions to the detector plane and back. The Fourier transform operations involved will have a computational complexity of \(\mathcal{O}\left[S^{2}\cdot U^{2}\cdot\log\left(U\right)\right]\) if we work in the CP domain, while it scales with \(\mathcal{O}\left[U^{2}\cdot S^{2}\cdot\log\left(S\right)\right]\) if we convert the data into the FP domain. The difference in the log terms can result in a practical speed-up, provided that the number of detector pixels per dimension \(U\) and the number of scan positions per dimension \(S\) differ substantially. In summary, utilizing type-II reciprocity is the central motivation for the design of _PtyLab_: CP and FP data can be converted into each other. A unified data analysis framework thus allows one to migrate between the two modalities. A benefit of this data conversion ability is the applicability of diverse inversion algorithms and self-calibration routines in the domain in which they are most conveniently applied. Another benefit of reciprocity is the trade-off in computational complexity.
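To make the type-II conversion concrete, here is a minimal NumPy sketch, valid under the idealized assumptions stated above (raster scan, low NA, no interpolation), that reshapes a CP data cube of diffraction patterns into an FP-style stack of low-resolution images; the array names and sizes are illustrative.

```python
import numpy as np

# Idealized CP data cube: Sy*Sx raster scan positions, U x U diffraction patterns.
Sy, Sx, U = 32, 32, 64
cp_cube = np.random.rand(Sy * Sx, U, U)          # (positions, u_y, u_x)

# Type-II reciprocity: each detector pixel, read across all scan positions,
# forms one low-resolution real-space image -> an FP-style data cube.
fp_cube = cp_cube.reshape(Sy, Sx, U, U)          # (s_y, s_x, u_y, u_x)
fp_cube = fp_cube.transpose(2, 3, 0, 1)          # (u_y, u_x, s_y, s_x)
fp_images = fp_cube.reshape(U * U, Sy, Sx)       # one image per detector pixel

print(fp_images.shape)  # (4096, 32, 32): U*U bright/dark-field images
```

The single transpose (permute) is the whole conversion; the surrounding reshapes merely bookkeep the raster-scan ordering of the positions.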
## 3 Code Structure

In this section we describe the structural workflow in _PtyLab_. Our overall goal is to provide a code that enables the flow of algorithmic ideas and rapid prototyping beyond the boundaries of modality (CP/FP) and programming language (Matlab/Python/Julia). Thus collaborators with different programming language preferences or from different communities (e.g., synchrotron-based CP versus visible-light FP) can easily exchange code without being perfectly literate in the other programming language. This approach comes with the benefit of a unified structure and naming convention, but at times at the expense of certain language-specific programming conventions. The following subsection describes the common structure independent of programming language. Subsequently, we address differences in the Matlab, Python, and Julia implementations.

### Platform- and modality-independent workflow

A high-level overview of _PtyLab_'s workflow is illustrated in Fig. 2. Assuming CP or FP data are stored in a local folder on the user's computer, the first step is preprocessing the data (see Fig. 2). Preprocessing converts the raw data into a _PtyLab_ class object. The user specifies physical input (illumination wavelength), hardware properties (detector pixel pitch, binning), and geometric parameters (sample-detector distance [CP], lens magnification [FP], sample scan trajectory [CP], illumination angles [FP]). In addition, the user specifies a forward model that describes the propagation from end to end (CP: probe to detector; FP: pupil to detector). The preprocessing pipeline then writes one or multiple _PtyLab_ class objects into an hdf5 file (see Fig. 3) [77]; a minimal sketch of this step is shown below. Second, the reconstruction script loads the preprocessed data. An initialization function generates uniform or randomized starting estimates for the probe (CP) or pupil (FP) and the object (CP) or object spectrum (FP). In each of the probe/pupil, object/object spectrum, and detector planes, meshgrids are calculated. These meshgrids depend on the specified physical, hardware, and geometrical parameters, as well as on a forward model (propagator) that describes the mapping between the probe (CP) or pupil (FP) and detector planes. A variety of propagation models can be specified, including angular spectrum (AS), scaled angular spectrum (SAS), Fresnel (Fresnel), Fraunhofer (Fraunhofer), and tilted Fresnel diffraction. The latter is relevant for non-coplanar reflection geometries and is typically performed only once on the raw diffraction data stack, provided no angle correction is performed (see subsection 5.7 for further details). Fraunhofer and Fresnel diffraction differ by the incorporation of a quadratic phase factor in the propagation model. This quadratic phase may be absorbed into the probe/pupil function and can be compensated for post-reconstruction when a quantitative analysis of the reconstructed wavefront (CP) or pupil (FP) is of interest to the user.

Figure 2: Workflow in _PtyLab_. Experimental data is converted into a preprocessed hdf5 data set. The remaining parameters controlling algorithmic and monitoring behavior and required for reconstruction are set by an initialization routine. Various reconstruction engines and forward models can be chosen to analyze the preprocessed data. After the reconstruction is finished the reconstructed data is written into a final hdf5 file.
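As an illustration of the preprocessing step, the following is a minimal, hypothetical sketch that writes a CP data set into an hdf5 file following the conventions of Fig. 3. Only ptychogram, wavelength, and entrancePupilDiameter are field names stated in the text; the remaining names (encoder, zo, dxd) and all values are assumptions, not the definitive _PtyLab_ schema.

```python
import h5py
import numpy as np

# Toy CP data: 100 diffraction patterns of 256 x 256 pixels plus the scan trajectory.
ptychogram = np.random.rand(100, 256, 256).astype(np.float32)
encoder = np.random.rand(100, 2) * 1e-4              # scan positions in meters (assumed name)

with h5py.File("preprocessed_cp.hdf5", "w") as f:
    f.create_dataset("ptychogram", data=ptychogram)  # raw diffraction data (stated field)
    f["wavelength"] = 632.8e-9                       # illumination wavelength, SI units
    f["entrancePupilDiameter"] = 200e-6              # initial probe-size estimate (stated field)
    f["zo"] = 50e-3                                  # sample-detector distance (assumed name)
    f["dxd"] = 13.5e-6                               # detector pixel pitch (assumed name)
    f.create_dataset("encoder", data=encoder)
```

A reconstruction script would then load this file, build the meshgrids from the geometric parameters, and select a propagator as described above.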
### Matlab structure

The Matlab code structure is shown in Fig. 4a. Here an object of class _PtyLab_ is generated. Its first-order properties (obj."firstOrder") contain the physics as well as the geometry of the experiment. In addition, there are second-order properties (obj.params."secondOrder"), which comprise algorithmic parameters (obj.params), monitoring control (obj.monitor), propagators as part of the forward model (obj.propagator), and file export parameters (obj.export). Certain notational conventions are noteworthy: the diffraction or image data is contained in obj.ptychogram, a term borrowed from time-frequency analysis, where the raw data is often referred to as a _spectrogram_ [79]. The dimensions of obj.ptychogram are \((\texttt{y},\texttt{x},\texttt{numFrames})\), which is different from the Python convention (see subsection 3.3). The order of the first two dimensions stems from Matlab's most convenient use when adhering to the row-column convention. Similarly, obj.positions follows the row-column convention.

### Python structure

The Python structure is similar in idea to the Matlab structure, but is designed with a stronger emphasis on modularity. As shown in Fig. 4(b), the Python implementation contains five classes: ExperimentalData, Reconstruction, Monitor, Params, and Engines. Most but not all of these classes reflect second-order properties in the Matlab structure. The ExperimentalData class imports data from a preprocessed .hdf5 file, checks if all required parameters for a ptychographic reconstruction are included, and saves them into an instance that is immutable. The Reconstruction class takes the ExperimentalData instance as input and creates a mutable instance containing attributes that are optimized during a reconstruction process, e.g. the probe/pupil and the object, as well as attributes that are related to the optimizable parameters, e.g. the error, the coordinates, and the meshgrids. Note that in the Python implementation, the probe/pupil and the object are set as 6D arrays with the fixed axes [nlambda, nosm, npsm, nslice, row, col], which are the number of wavelengths, object state mixtures, probe state mixtures, slices (for multislice ptychography), and rows as well as columns.

Figure 3: _PtyLab_ HDF file structure of preprocessed (a) and reconstructed (b) data. The orange boxes show the mandatory fields, with specific differences between CP (yellow) and FP (green) data. The grey box in panel (a) shows optional fields that are nevertheless recommended. Both CP and FP reconstructions struggle to converge when background is not appropriately subtracted or accounted for in the forward model, subtraction being the easier route. Initial estimates for the probe diameter in CP and the pupil diameter in FP are recommended to be specified (both referred to as entrancePupilDiameter). This can aid initial convergence in CP. Moreover, the circle fitting routine for position calibration in FP [78], which is used in _PtyLab_, requires an estimate of the pupil diameter. Arrays indicated with (*) are specified in SI units.

Figure 4: The Matlab code structure (left) comprises a single class, which contains all fields relevant for ptychographic data analysis. The Matlab class is organized into first- and second-order properties. First-order properties contain physical information (e.g. wavelength) and geometrical parameters of the experiment. Second-order properties are mainly found in params, which contains algorithmic properties (step sizes, number of iterations, etc.) that are optimized during data analysis. Other second-order properties comprise monitoring behaviour (monitor), specification of the wave propagation model (propagator), and input-output control (export). The Python code (right) consists of five separate classes: ExperimentalData, Reconstruction, Params, Monitor, and Engines. The Julia implementation consists of four main abstract types called ExperimentalData, Reconstruction, Params, and Engines.
The Monitor class is used to display a reconstruction process. Equivalent to its Matlab counterpart, one or two figures are created depending on the verbosity level set by the user. A default figure shows the updated object, probe, and reconstruction error. An optional figure shows a comparison of the measured and estimated diffraction patterns. The update frequency of the plots can also be controlled by the user (Monitor.figureUpdateFrequency). The Params class holds parameters that determine how a reconstruction is performed, for instance whether a reconstruction is carried out on a CPU or a GPU, the propagator type such as Fraunhofer, Fresnel, (scaled) angular spectrum (ASP, scaledASP), etc., and whether the order of position iterations is random or sequential. Switches and parameters of various regularization types are also included in the Params instance, for example controlling how frequently orthogonalization is applied in the context of a mixed-states reconstruction. The Engines class consists of a BaseEngine as a parent class and other child engine classes, for instance ePIE [42], mPIE [46], zPIE [38], aPIE [39], and qNewton [80]. All four instances of ExperimentalData, Reconstruction, Params, and Monitor are taken as inputs for a chosen engine, then get modified/updated by the engine, and can be passed to a different engine easily. Each engine stores its own attributes, such as the number of iterations (numIteration) and the update step sizes for the probe/pupil (betaProbe) and object (betaObject).

### Julia structure

_PtyLab.jl_ is the most recent translation of _PtyLab_ to Julia. Due to differences between Julia and Matlab/Python, small differences exist in the implementation, but most of the common principles still hold. It offers fewer features than the other two packages, since its initial focus was on performance. The basis is formed by four main types: ExperimentalData, Reconstruction, Params, and Engines. Engines is an abstract type which is subtyped (indicated by <:) by specific algorithms (such as ePIE <: Engines) as composite types. Via this mechanism, generic functions can be used in common by all Engines solvers. At the same time, Julia's multiple dispatch allows different functionality to be specified depending on whether it belongs to an ePIE or a zPIE solver. Params is a composite type storing second-order properties. Further, ExperimentalDataCPM <: ExperimentalData exists, and similarly ReconstructionCPM <: Reconstruction, to store the experimental data and the reconstruction data. During the iteration of the algorithms, the fields of ReconstructionCPM are allowed to change. Julia's language features allow for a functional style of programming, implying that memory buffers are not explicitly exposed to the user but instead are implicitly stored via closures. A schematic sketch of the engine pattern shared by the implementations is given below.
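The following is a schematic Python sketch (not the shipped _PtyLab_ source) of the engine pattern just described: a BaseEngine parent holding the four shared instances, with child engines that own their algorithmic attributes. Only the class and attribute names stated above (ExperimentalData, Reconstruction, Params, Monitor, numIteration, betaProbe, betaObject) are taken from the text; everything else is an illustrative assumption.

```python
class BaseEngine:
    """Parent engine: holds the four shared instances described in the text."""
    def __init__(self, experimental_data, reconstruction, params, monitor):
        self.experimentalData = experimental_data
        self.reconstruction = reconstruction
        self.params = params
        self.monitor = monitor

class ePIE(BaseEngine):
    """Child engine owning its own algorithmic attributes."""
    def __init__(self, *args, numIteration=50, betaProbe=0.25, betaObject=0.25):
        super().__init__(*args)
        self.numIteration = numIteration
        self.betaProbe = betaProbe
        self.betaObject = betaObject

    def reconstruct(self):
        for _ in range(self.numIteration):
            pass  # update self.reconstruction in place (object/probe updates)

# The mutable Reconstruction instance can be handed from one engine to another:
# engine = ePIE(data, recon, params, monitor); engine.reconstruct()
# followed by, e.g., an mPIE engine continuing from the same reconstruction state.
```

This mirrors the Julia design as well, where subtyping (ePIE <: Engines) plays the role of inheritance and multiple dispatch selects engine-specific behavior.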
## 4 Inverse modeling

The inverse modeling workflow in _PtyLab_ consists of several modular processing steps, which are shown in Fig. 5. All optimization algorithms in _PtyLab_ iterate between the object (CP) / object spectrum (FP) plane (orange) and the detector plane (green), where the two planes are linked via a suitable propagation model (yellow). We subdivide this section into several parts, describing the individual modules that the user can stack up to build customized data analysis pipelines.

### Forward model

In CP and FP the goal is to retrieve a wide-field, high-resolution reconstruction of a sample of interest. In addition, the probe (CP) or pupil (FP) of the imaging system is recovered. A forward model links the estimated detector intensity \(I\) to the object \(O\) and probe \(P\) (CP) or the object spectrum \(\tilde{O}\) and pupil \(\tilde{P}\) (FP),

\[\begin{split}& I_{j}\left(\mathbf{q}\right)=\left|\mathcal{D}_{\mathbf{r}\rightarrow\mathbf{q}}\left[P\left(\mathbf{r}\right)\cdot O\left(\mathbf{r}-\mathbf{r}_{j}\right)\right]\right|^{2}\text{ (CP)}\\ & I_{j}\left(\mathbf{r}\right)=\left|\mathcal{D}_{\mathbf{q}\rightarrow\mathbf{r}}\left[\tilde{P}\left(\mathbf{q}\right)\cdot\tilde{O}\left(\mathbf{q}-\mathbf{q}_{j}\right)\right]\right|^{2}\text{ (FP).}\end{split} \tag{1}\]

Here \(\mathcal{D}\) describes wave propagation between the sample (CP)/pupil (FP) and the detector plane, \(\mathbf{r}\) refers to spatial coordinates, and \(\mathbf{q}\) refers to reciprocal-space coordinates. The index \(j=1,...,J\) denotes the scan position. For simplicity, we drop the coordinate dependence and use the CP notation throughout; the conversion of all results discussed below to FP is straightforward. The symbol \(O\) refers to a particular object box of equal size as the probe FOV (compare the red region in Fig. 2). The entire object field of view is denoted by \(O_{\text{FOV}}\) (compare the blue region in Fig. 2). In the presence of noise in the observed signal, for instance caused by photoelectric conversion and read-out, we cannot expect to find a combination of sample and probe/pupil that exactly matches the recorded data. In what follows we therefore assume the measured data \(m\) to arise as a probabilistic response to the true intensity \(I\) incident on the detector and discuss several maximum likelihood estimation (MLE) models [81, 82, 47, 80, 54, 83]. These MLE models aim at estimating the most likely combination of object and probe, a viewpoint extended by the addition of maximum a posteriori (MAP) estimation, which originates from a Bayesian viewpoint and enables flexible embedding of regularization into the forward model [47, 84, 83]. Before going into the details of inverse models, we briefly review the continuous and discrete viewpoints on optimization, which are both encountered in the ptychography optimization literature [85, 81, 82, 47, 44, 86, 80, 87, 46, 54, 83].

### Inverse modeling

In this section we review the various forward models implemented in _PtyLab_. A summary of these forward models is given in Fig. 6. We first describe general techniques to tackle the inverse problem underlying ptychography. Subsequently, we detail the individual solvers that allow the user to build and invert modular forward models.

#### 4.2.1 The continuous viewpoint

In the continuous viewpoint, we aim to minimize a cost functional

\[C=\int\mathcal{C}\left(\mathbf{r},f\left(\mathbf{r}\right),\mathbf{g}\right)d\mathbf{r}, \tag{2}\]

Figure 5: Optimization workflow. The schematic illustrates the building blocks of a user-defined reconstruction routine in _PtyLab_. In the object plane, the forward exit wave model as well as the inverse model for the object and probe gradients, subject to several regularization options, are specified. In the detector plane, the underlying noise model and various regularization options lead to an optimal update for the estimated detector wave. After initialization of the object and probe, the reconstruction engine iterates between the object and detector plane until the reconstruction error is low or other stopping criteria are satisfied.
where the functional density \(\mathcal{C}\) is a real-valued, non-negative, and at least once differentiable function. We use the abbreviation \(\mathbf{g}=\nabla f\left(\mathbf{r}\right)\) for notational brevity in the equations to follow. For real-valued functions \(f\), minimizing the cost functional \(C\) is equivalent to solving the Euler-Lagrange equation [88]

\[\frac{\partial\mathcal{C}}{\partial f}-\text{div}_{\mathbf{g}}\left(\frac{\partial\mathcal{C}}{\partial\mathbf{g}}\right)=0, \tag{3}\]

where \(\text{div}_{\mathbf{g}}\) is the divergence with respect to the third (vector-valued) input of \(\mathcal{C}\). For complex-valued \(f\), we may solve two separate Euler-Lagrange equations for the two degrees of freedom of \(f\), for instance its real and imaginary parts. We can save some work if we regard \(f\) and its complex conjugate \(f^{*}\) as the degrees of freedom to solve for [62, 89]. In the particular case that the cost density \(\mathcal{C}\) is symmetric, i.e.

\[\frac{\partial\mathcal{C}\left(\mathbf{r},f,\mathbf{g}\right)}{\partial f^{*}}=\left(\frac{\partial\mathcal{C}\left(\mathbf{r},f,\mathbf{g}\right)}{\partial f}\right)^{*}, \tag{4}\]

it suffices to solve a single Euler-Lagrange equation [61]

\[\frac{\partial\mathcal{C}}{\partial f^{*}}-\text{div}_{\mathbf{g}}\left(\frac{\partial\mathcal{C}}{\partial\mathbf{g}^{*}}\right)=0. \tag{5}\]

If Eq. 5 is not amenable to a direct solution, we can iteratively solve it by seeking the steady-state solution of the diffusion equation [90, 91]

\[\frac{\partial f}{\partial t}=-\alpha\left[\frac{\partial\mathcal{C}}{\partial f^{*}}-\text{div}_{\mathbf{g}}\left(\frac{\partial\mathcal{C}}{\partial\mathbf{g}^{*}}\right)\right], \tag{6}\]

where \(\alpha\) controls the diffusion step size. Approximating the time derivative by finite differences, we may rewrite Eq. 6 as

\[f_{k+1}=f_{k}-\alpha\left[\frac{\partial\mathcal{C}}{\partial f_{k}^{*}}-\text{div}_{\mathbf{g}_{k}}\left(\frac{\partial\mathcal{C}}{\partial\mathbf{g}_{k}^{*}}\right)\right], \tag{7}\]

where \(k\) denotes the iteration. We refer to this update as _functional gradient descent_. Under some circumstances, to be discussed below, the divergence term vanishes. In this case we identify this update with the Wirtinger derivative previously discussed in [47, 92, 93]. However, we will make use of regularizers which require the more general update rule.

#### 4.2.2 The discrete viewpoint

The discrete viewpoint is used when considering inverse problems over sampled functions. In this case, we oftentimes wish to minimize the sum-of-squares cost function

\[\mathcal{C}=\sum_{k}\lambda_{k}\left\|\mathbf{A}_{k}f-\tilde{\psi}_{k}\right\|_{2}^{2}, \tag{8}\]

where \(\left\|\ldots\right\|_{2}\) denotes the L2 norm. Here \(\mathbf{A}_{k}\) is a matrix and \(f\) is a vector, which are compatible in dimensions. The gradient of this problem is given by [94]

\[\frac{\partial\mathcal{C}}{\partial f^{*}}=\sum_{k}\lambda_{k}\mathbf{A}_{k}^{\dagger}\left(\mathbf{A}_{k}f-\tilde{\psi}_{k}\right), \tag{9}\]

where the matrix \(\mathbf{A}_{k}^{\dagger}\) is the conjugate transpose of \(\mathbf{A}_{k}\). We may iteratively solve the original problem in Eq. 8 using gradient descent
\[f_{n+1}=f_{n}-\alpha\sum_{k}\lambda_{k}\mathbf{A}_{k}^{\dagger}\left(\mathbf{A}_{k}f-\tilde{\psi}_{k}\right). \tag{10}\]

A non-iterative solution is formally obtained by setting the gradient in Eq. 9 to zero and solving for \(f\),

\[f=\left(\sum_{k}\lambda_{k}\mathbf{A}_{k}^{\dagger}\mathbf{A}_{k}\right)^{-1}\left(\sum_{k}\lambda_{k}\mathbf{A}_{k}^{\dagger}\tilde{\psi}_{k}\right), \tag{11}\]

which is referred to as the _least squares solution_. We note that the transition between the continuous and discrete viewpoints is seamless, provided that the signals of interest are bandlimited. In this case one may switch between the continuous and discrete viewpoints by adequate sampling and interpolation [95].

### Maximum likelihood (MLE) estimation

We now discuss models for the detector noise commonly used in ptychography. Two particularly prominent models that have been addressed [81, 82, 47, 80, 54] are the Poisson likelihood

\[p\left[m\left|I\right.\right]=\frac{\left(I+b\right)^{m+b}}{\left(m+b\right)!}\exp\left[-\left(I+b\right)\right]\quad\text{(shifted Poisson)} \tag{12}\]

and the Anscombe likelihood

\[p\left[m\left|I\right.\right]=\exp\left[-\left(\sqrt{m+b}-\sqrt{I+b}\right)^{2}\right]\quad\text{(Anscombe)}, \tag{13}\]

where \(I=\tilde{\psi}^{*}\tilde{\psi}\) is the estimated intensity, \(\tilde{\psi}=\mathcal{D}\left(P\cdot O\right)\) (CP, cf. Eq. 1; similarly for FP), and \(m\) is the measured intensity. In both cases, the offset term \(b\) is typically not made explicit in the literature, although it is needed to prevent division by zero in the maximum likelihood gradients (cf. Eqs. 14 and 15). In the case of the Poisson likelihood, the additional term \(b\) has previously been used to account for detection models that contain mixed Poissonian and Gaussian noise contributions and is also referred to as the shifted Poisson approximation [96, 97, 98]. The Anscombe model transforms Poisson-distributed data, which exhibit exposure-dependent shot noise, into variance-stabilized data with uniform uncertainty across variable exposure, which is the basis for robust denoising [99]. However, while the Anscombe transform has been noted to stabilize variance, it can introduce bias and tends to underestimate the true mean of the signal in the limit of low exposure [100]. We recommend setting the offset to at least \(b=1\) to prevent division by zero in the gradient descent update rules derived below. Computing the Wirtinger derivatives of the negative log-likelihood \(\mathcal{L}=-\sum_{\mathbf{q}}\log\left(p\left[m\left|I\right.\right]\right)\) of Eqs. 12 and 13 results in [82, 47, 80, 54]

\[\frac{\partial\mathcal{L}}{\partial\tilde{\psi}^{*}}=\left(1-\frac{m+b}{I+b}\right)\tilde{\psi}\quad\text{(shifted Poisson gradient)} \tag{14}\]

and

\[\frac{\partial\mathcal{L}}{\partial\tilde{\psi}^{*}}=\left(1-\sqrt{\frac{m+b}{I+b}}\right)\tilde{\psi}\quad\text{(Anscombe gradient)}. \tag{15}\]

It appears to us that the Anscombe forward model is used by the vast majority of the ptychography literature. Although the Poisson distribution works in practice, we have observed that the Anscombe model is more robust in practical data analysis. One has to keep in mind that the Poisson model assumes that the mean of the photoelectric counting distribution equals its variance and is thus only valid for shot-noise-limited data - a somewhat restrictive assumption considering the manifold fluctuations that are present in typical experiments, including partial spatial and temporal coherence effects as well as detector read-out.
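As a numerical illustration (a minimal sketch under the notation above, not _PtyLab_'s implementation), the shifted-Poisson and Anscombe gradients of Eqs. 14 and 15 can be applied to an estimated detector wave as follows; the toy data and step size are assumptions.

```python
import numpy as np

def poisson_gradient(psi, m, b=1.0):
    """Shifted-Poisson gradient of the negative log-likelihood, Eq. 14."""
    I = np.abs(psi) ** 2
    return (1.0 - (m + b) / (I + b)) * psi

def anscombe_gradient(psi, m, b=1.0):
    """Anscombe gradient of the negative log-likelihood, Eq. 15."""
    I = np.abs(psi) ** 2
    return (1.0 - np.sqrt((m + b) / (I + b))) * psi

# Toy example: noisy measurement of a flat wave; one gradient step on psi.
rng = np.random.default_rng(0)
psi = np.full((64, 64), 2.0 + 0.0j)                  # estimated detector wave
m = rng.poisson(5.0, size=(64, 64)).astype(float)    # measured intensity
alpha = 0.5                                          # assumed step size
psi_new = psi - alpha * anscombe_gradient(psi, m)
```

Both gradients vanish when the estimated intensity matches the measurement (\(I=m\)), and the offset \(b\geq 1\) keeps the denominators well defined, as recommended above.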
We note that other models for the statistics of photoelectric counting distributions have been proposed in the literature, although to our knowledge they have not yet been used for ptychography. Noteworthy is the negative binomial distribution

\[p\left[m\left|I\right.\right]=\frac{\left(m+M-1\right)!}{m!\left(M-1\right)!}\cdot\frac{I^{m}\cdot M^{M}}{\left(I+M\right)^{m+M}},\quad\text{(negative binomial)} \tag{16}\]

which was first derived by Mandel [101]. The parameter \(M\) counts the degrees of freedom in the detected light. For an integration time \(T\) much longer than the coherence time \(\tau_{c}\), the degrees of freedom can be estimated as \(M=T/\tau_{c}\) [102]. Notice that this number does not have to be an integer, and one can simply replace the factorials in Eq. 16 by gamma functions. Keeping the factorials, however, it is easy to see that for large \(M\) the negative binomial distribution approximately equals a Poisson distribution. In the other extreme case, \(M=1\), the negative binomial distribution degenerates into a geometric distribution, which is the noise distribution for thermal light measured at time scales approaching the coherence time [101]. Thus by varying \(M\) one can parameterize the degree to which the Poisson model is relaxed. The gradient of the negative log-likelihood of the negative binomial distribution is given by

\[\frac{\partial\mathcal{L}}{\partial\tilde{\psi}^{*}}=\left[\frac{m+M}{I+M}-\frac{m}{I}\right]\tilde{\psi},\quad\text{(negative-binomial gradient)} \tag{17}\]

which has the desired property that it vanishes for \(I=m\), similar to Eqs. 14 and 15. For large \(M\) the first fraction approaches one and we recover the Poisson gradient (compare Eq. 14). It is an interesting possibility, left for future studies, to test the performance of a generalized Anscombe gradient of the form

\[\frac{\partial\mathcal{L}}{\partial\tilde{\psi}^{*}}=\left[\sqrt{\frac{m+M+b}{I+M+b}}-\sqrt{\frac{m+b}{I+b}}\right]\tilde{\psi},\quad\text{(generalized Anscombe gradient)} \tag{18}\]

which results from taking the square root of the fractions in the negative-binomial gradient. In the limit of large \(M\) the latter gradient approaches the Anscombe gradient.

### Maximum a posteriori (MAP) estimation

The MLE approach in the previous subsection can be extended by MAP estimation, which introduces prior knowledge into the reconstruction process. In MAP the detector intensity is regarded as a random variable with underlying probability density \(p\left[I\right]\). Since the detector intensity \(I=\tilde{\psi}^{*}\tilde{\psi}\) is a function of the real-space object and probe, namely \(\psi\left(\mathbf{r}\right)=P\left(\mathbf{r}\right)\cdot O\left(\mathbf{r}-\mathbf{r}_{j}\right)\) and \(\tilde{\psi}\left(\mathbf{q}\right)=\mathcal{F}_{\mathbf{r}\rightarrow\mathbf{q}}\left[\psi\left(\mathbf{r}\right)\right]\), MAP opens up a convenient way to formulate and impose constraints in the inverse problem underlying ptychography. The optimization problem is then

\[\text{argmax}_{P,O}\prod_{\mathbf{q}}p\left[m\left(\mathbf{q}\right)\left|I\left(\mathbf{q}\right)\right.\right]p\left[I\left(\mathbf{q}\right)\right], \tag{19}\]

which is equivalent to

\[\text{argmin}_{P,O}\sum_{\mathbf{q}}-\log\left(p\left[m\left(\mathbf{q}\right)\left|I\left(\mathbf{q}\right)\right.\right]\right)-\log\left(p\left[I\left(\mathbf{q}\right)\right]\right). \tag{20}\]
#### 4.4.1 Proximal detector updates

The detector update can be restricted to small step sizes by introducing the proximal prior

\[p\left[I\right]=\exp\left[-\lambda\left(\sqrt{\left|\tilde{\psi}_{n+1}\right|^{2}}-\sqrt{\left|\tilde{\psi}_{n}\right|^{2}}\right)^{2}\right],\quad\text{(proximal prior)} \tag{21}\]

where \(\lambda\) controls how strongly changes in the magnitude of the estimated detector wave are penalized between successive iterations \(n\) and \(n+1\). Inserting this into Eq. 20, together with the shifted Poisson (Eq. 12) and Anscombe (Eq. 13) likelihoods, we get the cost functions

\[\mathcal{C}=\left|\tilde{\psi}_{n+1}\right|^{2}-m\cdot\log\left(\left|\tilde{\psi}_{n+1}\right|^{2}\right)+\lambda\left(\sqrt{\left|\tilde{\psi}_{n+1}\right|^{2}}-\sqrt{\left|\tilde{\psi}_{n}\right|^{2}}\right)^{2}\quad\text{(proximal Poisson)} \tag{22}\]

and

\[\mathcal{C}=\left(\sqrt{\left|\tilde{\psi}_{n+1}\right|^{2}}-\sqrt{m}\right)^{2}+\lambda\left(\sqrt{\left|\tilde{\psi}_{n+1}\right|^{2}}-\sqrt{\left|\tilde{\psi}_{n}\right|^{2}}\right)^{2}.\quad\text{(proximal Anscombe)} \tag{23}\]

These updates are referred to as _proximal_. A large value of the tuning parameter \(\lambda\) forces the updated wave \(\tilde{\psi}_{n+1}\) to remain in the proximity of the previous estimate \(\tilde{\psi}_{n}\). Intuitively, the updates along the gradient directions in Eqs. 14 and 15 enforce the magnitude of the updated wave to equal the measured data, either in intensity or in modulus for the Poisson and Anscombe gradients, respectively. However, due to noise, sequential update schemes can only be as certain as the noise in a single diffraction pattern permits. Proximal gradients incorporate the memory of previous updates and do not naively accept the update suggested by the data. The gradient direction suggested by the data is followed in case the deviation from the current estimate is small. It is conjectured here that this incorporates dose fractionation effects into ptychography, resulting in a superior signal-to-noise ratio in the reconstruction. This is supported by previous reports that observed improved reconstruction quality using proximal gradients for Gerchberg-Saxton-type phase retrieval [103] and ptychography [86, 83]. The cost functions above (Eqs. 22 and 23) result in the update steps

\[\tilde{\psi}_{n+1}=\frac{m+\lambda\left|\tilde{\psi}_{n}\right|^{2}}{\left(1+\lambda\right)\left|\tilde{\psi}_{n}\right|^{2}}\tilde{\psi}_{n}\quad\text{(proximal Poisson update)} \tag{24}\]

and

\[\tilde{\psi}_{n+1}=\frac{\sqrt{m}+\lambda\left|\tilde{\psi}_{n}\right|}{\left(1+\lambda\right)\left|\tilde{\psi}_{n}\right|}\tilde{\psi}_{n}.\quad\text{(proximal Anscombe update)} \tag{25}\]

We note that the proximal Poisson update differs from the result in [103], since here the prior is Gaussian in modulus, while in the related work the prior is Gaussian in intensity. The approach reported here avoids the need to solve a cubic polynomial to compute the proximal Poisson update.
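To illustrate, the proximal Anscombe update of Eq. 25 can be written as a short NumPy function (a sketch under the stated notation; the toy values are assumptions): it blends the measured modulus \(\sqrt{m}\) with the current modulus \(|\tilde{\psi}_{n}|\), keeping the phase of \(\tilde{\psi}_{n}\).

```python
import numpy as np

def proximal_anscombe_update(psi, m, lam):
    """Proximal Anscombe detector update, Eq. 25."""
    mod = np.abs(psi) + 1e-12                    # avoid division by zero
    return (np.sqrt(m) + lam * mod) / ((1 + lam) * mod) * psi

# lam = 0 reproduces the plain modulus projection; large lam freezes the estimate.
psi = np.exp(1j * 0.3) * np.full((4, 4), 2.0)    # current estimate, modulus 2
m = np.full((4, 4), 9.0)                         # measured intensity, modulus 3
print(np.abs(proximal_anscombe_update(psi, m, lam=0.0))[0, 0])   # -> 3.0
print(np.abs(proximal_anscombe_update(psi, m, lam=1e6))[0, 0])   # -> ~2.0
```

The two limiting cases printed above make the memory effect explicit: the update interpolates between trusting the measurement and trusting the running estimate.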
#### 4.4.2 Proximal probe and object updates via (e)PIE

While in the previous section a proximal step was discussed for the detector update, we may impose a similar type of regularization on the probe and object updates, which has been shown to result in the ptychographic iterative engine (ePIE) [44]. This derivation is reviewed here from the discrete viewpoint outlined above. Considering the cost function \[\mathcal{C}=\left\|\mathcal{A}O_{n+1}-\tilde{\psi}\right\|_{2}^{2}+\alpha \left\|\Gamma O_{n+1}-\Gamma O_{n}\right\|_{2}^{2}, \tag{26}\] the first term is the overlap constraint of ptychography while the second penalizes the step size in the search direction for the object \(O\), which here is a vector. The operator matrix \(\mathcal{A}=\mathcal{DP}\) contains both the propagator \(\mathcal{D}\) and a diagonal probe matrix \(\mathcal{P}\), which act on the object. The matrix \(\Gamma\) allows for regularization of the object. The detector wave \(\tilde{\psi}\) is obtained from the detector update step, as described in the previous subsections, where we have omitted an index to focus on the update of the object. Noting that \(\mathcal{A}^{\dagger}\mathcal{A}=\mathcal{P}^{\dagger}\mathcal{D}^{\dagger} \mathcal{DP}=\mathcal{P}^{\dagger}\mathcal{P}\), application of the least-squares solution in Eq. 11 results in \[O_{n+1} =\left(\mathcal{A}^{\dagger}\mathcal{A}+\alpha\Gamma^{\dagger} \Gamma\right)^{-1}\left(\mathcal{A}^{\dagger}\tilde{\psi}+\alpha\Gamma^{ \dagger}\Gamma O_{n}\right)\] \[=\left(\mathcal{A}^{\dagger}\mathcal{A}+\alpha\Gamma^{\dagger} \Gamma\right)^{-1}\left(\mathcal{A}^{\dagger}\tilde{\psi}-\mathcal{A}^{ \dagger}\mathcal{A}O_{n}+\mathcal{A}^{\dagger}\mathcal{A}O_{n}+\alpha\Gamma^ {\dagger}\Gamma O_{n}\right)\] \[=\left(\mathcal{A}^{\dagger}\mathcal{A}+\alpha\Gamma^{\dagger} \Gamma\right)^{-1}\mathcal{A}^{\dagger}\left(\tilde{\psi}-\mathcal{A}O_{n} \right)+O_{n}\] \[=\left(\mathcal{P}^{\dagger}\mathcal{P}+\alpha\Gamma^{\dagger} \Gamma\right)^{-1}\mathcal{P}^{\dagger}\left(\mathcal{D}^{\dagger}\tilde{ \psi}-\mathcal{P}O_{n}\right)+O_{n}. \tag{27}\] The particular choice \[\Gamma_{\text{PIE}}=\text{diag}\left[\left(\frac{\max\left(\left|P\right| \right)}{\alpha\beta\left|P\right|}\left(\left|P\right|^{2}+\epsilon\right)- \frac{\left|P\right|^{2}}{\alpha}\right)^{1/2}\right] \tag{28}\] results in the original version of the ptychographic iterative engine (PIE) [1, 104], namely \[O_{n+1}=O_{n}+\beta\frac{\left|P_{n}\right|}{\max\left(\left|P_{n}\right| \right)}\frac{P_{n}^{*}}{\left|P_{n}\right|^{2}+\epsilon}\left(\psi-P_{n} \cdot O_{n}\right). \tag{29}\] Later the extended ptychographic iterative engine (ePIE) was proposed [42], which uses the regularization matrix [44] \[\Gamma_{\text{ePIE}}=\text{diag}\left[\left(\frac{\max\left(\left|P\right|^{2} \right)}{\alpha\beta}-\frac{\left|P\right|^{2}}{\alpha}\right)^{1/2}\right], \tag{30}\] resulting in the ePIE update [42] \[O_{n+1}=O_{n}+\beta\frac{P_{n}^{*}}{\max\left|P_{n}\right|^{2}}\left(\psi-P_{n }O_{n}\right). \tag{31}\] An intuition of the difference between PIE and ePIE can be obtained in the limit of \(\epsilon\to 0\) and \(\beta=1\), for which we get \[\Gamma_{\text{PIE}}=\text{diag}\left[\left(\frac{\left|P\right|^{2}}{\alpha} \left(\frac{\max\left(\left|P\right|\right)}{\left|P\right|}-1\right)\right)^{1 /2}\right] \tag{32}\] and \[\Gamma_{\text{ePIE}}=\text{diag}\left[\left(\frac{\max\left(\left|P\right|^{2} \right)}{\alpha}-\frac{\left|P\right|^{2}}{\alpha}\right)^{1/2}\right]. \tag{33}\] \(\Gamma_{\text{PIE}}\) is small when \(\left|P\right|\) approaches \(\max\left(\left|P\right|\right)\) or \(0\). Thus PIE allows object updates for locations where the probe amplitude is small or large. In contrast, \(\Gamma_{\text{ePIE}}\) is small only when \(\left|P\right|\) approaches \(\max\left(\left|P\right|\right)\). Thus ePIE allows object updates only for locations where the probe amplitude is large.
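The contrast between the two engines is perhaps easiest to see in code. The following numpy sketch implements the object updates of Eqs. 29 and 31; the function names are our own and the snippet is illustrative, not the _PtyLab_ implementation:

```python
import numpy as np

def pie_object_update(O, P, psi, beta=1.0, eps=1e-6):
    """Original PIE object update (Eq. 29): updates are allowed where the
    probe amplitude is either small or large."""
    absP = np.abs(P)
    return O + beta * (absP / absP.max()) * np.conj(P) / (absP ** 2 + eps) * (psi - P * O)

def epie_object_update(O, P, psi, beta=1.0):
    """ePIE object update (Eq. 31): updates are suppressed wherever the
    probe amplitude is small."""
    return O + beta * np.conj(P) / np.max(np.abs(P) ** 2) * (psi - P * O)
```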
The derivation of the probe update is similar, resulting in a joint optimization of \(P\) and \(O\). The robustness of ePIE is attributed to the penalized step size at low probe intensities. However, in FP the large dynamic range of the object spectrum can cause problems in conjunction with ePIE. The pupil would be updated only in the center of k-space, where the object spectrum exhibits values close to its maximum amplitude. High illumination angles produce dark field images, which have a reduced signal-to-noise ratio as compared to bright field images from lower illumination angles. In CP the illuminating beam is always aligned with the detector, resulting in images with similar signal-to-noise ratios across scan positions. For this reason, we use the PIE-type update by default in _PtyLab_ for FP and the ePIE-type update for CP data analysis. Other choices of regularization can be embedded into the reconstruction routine by changing the weight matrix \(\Gamma\). We note in passing that the PIE-type update rule has been identified as a quasi-Newton algorithm [80].

#### 4.4.3 Tikhonov regularization

A popular idea in solving inverse problems is Tikhonov regularization. The general idea is to add an additional term to the cost function, which penalizes variations in the object, \[\mathcal{C}=\left|P\cdot O-\psi\right|^{2}+\lambda\left|\nabla_{x,y}O\right|^ {2}. \tag{34}\] We emphasize that the regularization term in Eq. 26 in the previous subsection penalizes fluctuations of a fixed object pixel between successive iterations. In this subsection the regularization term in Eq. 34 penalizes fluctuations between neighboring object pixels. Applying functional gradient descent, as described by Eq. 7, to the cost in Eq. 34 gives \[O_{n+1}=O_{n}+\alpha P^{*}\left(\psi-P\cdot O_{n}\right)+\alpha\lambda\Delta O _{n}, \tag{35}\] where \[\Delta O_{n}=\frac{\partial^{2}O_{n}}{\partial x^{2}}+\frac{\partial^{2}O_{n }}{\partial y^{2}}=-\mathcal{F}^{-1}\left(\left[\left(2\pi q_{x}\right)^{2}+ \left(2\pi q_{y}\right)^{2}\right]\mathcal{F}\left(O_{n}\right)\right) \tag{36}\] is the Laplacian. The Laplacian term in Eq. 35 damps high-frequency components of the object. Thus introducing Tikhonov regularization results in a low-pass filter smoothing the object [47, 48]. While the smoothing operation is effective in preventing noise in the object reconstruction, it results in unwanted loss of high-resolution features in the reconstruction. An alternative regularization with more favorable edge preservation properties is discussed in the next subsection.

#### 4.4.4 Total variation regularization

Total variation (TV) regularization penalizes changes in the image while to a certain degree preserving edge features [90]. The corresponding cost function can be approximated by \[\mathcal{C}=\left|P\cdot O-\psi\right|^{2}+\lambda\sqrt{\left|\nabla O\right| ^{2}+\epsilon}. \tag{37}\] Applying functional gradient descent (Eq. 7) gives \[O_{n+1}=O_{n}+\alpha P^{*}\left(\psi-P\cdot O_{n}\right)+\alpha\lambda\,\text{div }\left(\frac{\nabla O_{n}}{\sqrt{\left|\nabla O_{n}\right|^{2}+\epsilon}} \right), \tag{38}\] where div denotes divergence. Equation 38 is applied to the complex-valued object. Alternative implementations [20] have reported application of a TV prior to the real and imaginary parts of the object separately, which is not equivalent to our implementation due to the nonlinearity of the TV regularizer.
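As an illustration, a single TV-regularized object update in the spirit of Eq. 38 might look as follows; the finite-difference scheme (numpy's `np.gradient`) and the function name are our own assumptions and may differ from the actual _PtyLab_ implementation:

```python
import numpy as np

def tv_regularized_step(O, P, psi, alpha, lam, eps=1e-8):
    """One TV-regularized object update in the spirit of Eq. 38."""
    # Data-fidelity gradient step
    O_new = O + alpha * np.conj(P) * (psi - P * O)
    # TV term: div( grad(O) / sqrt(|grad(O)|^2 + eps) ), on the complex object
    gy, gx = np.gradient(O)
    norm = np.sqrt(np.abs(gx) ** 2 + np.abs(gy) ** 2 + eps)
    div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
    return O_new + alpha * lam * div
```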
### Momentum acceleration (mPIE and mqNewton)

The momentum-accelerated ptychographic iterative engine (mPIE) [46] is the standard solver used for CP in _PtyLab_. In mPIE a predefined number of ePIE iterations \(T\) is carried out, after which the search direction is complemented by a momentum term \(\nu\) updating the entire object field of view \(O_{\text{oFOV}}\), \[\nu_{n} =\eta\cdot\nu_{n-T}+O_{n,\text{oFOV}}-O_{n+1-T,\text{oFOV}} \tag{39}\] \[O_{n+1,\text{oFOV}} =O_{n,\text{oFOV}}+\eta\cdot\nu_{n}. \tag{40}\] Here \(\eta\) is a damping term that is set to 0.7 by default [46]. Similar to conjugate gradient solvers [47, 105, 54], the momentum term accelerates the search direction and prevents zigzag motion towards the optimum. We emphasize that \(O_{n+1,\text{oFOV}}\) in this subsection denotes the entire object field of view, while in the other subsections \(O\) is an object box of the same size as the probe window. While addition of momentum is typically done for another regularized version of the PIE-type family of algorithms (rPIE, see [46]), it can complement any of the existing reconstruction engines including PIE. To avoid naming ambiguities, addition of momentum to PIE will be referred to as momentum-accelerated quasi-Newton (mqNewton), which we often use as an FP solver.

## 5 Robust inverse models

In the foregoing section, we have reviewed the basic inverse models underlying ptychography. However, oftentimes a variety of systematic errors are present in the experimental data that require more robust inverse models. This is the case when the data is corrupted by, for example, partial spatial as well as partial temporal coherence, when the illumination wavefront profile is unstable throughout the scan, and when the scan positions are imprecisely known. In what follows, we discuss robust forward models that account for and mitigate the aforementioned sources of error.

### Mixed States

In mixed state ptychography [44] the intensity in the detector plane is modeled as \[I=\sum_{k}\tilde{\psi}_{k}^{*}\tilde{\psi}_{k}, \tag{41}\] where the index \(k\) discerns mutually incoherent signal contributions (also known as _mixed states_). Inserting this into Eqs. 12 and 13 and calculating the Wirtinger derivative with respect to each \(\tilde{\psi}_{k}\), we get the gradients \[\frac{\partial\mathcal{L}}{\partial\tilde{\psi}_{k}^{*}}=\left[1-\left(\frac{ m+b}{I+b}\right)^{p}\right]\tilde{\psi}_{k}, \tag{42}\] where \(p=1\) for the Poisson model and \(p=1/2\) for the Anscombe model. The real space cost function for mixed state ptychography is \[\mathcal{C}=\sum_{k}\left|P_{n+1,k}\cdot O_{n}-\psi_{k}\right|^{2}+\lambda_{ P}\sum_{k}\left|P_{n+1,k}-P_{n,k}\right|^{2}+\lambda_{O}\left|O_{n+1}-O_{n} \right|^{2}, \tag{43}\] where the particular choices \(\lambda_{P}=\frac{1}{\beta}\max\left|O_{n}\right|^{2}-\left|O_{n}\right|^{2}\) and \(\lambda_{O}=\frac{1}{\beta}\max\left(\sum_{k}\left|P_{n,k}\right|^{2}\right) -\sum_{k}\left|P_{n,k}\right|^{2}\), together with setting the Wirtinger derivatives with respect to \(P_{n+1}\) and \(O_{n+1}\) to zero, lead to \[P_{n+1,k} =P_{n,k}+\frac{\beta}{\max\left|O_{n}\right|^{2}}O_{n}^{*}\left( \psi_{k}-P_{n,k}\cdot O_{n}\right) \tag{44}\] \[O_{n+1} =O_{n}+\frac{\beta}{\max\left(\sum_{k}\left|P_{n,k}\right|^{2} \right)}\sum_{k}P_{n,k}^{*}\left(\psi_{k}-P_{n,k}\cdot O_{n}\right), \tag{45}\] which is a modified version of ePIE for the case of mixed states, as first derived in [44].
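The mixed-state updates of Eqs. 44 and 45 translate directly into array code. Below is a minimal numpy sketch with the mode index \(k\) as the leading array axis; the names and data layout are illustrative assumptions, not _PtyLab_ code:

```python
import numpy as np

def mixed_state_epie(P, O, psi, beta=1.0):
    """Mixed-state ePIE real-space updates (Eqs. 44 and 45).

    P   : (K, N, N) complex ndarray, K mutually incoherent probe modes
    O   : (N, N) complex ndarray, object box at the current scan position
    psi : (K, N, N) complex ndarray, updated exit waves from the detector step
    """
    residual = psi - P * O                        # one residual per mode
    P_new = P + beta / np.max(np.abs(O) ** 2) * np.conj(O) * residual
    denom = np.max(np.sum(np.abs(P) ** 2, axis=0))
    O_new = O + beta / denom * np.sum(np.conj(P) * residual, axis=0)
    return P_new, O_new
```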
In _PtyLab_ a snapshot singular value decomposition [79] is used to orthogonalize the probe states during the reconstruction process, which allows for their interpretation as orthogonal modes of the mutual intensity of a partially coherent field provided that other decoherence effects are absent in the experimental data [106; 107]. It is noteworthy that multiple object states can be reconstructed as well [44], provided the illumination is fully coherent, which otherwise leads to ambiguities [108].

### Multispectral ptychography

In multispectral ptychography [45; 109; 110; 111; 40; 112; 113] a light source of multiple individual spectral lines or a continuous spectrum is used. Because different colors are mutually incoherent, the detector update is identical to mixed state ptychography (compare Eq. 42), but the index \(k\) now denotes wavelength instead of spatial mode. The differences between mixed state and multispectral ptychography lie in the real space updates and in the propagator between the sample and the detector plane. In _PtyLab_ we minimize the following cost function \[\mathcal{C}= \sum_{k=1}^{K}\left|P_{n+1,k}\cdot O_{n,k}-\psi_{k}\right|^{2}+ \sum_{k=1}^{K}\lambda_{P,k}\left|P_{n+1,k}-P_{n,k}\right|^{2} \tag{46}\] \[+\sum_{k=1}^{K}\lambda_{O,k}\left|O_{n+1,k}-O_{n,k}\right|^{2}+ \mu\sum_{k=2}^{K-1}\left|2O_{n+1,k}-O_{n,k+1}-O_{n,k-1}\right|^{2} \tag{47}\] where \(\mu\) is a user defined parameter that enforces similarity between adjacent spectral reconstructions and \[\lambda_{P,k}=\frac{1}{\beta}\max\left|O_{n,k}\right|^{2}-\left|O_{n,k}\right| ^{2} \tag{48}\] \[\lambda_{O,k}=\frac{1}{\beta}\max\left|P_{n,k}\right|^{2}-\left| P_{n,k}\right|^{2}. \tag{49}\] These cost functions result in the updates \[P_{n+1,k} =P_{n,k}+\frac{\beta}{\max\left|O_{n,k}\right|^{2}}O_{n,k}^{*}\left( \psi_{k}-P_{n,k}\cdot O_{n,k}\right) \tag{50}\] \[O_{n+1,k} =\frac{\gamma}{\gamma+2\mu\beta}O_{n,k}+\frac{\beta P_{n,k}^{*} \left(\psi_{k}-P_{n,k}\cdot O_{n,k}\right)+\beta\mu\left(O_{n,k+1}+O_{n,k-1} \right)}{\gamma+2\mu\beta}, \tag{51}\] where \(\gamma=\max\left|P_{n,k}\right|^{2}\).

Figure 6: Selection of forward models implemented in _PtyLab_: (a) The basic coherent diffraction model assumes the thin element approximation (TEA), where the exit wave \(\psi_{j}\) is modeled as a product of probe \(P\) and object box \(O_{j}\) at scan position \(j\). The exit wave is propagated into the detector plane via a suitable propagator \(D\). (b) In mixed state ptychography the object interacts with \(k\) mutually incoherent probe modes, giving rise to independent exit waves \(\psi_{j,k}\). These exit waves are propagated into the detector plane and incoherently added to form the intensity forward model. (c) Multispectral ptychography. Here a polychromatic probe interacts with a dispersive object, both of which are functions of wavelength \(\Lambda\). (d) In multislice ptychography the exit wave is modeled using the beam propagation method (BPM), which models a three-dimensional object as consisting of several two-dimensional slices (index \(s\)). Inside each slice the TEA is used, while the propagation between slices is carried out via angular spectrum propagation \(A\). (e) Orthogonal probe relaxation can model scan position dependent probes \(P_{j}\) as a linear combination of mutually coherent orthogonal basis modes \(U_{k}\). (f) A coherent external reference wave can be added to the forward model.
At the boundaries of the spectral range (\(k=1\) and \(k=K\)) the object updates are carried out without influence from adjacent spectral channels, \[O_{n+1,k}=O_{n,k}+\frac{\beta}{\max\left|P_{n,k}\right|^{2}}P_{n,k}^{*}\left( \psi_{k}-P_{n,k}\cdot O_{n,k}\right). \tag{52}\] The latter update also results for the special case \(\mu=0\). In this case the spectral channels are only coupled by the incoherent model for the detector intensity (Eq. 41). In the presence of spectral regularization (\(\mu\neq 0\)) we do not need a priori knowledge about the spectral weights of the incident beam [40]. In the original work proposing multispectral ptychography no spectral regularization of adjacent channels was used. Instead the spectrum of the incident polychromatic probe was known a priori and used as a constraint in the optimization routine [45]. The second difference between mixed state and multispectral ptychography is the propagation model. Other code projects, for instance _PtyPy_ [53], use zero padding to model the wavelength dependence of far-field wave propagation. In _PtyLab_ the pixel size of a monochromatic wave can be scaled by using two-step propagators (scaledASP), which avoids the need for spectrally dependent zero padding of the exit wave. For details the reader is referred to the supplementary information of [40].

### Multislice ptychography (e3PIE)

In multislice CP [43] the specimen is modeled by a stack of 2D slices. The beam propagation method (BPM) [114] is used as a forward model for the exit wave. In each of the slices the thin element approximation is assumed to be valid. The cascade of multiplication with each slice and subsequent propagation enables the BPM to model multiple forward scattering effects. A basic version of multislice CP (termed e3PIE) is implemented in _PtyLab_. For details, the reader is referred to the original work by Maiden et al. [43] and subsequent work [115; 116]. We have not yet implemented multislice FP [117] in the current version of _PtyLab_, although such an engine may come in future releases.

### Orthogonal probe relaxation (OPR)

A basic version of orthogonal probe relaxation (OPR) [50] is implemented in _PtyLab_. Instead of sharing the same probe across all scan positions in CP, as done for example in simple engines such as ePIE, OPR relaxes the requirement for a stable probe. The exit waves from different scan positions are used to estimate a low-rank basis, which seeks to model probe variations that occurred during a full scan, for example caused by pointing instability of the source. For details, the reader is referred to the original work by Odstrcil et al. [50]. We note that OPR can be combined with mixed states, as recently described by Eschen et al. [21]. With regard to FP, to our knowledge OPR has not been applied, although this may be an interesting approach to effectively model space-variant pupil functions. The latter is typically achieved by partitioning a larger field of view into a set of smaller sub-regions, each of which may be subject to different pupil aberrations. This approach, known as embedded pupil function recovery (EPRY) [27], has to date essentially remained the only model for space-variant pupil aberrations in FP. However, because EPRY requires the reconstruction of a separate pupil for each sub-region, the model requires many degrees of freedom and ignores that adjacent sub-regions are unlikely to have strongly differing pupil aberrations. OPR could be a promising candidate to robustify EPRY in future FP applications for spatially varying aberration reconstruction. An example of the use of OPR in FP is shown in Fig. 7. The image FOV was split into small segments, each with its own unique pupil function, and OPR was used to impose a low-rank consistency constraint on all the pupil functions. In applying OPR to FP, adjacent pupils share information, and poor convergence of some isolated field segments can be avoided (compare highlighted red boxes in Fig. 7). For further implementation details the reader is referred to [118].

Figure 7: (a) Orthogonal probe relaxation scheme in FP. In step 1, the reconstructed pupils at a given iteration are factorized using SVD to produce an orthogonal full-field aberration basis. In step 2, the low-rank representation is obtained by eliminating modes with low contributions to actual aberrations. This way, noise and errors are eliminated. In step 3, the low-rank full-field aberration reconstruction is performed from the low-rank basis. In this example, it can be seen that noisy pupils (top right corner) were replaced with a better pupil estimate. The whole process ensures that pupil aberrations remain well conditioned. (b) Experimental validation of pupil initialisation [118], where the use of OPR resulted in stable pupil reconstruction at the corners (indicated by red boxes).
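The low-rank constraint at the heart of OPR can be sketched as a truncated singular value decomposition of the stack of per-position probes (or, in the FP case above, per-segment pupils). The following numpy sketch is illustrative, not the _PtyLab_ implementation:

```python
import numpy as np

def opr_low_rank_constraint(probes, rank):
    """Project per-position probes onto a low-rank basis (OPR-style).

    probes : (J, N, N) complex ndarray, one probe estimate per scan position
    rank   : number of orthogonal basis modes to keep
    """
    J, N, _ = probes.shape
    A = probes.reshape(J, N * N).T               # columns = vectorized probes
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    s[rank:] = 0.0                               # discard weak modes (noise)
    A_lr = (U * s) @ Vh                          # low-rank reconstruction
    return A_lr.T.reshape(J, N, N)
```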
### Subsampling (sPIE)

In some applications it is challenging to sufficiently sample the captured detector signal. For example, in EUV ptychography generating a highly focused probe is oftentimes restricted by the available hardware [19]. In other applications, such as near-field ptychography, the detector pixel size can be a limiting factor [119]. In both situations one may attempt to solve for the probe and object in CP (or the pupil and object spectrum in FP) from undersampled measurements that are too coarse to oversample the diffraction data. In principle, the detrimental effect of undersampling can be compensated by high overlap. In the context of CP this technique is known as reciprocal space upsampling and was first demonstrated by Batey et al. [120] (where the algorithm was named _sPIE_). In the latter work, the captured ptychography measurements were deliberately undersampled by means of binning, but the original oversampled data could be recovered thanks to the high scan grid overlap in the captured data. We later generalized this principle to arbitrary sensing matrices that are not necessarily a result of an operation equivalent to binning, but that could result from any sensing architecture that compresses multiple, not necessarily neighboring pixels into a smaller data cube [41]. In such situations one seeks to minimize the cost function \[\mathcal{L}=\left\|\sqrt{\mathbf{S}\left|\tilde{\psi}\right|^{2}}-\sqrt{I}\right\| _{2}^{2}, \tag{53}\] where \(\mathbf{S}\) is a sensing matrix representing, for example, downsampling or any other detection scheme. Gradient descent on this cost function results in a modified intensity projection given by \[\tilde{\psi}_{n+1}=\tilde{\psi}_{n}\mathbf{S}^{T}\sqrt{\frac{I}{\mathbf{S}\left|\tilde {\psi}_{n}\right|^{2}}}, \tag{54}\] where \(\mathbf{S}^{T}\) is the transpose of the sensing matrix. For the special case of \(\mathbf{S}\) being a downsampling operation \(\mathbf{S}^{T}\) is an upsampling operation (compare Fig. 8). In this case, Eq. 54 modifies the estimated detector wave by multiplying it with the upsampled version of the ratio between the measured intensity \(I\) (already downsampled) and the downsampled estimated intensity \(\mathbf{S}\left|\tilde{\psi}_{n}\right|^{2}\). This principle was also used by Xu et al. who reported sub-sampled near-field ptychography [93].

Figure 8: Illustration of sPIE in CP. Assuming a far-field diffraction geometry, the real and reciprocal space sampling conditions are inversely proportional (indicated by the dashed lines): A small probe field of view (pFOV) in CP requires only coarse detector pixels \(\Delta\mathbf{q}\). Conversely, if the physical probe wavefront extends over a larger region, the observed data is undersampled (top row). In such situations, the detector pixels need to be sub-divided into smaller pixels \(\Delta\mathbf{q}^{\prime}\) (bottom row). This allows the numerical pFOV to be extended beyond the physical probe size. The resulting constraint in the forward model is that the sum of the intensities over a set \(\mathbf{S}\) of sub-sampled pixels equals the corresponding observation over the same region of the coarsely sampled data.
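For the special case of \(\mathbf{S}\) being a block-binning operation, Eq. 54 can be sketched as follows; the reshape-based binning and `np.repeat`-based upsampling are our own illustrative choices:

```python
import numpy as np

def spie_projection(psi_det, I_meas, bin_factor):
    """Modified intensity projection (Eq. 54) for S = b x b binning.

    psi_det : (N, N) complex detector wave on the fine grid
    I_meas  : (N//b, N//b) measured (undersampled) intensity
    """
    b = bin_factor
    N = psi_det.shape[0]
    I_model = np.abs(psi_det) ** 2
    # S: sum intensities over b x b blocks (downsampling)
    I_binned = I_model.reshape(N // b, b, N // b, b).sum(axis=(1, 3))
    ratio = np.sqrt(I_meas / (I_binned + 1e-12))
    # S^T: upsample the correction back to the fine grid
    ratio_up = np.repeat(np.repeat(ratio, b, axis=0), b, axis=1)
    return psi_det * ratio_up
```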
### Lateral position correction (pcPIE)

In ptychography the scan positions may not be accurately known, for example in the case of a low-precision scanning stage [85, 67, 121, 122]. An incorrect estimate of the scan positions will cause errors and artifacts during the stitching of the object patches into the large object field of view. In _PtyLab_ a momentum-accelerated version of a cross-correlation-based lateral position correction algorithm is used [49]. The rationale of this position correction is based on the observation that, at iteration \(n+1\) of the reconstruction procedure, the object patch estimate at scan position \(j\) is slightly shifted towards its true position. This shift is detected and used to update the scan grid by maximizing the cross correlation \[C_{n,j}(\Delta\mathbf{r})=\sum_{\mathbf{r}}O_{n,j}^{*}(\mathbf{r}-\Delta\mathbf{r})\cdot O_{n+ 1,j}(\mathbf{r}) \tag{55}\] with respect to the shift \(\Delta\mathbf{r}\). In practice, we shift the object patch of iteration \(n\) by one pixel in all directions (horizontally, vertically and diagonally) and compute the centre of mass of the cross correlation \[\Delta_{n,j}=\frac{\sum_{\Delta\mathbf{r}}\left(|C_{n,j}(\Delta\mathbf{r})|-\left<|C_{ n,j}(\Delta\mathbf{r})|\right>\right)\Delta\mathbf{r}}{\sum_{\mathbf{r}}|O_{n,j}(\mathbf{r})|^{2}}, \tag{56}\] where the brackets \(\left<\ldots\right>\) denote an average over all shift pixels, and then estimate the position gradient \(d_{n,j}\) using \[d_{n,j}=\alpha\cdot\Delta_{n,j}+\beta\cdot d_{n-1,j} \tag{57}\] The updated scan position at iteration \(n+1\) is \[\mathbf{r}_{n+1,j}=\mathbf{r}_{n,j}-d_{n,j}. \tag{58}\] Default values are \(\alpha=250\), \(\beta=0.9\), and \(d_{0}=0\).
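A minimal sketch of this momentum-accelerated position update for a single scan position is given below. It assumes the cross correlation is evaluated on the nine one-pixel shifts described above; the names and the use of periodic shifts (`np.roll`) are illustrative assumptions:

```python
import numpy as np

def position_gradient(O_prev, O_new, d_prev, alpha=250.0, beta=0.9):
    """Momentum-accelerated scan-position gradient (Eqs. 55-58).

    O_prev, O_new : complex object patches before/after the object update
    d_prev        : previous position gradient (length-2 array)
    """
    shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    # |C(dr)| for each one-pixel shift of the previous patch (Eq. 55)
    C = np.array([np.abs(np.sum(np.conj(np.roll(O_prev, s, axis=(0, 1))) * O_new))
                  for s in shifts])
    w = C - C.mean()                                        # Eq. 56, numerator weights
    delta = (w[:, None] * np.array(shifts)).sum(axis=0) / np.sum(np.abs(O_prev) ** 2)
    return alpha * delta + beta * d_prev                    # Eq. 57; new position: r - d (Eq. 58)
```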
### Reflection ptychography with angle calibration (aPIE)

In reflection ptychography the sample plane and the detector are non-coplanar. Assuming far-field diffraction for simplicity, the captured data is related to the specimen exit wave by a Fourier transformation plus an additional coordinate transformation [123; 17]. An inverse coordinate transformation can be applied to the captured raw data to simplify the forward model. However, this operation requires accurate knowledge of the angle between the optical axis and the specimen surface normal. If this angle is not calibrated within a fraction of a degree, the reconstruction quality can suffer notably. We have recently presented an algorithm for angular auto-calibration in reflection ptychography (aPIE), which is part of _PtyLab_. aPIE uses a heuristic strategy to estimate the unknown angle within an iteratively shrinking search interval. For details the reader is referred to [39].

### Axial position correction (zPIE)

Similarly to pcPIE (position correction) and aPIE (angle correction in reflection ptychography), another self-calibration algorithm provided as part of _PtyLab_ is zPIE, which can be used to estimate the sample-detector distance [38]. The main idea is that when the sample-detector distance is miscalibrated, the reconstructed object oftentimes exhibits characteristics of a slightly defocused inline hologram, including ringing at edges. An autofocus metric based on total variation (TV) is then used to calibrate the correct sample detector distance. We observed that TV-based autofocusing performs best in the near-field on binary specimens, although it can also be used on biological specimens. Other choices of autofocusing metrics can easily be implemented by the user, if the TV-based sharpness metric fails [124].

### Ptychography combined with an external reference beam

We recently reported ptychographic optical coherence tomography, which combines full-field frequency-domain OCT with ptychography [24]. In the latter work there was no need for an external reference wave, as common in OCT applications. Instead the reference was provided from a direct surface reflection of the sample itself. Thus the technique can in principle be applied to the short-wavelength regime, where providing an external reference comes with extra experimental challenges. However, in the visible and infrared spectral range a reference wave is readily provided and can make POCT more convenient. Providing an external reference wave in ptychography requires adjustments to the forward model. In this case, we seek to minimize the cost function density \[\mathcal{L}=\left[\sqrt{\left|\tilde{\psi}+\tilde{\rho}\right|^{2}}-\sqrt{I} \right]^{2}, \tag{59}\] where \(\tilde{\rho}\) denotes a coherent external reference wave. All other quantities are the same as defined in previous sections. Using gradient descent with a unit step size in conjunction with Wirtinger derivatives, we obtain updates for both the wave diffracted from the specimen \[\tilde{\psi}_{n+1}=\left(\tilde{\psi}_{n}+\tilde{\rho}_{n}\right)\sqrt{\frac{ I}{\left|\tilde{\psi}_{n}+\tilde{\rho}_{n}\right|^{2}}}-\tilde{\rho}_{n} \tag{60}\] and the external reference wave \[\tilde{\rho}_{n+1}=\left(\tilde{\psi}_{n}+\tilde{\rho}_{n}\right)\sqrt{\frac{ I}{\left|\tilde{\psi}_{n}+\tilde{\rho}_{n}\right|^{2}}}-\tilde{\psi}_{n}. \tag{61}\] We note that the mathematical structure of external reference beam ptychography opens up a trivial ambiguity. Suppose that the triplet of probe \(P\), object \(O\), and reference \(\tilde{\rho}\) yields the observed intensity \(I\), i.e. \[I=\left|\tilde{\psi}+\tilde{\rho}\right|^{2}=\left|\mathcal{F}\left(P\cdot O \right)+\tilde{\rho}\right|^{2}=\left|\mathcal{F}\left(P\right)\otimes \mathcal{F}\left(O\right)+\tilde{\rho}\right|^{2}. \tag{62}\]
Then it immediately follows that the triplet of probe \(P\), object \(-O\), and reference \(\tilde{P}+\tilde{\rho}\) is also a solution, since \[\left|\tilde{\psi}+\tilde{\rho}\right|^{2} =\left|\mathcal{F}\left(P\cdot\left[1-O\right]\right)+\tilde{\rho }\right|^{2} \tag{63}\] \[=\left|\tilde{P}+\mathcal{F}\left(P\right)\otimes\mathcal{F} \left(-O\right)+\tilde{\rho}\right|^{2}\] (64) \[=\left|\mathcal{F}\left(P\right)\otimes\mathcal{F}\left(-O\right) +\tilde{P}+\tilde{\rho}\right|^{2}\] (65) \[=\left|\mathcal{F}\left(P\right)\otimes\mathcal{F}\left(O_{\text {twin}}\right)+\tilde{\rho}_{\text{twin}}\right|^{2}, \tag{66}\] where \(\otimes\) denotes convolution and we defined the twin object \(O_{\text{twin}}=-O\) as well as the twin reference \(\tilde{\rho}_{\text{twin}}=\tilde{P}+\tilde{\rho}\). \(P\) and \(\tilde{P}\) denote the probe and its Fourier transform, respectively. An analogous argument holds for near-field diffraction geometries, where an additional quadratic phase envelope in the probe enters the math. It is thus seen that the twin object and the twin reference wave explain the same observed interferograms as the true object and reference. To avoid this ambiguity, a separate measurement of the reference wave (with the wave from the specimen blocked) can be carried out, or a priori knowledge about the specimen can be provided (for example, that the specimen is transparent in certain regions, such as an empty part of a microscopy slide).

## 6 Scan grid optimization

In CP, a certain amount of consideration is needed to optimize the scan trajectory. To date, the majority of CP setups employ mechanical scanners, although variants exist where the beam is rapidly steered over the sample by means of galvo mirrors [125]. The latter offers advantages in terms of speed and overall cost of the experimental setup, but the isoplanatic illumination patch of such mirror systems is finite, which limits the field of view over which the probe wavefront can be assumed stable, thus compromising one of the very benefits of CP. Hence mechanical scanners are still the preferred option. For such systems it is important to minimize the total scan distance in order to reduce scan time and prevent mechanical wear. In addition, unlike other scanning microscopy systems, ptychography requires non-periodic scan grids to avoid ambiguities in the reconstruction [126]. A popular choice is the Fermat scan grid [127], as depicted in Fig. 9(a). This type of scan grid is conveniently described in polar coordinates, where its trajectory assumes the form of a spiral. Minimizing the total travel path can be done using a solver for the traveling salesman problem (TSP). In _PtyLab_, we use a 2-opt [128] TSP heuristic solver, which offers a good compromise between optimality and optimization time. Fig. 9(b) shows an example of a distance-optimized scan trajectory, where the color scale indicates the start (blue) and end position (red).
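For illustration, a Fermat spiral grid as in Fig. 9(a) can be generated in a few lines; the use of the golden angle and the function name are our own choices and may differ from _PtyLab_'s actual generator:

```python
import numpy as np

def fermat_grid(n_points, scale=1.0):
    """Fermat spiral scan grid in Cartesian coordinates (cf. Fig. 9(a))."""
    k = np.arange(n_points)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))   # irrational angle -> aperiodic grid
    r = scale * np.sqrt(k)                        # Fermat spiral: r ~ sqrt(k)
    theta = k * golden_angle
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

positions = fermat_grid(200)                      # 200 scan points, ready for TSP ordering
```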
Figure 9: A variety of scan grids can be generated and optimized in _PtyLab_. (a) The typical workflow is to generate an aperiodic scan grid in polar coordinates, here a Fermat grid, and subsequently perform preprocessing steps on it. (b) The total path of the scan trajectory is minimized by solving the traveling salesman problem. In some cases, morphological operations on scan grids are useful, such as non-uniform _scaling_ (c) and _rectification_ (d). (e) Another useful technique is _checkpointing_, where the same scan point is revisited during a long scan. In panel (e) a Fermat grid with 200 scan points plus 20 checkpoints is shown, which are equally spaced in time. (f) For large scan grids an overlapping k-means (OKM) algorithm can be used to _partition_ the scan grid into overlapping clusters and subsequently process each cluster separately. The overlap between clusters is required to synchronize phase information, which can otherwise differ by a global offset, and for stitching a large-field-of-view image.

Moreover, several operations are available to transform scan grids, including _non-uniform scaling_ and _rectification_ as shown in Fig. 9(c) and (d), respectively. The former allows for non-uniform spatial sampling, adjusted such that the sampling is higher in regions that are challenging to resolve, while the latter clips the field of view to a rectangular (here square) region. Another practically useful strategy is _checkpointing_ (see Fig. 9(e)), which alters a given scan grid such that it revisits a certain reference point throughout the scan. Deviations in the diffraction data at the checkpoints allow for identifying sources of error in the experimental setup, including position drift, flux instability, and illumination wavefront variations. The checkpoints are equally spaced in time. The aforementioned techniques are primarily scan grid preprocessing techniques, meaning that the scan grid is optimized prior to the actual experiment. After the data acquisition, scan grid postprocessing techniques may be required. For example, large scan grids can be _partitioned_ to prevent memory limitations. In this way, large data sets can be split into smaller pieces, which are then processed individually [129]. It is important that the scan partitions spatially overlap, so that adjacent regions can be phase synchronized and stitched together after reconstruction. To ensure overlap between the clusters, an overlapping k-means (OKM) algorithm can be used [130; 131]. Figure 9(f) shows an example of a scan grid partitioned into four overlapping clusters (filled circles/triangles, unfilled diamonds/squares), each containing 160 scan points. In the middle, the clusters overlap. A second reason for scan grid partitioning can be to define batch gradients, which speed up convergence and improve robustness [54]. As a note on FP scan grids, at first sight it appears surprising that the technique does not exhibit raster scan artefacts, although the most commonly employed LED arrays are regularly spaced. However, a regular LED arrangement still corresponds to a non-periodic spacing in angle, which explains why FP is not subject to such artefacts. While most of the aforementioned preprocessing steps are not required for FP due to the absence of mechanical movement, checkpointing and partitioning may still be used for monitoring stability and distributed data analysis, respectively.

## 7 Open experimental data and tutorials

We publish a variety of CP and FP data sets and tutorials with the aim of introducing users to the functionality of _PtyLab_. Figure 10 depicts two such data sets. The top row shows a soft x-ray (\(\lambda=2.48\)nm) data set collected at a synchrotron (experimental details in [75]). The bottom row depicts a visible light (\(\lambda=625\)nm) FP data set of lung carcinoma.
For both data sets we show from left to right a single frame of the raw data, the recovered quantitative phase image (QPI) of the object (_resolution test target_ and _lung carcinoma_ histology slide), and the reconstructed probe/pupil for the case of CP (top) and FP (bottom). Hue and brightness depict the phase and amplitude, respectively, of the complex-valued reconstructed quantities. A variety of additional data sets are published alongside _PtyLab_, which are summarized in Table 1. Each of these data sets comes with an online tutorial explaining suitable data analysis approaches, including self-calibration and regularization.

## 8 Discussion and conclusions

_PtyLab_ is a versatile ptychography software which we hope will aid researchers in exploring the capabilities of CP and FP. Nevertheless, despite our excitement about this endeavor, we should mention some of its shortcomings: (1) Researchers with large-scale and high-throughput data analysis tasks (e.g. beamline scientists at synchrotrons) may be better off with one of the currently available high-performance ptychography packages mentioned in the introduction. However, we believe increased performance comes at the cost of flexibility in algorithm prototyping. (2) _PtyLab_ currently does not support tomographic reconstruction with specimen rotation. It is to be noted that external CT toolboxes, such as Astra [136] or Tigre [137], can in principle be used for ptychographic computed tomography once a sequence of 2D reconstructions at different angles is available. However, some ptychotomographic software embed specialized regularization techniques within the reconstruction routine [138], which are not available in standard CT packages.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline mode & size (MB) & data set & tutorials & reference \\ \hline CP & 360 & helical beam & regular reconstruction & [75] \\ CP & 360 & helical beam & mixed-states & [75] \\ CP & 102 & USAF & axial position calibration (zPIE) & [38] \\ CP & 404 & Siemens star & total-variation regularization & [21] \\ FP & 10 & lung carcinoma & regular reconstruction & [134] \\ FP & 10 & USAF & position calibration & [135] \\ \hline \end{tabular} \end{table} Table 1: Overview of open data sets and _PtyLab_ tutorials.

In contrast, ptychographic optical coherence tomography (POCT) [24] does not require angle diversity, and 3D reconstructions can be obtained simply by performing a Fourier transform along the wavelength dimension in a multispectral reconstruction object stack. The latter is readily performed in _PtyLab_. We have designed _PtyLab_ based on the principle of reciprocity. An interesting implication of the conversion between CP and FP is performance, as it provides the freedom to choose whether the computational complexity of the Fourier transform operation, used in typical inversion algorithms, scales logarithmically with the number of pixels in the detector or with the number of scan positions in a given data cube, numbers which can be orders of magnitude apart, so that even on a logarithmic scale practical speed-ups can be achieved. However, in order to take full advantage of reciprocity, interpolation techniques to non-equidistantly sampled geometries are required, which will be explored in future work. In summary, we have presented _PtyLab_, a cross-platform, open source inverse modeling toolbox for CP and FP.
We believe _PtyLab_'s major strengths lie in (1) the uniform framework for CP and FP enabling cross-pollination between the two domains, (2) the availability in three widely used programming languages (Matlab, Python, and Julia), making it easy for researchers with different programming backgrounds to exchange and benchmark code snippets and data analyses, and (3) its versatile code architecture suited both for beginners and experts interested in rapid ptychographic algorithm prototyping. In addition, a plethora of self-calibration features (e.g. aPIE, zPIE) and algorithmic novelties (e.g. conversion between CP and FP, POCT, CP with external reference beam, sPIE) are available that to our knowledge have previously not been featured in open access ptychography code. Various functions for scan grid generation help the user to optimize data acquisition and postprocessing. For further information the reader is referred to the GitHub website with its accompanying tutorials as well as the open data provided along with it [37]. Figure 10: Examples of CP and FP experimental data analyses using _PtyLab_. Top row: synchrotron-based soft x-ray CP (a) raw data, reconstructed (b) object QPI and (c) probe wavefront. Bottom row: visible light FP (d) raw data, reconstructed (e) object QPI and (f) pupil. Amplitude and phase are depicted as brightness and hue; experimental details in [75, 132]. Figure adapted from [133]. ## Appendix ### Equivalence of CP and FP In this appendix, we provide a formal proof that the same data cube can be regarded as a CP or an FP data set, implying the ability to convert between the two. Without loss of generality, we assume we are given a far-field CP data set \[I_{d}\left(\mathbf{q},\mathbf{s}\right)=\left|\int P\left(\mathbf{x}\right)O\left(\mathbf{x}- \mathbf{s}\right)\exp\left[-i2\pi\mathbf{q}\mathbf{x}\right]d\mathbf{x}\right|^{2}, \tag{67}\] where \(\mathbf{s}\) and \(\mathbf{q}\) denote scan positions and detector coordinates, respectively. For a given scan point \(\mathbf{s}_{0}\), we have \[I_{d}\left(\mathbf{q},\mathbf{s}_{0}\right) =\left|\int P\left(\mathbf{x}\right)O\left(\mathbf{x}-\mathbf{s}_{0}\right) \exp\left[-i2\pi\mathbf{q}\mathbf{x}\right]d\mathbf{x}\right|^{2} \tag{68}\] \[=\left|\tilde{P}\left(\mathbf{q}\right)\otimes\tilde{O}_{\mathbf{s}_{0}} \left(\mathbf{q}\right)\right|^{2}, \tag{69}\] where \[\tilde{P}\left(\mathbf{q}\right)=\mathcal{F}_{\mathbf{x}\rightarrow\mathbf{q}}\left[P \left(\mathbf{x}\right)\right] \tag{70}\] is the probe spectrum and \[\tilde{O}_{\mathbf{s}_{0}}\left(\mathbf{q}\right)=\mathcal{F}_{\mathbf{x}\rightarrow\mathbf{q }}\left[O\left(\mathbf{x}-\mathbf{s}_{0}\right)\right] \tag{71}\] is the object spectrum. CP solvers use the problem formulation in Eq. 68 for a sequence of scan positions. Next, consider a fixed observation pixel (\(\mathbf{q}_{0}\)) in the data cube in Eq. 
67 \[I_{d}\left(\mathbf{q}_{0},\mathbf{s}\right) =\left|\int P\left(\mathbf{x}\right)O\left(\mathbf{x}-\mathbf{s}\right)\exp \left[-i2\pi\mathbf{q}_{0}\mathbf{x}\right]d\mathbf{x}\right|^{2}\] \[=\left|\int P\left(\mathbf{x}\right)O\left(\mathbf{x}-\mathbf{s}\right)\exp \left[-i2\pi\mathbf{q}_{0}\left(\mathbf{x}-\mathbf{s}\right)\right]d\mathbf{x}\right|^{2}\] \[=\left|\int P\left(\mathbf{x}\right)O_{\mathbf{q}_{0}}^{\prime}\left(\bm {s}-\mathbf{x}\right)d\mathbf{x}\right|^{2}\] \[=\left|P\left(\mathbf{s}\right)\otimes O_{\mathbf{q}_{0}}^{\prime}\left( \mathbf{s}\right)\right|^{2}, \tag{72}\] where we defined \[O_{\mathbf{q}_{0}}^{\prime}\left(\mathbf{x}\right)=O\left(-\mathbf{x}\right)\exp\left[i2 \pi\mathbf{q}_{0}\mathbf{x}\right]. \tag{73}\] FP solves the problem formulation in Eq. 72 for a sequence of illumination directions. Thus we may consider the same data cube to be either a CP or an FP inverse problem. From the CP perspective we reconstruct P and O, while from the FP perspective we reconstruct \(\tilde{P}\) and \(\tilde{O}\). Thus if we tackle a CP data set from the FP perspective, we simply inverse Fourier transform the reconstructed object spectrum to retrieve the object that we would have reconstructed had we directly chosen a CP solver. A similar statement holds for the correspondence between probe and pupil.

### Acknowledgements

Python packages: numpy [139], matplotlib [140], h5py [141], scipy [142], scikit-image [143], tqdm [144] _PtyLab.jl_ was implemented in Julia [145] and made use of the following packages: CUDA.jl [146, 147], FFTW.jl [148, 149] and LLVM.jl [150]. The Python, Matlab, and Julia versions of PtyLab are available online [37].

### Funding Information

SW acknowledges funding from the European Research Council (ERC-CoG 864016) and the Netherlands Organisation for Scientific Research NWO through the LINX Perspectief Programme. The work of TA is supported by funding from the Swiss National Science Foundation (SNF), Project Number \(200021\_196898\).

## Disclosures

The authors declare no conflicts of interest.
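To make the appendix argument concrete, the following self-contained toy example builds a one-dimensional periodic data cube and verifies numerically that a fixed-detector-pixel slice obeys the convolutional FP model of Eq. 72; all names are our own illustrative choices and the example is independent of _PtyLab_:

```python
import numpy as np

N = 64
rng = np.random.default_rng(1)
P = np.exp(-np.linspace(-3, 3, N) ** 2) + 0j            # localized probe
O = rng.normal(size=N) + 1j * rng.normal(size=N)        # random complex object

# Full data cube I_d(q, s): detector pixel q along axis 0, scan position s along axis 1
exit_waves = np.stack([P * np.roll(O, s) for s in range(N)], axis=1)
I_d = np.abs(np.fft.fft(exit_waves, axis=0)) ** 2       # CP view: one column per scan position

# FP view (Eq. 72): a fixed detector pixel q0 traces |P convolved with O'|^2 over s
q0, x = 3, np.arange(N)
O_prime = np.roll(O[::-1], 1) * np.exp(2j * np.pi * q0 * x / N)  # O'(x) = O(-x) exp(i 2 pi q0 x)
conv = np.fft.ifft(np.fft.fft(P) * np.fft.fft(O_prime))          # circular convolution P (*) O'
assert np.allclose(I_d[q0, :], np.abs(conv) ** 2)
```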
2305.14466
$β^-$ decay of neutron-rich $^{45}$Cl at magic number N=28
Results from the study of $\beta^-$-decay of $^{45}$Cl, produced in the fragmentation of a 140-MeV/u $^{48}$Ca beam, are presented. The half-life for $^{45}$Cl $\beta$-decay is measured to be 513(36) ms. The $\beta^-$ and $\beta^- 1n$ decay of $^{45}$Cl populated excited states in $^{45,44}$Ar, respectively. On the basis of $\gamma$-ray singles and $\gamma$-$\gamma$ coincidence data, decay schemes for the two daughter nuclei have been established. They are compared with shell model calculations using the FSU interaction. The low-lying negative parity states for $^{45}$Ar are well described by a single particle (neutron) occupying orbitals near the Fermi surface, whereas neutron excitations across the $N = 20$ shell gap are needed to explain the positive-parity states which are expected to be populated in allowed Gamow-Teller $\beta$-decay of $^{45}$Cl. The highest $\beta$-feeding to the 5/2$^+$ state in $^{45}$Ar from the ground state of $^{45}$Cl points towards a 3/2$^+$ spin-parity assignment of the ground state of the parent over the other possibility of 1/2$^+$. The high Q$_{\beta^-}$ value of $^{45}$Cl decay allows for the population of $1p1h$ states above the neutron separation energy in $^{45}$Ar leading to positive parity states of $^{44}$Ar being populated by removal of one neutron from the $sd$ shell. The spin-parities of the excited levels in $^{44}$Ar are tentatively assigned for the first time by comparison with the shell model calculations. The 2978~keV level of $^{44}$Ar is identified as the excited 0$^+$ level which could correspond to a different configuration from the ground state.
Soumik Bhattacharya, Vandana Tripathi, S. L. Tabor, A. Volya, P. C. Bender, C. Benetti, M. P. Carpenter, J. J. Carroll, A. Chester, C. J. Chiara, K. Childers, B. R. Clark, B. P. Crider, J. T. Harke, S. N. Liddick, R. S. Lubna, S. Luitel, B. Longfellow, M. J. Mogannam, T. H. Ogunbeku, J. Perello, A. L. Richard, E. Rubino, S. Saha, O. A. Shehu, R. Unz, Y. Xiao, Yiyi Zhu
2023-05-23T18:45:03Z
http://arxiv.org/abs/2305.14466v1
# \(\beta^{-}\) decay of neutron-rich \({}^{45}\)Cl at magic number N=28

###### Abstract

Results from the study of \(\beta^{-}\)-decay of \({}^{45}\)Cl, produced in the fragmentation of a 140-MeV/u \({}^{48}\)Ca beam, are presented. The half-life for \({}^{45}\)Cl \(\beta\)-decay is measured to be 513(36) ms. The \(\beta^{-}\) and \(\beta^{-}1n\) decay of \({}^{45}\)Cl populated excited states in \({}^{45,44}\)Ar, respectively. On the basis of \(\gamma\)-ray singles and \(\gamma\gamma\) coincidence data, decay schemes for the two daughter nuclei have been established. They are compared with shell model calculations using the FSU interaction. The low-lying negative parity states for \({}^{45}\)Ar are well described by a single particle (neutron) occupying orbitals near the Fermi surface, whereas neutron excitations across the \(N=20\) shell gap are needed to explain the positive-parity states which are expected to be populated in allowed Gamow-Teller \(\beta\)-decay of \({}^{45}\)Cl. The highest \(\beta\)-feeding to the 5/2\({}^{+}\) state in \({}^{45}\)Ar from the ground state of \({}^{45}\)Cl points towards a 3/2\({}^{+}\) spin-parity assignment of the ground state of the parent over the other possibility of 1/2\({}^{+}\). The high Q\({}_{\beta^{-}}\) value of \({}^{45}\)Cl decay allows for the population of 1\(p\)1\(h\) states above the neutron separation energy in \({}^{45}\)Ar leading to positive parity states of \({}^{44}\)Ar being populated by removal of one neutron from the \(sd\) shell. The spin-parities of the excited levels in \({}^{44}\)Ar are tentatively assigned for the first time by comparison with the shell model calculations. The 2978 keV level of \({}^{44}\)Ar is identified as the excited 0\({}^{+}\) level which could correspond to a different configuration from the ground state.

pacs: 21.30.-k, 21.30.+h

## I Introduction

In the past few decades a primary focus of nuclear structure studies has been understanding whether the known magic numbers, which appear to hold near stability, remain so as the drip lines, where the proton-neutron asymmetry is large, are approached [1; 2; 3]. The magic numbers \(N=28\) and \(Z=28\) are the lowest ones whose emergence requires a strong spin-orbit interaction. They are thus of particular interest for the experimental and theoretical study of exotic nuclei far from stability, which probes the isospin dependence of the spin-orbit interaction. There are several examples of experimental evidence, accompanied by theoretical calculations, which indicate that the \(N=28\) shell gap below \({}^{48}\)Ca shrinks continuously with decreasing proton number. Just two protons away from \({}^{48}\)Ca, the excitation energy of the first 2\({}^{+}\) state in \({}^{46}\)Ar drops considerably [4]. Further, moving away from doubly magic \({}^{48}\)Ca, which is considered spherical, the nuclear shape changes rather rapidly, developing deformation in \({}^{42}\)Si [5], while shape coexistence is observed in \({}^{44}\)S [6; 7; 8; 9; 10; 11]. The study of the excitation of protons in odd-\(Z\) nuclei through measurements of excited states in the K, Cl and P isotopes has indicated a near-degeneracy of the proton \(d_{3/2}\) and \(s_{1/2}\) orbitals approaching \(N=28\)[12]. The increase of collectivity away from \(Z=20\) as well as the degeneracy of \(\pi d_{3/2}\) and \(\pi s_{1/2}\) orbitals can be explained by the monopole part of the tensor force [13], which is attractive between the \(\nu f_{7/2}\) and \(\pi d_{3/2}\) orbitals.
With the degeneracy of the two proton orbitals, the ground states of odd-\(A\) Cl isotopes are found to vary between 3/2\({}^{+}\) and 1/2\({}^{+}\). The ground state spin-parity of \({}^{45}\)Cl is not known experimentally, though both 1/2\({}^{+}\)[12] and 3/2\({}^{+}\)[14] are predicted as possible spins based on different calculations. The ground states of the odd-\(A\)\({}^{37-45}\)Ar (\(Z=18\)) isotopes, on the other hand, are anticipated to be 7/2\({}^{-}\) from a simple filling of the orbitals, due to a neutron hole in the \(f_{7/2}\) orbital. However, the ground state spin-parity alternates between 5/2\({}^{-}\) and 7/2\({}^{-}\) throughout the Ar isotopic chain from \(N=20\) to \(N=28\). A charge radius measurement using laser spectroscopy has found the ground state to be 7/2\({}^{-}\) for \({}^{39,41}\)Ar, 5/2\({}^{-}\) for \({}^{43}\)Ar [15], and then again 7/2\({}^{-}\) for \({}^{45}\)Ar [16]. The \(N=26\)\({}^{44}\)Ar is proposed to be deformed with a prolate ground state, associated with a high \(B(E2)\) value for the 2\({}^{+}_{1}\) to 0\({}^{+}_{1}\) transition from a Coulomb excitation study [17; 18]. The \(B(E2)\) value for the same transition in \({}^{46}\)Ar [4; 19], on the other hand, is found to be lower, supporting the smaller charge radius of \({}^{46}\)Ar [15] compared to \({}^{44}\)Ar. However, the latest determination of the \(B(E2)\) values from lifetime measurements of the \(2^{+}_{1}\) states of \({}^{46}\)Ar and \({}^{44}\)Ar by Mengoni _et al._[20] reports an almost two times larger \(B(E2)\) value for \({}^{46}\)Ar than for \({}^{44}\)Ar, which is unexpected if the \(N=28\) shell gap persists in \({}^{46}\)Ar. Different calculations also show discrepancies between the \(B(E2)\) values of these two nuclei, which remain to be resolved. Between the deformed \({}^{44}\)Ar and near-spherical \({}^{46}\)Ar, the structure of the intermediate \({}^{45}\)Ar is therefore of special interest. \(\beta\)-decay is an excellent experimental tool to study the excited states of neutron-rich nuclei near \(N=28\). The large \(Q_{\beta^{-}}\) value ensures that a large number of excited states, both bound and unbound, are populated. In this region of the chart of nuclides, the differences in ground-state spins and parities between the \(\beta\)-decay parent and daughter limit the possibility of direct feeding to the ground state. This is due to the protons filling the \(1s_{1/2}\) or \(0d_{3/2}\) subshells, whereas the neutrons are filling the \(0f_{7/2}\) subshell. For odd-\(A\) Cl isotopes like \({}^{45}\)Cl in this work, a positive-parity ground state is expected, which will decay to the positive-parity excited states of the odd-\(A\) Ar daughter via allowed Gamow-Teller (GT) transitions and will not feed the negative-parity ground state directly. The positive-parity states in the even-odd daughter arise from promoting a proton or a neutron across the \(Z=20\) or \(N=20\) shell gap, and these \(1p1h\) states are expected at relatively high energies. For \({}^{45}\)Cl, the expected \(3/2^{+}\) ground state will \(\beta\)-decay to positive-parity (\(1/2^{+}\), \(3/2^{+}\) or \(5/2^{+}\)) or negative-parity (\(1/2^{-}\), \(3/2^{-}\) or \(5/2^{-}\)) states by allowed or first-forbidden \(\beta\)-decays, respectively. With the large \(Q_{\beta^{-}}\) value, the \(\beta\)-decay can also populate states above the neutron separation energy (S\({}_{n}\)) in \({}^{45}\)Ar and therefore opens up the possibility of studying the excited states in the \(\beta 1n\) daughter \({}^{44}\)Ar.
The investigation of excited states in \({}^{44,45}\)Ar is the focus of this study, along with the ground-state spin and parity determination for the parent \({}^{45}\)Cl.

## II Experimental Setup

The experiment was carried out at the National Superconducting Cyclotron Laboratory (NSCL) [21] at Michigan State University to investigate the \(\beta^{-}\) decay of exotic \({}^{45}\)Cl. A 140-MeV/u \({}^{48}\)Ca primary beam was fragmented using a thick Be target at the target position of the A1900 fragment separator [22] to produce the nuclei of interest. A wedge-shaped Al degrader, which increases the energy dispersion for different fragments, was placed at the intermediate dispersive image of the A1900 separator to provide a cleaner particle identification of the cocktail beam. After passing through the wedge-shaped Al degrader, the selected isotopes were transported to the Beta Counting System (BCS) [23]. The BCS is equipped with a \(\approx\) 1-mm-thick pixelated (40 strips x 40 strips) Double-Sided Silicon Strip Detector (DSSD) at the center. An Al degrader upstream reduced the energy of the fragments to ensure that the implants stopped at the middle of the DSSD. The DSSD was followed by a Single-Sided Silicon Strip Detector (SSSD), which served as a veto detector. This veto detector was used to counter the large flux of light particles transmitted through the DSSD for the particular A1900 settings used in this experiment, which can impair implant-\(\beta\) correlations. Dual-gain pre-amplifiers were used for the DSSD to record the time and position of implants (GeV energy depositions), as well as subsequent decays (keV to MeV energy depositions).

Figure 1: Schematic representation of \({}^{45}\)Cl decay via \(\beta 0n\) and \(\beta 1n\) channels. The boxes display the ground-state spin-parity assignments of the corresponding nuclei. The dominant neutron and proton configurations for the ground state of the parent nucleus \({}^{45}\)Cl and for the excited states expected to be populated in \({}^{45}\)Ar and \({}^{44}\)Ar by \(\beta\) and \(\beta 1n\) decay are shown. The arrows show the transformation of a neutron of \({}^{45}\)Cl into a proton and the removal of a neutron from \({}^{45}\)Ar, involving the possible orbitals, to produce excited states of \({}^{45}\)Ar and \({}^{44}\)Ar, respectively.

Figure 2: Two-dimensional plot of the partial energy deposition in the upstream PIN detector (\(\Delta E\)) versus the time of flight (ToF) measured with respect to the focal-plane scintillator detector, used for particle identification of the nuclei of interest in the present work.

The implant rate was kept below 200/s to maximize the efficiency of correlating the implanted ion with the decay products. Two Si PIN detectors, placed upstream of the DSSD, provided the partial energy loss information of the fragments. Along with the scintillator at the intermediate dispersive image of the A1900, these PIN detectors provide the time of flight information used to generate particle identification plots (PID) of the incoming implants as shown in Fig. 2. The DSSD and SSSD detector combination was surrounded by 16 Clover detectors to detect the \(\beta\)-delayed \(\gamma\) rays with an efficiency of about 5% at 1 MeV. The efficiency of the array was measured with the SRM [24] and \({}^{56}\)Co sources placed outside of the DSSD and then corrected for the dimensions of the DSSD with GEANT4 simulations. The time-stamped data were collected using the NSCL digital data acquisition system [25].
The timing and spatial correlations from each channel corresponding to the different detectors were used to ensure the correlation between the implanted fragments in the DSSD and the corresponding \(\beta\)-decay event.

## III Experimental results

Fig. 2 shows the clear separation of the different isotopes produced in the present experimental investigation. Selected \({}^{45}\)Cl implants from the cocktail beam were correlated with the emitted \(\beta\) particles to obtain the half-life. Further, coincidences of delayed \(\gamma\) transitions with the correlated implant-decay events allowed us to study the excited states of \({}^{45}\)Ar and \({}^{44}\)Ar, produced via \(\beta\) and \(\beta\)-1n decay, respectively. The \(\beta\)-decay of other neutron-rich P and S nuclei seen in Fig. 2 has been reported and discussed in our previous publication [26].

### \(\beta\)-decay of \({}^{45}\)Cl

The time differences between \({}^{45}\)Cl implantations and \(\beta\) particles detected in the same or one of the adjacent eight pixels of the DSSD, in coincidence with the strongest ground-state \(\gamma\) transition (542 keV) in \({}^{45}\)Ar, were histogrammed to generate a decay curve. The half-life of \({}^{45}\)Cl was extracted from this decay curve, as shown in Fig. 3. A fit using a simple exponential decay function of \({}^{45}\)Cl and a background component accounting for other long-lived activity gives a half-life of 513(36) ms. The previous half-life measurement for \({}^{45}\)Cl, with a value of 400(43) ms, comes from the work of Sorlin _et al._[27]. In that work the half-lives were deduced by constructing a time histogram of the \(\beta\)-n coincidences detected after the identification of the corresponding parent nucleus. For \({}^{45}\)Cl only 880 events were detected, and the authors discuss a possible mixing with another implant with a shorter half-life. The measured half-life of 513(36) ms obtained in this work is consistent with the shell-model calculations using the FSU interaction, which give a value of 500 ms without including first-forbidden \(\beta\) transitions.

Figure 3: The time difference between the \({}^{45}\)Cl implants and correlated \(\beta\) decay events gated by the 542-keV ground state transition in \({}^{45}\)Ar. The experimental data are fitted with an exponential decay function for the \({}^{45}\)Cl decay plus a suitable background. The half-life is found to be \(T_{1/2}\)= 513(36) ms. This number is larger than the previous measurement of 400(43) ms [27] though within 2\(\sigma\).
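As an illustration of the fitting procedure described above, a decay curve can be fitted with an exponential plus a constant background using scipy; the data below are synthetic and all numbers are illustrative, not the measured counts:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, A0, half_life, bkg):
    """Exponential decay of the parent activity plus constant background."""
    return A0 * np.exp(-np.log(2) * t / half_life) + bkg

t = np.linspace(0, 5000, 100)                       # ms since implantation
rng = np.random.default_rng(0)
counts = decay_model(t, A0=200.0, half_life=513.0, bkg=10.0) + rng.normal(0, 3, t.size)

popt, pcov = curve_fit(decay_model, t, counts, p0=(150.0, 400.0, 5.0))
half_life, err = popt[1], np.sqrt(pcov[1, 1])
print(f"T1/2 = {half_life:.0f} +/- {err:.0f} ms")
```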
In Ref. [28], two tentative \(\gamma\) transitions at 3408 and 2215 keV were reported to decay from the proposed 3950- and 2757-keV levels, respectively. The existence of the 3950-keV level (3946 keV in the present study) had been established earlier from the neutron-transfer study [16]. The present work confirms the presence of the 2215- and 3404-keV transitions in the coincidence gate on 542 keV [Fig. 4(a)]. The level scheme in Fig. 5 shows the relative \(\beta\) branching of the levels, taking into account the absolute efficiencies of the detected \(\gamma\) rays. The assignment of the spin-parities of the observed levels of \({}^{45}\)Ar is guided by predictions from shell-model calculations (discussed later in detail) as well as by the allowed \(\beta\)-transition rates from the parent nucleus (\({}^{45}\)Cl). The \(\beta\) decay should primarily populate the \(1p1h\) positive-parity states, and hence the log\(ft\) values from the SM calculations are noted only for the positive-parity states. Table 2 gives the level energies and the \(\gamma\) rays decaying from them, possible \(J^{\pi}\) values, and relative intensities for both \({}^{45}\)Ar and \({}^{44}\)Ar excited states. ### \(\beta 1n\)-decay of \({}^{45}\)Cl The large \(Q_{\beta^{-}}\) = 11.51 MeV of \({}^{45}\)Cl, together with the low neutron separation energy (\(S_{n}\) = 5.169 MeV) of \({}^{45}\)Ar, leaves an energy window of \(Q_{\beta^{-}}-S_{n}\approx 6.34\) MeV and thus leads to a significant \(\beta\)-delayed neutron branch populating excited states in \({}^{44}\)Ar. The spin-parities of the states populated in the \(\beta\)-delayed daughter largely depend on the spin-parities of the excited states in \({}^{45}\)Ar. The \(\gamma\) rays in coincidence with the 1158-keV ground-state transition (2\({}^{+}\) to 0\({}^{+}\)) in \({}^{44}\)Ar are shown in Fig. 6(a). It is worth mentioning that a close-lying \(\gamma\) transition (1157 keV) exists in \({}^{44}\)Ca (2\({}^{+}\) to 0\({}^{+}\) transition), leading to some spurious coincidences marked with solid blue boxes in Fig. 6(a). The strongest transition seen is 853 keV, which is the decay from the second 2\({}^{+}\) level at 2011 keV to the first excited 2\({}^{+}\) (1158-keV) level. This second 2\({}^{+}\) state at 2011 keV was confirmed in an earlier deep-inelastic reaction study by Wan _et al._ [30]. The coincidence spectrum also shows the weaker 966-, 2376-, 2797- and 3649-keV transitions, which were already reported in Ref. [28]. The presence of the 3649-keV transition in the 1158-keV gate and its absence in the 853-keV gate fix its placement as connecting the 4808-keV and 1158-keV levels. The highest observed excited level from the present work, at 5354 keV, decays only to the 2978-keV level via the 2376-keV transition. The level scheme deduced from the present work is shown in Fig. 7 (left panel). Along with the experimental levels, the corresponding SM-predicted levels (right panel) are also shown. All the transitions, apart from the 853-, 1158- and 2011-keV transitions, are observed for the first time in the \(\beta 1n\) decay of \({}^{45}\)Cl and are marked in blue in the experimental level scheme shown in Fig. 7 (left panel). Two levels at 2746 keV and 3439 keV, shown in the experimental level scheme by green dashed lines, were not populated in the \(\beta\)n channel but are shown for comparison with the shell-model calculations; they were seen in the previous study by Fornal _et al._ [31].
Figure 5: Partial level scheme of \({}^{45}\)Ar following the \(\beta^{-}\) decay of \({}^{45}\)Cl with \(T_{1/2}\) = 513(36) ms and \(Q_{\beta^{-}}\) = 11.51(14) MeV. The transitions marked in black were known previously, whereas red indicates the new transition (2554 keV) as well as prior tentative transitions that we were able to confirm. The branching (relative to 5/2\({}^{+}\)) is also shown for the excited states. The shell-model calculations using the FSU interaction [29] predicted the \(0p0h\) and \(1p1h\) excited states, which are shown along with the experimental levels. Figure 6: (a)-(d) Coincidences observed between the \(\gamma\) transitions in \({}^{44}\)Ar. The 1158-keV transition coincidence gate shows contamination from \({}^{44}\)Ca, which is marked with blue solid boxes. ## IV Discussion ### \({}^{45}\)Ar The ground-state spin-parity of \({}^{45}\)Cl is expected to be \(3/2^{+}\), due to an odd proton in the \(d_{3/2}\) orbital and a full \(\nu f_{7/2}\) orbital in a simple picture, as illustrated in Fig. 1. However, with the \(d_{3/2}\) and \(s_{1/2}\) orbitals being nearly degenerate, a \(1/2^{+}\) assignment cannot be ruled out. Below, based on the \(\gamma\)-decay characteristics and the shell-model calculations, we suggest a \(3/2^{+}\) spin-parity. The shell-model calculations presented in this work were performed with the shell-model code CoSMo [32] using the FSU interaction. The FSU interaction is a data-driven shell-model interaction aimed at explaining cross-shell excitations between the \(sd\) and \(fp\) shells, and also the \(p\) and \(sd\) shells for neutron-rich \(sd\)-shell nuclei [29]. The predictions of the FSU interaction have found great success in explaining many experimental observations [33; 34]. For the calculations quoted here, when the excitations are confined to the major shells (_e.g._, \(sd\) and \(fp\)) they are referred to as \(0p0h\) excitations, whereas \(1p1h\) calculations involve the movement of a single nucleon between the major shells. #### Positive-parity states From the selection rules of allowed Gamow-Teller (GT) decay, states with positive parity should have the highest branching in the daughter nucleus. In a more nuanced picture, the \(\beta^{-}\) decay will likely involve the conversion of a \(d_{3/2,5/2}\) neutron into a \(d_{3/2}\) proton, as \({}^{45}\)Ar has a vacancy in the \(\pi d_{3/2}\) orbital. The other possibility is for the \(s_{1/2}\) neutron to transform into an \(s_{1/2}\) proton, as indicated in Fig. 1. These transformations create neutron-hole states in \({}^{45}\)Ar corresponding to \(1p1h\) states in the shell-model calculations. We propose that the states at 1735, 1772, 3296, 3946, and 4326 keV with the highest branching are populated via allowed GT transitions and hence have positive parity. In a recent transfer study using the reaction \({}^{1}\)H(\({}^{46}\)Ar,\(d\))\({}^{45}\)Ar [16], the same states were proposed as neutron-hole states. The observed states at 1735 and 1772 keV in Ref. [16] were found to have large spectroscopic factors for the \(\nu d_{3/2}\) and \(\nu s_{1/2}\) hole states but, due to the limited experimental energy resolution, the two levels could not be resolved. In this work, with the excellent resolution of the high-purity germanium detectors, we are able to assign accurate energies to these two levels. The shell-model calculations presented here find the \(3/2^{+}\) state at a higher energy than the \(1/2^{+}\) state, and as a result we assign the 1735- and 1772-keV levels as \(1/2^{+}\) and \(3/2^{+}\), respectively. This is further confirmed by the branching ratios of the decays of the 3296-keV state, as discussed below.
The 3296-keV state has the strongest population in the current \(\beta\)-decay study. It has decay branches to the \(7/2^{-}\) ground state and the 542-keV first excited \(3/2^{-}\) state, and a strong branch to the 1772-keV state, which we propose to be the \(3/2^{+}_{1}\) state. We could not identify any transition to the 1735-keV state within our detection sensitivity. As the 3296-keV state decays to the ground state, it likely has a \(5/2^{+}\) spin-parity. The proximity of this state to the \(5/2^{+}\) 3224-keV state predicted in the SM calculations supports this assignment, and the calculated log\(ft\) value of 4.98 justifies its strong population in \(\beta\) decay. The predicted decay probabilities of the \(5/2^{+}_{1}\) state to the \(3/2^{+}_{1}\) and \(1/2^{+}_{1}\) states in the SM calculations are listed in Table 1. The decay to the \(3/2^{+}\) state via an M1 transition is stronger and supports the spin assignments for the doublet of states at \(\approx 1.7\) MeV. The occupation numbers for the ground state and the excited positive-parity states in \({}^{45}\)Ar from the SM calculations are shown in Fig. 8. The occupation numbers for the two states at \(\approx 1.7\) MeV clearly establish them as \(\nu s_{1/2}\) and \(\nu d_{3/2}\) hole states, respectively. The occupancy of the 3296-keV state from the shell-model calculation shows a contribution from the \(\nu d_{5/2}\) hole along with \(s_{1/2}\). This level was also observed via the \({}^{1}\)H(\({}^{46}\)Ar,\(d\))\({}^{45}\)Ar reaction by Lu _et al._ [16], though no spin was assigned to that state. The assignment of \(5/2^{+}\) to the 3296-keV level from the present work, as discussed above, is consistent with the observation of a small peak at 3.29 MeV [16] in the deuteron spectrum. Another state at 3.95 MeV, described as having \(\ell=0\) parentage in Ref. [16], is also observed in the present work. Figure 7: Partial level scheme of \({}^{44}\)Ar following the \(\beta 1n\) decay of \({}^{45}\)Cl (\(Q_{\beta^{-}n}=6.34(14)\) MeV) is shown in the left panel. The relative intensities of the deexciting transitions are also reported, along with the associated errors. Two levels that were reported previously but not seen in the present work are shown by green dashed lines. The transitions seen for the first time in this \(\beta 1n\) work are marked in blue. The spins of the levels that are suggested for the first time in the present work are marked in red (see section B of the discussion). Predictions of the shell-model (SM) calculations using the FSU interaction [29] are shown in the right panel. The possible \(\gamma\) transitions and their relative branching ratios were also calculated and are noted. The excited states predicted by the SM but not observed in the experiment are shown to the right of the SM level scheme in red. The shell-model calculation supports the presence of the second \(1/2^{+}\) state at 3978 keV, with considerable contributions from both the neutron and proton \(s_{1/2}\) orbitals. Therefore, the state at 3946 keV is assigned a \(1/2^{+}\) spin-parity. The shell model further predicts a nearby \(3/2^{+}\) at 4264 keV (see Fig. 5), which we have associated with the 4326-keV state. The higher-energy states (at 3978 keV and 4264 keV) show a rise in the occupation number for protons in the \(f_{7/2}\) orbital with respect to the ground-state configuration, while the occupation number for \(\pi d_{3/2}\) remains the same. This is accompanied by a slight drop in the occupancy of the \(\nu f_{7/2}\) orbital.
These states could have contributions from the conversion of an \(f_{7/2}\) neutron into an \(f_{7/2}\) proton in the \(\beta\) decay. #### Ground state of parent \({}^{45}\)Cl The strong population of the \(5/2^{+}\) state at 3296 keV in the \(\beta\) decay leads to the determination of the ground-state spin and parity of the parent as \(3/2^{+}\). The ground-state spin-parities of the neutron-rich odd-mass Cl isotopes are in the spotlight due to the degeneracy of the \(\pi s_{1/2}\) and \(\pi d_{3/2}\) orbitals as one moves from \({}^{37}\)Cl (\(N=20\)) to the neutron-rich \({}^{45}\)Cl (\(N=28\)). Gade _et al._ [12] systematically showed the reduction of the \(E(1/2^{+})\)-\(E(3/2^{+})\) gap as a function of neutron-proton asymmetry for all the odd-mass K, Cl and P isotopes. Though the ground-state spins have been experimentally verified for \({}^{41}\)Cl and \({}^{43}\)Cl as \(1/2^{+}\) with a very close-lying \(3/2^{+}\), the tentatively assigned \(1/2^{+}\) ground state of \({}^{45}\)Cl, based on SM calculations in Ref. [12], had not been confirmed experimentally. Two closely spaced energy states (127 keV apart) are predicted as candidates for the \(1/2^{+}\) and \(3/2^{+}\), generated from proton holes in \(s_{1/2}\) and \(d_{3/2}\), respectively. The present work finds the highest \(\beta\) feeding to go to the \(5/2^{+}\) state in \({}^{45}\)Ar from the ground state of \({}^{45}\)Cl. This is possible only from a \(3/2^{+}\) ground state, and it can be considered the first experimental support for the assignment of \(3/2^{+}\) over \(1/2^{+}\) to the ground state of \({}^{45}\)Cl, indicating a return to normal filling of the orbitals in the odd-\(A\) Cl isotopes. The \(\beta\) decay of \({}^{45}\)S into \({}^{45}\)Cl could shed further light on this ground-state spin-parity. #### Negative-parity states The low-lying states in \({}^{45}\)Ar are expected to have negative parity, arising from excitations of the odd neutron(s) within the \(fp\) shell. Their population in \(\beta\) decay is either due to feeding from the high-lying states or could also arise from first-forbidden (FF) transitions. The excited states at 1340 and 1418 keV are found to be consistent with the previous \(\beta\)-decay study by Mrazek _et al._ [28]. The state at 1.34 MeV was tentatively reported by Lu _et al._ [16] in the \({}^{1}\)H(\({}^{46}\)Ar,\(d\))\({}^{45}\)Ar study, though the \({}^{44}\)Ar(\(d\),\(p\))\({}^{45}\)Ar study of Gaudefroy _et al._ [35; 36] could not identify this state. Gaudefroy _et al._, on the other hand, reported a 1420(60)-keV excited state with \(3/2^{-}\) spin-parity and described it as a member of the multiplets of the \(\pi(2^{+})\otimes\nu f_{7/2}\) configuration with one proton hole in \(s_{1/2}\) and another in \(d_{3/2}\). Because of this mixed nature, both of these states were weakly populated in either of the transfer-reaction studies [16; 36]. The present shell-model calculation predicts the second excited state to be \(3/2^{-}\) with a close-lying \(1/2^{-}\) state (see Fig. 5). The 1340-keV level has a direct \(\gamma\)-decay branch to the \(7/2^{-}\) ground state, favoring a \(3/2^{-}\) spin assignment over \(1/2^{-}\), whereas the 1418-keV state decays only to the 542-keV \(3/2^{-}\) state and not to the \(7/2^{-}\) ground state. Therefore, it is proposed that the second excited state at 1340 keV has a \(3/2^{-}\) spin-parity, while the 1418-keV level is a \(1/2^{-}\), consistent with the \(\ell=1\) assignment of Ref. [35] as well as with the present shell-model predictions, which give a composite configuration of \(\pi(d_{3/2}\otimes s_{1/2})\otimes\nu p_{3/2}\).
Figure 8: The occupation numbers of different orbitals (proton and neutron) for the positive-parity excited states of \({}^{45}\)Ar, calculated from the shell model using the FSU interaction [29]. The energies of the experimental states corresponding to each proposed SM state in the present work are indicated in parentheses. The blue dashed columns are the maximum occupancy of an orbital (\(2j\)+1). The solid black columns are the occupancies of the \(7/2^{-}\) ground state, and the red columns are the occupancies of the excited states of \({}^{45}\)Ar. \begin{table} \begin{tabular}{c c c c} \hline \(J_{i}\to J_{f}\) & \(E_{\gamma}\) (keV) & B(M1) (\(\mu_{N}^{2}\)) / rate (1/s) & B(E2) (e\({}^{2}\)fm\({}^{4}\)) / rate (1/s) \\ \hline \(5/2^{+}\to 1/2^{+}\) & 1525 & \(-\) / \(-\) & 104 / 1.05e+12 \\ \(5/2^{+}\to 3/2^{+}\) & 1525 & 0.35 / 2.19e+13 & 27.3 / 2.77e+11 \\ \hline \end{tabular} \end{table} Table 1: SM predictions (using the FSU interaction) for the branching ratios of the \(5/2^{+}_{1}\) state in \({}^{45}\)Ar. The experimental value of the \(\gamma\)-ray transition energy was used in the calculation of the rates. The difference in branching to the \(3/2^{+}\) and \(1/2^{+}\) states is used to confirm the spin-parity assignments of the two experimental states at \(\approx\)1.7 MeV. The negative-parity energy levels in \({}^{45}\)Ar (observed here and in prior studies) and in \({}^{43}\)Ar are further compared with the shell-model calculations in Fig. 9 to understand the evolution of the \(N=28\) shell gap. Relatively little information is available for the negative-parity states in \({}^{43}\)Ar, as is clear from the figure. Unlike in \({}^{45}\)Ar, a ground-state doublet of \(5/2^{-}\) and \(7/2^{-}\) spins is predicted in \({}^{43}\)Ar, whose members can be considered part of the multiplets arising from the \(\pi d_{3/2}^{-1}\nu f_{7/2}^{-1}\) configuration. There is only tentative experimental evidence of this doublet in \({}^{43}\)Ar, with \(5/2^{-}\) proposed as the ground state [37; 38]. Further, the \(\beta\) decay of \({}^{43}\)Cl, with a \(3/2^{+}\) ground state [39], shows a large branch to the ground state of \({}^{43}\)Ar through a first-forbidden (FF) decay, which also favors the \(5/2^{-}\) assignment. Hence, though the SM calculations using the FSU interaction correctly predict very closely spaced \(7/2^{-}\) (gs) and \(5/2^{-}\) states (286 keV in Fig. 9) for \({}^{43}\)Ar, the \(5/2^{-}\) is more probable for the ground state. For \({}^{45}\)Ar, on the other hand, both the SM calculations and the experimental observations do not support a close \(5/2^{-}\)-\(7/2^{-}\) ground-state multiplet, a signature of the proximity of the \(N=28\) shell closure. ### \({}^{44}\)Ar The states populated in \({}^{44}\)Ar follow from neutron emission from the \(1p1h\) positive-parity states with spins of 1/2, 3/2 or 5/2 populated in \({}^{45}\)Ar. The first excited state of even-even \({}^{44}\)Ar is at 1158 keV with a \(J^{\pi}\) of \(2^{+}\), known from earlier Coulomb-excitation, in-beam \(\gamma\)-spectroscopy and deep-inelastic studies [17; 18; 30; 31], and is described as a deformed state, while the second excited state is also a \(2^{+}\) state, at 2011 keV. The spins of the other excited levels observed at 2978, 4808 and 5354 keV are proposed to be \(0^{+}\) to \(4^{+}\) in NNDC [40].
We have tried to make more specific spin assignments for these levels by comparing with the predictions of the shell-model calculations in the \(0p0h\) valence space. The experimental states, though, are likely to have some contribution from \(2p2h\) configurations, as the neutron is likely emitted from the \(sd\) shell; our calculations currently cannot accommodate this. The experimental levels (left panel) and the SM-predicted states (right panel) for \({}^{44}\)Ar are shown in Fig. 7, along with the relative intensities of the \(\gamma\) transitions from each level. The experimental 2978-keV level decays to the first and second \(2^{+}\) states via the 1818- and 966-keV transitions, respectively, where the 966-keV decay dominates over the 1818-keV branch. This 2978-keV level lies close in energy to four predicted states (0\({}^{+}\), 4\({}^{+}\), 2\({}^{+}\) and 3\({}^{+}\)). The calculated 4\({}^{+}\) (2978 keV) and 2\({}^{+}\) (3013 keV) states have higher transition rates to the 4\({}^{+}_{1}\) (2680 keV) and 0\({}^{+}\) (gs) states, respectively, which is not observed in the decay of the experimental 2978-keV level. This leaves the two spins, 3\({}^{+}\) (3047 keV) and 0\({}^{+}\) (2717 keV), as the most probable candidates for this level. \begin{table} \begin{tabular}{c c c c} \hline \({}^{45}\)Ar & & & \\ \hline \hline \(E_{i}\) & \(J_{i}\to J_{f}\) & \(E_{\gamma}\) & I\({}_{rel}\) \\ (keV) & & (keV) & \\ \hline 542(1) & \(3/2^{-}\to 7/2^{-}\) & 542(1) & 100(10) \\ \hline 1340(1) & \(3/2^{-}\to 7/2^{-}\) & 1340(1) & 11.4(13) \\ & \(3/2^{-}\to 3/2^{-}\) & 798(2) & 5.7(7) \\ \hline 1418(2) & \(1/2^{-}\to 3/2^{-}\) & 876(1) & 11.4(12) \\ \hline 1735(1) & \(1/2^{+}\to 3/2^{-}\) & 1193(1) & 13.0(15) \\ \hline 1772(1) & \(3/2^{+}\to 3/2^{-}\) & 1230(1) & 37.0(39) \\ \hline 2757(2) & \((1/2^{-})\to 3/2^{-}\) & 2215(2) & 2.1(4) \\ \hline 3296(2) & \(5/2^{+}\to 3/2^{+}\) & 1525(1) & 18.0(20) \\ & \(5/2^{+}\to 3/2^{-}\) & 2754(1) & 29.0(32) \\ & \(5/2^{+}\to 7/2^{-}\) & 3296(2) & 12.1(16) \\ \hline 3946(2) & \(1/2^{+}\to 3/2^{-}\) & 3404(2) & 1.3(4) \\ \hline 4326(3) & \(3/2^{+}\to 3/2^{+}\) & 2554(2) & 2.6(5) \\ & \(3/2^{+}\to 3/2^{-}\) & 2986(2) & 2.1(4) \\ & \(3/2^{+}\to 3/2^{-}\) & 3784(3) & 3.7(7) \\ \hline \({}^{44}\)Ar & & & Rel. Branching \\ \hline \hline 1158(1) & \(2^{+}\to 0^{+}\) & 1158(1) & 100 \\ \hline 2011(1) & \((2^{+})\to 2^{+}\) & 853(1) & 68(24) \\ & \((2^{+})\to 0^{+}\) & 2011(1) & 100 \\ \hline 2978(1) & \((0^{+})\to(2^{+})\) & 966(1) & 100 \\ & \((0^{+})\to 2^{+}\) & 1818(1) & 83(33) \\ \hline 4808(2) & \((2^{+})\to(2^{+})\) & 2797(2) & 100 \\ & \((2^{+})\to 2^{+}\) & 3649(2) & 4.3(12) \\ & \((2^{+})\to 0^{+}\) & 4808(2) & 4.7(16) \\ \hline 5354(2) & \((1^{+})\to(0^{+})\) & 2376(1) & 100 \\ \hline \end{tabular} \end{table} Table 2: \(\gamma\)-ray energies, along with the corresponding initial levels and the initial and final spins, for \({}^{45}\)Ar and \({}^{44}\)Ar observed in the present work. For \({}^{45}\)Ar, the intensities of the \(\gamma\) rays are normalized with respect to the strongest, 542-keV \(\gamma\) ray. For \({}^{44}\)Ar, the branching from each level is shown, with each branch normalized to the strongest one from that level. Figure 9: The low-energy negative-parity states of \({}^{45}\)Ar and \({}^{43}\)Ar [37; 41] are displayed for comparison, both from experiment and from shell-model calculations using the FSU interaction [29]. The experimental energy of the \(7/2^{-}\) state in \({}^{43}\)Ar is still unknown.
The level at 4808 keV decays to the ground state and to the excited \(2^{+}\) (1158- and 2011-keV) states. Therefore, among the previously suggested spin-parity assignments, \(4^{+}\), \(3^{+}\) or \(0^{+}\) [40] are not possible for this state. Between the remaining \(2^{+}\) and \(1^{+}\) options, the shell model does not predict any \(1^{+}\) state nearby (Fig. 7); therefore, the spin of this level is suggested to be \(2^{+}\), corresponding to the 4842-keV level of the calculation. Further, the calculations (Fig. 7) predict that the 4842-keV level (exp. 4808-keV level) has its most intense decay branches to the gs and the excited \(2^{+}\) states, which matches the experimental observation. If the spin-parity of the 2978-keV state were assigned as \(3^{+}\), the shell model would predict the strongest decay branch of the 4808-keV level to go to the 2978-keV level. The absence of a decay path from the 4808-keV to the 2978-keV state encourages us to assign the 2978-keV state as the excited \(0^{+}\) over the \(3^{+}\) possibility. The systematics of excited \(0^{+}\) states in the Ar isotopes will be discussed next. The highest observed level from the present \(\beta\)n-decay work is at 5354 keV, with a decay only to the newly assigned \(0^{+}\) 2978-keV level. This 5354-keV level was proposed to decay to the 2011-keV (\(2^{+}_{2}\)) and 1158-keV (\(2^{+}_{1}\)) states via the 3342- and 4195-keV transitions in the earlier \({}^{44}\)Cl \(\beta\)-decay work [28], but with 3-5 times less intensity than the 2376-keV transition. The observation of the strong 2376-keV transition to the 2978-keV (\(0^{+}\)) state rules out the \(0^{+}\) and \(3^{+}\) assignments for the 5354-keV level. The shell model predicts a \(2^{+}\) at 5141 keV and a \(1^{+}\) at 5208 keV, both of which are good candidates, but the \(2^{+}\) is predicted to decay by a strong transition to the \(2^{+}_{2}\) state, which is not consistent with the experiment. Therefore, the 5354-keV level is assigned as \(1^{+}\), consistent with all experimental observations. The population of \(1^{+}\) and \(2^{+}\) states in the delayed-neutron decay suggests the population of a \(3/2^{+}\) unbound state in \({}^{45}\)Ar, which decays by an \(\ell=0\) neutron. ### Even-Even Isotopes near \(N=28\) To understand the evolution of shape away from the \(N=28\) shell closure, the energies of the \(2^{+}_{1}\) and \(0^{+}_{2}\) excited states of the even-even Ar isotopes are plotted as a function of neutron number in Fig. 10(a). At \(N=20\), both the \(2^{+}_{1}\) and \(0^{+}_{2}\) are high in energy for \({}^{38}\)Ar, reflecting the large shell gap between the \(d_{3/2}\) and \(f_{7/2}\) orbitals. As the neutron number increases towards half occupancy of the \(f_{7/2}\) orbital, the lowering of the first \(2^{+}\) state indicates an increasing collectivity and a reduction of the \(N=28\) shell gap. In the Ar isotopic chain, it is interesting to notice that the most collective behavior occurs for the ground state of \({}^{44}\)Ar (\(N=26\)), signified by the lowest energy of the \(2^{+}_{1}\). After that, the increase of the \(2^{+}_{1}\) energy points towards the restoration of the shell gap between the \(\nu f_{7/2}\) and \(\nu p_{3/2}\) orbitals in \({}^{46}\)Ar. With an additional neutron pair above \(N=28\), the \(2^{+}\) state comes down in energy again for \({}^{48}\)Ar. A different trend is seen for the \(0^{+}_{2}\) state, which generally represents a shape of the nucleus different from that of the ground state. For \({}^{40}\)Ar, the \(0^{+}_{2}\) state is described as part of a superdeformed band in Ref. [47].
With increasing neutron number, the energy of this state is found to increase, attaining a maximum value for \({}^{44}\)Ar. After that, the experimental \(0^{+}_{2}\) state shows a decreasing trend again for \({}^{46}\)Ar. As can be seen from Fig. 10(a), the shell-model calculations with the FSU interaction (solid black stars) closely mirror the \(2^{+}_{1}\) values for \({}^{38-48}\)Ar. For the \(0^{+}_{2}\) states (open red stars) from \(0p0h\) configurations, the shell-model predictions show an increasing trend in energy with decreasing neutron number, in disagreement with the experiment. It may be inferred that the \(0^{+}_{2}\) states of the \({}^{38-42}\)Ar isotopes have a contribution from \(2p2h\) configurations, which are beyond the scope of the present SM calculations. It will be interesting to search for the \(0^{+}_{2}\) state in \({}^{48}\)Ar, which is predicted to be very high (4.3 MeV) in the present SM calculation. For a further systematic analysis, the experimental \(2^{+}_{1}\) and \(0^{+}_{2}\) states are plotted for selected nuclei around the \(Z=20\), \(N=28\) magic shell closures in Fig. 10(b). The doubly magic \({}^{48}\)Ca (in the middle) shows a high-lying \(2^{+}_{1}\) and excited \(0^{+}_{2}\), representing a pronounced \(Z=20\), \(N=28\) shell gap. With two fewer protons, for \({}^{46}\)Ar the \(2^{+}_{1}\) and \(0^{+}_{2}\) are lower in energy, with a further decrease for \({}^{44}\)Ar [Fig. 10(a)], indicating the deformation associated with the ground state. Figure 10: (a) The experimental low-lying \(2^{+}_{1}\) and \(0^{+}_{2}\) energies of the Ar isotopes as a function of neutron number. The shell-model (FSU interaction) predicted states are also shown, as closed (open) stars for the \(2^{+}_{1}\) (\(0^{+}_{2}\)) spin. The \(0^{+}_{2}\) spin for \({}^{42}\)Ar is adopted, for comparison purposes, from one of the possible spins given in NNDC [41] for the 2512.5-keV level. (b) Comparison of the \(2^{+}_{1}\) and \(0^{+}_{2}\) energies in Ca, Ar and S nuclei near the magic numbers \(N=28\) and \(Z=20\). Starting from the doubly magic \({}^{48}\)Ca (in the center), the isotones to the left (isotopes to the right) have two protons (neutrons) fewer than the previous one, keeping the \(N=28\) (\(Z=20\)) magic number. The experimental values for the nuclei (other than \({}^{44}\)Ar) are taken from Refs. [41; 42; 43; 44; 45; 46]. Removing two more protons, for \({}^{44}\)S the \(2^{+}_{1}\) drops further and, importantly, the spacing between the \(0^{+}_{2}\) and \(2^{+}_{1}\) levels collapses, resulting in low-lying prolate-spherical shape coexistence in \({}^{44}\)S [6; 10]. In contrast, for the isobar \({}^{44}\)Ar, the \(2^{+}_{1}\) and \(0^{+}_{2}\) show a large separation. In Fig. 10(b), the isotopes of Ca away from the \(N=28\) shell closure are also shown, to the right of \({}^{48}\)Ca, and one can notice the same trend of a reduction of the gap between the \(2^{+}_{1}\) and \(0^{+}_{2}\) states, along with a lowering of the \(0^{+}_{2}\) energy. Therefore, the systematics suggest that if we decrease either the proton or the neutron number away from doubly magic \({}^{48}\)Ca, keeping the other magic number constant, the \(0^{+}_{2}\) state, which broadly represents a different shape in the excited spectrum, moves lower in energy, approaching the possibility of shape coexistence. With both the proton and neutron numbers away from the magic numbers, however, the nuclei seem to favor a single shape at low excitation energy.
## V Summary The \(\beta^{-}\) decay of \({}^{45}\)Cl is reported here, from an experiment performed at the NSCL following the fragmentation of a \({}^{48}\)Ca primary beam. The half-life (\(T_{1/2}\)) of \({}^{45}\)Cl is measured to be 513(36) ms, which is longer than the prior measurement from GANIL but consistent with shell-model calculations using the FSU interaction. The level schemes of \({}^{45}\)Ar and \({}^{44}\)Ar are established from the observed \(\gamma\)-\(\gamma\) coincidences in the \(\beta\) and \(\beta 1n\) channels, respectively. Many of the prior tentative placements of transitions in \({}^{45}\)Ar have been verified, and a new \(\gamma\) transition at 2554 keV has been added. The experimentally observed levels are compared with SM calculations for both \({}^{44,45}\)Ar, with excellent agreement. From the predicted occupancies of the different orbitals and the decay patterns of the \(\gamma\) transitions from the excited levels, the spin-parities of the levels of \({}^{45}\)Ar populated via GT transitions are proposed. The higher-lying positive-parity states of \({}^{45}\)Ar are candidates for \(1p1h\) excitations, consistent with their population in prior transfer reactions. The maximum feeding to the \(5/2^{+}\) state in \({}^{45}\)Ar, supported by the small log\(ft\) value obtained from the SM calculations, allowed us to assign a spin-parity of \(3/2^{+}\) to the ground state of the parent \({}^{45}\)Cl. The spins and parities of the levels in \({}^{44}\)Ar are proposed by comparison with SM calculations. An excited \(0^{+}_{2}\) state is proposed for the first time in \({}^{44}\)Ar, at 2978 keV. The SM calculations reproduce the experimental evolution of the \(2^{+}_{1}\) state for the even-\(A\) Ar isotopes (from \(N=20\) to \(30\)) reasonably well and suggest maximum collectivity for the ground state of \({}^{44}\)Ar. However, the trend of the excited \(0^{+}_{2}\) states in the even-even Ar isotopes is less consistent with the calculations. The accuracy of the present shell-model calculations in predicting the \(0^{+}_{2}\) states will be tested by the experimental observation of the yet-unknown \(0^{+}_{2}\) state of \({}^{48}\)Ar in future experiments. ## VI Acknowledgement We thank the NSCL operations team and the A1900 team, especially Tom Ginter, for the production and optimization of the secondary beam. This work was supported by the U.S. National Science Foundation under Grant Nos. PHY-2012522 (FSU) and PHY-1848177 (CAREER); the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Award Nos. DE-SC0020451 (FRIB), DE-FG02-94ER40848 (UML), DE-AC52-07NA27344 (LLNL), and DE-AC02-06CH11357 (ANL); the U.S. Department of Energy (DOE) National Nuclear Security Administration under Grant No. DOE-DE-NA0003906; and the Nuclear Science and Security Consortium under Award No. DE-NA0003180.
2305.13309
Evaluating Factual Consistency of Texts with Semantic Role Labeling
Automated evaluation of text generation systems has recently seen increasing attention, particularly checking whether generated text stays truthful to input sources. Existing methods frequently rely on an evaluation using task-specific language models, which in turn allows for little interpretability of generated scores. We introduce SRLScore, a reference-free evaluation metric designed with text summarization in mind. Our approach generates fact tuples constructed from Semantic Role Labels, applied to both input and summary texts. A final factuality score is computed by an adjustable scoring mechanism, which allows for easy adaptation of the method across domains. Correlation with human judgments on English summarization datasets shows that SRLScore is competitive with state-of-the-art methods and exhibits stable generalization across datasets without requiring further training or hyperparameter tuning. We experiment with an optional co-reference resolution step, but find that the performance boost is mostly outweighed by the additional compute required. Our metric is available online at https://github.com/heyjing/SRLScore.
Jing Fan, Dennis Aumiller, Michael Gertz
2023-05-22T17:59:42Z
http://arxiv.org/abs/2305.13309v1
# Evaluating Factual Consistency of Texts with Semantic Role Labeling ###### Abstract Automated evaluation of text generation systems has recently seen increasing attention, particularly checking whether generated text stays truthful to input sources. Existing methods frequently rely on an evaluation using task-specific language models, which in turn allows for little interpretability of generated scores. We introduce **SRLScore**, a reference-free evaluation metric designed with text summarization in mind. Our approach generates fact tuples constructed from Semantic Role Labels, applied to both input and summary texts. A final factuality score is computed by an adjustable scoring mechanism, which allows for easy adaptation of the method across domains. Correlation with human judgments on English summarization datasets shows that **SRLScore** is competitive with state-of-the-art methods and exhibits stable generalization across datasets without requiring further training or hyperparameter tuning. We experiment with an optional co-reference resolution step, but find that the performance boost is mostly outweighed by the additional compute required. Our metric is available online at: [https://github.com/heyjing/SRLScore](https://github.com/heyjing/SRLScore) ## 1 Introduction One of the remaining issues that prevents productive deployments of neural text summarization systems is the low correlation of system outputs with human preferences. Among those, _factuality_, i.e., the agreement of facts in the generated summaries with those present in the input text, is not part of the general training objectives of models, which frequently leads to hallucinated facts that are detrimental to perceived system performance (ter Hoeve et al., 2020; Fabbri et al., 2021). Prior work has therefore introduced metrics for automated testing of factuality in generated text (Goodrich et al., 2019; Kryscinski et al., 2020; Yuan et al., 2021), which allows for a more nuanced verification of model capabilities. In particular, one of the first relevant works, by Goodrich et al. (2019), introduces the idea of representing text as a series of "fact tuples", in their case as (subject, predicate, object) triplets. Their method makes several assumptions about the underlying data, which hampers its correlation with human ratings. For example, the subject or object may vary for the same sentence meaning when it is expressed using different syntactic structures, e.g., active and passive forms. Semantic Role Labeling (SRL), however, allows for a syntactically independent meaning representation. Our metric, **SRLScore**, improves factuality evaluation by building on fact tuples similar to those of Goodrich et al. It distinguishes itself in several ways from existing approaches, though: 1. To account for a more nuanced fact representation, we employ SRL to produce abstract representations of sentences that are _independent of their syntactic formulations_. 2. Fact tuples in **SRLScore** are generated on the _input text_ instead of gold summaries; as a consequence, our method is reference-free and may be applied for evaluation irrespective of the availability of labeled datasets. 3. We introduce a novel weighting scheme for fact tuple comparison, where adjustable weights allow for user optimization. 4. Finally, we experiment with extensions along different parts of the pipeline, including an optional co-reference resolution step and alternative similarity scoring functions.
Notably, **SRLScore** relies entirely on publicly available software components and may be used without any further domain adaptation. While our experiments are performed on English, we argue that the transfer of our approach to other languages is possible, given only the existence of a language-specific tokenizer and a sufficiently good SRL tagger. Furthermore, **SRLScore** offers the additional benefit of being an _interpretable_ metric, due to its composition on top of fact tuples. In comparison, metrics used for factuality evaluation that are based on the intermediate representations of language models, e.g., _generation perplexity_ Zhang et al. (2020); Thompson and Post (2020); Yuan et al. (2021), cannot present insightful reasons _why_ a particular score was achieved. Furthermore, it has been empirically demonstrated that generation-based evaluators exhibit a _self-preference_ for outputs generated by models similar to the factuality evaluator Fabbri et al. (2021); Liu et al. (2023). This makes them a questionable choice over interpretable metrics. We empirically show that the correlation of **SRLScore** with human ratings is on par with existing methods, and we perform several ablations to study the impact of algorithmic choices within our pipeline. ## 2 Related Work Automated analysis of (abstractive) summaries has become more relevant in recent years, with the influx of generic summarization systems becoming available Nallapati et al. (2016); See et al. (2017); Lewis et al. (2020). In particular, Goodrich et al. (2019) were the first to propose a reference-based estimator for the factuality of generated summaries. As mentioned, their approach is based on a tuple representation of "facts" in the generated and gold summaries. Fact tuples are extracted based on a weakly supervised end-to-end tagger and subsequently compared on the basis of matching arguments. Notably, no readily available implementation of their method currently exists. Later work has proposed alternative metrics based on textual entailment Falke et al. (2019); Mishra et al. (2021) and Question Answering (QA) Wang et al. (2020); Durmus et al. (2020), where the agreement of answers to questions on the reference and summary is used for estimating factuality. However, QA-based metrics require additional task-specific fine-tuning on generic datasets, which makes the adaptation to new domains fairly expensive. The only other work that, to our knowledge, utilizes some form of SRL-based factuality estimation is presented by Fischer et al. (2022). In comparison to **SRLScore**, their method aggregates "role buckets" at the document level, instead of creating sentence-specific fact tuples. Empirically, their implementation has lower correlation with human ratings than compared approaches, which is contrary to our own findings. Li et al. (2022) frame factuality estimation as an in-filling task, where fact statements are withheld as masked tokens in a generated summary, and a separate model is trained to predict the missing facts. Notably, this relies on the assumption that the majority of factual mistakes stems from noun phrases and entity mentions Pagnoni et al. (2021). An alternative body of literature has explored the possibility of exploiting Language Models (LMs) directly for estimating factual consistency: some works, such as BertScore Zhang et al. (2020), use LM-generated representations to generate alignments for scoring.
In comparison, PRISM Thompson and Post (2020) and BARTScore Yuan et al. (2021) directly use model perplexity as a factuality estimate. Xie et al. (2021) explore masking approaches, which fall somewhere between the works of Li et al. (2022) and BARTScore; their framing of counterfactual estimation still relies on model-based likelihood scores for computation. Figure 1: Visual explanation of **SRLScore**. An input text and its associated summary are transformed into a series of fact tuples (_SR Tuple_) through extraction from SRL (and optional co-reference) annotations. The final factuality score is computed based on the similarity of the summary facts with fact tuples generated from the input text. The majority of prior work expresses metric performance in terms of correlation with human factuality ratings. Notably, annotations exist for subsets of the popular CNN/DailyMail (Hermann et al., 2015; Nallapati et al., 2017) and XSUM summarization corpora (Narayan et al., 2018). Where Wang et al. (2020) collect user annotations from crowd workers, Fabbri et al. (2021) additionally sample expert judgments, and find that expert ratings tend to be more representative. Maynez et al. (2020) study several aspects of summarization evaluation beyond just factuality, but do not disclose the background of the annotators used for evaluation. Generally, reliably evaluating the correlation of summarization metrics with human preferences is no easy task, either: Deutsch et al. (2022) show that system-level evaluation metrics for text summarization rarely outperform simplistic metrics, such as ROUGE (Lin, 2004), to a statistically significant degree. Partially, this can be attributed to the small number of human-annotated samples available, generally fewer than 1000 distinct instances. ## 3 SRLScore Our factual consistency metric, called **SRLScore**, is implemented as a two-stage process: first, we extract fact tuples using Semantic Role Labeling (SRL) on both the source and the summary texts, and then determine a factuality score based on tuple comparison. The measure outputs human-interpretable scores between 0 and 1, where a higher score indicates greater factual consistency of a summary text. In this section, we detail the algorithmic choices and present an adaptive weighting scheme for computing the final factuality scores. ### Generating Fact Tuples with Semantic Role Labeling As Figure 1 shows, we operate on the sentence level, primarily because existing SRL tools work well on this level of granularity (Shi and Lin, 2019; Xu et al., 2021). The goal of our fact extractor is to produce _a fact database_ comprised of semantic role tuples for each input text. The primary task of SRL is to find all role-bearing constituents in a sentence and label them with their respective roles (Marquez et al., 2008). Typical semantic roles include _agent_, _patient/theme_, _recipient_, _goal_, _instrument_, _manner_, _time_, and _location_. From the many semantic labels available, we include seven roles, based on their availability in tagging schemes, to construct a fact tuple: _agent_, _negation_, _relation_, _patient_, _recipient_, _time_, and _location_. We further note that not every sentence needs to contain _all_ of these roles; absent labels are represented by _None_ in this work. Importantly, roles reveal the semantic relations between a predicate (verb) and its arguments, which implies that one can generate several fact tuples from a single sentence, depending on the number of verbs in it.
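As an illustration of this extraction step, the sketch below assembles one fact tuple from the BIO-style tag sequence that SRL taggers such as the AllenNLP model emit per verb. The mapping from PropBank labels to our seven roles shown here (e.g., ARG0 → agent, ARG2 → recipient) is an assumption for illustration purposes and only approximates the mapping detailed in our appendix.

```python
from typing import Dict, List, Optional, Tuple

# Assumed mapping from PropBank-style labels to the seven roles; the actual
# mapping used in the paper is detailed in its appendix.
ROLE_OF_LABEL = {"ARG0": "agent", "ARGM-NEG": "negation", "V": "relation",
                 "ARG1": "patient", "ARG2": "recipient",
                 "ARGM-TMP": "time", "ARGM-LOC": "location"}
ROLE_ORDER = ["agent", "negation", "relation", "patient",
              "recipient", "time", "location"]

def tags_to_fact_tuple(words: List[str], tags: List[str]) -> Tuple[Optional[str], ...]:
    """Assemble one fact tuple from a single verb's BIO tag sequence."""
    spans: Dict[str, List[str]] = {}
    for word, tag in zip(words, tags):
        if tag == "O":
            continue
        role = ROLE_OF_LABEL.get(tag.split("-", 1)[1])  # strip the B-/I- prefix
        if role is not None:
            spans.setdefault(role, []).append(word)
    return tuple(" ".join(spans[role]) if role in spans else None
                 for role in ROLE_ORDER)

# One verb frame of "Mueller gave Mary a book yesterday in Berlin".
words = ["Mueller", "gave", "Mary", "a", "book", "yesterday", "in", "Berlin"]
tags = ["B-ARG0", "B-V", "B-ARG2", "B-ARG1", "I-ARG1",
        "B-ARGM-TMP", "B-ARGM-LOC", "I-ARGM-LOC"]
print(tags_to_fact_tuple(words, tags))
# -> ('Mueller', None, 'gave', 'a book', 'Mary', 'yesterday', 'in Berlin')
```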
To illustrate an exemplary fact tuple, the extracted semantic tuple from sentence 1 in Figure 2 is (Mueller, None, gave, a book, Mary, yesterday, in Berlin). Figure 2: Examples of semantic role label annotations. Labels may remain consistent across different syntactic forms (Sentences 1 & 2). A single sentence can also include several relations at the same time (Sentence 3). ### Scoring Texts by Comparing Fact Tuples Once fact tuples for both the input and summary texts are generated, the second step in our pipeline is to compute a factual accuracy score. We implement a dynamic weighting system, which crucially improves over a naive comparison, as we empirically show in Section 4.6. Furthermore, we describe drop-in replacements for exact matching during similarity computation. Scoring Algorithm. Given an input text \(R\) and summary text \(S\), let \(F_{R}\) and \(F_{S}\) be _fact databases_ representing the semantic information contained in \(R\) and \(S\), respectively. Individual fact tuples are represented as an ordered list of fact arguments, e.g., \(f\) = \((agent\), \(negation\), \(relation\), \(patient\), \(recipient\), \(time\), \(location)\in F\). Particular arguments in a fact tuple are referred to by their index position, meaning \(agent=f^{0}\), \(negation=f^{1}\), and so on. We further assume that there exists a scoring function that expresses the _factual support of summary tuple_ \(f_{s}\), _given an input tuple_ \(f_{r}\), denoted as \(S(f_{s}|f_{r})\). To obtain a factuality score, we attempt to extract the best match \(\hat{f}_{r}\in F_{R}\) for each summary fact \(f_{s}\in F_{S}\), where \(\hat{f}_{r}\) maximizes the support score \(S(f_{s}|\hat{f}_{r})\). Importantly, we differ from, e.g., Goodrich et al. (2019), by considering the entirety of \(F_{R}\), instead of subsets that match both the agent and relation of the fact tuple. The factual accuracy is then the average across all maximized tuple scores in \(F_{S}\). With that, **SRLScore** is defined as: \[\textbf{SRLScore}(R,S):=\frac{1}{|F_{S}|}\sum_{f_{s}\in F_{S}}\max_{f_{r}\in F_{R}}S(f_{s}|f_{r}) \tag{1}\] The final part of this scoring system is the computation of the factual support \(S(f_{s}|f_{r})\). Tuples are scored by comparing the corresponding attributes of each tuple, formally: \[S(f_{s}|f_{r}):=\sum_{i}\mathbbm{1}_{f_{s}^{i}\neq None}\cdot sim(f_{s}^{i},f_{r}^{i})\cdot w_{i}, \tag{2}\] where the summation over \(i\) addresses all attributes of the fact tuples, \(\mathbbm{1}_{f_{s}^{i}\neq None}\) is an indicator function considering only non-empty arguments \(f_{s}^{i}\) (zero otherwise), and \(w_{i}\) assigns a static weight to the argument in position \(i\). Generally, it should be assumed that the weights allow for a maximum factuality score of 1, i.e., \(\sum_{i}w_{i}=1\). Finally, \(sim(f_{s}^{i},f_{r}^{i})\) is the pairwise argument similarity of \(f_{s}^{i}\) and \(f_{r}^{i}\). We consider different similarity metrics, as described in the following paragraphs. Dynamic Weighting System. The generic weighting in Equation (2) does not necessarily apply to the particular case of evaluating factual consistency in summarization, since a summary is still factually correct even if it leaves out particular aspects (e.g., dropping the date of an event) which were present in the input text. With static weights, however, absent arguments still contribute to the scoring of the tuple \(f_{s}\), which means that leaving arguments out would effectively be penalized as a factuality error.
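To make Equations (1) and (2) concrete, the following is a minimal, self-contained sketch of the scoring stage. The token-overlap similarity stands in for the exact-match/spaCy/ROUGE options discussed here, and the helper names are ours, so this should be read as an illustration rather than the reference implementation.

```python
from typing import List, Optional, Tuple

# A fact tuple: (agent, negation, relation, patient, recipient, time, location).
Fact = Tuple[Optional[str], ...]

def token_overlap(candidate: str, reference: str) -> float:
    """Unigram precision of candidate w.r.t. reference (a stand-in for ROUGE-1)."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    return sum(tok in ref for tok in cand) / len(cand) if cand else 0.0

def tuple_score(f_s: Fact, f_r: Fact, weights: List[float]) -> float:
    """Factual support S(f_s | f_r) of Eq. (2): weighted argument similarities,
    counting only arguments that are present (non-None) in the summary fact."""
    score = 0.0
    for s_arg, r_arg, w in zip(f_s, f_r, weights):
        if s_arg is not None and r_arg is not None:
            score += w * token_overlap(s_arg, r_arg)
    return score

def srl_score(facts_src: List[Fact], facts_sum: List[Fact],
              weights: List[float]) -> float:
    """SRLScore(R, S) of Eq. (1): average best-match support over summary facts."""
    if not facts_src or not facts_sum:
        return 0.0
    return sum(max(tuple_score(f_s, f_r, weights) for f_r in facts_src)
               for f_s in facts_sum) / len(facts_sum)
```

With equal weights \(w_{i}=1/7\), comparing the example tuple from above against itself yields a score of only \(6/7\), since the empty _negation_ slot receives no credit; this is precisely the omission penalty that the re-normalization introduced next removes.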
To address this issue, we introduce a weight re-normalization factor, \(W_{norm}\), that redistributes the static weights \(w_{i}\) across only those attributes that are present in the current summary fact. In particular, this also increases the penalties for actual mistakes over simple fact omission. The weight normalization is defined as follows: \[W_{norm}:=\frac{1}{\sum\limits_{i}\mathbbm{1}_{f_{s}^{i}\neq None}\cdot w_{i}} \tag{3}\] With re-normalization enabled, we replace the existing computation of \(S(f_{s}|f_{r})\) by the product \(W_{norm}\cdot S(f_{s}|f_{r})\). String Similarity Methods. We experiment with different methods to calculate the pairwise similarity \(sim(f_{s}^{i},f_{r}^{i})\): exact matching (in line with prior work), but also approximate matching functions, such as word vector similarity1 and ROUGE-1 precision Lin (2004). The vector-based and ROUGE-based similarity computations each have their own strengths. Word vectors offer the highest flexibility in terms of recognizing argument similarity, enabling semantic comparison instead of purely syntactic equivalence. ROUGE-1 similarity does not offer the same level of flexibility in terms of matching, but shines with its comparatively faster computation, while still recognizing partial matches. Footnote 1: We use spaCy’s vector similarity, see [https://SpaCy.io/usage/linguistic-features#vectors-similarity](https://SpaCy.io/usage/linguistic-features#vectors-similarity), last accessed: 2023-03-06. ### Improved Surface Form Invariance with Co-reference Resolution Since sentence-level SRL extraction misses co-references of the same entity across the text, we integrate an optional component that takes co-reference resolution into account during tuple generation. Concretely, we employ an off-the-shelf co-reference resolution tool Lee et al. (2017) to identify and store all reference clusters in an external _entity dictionary_. There, all linguistic expressions that refer to the same entity are grouped together, which allows for later disambiguation. As shown in Figure 3, if an extracted semantic role tuple contains co-references, a single fact tuple will be _expanded_ into multiple tuples, representing the Cartesian product over all synonymous entity surface forms. The key idea here is to enable a better matching of potential facts across input texts and summaries, effectively increasing the recall of matches. The disadvantage is that this significantly increases the runtime of our method, since the additional tuples in \(F_{S}\) and \(F_{R}\) undoubtedly increase the number of comparisons.
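The two extensions above can be sketched in the same style: `w_norm` computes the re-normalization factor of Equation (3), and `expand_tuple` performs the Cartesian-product expansion over co-referent surface forms. The toy `entities` dictionary is a hypothetical stand-in for the clusters produced by the actual co-reference model.

```python
from itertools import product
from typing import Dict, List, Optional, Tuple

Fact = Tuple[Optional[str], ...]

def w_norm(f_s: Fact, weights: List[float]) -> float:
    """Re-normalization factor of Eq. (3): inverse of the weight mass
    assigned to the roles actually present in the summary fact."""
    present = sum(w for arg, w in zip(f_s, weights) if arg is not None)
    return 1.0 / present if present > 0 else 0.0

def expand_tuple(fact: Fact, entity_dict: Dict[str, List[str]]) -> List[Fact]:
    """Cartesian-product expansion of one fact tuple over all co-referent
    surface forms of its arguments (cf. Figure 3)."""
    options = [entity_dict.get(arg, [arg]) if arg is not None else [None]
               for arg in fact]
    return [tuple(combo) for combo in product(*options)]

# Toy entity dictionary standing in for the co-reference clusters.
entities = {"Mueller": ["Mueller", "the author"]}
fact = ("Mueller", None, "gave", "a book", "Mary", "yesterday", "in Berlin")
for expanded in expand_tuple(fact, entities):
    print(expanded)  # one tuple per surface form of the agent
```

A re-normalized support score is then simply `w_norm(f_s, weights) * tuple_score(f_s, f_r, weights)`, reusing the helpers from the previous sketch.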
## 4 Experiments We empirically demonstrate the performance of our method through a number of experiments on two popular datasets for factual consistency evaluation, which are covered in this section. We further share implementation details and the choices made for extracting SRL tuples and co-reference clusters. In addition to the experimental analysis, we also study the behavior of **SRLScore** through a number of ablation experiments and a brief error analysis. ### Evaluation Datasets **QAGS Wang et al. (2020).** The dataset comprises two separate splits: the first contains 235 instances collected from the test split of CNN/DailyMail Nallapati et al. (2016), where each instance contains a source article and a model-generated summary using the bottom-up approach by Gehrmann et al. (2018). A secondary set contains 239 further instances from the test split of XSUM Narayan et al. (2018), with generated summaries sampled from BART Lewis et al. (2020). **SummEval Fabbri et al. (2021).** It includes synthetic summaries from 16 different abstractive and extractive models for 100 randomly selected articles from the test split of CNN/DailyMail. Unlike QAGS, which collected annotations from MTurk2, each SummEval sample was evaluated by five crowd-sourced annotators and three experts. For each summary, judges were asked to evaluate the coherence, consistency, fluency and relevance. For our evaluation, we use the expert ratings with regard to factual consistency as the gold score, based on the recommendation by Fabbri et al. (2021). Footnote 2: [https://www.mturk.com/](https://www.mturk.com/), last accessed: 2023-03-06. ### Evaluation Metrics and Significance In line with prior work, we evaluate metrics by computing the Pearson correlation (denoted as \(\rho\)) and Spearman correlation (denoted as \(s\)) between model predictions and human reference ratings. Given the limited size of all considered evaluation datasets, we further test results for significance using permutation tests Riezler and Maxwell (2005); Deutsch et al. (2021), following the recommendation of Dror et al. (2018). In all tables, \({}^{\dagger}\) denotes a significance level of 0.05 (\(p<0.05\)) and \({}^{\ddagger}\) a level of 0.01 (\(p<0.01\)). When testing significance against several systems, we further apply Bonferroni correction of the significance levels Dunn (1961). ### Implementation We use AllenNLP Gardner et al. (2018), specifically version 2.1.0, to extract semantic role labels. AllenNLP implements a BERT-based SRL tagger Shi and Lin (2019), with some modifications. The output of AllenNLP uses the PropBank convention Palmer et al. (2005); Bonial et al. (2012); Pradhan et al. (2022), which lists for each verb its permitted role labels using numbered arguments (_ARG0, ARG1,..._) instead of names, due to the difficulty of providing a small, predefined list of semantic roles that is sufficient for all verbs. Since numbered arguments are meant to have a verb-specific meaning Yi et al. (2007), our mapping between numbered arguments and semantic roles may not always be consistent. The exact mapping used in our experiments is detailed in Appendix A. For co-reference, we similarly use the model provided by AllenNLP Lee et al. (2017), which matches the output format of the SRL tagger. All experiments were carried out on a system with an Intel Xeon Silver 4210 CPU, two TITAN RTX GPUs (24 GB GPU VRAM each) and 64 GB of main memory. We run inference for the SRL model and the co-reference component on separate GPUs. We report scores of all system and baseline variants for a single random seed only. Since we are comparing provided "plug-and-play" metrics, it is reasonable to assume that these are the primary choice for others evaluating their own datasets. Particularly for **SRLScore**, we further note that, due to the system design, no fine-tuning or training is necessary. The only parameters varied during the experiments are thus the argument weights, which we describe in the following section. Figure 3: Example of the tuple expansion step through co-reference resolution. In addition to the original SR tuple, we add tuples with all possible permutations of the surface forms of the mentioned entities.
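As an aside on the significance testing described above: a paired permutation test over per-instance scores can be sketched as follows. This is a simplified illustration of the general idea; the exact resampling scheme of Deutsch et al. (2021) and our Bonferroni-corrected usage differ in detail.

```python
import numpy as np
from scipy.stats import pearsonr

def paired_permutation_test(human, metric_a, metric_b,
                            n_resamples: int = 10_000, seed: int = 0) -> float:
    """Two-sided test of H0: metrics A and B correlate equally well with the
    human ratings. Per instance, the two systems' scores are randomly swapped."""
    rng = np.random.default_rng(seed)
    human = np.asarray(human, dtype=float)
    a = np.asarray(metric_a, dtype=float)
    b = np.asarray(metric_b, dtype=float)
    observed = pearsonr(human, a)[0] - pearsonr(human, b)[0]
    hits = 0
    for _ in range(n_resamples):
        swap = rng.random(human.size) < 0.5
        diff = (pearsonr(human, np.where(swap, b, a))[0]
                - pearsonr(human, np.where(swap, a, b))[0])
        hits += abs(diff) >= abs(observed)
    return hits / n_resamples  # permutation p-value
```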
### System Variants We compare with a number of generic automatic evaluation metrics, including BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005). Besides, we also consider several metrics specifically developed for factuality estimation, which have reported prior state-of-the-art correlations. Wherever possible, we reproduce scores with the official scripts provided by the authors. The comparison is done with three variants of BARTScore (Yuan et al., 2021), two variants of CoCo (Xie et al., 2021), and two variants of ClozE (Li et al., 2022). For more details on reproducibility, see Appendix B. We chose each variant such that the highest self-reported scores of each paper on all evaluated datasets are considered. For our own method, SRLScorebase represents a default setting, assigning equal weights \(w_{i}=\frac{1}{7}\) to all attributes (_agent, negation, relation, patient, recipient, time, location_); the respective similarity function (exact match, spaCy vector, or ROUGE similarity) is chosen to maximize dataset-specific performance (see the results in Table 2). SRLScorecoref uses the same weights, with co-reference enabled. We further provide model ablations to test various specifications of our models. As we could not find an implementation of the original tuple extraction approach by Goodrich et al. (2019), we introduce SRLScoreopenie and SRLScoregoodrich as approximations of their method. Here, fact tuples are reduced to (agent, relation, patient) triplets (with equal weights \(w_{i}=\frac{1}{3}\)). We note that this is not a true equivalence to the original method, although "[i]n most English sentences the subject is the agent" (Bates and Macwhinney, 1982); in reality, a broader variety of roles may be encountered in the subject position. The same applies to our mapping between the _object_ and the _patient_ role. However, by using the same upstream labeling tool (i.e., the SRL model provided by AllenAI), we may more accurately compare the algorithmic scoring methods, independent of the annotation accuracy. We argue that our SRL-based modeling of relationship triplets allows for a better generalization beyond Wikipedia, which Goodrich et al. were using in their own experiments. The difference between SRLScoreopenie and SRLScoregoodrich lies in the implemented scoring function: the OpenIE variant employs our own scoring algorithm, whereas SRLScoregoodrich uses the preliminary filtering step defined in Goodrich et al. (2019). We do not apply a co-reference system in either of the two ablation settings. Finally, SRLScorecoref-optimized illustrates the possibility of adapting our method to a particular dataset. For this variant, we optimize the available hyperparameters (weights, scoring function, co-reference) in order to obtain the highest possible scores. ### Main Results The central evaluation results with recommended default settings are shown in Table 1.
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Metrics**} & \multicolumn{2}{c}{**QAGS-CNN/DM**} & \multicolumn{2}{c}{**QAGS-XSUM**} & \multicolumn{2}{c}{**SummEval**} & \multicolumn{1}{c}{**Avg.**} \\ \cline{2-7} & \(\rho\) & \(s\) & \(\rho\) & \(s\) & \(\rho\) & \(s\) & \(\rho\) \\ \hline ROUGE-1 (F1) & 0.34 & 0.32 & \(-\)0.01 & \(-\)0.05 & 0.13 & 0.14 & 0.15 \\ BLEU & 0.13 & 0.33 & 0.08 & 0.03 & 0.09 & 0.14 & 0.10 \\ METEOR & 0.33 & 0.36 & 0.06 & 0.01 & 0.12 & 0.14 & 0.17 \\ \hline BARTScore & 0.65 & 0.57 & 0.00 & 0.02 & 0.27 & 0.26 & 0.31 \\ BARTScorecnn & **0.73** & **0.68** & 0.19 & 0.18 & 0.35 & 0.32 & 0.42 \\ BARTScorecnn+para & 0.69 & 0.62 & 0.07 & 0.07 & 0.42 & **0.37** & 0.39 \\ CoCospan & 0.64 & 0.55 & 0.22 & 0.20 & 0.40 & 0.35 & 0.42 \\ CoCosent & 0.68 & 0.59 & 0.16 & 0.14 & 0.39 & 0.35 & 0.41 \\ ClozE-Rncoreweb.tr* & 0.66 & - & 0.32 & - & 0.47 & - & **0.48** \\ ClozE-Rnconfidence* & 0.65 & - & 0.29 & - & **0.48** & - & 0.47 \\ \hline SRLScorebase & 0.67 & 0.59 & 0.20 & 0.18 & 0.43 & 0.33 & 0.43 \\ SRLScorecoref & 0.65 & 0.58 & 0.27 & 0.26 & 0.43 & 0.32 & 0.45 \\ SRLScorecoref-optimized & - & - & **0.33** & **0.33** & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 1: Pearson (\(\rho\)) and Spearman (\(s\)) correlations of the metrics with human ratings on the evaluated datasets. Bold scores indicate the highest absolute values. For **SRLScore** variants, we report the highest scores across all similarity functions. No significant differences were found between the correlation scores of the factuality-specific metrics. \({}^{*}\): results were taken from the respective paper, as there is no existing code to reproduce their results as of now. In almost all cases, specialized factuality metrics show higher correlation than generic summarization evaluation metrics (ROUGE-1, BLEU and METEOR). Notably, despite the high increase in absolute scores, we do not always detect a significant level of improvement between factuality-specific metrics and generic metrics, particularly on QAGS-XSUM; we will discuss further implications of this in more detail later. When testing our own method, SRLScorebase, against the generic metrics, we find strongly significant improvements only for the Pearson correlation on QAGS-CNN/DM and SummEval, as well as the Spearman correlation on SummEval (\(p<0.01\), with Bonferroni correction). It should further be noted that the BARTScorecnn and CoCo results use BART models (Lewis et al., 2020) that were fine-tuned on the CNN/DailyMail corpus (respectively a variant fine-tuned on XSUM for CoCo on QAGS-XSUM); this may shift the results in favor of these methods on the particular dataset. In comparison, **SRLScore** makes no such assumptions, which may indicate a potentially stronger generalization to unseen datasets. The results in Table 1 also show that there are no significant differences between any of the factuality-specific metrics (**SRLScore**, BARTScore, and CoCo), particularly after applying the Bonferroni correction for the comparison against several methods. These insights open up discussions about the current claims of "state-of-the-art" performance, which may not be easily distinguishable on the current evaluation datasets. We admit that there is likely no trivial solution to this (besides further annotations), as the main problem seems to stem from the high variance on small sample sizes.
### Ablation Study

Given the limited expressiveness of the generic result evaluation, we perform a series of ablation studies on **SRLScore** to support the individual algorithmic choices made in our method.

Extending Tuple Attributes. We investigate the assumption that semantic representations of sentences are usually far more complicated than the simplistic view of (_agent_, _relation_, _patient_) triplets, and that errors may involve further roles. To this end, we compare SRLScore\({}_{\text{openie}}\), using a triplet representation, against SRLScore\({}_{\text{base}}\) with seven roles. The results in Table 2 confirm that extending tuples to cover more semantic roles is effective across datasets and metrics; SRLScore\({}_{\text{base}}\) scores consistently better than SRLScore\({}_{\text{openie}}\), with significant improvements primarily on SummEval (the largest considered dataset).

Performance of Similarity Functions. Also seen in Table 2 is the difference in scores across the various similarity functions. **SRLScore** achieves generally higher correlation when using vector (spaCy) or ROUGE similarity over exact matching, although not to a significant degree. These observations can be attributed to the hypothesis that abstractive entity references will not be detected by exact matching. Note that results on QAGS-XSUM are particularly affected by this, as it shows higher levels of abstraction than CNN/DM-derived resources (Wang et al., 2020; Pagnoni et al., 2021). This is also visible for the SRLScore\({}_{\text{coref}}\) variant in Table 1, which can further improve the matching of re-formulations.

Dynamic Weight Re-Normalization. We next analyze the contribution of our dynamic weighting scheme by removing the weight re-normalization \(W_{norm}\) and instead defaulting to static weights in SRLScore\({}_{\text{base}}\). The results in Table 3 demonstrate that dynamically re-distributing static weights to the present roles is very effective, although the improvements are not statistically significant (a minimal sketch of the re-normalization step is given below).

\begin{table}
\begin{tabular}{l l c c c c c c}
\hline \hline
\multirow{2}{*}{**Metrics**} & & \multicolumn{2}{c}{**QCNNDM**} & \multicolumn{2}{c}{**QXSUM**} & \multicolumn{2}{c}{**SummE**} \\
\cline{3-8}
 & & \(\rho\) & \(s\) & \(\rho\) & \(s\) & \(\rho\) & \(s\) \\
\hline
\multirow{3}{*}{SRLScore\({}_{\text{openie}}\)} & Exact & 0.59 & 0.51 & 0.09 & 0.09 & 0.34 & 0.28 \\
 & ROUGE & 0.62 & 0.56 & 0.07 & 0.07 & 0.41 & 0.32 \\
 & SpaCy & 0.59 & 0.53 & 0.13 & 0.10 & 0.37 & 0.32 \\
\hline
\multirow{3}{*}{SRLScore\({}_{\text{base}}\)} & Exact & 0.61 & 0.54 & 0.14 & 0.15 & 0.37\({}^{\dagger}\) & 0.31\({}^{\ddagger}\) \\
 & ROUGE & **0.67** & **0.59** & 0.15\({}^{\dagger}\) & 0.13 & **0.43**\({}^{\dagger}\) & 0.33 \\
 & SpaCy & 0.63 & 0.55 & **0.20** & **0.18** & 0.40\({}^{\dagger}\) & **0.34**\({}^{\dagger}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Comparison of **SRLScore** with a simplified triplet representation (SRLScore\({}_{\text{openie}}\)). Extending the fact tuples strictly improves correlation with human ratings across all similarity functions. Significance markers indicate improvements over the same similarity function of the openie variant.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
\multirow{2}{*}{**Weight Setting**} & \multicolumn{2}{c}{**QCNNDM**} & \multicolumn{2}{c}{**QXSUM**} & \multicolumn{2}{c}{**SummE**} \\
\cline{2-7}
 & \(\rho\) & \(s\) & \(\rho\) & \(s\) & \(\rho\) & \(s\) \\
\hline
Static weights & 0.59 & 0.49 & 0.09 & 0.09 & 0.38 & 0.28 \\
Dynamic weights & **0.67** & **0.59** & **0.20** & **0.18** & **0.43** & **0.33** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Correlation scores of SRLScore\({}_{\text{base}}\) with and without weight re-normalization enabled.
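As a minimal illustration of the ablated component (again our own sketch, isolating the re-normalization step already embedded in the scoring sketch shown earlier), the dynamic scheme simply re-distributes the weight mass of absent roles to the roles that are present in a given tuple:

```python
def renormalize(weights: dict, present_roles: list) -> dict:
    """Re-distribute static weights over the roles present in a tuple."""
    norm = sum(weights[r] for r in present_roles)
    return {r: weights[r] / norm for r in present_roles}

static = {r: 1 / 7 for r in ("agent", "negation", "relation", "patient",
                             "recipient", "time", "location")}
# A tuple filling only three roles: each weight of 1/7 becomes 1/3,
# so absent roles no longer dilute the similarity score.
print(renormalize(static, ["agent", "relation", "patient"]))
```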
Ablation of Goodrich Scoring Method. We finally examine the performance of our scoring system against the partial matching approach of Goodrich et al. For fairness, we compare results on the reduced triplet sets: SRLScore\({}_{\text{openie}}\) uses the presented weighting function, while SRLScore\({}_{\text{goodrich}}\) implements a scoring equivalent to that of Goodrich et al. The results in Table 4 show that the presented scoring algorithm performs better than Goodrich's approach across the different datasets, in most instances to a significant degree.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
\multirow{2}{*}{**Scoring Method**} & \multicolumn{2}{c}{**QCNNDM**} & \multicolumn{2}{c}{**QXSUM**} & \multicolumn{2}{c}{**SummE**} \\
\cline{2-7}
 & \(\rho\) & \(s\) & \(\rho\) & \(s\) & \(\rho\) & \(s\) \\
\hline
SRLScore\({}_{\text{goodrich}}\) & 0.45 & 0.38 & 0.05 & 0.07 & 0.29 & 0.24 \\
SRLScore\({}_{\text{openie}}\) & **0.62**\({}^{\dagger}\) & **0.56**\({}^{\dagger}\) & **0.13** & **0.10** & **0.41**\({}^{\ddagger}\) & **0.32**\({}^{\dagger}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Results of the ablation experiment comparing the scoring method by Goodrich et al. (2019) with our proposed scheme, based on triplet representations.

Performance of Co-reference Resolution System. The results in Table 1 reveal that the co-reference system does not always improve scores, particularly on the CNN/DailyMail-derived datasets. Moreover, the use of co-reference resolution significantly increases the processing time, as shown in Table 5. This is expected, given that there are now more fact tuples due to the _tuple expansion_, and the presented scoring method requires the comparison of each fact tuple in the summary against _all_ input text tuples. We further compare the runtime against BARTScore, which only requires a single forward pass through a neural net and can be batched easily, resulting in a 10x speed-up. In contrast, **SRLScore** requires the construction and comparison of fact tuples, which are the main contributors to the slower inference times.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
\multicolumn{2}{c}{**SRLScore**} & \multicolumn{3}{c}{**BARTScore**} \\
base & coref & base & cnn & cnn+para \\
\hline
2.35 & 19.32 & 0.22 & 0.23 & 0.23 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Average processing time (in seconds) per instance in QAGS-CNN/DM. **SRLScore** uses ROUGE similarity. BARTScore is run with a batch size of 4.

### Error Analysis

To better understand the limitations of our presented methods, we manually examine a number of instances, particularly those where there are large differences between model-generated scores and human annotations on QAGS-XSUM. Table 6 shows two instances where **SRLScore** respectively predicts a much higher and a much lower factuality score than human annotators. Notably, human raters tend to drastically reduce factuality scores in the presence of even a single mistake (what we refer to as _"strike-out scoring"_). In comparison, **SRLScore** and other factuality metrics tend to be more heavily influenced by the correctness of the _majority_ of attributes, which can be seen as a _"bottom-up scoring"_ (scores are built up from an initial factuality of zero instead of deducting from an initial score of one); a toy numerical illustration is given below. On the other hand, highly abstractive samples, which retain factuality according to human raters, may pose a challenge for tuple-based **SRLScore**. In the second example of Table 6, synonymous expressions like _step down_ instead of _resign_ cause low predicted similarity; potential solutions could be found in verb sense disambiguation (Brown et al., 2011, 2022).
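The contrast between the two aggregation philosophies can be made concrete with a toy example; the per-tuple scores below are invented for illustration and are not produced by any of the evaluated metrics:

```python
# Toy per-tuple factuality scores for a five-tuple summary in which one
# tuple is clearly wrong (hypothetical numbers for illustration only).
tuple_scores = [0.95, 0.90, 0.92, 0.10, 0.88]

# "Bottom-up" aggregation, as used by SRLScore-style metrics:
# scores accumulate from zero, so a single error is averaged away.
bottom_up = sum(tuple_scores) / len(tuple_scores)   # ~0.75

# "Strike-out" aggregation, closer to observed human behavior:
# a single severe error dominates the overall judgment.
strike_out = min(tuple_scores)                      # 0.10

print(f"bottom-up: {bottom_up:.2f}, strike-out: {strike_out:.2f}")
```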
## 5 Conclusion and Future Directions

In this work, we presented a semantically consistent metric for estimating the factual truthfulness of two pieces of text: we applied the presented metric to the problem of text summarization evaluation and demonstrated that it performs on par with existing approaches. In fact, we find that, due to the small sample sizes of the evaluation datasets, there are no significant differences between any of the considered state-of-the-art factuality estimation metrics. Our approach stands out for its relative simplicity and interpretability, owing to the intermediate representation of "fact tuples", which makes it possible for human annotators to review how or why system decisions were made. Furthermore, we have demonstrated the suitability of our approach over more naive tuple-based scoring methods through a series of ablation experiments, which also show the adaptability of our method to particular unseen settings by simply adjusting a series of parameters.

In our opinion, there are two key challenges concerning the effective deployment of **SRLScore**. First, the current implementation still suffers from impractically long runtimes for longer input texts. Notably, however, both the tuple generation and comparison stages can be parallelized, and we are currently working on improving the compute efficiency of our method. Secondly, we have seen a general trend that factuality estimation metrics score differently from human annotators, who instead put heavy emphasis on a _completely_ factual summary. We suspect that adopting a similar _strike-out scoring_ for estimation may correlate better with human ratings, although it will require sufficiently accurate taggers to ensure the correct recognition of all entities.

### Limitations

While the presented method exhibits stable correlation with human judgments on some of the evaluated datasets, there are still instances in which it predicts opposing factuality scores. It should therefore be considered an _addition_ to human evaluation, and at this point cannot fully replace it. We also want to point out that the underlying summarization datasets on which human ratings were compared are known for their own set of limitations, particularly for being fairly extractive in nature. This plays well with **SRLScore**'s estimation of matching between individual tuples extracted from single sentences; on the other hand, if summary texts contain facts derived from multiple source sentences (or undergo otherwise complex structural changes), fact tuples may be insufficient in their current form. Another limitation is the expressiveness of results on the fairly small human-annotated datasets.
Here, statistically significant differences can rarely be obtained. However, we are, to our knowledge, the first to demonstrate this finding about the (in)significance of differences between existing methods, which we consider particularly useful for future work. We further want to point out that our method was only evaluated on English datasets; we argue that it can be applied to other languages, given a similarly performing SRL labeling model. In practice, however, the availability of such models is currently limited for non-English languages.

### Ethics Statement

The paper considers the automated analysis of factuality in generated text. While we see no imminent risk in the development of our presented method, we want to point to the explicitly spelled-out limitations of the current method (see the previous section). The blind application of factuality metrics could be considered harmful in instances where the predicted scores differ strongly from human ratings. We therefore recommend that factuality metrics be employed purely as a _complementary_ evaluation, and never directly replace analysis with humans in the loop.

### Acknowledgments

We thank the anonymous reviewers for their helpful comments and suggestions. The work of Jing Fan is supported by a scholarship of the China Scholarship Council (CSC).
2308.04851
CME Propagation Through the Heliosphere: Status and Future of Observations and Model Development
The ISWAT clusters H1+H2 have a focus on interplanetary space and its characteristics, especially on the large-scale co-rotating and transient structures impacting Earth. SIRs, generated by the interaction between high-speed solar wind originating in large-scale open coronal magnetic fields and slower solar wind from closed magnetic fields, are regions of compressed plasma and magnetic field followed by high-speed streams that recur at the ca. 27 day solar rotation period. Short-term reconfigurations of the lower coronal magnetic field generate flare emissions and provide the energy to accelerate enormous amounts of magnetised plasma and particles in the form of CMEs into interplanetary space. The dynamic interplay between these phenomena changes the configuration of interplanetary space on various temporal and spatial scales, which in turn influences the propagation of individual structures. While considerable efforts have been made to model the solar wind, we outline the limitations arising from the rather large uncertainties in parameters inferred from observations that make reliable predictions of the structures impacting Earth difficult. Moreover, the increased complexity of interplanetary space as solar activity rises in cycle 25 is likely to pose a challenge to these models. Combining observational and modeling expertise will extend our knowledge of the relationship between these different phenomena and the underlying physical processes, leading to improved models and scientific understanding and more-reliable space-weather forecasting. The current paper summarizes the efforts and progress achieved in recent years, identifies open questions, and gives an outlook for the next 5-10 years. It acts as a basis for updating the existing COSPAR roadmap by Schrijver+ (2015), as well as providing a useful and practical guide for peer users and the next generation of space weather scientists.
M. Temmer, C. Scolini, I. G. Richardson, S. G. Heinemann, E. Paouris, A. Vourlidas, M. M. Bisi, writing teams: N. Al-Haddad, T. Amerstorfer, L. Barnard, D. Buresova, S. J. Hofmeister, K. Iwai, B. V. Jackson, R. Jarolim, L. K. Jian, J. A. Linker, N. Lugaz, P. K. Manoharan, M. L. Mays, W. Mishra, M. J. Owens, E. Palmerio, B. Perri, J. Pomoell, R. F. Pinto, E. Samara, T. Singh, D. Sur, C. Verbeke, A. M. Veronig, B. Zhuang
2023-08-09T10:26:23Z
http://arxiv.org/abs/2308.04851v1
# CME Propagation Through the Heliosphere: Status and Future of Observations and Model Development

###### Abstract

The ISWAT (International Space Weather Action Teams) heliosphere clusters H1 and H2 have a focus on interplanetary space and its characteristics, especially on the large-scale co-rotating and transient structures impacting Earth. Solar wind stream interaction regions, generated by the interaction between high-speed solar wind originating in large-scale open coronal magnetic fields and slower solar wind from closed magnetic fields, are regions of compressed plasma and magnetic field followed by high-speed streams that recur at the \(\sim\)27 day solar rotation period. Short-term reconfigurations of the lower coronal magnetic field generate flare emissions and provide the energy to accelerate enormous amounts of magnetised plasma and particles in the form of coronal mass ejections into interplanetary space. The dynamic interplay between these phenomena changes the configuration of interplanetary space on various temporal and spatial scales, which in turn influences the propagation of individual structures. While considerable efforts have been made to model the solar wind, we outline the limitations arising from the rather large uncertainties in parameters inferred from observations that make reliable predictions of the structures impacting Earth difficult. Moreover, the increased complexity of interplanetary space as solar activity rises in cycle 25 is likely to pose a challenge to these models. Combining observational and modeling expertise will extend our knowledge of the relationship between these different phenomena and the underlying physical processes, leading to improved models and scientific understanding and more-reliable space-weather forecasting. The current paper summarizes the efforts and progress achieved in recent years, identifies open questions, and gives an outlook for the next 5-10 years. It acts as a basis for updating the existing COSPAR roadmap by Schrijver et al. (2015), as well as providing a useful and practical guide for peer users and the next generation of space weather scientists.

Space weather; Interplanetary Space; Observations and Modeling; COSPAR Roadmap

## 1 Introduction

Our Sun is an active star that impacts modern life and society by dynamically generating large-scale structures across the heliosphere, consisting of plasma and magnetic field, that interact with Earth and other planets. The study of the influence of the Sun on interplanetary space and solar system bodies is often known as "space weather" (e.g., Wright et al., 1997; Cade and Chan-Park, 2015). Space weather poses a global threat for Earth, though countries are impacted differently depending on their latitudinal position and infrastructure. The most severe consequences come from the effects on advanced human technologies of intense geomagnetic storms, i.e., disturbances of the Earth's magnetosphere resulting from the impact of coronal mass ejections (CMEs) (e.g., Eastwood et al., 2017). These may include induced electric currents with the potential to severely disrupt power grids and degrade communication networks. Space weather can also affect cutting-edge communication, positioning, and navigation technologies. These require reliable and operational connections between ground- and space-based instrumentation to allow individual users of smartphones and other devices to navigate indoors and out, as well as to protect users of navigation products from errors.
Increasing demands on the accuracy and reliability of new technologies require a deeper knowledge and more accurate identification of the effects of space weather, including the ability to distinguish between different sources of space weather effects, such as changes in the ionosphere-thermosphere-magnetosphere coupling during space weather events, and their effects on Earth's upper atmosphere. Space agencies (e.g., ESA in Europe, NASA in the US, CNSA in China, Roscosmos in Russia, ISRO in India, and JAXA in Japan), international research unions (e.g., the Committee on Space Research (COSPAR), the International Space Environment Service (ISES), the International Space Weather Initiative (ISWI), and the Scientific Committee on Solar-Terrestrial Physics (SCOSTEP)), and the United Nations run extensive space-weather panels and programmes for enhancing awareness of, and preparedness for, strong solar and geomagnetic activity. Individual countries have invested substantial amounts of money to build forecasting capabilities designed to address their own vulnerability to space weather (e.g., Hapgood, 2017; Opgenoorth et al., 2019). There are also considerable efforts to translate results from scientific research into operational models and to train forecasters, passing on the knowledge gained to the next generation of space weather researchers. This paper briefly reviews recent progress made in the topics of interest to the ISWAT (International Space Weather Action Teams) H1+H2 Clusters and places this progress in the context of the COSPAR Space Weather Roadmap paper by Schrijver et al. (2015). In the following, we briefly explain the ISWAT initiative and structure.

### Interrelation Between the ISWAT Teams at a Glance

Space weather, with its many facets, is a highly interdisciplinary field that requires coordination among research involving different spatial and temporal regimes, starting from the source of events on the Sun (covered by ISWAT Cluster S) through the heliosphere (covered by ISWAT Cluster H, and the focus of this paper), to the vicinity of Earth (i.e., Geospace, treated by Cluster G). The H Clusters' teams focus on research and studies of the background solar wind and the propagation of transient events, as well as the mutual interactions between the various large-scale structures, with the aim of improving heliospheric models. This requires reliable input on the solar perspective from the S Clusters' teams, such as long-term solar activity (S1 Cluster summary; see the TI2 paper by Pevtsov et al. (2023)), short-term dynamic changes of the magnetic field on the Sun, and the interplay between open and closed magnetic field. Such input may be used, for example, to model the behavior of the background solar wind (S2 Cluster summary TI2 paper by Reiss et al., 2023). A goal of future heliospheric models is that they will work in real time, for example by forecasting the geoeffectiveness (i.e., the capability of causing a geomagnetic disturbance) of a CME before the eruption has actually happened on the Sun. A major challenge is that the input parameters for modeling a specific solar eruption (its speed, size, magnetic field, location, etc.) then need to be forecast prior to the eruption (see the S3 Cluster summary TI2 papers by Georgoulis et al. (2023) on forecasting, and by Linton et al. (2023) on understanding solar eruptions).
In turn, the H Cluster teams provide input on the expected impact of CMEs and SIRs (stream interaction regions; and CIRs, i.e., co-rotating interaction regions) on the Geospace system to the G Cluster teams (G1 Cluster summary TI2 paper by Opgenoorth et al. (2023) on the geomagnetic environment; G2a Cluster summary TI2 paper by Bruinsma et al. (2023) on atmospheric variability; G2b Cluster summary TI2 paper by Tsagouri et al. (2023) on observational and modeling aspects of ionospheric variability; and G3 Cluster summary TI2 papers on the near-Earth radiation and plasma environment by Zheng et al. (2023), Minow et al. (2023), and Boyd et al. (2023)). Within the H Cluster, H3 investigates the radiation environment in the heliosphere (solar energetic particles (SEPs) and Galactic cosmic rays (GCRs); see the H3 Cluster summary TI2 paper by Guo et al. (2023) and also the TI1 review paper on SEPs by Whitman et al., 2022). Finally, H4 investigates space weather at other planetary bodies. These interrelations are also depicted in the schematic overview given in Figure 1. As can be seen, the H Clusters act as a "communication link" between the S and G Clusters.

Figure 1: Schematic overview of the topics of interest, starting from the Sun through interplanetary (IP) space to arrival at Earth (SIR/CIR: stream- and co-rotating interaction region; HSS: high-speed stream; CME: coronal mass ejection; SEP: solar energetic particles; GIC: ground induced current), which are related to the ISWAT S, H, and G Clusters, together with the input information required by the H Cluster from the S Cluster and the output from the H Cluster provided to the G Cluster. This leads to a feedback loop between the Clusters.

In combination, the ISWAT Initiative -- with the different Clusters and their respective teams and overarching activities -- provides the best basis for testing theories, developing tools, and evaluating the results (research to operation--R2O; operation to research--O2R).

### The COSPAR Space Weather Roadmap: Where Do We Stand?

Extensive research in recent years has enhanced our understanding of the physical processes involved in the interaction between the solar wind and transient events, while increased computational power has enabled substantial progress in modeling the solar wind. We have not only developed computationally expensive magnetohydrodynamics (MHD) models, as suggested by Schrijver et al. (2015), but also improved empirical and analytic models. Data assimilation (DA) algorithms combining in situ and remote-sensing image data, as well as common metrics, have been developed. Together, these advances have enabled improved and more detailed insight into large-scale propagating disturbances and their impact (e.g., Mays et al., 2015; Dumbovic et al., 2018; Riley et al., 2018; Verbeke et al., 2019). However, a major weakness is a lack of coordination on the validation of models, i.e., on determining objectively how well the models perform. For example, validation efforts for individual models may use different choices of events and input data, and there are very few benchmarks that can be used to confront models with each other. This is partially due to the diversity of modeling approaches, which can make comparisons difficult. For example, most analytical models give only 1D solutions, while the high computational time and expense make it challenging for numerical MHD codes to perform the multiple runs required for a full validation. Most models do not provide predictions of the magnetic field. Thus, it is still difficult
to find the best trade-offs between model accuracy, robustness, and speed, although new numerical techniques are helping to overcome this challenge. The coupling of different codes into dedicated space-weather frameworks (see Table 1) to model the entire heliosphere demonstrates the community's efforts to combine models and exploit their individual strengths.

\begin{table}
\begin{tabular}{c|c}
NASA/CCMC & Kuznetsova \& Center (2022) \\
SWMF & Toth et al. (2005); Gombosi et al. (2021) \\
ESA/VSWMC & Poedts et al. (2020) \\
SUSANOO & Shiota et al. (2014); Shiota \& Kataoka (2016) \\
STORMS & Rouillard et al. (2020) \\
\end{tabular}
\end{table}
Table 1: Examples of space weather modeling frameworks in the US, Europe, and Asia, with links to software downloads and/or the webpage hosting the service (CCMC: Community Coordinated Modeling Center; SWMF: Space Weather Modeling Framework; ESA/VSWMC: European Space Agency/Virtual Space Weather Modeling Center; SUSANOO: Space-Weather-Forecast-Usable System Anchored by Numerical Operations and Observations; STORMS: Solar-Terrestrial Observations and Modeling Service).

In that respect, we note the importance of ensemble modeling, where the uncertainties in the input parameters of a specific model can be used to derive the probability of a range of outcomes (such as in hurricane track predictions, as recommended by Schrijver et al., 2015); this has already been explored (e.g., Mays et al., 2015; Amerstorfer et al., 2018; Weiss et al., 2021), and a minimal numerical sketch of the idea is given at the end of this subsection.

There are major challenges for further improving our models. First, current models in "forecast mode" cannot fully capture the evolution of CME magnetic fields from the eruption on the solar surface into interplanetary space. Modeling the magnetic field of a CME (often assumed to be a flux rope (FR)) sufficiently reliably to derive the impact at Earth, and especially predicting the magnetic field component \(B_{z}\), is the "holy grail" of space weather research (and prediction/forecasting). Within the ISWAT initiative, work on this problem spans expertise in Clusters S3 and H2.

Second, the correct and accurate (i.e., validated) modeling of the background solar wind is still an outstanding issue, which is the topic of Cluster H1. Besides CME events interacting with the ambient solar wind flow, recent results show that even the quiet solar wind flow itself has a transient component (e.g., Bourouaine et al., 2020). We therefore need to better understand the solar wind as a time-dependent outflow. Simulations of CME propagation are only as precise as the accuracy of the background flow allows.

Third, the thorough validation of solar wind models poses a problem, since there are only limited locations where solar wind measurements are available to compare with model outputs, which usually cover large regions of the heliosphere. The limited measurements restrict validation procedures and prevent the skill of a model from being reliably quantified.

Fourth, solar activity changes on short-, mid-, and long-term scales (see Cluster S1), requiring dynamic adjustments of model parameters. For example, default model parameters derived through statistical studies of cycle 23 need to be adapted when applied to events during solar cycle 24. A drop in the magnetic field and heliospheric pressure (see e.g., Yermolaev et al., 2022) during the weaker solar cycle 24 led to a cascade of reactions, such as an over-expansion of CMEs in the heliosphere that changed their propagation behavior and the formation of shocks (see e.g., Gopalswamy et al., 2015; Lugaz et al., 2017). In addition, cycle 24 revealed a more complex coronal magnetic field, leading to more pseudostreamer contributions and, hence, CME trajectories directed out of the ecliptic (see e.g., Jian et al., 2019).

As can be seen, there are still many open scientific questions related to advancing models for CME and solar wind forecasting. _There is not a single model or framework currently available that outperforms the others, and each model shows strengths and weaknesses in different aspects._
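As a concrete illustration of the ensemble idea referenced above, the following minimal sketch (our own; the constant-speed "forward model", the speed value, and the uncertainty range are placeholders, not any operational system) perturbs a CME's initial speed within an assumed observational uncertainty and derives an arrival-time distribution:

```python
import random
import statistics

AU_KM = 1.496e8  # Sun-Earth distance in km

def toy_arrival_time_hours(v_kms: float) -> float:
    """Placeholder forward model: constant-speed Sun-to-Earth transit.

    A real ensemble would call a drag-based or MHD model here.
    """
    return AU_KM / v_kms / 3600.0

# Hypothetical CME with an initial speed of 800 +/- 100 km/s
# (both numbers invented for illustration).
random.seed(42)
speeds = [random.gauss(800.0, 100.0) for _ in range(200)]
arrivals = sorted(toy_arrival_time_hours(v) for v in speeds)

median = statistics.median(arrivals)
lo, hi = arrivals[int(0.16 * len(arrivals))], arrivals[int(0.84 * len(arrivals))]
print(f"median arrival {median:.1f} h; 68% of members within {lo:.1f}-{hi:.1f} h")
```

The point of the exercise is that the spread of the ensemble members, not just the single best-estimate run, is what allows a probabilistic forecast to be issued.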
### General Methodology

There are a number of different types of space weather models in the heliospheric domain that are designed to provide specific types of predictions. For example, models may assume different CME structures and use different criteria to assess the impact of a CME at a target such as Earth. Therefore, caution is advised to ensure that appropriate parameters are considered when comparing model or forecast outputs with actual measurements.

Hit/miss "categorical" forecasts are concerned with predictions of the arrival or non-arrival of a CME or SIR structure at a given target location. Additionally, CME propagation models that do not describe the internal magnetic structure of CMEs can be used to predict the time of arrival (ToA) of the CME (most commonly defined as the arrival time of the CME-driven shock, depending on the specifics of the model), but not the arrival of the ejecta or the (e.g., geomagnetic) impact of the CME. This is true for both the empirical/analytical models and the MHD-based cone CME models that are widely employed for forecasting due to their robustness and the relatively low computational resources required (e.g., Pizzo et al., 2011). On the other hand, models describing the CME internal magnetic structure (generally in the form of various magnetic FR or spheromak models) are able to distinguish between the ToA of the CME-driven shock and the ToA of the ejecta. In addition, some models also predict the speed on arrival (SoA) and density on arrival (DoA) for both CMEs and SIRs, the major structures contributing to the space weather impact on planetary magnetospheres via compression mechanisms resulting from increased dynamic pressure (see Cluster G). The prediction can be provided in the form of a single value (e.g., as provided by drag-based and other analytical CME models) or in the form of a time series at a given target (e.g., from MHD models or the OSPREI suite of Kay et al., 2022). Additionally, CME propagation models that differentiate between the shock, sheath, and ejecta components of a CME can provide time series predictions of the magnetic-field components, including the \(B_{z}(t)\) component that is most important for assessing geoeffectiveness, as it is mainly responsible for erosion of the magnetospheric field (see e.g., Pal et al., 2022). Time series of other parameters contributing to the interplanetary evolution of CME structures, such as the plasma beta (requiring estimates of the plasma temperature and density), may also be provided, together with the duration of the perturbation, which is important in determining the space weather impact of an interplanetary structure. Predictions of the shock, sheath, and ejecta durations at a given target typically require the use of magnetised CME MHD models.
Little emphasis has been put on the modeling and prediction of the ejecta wake duration so far, with only exploratory studies based on MHD models having been performed (e.g., Scolini et al., 2021).

Increasing efforts have been devoted to reducing the computation time of CME and global background solar wind models to less than a day, so that they may contribute to daily forecasts. The extensive use of code parallelization allows models to run in parallel on a few tens to hundreds of processing cores on computer clusters of various sizes, thereby speeding up the computation. Specific approaches considered include: coupling between empirical coronal models and MHD heliospheric models (e.g., Odstrcil, 2003; Poedts et al., 2020), tomographic methods (which can also be used to propagate the background magnetic field and to drive MHD models without the need for other CME parameterizations, e.g., Bisi et al., 2015; Jackson et al., 2020; Gonzi et al., 2021, and references therein), grid adaptation techniques such as adaptive mesh refinement (AMR) or r-AMR (Verbeke et al., 2022), implicit solvers (Mikic et al., 2018; Poedts et al., 2020), and interpolation from multi-1D solvers (MULTI-VP and the Alfven-wave-driven solar wind model AWSoM-R; Pinto and Rouillard, 2017; Huang et al., 2020). The ENLIL (Odstrcil and Pizzo, 1999a) and EUHFORIA (European Heliospheric Forecasting Information Asset; Pomoell and Poedts, 2018) models have been the work-horses of the space-weather community due to their adaptability, usability, and useful performance (see more details on CME propagation models in Section 4.3). _It is important to note that every model makes assumptions that may differ and uses numerical, analytical, or empirical techniques or inputs that naturally introduce simulated behaviors of varying degrees of physical accuracy._

### Availability of Observational Data

Observations are crucial in space weather, not just to efficiently monitor the Sun and heliosphere and detect sudden events, but also to provide the statistics needed to improve our understanding of the underlying physics and to better constrain and improve models. In recent years, a plethora of satellite missions have provided valuable data for space weather research, including SOHO (Solar and Heliospheric Observatory; Domingo et al., 1995), Wind (Ogilvie et al., 1995), ACE (Advanced Composition Explorer; Stone et al., 1998), DSCOVR (Deep Space Climate Observatory; Burt and Smith, 2012), GOES, Proba-2 (Santandrea et al., 2013), the twin STEREO (Solar Terrestrial Relations Observatory; Howard et al., 2006) spacecraft, and SDO (Solar Dynamics Observatory; Pesnell et al., 2012). Promising for enhancing our knowledge are the recently launched Parker Solar Probe (PSP; Fox et al., 2016) and Solar Orbiter (SolO; Muller et al., 2020) missions. PSP is providing key in situ data in the inner heliosphere, extending down to the solar corona, that will improve our understanding of the evolution of solar wind structures as they move out from the Sun, while SolO, in addition to also providing in situ measurements in the inner heliosphere, will provide images from out of the ecliptic that will extend the coverage of magnetograms to polar regions. In the frame of ESA's Space Safety Programme, the future operational space weather mission _Vigil_ is planned to be launched in 2029.
_Vigil_ will be located permanently at the Lagrange point L5 and is designed as an operational space weather mission that will stream a constant feed of near-real-time data on potentially hazardous solar activity before it comes into view from Earth. _Vigil_ would help to overcome the drawback that, at present, measurements of solar surface magnetic fields are largely confined to the visible hemisphere, by extending the region of surface magnetic field observations that can be fed into the models (see Section 4). Schrijver et al. (2015) explicitly mention that extending solar magnetic field coverage will improve multi-day forecasts of individual space weather events. Synchronic real-time magnetograms, as opposed to time-delayed synoptic maps, will be key for better global modeling of the magnetic field (Caplan et al., 2016; Jeong et al., 2020). Still, however, there are no plans for farside magnetographs, and we must do the best we can with helioseismology and ADAPT (Air Force Data Assimilative Photospheric flux Transport) approaches (Arge et al., 2010). It is important to point out that many highly used missions (e.g., SOHO, ACE, Wind, SDO, STEREO) are aging and that attention needs to be paid to potential losses of critical parts of our heliospheric observatory.

Complementary data are also available from ground-based facilities, such as magnetograms from the GONG network, radio observations from the Worldwide Interplanetary Scintillation Stations (WIPSS) Network (e.g., Bisi et al., 2016) and modern radio telescopes such as the Low Frequency Array (LOFAR) and the Murchison Widefield Array (MWA) (see also, e.g., the TI1 papers by Chashei et al., 2022; Chhetri et al., 2022; Fallows et al., 2022; Iwai et al., 2022, and references therein, as well as the LOFAR for Space Weather (LOFAR4SW) project), white-light coronagraph data for the low corona from the Mauna Loa Solar Observatory (MLSO), and high-resolution solar images and spectropolarimetry from the Daniel K. Inouye Solar Telescope (DKIST; Rimmele et al., 2020). Radio and interplanetary scintillation (IPS) observations are a promising approach for the future (see also Shaifullah et al., 2020), in particular for probing latitudinal variations of the solar wind (e.g., Sokol et al., 2015; Porowski et al., 2022, and references therein). The magnetic field of CMEs can also be tracked through radio observations of Faraday rotation (e.g., Jensen et al., 2010, 2013; Bisi et al., 2016; Wood et al., 2020; Kooi et al., 2022), with radio telescope systems such as LOFAR, the Karl G. Jansky Very Large Array (VLA), Green Bank, the MWA, and the future Square Kilometre Array Observatory (SKA). In comparison to space missions, ground-based observatories allow for bigger installations with higher resolution and regular maintenance. _Ensuring continuing support of ground- and space-based infrastructures for space weather observations is crucial: it facilitates the development of more diverse data-driven codes that include DA, as recommended by Schrijver et al. (2015)._

### H1+H2 Cluster Activities

In the following, we describe the H1+H2 Cluster activities and the related open questions on the way towards improving space weather forecasts. An overview of the various large-scale interplanetary structures driving space weather is given in Section 2. SIRs and CIRs, the main contributors to moderate-to-strong space-weather disturbances at Earth, are not fully understood and raise many open questions, which are presented in Section 3.
CMEs, the main source of strong-to-severe space weather disturbances, and the corresponding modeling efforts are described in more detail in Section 4. Various interaction scenarios between these different types of large-scale solar wind structures are given in Section 5. A recent NASA-sponsored Gap Analysis from the Johns Hopkins Applied Physics Laboratory, led by A. Vourlidas, provides valuable future prospects for model development and improvement and is presented in Section 6. In Section 7, we give some closing thoughts and point out paths forward.

## 2 Structure of Interplanetary Space throughout the Heliosphere

In this section, we introduce the basic properties of the solar wind and its various large-scale structures, including transient CMEs, out to approximately the orbit of Mars. It is not intended to be a comprehensive review of this topic (for such reviews see, e.g., Cranmer et al., 2017; Cranmer & Winebarger, 2019; Richardson, 2018; Owens, 2020; Luhmann et al., 2020; Zhang et al., 2021; Temmer, 2021; Gopalswamy, 2022).

### Large Scale Structures in the Solar Wind

The solar wind, formed from the supersonic expansion of the solar corona (e.g., Parker, 1958; Cranmer & Winebarger, 2019), is a plasma consisting predominantly of electrons and protons, with smaller contributions from helium and heavier ions (e.g., von Steiger et al., 2000). The solar wind flowing nearly radially away from the Sun drags out coronal magnetic field lines that, because of the solar rotation at their footpoints, form an approximately Archimedean spiral configuration in which the interplanetary magnetic field (IMF) is more (less) tightly wound in slower- (higher-) speed solar wind (e.g., Owens & Forsyth, 2013, and references therein). The coronal source of the solar wind rotates approximately every 27 days as seen by an Earth observer, while it takes about 4 days for the radially flowing solar wind plasma to reach 1 AU. This combination produces a local IMF, part of a global heliospheric field configuration with a spiral shape, oriented at about 45 degrees from the radial direction at 1 AU (Parker, 1961); a short numerical sketch of this spiral geometry is given below.

The earliest observations of the solar wind (Snyder et al., 1963) revealed that the large-scale solar wind is structured into streams of higher-speed solar wind associated with open field lines originating in coronal holes (e.g., Krieger et al., 1973; see also Section 3), interspersed with intervals of slower, denser wind. The origin of the slow solar wind is still unclear, but it probably has mixed origins in predominantly closed coronal magnetic structures that tend to lie below streamers at the Sun, including the streamer belt mapping to the heliospheric current sheet (HCS). Recent PSP observations show that a highly structured slow solar wind can also emerge from within coronal holes (Bale et al., 2019). There is still a debate about the actual origin of the open flux and about why some of the open flux at the Sun appears to be "missing" compared to estimates based on in situ observations in the solar wind (e.g., Linker et al., 2017, 2021).
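As a concrete companion to the spiral geometry described above, the following minimal sketch (our illustration; the 27-day rotation period and the sample speeds are round numbers, not fitted values) evaluates the Parker garden-hose angle \(\psi\), with \(\tan\psi=\Omega r/V_{sw}\):

```python
import math

OMEGA_SUN = 2.0 * math.pi / (27.0 * 86400.0)  # ~27-day solar rotation, rad/s
AU_KM = 1.496e8                               # 1 AU in km

def parker_spiral_angle_deg(v_sw_kms: float, r_au: float = 1.0) -> float:
    """Garden-hose angle of the IMF from the radial direction.

    tan(psi) = Omega * r / V_sw, assuming a rigidly rotating source
    and a constant, purely radial solar wind speed.
    """
    return math.degrees(math.atan(OMEGA_SUN * r_au * AU_KM / v_sw_kms))

# Slower wind winds the field more tightly than faster wind:
for v in (350.0, 450.0, 700.0):
    print(f"V = {v:.0f} km/s -> spiral angle ~{parker_spiral_angle_deg(v):.0f} deg at 1 AU")
```

For a wind speed of about 400 km/s, the angle comes out close to the canonical 45 degrees at 1 AU quoted above, and it increases with heliocentric distance.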
Typical properties of high-speed stream (HSS) coronal hole flows at 1 AU based on spacecraft observations (e.g., Ebert et al., 2009; Owens, 2020) include speeds of \(\sim\)500-800 km s\({}^{-1}\), densities of \(\sim\)2-4 cm\({}^{-3}\), magnetic field strengths of \(\sim\)3-4 nT, and proton temperatures of \(\sim\)2-3\(\times\)10\({}^{5}\) K. In slow streamer-belt solar wind, the corresponding values are: speeds of \(\sim\)300-400 km s\({}^{-1}\), densities of \(\sim\)5-10 cm\({}^{-3}\), magnetic field strengths of \(\sim\)4-8 nT, and proton temperatures of \(\sim\)0.5-1\(\times\)10\({}^{5}\) K (Schwenn, 2006; Yermolaev et al., 2009). The solar wind speed is relatively independent of the heliocentric distance, but the other parameters depend inversely on some power of it. Ulysses observations indicate that the magnetic field strength appears to be latitude-independent (Smith & Balogh, 1995), suggesting that significant non-radial expansion of the solar wind occurs. We also note that "typical" properties might vary, especially when considering different solar cycles and different epochs of a solar cycle (see more details in Section 2.2).

The interaction between an HSS and the preceding slower solar wind forms a region of compressed plasma at the leading edge of the HSS that corotates with the Sun (Figure 2). Such structures are termed "co-rotating interaction regions" (CIRs), though the term "stream interaction region" (SIR) has also been introduced to indicate an interaction region that is only observed on one rotation (e.g., Jian et al., 2006). However, the terms are also used interchangeably. Figure 2 shows the typical variations in the solar wind parameters at \(\sim\)1 AU associated with CIRs, including enhancements in the plasma density, magnetic field intensity, and proton temperature, and a deflection in the solar wind flow direction. CIRs will be discussed in more detail in Section 3. See also Richardson (2018) for a recent review of solar wind stream interaction regions throughout the heliosphere.

Figure 2: Schematic of two high-speed streams co-rotating with the Sun and the associated variations in several plasma parameters at 1 AU: thermal temperature (\(V_{T}\)); magnetic field fluctuation level (\(\sigma_{x}\)); solar wind speed (\(V_{W}\)); density (\(N\)); magnetic field intensity (\(B\)); and transverse component of the solar wind velocity (\(V_{\phi}\)). The regions indicated are: the unperturbed slow solar wind (S), compressed, accelerated slow solar wind (S'), compressed, decelerated fast solar wind (F'), unperturbed fast solar wind (F), and a rarefaction (R). S' and F' form the interaction region, and the stream interface is at the S'-F' boundary. Dotted lines indicate magnetic field lines in the slow and fast solar wind that thread into the interaction region beyond 1 AU (Belcher & Davis, 1971).

Transient structures associated with CMEs at the Sun form the other major component of the solar wind. CMEs are identified as bright, outwardly-propagating structures in white-light coronagraph images. They carry an enormous mass of coronal material and an embedded magnetic field that is stronger than that in the background solar wind. Because of this, they quickly expand in both the lateral and radial directions (e.g., Scolini et al., 2020). This strongly influences their 2-D appearance in white-light image data and, hence, the derivation of their propagation speed and width, posing a challenge for obtaining accurate inputs for space weather models. CME-associated eruptions are often evident in other remote-sensing observations (e.g., extreme-ultraviolet (EUV) and X-ray low-coronal signatures, and radio signatures), providing critical complementary information on the erupted structures (e.g., Hudson & Cliver, 2001; Palmerio et al., 2017).
Taken together, these signatures, when indicative of a frontside, Earth-directed CME, can provide a lead time of, at best, two to three days before arrival at Earth (see Cluster S3). Forecasting when a solar active region will erupt, and predicting the properties of the resulting CME from solar surface structures prior to the eruption, is itself a major space weather challenge, as discussed in the TI2 paper by Georgoulis et al. (2023). Occasionally, "stealth" or "stealth-like" CMEs are observed in coronagraphs that have weak or no eruptive signatures in the low corona (Robbrecht et al., 2009; Palmerio et al., 2021c).

Figure 3 shows a schematic of a CME propagating out through the solar wind. When observed in situ, a CME is often referred to as an "interplanetary" CME (ICME; e.g., Rouillard, 2011). Since the link between CMEs at the Sun and ICMEs in the solar wind is now firmly established, for example from STEREO observations (e.g., Mostl et al., 2009), it is clear that they are the same physical phenomenon, namely a magnetized plasma structure ejected from the Sun. Nevertheless, both CME and ICME are frequently used in the literature to distinguish between CMEs imaged by remote-sensing instruments, such as coronagraphs (revealing global properties), and the related structures observed in situ (revealing local properties). With this differentiation by observing technique, the terms CME and ICME refer to different geometries or scales, and may also refer to different evolutionary stages (though not necessarily, considering heliospheric image data or in situ measurements from spacecraft orbiting close to the Sun). Throughout this paper, we use the term CME for both the imaged and the in situ observed cases.

Figure 3: Schematic of the structure of a CME and its upstream shock, including a magnetic FR, plasma characteristics (indicated by yellow shading) that differ from those of the ambient solar wind plasma, and counterstreaming suprathermal electron signatures (Zurbuchen & Richardson, 2006).

Figure 4 shows the relation between density structures identified from in situ and white-light information.

Figure 4: Relating CME density structures from white-light image data, covering a distance up to about 0.03 AU, to in situ plasma and magnetic field measurements at a distance of 0.53 AU. In both data sets we identify the magnetic ejecta region (4) driving several distinct upstream regions: shock (1), sheath (2), and leading edge (3). The image is adapted from Temmer & Bothmer (2022).

The in situ signatures of CME passage may include first the detection of a forward shock, if the CME speed is sufficiently high compared to the surrounding wind (see also Section 3.3). This may be followed by a sheath characterized by a pile-up/compression region, then by another density enhancement region called the leading edge, and then by the magnetic ejecta (sometimes referred to as the "driver" of the preceding shock/compression), which is identified by a number of characteristics that differ from those of the background solar wind, due to its origin in an eruptive event (e.g., Wimmer-Schweingruber et al., 2006; Zurbuchen & Richardson, 2006; Kilpua et al., 2017a; Temmer & Bothmer, 2022, and references therein). These characteristics include unusual solar wind charge states and composition (e.g., Lepri et al., 2001; Gruesbeck et al., 2011; Zurbuchen et al., 2016; Owens, 2018; Rivera et al., 2019), bidirectional suprathermal electron heat fluxes, indicating the presence of looped field lines rooted at the Sun (e.g., Gosling et al., 1987), a monotonic speed decrease (consistent with expansion), low densities and proton temperatures relative to the ambient wind (e.g., Richardson & Cane, 1995), often leading to a low plasma beta indicating a magnetically-dominated structure, and an elevated helium abundance. Traditionally, ejecta showing a combination of low
density, low temperature, and enhanced, slowly-rotating magnetic fields have been known as "magnetic clouds" (MCs; Burlaga et al., 1981), while structures exhibiting magnetic field rotations but lacking some of the typical plasma signatures have been called "MC-like" structures (Cane & Richardson, 2003; Lepping et al., 2005). More recently, the terms "magnetic ejecta" (ME; Winslow et al., 2015) and "magnetic obstacle" (MO; Nieves-Chinchilla et al., 2018) have been introduced to refer to ejecta signatures lacking clear rotations in the magnetic field components, with or without associated solar wind plasma observations. Smooth rotations of the magnetic field components have often been interpreted as indicative of possible magnetic flux-rope (MFR) or MFR-like structures (Bothmer & Schwenn, 1998). Such CMEs have received considerable attention because their magnetic field configurations are arguably simpler to model and may be consistent with the helical structures occasionally present in coronagraph observations of CMEs. However, only a fraction of CMEs include in situ MC signatures, and this fraction appears to vary with the solar cycle, from the majority of CMEs at solar minimum to as little as \(\sim 20\%\) around solar maximum (Richardson & Cane, 2004). In the following, we will use the general term "ejecta" to refer to the in situ counterparts of CMEs when not distinguishing among the different ejecta sub-classes. The final structure that may be encountered in situ is a "wake" following the ejecta. The features of CMEs will be discussed further in Sections 4 and 5.

The speeds of CMEs observed in situ cover a wide range. Many CMEs have speeds similar to the ambient solar wind, suggesting that they are carried out with the ambient flow, while a few have speeds exceeding 1000 km s\({}^{-1}\) (e.g., Richardson & Cane, 2010). There is evidence (e.g., Cane et al., 1986; Gopalswamy et al., 2000) that, as they move away from the Sun, fast CMEs tend to decelerate, even well beyond 1 AU (e.g., Richardson, 2014; Witasse et al., 2017), tending towards the ambient solar wind speed, while slow CMEs are accelerated by the ambient solar wind. This may be accounted for by a so-called "drag force" that is incorporated into many analytical CME propagation models (see Section 4 for more details; a minimal sketch of such a drag-based model is given below). As noted above, when traveling faster than the background solar wind speed, a CME can generate a shock wave. Particles accelerated by CME-driven shocks make a major contribution to SEP events, in addition to particles accelerated by solar flares (see the Cluster H3 and TI2 paper by Guo et al. (2023) for more details about SEPs). Also, the intensity of an SEP event tends to be correlated with the speed of the associated CME observed by coronagraphs, and hence many current SEP prediction models, reviewed by Whitman et al. (2022), require such CME observations as an input.
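The drag-based picture above can be made concrete with a minimal numerical sketch of a drag-based model (DBM) in the spirit of the analytical models mentioned here; the drag-parameter value and initial conditions below are illustrative placeholders, not recommended forecast settings.

```python
AU_KM = 1.496e8          # Sun-Earth distance in km
RSUN_KM = 6.96e5         # solar radius in km

def dbm_transit(v0_kms: float, w_kms: float,
                gamma_per_km: float = 0.2e-7,
                r0_rsun: float = 20.0, dt_s: float = 600.0):
    """Integrate dv/dt = -gamma * (v - w) * |v - w| from r0 out to 1 AU.

    v0: initial CME speed; w: ambient wind speed; gamma: drag parameter
    (of order 1e-7 per km, depending on CME mass and ambient density).
    Returns (transit time in hours, arrival speed in km/s).
    """
    r, v, t = r0_rsun * RSUN_KM, v0_kms, 0.0
    while r < AU_KM:
        dv = v - w_kms
        v -= gamma_per_km * dv * abs(dv) * dt_s
        r += v * dt_s
        t += dt_s
    return t / 3600.0, v

# A fast CME decelerates towards the ambient wind speed ...
print(dbm_transit(v0_kms=1200.0, w_kms=450.0))
# ... while a slow CME is dragged up towards it.
print(dbm_transit(v0_kms=350.0, w_kms=450.0))
```

In operational use, the drag parameter and ambient wind speed are the dominant sources of uncertainty, which is why such models are frequently run in the ensemble mode discussed in Section 1.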
### Solar Cycle Variations

The large-scale structure of the solar wind is profoundly influenced by the \(\sim\)11-year solar activity cycle. Around the minimum of a solar cycle, coronal holes tend to dominate, expanding from the polar regions to equatorial locations. Solar wind HSSs originating from coronal holes located near the solar equator, and the associated CIRs formed in front of them, then become the source of the recurrent disturbances of Earth's magnetosphere and ionosphere (Verbanac et al., 2011). Figure 5 (from McComas et al., 2008) shows observations from the Ulysses spacecraft, which probed the solar wind up to high latitudes, of the latitudinal structure of the solar wind in a polar plot of the solar wind speed.

Figure 5: Polar plots of the solar wind speed during Ulysses' three orbits of the Sun, showing fast solar wind at high latitudes, slow solar wind at low latitudes, and alternating fast and slow solar wind at mid latitudes during the first (left) and third (right) orbits, around solar minimum. Solar wind speeds are more variable in latitude during the second orbit (centre), around the maximum of solar cycle 23. Red/blue colours represent the IMF direction away from/towards the Sun. Representative observations from SOHO and MLSO illustrate the differences in the streamer belt configuration for each orbit (McComas et al., 2008).

Observations from the first Ulysses orbit (left panel), made near solar minimum, show high-speed flows at higher latitudes and slower flows at low latitudes above the streamer belt, which is evident in the coronal image from the MLSO. At low latitudes, there are also intermittent intervals of higher-speed flows, predominantly associated with equatorward extensions of polar coronal holes or low-latitude coronal holes. Embedded in the streamer belt is a large-scale current sheet, the HCS, that separates oppositely-directed magnetic fields from the two polar hemispheres of the Sun (e.g., Smith, 2001); the red/blue color of the speed plot shows the outward/inward magnetic field directions in each hemisphere. Such a latitudinal organization of solar wind speeds may also be inferred from IPS observations (e.g., Rickett & Coles, 1991; Manoharan, 2012; Tokumaru et al., 2021); IPS will be discussed further below. The middle panel of Figure 5 shows similar observations from the second Ulysses orbit, during a period of high solar activity. Here, the solar wind speed and magnetic field direction are highly variable in latitude, due to the presence of CMEs propagating away from the Sun over a wide range of latitudes and the higher inclination (tilt angle) of the HCS, resulting from the dominant contribution to the IMF from active regions and the weakening of the polar coronal holes. The average solar wind speed is also lower than during solar minimum. During the third Ulysses orbit (right panel), again at near-solar-minimum conditions, the large-scale organization of the solar wind speed with latitude returned, but with the magnetic field polarities in each hemisphere reversed.

Figure 6 shows the relative occurrence time of CME-associated structures, co-rotating HSS, and slow solar wind at Earth in 1964-2021, averaged over six-Carrington-rotation intervals. These results are based on a visual inspection of OMNI solar wind observations and other data, and are updated from Richardson et al. (2002) and Richardson & Cane (2012). Figure 6 also illustrates the variation in solar wind structure with the solar cycle. The occurrence of CMEs tends, like the CME rate (e.g., Yashiro et al., 2004; Robbrecht et al., 2009), to follow solar activity levels.
Co-rotating HSS remain present throughout the solar cycle but tend to be predominant during the declining and minimum phases, as does slow solar wind (Kamide et al., 1998; Verbanac et al., 2011). There are also clear cycle-to-cycle variations in Figure 6, with a weakening observed for cycle 24. Solar cycle 24 showed a clear drop in all parameters by 20-40% compared to previous cycles (Yermolaev et al., 2021, 2022). Recent studies showed that this might be related to the characteristics of CMEs occurring in different cycles (Bilenko, 2022). Especially for modeling, these cycle-to-cycle variations of the solar wind need to be taken into account: strong variations definitely affect model performance, as the boundary and initial conditions change from epoch to epoch.

Figure 6: Sunspot number (top panel) and the percentage of time the solar wind at Earth is composed of CME-associated structures (e.g., post-shock flows, CMEs), co-rotating HSS, and slow solar wind, for 1964-2021, based on visual examination of OMNI solar wind data and other data sets, as discussed in, and updated from, Richardson & Cane (2012). The bottom panel shows the time when the solar wind classification could not be determined, predominantly due to data gaps. Note that the occurrence of CME-related flows tends to follow the solar activity cycle, while CIRs are most prominent during the declining and minimum phases of the cycle, though they are present throughout the cycle. The percentage of the time when the classification is judged to be unclear is largely based on data availability, such as in the 1980s to mid 1990s, when solar wind data were only available when the measuring spacecraft, IMP 8, was in the solar wind.

Figure 7: A categorization, based on identifying the characteristic features of solar wind plasma parameters in different types of solar wind, of the OMNI data in 1963-2013 into four types of solar wind: ejecta (i.e., CMEs, blue), coronal-hole-origin plasma (red), streamer-belt-origin plasma (green), and sector-reversal-region plasma (purple). The white curve is \(100-0.2\times\) the sunspot number, i.e., the sunspot number is inverted here compared to Figure 6. White vertical bands are intervals with insufficient data (Xu & Borovsky, 2015).

Methods of "automated" solar wind structure identification, based on combinations of selected solar wind parameters, have also been proposed (e.g., Neugebauer et al., 2003; Zhao et al., 2009; Xu & Borovsky, 2015), though Neugebauer et al. (2016) note that the classifications provided by the three schemes they considered were only in agreement 49% of the time.
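To give a flavor of such threshold-based schemes, the sketch below classifies individual solar wind samples using made-up illustrative thresholds, loosely inspired by the typical parameter ranges quoted in Section 2.1; real schemes such as that of Xu & Borovsky (2015) use more elaborate, carefully calibrated criteria, so this should not be mistaken for any published algorithm.

```python
from typing import NamedTuple

class SolarWindSample(NamedTuple):
    v: float      # bulk speed [km/s]
    n: float      # proton density [cm^-3]
    t: float      # proton temperature [K]
    beta: float   # plasma beta

def classify(sample: SolarWindSample) -> str:
    """Toy rule-based classification; all thresholds are illustrative only."""
    # Ejecta: cold, magnetically dominated plasma (low beta, low T)
    if sample.beta < 0.3 and sample.t < 5.0e4:
        return "ejecta (CME)"
    # Coronal-hole wind: fast, hot, tenuous
    if sample.v > 500.0 and sample.t > 1.5e5:
        return "coronal-hole wind"
    # Dense, slow wind near the heliospheric current sheet
    if sample.v < 400.0 and sample.n > 8.0:
        return "sector-reversal-region plasma"
    return "streamer-belt wind"

print(classify(SolarWindSample(v=650.0, n=3.0, t=2.5e5, beta=1.2)))
print(classify(SolarWindSample(v=380.0, n=12.0, t=0.6e5, beta=2.0)))
```

The low mutual agreement reported by Neugebauer et al. (2016) is unsurprising in this light: each scheme draws such boundaries differently, and real solar wind intervals frequently straddle them.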
Recent movies of heavily-processed coronagraph images offer a tantalizing view of a complex, structured solar wind (DeForest et al., 2018). Small-scale solar wind structuring may also affect CME propagation itself, as CMEs tend to adjust to the ambient solar wind speed and IMF; this may deform the CME front, and such local structures sampled by in situ data may influence statistical results. While interesting in their own right, such small-scale structures will not be discussed further in this section. Much of our knowledge of the structure and time variation of the solar wind is based on observations from heliospheric spacecraft. However, these have only probed limited regions of the heliosphere (Verscharen et al., 2019), in some cases only at certain solar activity levels. Also, with the notable exception of Ulysses, these spacecraft generally remain close to the ecliptic plane, where only a limited (\(\sim\pm 7^{\circ}\)) sampling of the latitudinal structure of the solar wind is provided by the inclination of the solar equator relative to the ecliptic. Moreover, Ulysses was still \(\sim\)1 AU from the Sun when at the highest latitudes. Hence, there have only been limited studies of the latitudinal structure of the solar wind in the inner heliosphere using in situ observations. The Helios 1 and 2 spacecraft orbiting at 0.3-1 AU during solar cycle 21 demonstrated clearly that even small changes in spacecraft latitude can significantly affect the solar wind structures observed in situ (Schwenn et al., 1978; Burlaga et al., 1978). More recently, similar conclusions were inferred from STEREO measurements when the two spacecraft had a small separation in latitude but observed or missed features associated with large-scale solar wind structures (e.g., Gomez-Herrero et al., 2011). Since the last Roadmap by Schrijver et al. (2015), PSP and SolO have been launched to probe the solar wind in the inner heliosphere (see also Section 1.4). PSP has, at the time of writing, already sampled the solar wind below 20 Rs from the Sun and crossed into sub-Alfvenic solar wind for the first time (Kasper et al., 2021), while SolO is commencing a series of maneuvers that will ultimately increase its latitude range to \(\pm 35^{\circ}\). Both missions will improve our knowledge of the solar wind in the inner heliosphere in the next few years. Recent planetary spacecraft have provided observations of the solar wind while in their cruise phases and/or in orbit, such as MESSENGER and BepiColombo (Mercury missions), Venus Express (Venus mission), Mars Odyssey, Mars Express, and MAVEN (Mars missions), Rosetta (Comet 67P mission), Juno (Jupiter mission), Cassini (Saturn mission), and New Horizons (Pluto mission), complementing earlier observations of the outer heliosphere from spacecraft such as Pioneers 10 and 11 and Voyagers 1 and 2. Witasse et al. (2017) demonstrated how combined observations from multiple spacecraft may be used to track a CME from the Sun (on October 14, 2014) out to Cassini at 9.9 AU and possibly to New Horizons at 31.6 AU, and Voyager 2 at 110 AU in late March 2016. Other studies of solar wind structures using planetary spacecraft include Mostl et al. (2015), Prise et al. (2015), Janvier et al. (2019), Davies et al. (2021), Palmerio et al. (2021b), and Winslow et al. (2021a).

### Modeling the Background Solar Wind

Models of the solar wind can provide a global view of solar wind structures and help to interpret the structures observed by spacecraft.
Several such models are in use in space weather studies, which are described in more detail in Section 3.2. Though differing in details, many are based on solving the MHD equations on a suitable spatio-temporal grid. (Solar wind models are discussed further in the TI2 paper by Reiss et al. (2023).) Currently, these models generally use as input coronal magnetic field models based on photospheric magnetograms either from the ground (e.g., the GONG network) or spacecraft (e.g., SOHO, SDO). An example is the Wang-Sheeley-Arge (WSA; Arge et al., 2004) model, which is based on an observed anti-correlation between the non-radial expansion of coronal field lines and the solar wind speed (Wang and Sheeley, 1990, see also Section 3.1.1). However, differences in slow and fast solar wind composition and charge states (von Steiger et al., 2000) indicate that expansion alone is not the cause of solar wind speed variations and different solar sources must be involved (Laming, 2015). In particular the slow wind is found to have a substantial transient component (e.g., Bourouaine et al., 2020) that in general is not addressed by current modeling (e.g., there is no truly time-dependent global solar wind model based on time-dependent synoptic maps, although some global models can provide frequent updates based on updating maps such as ADAPT; see also Section 1).

Figure 8: Machine-learning classification of the Ulysses data in Figure 5 into coronal hole wind (blue), streamer belt wind (orange), and unclassified data (red) (Bloch et al., 2020). The lower plots show the fraction of each type of solar wind as a function of heliolatitude.

A major problem with modeling the global solar wind in this way is the absence of photospheric magnetic field observations from the far side of the Sun. While magnetic fields observed on the front side can be assumed to persist onto the far side, a significant change on the far side, for example due to the emergence of an active region or a change in a coronal hole boundary, which may alter the solar wind structure, will not be detected until it rotates onto the front side. A recent approach (Jeong et al., 2020) uses artificial intelligence (AI) to predict far-side coronal magnetic fields, though this requires far-side EUV observations such as from the STEREO spacecraft (see also Heinemann et al., 2021), which will now not be available for several years with STEREO-A returning to the front-side of the Sun in August 2023. Observations of the solar magnetic field from spacecraft at L4 and/or L5, \(\sim\)60\({}^{\circ}\) west/east of the Sun-Earth line (Vourlidas, 2015; Posner et al., 2021; Bemporad, 2021), will help to reduce, but not remove, this observational gap. A spacecraft at L5, such as Vigil, would also monitor co-rotating structures around 5 days before they reach Earth (e.g., Simunac et al., 2009). In addition, magnetic fields in the polar regions of the Sun are poorly measured from Earth. SolO moving to higher latitudes in coming years will help to improve our view of the poles. Figure 9 shows an example of a frame from an ENLIL simulation of the solar wind, showing on the left-hand side the speed and density in the ecliptic and in a meridional cut at the location of Earth (yellow dot). Note the large-scale regions of slow and faster solar wind and their spiral configuration, as well as the CIRs indicated by density enhancements at the leading edges of the HSSs, similar to those in the schematic in Figure 2.
The solar wind speed is also lower at low latitudes, resembling the Ulysses observations near solar minimum in Figure 5. Time-series plots of the speed and density at Earth and the STEREO spacecraft are shown on the right-hand side, indicating the passage of (different) CIRs on days 5-6 at STEREO-A and -B, which can also be identified by the spiral density enhancements in the in-ecliptic density in the top left of the figure. The global structure of the solar wind in the inner heliosphere may also be inferred using remote-sensing observations such as IPS, which is caused by irregularities in the solar wind density (e.g., Breen et al., 1998; Bisi et al., 2010), and observations of variations in white light scattered from solar wind density enhancements (e.g., Jackson et al., 2001; Rouillard et al., 2008; Eyles et al., 2009; Howard et al., 2013; Conlon et al., 2015; Plotnikov et al., 2016). However, inferring solar wind structures from such line-of-sight observations is complex. Tomographic reconstructions of the global solar wind density have been derived from IPS and/or white-light observations (e.g., Jackson & Hick, 2002; Bisi et al., 2010; Jackson et al., 2011, 2020, and references therein), and solar wind velocity and density reconstructions using IPS are routinely provided by the University of California, San Diego (UCSD). More details on IPS techniques for space weather and the implementation of IPS data in models are given in Section 4. Validating global solar wind models is a challenge: spacecraft observations only provide comparisons at widely-separated points in the heliosphere and, with the exception of Ulysses, near the ecliptic. While a model may be "tuned" to agree with observations at a specific point, there is no guarantee that this tuning will also improve the agreement at other locations, where no observations may be available to provide validation. Thus, the improved validation of global solar wind models ideally requires observations from as many spacecraft as possible. Recently, Lang et al. (2021) have used DA to improve forecasts of the solar wind parameters at Earth by using observations from widely separated spacecraft to update model inner boundary conditions. Riley et al. (2021) have discussed using PSP observations to constrain MHD heliospheric models with different coronal models as input. Several studies have validated solar wind models using observations at Earth or other locations (e.g., Cohen et al., 2008; Owens et al., 2008; Gressl et al., 2014; Jian et al., 2015, 2016; MacNeice et al., 2018; Reiss et al., 2020). For example, Gressl et al. (2014) and Jian et al. (2015) compared the validity of parameters derived from ENLIL simulations using different magnetograms and coronal field models as input. The validation of solar wind models is discussed further in the TI1 paper by Reiss et al. (2022). A validation of heliospheric modeling algorithms through pulsar observations is given in the TI1 paper by Shaifullah et al. (2022).

### Geomagnetic Effects from CMEs and SIRs/CIRs

Geomagnetic effects are driven predominantly by the strength of the southward component of the solar wind magnetic field, \(B_{z}\), and the solar wind speed (e.g., Newell et al., 2007). Studies have shown that CMEs are the major drivers of strong geomagnetic storms, with a smaller fraction associated with CIRs (e.g., Kilpua et al., 2017, and references therein).
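As a brief aside before turning to storm statistics: the Newell et al. (2007) coupling function cited above is simple enough to evaluate directly from upstream measurements, as the hedged sketch below shows. The function name and example values are illustrative; the formula itself follows the published form, with the usual convention of \(v\) in km/s and field components in nT.

```python
import numpy as np

def newell_coupling(v, by, bz):
    """Newell et al. (2007) solar wind-magnetosphere coupling function,
    d(Phi_MP)/dt = v^(4/3) * B_T^(2/3) * sin^(8/3)(theta_c/2),
    with v in km/s and GSM field components by, bz in nT. Values are in
    the customary (dimensionally mixed) units used with this index."""
    bt = np.hypot(by, bz)          # transverse field magnitude B_T
    theta = np.arctan2(by, bz)     # IMF clock angle theta_c
    return v**(4/3) * bt**(2/3) * np.abs(np.sin(theta / 2))**(8/3)

# Southward IMF (bz < 0) couples far more strongly than northward:
print(newell_coupling(450.0, 0.0, -5.0))   # strong driving
print(newell_coupling(450.0, 0.0, +5.0))   # ~0: clock angle is zero
```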
For example, Zhang et al. (2007) found that of 88 storms with \(Dst\leq-100\) nT in 1996-2005, only 13% were associated with CIRs. Another 53% were associated with single CMEs, and 24% were produced by interactions of multiple CMEs. Although the southward fields driving CME-associated storms were generally in the ejecta, with the largest storms being associated with MCs/MOs with extended intervals of persistent southward field, 27% of these strong storms were driven by sheath magnetic fields (see also Kilpua et al., 2017, 2018). Yermolaev et al. (2021) also highlighted that about 10% of moderate to large geomagnetic storms are sheath-induced rather than driven by the ejecta. Geomagnetic activity associated with CIRs is largely driven by intermittent southward turnings of the magnetic field associated with Alfvenic fluctuations, which result in extended intervals of enhanced activity, as measured by the AE index, persisting during passage of the HSS (e.g., Tsurutani et al., 2006; Buresova et al., 2014). Because of these differences in the storm drivers, the geomagnetic response as measured by magnetic indices such as Dst (Disturbance storm time), SYM-H (symmetric disturbance of horizontal geomagnetic fields), ASY-H (longitudinally asymmetric disturbance of horizontal geomagnetic fields), AE (auroral electrojet) and K\({}_{\rm p}\) (_planetarische Kennziffer_; global geomagnetic storm index), differs for CIR and CME-driven storms. Further discussion of the geomagnetic effects of CIRs/SIRs can be found in Section 3. Since the coupling processes in the solar-terrestrial system during different kinds of solar wind are not fully understood, this may lead to discrepancies in models and forecasts of the effects of solar wind structures on Geospace. Hence, the accurate prediction of the geoeffectiveness of space weather events (both large- and medium-scale) and the impacts on technological systems is a major challenge (for more details, see the G Cluster TI2 papers by e.g., Opgenoorth et al., 2023; Bruinsma et al., 2023; Tsagouri et al., 2023; Zheng et al., 2023). Because of the close association between geomagnetic storms and CMEs, storm prediction often relies on the observation of a CME associated with frontside solar activity, perhaps combined with modeling to assess whether the related CME is likely to encounter Earth. However, a major challenge is to predict the strength and orientation of the CME magnetic field during Earth encounter as early as possible, ideally using observations of the related solar event (e.g., Savani et al., 2015, 2017). There is also evidence that stealth CMEs, which lack clear signatures of their solar source, may occasionally produce significant geomagnetic activity. The circumstances of such so-called "problem" geomagnetic storms were recently reviewed by Nitta et al. (2021).

Figure 9: Screenshot from the NOAA (National Oceanic and Atmospheric Administration) Space Weather Prediction Center website ([http://www.swpc.noaa.gov/products/was-enlil-solar-wind-prediction](http://www.swpc.noaa.gov/products/was-enlil-solar-wind-prediction)) showing the density (top row) and solar wind speed (bottom row) predicted by the WSA–ENLIL model. The yellow, red, and blue dots indicate respectively the locations of Earth, STEREO-A, and STEREO-B at the time of the simulation.

### Summary

In summary, this section has briefly described the main features of the solar wind, in particular CIRs/SIRs, HSSs, and CMEs, which are the major components of the solar wind that drive space weather.
Observations from the recently launched PSP and SolO missions already have, and will continue to, provide valuable insights into the configuration and evolution of structures in the inner heliosphere far closer to the Sun than the 0.3 AU achieved by the Helios mission and, in the case of SolO, eventually to higher latitudes than previously attained at such distances from the Sun. New methods, such as ML, as well as new data sources, such as IPS, hold promise to better develop reliable solar wind structure classifications. Observations that extend into the upcoming cycle 25 will enable further studies of cycle-to-cycle variations of the characteristics of solar wind structures.

## 3 SIRs/CIRs Formation and Propagation

To properly forecast the arrival of transient events, we first need a reliable solar wind model, which we currently lack. For predicting the background solar wind structures in interplanetary space with higher accuracy, enhanced knowledge about the physics underlying the processes forming these structures is necessary. In this section, we explore the open questions (see also Viall & Borovsky, 2020) and ongoing scientific research specifically focusing on the generation and evolution of HSSs in the context of space weather, starting from their solar source regions, coronal holes, out to interplanetary space. For complementary ISWAT activities on solar wind generation and modeling we refer to the S2 Cluster paper by Reiss et al. (2023).

### SIRs/CIRs and their Solar Sources

From coronal observations, Waldmeier (1956) was the first to associate dark regions in the corona (M-regions) with the recurrent geomagnetic activity noted by Maunder (1904). Later, such geomagnetic activity would be clearly related to HSSs emanating from the dark coronal regions that are now known as coronal holes (e.g., Newkirk, 1967; Wilcox, 1968). Hence, HSSs are deeply linked to the presence and evolution of coronal holes on the Sun. In particular, low-latitude coronal holes are most relevant as sources of streams impacting planets in the ecliptic plane. The equatorward extensions of polar coronal holes start to form shortly after solar maximum (Harvey & Recely, 2002), leading to the appearance of the periodic geomagnetic storms that modulate planetary atmospheres and occur at a higher frequency close to solar minimum (e.g., Temmer et al., 2007; Lei et al., 2008). With that, the number of SIRs/CIRs, and as such the heliospheric structure in general, varies strongly depending on the solar cycle and the coronal magnetic field configuration. To reliably forecast the solar wind structure, the number of streams per rotation and their properties need to be sufficiently well known and modeled. In many space weather forecasting models, the large-scale solar wind structures in the heliosphere are usually regarded as 'quasi-time-stationary', and evolutionary aspects occurring during a solar rotation, or on longer time scales, are often neglected. However, Heinemann et al. (2018, 2020) showed with STEREO data that the evolution of coronal holes causes variations in the resulting HSSs as measured in situ. The more variable, denser slow solar wind, which is also found within coronal hole regions (Bale et al., 2019), plays a role in the formation of SIRs that is not yet well established.
The parameters of the solar wind upstream and downstream of the stream interface, hence, the boundary separating the predominantly fast and predominantly slow wind regimes, have been well studied (e.g., Crooker & McPherron, 2012) and depend on the interplay of slow and fast solar wind (see Section 2); however, short-term variations in both solar wind components have not been considered yet in determining the properties of the resulting SIR. Therefore, a more detailed understanding of the solar wind, heliospheric magnetic field, and their sources is vital for refining and validating space weather forecasting efforts. The processes leading to the formation of SIRs are manifold. Figure 10 depicts several of them and shows how they might interrelate with each other. But it is still not well understood how the conditions in both slow and fast wind influence the formation of the SIR and the resulting space weather effects.

#### 3.1.1 Fast Solar Wind

Coronal holes are often regarded as coherent, rigid structures that evolve slowly. However, close inspection has revealed that the magnetic structure and substructure within coronal holes is highly complex. According to the standard model of the magnetic field configuration of coronal holes, open magnetic funnels (e.g., Tu et al., 2005) that originate in small scale unipolar photospheric magnetic elements (Heinemann et al., 2018; Hofmeister et al., 2019) located in the lanes and nodes of the magnetic network (Cranmer & van Ballegooijen, 2005, and references therein), expand to fill the coronal space with an approximately uniform vertical magnetic field. This expansion is most likely modulated by low-lying closed loops existing in the space between the open fields (Wiegelmann et al., 2005). Figure 11 summarizes in a cartoon the mix of open and closed magnetic field structures that reach different heights in the corona and subsequently extend into interplanetary space. The magnetic funnels or flux tubes, that are the sources of the fast solar wind outflow, are the subject of many observational and modeling studies (e.g., Wojcik et al., 2019; Tripathi et al., 2021; Bale et al., 2021, among others). However, _it is still unclear how the funnel properties are linked to the properties of the outflowing solar wind._ The vertical expansion profile may depend on the height and, as such, on the field strength, of the low lying coronal loops in coronal holes which inhibit lateral expansion (for simulations see Wiegelmann et al., 2005). Detailed knowledge about the funnels can help to constrain the parameters of the resulting solar wind and improve understanding of the subsequent formation of SIRs. Often, in situ plasma velocity profiles of HSSs near 1 AU show double or multiple peaks, which suggest that there are multiple centres of solar wind outflow in individual coronal holes (Heinemann et al., 2018; Garton et al., 2018). Knowledge about the source locations of the observed solar wind could increase the chances of observing the actual outflows, thereby improving not only solar wind backmapping methods (e.g., ballistic backmapping, Peleikis et al., 2017; Macneil et al., 2022, or slip backmapping, Lionello et al., 2020) but also the modeling of the solar wind release. However, investigation of the magnetic and plasma structure of coronal holes, especially magnetic funnels, is an arduous task due to the sparse availability of observations at low field strengths.
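To make the ballistic backmapping mentioned above concrete, the sketch below implements its simplest form: a solar wind parcel is traced back from the spacecraft to a source surface assuming constant radial speed, while the Sun rotates under it. The function name, the choice of source height, and the example values are illustrative assumptions; real applications must account for acceleration and stream interactions.

```python
import numpy as np

AU_KM = 1.496e8          # 1 AU in km
OMEGA_SUN = 2.86e-6      # sidereal Carrington rotation rate [rad/s] (~25.4 d)

def ballistic_backmap(lon_sc_deg, r_sc_au, v_sw_kms, r_source_au=0.1):
    """Estimate the Carrington longitude of the solar source of a solar
    wind parcel observed at longitude lon_sc_deg and distance r_sc_au
    with speed v_sw_kms, assuming constant radial speed between
    r_source_au and the spacecraft."""
    transit_s = (r_sc_au - r_source_au) * AU_KM / v_sw_kms
    dlon = np.degrees(OMEGA_SUN * transit_s)  # Sun rotates during transit
    return (lon_sc_deg + dlon) % 360.0

# A 400 km/s parcel seen at 1 AU maps back ~55 deg in longitude:
print(ballistic_backmap(lon_sc_deg=100.0, r_sc_au=1.0, v_sw_kms=400.0))
```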
It is questionable whether the commonly-used potential field source surface extrapolation (PFSS), which assumes a zero-current approximation that leads to a potential field, plus a prescribed source surface to open magnetic field lines (Altschuler & Newkirk, 1969; Schatten, 1971), is valid at low heights in coronal holes. The community may want to move forward by introducing a more realistic coronal source surface to better estimate solar wind boundary conditions (see also e.g., Asvestari et al., 2019). A promising approach could be to examine high-resolution spectroscopy in coronal hole outflow regions using, e.g., DKIST. However, it is not straightforward to relate solar surface parameters with solar wind parameters measured in situ (e.g., at 1 AU) as interaction processes (such as solar wind acceleration, slow-fast wind interaction, switchbacks, turbulence) may mask any correlation. In situ measurements close to the Sun, such as with PSP, where the slow and fast solar winds have had less time to interact, can help in revealing possible relations. It is known that, in general, larger coronal holes produce HSSs of higher speed. From this relationship, 1D methods of forecasting the solar wind at 1 AU have been developed using empirical models relating the coronal hole area with the in situ measured solar wind peak speed (Nolte et al., 1976; Vrsnak et al., 2007; Temmer et al., 2018; Heinemann et al., 2018; Bu et al., 2019; Akhtemov & Tsap, 2018; Heinemann et al., 2020) and the related geomagnetic activity (Vrsnak et al., 2007; Nakagawa et al., 2019). The relations for peak velocity hold well for well-defined coronal holes near disk center, and a correction may be applied for different latitudes (Hofmeister et al., 2018). _However, the physical principles behind the relation between solar wind peak velocity and coronal hole area are not yet fully understood_. It has been suggested, and analytically shown, that the coronal hole area-HSS speed relation may be entirely a propagation effect (associated with slow-fast wind interaction) in interplanetary space caused by a discrete bimodal velocity distribution (Hofmeister et al., 2022). In contrast, the empirical relation (\(v_{\rm SW}\sim 1/f\)) between the solar wind speed \(v_{\rm SW}\) and the flux tube expansion factor \(f\) is often used to explain the connection between coronal holes and solar wind speed (Wang & Sheeley, 1990; Wang, 2010) that produces the observed bimodal distribution (see also Section 2.3). Closed fields can influence the behavior of open fields and vice versa. In particular, magnetic field gradients along the vertical coronal hole boundary can influence the magnetic field expansion behavior and the resulting plasma outflow.

Figure 10: Proposed pathways of the solar wind from origin to heliosphere and release mechanisms. A complex interaction of many different processes may finally produce the slow and fast solar wind that lead to the formation of SIRs. Solar wind values (proton speed, \(v_{p}\), proton density, \(n_{p}\), proton temperature, \(T_{p}\), and charge states, \(n_{\rm He}/n_{p}\)) are taken from Schwenn (2006) and stream interface (SI) criteria by Jian et al. (2006).
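The empirical coronal-hole-area-to-peak-speed relations discussed above are typically linear fits calibrated against in situ data. The sketch below shows the bare mechanics of such a 1D forecast; the area and speed values are synthetic placeholders, not coefficients or measurements from any of the cited studies.

```python
import numpy as np

# Synthetic (area, peak speed) pairs standing in for a calibration set;
# placeholders only, not data from Nolte-type or later studies.
area = np.array([1.0, 2.5, 4.0, 6.0, 8.5, 11.0])        # CH area [10^10 km^2]
v_peak = np.array([430., 480., 540., 590., 650., 700.])  # HSS peak speed [km/s]

# Linear empirical relation v_peak = a + b * A, fitted by least squares.
b, a = np.polyfit(area, v_peak, 1)
print(f"v_peak ~ {a:.0f} + {b:.1f} * A  [km/s]")

# Forecast the peak speed for a newly observed coronal hole area:
A_new = 5.0
print(f"predicted peak speed: {a + b * A_new:.0f} km/s")
```

As the text notes, such fits break down for patchy or poorly bounded coronal holes, so any operational use requires the coronal hole to be well defined and near disk center.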
Precise in situ solar wind measurements at different radial distances (with PSP and SolO, see e.g., Perrone et al., 2022), as well as more advanced measurements of the first ionisation potential (FIP) effect (Pottasch, 1964a,b) and heavy ion charge states (e.g., Lepri et al., 2013), which help to connect the solar wind with the regions of origin (Brooks & Warren, 2011; Zambrana Prado & Buchlin, 2019; Parenti et al., 2021), will help to shed light on the origin of the relation between coronal hole area and HSS peak velocity. It has been shown that for larger coronal holes without a clearly-defined boundary, the area to HSS peak velocity relation breaks down (Garton et al., 2018; Geyer et al., 2021). These coronal holes without clear boundaries have been observed preferentially during solar minimum (especially in the 2018-2020 minimum, e.g., see Figure 12). They may contain multiple brighter closed-field regions, and may stretch over large areas at low latitudes. The observed mean magnetic field strengths in such regions are around \(\pm\)1 G, with only a slight flux imbalance suggesting a low open flux. Because they resemble loosely-connected darker patches in EUV observations, these coronal holes have been dubbed _patchy_ coronal holes (Heinemann et al., 2020; Samara et al., 2022). The observed peak velocities of the solar wind emitted by such patchy coronal holes with areas larger than 10\({}^{11}\) km\({}^{2}\) usually range from 450 to 600 km s\({}^{-1}\), which does not follow the usual empirical relation. Due to their many differences from clearly-defined coronal holes, _patchy_ coronal holes need to be treated separately in terms of their space weather effects. We still do not know how HSS plasma emanating from a coronal hole is influenced by the presence or absence of closed magnetic field within the coronal hole and/or nearby it.

#### 3.1.2 Slow Solar Wind in the Frame of Solar Wind Interaction

When discussing SIRs and CIRs, the contribution of the slow solar wind to the stream-stream interaction cannot be neglected. In contrast to fast solar wind streams, there is no full agreement on the source of the slow solar wind. Typically, slow solar wind has a composition resembling that of closed fields in the corona, but a closed-field source would appear to be inconsistent with the large angular widths of the slow solar wind. The strongest consensus is that reconnection is responsible for the slow solar wind outflow. This includes interchange reconnection of closed and open fields, typically, but not only, at coronal hole boundaries, or closed field reconnection, for example in active region cusps. The magnetic carpet of the Sun, resulting in a separatrix and quasi-separatrix web (dubbed the "S-web"), is often proposed as the source of the ambient solar wind (Antiochos et al., 2011). Pseudo-streamers (Riley & Luhmann, 2012), coronal streamers (Habbal et al., 1997; Ofman, 2004), coronal hole-active region boundaries (Ko et al., 2006) and quiet-Sun regions (Fisk et al., 1998) have also been suggested as the origin of the slow wind. It has also been reported that the slow solar wind may not only originate in closed field regions but also in small equatorial coronal holes (Ohmi et al., 2004; Stansby et al., 2020). Recent PSP observations clearly indicate slow and fast solar wind from an equatorial coronal hole (Bale et al., 2019).
There is also evidence for different types of slow solar wind (based on, e.g., FIP abundances, charge states and Alfvenicity), further supporting that there are multiple sources for the slow wind. An open question is whether slow solar wind flows from different sources interact with HSSs in a different way, and how that affects the formation of SIRs and their 1 AU characteristics.

### Solar Wind Properties at 0.1 AU

Figure 11: Simplified depiction of the magnetic field configuration of a coronal hole from the photosphere to the corona. Figure by S. G. Heinemann, based on drawings by Cranmer & van Ballegooijen (2005) and Wedemeyer-Böhm et al. (2009). Not to scale.

Figure 12: Images of two SDO/AIA (Atmospheric Imaging Assembly) coronal holes observed in the 193 Å filter. The left panel shows a clearly defined, compact coronal hole during solar maximum (May 29th, 2013) and the right panel shows a very large but _patchy_ coronal hole during solar minimum (November 8th, 2018).

The solar wind properties at 0.1 AU, where it is assumed that most solar wind acceleration has ceased (e.g., Cranmer, 2002; Bemporad, 2017), are commonly used as input for heliospheric models. There are still open questions on the solar wind acceleration process itself that will not be discussed here; the interested reader is referred to Viall and Borovsky (2020). The 0.1 AU properties are usually inferred from coronal models based on photospheric magnetograms and empirical relations. However, the different assumptions and input data used can result in large variations of the inferred properties (MacNeice et al., 2018; Samara et al., 2021; Riley and Ben-Nun, 2021). For the parameters relevant to SIRs/CIRs, their accuracy relies on how coronal holes are represented and on the assumptions made to estimate/derive the plasma and magnetic field parameters. Most commonly, empirical relations between solar magnetic field quantities and the solar wind speed at 0.1 AU are used. These relations may be expressed as \(v_{0.1AU}=v(f,d)\), and depend on the flux tube expansion factor \(f\) and the distance from coronal hole boundary \(d\). The exact form of the relation varies between different authors, studies and models (e.g., see Arge and Pizzo, 2000; Riley et al., 2001; Owens et al., 2008; McGregor et al., 2011; Wiengarten et al., 2014; Pinto and Rouillard, 2017; Pomoell and Poedts, 2018). The outer boundary conditions derived from a coronal model at 0.1 AU are typically used as the inner boundary conditions for heliospheric models. Current state-of-the-art solar wind models include the MHD models ENLIL (Odstrcil and Pizzo, 1999b; Odstrcil, 2003), EUHFORIA (Pomoell and Poedts, 2018; Poedts et al., 2020b), ICARUS (Verbeke et al., 2022), hydrodynamic approaches (Riley and Lionello, 2011; Owens et al., 2020), or kinematic models such as the WSA Inner Heliosphere model (WSA-IH; Arge and Pizzo, 2000). Although these models and methods are widely used in solar and heliospheric physics and space weather research, assumptions need to be made for plasma and magnetic field properties that cannot be observed directly. The solar wind speed, density, and temperature, as well as the magnetic field strength and structure, are not well constrained. As already noted, it is believed that a large proportion of the solar wind acceleration takes place below 0.1 AU, and so this distance corresponds roughly to the transition between the solar and heliospheric regimes.
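To illustrate the \(v_{0.1AU}=v(f,d)\) relations just described, the sketch below evaluates one common functional form from the WSA family. All coefficient values here are illustrative placeholders: as noted above, published implementations use different tunings, so this is a sketch of the structure of such relations, not any specific operational model.

```python
import numpy as np

def wsa_like_speed(f, d_deg, v0=250.0, v1=675.0, alpha=0.22,
                   w=2.0, beta=1.25, c=0.8, delta=3.0):
    """Empirical solar wind speed at 0.1 AU from the flux tube expansion
    factor f and the angular distance d_deg [deg] of the footpoint from
    the nearest coronal hole boundary:
        v(f, d) = v0 + v1/(1+f)^alpha * (1 - c*exp(-(d/w)^beta))^delta.
    Coefficients are placeholders; published tunings differ."""
    return v0 + (v1 / (1.0 + f)**alpha) * \
        (1.0 - c * np.exp(-(d_deg / w)**beta))**delta

# Weak expansion deep inside a coronal hole -> fast wind (~780 km/s here):
print(wsa_like_speed(f=2.0, d_deg=10.0))
# Strong expansion near the coronal hole boundary -> slow wind (~260 km/s):
print(wsa_like_speed(f=50.0, d_deg=0.5))
```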
More precise knowledge about the environment at 0.1 AU would lead to a better representation of the heliosphere through modeling. New missions that venture into the close proximity of the Sun, e.g., PSP, which has already passed through the Alfven point into the solar corona, will provide new in situ measurements of the environment close to and below 0.1 AU (Kasper et al., 2021). In addition, new IPS observations could potentially be used to reconstruct solar wind maps at 0.1 AU (similar to those in Sokol et al., 2015; Jackson et al., 2020). Ideally, a "universal" relation that successfully links solar surface properties to 0.1 AU should be established, making as few assumptions as possible; this would lead to a community consensus on constraining solar wind parameters as input for heliospheric models for research and space weather prediction. At 0.1 AU and beyond, plasma motion dominates the heliosphere (\(\beta_{\rm plasma}\gg 1\)) but the magnetic field structure cannot be neglected. Most heliospheric models produce a mostly smooth bipolar heliosphere (especially during solar minimum) separated by the HCS. However, recent PSP observations provide evidence of a much more complex magnetic field structure close to the Sun (Bale et al., 2019) including changing and mixed polarities due to different origins (e.g., open funnels, closed fields, coronal jets) as well as kinks and twists in the magnetic field (switchbacks; Mozer et al., 2020; Dudok de Wit et al., 2020; Squire et al., 2020; Tenerani et al., 2020). On larger scales, observed variations in the field (e.g., B\({}_{r}\)) can be reproduced by combined PFSS and MHD models. However, such models cannot reproduce the fine structure. Although well-established and commonly used, it is not clear whether the flux tube expansion factor and distance to coronal hole boundary are optimal parameters for deriving magnetic field and plasma properties at 0.1 AU. Better knowledge of this would lead to improved, more realistic models of the fractured structure of the open fields at 0.1 AU, and might also show what role these structures play in larger-scale heliospheric dynamics.

### Solar Wind Evolution in Interplanetary Space

Many observational and modeling studies have investigated the evolution of HSSs and SIR/CIRs with heliocentric distance, in particular during the Helios/Pioneer/Voyager and Ulysses eras (e.g., Gosling and Pizzo, 1999; Whang and Burlaga, 1990; Burlaga et al., 1990, 1995, 1997; Gazis et al., 1999; Forsyth and Gosling, 2001), as well as more recently (Allen et al., 2021). In particular, Helios observations at 0.3-1 AU showed that the velocity shear between slow and fast solar wind is largest closest to the Sun (consistent with different sources for slow and fast solar wind) and declines rapidly between 0.3 and 0.5 AU, before becoming approximately constant out to at least 1 AU (Schwenn, 1990). Beyond \(\sim\)1 AU, the expansion speed of a SIR may exceed the local magnetosonic speed, resulting in the formation of a forward shock at the SIR leading edge and a reverse shock at its trailing edge (e.g., Smith and Wolfe, 1976; Gosling et al., 1976). Such shocks are occasionally observed closer to the Sun. The increasing spiral field angle at larger heliocentric distances causes SIRs to become near tangential structures, almost perpendicular to the Sun-spacecraft line, leading to an increase in shock occurrence from 26% at 1 AU to 91% at 5.4 AU (e.g., Jian, 2008; Geyer et al., 2021).
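The increasing spiral field angle underlying this behavior follows directly from Parker's model, as the sketch below shows. The function name and example speed are illustrative; the geometry assumes a constant-speed Parker spiral.

```python
import numpy as np

AU_KM = 1.496e8
OMEGA_SUN = 2.86e-6   # sidereal solar rotation rate [rad/s]

def spiral_angle_deg(r_au, v_kms, colat_deg=90.0):
    """Parker spiral angle between the IMF and the radial direction,
    tan(psi) = Omega * r * sin(colatitude) / v_sw, for a constant-speed
    Parker field."""
    tan_psi = OMEGA_SUN * r_au * AU_KM * np.sin(np.radians(colat_deg)) / v_kms
    return np.degrees(np.arctan(tan_psi))

# In the ecliptic, for 430 km/s wind, the spiral angle grows from ~45 deg
# at 1 AU to ~80 deg at 5.4 AU, so SIRs become nearly tangential structures:
for r in (0.3, 1.0, 5.4):
    print(f"r = {r:4.1f} AU: psi = {spiral_angle_deg(r, 430.0):5.1f} deg")
```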
Expansion of the SIR with increasing heliocentric distance tends to erode the difference between the slow and fast solar wind speeds, leading to a weakening of the HSSs. In this respect, the wake of a HSS, i.e., where the fast wind merges with the slow wind, might be of interest for future studies. In addition, these streams can interact and merge, leading to a simplification of the stream structure further from the Sun (e.g., Burlaga et al., 1990). It has also been found that the tilt of a SIR does not necessarily match the tilt of the solar source coronal hole (Broiles et al., 2012). This implies that the shape of the solar source region might not be the dominant factor determining the SIR geometry. Rather, the IMF configuration plays a role. The different space weather impacts resulting from variations in SIR/CIR geometry, including the spiral angle, tilt, and possible substructures due to local speed variations, need to be further investigated. This will be especially important for future exploration in interplanetary space requiring more detailed knowledge of space weather hazards at distances beyond Earth (e.g., Kajdic et al., 2021). This evolutionary behavior with heliocentric distance also influences the relations between different solar wind parameters. For example, the relations between solar wind density and velocity or proton temperature and velocity at 1 AU have been studied as far back as the 1970s (Burlaga & Ogilvie, 1973; Eyni & Steinitz, 1980; Geranios, 1982), while Lopez & Freeman (1986) used Helios data to study the radial dependence of the speed-temperature relation at 0.3 to 1 AU. It is found that the relations change with radial distance, suggesting that it is not possible to simply extrapolate the solar wind properties measured in situ back to their source regions (Perrone et al., 2019). It has been shown that the relations found for HSSs and SIRs/CIRs may deviate from those found in slow solar wind. Wang (2010) and later Fujiki et al. (2015) showed that the solar wind velocity is inversely proportional to the flux tube expansion factor and that the velocity increases linearly with the strength of the open field footpoints (see also Section 3.1.1). However, this relation cannot be used to improve prediction of the solar wind velocity without additional information about the mass flux, which is a fundamentally important physical parameter for solar wind acceleration. It was suggested that whereas the mass flux close to the Sun is proportional to the field strength, near 1 AU the mass flux is latitudinally and longitudinally constant on average. This may imply that interaction processes in the solar wind can break or smooth the proposed relations during propagation. PSP data might help to resolve these discrepancies. The proton temperature of the solar wind might be expected to drop adiabatically with increasing radial distance, but it is found that it drops more slowly, implying that additional heating is required (e.g., Hellinger et al., 2011, 2013), while Perrone et al. (2019) noted that pure fast wind seems to follow an adiabatic cooling law as expected from radial expansion. The density decreases as a function of radial distance as expected, but the magnetic field deviates from Parker's model. The v-T relation for solar wind at 1 AU is usually described by a single linear fit for both slow and fast wind. However, it has been shown that: (1) different solar wind may exhibit different relations, and (2) the relation evolves with radial distance (Elliott et al., 2012).
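The adiabatic expectation mentioned above can be made explicit with a simple power-law sketch: for a radially expanding plasma with \(n\propto r^{-2}\) and \(\gamma=5/3\), the proton temperature should fall as \(T\propto r^{-4/3}\). The shallower "observed" index used below is an illustrative placeholder, not a value from the cited studies.

```python
import numpy as np

def proton_temperature(r_au, T0=2.5e5, r0=0.3, index=-4.0/3.0):
    """Radial power-law scaling of proton temperature, T = T0*(r/r0)^index.
    index = -4/3 is the adiabatic expectation for radial (n ~ r^-2)
    expansion with gamma = 5/3; observed slow-wind indices are typically
    shallower, implying in situ heating. Numbers here are illustrative."""
    return T0 * (r_au / r0)**index

for r in (0.3, 1.0, 5.0):
    adiabatic = proton_temperature(r)
    heated = proton_temperature(r, index=-0.75)  # placeholder empirical index
    print(f"r = {r:3.1f} AU: adiabatic {adiabatic:9.3e} K, "
          f"shallower-than-adiabatic {heated:9.3e} K")
```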
The behavior of the temperature near and within SIRs as a function of radial distance from the Sun is less well studied.

### Specific Challenges for Modeling SIRs/CIRs

As described above, many physical processes related to the slow and fast solar wind acceleration are not fully understood. The successful modeling of the background solar wind is still a huge challenge, especially keeping in mind that the observational input for numerical models comes from the photosphere (i.e., magnetograms) and/or, in the case of analytical/empirical models, EUV observations. The simulation of SIR/CIR formation relies mostly on the ability of the solar wind model to produce a bimodal speed distribution that induces interaction between the fast and slow streams. Certain models, such as the Parker analytical model or the basic polytropic heating used in MHD, do not meet this requirement, although some modifications can be made by varying the adiabatic index or introducing Alfven waves or ad-hoc heating terms. Most models that can simulate SIRs/CIRs in 3-D space are heliospheric models driven by empirical coronal models (e.g., WSA-ENLIL and EUHFORIA). While such models produce rapid and robust results, they may not describe the source regions of the fast and slow wind very accurately. Studies discussing such model results and comparing them with observations include: Owens et al. (2008), using WSA-ENLIL simulations, Hinterreiter et al. (2019) using EUHFORIA, Samara et al. (2021), who compare HSSs modeled by EUHFORIA with observations and results of other models, and Samara et al. (2022), again using EUHFORIA. In the future, instead of empirical coronal models, MHD coronal codes optimized for space weather may be used to provide improved 0.1 AU input boundary conditions for heliospheric codes (e.g., the Virtual Space Weather Modelling Centre VSWMC; Poedts et al., 2020). The time-evolution of coronal codes may also become an important issue, as most of the current models are quasi-static. _How can the time-evolution of solar source regions be incorporated in models to improve the modeling of SIRs/CIRs?_ This might require a number of time-dependent extrapolations such as magneto-frictional (MF; Pomoell et al., 2019) or non-linear force-free (NLFF; Wiegelmann & Sakurai, 2012) modeling. As already discussed, predicting the solar wind at 1 AU and beyond is generally performed by combining models for different regimes (MacNeice et al., 2018), usually in the coronal and heliospheric domains (e.g., solar wind models such as ENLIL or EUHFORIA which combine the coronal WSA model and a heliospheric MHD model; Odstrcil & Pizzo, 1999; Pomoell & Poedts, 2018). Identifying the source of discrepancies when comparing model results to those of other models and/or observations is often a challenge, leading to the question of whether the coronal model, the input data, or the heliospheric model is the least reliable part. For example, Linker et al. (2021) and Wang et al. (2022) showed that there are significant differences in the estimated open flux when different input magnetograms are used. (For more details about open questions related to the global solar magnetic field, see the S2 Cluster TI2 paper by Reiss et al., 2023). Asvestari et al. (2019) and Caplan et al. (2021) highlighted model-model and model-observation differences for several coronal models that led to differences in the heliospheric domain predictions.
The results are strongly dependent on which model combination is used and how the transition between the models is performed (e.g., Jian et al., 2015, 2016). An objective evaluation of the performance of different models (see e.g., Wagner et al., 2022) and model combinations is necessary to advance space weather modeling, which requires model developers to be transparent about their (often hidden) model parameters and how they are tuned (see more details from the H1-01 team in Reiss et al., 2022). Without constraints on how models are adjusted for various conditions, events and utilization, reliable comparison of models and estimation of uncertainties will continue to be challenging. With the recent increase in available computational power, computer-based methods, such as DA, ML, and neural networks (NN), have become viable and widely available. Such techniques have been applied to, e.g., solar feature detection (Jarolim et al., 2021; Mackovjak et al., 2021), solar wind forecasting (Wang et al., 2020; Upendran et al., 2020; Raju and Das, 2021), and the prediction of recurrent geomagnetic effects (Zhelavskaya et al., 2019; Haines et al., 2021). These models are, however, still in their infancy; Camporeale (2019) describes in detail some of the major challenges these models and methods face. Although there will be greater reliance on such computational methods in the future, it is important not to neglect the underlying physics.

### Geomagnetic Activity Associated with CIRs/SIRs

The solar wind during the passage of a CIR is in itself a sufficiently strong driver for a magnetospheric storm (Koskinen, 2011), and well-developed SIRs and faster HSSs can impact Earth's magnetosphere sufficiently to induce minor to moderate magnetic storms. During the passage of SIRs/CIRs and HSSs, typically \(B_{z}\) fluctuates, and AE is relatively large for an extended interval, whereas the effect in Dst is relatively small, with the positive phase due to compression of the magnetosphere often larger than the negative phase. The geoeffectiveness of SIRs/CIRs has for a long time been underestimated by the space weather community; a significant impetus was provided by the deep solar minimum at the end of solar cycle 23, which was characterized by a large number of SIR/CIR events, and also by the 2005 Chapman Conference "Recurrent Magnetic Storms: Co-rotating Solar Wind Streams" (Tsurutani et al., 2006). An investigation carried out by Zhang et al. (2008) showed that about 50% of 157 "pure" SIRs/CIRs produced interplanetary shocks and 89% of the shocks were followed by magnetic storms. Although the storm recovery phase is characterized by an abatement of perturbations and a gradual return to the "ground state", observations of the disturbed ionosphere show significant departures from climatology within this phase of a storm. For SIR/CIR events, the recovery phase is longer than is typical for the recovery of CME-induced storms (including both sheath- and magnetic-ejecta-driven storms) because of the different method of energy input (Buresova et al., 2014). Statistical analyses of SIR/HSS-related events have revealed that their ionospheric effects may be comparable to the effects of strong CME-induced magnetic storms under higher solar activity conditions but are less dependent on the season (Buresova and Lastovicka, pp. 41-48 in Fuller-Rowell et al., 2016).
### Summary

In this section, we have explored several open and debated questions relating to SIRs and their space weather effects, ranging from their solar sources, formation and interaction processes to radial evolution and modeling challenges. The magnetic structure and plasma properties of the solar wind source regions, as well as solar wind acceleration processes close to the Sun, are major concerns. In particular: How can the solar wind parameters be constrained at small radial distances when the majority of the acceleration processes have ceased by 0.1 AU, and how can the constrained parameters be used to improve model input?

## 4 CME Propagation Behavior

As CMEs have the largest influence on space weather, CME forecasting is an important and wide field of research. The analysis and forecasting of CME propagation can be divided into "pre-event" (using model input from signatures/diagnostics occurring before the onset of the CME) and "post-event" (after the onset of the CME). In this section we discuss the open scientific questions related to post-event forecasting, mainly focusing on CME propagation and interaction in the inner heliosphere starting from 0.1 AU. For the latest developments and future prospects for pre-event forecasting, see the TI2 paper by Georgoulis et al. (2023) from Cluster S. For a review of the relation between CMEs and flares, as well as early CME evolution, we refer to, e.g., Temmer (2021) and Mishra and Teriaca (2023).

### CME Propagation Behavior and Uncertainties

Strong geoeffectiveness mainly results from the combination of a dynamic pressure enhancement (primarily associated with the sheath/compression region generated by the CME through compression of the preceding solar wind during propagation), and the local southward interplanetary field (\(B_{z}\)) component (primarily within the CME ejecta). Geomagnetic storm forecast modeling is therefore a double challenge, as it requires both well-constrained CME initial properties to feed the model and a reliable ambient solar wind simulation. Current state-of-the-art CME forecasts have significant uncertainties in predicting the CME ToA, SoA, and magnetic properties. This comes on the one hand from the uncertainties in the initial observational parameters used as model input, and on the other hand from the poorly-understood interaction processes between the different CME structures and the ambient solar wind (Section 3). The former is associated primarily with projection effects, as the features observed in coronagraphs are 2-D projections on the plane-of-sky (POS) of the actual 3-D structures, leading to an underestimation of the speed and overestimation of the CME angular width (see e.g., Burkepile et al., 2004; Vrsnak et al., 2007; Temmer et al., 2009; Paouris et al., 2021b, and references therein). Many of the model input parameters are the result of modeling and fitting techniques where the observer plays a decisive but subjective role in the final CME parameters (human-in-the-loop effect; see Verbeke et al., 2022). Moreover, the observed magnetic structures on the Sun related to the eruption may undergo significant development (cf. Figure 13), hence predicting their properties at 1 AU is a major challenge (e.g., Pal et al., 2022).
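To illustrate the POS projection effect just described, the sketch below applies the simplest geometric de-projection for a point-like feature moving radially at an angle out of the sky plane. The function name and example values are illustrative assumptions; real CMEs are extended structures, and multi-viewpoint 3-D reconstructions are preferred when available.

```python
import numpy as np

def deproject_speed(v_pos_kms, delta_deg):
    """First-order de-projection of a plane-of-sky (POS) CME speed.
    For a point-like feature moving radially at an angle delta_deg out
    of the sky plane, the POS-projected speed is v_3d * cos(delta), so
    v_3d = v_pos / cos(delta). A geometric sketch only; 3-D fitting
    (e.g., GCS from two viewpoints) is preferred in practice."""
    return v_pos_kms / np.cos(np.radians(delta_deg))

# A propagation direction 40 deg out of the sky plane implies the true
# speed is ~30% higher than the single-viewpoint POS measurement:
print(deproject_speed(800.0, 40.0))   # ~1044 km/s
```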
#### 4.1.1 CME Arrival Properties

A range of sources contribute to the total uncertainty, including the specification of accurate boundary conditions, the physical approximations used to model CME propagation (e.g., in drag-based or MHD models), the geometric representation used to parameterize the CME (e.g., a cone or a FR model) and the interaction of the CME with the ambient solar wind. Lee et al. (2013), using ensemble modeling, found that the accuracy of the modeled ToA not only depends on the initial input CME geometry, but also on the accuracy of the modeled solar wind background, which is driven by the input maps of the photospheric field. Mays et al. (2015) suggested that an ensemble of ambient solar wind WSA-ENLIL model outputs (an improved ensemble forecast of the maps of the photospheric field) would produce predictions that also reflect the uncertainties in the WSA-ENLIL modeled background solar wind in addition to the uncertainties in CME input parameters. Pizzo et al. (2015) investigated CME ToA uncertainty in the WSA-ENLIL+Cone model and demonstrated that, for this model, the most important source of uncertainty was the correct specification of the CME initial conditions at the typical inner boundary distance for the heliospheric model of 0.1 AU, as well as the ambient solar wind structure. Accurate estimation of the ambient solar wind structure is the most challenging problem (see Section 2 and Section 3) and depends sensitively on the nature of the coronal model and the observations used to drive this model (Riley et al., 2015; Gonzi et al., 2021). Riley et al. (2018) reviewed the performance of CME ToA forecasts for a range of models, within the Community Coordinated Modeling Center (CCMC) CME Scoreboard. They concluded that, on average, CME ToA forecasts were accurate to within about \(\pm 10\) hours, whilst the best performing models had a mean absolute error of 13 hours and a standard deviation of 15 hours. Vourlidas et al. (2019) presented a comprehensive analysis of the current status, open issues and path forward for the prediction of the geoeffective properties of CMEs. Taking into account many published works using different CME propagation models, they concluded that the current state of forecasting the ToA has an error of \(9.8\pm 2\) hours. In addition, the authors stress that currently it is not possible to predict \(B_{z}\) reliably beyond a 40-60 minute time window determined by the upstream solar wind in situ measurements from L1. What role additional magnetograms from a different viewpoint might play will be answered with the upcoming ESA/_Vigil_ mission (to be launched 2029; see also Section 1.4). The L5 view covers a larger portion of the solar surface, hence gives more up-to-date magnetic field information to global numerical models. Advanced IPS techniques may give further insights into the CME magnetic-field rotation as it passes through interplanetary space (see TI1 paper by Fallows et al., 2022). Additional views on the \(B_{z}\) issue from the solar surface perspective are given in the S Cluster TI2 paper by Reiss et al. (2023). Riley & Ben-Nun (2021) explored the sources of uncertainty in CME ToA forecasts using a set of numerical MHD simulations of cone CMEs in ambient solar wind backgrounds. They concluded that uncertainty in each component of the CME initial parameters, such as longitude, latitude, width, and speed, contributes between 2.5 and 7.5 hours to the total ToA uncertainty.
Furthermore, they concluded that the ambient solar wind structure was the largest source of uncertainty, and that without better constraints on the initial conditions of the heliospheric simulations, it is likely that the CME ToA error will remain close to \(\pm 10\) hours. For benchmarking and objective tracking of development improvements of background solar wind models, the H1-01 team has created a validation scheme (see TI1 paper by Reiss et al., 2022). This scheme and platform will also be used to test new models and to derive uncertainty estimates combining different model results in order to more accurately assess the magnitude and source of errors in the ToA. During their evolution, CMEs are influenced and dominated by different forces, such as the Lorentz force close to the Sun and the drag force when propagating within the ambient solar wind. The latter force leads to the deceleration of fast CMEs, i.e., faster than the ambient wind, and to the acceleration of CMEs slower than the solar wind (e.g., Vrsnak & Gopalswamy, 2002). In recent years, drag-based CME propagation models have attracted increased attention from the community. Despite their simple assumptions, including neglecting any other physical parameters besides the drag force, their ability to predict the ToA and SoA of CMEs is not necessarily worse than that of more sophisticated approaches (e.g., Vrsnak et al., 2014). The basic requirement is to feed the model with CME parameters derived from distances further out from the Sun (on average beyond 20 solar radii), i.e., where the driving Lorentz-force due to magnetic reconnection has ceased. Furthermore, these models are computationally inexpensive and can handle ensemble approaches faster than some other models can manage single runs. While the speed of the CME relative to that of the ambient solar wind is the most important factor when describing a drag-based motion, the drag parameter \(\gamma\) includes information on other important parameters and is given by \[\gamma=C_{\rm D}\frac{A_{\rm CME}\rho_{\rm sw}}{m_{\rm CME}},\] where \(C_{\rm D}\) is the dimensionless drag coefficient (set to unity and therefore assuming an aerodynamic behavior), \(A_{\rm CME}\) is the CME cross section the drag is acting on, \(\rho_{\rm sw}\) is the solar wind density, and \(m_{\rm CME}\) is the CME mass. Besides the effect on their overall behavior, deformations of CMEs can also occur locally on small scales due to the presence of preceding high-speed solar wind streams or other CMEs, which can lead to a change of the conditions in the preceding medium and influence the drag force on the CME. Reconnection and/or magnetic field erosion of the ejecta or its driver in interplanetary space may also be part of the physics covered by the "drag" phenomenon. The drag is poorly understood from a physical perspective, but it can be interpreted in terms of MHD waves (Cargill et al., 1996). We can rely on such empirical treatments, but it would be beneficial to understand the nature of this phenomenon better (see e.g., Ruffenach et al., 2012, 2015; Pal et al., 2022).

Figure 13: Schematics of a magnetic flux rope (here referred to as MC) of a CME interacting with the IMF, leading to physical processes affecting its propagation behavior and making it difficult to forecast the CME characteristics, especially its magnetic field component, at a target (taken from Pal et al., 2022).
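As an illustration of how cheap such drag-based forecasts are, the sketch below evaluates the analytic solution of the equation of motion \(dv/dt=-\gamma(v-w)|v-w|\) (Vrsnak et al., 2013) and bisects for the 1 AU arrival time. The launch distance, speeds, and \(\gamma\) value are illustrative choices, not parameters from any cited event study.

```python
import numpy as np

AU_KM = 1.496e8

def dbm_state(t_s, r0_km, v0, w, gamma):
    """Analytic drag-based model solution of dv/dt = -gamma*(v-w)|v-w|:
    returns (r, v) at time t_s [s], for initial distance r0_km, initial
    speed v0, ambient wind speed w [km/s], drag parameter gamma [1/km]."""
    s = np.sign(v0 - w)
    x = 1.0 + s * gamma * (v0 - w) * t_s      # = 1 + gamma*|v0 - w|*t
    v = w + (v0 - w) / x
    r = r0_km + w * t_s + (s / gamma) * np.log(x)
    return r, v

def dbm_arrival(r0_km, v0, w, gamma, r_target_km=AU_KM):
    """Bisect for the transit time to r_target_km; assumes the CME apex
    follows the 1-D DBM with constant w and gamma."""
    lo, hi = 0.0, 30 * 86400.0                # search within 30 days
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        r, _ = dbm_state(mid, r0_km, v0, w, gamma)
        lo, hi = (mid, hi) if r < r_target_km else (lo, mid)
    t = 0.5 * (lo + hi)
    return t / 3600.0, dbm_state(t, r0_km, v0, w, gamma)[1]

# Fast CME launched at 20 Rs with 1000 km/s into a 400 km/s wind:
t_h, v_arr = dbm_arrival(r0_km=20 * 6.96e5, v0=1000.0, w=400.0, gamma=0.2e-7)
print(f"transit time ~ {t_h:.1f} h, arrival speed ~ {v_arr:.0f} km/s")
```

Because a single evaluation is essentially instantaneous, perturbing the inputs over their observational uncertainties and rerunning yields ensemble ToA/SoA distributions of the kind discussed above at negligible cost.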
### CME Propagation Model Input Parameters

CME forecasting using MHD simulations usually introduces the CME at heights above 0.1 AU. However, using CME parameters derived at lower heights to introduce a CME into a model at a larger height is a potential source of uncertainty given that CMEs can undergo deflection and rotation while traveling through the corona (e.g., Yurchyshyn et al., 2009; Isavnin et al., 2014; Kay & Opher, 2015). Coupled solar-heliospheric models would be able to simulate the CME from the eruption site up to the arrival at a target in interplanetary space (e.g., Torok et al., 2018), but this usually comes at very high computational cost, which is not practical for real-time forecasting. CME forecast models require several data-driven or assumed initial parameters. Table 2 lists these parameters along with the data sources and techniques used to estimate them. The parameters that are most often required by the models are the CME initiation time, initial height, latitude, longitude, and speed. If the CME model has a geometry that allows for a standard CME shape consisting of two legs and a curved front, or even one without legs (e.g., a spheromak), the model requires the CME tilt and an angular width. These parameters are usually derived via multi-viewpoint coronagraph observations and forward modeling techniques such as the Graduated Cylindrical Shell (GCS; Thernisien et al., 2006, 2009), the FR in 3-D (Fri3D; Isavnin, 2016), and the Stereoscopic CME Analysis Tool (StereoCAT; Mays et al., 2015). The CME parameters can typically be derived until the CME reaches the edge of the field-of-view of the observing coronagraphs. These modeling techniques, especially for events that appear very complex in white-light data, can be very demanding for the observer. In such cases, the human-in-the-loop plays a decisive role in the final CME parameters obtained by the fitting process. The H2-01 team has made a thorough comparison of the skill of different GCS reconstructions to assess the bias and uncertainty in the derived parameters (see TI1 paper by Verbeke et al., 2022). To quantify the uncertainties of the CME parameters, the team designed two different synthetic scenarios (ray-tracing from GCS and MHD simulations) in which the "true" geometric parameters are known, allowing such uncertainties to be quantified for the first time. This effort yielded interesting results. CME reconstructions using a single viewpoint had the largest errors and error ranges overall for both synthetic GCS and simulated MHD white-light data. As the number of viewpoints increased from one to two, the errors decreased by approximately 4\({}^{\circ}\) in latitude, 22\({}^{\circ}\) in longitude, 14\({}^{\circ}\) in tilt, and 10\({}^{\circ}\) in half-angle. These results quantitatively show the critical need for at least two viewpoints to reduce the uncertainty in deriving CME parameters. Singh et al. (2022) performed a similar quantification of the uncertainty in GCS fits by comparing GCS parameters reported in multiple studies and catalogs. They determined that GCS estimates of the CME latitude, longitude, tilt, and speed have average uncertainties of about 6\({}^{\circ}\), 11\({}^{\circ}\), 25\({}^{\circ}\), and 11.4%, respectively. Magnetized CME models are being developed to improve \(B_{z}\) forecasting at Earth. These models have to be initialized with the correct magnetic field poloidal and toroidal fluxes and with the correct handedness (helicity sign).
This requires expertise from Cluster S and knowledge about the solar surface structures related to the eruption. The poloidal flux is usually estimated via the reconnected flux in the post-eruption arcade (PEA) of the CME source region (Gopalswamy et al., 2017) or the flare ribbons (Kazachenko et al., 2017). The toroidal flux can be estimated from the flux in the coronal dimming regions near the CME source (e.g., Dissauer et al., 2018). The helicity sign can be estimated from EUV and magnetogram observations of the active regions (Palmerio et al., 2017, 2018, and references therein), or more simply via the hemispheric helicity rule (Pevtsov et al., 2014; Savani et al., 2015). Not all magnetized CME models have the capability to be initialized with the desired poloidal and toroidal fluxes, i.e., the twist of the magnetic field lines may not be a free parameter in all models. For example, the spheromak model (Shiota & Kataoka, 2016; Verbeke et al., 2019) and the Gibson-Low model (Gibson & Low, 1998; Singh et al., 2019) use only one parameter to control the CME magnetic flux, making the poloidal and toroidal fluxes proportional to each other and the twist of the magnetic field lines a non-constant but fixed value. However, the removal of force-free assumptions in models such as the modified spheromak model (Singh et al., 2020a, 2020b) and the constant-turn FR model (Singh et al., 2022) allows for the separate input of poloidal and toroidal fluxes, making the twist a free parameter that can be controlled by the model user. See also investigations of the magnetic morphology of CMEs from multi-spacecraft data (e.g., Mostl et al., 2009; Al-Haddad et al., 2013). MHD models also require the CME density or the CME total mass as inputs when introducing CMEs into the simulation domain. The total mass of the CME can be calculated from the total brightness of coronagraph images (Colaninno & Vourlidas, 2009; Bein et al., 2013) on an event-by-event basis. When such an analysis is not feasible (e.g., due to time constraints in forecasting/nowcasting conditions, or observational limitations), default values for the initial CME density may be used as input for propagation models (e.g., Odstrcil, 2003; Mays et al., 2015; Pomoell & Poedts, 2018). Additionally, efforts towards defining a range of realistic CME densities to be used routinely as inputs into CME propagation models and ensemble realizations have been undertaken in recent years (Temmer et al., 2021). Especially when including heliospheric imager (HI) observations, recent studies have shown that the CME kinematics beyond the coronagraphic field-of-view can be used to estimate the CME mass (Amerstorfer et al., 2018; Hinterreiter et al., 2021). Cone and spheromak CME models may account for the density distribution inside the structure via pressure gradients. However, since these models do not have solutions for more complex density distributions, the mass is usually distributed uniformly throughout the CME volume. The validity of this assumption is supported by the recent study of Temmer et al. (2021), but needs to be further tested. Models such as the Gibson-Low model have an analytic solution for the mass density resembling the three-part structure of CMEs. The thermodynamic evolution (e.g., pressure, temperature, heat, entropy) of CMEs is not well understood and is one of the most challenging problems of space plasma physics (e.g., Liu et al., 2005).
The combination of the density, temperature, and ionization states of CMEs constrains their thermal history and can be used to understand the physical processes within the CME plasma. The thermodynamics of the solar wind has been studied extensively since the seminal work of Parker (1960); however, such efforts are limited for the case of CMEs. The heating of plasma in the closed magnetic field configuration of a CME is expected to differ from that in the open magnetic field configuration of the background solar wind and needs to be examined over the different phases of heliospheric propagation. The CME is also an inhomogeneous structure, including substructures with different plasma characteristics. The thermodynamic evolution of a CME is often modeled using a polytropic approximation (e.g., Chen & Garren, 1993). Although different values of the polytropic index might be used to imply different rates of heating, the ideal MHD models used for CME evolution often assume a fixed value of the polytropic index without any justification (Pomoell & Poedts, 2018). Therefore, developing methods and models for estimating the radial gradient in kinematic, thermodynamic, plasma, and magnetic properties inside and outside CMEs is a major requirement for improving understanding of space weather. Earlier studies addressing the thermodynamic state of CMEs often estimated the thermodynamic properties of an expanding CME at a certain position or time (Raymond, 2002; Ciaravella et al., 2003). The temperature of the plasma in the pre- and post-shock regions has been estimated using white-light, EUV, and radio observations of a fast CME (Bemporad & Mancuso, 2010). The polytropic index of CMEs can be estimated by comparing in situ observations of the same CME observed by multiple radially-aligned spacecraft. However, this situation is extremely rare due to the sparse distribution of spacecraft and the difficulty in identifying CMEs in the solar wind. Using observations of several CMEs made by spacecraft located over a range of radial distances, the polytropic index for CME plasma is inferred to be around 1.1 to 1.3 from 0.3 to 20 AU, and nearly constant over the solar cycle (Wang & Richardson, 2004; Liu et al., 2005, 2006). Thus, the expansion of a CME behaves more like an isothermal, rather than an adiabatic, process. It has also been shown that the magnetic field and density decrease faster in CMEs than in the solar wind, but the temperature decreases more slowly in CMEs than in the solar wind (Totten et al., 1995; Liu et al., 2006). This implies that either the plasma in CMEs has to be heated or that these analyses used oversimplified assumptions. A pioneering attempt to understand the thermodynamic evolution of an individual CME during its propagation from the inner to the outer corona was made by Wang et al. (2009), who developed the Flux Rope Internal State Model (FRIS). This model was recently modified by Mishra & Wang (2018) so that the evolution of the CME's thermodynamic state is expressed in terms of its kinematics, which are governed by the Lorentz and thermal pressure forces. Although this simplified MHD model has not been used statistically for understanding the general thermodynamic behavior of CMEs, it has been applied to a few case studies, giving different results (Mishra & Wang, 2018; Mishra et al., 2020; Nieves-Chinchilla et al., 2020).
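As a simple illustration of how the polytropic index can be inferred from radially separated measurements, the sketch below fits the relation \(T\propto n^{\Gamma-1}\) in log-log space; the density-temperature pairs are invented solely for illustration and are chosen to fall in the reported range of 1.1-1.3.

```python
import numpy as np

# Hypothetical proton density [cm^-3] and temperature [K] pairs for the same
# CME plasma sampled at increasing heliocentric distance (invented values).
n = np.array([12.0, 5.0, 2.2, 1.0])
T = np.array([8.0e4, 6.2e4, 4.9e4, 3.9e4])

# Polytropic relation T ~ n^(Gamma - 1): slope of ln T versus ln n gives Gamma - 1.
slope, _ = np.polyfit(np.log(n), np.log(T), 1)
print(f"inferred polytropic index: {1.0 + slope:.2f}")
```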
Future studies should focus on investigating whether there is a critical height at which CMEs turn from a heat-releasing to a heat-absorbing state, and whether this depends on CME characteristics. Such a study would be feasible using CME kinematics derived from the Metis coronagraph and SoloHI on SolO, as well as from WISPR onboard PSP, in the FRIS model. The performance and reliability of such models need to be examined by comparing in situ observations by SolO and PSP with the results of fully three-dimensional numerical MHD modeling. The solar wind ion charge states in a CME are considered to be frozen-in in the lower corona, and the in situ charge state abundances can provide information on the thermodynamic state of the CME (Lepri et al., 2001; Gruesbeck et al., 2011). Future studies using such charge state compositions measured by spacecraft traveling to previously unexplored regions of the heliosphere are imperative for a better understanding of the heating and acceleration of CMEs as well as the solar wind in general. Observations in the Lyman-alpha line of hydrogen from the Metis coronagraph onboard SolO would help to link the solar atmosphere and inner heliosphere. There may also be variations in the thermodynamics of CMEs in different solar cycles, which future studies should explore (see also Section 2.2). A better understanding of the thermodynamics of CMEs would improve modeling of CME expansion speeds, which is crucial for improving CME ToA estimates.

### CME Propagation Models

The routine availability of data from spacecraft coronagraphs (e.g., LASCO, COR1, and COR2) and HI during the last two decades has triggered the development of new CME propagation models. These models utilize various CME characteristics to forecast CME kinematics and properties and address fundamental questions about CME propagation, such as CME ToA and impacts. They use a number of different approaches and may be categorized as: empirical models (Gopalswamy et al., 2001, 2005; Schwenn et al., 2005; Nunez et al., 2016; Paouris & Mavromichalaki, 2017; Paouris et al., 2021), analytical and drag-based models (Cargill, 2004; Vrsnak et al., 2013; Shi et al., 2015; Amerstorfer et al., 2018; Mostl et al., 2018; Dumbovic et al., 2018; Kay et al., 2022; Napoletano et al., 2022), MHD models (Odstrcil, 2003; Shiota & Kataoka, 2016; Jin et al., 2017; Pomoell & Poedts, 2018; Torok et al., 2018), heliospheric reconstruction approaches (Sheeley et al., 1999; Kahler & Webb, 2007; Howard et al., 2006; Lugaz et al., 2009a; Davies et al., 2012, 2013; Rollett et al., 2016; Amerstorfer et al., 2018; Paouris & Vourlidas, 2022), and ML models (Sudar et al., 2016; Liu et al., 2018). These models and other related references are presented in Table 3. With so many models available, comparing their performance is necessary. However, because the models are based on different principles, model-to-model comparison is not straightforward. So far, most researchers have performed their own verification and validation studies (see, e.g., Vrsnak et al., 2014; Mays et al., 2015a; Paouris & Mavromichalaki, 2017; Dumbovic et al., 2018; Riley et al., 2018; Wold et al., 2018; Amerstorfer et al., 2021; Paouris et al., 2021a). Typically, these validation studies each use different sets of CME events, CME parameters, and metrics. However, some efforts have been made to compare models, such as in Dumbovic et al. (2018) and Paouris et al.
(2021a), where the performance of the Drag-Based Ensemble Model (DBEM) and the Effective Acceleration Model (EAMv3) is compared with WSA-ENLIL using the same set of events. The necessity of establishing a benchmark dataset that may be used for all validation analyses is clearly apparent. This benchmarking dataset will serve as a validation tool both for new models and for updated versions of already-existing models, making it possible to determine the difference in performance between two versions of a model. With this in mind, the CME Arrival Time and Impact Working Team (H2-01) has been formed within the ISWAT H Cluster. This team was originally founded in 2017 as part of the "International Forum for Space Weather Capabilities Assessment". It aims to develop a dataset with a statistically significant sample of 100 or more CMEs and associated CME arrivals covering different periods within the solar cycle (Verbeke et al., 2019). However, this task requires considerable preparation and community coordination. The first steps towards this goal were taken by the International Space Science Institute (ISSI) Team on "Understanding Our Capabilities In Observing and Modeling Coronal Mass Ejections" (formed of a subset of the H2-01 team). As can be seen from the CME scoreboard, even the same model may produce different outputs if different CME input parameters and model settings are chosen. Human bias also plays a role, as different forecasters may generate different forecasts owing to their level of experience or skill (the human-in-the-loop effect). As such, when benchmarking CME arrival time models, it is important to collect accurate information about the CME and solar wind inputs selected for each model. The CCMC scoreboard acts in a very similar way to the solar wind benchmarking scheme given in Reiss et al. (2022). With this approach, it may be possible to at least reduce the ambiguity coming from observational data. In addition to the CME dataset required for benchmarking, developing a community-agreed, unified set of metrics is of high importance. Verbeke et al. (2019) made a first effort towards this goal. To assess CME arrival predictions, they used two categories of metrics: a) event detection performance metrics (from contingency tables) that aim to determine whether an event was correctly predicted, and b) ToA and SoA metrics (i.e., hit performance metrics) that assess the performance of the model's predicted events. As part of the event detection performance metrics, the observed arrival and/or non-arrival of a CME and the corresponding CME forecast were used to create a contingency table containing information about 'hits' (observed and predicted arrival), 'false alarms' (predicted arrival but not observed), 'misses' (observed arrival but not predicted) and, finally, 'correct rejections' (arrival neither observed nor predicted). Note, though, that the definition of a hit depends on the chosen time interval within which a forecast arrival is assumed to be correctly predicted. See Verbeke et al. (2019) for more details about the skill scores that can be derived from the contingency table. Hit performance metrics focus on the predicted hit arrivals and assess how well the model predicts CME ToA, SoA, or DoA, as well as other arrival parameters such as magnetic field and temperature (see also Section 1.3). Different metrics can be used for the ToA error, such as the mean error, mean absolute error, root mean squared error, and standard deviation.
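A minimal sketch of how these contingency-table skill scores and ToA error metrics might be computed from a verification sample is shown below; all counts and errors are hypothetical.

```python
import numpy as np

# Hypothetical contingency table for CME arrival forecasts
hits, false_alarms, misses, correct_rejections = 30, 10, 8, 52

pod = hits / (hits + misses)                 # probability of detection
far = false_alarms / (hits + false_alarms)   # false-alarm ratio
csi = hits / (hits + misses + false_alarms)  # critical success index

# Hypothetical ToA errors (predicted minus observed, in hours) for the hits
toa_err = np.array([-12.5, 3.0, 8.0, -5.5, 1.0, 14.0, -9.0])
me = toa_err.mean()                  # mean error (bias: early vs. late)
mae = np.abs(toa_err).mean()         # mean absolute error
rmse = np.sqrt((toa_err**2).mean())  # root mean squared error
sd = toa_err.std(ddof=1)             # standard deviation

print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}")
print(f"ME={me:+.1f} h  MAE={mae:.1f} h  RMSE={rmse:.1f} h  SD={sd:.1f} h")
```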
Each of these metrics provides different information on the accuracy of a CME propagation model. For example, the mean error quantifies the bias of the model, in terms of whether the predictions are early or late on average, while the mean absolute error quantifies the absolute time difference irrespective of whether a prediction is early or late. It remains a difficult and ongoing task to determine how prediction errors originating from the ambient solar wind modeling (see Section 3) and from the chosen CME model can be separated and determined. See more details of the H2-01 and H2-03 team efforts at [https://www.iswat-cospar.org/h2](https://www.iswat-cospar.org/h2). Forecasting the \(B_{z}\) component is one of the key challenges in space weather forecasting. Currently, the most reliable estimates use measurements of the magnetic structures at L1 propagated to Earth, giving a lead time of 40-60 minutes (Vourlidas et al., 2019). Recent attempts have applied new methodologies, such as deep learning (DL) and ML, to remote sensing image data and in situ measurements with the aim of increasing this lead time (e.g., dos Santos et al., 2020; Reiss et al., 2021). Statistical and analytical methods using information from the solar surface, e.g., the helicity rule (Bothmer & Schwenn, 1998), as described in Savani et al. (2015), or using a combination of several forecasting tools, such as the Open Solar Physics Rapid Ensemble Information (OSPREI; see Kay et al., 2022), also show promise. Although models and methods utilizing heliospheric imaging data have successfully tracked CMEs and estimated their kinematics away from the Sun, particularly within the large space between the Sun and Earth, they have their limitations. These are mainly due to the line-of-sight integration of the visible-light signal, and the interaction of CMEs with the background solar wind, CIRs/SIRs, and, most importantly, other CMEs, which complicates the use of such observations in space weather forecasting (see Section 5 for more details on interaction processes). Attempts to address limitations in the localization of large-scale solar wind features have led to the development of a plethora of techniques to aid in the interpretation of HI observations. Several approaches to derive the kinematic properties of CMEs from HI observations are based on the analysis of their time-elongation profiles combined with assumptions about the CME cross-section (Liu et al., 2010; Lugaz et al., 2010; Davies et al., 2012, 2013; Rollett et al., 2016; Amerstorfer et al., 2018; Bauer et al., 2021; Hinterreiter et al., 2021; Paouris and Vourlidas, 2022). Such approaches have often used just the manually-extracted time-elongation profile along a single position angle corresponding to the CME leading edge in the ecliptic plane. Future reconstruction methods need to be developed that track CME features along different heliolatitudes (see e.g., Mostl et al., 2014, SATPLOT tool) while considering the time-varying geometry of CMEs. Some recently developed techniques are "scientifically rich" (Rollett et al., 2012), but show limitations in terms of their operational potential. Some HI-based methods incorporate inputs from other methods, such as the 3-D CME propagation direction from the analysis of coronagraphic observations, to reduce the number of free parameters in the HI-based analysis (Mishra et al., 2014, 2015a).
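As an example of the simplest such geometric assumption, the Fixed-Phi approximation (Sheeley et al., 1999; Rouillard et al., 2008) treats the CME as a point-like feature propagating radially at a fixed angle \(\phi\) from the observer-Sun line, so that an observed elongation \(\varepsilon\) converts directly to a heliocentric distance. The sketch below is illustrative only; `fixed_phi_distance` is a hypothetical helper, and in practice \(\phi\) would come from, e.g., coronagraphic triangulation.

```python
import numpy as np

def fixed_phi_distance(elongation_deg, phi_deg, d_obs_au=1.0):
    """Fixed-Phi conversion: r = d_obs * sin(eps) / sin(eps + phi), where eps is
    the elongation and phi the CME propagation angle from the observer-Sun line."""
    eps = np.radians(elongation_deg)
    phi = np.radians(phi_deg)
    return d_obs_au * np.sin(eps) / np.sin(eps + phi)

# Elongation profile (degrees) of a hypothetical CME front seen at phi = 60 deg
for eps in (10.0, 20.0, 30.0, 40.0):
    print(f"elongation {eps:4.1f} deg -> r = {fixed_phi_distance(eps, 60.0):.2f} AU")
```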
Furthermore, some well-established HI-based techniques include aerodynamic drag to extrapolate the CME speed profile beyond the field-of-view of the HI observations (Mishra and Srivastava, 2013; Rollett et al., 2016). Recently, such a method was enhanced by including the deformation of the CME during propagation arising from interaction with the ambient solar wind (Hinterreiter et al., 2021). This approach of using external information in HI-based analysis, including drag forces, may surpass the performance of other methods based only on single and multiple viewpoint observations. Paouris and Vourlidas (2022) adopted a slightly more realistic approach for CME propagation using HI data. They replaced the common assumption of constant speed in the inner heliosphere with a two-phase behavior consisting of a decelerating (or accelerating) phase from 20 Rs to some distance, followed by a coasting phase to Earth. This new approach improved the ToA of CMEs in some cases. For example, the difference between predicted and observed ToA was below 52 minutes for 21 of the cases considered. The analysis indicates that reasonable forecasts may be attainable with CME HI measurements up to 0.5 AU and with a (mean) lead time of 31 hours (see also Colaninno et al., 2013). Furthermore, because interactions with other large-scale structures can lead to a significant change in CME speed and direction, the accuracy of HI-based techniques used for CME ToA prediction will be severely reduced if post-interaction kinematics are not taken into account (Shen et al., 2012; Mishra and Srivastava, 2014; Rollett et al., 2014; Temmer et al., 2014; Mishra et al., 2016). Several studies have demonstrated that CME-CME interactions are poorly understood (Temmer et al., 2014; Shen et al., 2016; Mishra et al., 2017). However, since interacting CMEs may give rise to enhanced space weather effects (Farrugia et al., 2006; Mishra et al., 2015), future research to understand the nature of such interactions at different heliocentric distances is imperative. Even if two CMEs do not physically interact, the preceding CME can "pre-condition" the background solar wind (Temmer and Nitta, 2015). More details about interacting CMEs and pre-conditioning effects can be found in Section 5. Predicting CME arrival at Earth when CME-CME interactions occur remains challenging using observations as well as MHD models; time-dependent modeling of interplanetary space is needed. The limited cadence and resolution of the HI onboard STEREO have prevented their full use for monitoring solar wind structures. The next generation of HI making observations from a vantage point off the Sun-Earth line, onboard NASA/PUNCH, to be launched in 2025, and ESA/_Vigil_, to be launched in 2029, have carefully tailored instrument specifications (field-of-view, cadence, exposure time, and resolution) and may be expected to track CMEs in the heliosphere more accurately. This will help to make further refinements in HI-based reconstruction techniques (Davies et al., 2012, 2013; Rollett et al., 2014; Paouris and Vourlidas, 2022), and in the models that combine these techniques with drag-based motion (Zic et al., 2015; Rollett et al., 2016; Amerstorfer et al., 2018; Hinterreiter et al., 2021). Above we mention the ESA _Vigil_ mission, planned for launch in 2029 to a location at L5, from where it will view the solar surface and active regions 4-5 days before they rotate to the central meridian with respect to Earth.
With that, _Vigil_ will give us advance warning of how the solar surface behaves, giving us more time to protect vulnerable space equipment and exploration as well as vital infrastructure on the ground. _Vigil_ observations will be used as valuable input to improve heliospheric models and will help to estimate the probability of solar eruptions (see also S Cluster TI2 paper by Georgoulis et al., 2023). Adding a complementary L4 mission (Posner et al., 2021) will provide information about those active regions on the western hemisphere and beyond the west limb that are the sources of the most intense SEP events observed at Earth (see Cluster H3 paper by Guo et al., 2023). Combining observations from the vantage points of Earth, L4, and L5 will cover more than 80% of the solar surface, significantly improving modeling inputs and both short- and long-term forecasting abilities.

Table 2: CME parameters commonly used to initialize CME propagation models

| Parameter | Source | Useful references |
| --- | --- | --- |
| CME start time, start height | Model-dependent; stereoscopic coronal observations + forward modeling techniques | Thernisien et al. (2006, 2009); Isavnin (2016); Mays et al. (2015b); Wood & Howard (2009) |
| CME longitude, latitude | Stereoscopic coronal observations + forward modeling techniques | Thernisien et al. (2006, 2009); Isavnin (2016); Mays et al. (2015b); Wood & Howard (2009) |
| CME volume, geometry (e.g., angular width, aspect ratio) | Model-dependent; stereoscopic coronal observations + forward modeling techniques | Thernisien et al. (2006, 2009); Isavnin (2016); Mays et al. (2015b); Wood & Howard (2009) |
| CME total, translational speeds | Model-dependent; stereoscopic coronal observations + forward modeling techniques | Thernisien et al. (2006, 2009); Isavnin (2016); Mays et al. (2015b); Wood & Howard (2009) |
| CME-driven shock speed | Model-dependent; stereoscopic coronal observations + forward modeling techniques; associated-flare location + SXR peak | Thernisien et al. (2006, 2009); Isavnin (2016); Mays et al. (2015b); Wood & Howard (2009); Nunez et al. (2016) |
| CME HI time-elongation profile | Heliospheric images | Zic et al. (2015); Rollett et al. (2016) |
| CME total mass | Geometry-dependent, linked to CME volume and mass density; stereoscopic coronal observations | Colaninno & Vourlidas (2009); Bein et al. (2013); Temmer et al. (2021) |
| CME mass density | Single-viewpoint coronal observations; stereoscopic coronal observations | Falkenberg et al. (2010); Mays et al. (2015b); Werner et al. (2019); Temmer et al. (2021) |
| CME drag parameter | Model-dependent, linked to CME speed and solar wind pre-conditioning | Vrsnak et al. (2014); Calogovic et al. (2021) |
| CME temperature | Model-dependent | ad-hoc parameter |
| FR handedness | EUV and/or X-ray estimates; hemispheric helicity rule | Bothmer & Schwenn (1998); Palmerio et al. (2017, 2018); Pevtsov et al. (2014) |
| FR axial orientation | Stereoscopic coronal observations + forward modeling techniques; EUV and photospheric magnetic field estimates | Palmerio et al. (2018); Yurchyshyn et al. (2001); Marubashi et al. (2015); Yurchyshyn (2008) |
| FR axial magnetic field strength; FR total, toroidal, poloidal magnetic fluxes; FR magnetic field twist | EUV and photospheric magnetic field estimates of reconnected flux based on different eruptive signatures | |

Table 3: Best-known and most widely used CME propagation models

| Model category / model name | Input data | Useful references |
| --- | --- | --- |
| **Empirical models** | | |
| Effective Acceleration Model (EAM) | Coronagraph data | Paouris & Mavromichalaki (2017); Paouris et al. (2021) |
| Empirical Shock Arrival model (ESA) | Coronagraph data | Gopalswamy et al. (2001, 2005); Manoharan et al. (2004) |
| Shock ARrival Model (SARM) | Coronagraph and soft X-ray data | Núñez et al. (2016) |
| **Drag-based models** | | |
| Drag Based Model (DBM) | Coronagraph data | Vrsnak et al. (2013); Cargill (2004) |
| Drag Based Ensemble Model (DBEM) | Coronagraph data | Dumbovic et al. (2018); Calogovic et al. (2021) |
| Drag-based Model Fitting (DBMF) | Coronagraph data | Zic et al. (2015) |
| ELlipse Evolution model based on Heliospheric Imaging (ELEvoHI) | HI data | Rollett et al. (2016); Amerstorfer et al. (2018) |
| **Reduced-physics models** | | |
| Heliospheric Upwind eXtrapolation with time dependence (HUXt) | Magnetograms and coronagraph data | Owens et al. (2020) |
| Open Solar Physics Rapid Ensemble Information (OSPREI) | Magnetograms and coronagraph data | Kay et al. (2022) |
| **MHD models** | | |
| ENLIL + Cone | Magnetograms and coronagraph data | Odstrcil & Pizzo (1999b); Odstrcil (2003); Odstrcil et al. (2005) |
| CORona-HELiosphere (CORHEL)/Magnetohydrodynamic Algorithm outside a Sphere (MAS) + modified Titov-Demoulin (TDm) | Magnetograms and coronagraph data | Riley et al. (2012); Lionello et al. (2013); Torok et al. (2018) |
| Alfven Wave Solar Model (AWSoM) | Magnetograms and coronagraph data | van der Holst et al. (2014); Jin et al. (2017) |
| MS-FLUKSS + Gibson-Low | Magnetograms and coronagraph data | Singh et al. (2019) |
| MS-FLUKSS + modified spheromak | Magnetograms and coronagraph data | Singh et al. (2020b) |
| EUropean Heliospheric FORecasting Information Asset (EUHFORIA) + Cone | Magnetograms and coronagraph data | Pomoell & Poedts (2018) |
| EUHFORIA + Linear Force-Free Spheromak (LFFS) | Magnetograms and coronagraph data | Verbeke et al. (2019b) |
| ICARUS + Cone | Magnetograms and coronagraph data | Verbeke et al. (2022) |
| Space-weather-forecast-Usable System Anchored by Numerical Operations and Observations (SUSANOO)-CME | Magnetograms and coronagraph data | Shiota et al. (2014); Shiota & Kataoka (2016) |
| **Heliospheric reconstruction approaches** | | |
| Fixed-Phi Fitting (FPF) | HI data | Rouillard et al. (2008) |
| Harmonic Mean Fitting (HMF) | HI data | Lugaz et al. (2009b) |
| Self-Similar Expansion Fitting (SSEF) | HI data | Mostl & Davies (2013) |
| ELlipse Evolution model based on Heliospheric Imaging (ELEvoHI) | HI data | Rollett et al. (2016); Amerstorfer et al. (2018) |
| Drag-based Fitting (DBMF) | HI data | Zic et al. (2015) |
| Heliospheric Reconstruction and Propagation Algorithm (HeRPA) | HI data | Paouris & Vourlidas (2022) |
| **ML models** | | |
| CME Arrival Time Prediction Using ML Algorithms (CAT-PUMA) | Coronagraph and solar wind data | Liu et al. (2018) |

#### 4.3.1 The Heliosphere Observed in Radio

Since the 1950s, there have been attempts to relate solar observations to heliospheric structures. Early analyses used metric (Wild and McCready, 1950) and later kilometric (Bougeret et al., 1998) radio observations to track shocks moving outward from the Sun and predict their arrival at Earth (Fry et al., 2001). IPS and Thomson-scattering observations have been utilized to provide the near-Earth morphology of outward-flowing heliospheric structures. Some of the best early studies of this type used IPS data from the Cambridge IPS array (Hewish et al., 1964; Houminer, 1971) to fit both remotely-sensed and in situ observations with modeled co-rotating and transient structures (Behannon et al., 1991). These "by eye" model fits to data were followed by more sophisticated analyses of the IPS observations (made at UCSD, USA, and Nagoya University, Japan; see Jackson et al., 1998; Kojima et al., 1998) employing iterative 3-D tomographic reconstruction techniques that used no preconceived notion of the heliospheric structures present, other than assuming outward radial expansion of the solar wind. Since the IPS observations were available with delays of only \(\sim\)12 hours, these analyses were also developed to forecast the arrival of heliospheric structures at Earth. To improve the 3-D reconstruction of CMEs, an even more sophisticated, time-dependent model was developed that conserved mass and mass flux, and could also incorporate Thomson-scattering observations (Jackson et al., 2001, 2008; Jackson and Hick, 2002). Results from this model have been fit to in situ data at Earth, usually through least-squares Pearson's "R" correlation procedures, in a way that helps refine the remote-sensing analyses and certify forecast performance. With the more abundant Thomson-scattering brightness data available over most of the sky from the Solar Mass Ejection Imager (SMEI; Jackson et al., 2004), launched in early 2003, far higher-resolution time-dependent 3-D reconstructions of heliospheric density became possible (Jackson et al., 2006, 2008). These have led to a capability to use Thomson-scattering analyses to iteratively reconstruct 3-D densities that match in situ measurements near the observer with cadences of about one hour (Jackson et al., 2020). This technique has also recently been used with well-calibrated STEREO HI data (Harrison et al., 2008; Eyles et al., 2009) to provide high-resolution 3-D reconstructions (Jackson et al., 2020) throughout the region of the heliosphere viewed by these instruments. Other variations of these iterative data-fitting techniques have been developed using IPS and Thomson-scattering observations. The Japanese IPS iterative technique (e.g., Hayashi et al., 2003) has been used to provide boundary conditions for a 3-D MHD model, and more recently IPS data have been used to modify the spheromak-initiated 3-D MHD CME model (SUSANOO-CME) in a time-dependent way (Iwai et al., 2019). Results from the UCSD iterative technique are also currently extracted at 0.1 AU and used to drive 3-D MHD models (Yu et al., 2015) in an analysis that determines CME structure and forecasts their velocity and density as well as the magnetic field components (Jackson et al., 2015). Additionally, the ENLIL model (Odstrcil, 2003; Odstrcil et al., 2005) can now be used as a kernel in the 3-D reconstruction analyses (Jackson et al., 2020). The use of 3-D MHD modeling in the 3-D reconstruction analyses allows the incorporation of more sophisticated physical processes such as shocks and compressive structures and, as a result, non-radial plasma transport, with the outward solar wind flow modified by temperature and magnetic fields.
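As a toy illustration of the correlation-based certification mentioned above, the snippet below compares a short, invented reconstructed density series with equally invented in situ values using Pearson's R; real analyses use long time series and least-squares fits, but the principle is the same.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented hourly proton densities [cm^-3]: reconstruction vs. in situ
recon = np.array([4.1, 4.3, 5.0, 7.8, 9.5, 8.2, 6.1, 5.0])
insitu = np.array([3.9, 4.4, 5.6, 8.5, 10.2, 7.9, 5.8, 4.7])

r, p = pearsonr(recon, insitu)  # correlation coefficient and p-value
print(f"Pearson R = {r:.2f} (p = {p:.4f})")
```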
### Summary

In this Section we have given an overview of state-of-the-art CME propagation models. Despite the plethora of such models, as well as of observed and modeled CME parameters, reliably simulating CME propagation still poses many open research questions, owing to the complex and rather poorly understood interplay between CME and solar wind characteristics. There is huge future potential for utilizing multiple observational data sources (e.g., coronagraphs and HI at the Lagrange points L1, L5, and maybe L4) as input to CME propagation models via data assimilation (DA) in order to better investigate CME propagation behavior in interplanetary space. This will decisively improve the capability for producing more accurate space weather forecasts.

## 5 Interaction Phenomena (HSSs-CMEs, CIRs/SIRs-CMEs, CME-CME) and Preconditioning

At any instant of time, interplanetary space is filled with various large-scale solar wind structures. As discussed in more detail in Sections 2-4, the key players are transient events, i.e., CMEs, and SIRs/CIRs together with their related HSSs. Each of these structures generates a perturbation in the smooth outflow of the slow solar wind, and interactions between them cause complex processes that alter the characteristics of these structures and, hence, the prevailing conditions in interplanetary space. This section gives an overview of the preconditioning effects and interaction processes between CMEs and SIRs and between CMEs and other CMEs, and how these relate to CME propagation models and space weather forecasting. For a more detailed review of CME-CME interaction we refer to Lugaz et al. (2017), and of the nature of CME collisions to Zhang et al. (2021).

### Variability of Space Environment on Short and Long Terms

The evolution of CMEs during propagation through interplanetary space is strongly shaped by the interplay between the internal and external factors controlling their interaction with the surrounding solar wind and other transients (Manchester et al., 2017). The magnetic structure of CMEs is therefore the result of a complex chain of physical processes, including expansion due to differences in the internal plasma and magnetic pressure, as well as magnetic field magnitude, with respect to the ambient environment, which basically controls the size of the ejecta (see e.g., Demoulin and Dasso, 2009; Pal et al., 2022). CMEs can occur in sequence when successive releases of energy (primarily magnetic) occur in the parent source region. Interactions among multiple CMEs may involve a faster CME that "overtakes" a slower, preceding CME. A CME launched close to a coronal hole may interact with the associated HSS and SIR (see Section 3 for more details). Hence, other CMEs and SIRs present magnetic obstacles to the interacting CME.
According to the frozen-in field theorem, interacting magnetic structures cannot easily penetrate each other, resulting in strong changes in the physical properties of CMEs such as:

* geometry and size (deformation, compression)
* propagation direction and orientation (rotation, deflection)
* kinematic properties
* magnetic properties (erosion or flux injection, magnetic tension)
* plasma parameters, thermal properties

The space weather impact at a target due to these changes might be larger by up to a factor of 2-3, especially due to compression and the enhancement of the pre-existing negative \(B_{z}(t)\) to more negative values (see e.g., Farrugia et al., 2006; Zhang et al., 2007; Lugaz et al., 2016, 2017; Dumbovic et al., 2015; Shen et al., 2017, 2018; Kilpua et al., 2019; Xu et al., 2019; Scolini et al., 2020; Koehn et al., 2022). The presence of multiple transient structures also leads to a "preconditioning" of the solar wind into which subsequent structures propagate. As a consequence, large uncertainties may be introduced into space weather forecasts based on simple (i.e., undisturbed) background solar wind flow simulations. One of the best examples of the preconditioning of interplanetary space is the super-fast CME event observed in situ at STEREO-A on July 23, 2012 (Russell et al., 2013). It propagated from the Sun to 1 AU in less than 21 hours and would have caused major geomagnetic effects had it been Earth-directed (Baker et al., 2013). The effects of this CME propagating into a region previously rarefied by an earlier CME on July 19, 2012 were very clearly pointed out by Liu et al. (2014). Follow-up studies showed that the strong density depletion lowered the drag by a factor of 10 (Temmer & Nitta, 2015), making the July 23, 2012 event super-fast. The idea that extreme events can result from such combinations (and historical extreme events probably have) is important. Therefore, improving knowledge of the role of preconditioning, and implementing it in models, is a key goal of future research. The significance of the preconditioning of interplanetary space and of CME properties is clearly expected to be related to the solar cycle (e.g., Cremades et al., 2006). The CME occurrence rate (as viewed by coronagraphs) is only about 0.3/day during solar minimum but rises to 4-5/day during solar maximum (e.g., St. Cyr et al., 2000; Gopalswamy, 2006). With CME transit times from the Sun to 1 AU of about 1-4 days (reported average speeds are of the order of \(\sim\)500 to \(\sim\)3000 km s\({}^{-1}\)), there might be only a few CMEs at solar minimum, or as many as 20 at solar maximum, in the 4\(\pi\) heliosphere between the Sun and 1 AU (Lugaz et al., 2017). Hence, during solar minimum, interactions between successive CMEs are rare, but the occasional CME that is present is more likely to interact with CIRs/HSSs. Model evaluations confirm that during times of increased solar activity, preconditioning dominates and forecasts are more likely to fail (see, e.g., Gressl et al., 2014). It is found that disturbed solar wind conditions at a specific measurement location resulting from a sequence of interacting CMEs extend over 3-6 days after the CME start, which is much longer than the average duration of an individual CME disturbance (Temmer et al., 2017; Janvier et al., 2019).
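As a back-of-the-envelope check of these occupancy numbers, the expected number of CMEs simultaneously in transit is roughly the occurrence rate multiplied by the transit time; the short script below, with the rates quoted above and illustrative transit times, reproduces the "few at minimum, up to \(\sim\)20 at maximum" estimate.

```python
# Expected number of CMEs in transit between the Sun and 1 AU:
# occurrence rate [CMEs/day] x transit time [days]
for phase, rate_per_day in (("solar minimum", 0.3), ("solar maximum", 4.5)):
    for transit_days in (1.0, 4.0):
        n = rate_per_day * transit_days
        print(f"{phase}: {rate_per_day}/day x {transit_days:.0f} d -> ~{n:.1f} in transit")
```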
To fully understand and to successfully simulate a specific CME, we need to know the history of the solar wind configuration in a wide analysis window extending back several days before the time the event is observed (see Schrijver et al., 2015; Palmerio et al., 2021). Figure 14 gives an overview of suggested time windows and parameters that might be useful for estimating the history of solar activity related to a CME, and for checking the information and parameters required to feed CME propagation models (see Section 3).

Figure 14: For more accurate forecasts of a specific CME of interest, it is necessary to know the "history" of the erupting active region (AR), i.e., the CME source region (SR). In addition to the actual CME properties, information is needed about the ambient environment in which the CME is embedded, such as nearby coronal holes (CHs) and, hence, fast solar wind (SW) that has not yet arrived at any in situ measurement location. With that, three pillars of information feed forecasting models. The total time range to check covers a window of about 5-9 days. As the CME evolution in interplanetary space progresses, DA from in situ measurements, heliospheric images, or radio data might be used to adjust the model input. The increase in accuracy gained due to DA usually comes at the cost of a decrease in the forecast lead time.

Besides the number of transient events, the solar wind parameters themselves also change over the cycle (see also Section 2.2). For cycle 24, a clear drop in the magnetic field and heliospheric pressure led to stronger CME expansion throughout the heliosphere, which changed their propagation behavior and the build-up of shocks (e.g., Gopalswamy et al., 2015; Jian et al., 2018; Lugaz et al., 2020). There is still more to learn about the solar cycle influence on solar wind parameters and how this knowledge can be fed into models. It is important to take into account whether a CME forecast is made in a weak or strong solar cycle, and how active the Sun is in the specific forecast window (Owens et al., 2021). In that respect we may use long-term averages of the solar wind pressure, density, and speed to compare the characteristics of individual cycles (see Cluster S1 TI2 paper by Pevtsov et al., 2023).

### Interaction Processes with Large-Scale Field Structures

Knowing the propagation direction and orientation of a CME event is key to a) interpreting observational data and b) properly initializing models. Changes in the initial propagation direction have manifold causes. During different phases of the solar cycle, CMEs are launched from different latitudes, which is related to the global solar magnetic field configuration (see also the S2 Cluster TI2 paper by Reiss et al., 2023). Even high-latitude CMEs from active regions may cause intense geomagnetic storms (e.g., Zhou et al., 2006), suggesting that such CMEs are deflected towards the ecliptic during propagation. CME deflection is related to magnetic pressure gradients, which are stronger in the corona (MacQueen et al., 1986; Shen et al., 2011; Mostl et al., 2015; Wang et al., 2015; Kilpua et al., 2019; Wang et al., 2020) than in interplanetary space (Wang et al., 2004; Siscoe & Odstrcil, 2008). Thus, CMEs tend to be deflected towards regions of weaker magnetic field (Gui et al., 2011). The majority of CMEs are deflected in latitude towards the equator (MacQueen et al., 1986; Kilpua et al., 2009; Wang et al., 2011). Longitudinal deflections may be either towards or away from the Sun-Earth line.
Slow CMEs are found to be deflected more easily than fast ones, and usually an E-W asymmetry is observed, such that fast CMEs are deflected to the East and slow ones to the West (Wang et al., 2004).

#### 5.2.1 Interaction between CMEs and SIRs/CIRs/HSSs

CMEs are most strongly affected by the presence of open magnetic fields, in particular coronal holes close to the eruption site (Gopalswamy et al., 2009; Heinemann et al., 2019; Sahade et al., 2020). Being large-scale magnetic structures, HSSs and related SIRs/CIRs (see more details in Section 3) may cause significant changes to the intrinsic physical properties of a CME. It has been shown that, due to interactions of CMEs with HSSs and SIRs, the FR structure of a CME may deform, kink, or rotate (Manchester et al., 2004; Riley & Crooker, 2004; Wang et al., 2006; Yurchyshyn, 2008; Isavnin et al., 2013), erode due to reconnection (Dasso et al., 2006; Ruffenach et al., 2012; Lavraud et al., 2014; Ruffenach et al., 2015; Wang et al., 2018; Pal et al., 2022), or be deflected (Wang et al., 2004; Manchester et al., 2005; Kay et al., 2013; Wang et al., 2014; Kay et al., 2015; Wang et al., 2016; Zhuang et al., 2019; Heinemann et al., 2019). In addition, SIR/CIR/HSS-CME interactions can alter the magnetic field complexity inside CMEs (Winslow et al., 2021; Scolini et al., 2022). The specific effects of the interaction depend on whether the SIR/CIR and related HSS is ahead of or behind the CME. If it is behind and catching up with the CME, the interaction processes are always associated with the deformation, compression, and acceleration of the CME (Winslow et al., 2016, 2021; He et al., 2018). This can also enhance the geoeffectiveness of the CME and its ability to form a shock, especially for a slow CME. If it is ahead of the CME, Heinemann et al. (2019) found that the CME may be deflected through more than 30\({}^{\circ}\) due to the SIR/CIR/HSS-CME interaction. Recently, Lugaz et al. (2022) studied an extended CME simultaneously observed in situ by STEREO-A and Wind. They found that in the part of the CME facing Earth and propagating inside the preceding HSS, a shock and sheath were absent at Wind, whereas a shock structure and a sheath region were found to be associated with the part of the CME observed by STEREO-A that had not interacted with the HSS.

#### 5.2.2 HCS Crossing and Connection to Source Region Location

The HCS separates regions of open magnetic fields with opposite polarities originating (in the simple case of a dipolar solar magnetic field) in opposite solar hemispheres, and maps down to the streamer belt. During solar minimum, CMEs tend to occur in or near the streamer belt and HCS, while near solar maximum, streamers occur all over the Sun and the connection between a CME and the HCS is less obvious (e.g., Smith, 2001). Pre-existing helmet streamers that are disrupted or blown out by CMEs generally reform in a time interval much shorter than the lifetime of the HCS (Zhao & Hoeksema, 1996), while the HCS exists throughout the solar cycle. Hence, the location of the HCS relative to the source region of a CME and the observation target is important. Henning et al. (1985) first noted the "same-opposite side effect": disturbances (CMEs and related shocks) associated with flares located on the same side of the current sheet as Earth were of larger magnitude than those associated with flares located on the opposite side. This effect was later confirmed by several other studies.
For example, based on observations of hundreds of events over five years, Zhao et al. (2007) found that (1) shocks with the associated flares located near the HCS had a lower probability of reaching Earth, (2) the initial speeds of shocks that encountered Earth were noticeably faster when the associated flares were located near the HCS, and (3) shocks associated with flares on the same side of the HCS as Earth were more prone to arrive at Earth than those with their associated flares on the opposite side. The HCS can also serve as a boundary that affects CME expansion and propagation. Several recent in-depth studies of CMEs and HCSs have used multipoint observations. For example, Winslow et al. (2016) attributed a highly turbulent region with distinct properties observed within a FR at STEREO-A (but not at MESSENGER, which was in longitudinal alignment with STEREO-A) to the interaction between the CME and the HCS and the surrounding heliospheric plasma sheet during propagation of the CME. To better understand the physical processes involved in interactions between CMEs and the HCS, more coordinated remote-sensing and in situ observations, as well as multi-scale modeling, are needed. CME deflections in latitude are constrained by the location of the streamer belt or HCS, and the deflection occurs mostly close to the Sun near the streamers (see e.g., TI1 paper by Wang et al., 2022). Based on coordinated remote-sensing and in situ observations, Yurchyshyn (2008) speculated that the axis of an ejecta might be rotated in such a way that it aligns with the local orientation of the HCS. See also more recent studies using observations and (space weather) models (e.g., Isavnin et al., 2013; Wang et al., 2014; Kay et al., 2015; Asvestari et al., 2022). _The degree of influence on the evolution of large-scale CME properties depends on the ambient solar wind conditions. All of the aforementioned evolutionary aspects of CMEs are found to be amplified by interactions with HSSs, CIRs/SIRs, as well as the HCS and/or heliospheric plasma sheet (see more in e.g., Odstrcil & Pizzo, 1999a; Rodriguez et al., 2016; Zhou & Feng, 2017; Liu et al., 2019; Davies et al., 2020; Scolini et al., 2021)._

### CME-CME Interaction

A variety of magnetic structures resulting from CME-CME interactions have been classified based on 1 AU observations. These include "multiple ejecta" (Wang et al., 2002), in which a single dense sheath precedes two (or more) distinct ejecta. In such cases, the ejecta are separated by a short period of large plasma beta, which may indicate magnetic reconnection taking place between the structures. It is relatively easy to distinguish individual ejecta in magnetic field time series, especially if simultaneous plasma data are also available at the target location. "Complex ejecta" (Burlaga et al., 2002; Farrugia & Berdichevsky, 2004) are events where the two (or more) original ejecta can no longer be distinguished based on magnetic field observations. Such structures often exhibit the decreasing speed profiles typical of individual CMEs, but have a long duration compared to average ejecta. The magnetic field profile can range from smoothly-rotating magnetic field components to complex magnetic fields.
In the former case, it is easy to be misled that such structures are the counterparts of individual CMEs, even when plasma data are available; their interpretation requires information on the broader context (e.g., remote-sensing observations, multi-point in situ observations at different heliocentric distances). Progress in understanding the complex CME interaction processes was not really possible until heliospheric imaging became routine with STEREO/HI (see also Section 4). One of the first CMEs observed by STEREO was in fact a series of two interacting CMEs in January 2007 (Odstrcil & Pizzo, 2009). The energy transfer between the two CMEs was investigated by Lugaz et al. (2009), who found clear indications that the leading CME was accelerated due to its interaction with the overtaking, initially faster, CME. As a CME shock interacts with and propagates through a preceding ejecta, it can cause radial compression, amplification of the magnetic field, a change in the CME aspect ratio, and acceleration and heating in the region downstream of the shock within the preceding ejecta (Vandas et al., 1997; Schmidt & Cargill, 2004; Lugaz et al., 2005; Xiong et al., 2006). Observational and numerical studies have also shown that the preceding ejecta might quickly over-expand during this later phase of interaction (Xiong et al., 2006; Gulisano et al., 2010; Lugaz et al., 2012), such that, as the ejecta continues to propagate away from the Sun, the space weather impact may progressively return to pre-interaction levels. Assuming that CMEs are magnetically coherent structures, which is debatable (Owens et al., 2017; Lugaz et al., 2018), elastic or super-elastic collisions may occur (Shen et al., 2012; Temmer et al., 2014; Mishra et al., 2015, 2016, 2017; Lugaz et al., 2017) by converting the magnetic or thermal energy of the CMEs to kinetic energy through some process. In particular, magnetic reconnection plays a crucial role in CME-CME collisions (Lugaz et al., 2005). It may lead, as in the case of the interaction with the ambient magnetic field, to magnetic erosion and flux injection occurring at the CME boundaries (Dasso et al., 2006; Ruffenach et al., 2012; Pal et al., 2022) or in their interiors (Crooker et al., 1998), fundamental topological changes of the magnetic structures (Winslow et al., 2016, 2021a; Scolini et al., 2021, 2022), as well as local magnetic field distortions (Torok et al., 2018). This can alter the magnetic connectivity, topology, and size of CME magnetic structures, and cause the formation of magnetically complex structures leading to strong geomagnetic effects (Gopalswamy et al., 2001; Wang et al., 2003; Gosling et al., 2005). In the most extreme cases, this may result in the full coalescence of the two original structures (Odstrcil, 2003; Schmidt & Cargill, 2004; Chatterjee & Fan, 2013). Mishra & Srivastava (2014) and Maricic et al. (2014) have shown possible signatures of magnetic reconnection in in situ observations at 1 AU as a result of CME-CME interaction. Hence, knowledge of the relative orientation of the FRs in the interacting CMEs is important (e.g., Xiong et al., 2009; Lugaz et al., 2012; Shen et al., 2012, 2017). Magnetic tension associated with interactions between FRs and ambient magnetic fields has been widely discussed (e.g., Kay et al., 2015; Myers et al., 2015; Vrsnak, 2016). It is worth mentioning the work by Myers et al.
(2015), in which they concluded experimentally that the magnetic tension force resulting from the interaction between the background field and the current sheet in the FR would halt the eruption process. A general consequence of FR-FR interaction is a change in the magnetic field inside the FR (Shen et al., 2017; Lugaz et al., 2017), which leads to a change of the magnetic tension force: compression of the FR cross-section changes the toroidal magnetic field component and thus enhances the tension force. The enhanced magnetic tension force then restricts further deformation of the FR (Suess, 1988; Manchester et al., 2004). More detailed 3-D modeling of CMEs and observational constraints from white-light and IPS data would shed more light on this topic (see e.g., TI1 paper by Fallows et al., 2022). The heliospheric distance at which the interaction takes place can vary from the low corona to interplanetary space and determines the degree of impact at a specific target. This has been termed the "helioeffectiveness" (Scolini et al., 2020) and means that the time interval between the CME eruptions and their relative speeds are critical factors in determining the resulting impact of complex CMEs at various heliocentric distances.

### Simulations

It is fair to say that CME-CME interactions are complex, acting on different spatial and temporal scales with respect to, for example, energy transfer, momentum exchange, magnetic reconnection, heating, compression, and over-/under-expansion. Sophisticated numerical modeling will help to improve understanding of the processes involved in CME-CME interactions, and recent efforts have been reported in a number of studies (e.g., Lugaz et al., 2013, 2015; Shen et al., 2016; Zhuang et al., 2019; Scolini et al., 2020). More details of single-CME propagation models, their observational input requirements, and their limitations are given in Section 4. These models may also be used in a simple approach to simulate multiple events by considering the distance at which the CMEs interact and where it is necessary to change the model parameters (e.g., when using DBEM; Zic et al., 2015; Dumbovic et al., 2019).

### Summary

In conclusion, the reliability of CME space weather forecasts is especially compromised at times when other large-scale solar wind structures lie in the path of the CME. In such cases, neither estimates of the speed at 1 AU nor in situ magnetic field data upstream of Earth may be sufficient to accurately estimate the magnetic field strength and orientation at the impact location, and more sophisticated modeling tools capable of describing interactions in a physically-consistent manner, in combination with reliable remote-sensing CME observations, are required. Such studies may be possible with the help of in situ observations from PSP, SolO, and other missions sampling heliospheric plasma at different distances from the Sun.

## 6 Improving Heliospheric Modeling/Forecasts

The previous Sections have reviewed the current state of modeling of heliospheric transients, i.e., CMEs (see Section 4), and identified the issues that impact the accuracy of forecasts of their impacts on Geospace (see Sections 4 and 5). This section identifies paths forward to improving the physical understanding, modeling, and, consequently, forecasting of heliospheric transients.
The section starts with a short overview of the current state of forecasting the key physical parameters of transients, and the performance required by various space weather users (Section 6.1). We then outline the top-level gaps in physical knowledge and data availability (Section 6.2), setting the stage for suggestions for closing these gaps and moving the field forward in Section 6.3.

### The Current State of Modeling and Forecasting of Heliospheric Transient Properties

A concise way to assess the state of space weather forecasting of heliospheric transients is to compare the current and desired performance of predictions of the key physical parameters used in space weather forecasting. Some of these parameters are identified in Table 2. We also use information from a recent NASA-sponsored Gap Analysis (see also Vourlidas, 2021) that examined a wider range of space weather-related phenomena. Table 4 presents the resulting summary of the current and desired state of the forecasting of heliospheric transients, which is the focus of the H1+H2 Cluster. The table lists the key parameters and their current forecasting accuracy. The desired state is based on space weather user requirements (see Sec. 5.1 in the Gap Analysis for details). The last column lists the high-level issues that prevent current forecasts from meeting users' expectations. These issues are derived from the literature and from discussions in the previous sections and within the H1+H2 Cluster groups.

### Knowledge and Capability Gaps

The issues listed in the last column of Table 4 can be broadly classified into two categories: issues arising from gaps in observational coverage, including latency and spatial coverage, and issues arising from limited knowledge of the physics involved in the formation and evolution of transients in the inner heliosphere.

#### 6.2.1 Observational Gaps

**Sparse coverage of the Sun-Earth space.** At present, solar activity is remotely monitored from just two viewpoints: from Earth/Lagrange L1 and from STEREO-A (the time-varying spacecraft positions can be viewed at [https://stereo-ssc.nascom.nasa.gov/where.shtml](https://stereo-ssc.nascom.nasa.gov/where.shtml)). In the next two years (assuming that STEREO-A continues to operate), the two viewpoints will effectively be reduced to one, as STEREO-A orbits at small angular separations from Earth. The incomplete coverage of the photospheric magnetic field, the coronal layers where activity originates, and the Sun-Earth line affects all aspects of forecasting (e.g., CME source properties, line-of-sight confusion, interplanetary propagation). The issue is discussed in more detail in Vourlidas et al. (2019) and the NASA Gap Analysis. On the in situ side, consistent measurements upstream of Earth are only available from L1, providing 15- to 60-minute advance warning of the arrival of CMEs and interplanetary shocks at Earth. Numerous events have been measured by two or more spacecraft in radial alignment. However, these were serendipitous cases, mostly captured by spacecraft orbiting the inner planets, with only magnetic field measurements available. As a result, changes in CME properties such as size, expansion, and velocity were difficult to interpret, and the study of shock speed and strength was impossible.
Some CMEs exhibited drastic changes in their properties, associated with interactions with ambient structures, while rapid geometric expansion may lead to distortion of CMEs by the ambient solar wind and a resulting lack of coherence in CME structure at different heliospheric locations (Owens et al., 2017). It is hardly surprising, therefore, that even two or three in situ measurements are insufficient to constrain the properties of transients (Lugaz et al., 2018). The event-to-event variability means that any highly-accurate forecast, especially of the magnetic field strength and direction, will need to rely on plasma and field measurements made within 0.02 to 0.25 AU upstream of Earth (between L1 and Venus), providing a few hours up to a day of advance warning.

Low sensitivity of heliospheric imaging. The STEREO HI achieved breakthrough observations of CMEs and SIRs to 1 AU. Yet, the faintness of the structures, the long lines of sight, and the long exposures required to detect those emissions reduce the structure contrast, particularly of the transient fronts. As a result, ToA predictions have improved only modestly (Wold et al., 2018; Vourlidas et al., 2019).

High latency of near-Sun observations. SDO/AIA provides real-time imaging of coronal activity but lacks the field-of-view (and viewpoint) coverage to enable robust detection of CME eruptions and their kinematics for model initialization. This information comes from coronagraphic measurements beyond 2 Rs, at least. However, real-time coronagraphic imaging is not always available from the LASCO or STEREO coronagraphs, even though the latter provide a continuous stream of low-resolution EUV and white-light images (known as the 'space weather beacon'). Lack of ground-based downlink availability is usually the reason.

Inability to measure the coronal magnetic field. Routine spatially-resolved coronal magnetic field measurements across the solar disk/limb are currently beyond our reach due to the high demands on instrument throughput (Casini et al., 2017). Yet, it is precisely the evolution of this field that, through the accumulation of energy and helicity and their subsequent release, powers flares and CMEs. Our inability to measure the spatio-temporal evolution of key parameters, such as free energy, helicity or currents, in the corona is the biggest impediment in predicting eruptions (see Patsourakos et al., 2020, for details and path-forward suggestions).

Small event samples. Comprehensive 'sun-to-mud' analyses of solar transients became available only in the last cycle thanks to the triple-viewpoint capability of STEREO + SOHO/Earth observations. The larger number of datasets, however, requires more complex analyses, which, in turn, results in small-sample studies. Such studies cannot easily avoid selection biases and may have inconsistent criteria for, say, ToA (see Vourlidas et al. (2019) for discussion).

#### 6.2.2 Knowledge Gaps

Incomplete description of the state of the ambient inner heliosphere. The structure of the ambient heliosphere (background solar wind) plays a critical role in the modeling of transient evolution in the inner heliosphere (discussed in Sections 1-5). CME interaction with HSSs or with other CMEs en route to Earth can influence the extent and kinematics of the event significantly (Section 5).
This is primarily a concern for medium-speed events (\(\sim 600-900\) km/s within 20 Rs), as their speeds are close to the typical ambient solar wind speeds in the inner heliosphere and they appear to evolve kinematically well beyond the typical coronagraph fields-of-view (e.g., Colaninno et al., 2013; Sachdeva et al., 2017). Yet, the current heliospheric modeling performance is insufficient, primarily due to a single reason, its 'Achilles heel' (Vourlidas et al., 2019): incomplete boundary conditions. It is a two-fold weakness: (1) the background photospheric field is measured only across the Earth-facing part of the disk (corresponding to about 1/3 of the total surface), requiring strong assumptions about the far-side and polar field distributions (e.g., Linker et al., 2017; Temmer, 2021), and (2) the sub-Alfvenic corona is poorly understood due to the lack of consistent measurements of its state (temperature, density, composition, kinematic profiles, etc.). Expanding the coverage of photospheric magnetic field measurements, from, say, the L4/L5 Lagrangian points and the poles, and bringing in long-term off-limb spectroscopic coronal measurements, will go a long way towards closing this knowledge gap.

Poor knowledge of internal CME structure. We presently lack knowledge about the initial configuration of CMEs in the corona (especially the amount of twist) and how to incorporate more realistic CME initiation models into space weather models. Most current space weather models assume either a very highly twisted FR initiated in the upper corona (EUHFORIA), or a non-magnetized eruption, also in the high corona (ENLIL), or a highly-twisted FR initiated in the low corona (SWMF, SUSANOO). While we have some insight from more complex simulations and non-linear force-free reconstructions, these are not yet adapted for real-time space weather forecasting. CME-CME interaction and energetic particles associated with a series of CMEs are especially problematic, and these cases are common during solar maximum. In addition to the initial conditions, we still do not understand well how the CME internal magnetic field evolves as the CME propagates and interacts with the solar wind and other transients. This knowledge gap arises from (1) lack of data about the CMEs, as described in Section 6.2.1, (2) lack of detailed simulations of the background solar wind with small and intermediate scale features (turbulence, more complex density, magnetic field and velocity profiles), (3) lack of numerical studies focusing on complex and realistic CME topologies with propagation to 1 AU (an exception is the work of Torok et al., 2018), and (4) overly simplified models to reconstruct single-spacecraft measurements at 1 AU. There has been some progress on this last point in the past few years (e.g., Nieves-Chinchilla et al., 2020), but we still rely primarily on fitting models of a force-free FR with a circular cross-section for space weather applications.

Incomplete knowledge of transient mesoscales. While imaging has probed the large (tens of degrees) scales and in situ observations have measured the small (sub-degree) scales of transients, the results remain far from satisfactory for space weather users. Key constraints on the structure of transients seem to reside in mesoscales (roughly \(\sim 1^{\circ}\), Lugaz et al. (2018)), which are almost totally unexplored due to the lack of closely-spaced in situ measurements and/or high spatial resolution imaging.
For example, the uncertainties in the CME internal magnetic field can be as high as 60% at 1 AU, when considering the limits in the drop-off rate of the magnetic field with distance (between \(r^{-1}\) and \(r^{-2.5}\)).

Inefficient use of available assets and capabilities. Although not a knowledge gap, the sub-optimal use of available data is certainly hindering progress in space weather forecasting. We tend to under-utilize individual data streams and to under-exploit their synergies. For example, (i) reconstructions of the solar magnetic field beyond the potential field are typically not integrated into space weather models, (ii) remote-sensing observations are typically used to constrain only the CME direction and speed but not its 3-D shape, and (iii) three-dimensional plasma flows, composition, charge states and pitch-angle distributions of suprathermal electrons are often not integrated consistently in the discussion of CMEs (for example, to check whether the measured flow speed is consistent with the assumed CME shape).

### Moving the Field Forward

Advancing the capability of heliospheric modeling and forecasting requires the closure of the knowledge and capability gaps identified in Section 6.2. Here, we outline a strategy for making effective progress on this issue that identifies challenges that can be tackled on short-term, near-term, and long-term horizons.

#### 6.3.1 Short Term (Leverage Existing Knowledge and Assets)

Think 'Outside-the-Box'. We offer two suggestions:

* Observing System Simulation Experiments (OSSEs) have long been used in the terrestrial weather arena to inform measurement strategies, to design space-based architectures to acquire those measurements, and to fine-tune DA schemes to ingest the resulting data products (Zeng et al., 2020). Since we face very similar challenges, investment in leveraging terrestrial weather experience and in developing OSSEs to address the issues in Table 4 seems the most beneficial path forward.
* Developing the capability to obtain a missing measurement may not always be the most practical solution for improving space-weather forecasting. What if modeling could provide a sufficient substitute for a missing/incomplete measurement, or perhaps an alternative observation/measurement that is available by some different means? For example, could models based on photospheric magnetic-field measurements replace direct solar wind or EUV (or other wavelength) irradiance measurements for some niche space-weather applications or users? Also, could more work be done in using observations of IPS and improving ground-based networks as an alternative way of driving ENLIL, as has already been explored by Gonzi et al. (2021); Jackson et al. (2022)? Such models, methodologies, and alternative observations already exist in early forms and/or can be solicited with targeted funding opportunities bridging into heliophysics expertise from other communities; a prime example here, for the modeling side, would be from the fields of ML or data analytics.

Standardize data quality and analysis approaches. The cross-calibration of magnetograph data is a well-known problem that impacts the reliability and validation of MHD models (Riley et al., 2014; Wang et al., 2022).
Various image processing and measurement techniques are applied to imaging data for kinematic or dynamic measurements of CMEs, using samples that are not always vetted for selection bias or data quality, resulting in statistics for ToA (or other space weather-relevant quantities) that cannot be properly assessed. The development of standard data products for space weather analysis (similar to, say, the creation of ML/AI-ready data sets) would greatly improve the assessment of model and forecasting performance and, perhaps more importantly, enhance peer-review validations and data distribution across the community.

Standardize Performance Metrics. It is currently challenging to assess and compare the performance of heliospheric modeling frameworks (e.g., Verbeke et al., 2019; see also the TI1 paper by Reiss et al. (2022)). Developing a set of common performance metrics with wide community acceptance would provide better insight into the physical realism of different heliospheric models, as well as their performance for operational forecasting.

Improve DA Workflows. The objective of DA is to provide an optimal estimate of the state of a dynamical system by combining knowledge of the system's state derived from both a physical model and observations. In practice, DA incorporates a wide range of mathematical techniques whose use depends upon the specifics of a model (e.g., linear and non-linear) and observations (e.g., in situ or remote sensing) of a particular system. DA techniques have revolutionized the performance of terrestrial weather and climate modeling, and it is reasonable to assume DA will return similar benefits to heliospheric modeling. Currently, heliospheric modeling constrained by DA remains primarily a research tool, although there are examples where these techniques are being configured for operational purposes. For example, ADAPT assimilates magnetogram observations of the photosphere into a flux-transport model, returning improved estimates of the state of the photosphere (Arge et al., 2010). Recent works have pursued assimilating both in situ and remote sensing observations of the solar wind and CMEs into heliospheric models, incorporating a range of complexities of both the DA scheme and heliospheric model. For example, Lang et al. (2017) demonstrated a proof-of-concept sequential DA scheme for the assimilation of in situ plasma observations in the ENLIL 3-D MHD model, and later implemented a more advanced variational DA scheme into the reduced-physics HUX solar wind model. Barnard et al. (2020) presented a method for constraining an ensemble of solar wind simulations with HI observations of CMEs, demonstrating that these could lead to improved hindcasts of CME arrival times and providing a first step towards the formal DA of HI data in solar wind models. Similarly, Iwai et al. (2021) successfully constrained an ensemble of SUSANOO-CME 3-D MHD solar wind simulations with IPS observations, resulting in improved CME ToA forecasts. One immediate issue is the currently disconnected nature of these efforts. In terrestrial meteorology, forecasts typically rely on coupled DA schemes, which facilitate the self-consistent assimilation of a range of different observables across coupled models (Lea et al., 2015). Heliospheric simulation and prediction could be improved by the development of a coupled DA system that can simultaneously assimilate a range of in situ and remote sensing data.
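To make the flavor of such a sequential DA update concrete, the following is a deliberately minimal, illustrative Python sketch of a single Kalman-style assimilation step applied to a scalar solar-wind speed state. It is not the scheme of any of the works cited above, which operate on full 3-D MHD or reduced-physics model states; the function, numbers, and scalar state are hypothetical.

```python
# Illustrative only: a toy sequential data-assimilation (Kalman-style) update
# for a scalar solar-wind speed state. This is NOT any operational scheme;
# the state, uncertainties, and numbers below are hypothetical.
import numpy as np

def da_update(x_model, var_model, y_obs, var_obs):
    """Blend a model forecast with an observation, weighted by uncertainty."""
    gain = var_model / (var_model + var_obs)           # Kalman gain
    x_analysis = x_model + gain * (y_obs - x_model)    # corrected state
    var_analysis = (1.0 - gain) * var_model            # reduced uncertainty
    return x_analysis, var_analysis

# Example: the model forecasts 450 km/s (std 80); an in situ monitor
# observes 520 km/s (std 30). The analysis is pulled towards the data.
x, v = da_update(x_model=450.0, var_model=80.0**2, y_obs=520.0, var_obs=30.0**2)
print(f"analysis speed: {x:.1f} km/s, std: {np.sqrt(v):.1f} km/s")
```

Operational schemes generalize this update to high-dimensional states, flow-dependent error covariances, and remote-sensing observation operators, which is precisely where the coupled-DA development discussed above is needed.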
The existing archives of magnetogram, coronagraph, HI, IPS, and in situ plasma data provide an excellent test bed for establishing the potential of such a coupled DA modeling scheme for use with future assets such as ESA's _Vigil_ and NASA's PUNCH missions.

Introduce new/enhance potential data streams. Observations of IPS can provide important data for improving the forecasting output of MHD simulations (e.g., Iwai et al., 2019; Jackson et al., 2020; Gonzi et al., 2021, and references therein). They are used to improve background solar wind distributions (e.g., Jackson et al., 2020; see Sec. 2 for details). They can also follow CMEs propagating from 0.1 AU to 1 AU, a range dictated by the metric to deci-metric wavelength range of current IPS stations, and offer the potential for bringing out confirmed CME features and/or indications of the orientation (but not the sign) of CME magnetic fields (e.g., Bisi et al., 2010; Fallows et al., 2022, and references therein). Thus, IPS data can be used to validate and/or drive MHD simulations across the inner heliosphere. For example, Iwai et al. (2022) observed a CME using both the LOFAR and ISEE (Nagoya University, Japan) arrays and included those data in the SUSANOO-CME MHD simulation, which successfully improved the reconstruction of the CME. Observations of IPS, being ground-based in nature, have the advantage of easily obtainable real-time data. On the other hand, these observations are available only during daytime (and just before sunrise/after sunset) at each observing station. This limitation can be overcome by coordinated observations across multiple IPS stations in different time zones, known as the Worldwide IPS Stations Network (WIPSS) (Bisi et al., 2016). So far, only ISEE in Japan provides real-time IPS data, but several other stations (e.g., LOFAR and MEXART, the Mexican Array Radio Telescope) have the potential to do so. Finally, we note that current IPS-based forecasts have only a 0.5-1 day lead-time. Another important new addition to radio-based space weather capabilities is LOFAR4SW (e.g., Carley et al., 2020), with stations spread around Europe and observing capabilities across the Solar-Heliosphere-Geospace regimes. Each station can form a two-dimensional steerable beam to track a single radio source. With these new capabilities, observations of IPS can help improve CME modeling, heliospheric reconstructions, and their accuracies (see TI1 papers by Fallows et al., 2022; Iwai et al., 2022; Jackson et al., 2022; Tiburzi et al., 2022; Shaifullah et al., 2022). A key recommendation here would be for the proposed LOFAR4SW upgrades to be undertaken, thus making the LOFAR system a comprehensive space-weather observatory on the ground, and, alongside the LOFAR4SW implementation, for other WIPSS Network IPS observatories to make their data available in real time (which would include the full real-time implementation of the ISEE IPS data, which are, as noted in earlier sections, only available in one-day intervals lagged by almost a day). This recommendation also links into the \(<\)10-year horizon of the next subsection.

#### 6.3.2 Near-term (Within the Next 10 Years)

Maintain existing off-SEL coverage. Transient measurements from off-SEL viewpoints are now a vital input for many, if not most, models used in research or operational space weather forecasting (e.g., de Koning et al., 2009; Rodriguez et al., 2020; Bauer et al., 2021).
STEREO-A is the sole provider of off-SEL imaging into the near future, followed (potentially) by the ESA L5 Lagrange mission in the 2028-30 time frame. However, it is not certain that STEREO-A, launched in 2006, will continue to operate until then. It is, therefore, urgent to consider smaller missions with shorter development schedules as 'gapfillers' for off-SEL coverage.

Improve coverage of the inner heliosphere. As discussed in the previous section, improving space weather models requires data from more places in the heliosphere; namely, distributed in situ measurements, primarily between Venus and L1, in tandem with multi-viewpoint coronal/SEL imaging from the L1, L4, and L5 Lagrange points and wider synchronous coverage of the photospheric magnetic fields. Such measurements are achievable with current technologies, and specific implementations have been discussed in the literature, such as the Space Weather Diamond (Cyr et al., 2000), the L5 pathfinder (Vourlidas, 2015), or the Heliospheric Research Grid (Vourlidas et al., 2018). The fact that several other concepts are currently under design for NASA's Heliophysics Concept Mission Studies and the Living With a Star (LWS) Architecture Study is encouraging.

Improve model sophistication. This is another area where heliospheric modeling can further benefit from the terrestrial weather experience. Terrestrial weather forecasting uses data-assimilative ensemble modeling extensively. Ensemble modeling approaches are being developed (e.g., Mays et al., 2015; Amerstorfer et al., 2018; Barnard et al., 2020; Weiss et al., 2021), but the DA aspect is still in its infancy and needs to be developed to extract the maximum benefit from the widely distributed measurements discussed above. The proposed increase in spatial coverage will produce a corresponding increase in the available data for assimilation, requiring integrated data mining and ML/AI workflows.

#### 6.3.3 Long-term (10+ Years)

Close the 'coverage' gap. Ultimately we need complete, so-called '4\(\pi\)' coverage of the solar surface and atmosphere to achieve robust boundary conditions for heliospheric and space weather models (Kleinmann, 2012; Gibson et al., 2018; Vourlidas et al., 2020). A system of 3-4 spacecraft in both near-polar and ecliptic orbits can provide this coverage within realistic cost, schedule, and technology constraints while establishing the cornerstone of a long-term systems approach to Solar-Heliosphere-Geospace observations. To enable the human exploration of Mars, the addition of a Sun-Mars L1 monitor to the Lagrangian and Sun-Earth stations will ensure actionable forecasting for both Earth-Mars transits and Martian outposts.

Deploy Next-Generation operational models. Any long-term modeling development strategy should aspire to the smooth transition of research-grade models (developed during the 'near-term' steps above) into the operational theater, thus providing the space weather community with data-assimilative, data-driven models that both meet the performance requirements of space weather users and continue to push the boundaries of our physical knowledge of the inner heliosphere.

### Summary

We present a set of ideas for improving the accuracy of modeling, and subsequently of forecasting, space weather-relevant parameters of solar transients (CMEs and SIRs). The ideas are based on the current research status as discussed in Sections 1-5 and are focused specifically on the issues surrounding the modeling of transient propagation in the inner heliosphere.
Figure 15 summarizes the key findings for moving the field forward.

Figure 15: A _multifaceted strategy_ is required to significantly increase the accuracy of CME and SIR propagation models within the next ten years.

## 7 Closing Thoughts

In the years since the last COSPAR Roadmap (Schrijver et al., 2015), novel and increasingly sophisticated methodologies have been developed. We attempted to review the status of the field regarding a specific, but highly important, component of the Space Weather forecasting chain: CME propagation. Sections 2-5 expanded on the various aspects of CME propagation and relevant background solar wind structures (SIR/CIR). In Section 6, we offered ideas on moving forward with our current gaps in observing, modeling, and physical understanding. We close this effort with an outline of the exciting near-future prospects in observations and modeling and a final summary of our top-level findings.

### Novel Observing Capabilities

The recent launches of PSP and SolO (in 2018 and 2020, respectively) constitute a major leap forward for the solar and heliospheric physics communities. The two missions investigate the solar wind in the corona and inner heliosphere from heliocentric distances far closer than the 0.3 AU achieved by the Helios mission. SolO, in particular, will obtain off-ecliptic imaging of the near-polar regions for the first time and will bring new insight about the magnetic field characteristics at high latitudes (see also Section 2.3 and Section 6.2.2). The upcoming ESA L5 (_Vigil_; estimated launch date 2029) operational mission will provide valuable data for further improving operational forecasting. In addition to the Sun-Earth line coverage, Vigil will obtain photospheric magnetic field observations over the East solar limb, thus providing, for the first time, accurate information on the magnetic conditions of the regions rotating towards Earth. The _Vigil_ mission will nicely complement the NOAA SWFO-L1 (Space Weather Follow On-Lagrange 1) and GOES-U (Geostationary Operational Environmental Satellite-U) observatories (early 2025 estimated launch) that will provide operational coronagraphic imaging from the Sun-Earth line.

### Novel Computing Capabilities

Novel methodologies, such as ML and AI, have increased in sophistication and gained considerable momentum in recent years, mostly thanks to the impressive improvements in computational power and investments from the commercial sectors. ML methods have shown considerable promise in addressing CME heliospheric propagation (see Section 4). ML/AI is an accessible and powerful methodology to investigate large amounts of data on a statistical basis. We briefly outline some perspectives of ML/AI for space weather forecasting.

* Current methods make limited use of the high resolution/cadence solar data (when available), using extracted parameters or single images as input to the models (Camporeale et al., 2017). NNs are able to extract complex relations from multi-dimensional data (LeCun et al., 2015). The increasing computational capabilities will enable the use of larger spatial-, spectral-, and temporal-resolution data, which should lead to novel prediction methods.
* The inner workings of NNs are opaque, preventing a clear interpretation of the results (the 'black-box' problem).
Making ML/AI interpretable is a major challenge, but promising approaches, such as the Grad-CAM algorithm and visual attention mechanisms (Xu et al., 2015), may address it and hopefully lead to physical insights.
* As discussed in Sections 1 and 4, forecasting models can be computationally expensive. DL enables the acceleration of existing methods by training a NN with the results of the simulation. Applications to fluid simulations have already demonstrated that comparable results can be achieved in a fraction of the time (Tompson et al., 2017; Sanchez-Gonzalez et al., 2020).
* Extending this concept, NNs can be used to directly learn from physical equations. Physics-informed NNs integrate the information from a physical model (e.g., differential equations) and measured data (Karniadakis et al., 2021). The ability to handle noisy data and imperfect assumptions makes this method promising for future simulation methods that combine multi-instrument data.

### Path Forward

Finally, we close this section with a top-level list of recommended actions for improving the modeling of the propagation of CMEs in the heliosphere.

1. Improve background solar wind modeling. The outputs from the current background solar wind models have large uncertainties. It is not clear which model performs better under what conditions (see #3 below). Permanent model evaluation will make it possible to react to the varying conditions in interplanetary space on different temporal scales (Cluster H2). In general, we recommend driving a variety of models to obtain uncertainty estimates since we do not yet know "the" most reliable one (this holds for CME propagation as well as background solar wind models).
2. Invest in sophisticated ensemble modeling using different models; see the discussion in #1 above.
3. Standardize data analysis techniques and metrics. We need a way to intercompare model results and identify whether the problems arise from the inputs or the computations. Developing/adopting standards for data, analysis, and performance metrics will greatly facilitate this effort.
4. Facilitate data preparation and sharing to boost collaboration (e.g., see the concept by Ringuette et al., 2022).
5. Establish regular off-Sun-Earth-line observations (e.g., from L4/L5) with complementary instrumentation (following the STEREO paradigm), and a future mission for 4\(\pi\) coverage of the magnetic field to overcome the \(B_{z}\) issue.
6. Exploit new data streams (e.g., IPS as well as other space-weather observations across the S, H, and G domains, e.g., by the implementation of the LOFAR4SW upgrades) and new forecast techniques (ML, DA, NN).
7. Explore the eruption prediction capabilities from active regions in order to increase the lead time of space weather forecasts. Cluster S3 teams are investigating the maximum likelihood of a CME occurring together with its most likely speed and acceleration. Conceivably, if an estimate of the mass is available, the kinetic energy of a CME may also be predicted. The pre-eruption magnetic helicity may also be estimated (see TI2 papers by Georgoulis et al. (2023) and Linton et al. (2023)). These predicted values could be used for Sun-Earth modeling well before the event actually occurs.
8. Improve communications with peer-users (Cluster G community) and end-users; emphasis for end-users must be placed on explaining the complexity of the system versus the terrestrial weather system (e.g., see Marshall et al., 2022) and on setting realistic expectations for the performance of Space Weather forecasting methods given, for example, the issues discussed in this paper.

## 8 Acknowledgments

We thank Janet Luhmann for critically reading the paper and giving valuable comments that helped to improve the manuscript. C.S. acknowledges the NASA Living With a Star Jack Eddy Postdoctoral Fellowship Program, administered by UCAR's Cooperative Programs for the Advancement of Earth System Science (CPAESS) under award No. NNX16AK22G, and NASA grants 80NSSC19K0914, 80NSSC20K0197, and 80NSSC20K0700. I.G.R. acknowledges support from the ACE and STEREO missions and NASA program NNH17ZDA001N-LWS. S.G.H. acknowledges funding from the Austrian Science Fund (FWF) Erwin-Schrodinger fellowship J-4560. E. Paouris acknowledges support from the NASA LWS Grant 80NSSC19K0069. A.V. was supported by 80NSSC19K1261 and 80NSSC19K0069. M.M.B. acknowledges support from UKRI-STFC in-house research funding and Space Weather Core funding used in part for the organisation and writing of inputs to the paper. T.A. thanks the Austrian Science Fund (FWF): P-36093. D.B. acknowledges support from the Horizon 2020 Framework Programme H2020-INFRAIA-2020-1 Project 101007599 (PITHIA-NRF). L.K.J. thanks the support of the STEREO mission and NASA's LWS and Heliophysics Support Research (HSR) programs. J.A.L. and E. Palmerio acknowledge support from NASA's LWS-SC program (grant no. 80NSSC22K0893) and NSF's PREEVENTS program (grant no. ICER-1854790). D.S. was supported by NASA LWS 80NSSC17K0718. M.O. is part funded by Science and Technology Facilities Council (STFC) grant numbers ST/V000497/1
2306.12547
DGC-GNN: Leveraging Geometry and Color Cues for Visual Descriptor-Free 2D-3D Matching
Matching 2D keypoints in an image to a sparse 3D point cloud of the scene without requiring visual descriptors has garnered increased interest due to its low memory requirements, inherent privacy preservation, and reduced need for expensive 3D model maintenance compared to visual descriptor-based methods. However, existing algorithms often compromise on performance, resulting in a significant deterioration compared to their descriptor-based counterparts. In this paper, we introduce DGC-GNN, a novel algorithm that employs a global-to-local Graph Neural Network (GNN) that progressively exploits geometric and color cues to represent keypoints, thereby improving matching accuracy. Our procedure encodes both Euclidean and angular relations at a coarse level, forming the geometric embedding to guide the point matching. We evaluate DGC-GNN on both indoor and outdoor datasets, demonstrating that it not only doubles the accuracy of the state-of-the-art visual descriptor-free algorithm but also substantially narrows the performance gap between descriptor-based and descriptor-free methods.
Shuzhe Wang, Juho Kannala, Daniel Barath
2023-06-21T20:21:15Z
http://arxiv.org/abs/2306.12547v2
# DGC-GNN: Descriptor-free Geometric-Color Graph Neural Network for 2D-3D Matching

###### Abstract

Direct matching of 2D keypoints in an input image to a 3D point cloud of the scene without requiring visual descriptors has garnered increased interest due to its lower memory requirements, inherent privacy preservation, and reduced need for expensive 3D model maintenance compared to visual descriptor-based methods. However, existing algorithms often compromise on performance, resulting in a significant deterioration compared to their descriptor-based counterparts. In this paper, we introduce DGC-GNN, a novel algorithm that employs a global-to-local Graph Neural Network (GNN) that progressively exploits geometric and color cues to represent keypoints, thereby improving matching robustness. Our global-to-local procedure encodes both Euclidean and angular relations at a coarse level, forming the geometric embedding to guide the local point matching. We evaluate DGC-GNN on both indoor and outdoor datasets, demonstrating that it not only doubles the accuracy of the state-of-the-art descriptor-free algorithm but also substantially narrows the performance gap between descriptor-based and descriptor-free methods. The code and trained models will be made publicly available.

## 1 Introduction

Establishing 2D-3D matches plays a crucial role in various computer vision applications, including visual localization [19; 37; 39; 34; 50; 35], 3D reconstruction [45; 7; 40; 24], and Simultaneous Localization and Mapping (SLAM) [14; 29; 30]. Traditional methods for establishing point-to-point matches involve extracting keypoints and descriptors from a query image, then matching the 2D and 3D descriptors using exhaustive search. To circumvent the computationally expensive matching process, some approaches [19; 34] narrow the search space by first employing image retrieval methods [32; 1] to identify the most similar images in the database, and then performing descriptor-based image matching [26; 10; 13; 35; 42] between the query and retrieved images. The 2D-3D correspondences are subsequently established by connecting the 2D-2D image matches with the prebuilt 2D-3D correspondences in the database. Another approach [36] proposes an efficient vocabulary-based method to directly build 2D-to-3D matches by searching through all point descriptors. Sattler _et al._ [37; 38] further explore the combination of both 2D-3D and 3D-2D search as an active correspondence search step for a faster and more efficient matching process. While descriptor-based algorithms achieve state-of-the-art accuracy, they store and maintain high-dimensional visual descriptors for each point in potentially large 3D point clouds, significantly increasing both memory footprint and maintenance costs. The stored model often requires orders of magnitude more storage than the point cloud and images alone [54]. These methods are susceptible to privacy attacks [12; 11; 6] and necessitate computationally expensive model maintenance and descriptor update procedures [54] when incorporating new descriptors or points into the model. Several approaches have been proposed to address these limitations. Yang _et al._ [51] employ a learned point selection module to sample a subset of the point cloud for scene compression. Other methods [22; 2] directly learn a function that maps 2D pixels to 3D coordinates without explicitly storing the 3D scene.
Additionally, [31] introduces an adversarial learning framework to develop content-concealing visual descriptors that prevent privacy leakage. Recently, researchers [4; 25] have begun exploring deep learning techniques for cross-domain direct 2D-3D matching and pose estimation without visual descriptors, showcasing the potential of descriptor-free matching through differentiable geometric optimization. The recently proposed GoMatch [54] represents significant progress in descriptor-free 2D-3D matching, achieving reasonable matching performance on a variety of real-world datasets [23; 43; 21]. GoMatch first identifies keypoints in the query image, which, along with the 3D points from the model, are converted to bearing vectors in the camera coordinate system. The algorithm employs the attention mechanism [35; 49] to effectively establish reliable 2D-3D correspondences. While GoMatch attains reasonable accuracy, its performance still significantly lags behind its descriptor-based counterparts [35; 34; 38]. Additionally, our experiments reveal that its reliance on geometric cues from the points and their local neighbors renders GoMatch incapable of distinguishing geometrically indistinct structures. These observations lead us to two critical questions: (1) Is geometry the only information we can utilize? (2) How can we leverage the geometric information derived from the points for matching? In practice, humans identify correspondences between objects by considering global structures and local geometric cues. For example, when matching an image to a point cloud as in Fig. 1, we first locate the building based on its unique structure and then identify the local structure of the roof for matching. Besides geometric cues, the visual context, such as the color information at each point, also provides constraints for 2D-3D matching. Importantly, this color information still preserves privacy, as the RGB data from sparse keypoints is insufficient to reconstruct the scene. Building upon these observations and the groundwork set by GoMatch, we propose a novel graph-based pipeline, named **DGC-GNN**, which leverages geometric and color cues in a global-to-local manner for descriptor-free 2D-3D matching. Our pipeline encodes both position and RGB information for each point and extracts a global _distance-angular_ embedding to guide local point matching. Taking inspiration from [42], we employ a cluster-based transformer to constrain information flow within local clusters. We observe, on real-world datasets, that the proposed DGC-GNN leads to substantial improvements in the number of correct matches and the accuracy of pose estimation. Notably, it _doubles the accuracy_ of GoMatch, thereby reducing the gap between descriptor-based and descriptor-free methods. In summary, our paper makes the following contributions:

* We introduce a descriptor-free global-to-local GNN for direct 2D-3D point matching. The network leverages multiple cues and incorporates a progressive clustering module to represent the keypoints. This pipeline enhances the robustness and accuracy of 2D-3D matching while requiring low memory, being privacy-preserving, and being free from 3D model maintenance.

Figure 1: **2D-3D matching** (shown by green lines) with the proposed DGC-GNN and GoMatch [54]. In this example, DGC-GNN obtains 78 correct matches with 0.02 meters camera translation and 0.24\({}^{\circ}\) rotation errors, while GoMatch finds only 17 inliers with a pose error of 0.37 meters and 4.37\({}^{\circ}\).
* We demonstrate that color information is crucial for 2D-3D matching. By incorporating RGB encoding into our network, we observe significant improvements in performance.
* Extensive experiments on multiple real-world datasets show that the proposed network outperforms previous methods by a large margin on both matching and visual localization tasks.

## 2 Descriptor-Free 2D-3D Matching

### Problem Formulation and Notation

Consider keypoints \(\mathbf{P}=\{\mathbf{p}_{n}\in\mathbb{R}^{2}\mid n=1,...,N\}\) from a query image \(I\) and a database 3D point cloud \(\mathbf{Q}=\{\mathbf{q}_{m}\in\mathbb{R}^{3}\mid m=1,...,M\}\), where, optionally, each 3D point is associated with a visual descriptor \(\mathbf{d}\in\mathbb{R}^{D}\). The task of 2D-3D point matching is to find a set \(\mathcal{M}_{\mathbf{p},\mathbf{q}}\) of corresponding keypoints such that \[\mathcal{M}_{\mathbf{p},\mathbf{q}}=\{(n,m)\mid||\pi(\mathbf{q}_{m},\mathbf{R},\mathbf{t},\mathbf{K})-\mathbf{p}_{n}||_{2}\leq\epsilon\}, \tag{1}\] where \(\pi(\cdot)\) is a mapping that projects a 3D point \(\mathbf{q}_{m}\) from world coordinates to the image plane, represented by a camera rotation \(\mathbf{R}\in\mathbb{R}^{3\times 3}\), translation \(\mathbf{t}\in\mathbb{R}^{3}\), and intrinsic parameter matrix \(\mathbf{K}\in\mathbb{R}^{3\times 3}\). Parameter \(\epsilon\in\mathbb{R}\) is the threshold specified in pixels. Additionally, we denote the color of point \(\mathbf{p}_{n}\) as \(\mathbf{c}_{n}=[r,g,b]^{\mathsf{T}}\in[0,1]^{3}\).

**Bearing Vector.** Similar to [54], we adopt bearing vectors as the keypoint representation for both the 2D and 3D points to alleviate their cross-domain nature and represent them in the same space. The bearing vector is the direction from the camera center to a 3D point in the camera coordinate system. Given an image, a 2D pixel \(\mathbf{p}_{n}\) is uplifted to a bearing vector as \([\mathbf{b}_{\mathbf{p},n},1]^{\mathsf{T}}=\mathbf{K}^{-1}[\mathbf{p}_{n},1]^{\mathsf{T}}\), \(\mathbf{b}_{\mathbf{p},n}\in\mathbb{R}^{2}\), where \(\mathbf{K}\) is the intrinsic camera matrix. Given a 3D point \(\mathbf{q}_{m}\), the corresponding bearing vector is \([\mathbf{b}_{\mathbf{q},m},1]^{\mathsf{T}}=\frac{\mathbf{R}\mathbf{q}_{m}+\mathbf{t}}{(\mathbf{R}\mathbf{q}_{m}+\mathbf{t})_{z}}\), where \(\mathbf{R}\) is the camera rotation, \(\mathbf{t}\) is its translation in the world coordinate system, and subscript \(z\) denotes the third component of the 3D vector.

### Network Architecture

The proposed DGC-GNN applies a hierarchical mechanism to leverage both color and geometric cues in a global-to-local fashion. The overall pipeline is illustrated in Fig. 2.

Figure 2: **Pipeline overview.** For keypoints from the 2D image and 3D points from the point cloud, the proposed DGC-GNN (1) considers the bearing vectors and the color at each bearing vector as input. (2) It extracts the point-wise position and color features with two separate encoders and mixes the features as \(\mathbf{f}_{\mathbf{p}}\) and \(\mathbf{f}_{\mathbf{q}}\). (3) The bearing vectors are clustered into \(K\) groups, and geometric graphs are built upon the clusters to extract the global-level geometric embeddings \(\hat{\mathbf{f}}_{\mathbf{p}}^{gg}\) and \(\hat{\mathbf{f}}_{\mathbf{q}}^{gg}\). (4) We then concatenate \(\hat{\mathbf{f}}_{\mathbf{p}}^{gg}\) with \(\mathbf{f}_{\mathbf{p}}\) and \(\hat{\mathbf{f}}_{\mathbf{q}}^{gg}\) with \(\mathbf{f}_{\mathbf{q}}\), and build a local graph at each point as self-attention.
A cluster-based attention module is adopted to enhance the local features by forcing the message passing only among the most related features. A differentiable layer matches and optimizes the improved features to obtain the score matrix \(\mathcal{S}\). Finally, an outlier filtering network is applied to prune the matches with low confidence, leading to the final 2D-3D correspondences \(\mathcal{M}_{final}\).

We initially employ two local feature extractors to encode RGB and position information for each point simultaneously (Sec. 2.2.1). Additionally, we cluster the points based on their distances and generate global graphs to obtain the global-level geometric embeddings (Sec. 2.2.2). Next, we concatenate the local point features with their corresponding global features and input them into the cluster-based local matching module to identify the initial matches (Sec. 2.2.3). Finally, we incorporate a classification network to filter out matches with low confidence to refine the initial matches (Sec. 2.3).

#### 2.2.1 Local Feature Extraction

To extract point-wise features from both the 2D keypoint set \(\mathbf{P}\) and the 3D point cloud \(\mathbf{Q}\), we consider the inputs as bearing vectors equipped with color information: \(\mathcal{P}=\{\mathbf{b_{p}},\mathbf{c_{p}}\}\) and \(\mathcal{Q}=\{\mathbf{b_{q}},\mathbf{c_{q}}\}\). Two ResNet-style point encoders [17; 4], denoted as \(\mathcal{F}_{b}\) and \(\mathcal{F}_{c}\), are applied to extract position and color embeddings separately. We then obtain the local point features, \(\mathbf{f_{p}}\) and \(\mathbf{f_{q}}\), as follows: \[\mathbf{f_{p}}=\mathcal{F}_{b}(\mathbf{b_{p}})+\mathcal{F}_{c}(\mathbf{c_{p}}),\ \ \mathbf{f_{q}}=\mathcal{F}_{b}(\mathbf{b_{q}})+\mathcal{F}_{c}(\mathbf{c_{q}}). \tag{2}\] The resulting point-wise features \(\mathbf{f_{p}}\) and \(\mathbf{f_{q}}\) are matrices of dimensions \(\mathbb{R}^{N\times d}\) and \(\mathbb{R}^{M\times d}\), respectively, where \(N\) and \(M\) represent the number of keypoints in \(\mathbf{P}\) and \(\mathbf{Q}\), and \(d\) denotes the dimensionality of the encoded features, e.g., \(d=128\) as for SIFT features [26].

#### 2.2.2 Global Geometric Guidance

Global context guidance has demonstrated its effectiveness in various computer vision tasks [46; 22; 53; 33]. Global context helps to differentiate local descriptors from similar structures or patches, thereby reducing ambiguity. However, most existing methods [46; 33] consider the outputs from different encoding layers as global and local features. This approach is not suitable for our scenario, as our inputs are sparse points: downsampling the sparse point cloud results in losing distinctive geometric structures. Hence, we adopt cluster-based geometric encoding to extract global embeddings. As shown in Fig. 3 (a) and (c), the input bearing vectors, both in the image and in the point cloud, are first clustered into \(K\) groups. The groups represent distinct clusters, each associated with a cluster center as the global position, denoted by \(\hat{\mathbf{b}}_{\mathbf{p},k}\in\mathbb{R}^{2},k=1,...,K\). The corresponding global embedding \(\hat{\mathbf{f}}_{\mathbf{p},k}\in\mathbb{R}^{d}\) is obtained by averaging the point embeddings within cluster \(k\), thus \(\hat{\mathbf{f}}_{\mathbf{p},k}=\frac{1}{|\mathcal{C}_{k}|}\sum_{n\in\mathcal{C}_{k}}\mathbf{f}_{\mathbf{p},n}\), where \(\mathcal{C}_{k}\) denotes the set of points assigned to cluster \(k\). The same is conducted on the 3D points to obtain \(\hat{\mathbf{b}}_{\mathbf{q}}\) and \(\hat{\mathbf{f}}_{\mathbf{q}}\).
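To make the inputs concrete, the following is a minimal PyTorch-style sketch of the bearing-vector construction of Sec. 2.1 and the feature mixing of Eq. (2). The two-layer MLPs stand in for the paper's ResNet-style encoders \(\mathcal{F}_{b}\) and \(\mathcal{F}_{c}\), and all sizes, poses, and intrinsics are illustrative assumptions rather than the authors' configuration.

```python
# A minimal sketch of the inputs of Secs. 2.1-2.2.1; simple MLPs stand in for
# the ResNet-style encoders F_b and F_c, and all shapes are illustrative.
import torch
import torch.nn as nn

def bearing_2d(p, K):
    """Uplift N x 2 pixels to bearing vectors via [b, 1]^T = K^{-1} [p, 1]^T."""
    ph = torch.cat([p, torch.ones(p.shape[0], 1)], dim=1)   # homogeneous pixels
    b = (torch.linalg.inv(K) @ ph.T).T                      # N x 3, last comp. = 1
    return b[:, :2]

def bearing_3d(q, R, t):
    """Map M x 3 world points to the camera frame and normalize by depth z."""
    qc = (R @ q.T).T + t                                    # camera coordinates
    return qc[:, :2] / qc[:, 2:3]                           # divide by z-component

d = 128
enc_pos = nn.Sequential(nn.Linear(2, d), nn.ReLU(), nn.Linear(d, d))  # ~ F_b
enc_rgb = nn.Sequential(nn.Linear(3, d), nn.ReLU(), nn.Linear(d, d))  # ~ F_c

K = torch.tensor([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
p, c_p = torch.rand(1024, 2) * 480, torch.rand(1024, 3)     # 2D keypoints + RGB
q, c_q = torch.rand(2048, 3) + torch.tensor([0., 0., 5.]), torch.rand(2048, 3)

b_p = bearing_2d(p, K)
b_q = bearing_3d(q, torch.eye(3), torch.zeros(3))           # identity pose here
f_p = enc_pos(b_p) + enc_rgb(c_p)                           # Eq. (2)
f_q = enc_pos(b_q) + enc_rgb(c_q)
```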
**Global Geometric Graph.** To aggregate and extract the geometric relations among the clusters, we propose a novel graph neural network that encodes both distance and angular cues; the basic GNN structure is built upon [18; 54]. In the following, we describe the graph building for the 2D global point set \(\hat{\mathcal{P}}=\{\hat{\mathbf{b}}_{\mathbf{p}},\hat{\mathbf{f}}_{\mathbf{p}}\}\); the same goes for \(\hat{\mathcal{Q}}=\{\hat{\mathbf{b}}_{\mathbf{q}},\hat{\mathbf{f}}_{\mathbf{q}}\}\). Each cluster center point \(\hat{\mathbf{b}}_{\mathbf{p},x}\) is connected to its \(k\)-NN neighbours (\(k\leq K\)) in the coordinate space, and \(\xi_{\mathbf{p},(x,y)}\) is the edge between center points \(\hat{\mathbf{b}}_{\mathbf{p},x}\) and \(\hat{\mathbf{b}}_{\mathbf{p},y}\). We update the feature \(\hat{\mathbf{f}}_{\mathbf{p},x}\) using the following equation: \[{}^{(t+1)}\hat{\mathbf{f}}_{\mathbf{p},x}=\max_{\xi_{\mathbf{p},(x,y)}}\mathcal{H}_{g}(^{(t)}\hat{\mathbf{f}}_{\mathbf{p},x}\oplus(^{(t)}\hat{\mathbf{f}}_{\mathbf{p},x}-^{(t)}\hat{\mathbf{f}}_{\mathbf{p},y})), \tag{3}\]
We update the _global geometric embedding_\(\mathbf{\hat{f}}_{\mathbf{p}}^{gg}\) as an angular-aware attention mechanism as follows: \[\mathbf{\hat{f}}_{\mathbf{p}}^{gg}=\mathrm{norm}(\mathbf{\hat{f}}_{\mathbf{p} }^{g}+\mathrm{Att}(\mathbf{\hat{f}}_{\mathbf{p}}^{g},\mathbf{A}_{\mathbf{p}})) ;\quad\mathbf{\hat{f}}_{\mathbf{p}}^{gg}\in\mathbb{R}^{K\times d}, \tag{6}\] where \[\mathrm{Att}(\mathbf{\hat{f}}_{\mathbf{p}}^{g},\mathbf{A}_{\mathbf{p}})=( \mathbf{\hat{f}}_{\mathbf{p}}^{g}\mathbf{W}^{\mathbf{V}}).\frac{(\mathbf{A}_ {\mathbf{p}}\mathbf{W}^{\mathbf{A}})(\mathbf{\hat{f}}_{\mathbf{p}}^{g}\mathbf{ W}^{\mathbf{Q}})^{\mathrm{T}}+(\mathbf{\hat{f}}_{\mathbf{p}}^{g}\mathbf{W}^{ \mathbf{Q}})(\mathbf{\hat{f}}_{\mathbf{p}}^{g}\mathbf{W}^{\mathbf{K}})^{ \mathrm{T}}}{\sqrt{dim}}. \tag{7}\] \(\mathbf{W}^{\mathbf{A}},\mathbf{W}^{\mathbf{Q}},\mathbf{W}^{\mathbf{K}}, \mathbf{W}^{\mathbf{V}}\in\mathbb{R}^{d\times d}\) are the projection matrices of each item. The local point representation with global geometric embedding is as \(\widetilde{\mathcal{P}}=\{\mathbf{b}_{\mathbf{p}},\mathbf{\hat{f}}_{\mathbf{p} }\}\) and \(\widetilde{\mathbf{f}}_{\mathbf{p}}=\mathbf{f}_{\mathbf{p}}\oplus\mathbf{\hat {f}}_{\mathbf{p}}^{gg},\widetilde{\mathbf{f}}_{\mathbf{p}}\in\mathbb{R}^{N \times 2d}\). The operation for the bearing vectors obtained from point cloud \(\hat{\mathcal{Q}}\) is the same and we get \(\widetilde{\mathcal{Q}}=\{\mathbf{b}_{\mathbf{q}},\mathbf{\widetilde{f}}_{ \mathbf{q}}\}\). #### 2.2.3 Cluster-based Local Matching After extracting the global geometric embedding, we implement a _cluster-based matching module_ to obtain the initial intra-domain 2D-3D matches. This cluster-based GNN [42] has been shown to be more computationally efficient than its complete-graph counterpart [35]. The network considers the local point features from both \(\widetilde{\mathcal{P}}\) and \(\widetilde{\mathcal{Q}}\) a complete set, then classifies the local feature with strong correlations into the same group and restricts the message passing within each group. In addition to its low computational complexity, we found that the cluster GNN can effectively utilize our global-to-local geometric cues, as the clustering operation inherits the property of global graph clustering and forces it to distinguish ambiguous local features even with similar global embedding. **GoMatch-style Initialization.** Our local graph initialization follows the pipeline in GoMatch [54] and conducts it as geometric self-attention. For each local point \(\mathbf{b}_{\mathbf{p},n}\), we construct a local graph according to its \(k^{\prime}\) nearest neighbours in the Euclidean space and update the associated feature \(\mathbf{\widetilde{f}}_{\mathbf{p},n}\in\mathbb{R}^{2d}\) by Eq. 4. Note that we ignore the angular embedding at this stage due to the unaffordable memory requirements with space complexity \(\mathcal{O}(Nk^{\prime 2})\), where \(N\) is the number of local points. We then adopt linear attention [20, 46] as a cross-attention mechanism, which allows each keypoint in one modality to interact with all keypoints from another modality. This not only facilitates inter-modality in the feature matching but also reduces the computational complexity from \(\mathcal{O}(N^{2})\) to \(\mathcal{O}(N)\). 
**Cluster-based Attention.** After the graph initialization, the features \(\mathbf{\widetilde{f}}_{\mathbf{p}}\) and \(\mathbf{\widetilde{f}}_{\mathbf{q}}\) coming from the image and the point cloud respectively, are concatenated and processed in a two-level hierarchical clustering attention module. The hierarchical structure is effective in reducing erroneous grouping. At the first level, we directly cluster the feature vectors into \(I\) coarse groups. In the second level, each coarse group is divided into several small groups. The local point information exchange is conducted at each level and only within each group to obtain more representative features. After the sparse clustering, each feature vector is transformed back to its original position and then split again into \(\mathbf{\widetilde{f}}_{\mathbf{p}}^{\prime}\) and \(\mathbf{\widetilde{f}}_{\mathbf{q}}^{\prime}\) to obtain the keypoints both in the 2D and 3D spaces. **Optimal Transport.** We calculate the cost matrix \(\mathcal{M}\in\mathbb{R}^{N\times M}\) between the two transformed feature sets using the \(L_{2}\) distance between pairs of features. Thus, \(\mathcal{M}(n,m)=||\mathbf{\hat{f}}_{\mathbf{p},n}^{\prime}-\mathbf{\hat{f}}_{ \mathbf{q},m}^{\prime}||_{2}\). Following [35], the cost matrix \(\mathcal{M}\) is extended to \(\mathcal{\bar{M}}\) by adding an additional row and column as dustbins for unmatched points. We then iteratively optimize \(\mathcal{\bar{M}}\) running the Sinkhorn algorithm [44, 8] in a declarative layer to obtain the score matrix \(\mathcal{\bar{S}}\). Finally, \(\mathcal{\bar{S}}\) is converted to \(\mathcal{S}\in\mathbb{R}^{N\times M}\) by dropping the dustbins. The initial 2D-3D matching candidates are acquired by mutual top-1 search, thus \(\mathcal{M}_{init}=\{(\widetilde{n},\widetilde{m})\mid\forall(\widetilde{n}, \widetilde{m})\in\text{MNN}(\mathcal{S})\}\), where MNN is the mutual nearest neighbors operator. Set \(\mathcal{M}_{init}\) provides initial 2D-3D matches that we will further filter in the next section to keep the accurate correspondences only. ### Outlier Rejection After obtaining the initial matches, an outlier pruning approach is adopted to remove the incorrect ones. We apply a classification network, as applied in [52; 54], whose input is the concatenated 2D and 3D keypoint features \(\widetilde{\mathbf{f}}_{\widetilde{n},\widetilde{m}}^{\prime}=\widetilde{ \mathbf{f}}_{\widetilde{p},\widetilde{m}}^{\prime}\oplus\widetilde{\mathbf{f} }_{\mathbf{q},\widetilde{m}}^{\prime}\) and outputs the matching confidence of each matched pair. The final predicted matches are obtained as follows: \[\mathcal{M}_{final}=\{(\widetilde{n}^{\prime},\widetilde{m}^{\prime})\mid \forall\text{ classifier}(\widetilde{\mathbf{f}}_{\widetilde{n},\widetilde{m}}^{ \prime}\mid(\widetilde{n},\widetilde{m})\in\mathcal{M}_{init})\geq\theta\}, \tag{8}\] where \(\theta\) is the matching confidence threshold. ### Training Loss We use the same training loss as GoMatch. The loss function \(\mathcal{L}\) consists of two terms, the matching loss \(\mathcal{L}_{ot}\) and the classification loss \(\mathcal{L}_{or}\). The ground truth match set \(\mathcal{M}_{gt}\) is estimated by reprojecting the 3D points to the 2D image plane and calculating the pixel distance. We also include point sets \(\mathcal{I}\) and \(\mathcal{J}\) for the unmatched points in \(\mathcal{P}\) and \(\mathcal{Q}\), respectively. 
The matching loss \(\mathcal{L}_{ot}\) minimizes the negative log-likelihood of the matching score \(\tilde{\mathcal{S}}\). \[\mathcal{L}_{ot}=-\frac{1}{|\mathcal{M}_{gt}|+|\mathcal{I}|+|\mathcal{J}|}( \sum_{(n,m)\in\mathcal{M}_{gt}}\log\tilde{\mathcal{S}}_{n,m}+\sum_{n\in \mathcal{I}}\log\tilde{\mathcal{S}}_{i,m+1}+\sum_{m\in\mathcal{J}}\log\tilde{ \mathcal{S}}_{N+1,j}). \tag{9}\] The classification loss is defined as \[\mathcal{L}_{or}=-\frac{1}{|\mathcal{M}_{init}|}\sum_{i=1}^{|\mathcal{M}_{init }|}w_{i}(y_{i}\log(p_{i})+(1-y_{i})\log(1-p_{i})), \tag{10}\] where \(w_{i}\) is the balance weight of positive and negative samples, \(y_{i}\) is the gt matching label for the \(i\)-th correspondences, \(p_{i}\) is the predicted probability of a true match for the \(i\)-th correspondences. The total loss is the sum of the two terms, thus, \(\mathcal{L}=\mathcal{L}_{ot}+\mathcal{L}_{or}\). ## 3 Experiments In this section, we first describe the experimental setup, employed datasets, and evaluation protocols. Next, we provide detailed comparisons with the baselines on both matching and visual localization tasks. Finally, we conduct ablation studies on each designed component. **Training.** We train the indoor model of DGC-GNN on the ScanNet [9] dataset and the outdoor model on the MegaDepth [23] dataset. We extract up to 1024 keypoints for each training image by the SIFT detector [26]. Similarly as in GoMatch, we first select a subset of the point cloud by applying image retrieval approaches [1; 47] to obtain potential images observing the same part of the scene as the input one. During the training, we randomly sample the retrieval pairs with a visual overlap of more than 35% on MegaDepth and 65% on ScanNet to ensure enough matches on each pair. For the global geometric embedding, we cluster the 2D/3D bearing vectors into \(K=10\) groups, and each cluster center point is connected to its \(k=4\) nearest neighbors to build the global graph. For the local point graph, we connect each point with its 10 nearest neighbors and the cluster-based attentions are performed twice to force the intra-cluster information exchange. The model is optimized using the Adam with a fixed learning rate of 1e-3. We train DGC-GNN with one 32GB Telsa V100 GPU. The convergence of the model typically requires 25 epochs with around 20 hours for indoor scenes and 35 epochs (approx. 35 hours) for outdoor scenes. **Datasets.** We use ScanNet and MegaDepth for the model training and 2D-3D matching task evaluation. As a downstream application, we perform visual localization on the 7Scenes [43] and Camridge-Landmarks [21] datasets. MegaDepth is a popular outdoor dataset with 196 scenes captured around the world. The sparse 3D reconstructions are provided by the COLMAP [40] structure-from-motion software. Following [54], we train our outdoor model on 99 scenes and evaluate it with 53 scenes. ScanNet is a large-scale RGB-D indoor dataset that consists of 1613 scans with over 2.5 million images. We randomly selected 105 scenes for the training and 30 scenes for the evaluation. Cambridge-Landmarks is a middle-scale outdoor dataset consisting of 6 individual scenes. The ground truth camera poses are provided by a structure-from-motion algorithm. We follow [21; 54] to evaluate our method on four of the scenes. 7scenes is a small indoor dataset with RGB-D images and camera poses provided by the depth SLAM system. We evaluate on the standard test sequences. 
## 3 Experiments

In this section, we first describe the experimental setup, the employed datasets, and the evaluation protocols. Next, we provide detailed comparisons with the baselines on both matching and visual localization tasks. Finally, we conduct ablation studies on each designed component.

**Training.** We train the indoor model of DGC-GNN on the ScanNet [9] dataset and the outdoor model on the MegaDepth [23] dataset. We extract up to 1024 keypoints for each training image with the SIFT detector [26]. As in GoMatch, we first select a subset of the point cloud by applying image retrieval approaches [1, 47] to obtain potential images observing the same part of the scene as the input one. During training, we randomly sample retrieval pairs with a visual overlap of more than 35% on MegaDepth and 65% on ScanNet to ensure enough matches on each pair. For the global geometric embedding, we cluster the 2D/3D bearing vectors into \(K=10\) groups, and each cluster center point is connected to its \(k=4\) nearest neighbors to build the global graph. For the local point graph, we connect each point with its 10 nearest neighbors, and the cluster-based attention is performed twice to enforce intra-cluster information exchange. The model is optimized using Adam with a fixed learning rate of 1e-3. We train DGC-GNN on one 32GB Tesla V100 GPU. The model typically converges in 25 epochs (around 20 hours) for indoor scenes and 35 epochs (approx. 35 hours) for outdoor scenes.

**Datasets.** We use ScanNet and MegaDepth for model training and for evaluating the 2D-3D matching task. As a downstream application, we perform visual localization on the 7Scenes [43] and Cambridge-Landmarks [21] datasets. MegaDepth is a popular outdoor dataset with 196 scenes captured around the world. The sparse 3D reconstructions are provided by the COLMAP [40] structure-from-motion software. Following [54], we train our outdoor model on 99 scenes and evaluate it on 53 scenes. ScanNet is a large-scale RGB-D indoor dataset that consists of 1613 scans with over 2.5 million images. We randomly selected 105 scenes for training and 30 scenes for evaluation. Cambridge-Landmarks is a middle-scale outdoor dataset consisting of 6 individual scenes. The ground truth camera poses are provided by a structure-from-motion algorithm. We follow [21, 54] and evaluate our method on four of the scenes. 7Scenes is a small indoor dataset with RGB-D images and camera poses provided by a depth-based SLAM system. We evaluate on the standard test sequences.

**Evaluation Protocol.** For matching on ScanNet and MegaDepth, we follow [54] and report the AUC score calculated from the reprojection errors. To calculate the reprojection errors for the 2D-3D matches in \(\mathcal{M}_{final}\), we project the 3D points to the image plane using the ground truth and estimated camera poses. Then we calculate the \(L_{2}\) distance between the ground truth and estimated reprojected 2D points. We use multiple thresholds, 1, 5, and 10 pixels, to evaluate the algorithms. The camera translation and rotation error quantiles at 25%, 50%, and 75% are also reported. In addition, we evaluate the matching quality by calculating the matching precision P, i.e., the ratio of inlier matches after PnP-RANSAC to the number of final matches in \(\mathcal{M}_{final}\). For visual localization tasks, we report the median translation (in meters) and rotation (in degrees) camera pose errors.
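The reprojection-error computation behind this protocol can be sketched as follows; the pinhole projection helper and the recall-at-threshold summary are illustrative simplifications (the AUC reported in [54] integrates the cumulative error curve rather than evaluating single thresholds).

```python
import numpy as np

def project(X, K, R, t):
    """Pinhole projection of N x 3 points under pose (R, t) and intrinsics K."""
    x = (K @ (R @ X.T + t.reshape(3, 1))).T
    return x[:, :2] / x[:, 2:3]

def reprojection_recall(X, K, pose_gt, pose_est, thresholds=(1, 5, 10)):
    """L2 distance between GT-pose and estimated-pose reprojections of the
    matched 3D points, summarized as the fraction below each pixel threshold."""
    err = np.linalg.norm(project(X, K, *pose_gt) - project(X, K, *pose_est), axis=1)
    return {t: float((err < t).mean()) for t in thresholds}
```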
### 2D-3D Matching

We compare with the two descriptor-free matchers GoMatch [54] and BPnPNet [4]. During inference, we use the 3D points from the top-\(k\) retrieved database images to match against the keypoints from query images. Following [54], we report the upper bound of the AUC score at 1, 5, and 10 pixels given by the ground truth matches. We refer to these values as Oracle. We use the official code with the default settings to generate the evaluation dataset on MegaDepth [23] and rerun GoMatch and BPnPNet with the released models on this dataset. Note that we also tested GoMatch after retraining it on MegaDepth and achieved similar results as with the released model.

**Matching Results.** The results with \(k=1\) and \(k=10\) are presented in Table 1, where \(k\) is the number of retrieved image pairs used for evaluation. The proposed method outperforms GoMatch and BPnPNet by a significant margin on both datasets. Specifically, DGC-GNN achieves 10.2 / 37.64 / 44.04% reprojection AUC compared to GoMatch with 5.67 / 22.43 / 28.01% on MegaDepth with \(k=1\). DGC-GNN halves the rotation and translation errors of GoMatch at all thresholds and obtains better matching quality. Notably, the performance of DGC-GNN with \(k=1\) surpasses that of GoMatch with \(k=10\), indicating the effectiveness of our method even with a single view.

**Sensitivity to Outliers.** To evaluate the sensitivity of our method to keypoint outliers, we follow the procedure in GoMatch [54]. The outliers are controlled by the outlier ratio, ranging from 0 to 1, calculated as the number of unmatched keypoints divided by the maximum of the numbers of 2D and 3D points. When the outlier ratio is \(0\), all of the input 2D and 3D points are selected from the ground truth matches, and no outliers are included in the matching process. When it is \(1\), we directly use the keypoints from the 2D query and the 3D points from the top-\(k\) retrieved images as input to find the matches without any filtering or outlier removal. The results are shown in Fig. 4. Even in the presence of outliers, DGC-GNN outperforms the other methods by a large margin. This indicates that our method is more robust to the presence of outliers and can handle challenging matching scenarios more effectively than the state of the art.

Figure 4: **Outlier Sensitivity.** The AUC scores of BPnPNet [4], GoMatch [54], and the proposed DGC-GNN thresholded at 1, 5, and 10 pixels are plotted as a function of the outlier ratio. Oracle represents the AUC score upper bound using ground truth 2D-3D matches.

**Ablation Study.** We investigate the impact of the different components of DGC-GNN on the 2D-3D matching quality on the MegaDepth dataset [23] with \(k=10\). The results are reported in Table 2. We provide further results in the supplementary material. We conduct the ablations by gradually adding the components to the original GoMatch pipeline: global geometric embedding (G. Emb.), cluster attention (C. Att.), color (Color), and angular embedding (Ang.). Incorporating color information into the matching process significantly impacts the performance, resulting in improvements of 2.55 / 3.88 / 2.55% (AUC@1 / 5 / 10px). This demonstrates the importance of considering color cues for accurate and robust matching. The global-to-local geometric embedding (G. Emb.) and the angular relation embedding (Ang.) substantially improve performance by 1.90 / 5.51 / 5.52% and 1.11 / 3.36 / 3.47%, respectively, highlighting the effectiveness of incorporating global geometric context and local geometric details. The cluster attention mechanism also plays a vital role, improving performance by 0.78 / 3.28 / 3.48%. The best results are obtained when all components are added to the pipeline.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & G. Emb. & C. Att. & Color & Ang. & Reproj. AUC (\%) & Rotation (\({}^{\circ}\)) & Translation \\ & Sec. 2.2.2 & Sec. 2.2.3 & Sec. 2.2.1 & Sec. 2.2.2 & @1 / 5 / 10px (\(\uparrow\)) & \multicolumn{2}{c}{Quantile@25 / 50 / 75\% (\(\downarrow\))} \\ \hline \hline GoMatch [54] & & & & & 8.90 / 35.67 / 44.99 & 0.18 / 1.29 / 16.65 & 0.02 / 0.12 / 1.92 \\ \hline \multirow{3}{*}{Variants} & ✓ & & & & 10.86 / 41.18 / 50.51 & 0.13 / 0.76 / 13.47 & 0.01 / 0.07 / 1.62 \\ & ✓ & ✓ & & & 11.64 / 44.46 / 53.99 & 0.11 / 0.55 / 9.49 & 0.01 / 0.05 / 1.05 \\ & ✓ & ✓ & ✓ & & 14.19 / 48.34 / 56.54 & 0.08 / 0.34 / 9.23 & 0.01 / 0.03 / 1.03 \\ **DGC-GNN** & ✓ & ✓ & ✓ & ✓ & **15.30 / 51.70 / 60.01** & **0.07 / 0.26 / 5.41** & **0.01 / 0.02 / 0.57** \\ \hline \hline \end{tabular} \end{table} Table 2: **Ablation Study.** AUC scores thresholded at 1, 5, and 10 pixels; rotation and translation error quantiles at 25, 50, 75% with the proposed components added one by one to the GoMatch pipeline.

### Visual Localization

The visual localization task is the estimation of the 6-degrees-of-freedom camera pose of an input query image _w.r.t._ a known map of the scene. One of the most prominent ways of approaching this problem is via establishing 2D-3D correspondences and running robust pose estimation. Following [54], we run the proposed DGC-GNN to obtain matches. For each query image, we match its keypoints against the 3D points from the top-10 retrieved views to build the 2D-3D correspondences. The camera pose is then estimated by PnP-RANSAC [15, 16]. We consider two standard datasets, 7Scenes [43] and Cambridge-Landmarks [21], to evaluate the proposed method. For 7Scenes, we extract the keypoints with the SIFT detector, and the top 10 pairs are retrieved using DenseVLAD [47]. For Cambridge-Landmarks, the keypoints are extracted by SuperPoint [10] to ensure consistency with the SuperPoint-based structure-from-motion model. The top 10 pairs are provided by NetVLAD [1].

**Results.** In Table 3, we present the 3D model maintenance costs, privacy, storage requirements, and camera pose median errors (cm, \({}^{\circ}\)) of standard descriptor-based localization techniques and descriptor-free methods, including the proposed DGC-GNN.
The proposed method consistently outperforms GoMatch on all scenes by a significant margin. On the Cambridge-Landmarks dataset, the average median pose error of DGC-GNN is 54 cm / 2.23\({}^{\circ}\), while GoMatch leads to a 173 cm / 5.87\({}^{\circ}\) average error. On 7Scenes, the average error of DGC-GNN is 15 cm / 4.47\({}^{\circ}\) and that of GoMatch is 22 cm / 5.77\({}^{\circ}\). DGC-GNN requires a similar amount of memory to other descriptor-free methods. Also, it inherits their privacy-preserving properties due to not requiring or storing visual descriptors. The difference between descriptor-based (DB) and descriptor-free (DF) algorithms is clear. While descriptor-based ones lead to the best accuracy overall, they require excessive memory and descriptor maintenance and are susceptible to privacy attacks. Although the model compression method HybridSC [5] is effective at saving storage, it achieves performance similar to DGC-GNN on the Cambridge dataset while still requiring descriptor maintenance. End-to-end methods (E2E) overcome these problems and achieve accurate results. However, their main limitation is that such approaches must be trained independently on each scene. The proposed DGC-GNN only needs to be trained once, making it more efficient and convenient to use as an off-the-shelf tool.

[Table 3 body not recoverable from extraction. The compared methods include the end-to-end methods MS-Trans. [41], DSAC*, and HSCNet [22]; the descriptor-based methods HybridSC [5], AS [38], and SP [10]+SG [35]; and the descriptor-free methods GoMatch [54] and DGC-GNN. See the caption below for the reported metrics.]

Table 3: **Visual localization.** We report the median pose errors (cm, \({}^{\circ}\)) and storage requirements (MB) on the scenes of the 7Scenes [43] and Cambridge-Landmarks [21] datasets. Three groups of methods are shown: end-to-end (E2E), descriptor-based (DB), and descriptor-free (DF). We do not show BPnPNet as it fails on most scenes. The best results are shown in bold in each group.

## 4 Conclusion

In conclusion, this paper introduces DGC-GNN, a novel graph-based pipeline for descriptor-free 2D-3D matching that effectively leverages geometric and color cues in a global-to-local manner. Our global-to-local procedure encodes both Euclidean and angular relations at a coarse level, forming a geometric embedding to guide the local point matching. By employing a cluster-based transformer, we enable efficient information passing within local clusters, ultimately leading to significant improvements in the number of correct matches and the accuracy of pose estimation. Compared to the state-of-the-art descriptor-free matcher GoMatch [54], the proposed DGC-GNN demonstrates a substantial improvement, doubling the accuracy on real-world and large-scale datasets. Furthermore, it results in significantly increased localization accuracy. These advancements contribute to reducing the gap between descriptor-based and descriptor-free methods while addressing the limitations of descriptor-based ones, such as memory footprint, maintenance costs, and susceptibility to privacy attacks. The source code and models will be made publicly available.

**Limitations.** The primary limitation of our proposed DGC-GNN method lies in its performance being inferior to traditional descriptor-based algorithms. The performance difference can be attributed to the insufficiency of unique 3D structures in the geometry, which hinders the ability of the algorithm to identify distinct matches in real-world scenarios. Although DGC-GNN demonstrates a notable improvement over existing descriptor-free approaches, there remains a performance gap to overcome in order to achieve results on par with or superior to those of descriptor-based methods.

**Acknowledgements.** This work was supported by the Academy of Finland (grants No. 327911 and No. 353138) and by an ETH Postdoc fellowship. We acknowledge the computational resources provided by the Aalto Science-IT project and CSC-IT Center for Science, Finland.

## 5 Supplementary Material

## Appendix A Training and Evaluation Details

**Dataset Generation.** The training data generation process for MegaDepth [23] follows the methodology outlined in GoMatch [54]. The undistorted SfM reconstructions used in MegaDepth are provided by D2Net [13]. For training, we sample up to 500 images from each scene. For each sampled image, we select the top-\(k\) co-visible views that have at least 35% image overlap. This ensures that there are enough matches available for training. The overlap score is computed by dividing the number of co-visible 3D points by the total number of points in the training image. In the case of ScanNet [9], a similar procedure is conducted. We also sample up to 500 images from each scene for the training set generation. The co-visible images are obtained using the co-visibility scores provided by LoFTR [46]. We extract all the co-visible views of the training image with co-visibility scores larger than 0.65, and then randomly sample the top-\(k\) views for training. Since ScanNet is an RGB-D dataset without SfM models, we obtain the 3D points for each image by back-projecting the detected 2D keypoints with valid depth to 3D coordinates. By doing this for each image, we reconstruct a sparse 3D point cloud based on the detected 2D keypoints. Note that correspondence between different co-visible frames is not required in this case.
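This back-projection step can be sketched as follows; the dense depth-map indexing and the camera-to-world pose convention are assumptions for illustration, not the paper's exact data pipeline.

```python
import numpy as np

def backproject_keypoints(kpts, depth, K, T_cam2world):
    """Lift 2D keypoints with valid depth into world-frame 3D points.
    kpts: N x 2 pixel coordinates; depth: H x W depth map (0 = invalid);
    K: 3 x 3 intrinsics; T_cam2world: 4 x 4 camera-to-world pose."""
    u = kpts[:, 0].round().astype(int)
    v = kpts[:, 1].round().astype(int)
    z = depth[v, u]
    ok = z > 0  # keep only keypoints with a valid depth reading
    rays = np.linalg.inv(K) @ np.stack([u[ok], v[ok], np.ones(ok.sum())])
    X_cam = rays * z[ok]                            # scale unit-plane rays by depth
    X_hom = np.vstack([X_cam, np.ones((1, ok.sum()))])
    return (T_cam2world @ X_hom)[:3].T              # N_valid x 3 world points
```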
In total, for MegaDepth, we generate a training set consisting of 25,624 images from 99 scenes and a test set comprising 12,399 images covering 53 scenes. For ScanNet, we create a training set with 52,008 images from 105 scenes. The test set for ScanNet consists of 14,892 query images from 30 scenes. The data generation for 7Scenes [43] and Cambridge-Landmarks [21] follows the same procedure as in [54].

**Inference.** We consider a query with at least 10 keypoints as valid input. The 3D points from the top-\(k\) retrieved database images are then matched against the query keypoints with our proposed pipeline. We use the Sinkhorn algorithm [44, 8] to optimize the extended cost matrix \(\bar{\mathcal{M}}\in\mathbb{R}^{(N+1)\times(M+1)}\) in an iterative manner with up to 20 iterations to obtain the initial matches. The final matches are obtained by removing matches whose confidence is below the threshold \(\theta=0.5\) in the outlier rejection module. For the visual localization task, the camera poses are estimated by the P3P solver with RANSAC [15] implemented in OpenCV [3] and then refined by the Levenberg-Marquardt [28] algorithm on the inlier matches, minimizing the reprojection error.
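A minimal OpenCV sketch of this localization step is given below; the RANSAC parameters (reprojection error of 8 px, 1000 iterations) are illustrative assumptions, not the settings used in the paper.

```python
import cv2
import numpy as np

def localize(pts_2d, pts_3d, K):
    """P3P inside RANSAC, followed by Levenberg-Marquardt refinement
    of the pose on the inlier matches."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d.astype(np.float64), pts_2d.astype(np.float64), K, None,
        flags=cv2.SOLVEPNP_P3P, reprojectionError=8.0, iterationsCount=1000)
    if not ok or inliers is None:
        return None
    idx = inliers.ravel()
    rvec, tvec = cv2.solvePnPRefineLM(
        pts_3d[idx].astype(np.float64), pts_2d[idx].astype(np.float64),
        K, None, rvec, tvec)
    return rvec, tvec  # world-to-camera rotation (Rodrigues vector) and translation
```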
## Appendix B Additional Results

**Qualitative Results.** More visualizations of the inlier matches provided by DGC-GNN and GoMatch on MegaDepth are shown in Fig. 5. DGC-GNN consistently achieves more matches on multiple scenes, highlighting the effectiveness of the proposed method.

Figure 5: **2D-3D Matching** (shown by green lines) with the proposed DGC-GNN and GoMatch [54].

**Additional Ablation Results.** In addition to the ablation results presented in the main paper, we also provide ablation results for single-view matching with \(k=1\) on MegaDepth [23]. Furthermore, we conduct two additional ablations to investigate the impact of different component selections. First, we compare the effectiveness of the geometric global embedding (G. Emb.) used in the main paper with a global clustering label embedding (G. Label): instead of encoding geometric cues, we encode the label of each global cluster and concatenate it to the local point feature. Then, we explore the selection of different clustering algorithms, comparing the performance of K-Means and Mean-Shift in our pipeline. The results are presented in Table 4. We observe similar conclusions for each component as in the main paper. The results obtained using the global label embedding (G. Label) with cluster attention (C. Att.) are even worse than those with the geometric embedding (G. Emb.) alone, indicating the superiority of our clustering-based geometric embedding over the label embedding and highlighting the importance of incorporating geometric cues in the embedding process for effective point matching. Regarding the impact of different clustering algorithms, we only observe a minor difference between the K-Means and Mean-Shift results, suggesting that our approach is robust to the choice of the clustering algorithm. In addition to the numerical results, we also visualize the inlier matches (see Fig. 6), as well as the training loss and the number of predicted matches during training (see Fig. 7), for different architectures to provide deeper insight into their behavior and performance.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Global} & \multirow{2}{*}{C. Att.} & \multirow{2}{*}{Color} & \multirow{2}{*}{Ang.} & \multirow{2}{*}{Cluster} & Reproj. AUC (\%) & Rotation (\({}^{\circ}\)) & Translation \\ & & & & & & @1 / 5 / 10px (\(\uparrow\)) & \multicolumn{2}{c}{Quantile@25 / 50 / 75\% (\(\downarrow\))} \\ \hline \hline GoMatch [54] & & & & & & 5.67 / 22.43 / 28.01 & 0.60 / 10.08 / 34.63 & 0.06 / 1.06 / 3.73 \\ \hline \multirow{5}{*}{Variants} & G.Emb & & & & K-means & 7.68 / 28.41 / 34.36 & 0.28 / 6.78 / 34.52 & 0.03 / 0.73 / 3.77 \\ & G.Label & ✓ & & & K-means & 7.13 / 27.33 / 33.18 & 0.31 / 7.34 / 33.63 & 0.03 / 0.76 / 3.64 \\ & G.Emb & ✓ & & & K-means & 8.10 / 30.64 / 37.07 & 0.24 / 4.48 / 34.30 & 0.03 / 0.63 / 3.51 \\ & G.Emb & ✓ & ✓ & & K-means & 9.82 / 35.29 / 41.16 & 0.17 / 2.88 / 31.74 & 0.02 / 0.27 / 3.24 \\ & G.Emb & ✓ & ✓ & ✓ & Mean-shift & 10.07 / 36.01 / 43.03 & 0.16 / 2.15 / 28.99 & **0.01** / 0.20 / 3.26 \\ \hline **DGC-GNN** & G.Emb & ✓ & ✓ & ✓ & K-means & **10.20** / **37.64** / **44.04** & **0.15** / **1.53** / **27.93** & **0.01** / **0.15** / **3.00** \\ \hline \hline \end{tabular} \end{table} Table 4: **Additional Ablation Results.** AUC scores thresholded at 1, 5, and 10 pixels with \(k=1\); rotation and translation error quantiles at 25, 50, 75% with the proposed components added one by one to the GoMatch pipeline.

Figure 6: **Qualitative Matching Results of Different Architectures.** We visualize the inlier matches (shown by green lines) after PnP-RANSAC with different architectures.

## Appendix C Generalizability

Similar to [54], we discuss the generalizability of our DGC-GNN model on the visual localization task across different types of training and evaluation scenes. Specifically, we investigate the performance of our model when trained on MegaDepth [23] and ScanNet [9] and evaluated on the 7Scenes dataset [43]. We also explore the impact of using different keypoint detectors, namely SIFT [26] and SuperPoint [10], during the evaluation. The results of these experiments are summarized in Table 5, which provides an overview of the performance of our DGC-GNN model under different training and evaluation conditions. The results show that the DGC-GNN model trained on the ScanNet dataset, using the SIFT keypoint detector, and evaluated on the indoor 7Scenes dataset with the same detector achieves the best overall performance. This demonstrates the effectiveness and robustness of our model in this specific training and evaluation scenario. However, we also note minor differences in performance when the model is trained on the MegaDepth dataset or evaluated using SuperPoint keypoints. Despite these variations, our findings suggest that the DGC-GNN model is scene-agnostic and can generalize well to different types of keypoint detectors. This is an encouraging result, as it showcases the versatility and generalization capability of the DGC-GNN approach, allowing it to be applied in various settings without significant degradation in performance.

## Appendix D Model Parameters and Timing

We discuss the model parameters and running time of DGC-GNN in this section. DGC-GNN incorporates global geometric embedding and local clustering attention, has around 5.7 million trainable parameters, and has an estimated model size of 22.6 MB. The average inference time per image pair over the MegaDepth evaluation queries is 77.8 ms; on ScanNet queries it is 70.9 ms. The measurements are conducted on a 32GB NVIDIA Tesla V100 GPU.
2301.07853
DECISIVE Benchmarking Data Report: sUAS Performance Results from Phase I
This report reviews all results derived from performance benchmarking conducted during Phase I of the Development and Execution of Comprehensive and Integrated Subterranean Intelligent Vehicle Evaluations (DECISIVE) project by the University of Massachusetts Lowell, using the test methods specified in the DECISIVE Test Methods Handbook v1.1 for evaluating small unmanned aerial systems (sUAS) performance in subterranean and constrained indoor environments, spanning communications, field readiness, interface, obstacle avoidance, navigation, mapping, autonomy, trust, and situation awareness. Using those 20 test methods, over 230 tests were conducted across 8 sUAS platforms: Cleo Robotics Dronut X1P (P = prototype), FLIR Black Hornet PRS, Flyability Elios 2 GOV, Lumenier Nighthawk V3, Parrot ANAFI USA GOV, Skydio X2D, Teal Golden Eagle, and Vantage Robotics Vesper. Best in class criteria is specified for each applicable test method and the sUAS that match this criteria are named for each test method, including a high-level executive summary of their performance.
Adam Norton, Reza Ahmadzadeh, Kshitij Jerath, Paul Robinette, Jay Weitzen, Thanuka Wickramarathne, Holly Yanco, Minseop Choi, Ryan Donald, Brendan Donoghue, Christian Dumas, Peter Gavriel, Alden Giedraitis, Brendan Hertel, Jack Houle, Nathan Letteri, Edwin Meriaux, Zahra Rezaei Khavas, Rakshith Singh, Gregg Willcox, Naye Yoni
2023-01-19T02:50:40Z
http://arxiv.org/abs/2301.07853v2
# DECISIVE Benchmarking Data Report

###### Abstract

This report reviews all results derived from performance benchmarking conducted during Phase I of the Development and Execution of Comprehensive and Integrated Subterranean Intelligent Vehicle Evaluations (DECISIVE) project by the University of Massachusetts Lowell, using the test methods specified in the DECISIVE Test Methods Handbook v1.1 for evaluating small unmanned aerial systems (sUAS) performance in subterranean and constrained indoor environments, spanning communications, field readiness, interface, obstacle avoidance, navigation, mapping, autonomy, trust, and situation awareness. Using those 20 test methods, over 230 tests were conducted across 8 sUAS platforms: Cleo Robotics Dronut X1P (P = prototype), FLIR Black Hornet PRS, Flyability Elios 2 GOV, Lumenier Nighthawk V3, Parrot ANAFI USA GOV, Skydio X2D, Teal Golden Eagle, and Vantage Robotics Vesper. Best in class criteria are specified for each applicable test method and the sUAS that match these criteria are named for each test method, including a high-level executive summary of their performance.

**Disclaimer: Certain commercial entities, equipment, or materials are identified in this document. Such identification is not intended to imply recommendation or endorsement by the U.S. Army Combat Capabilities Development Command Soldier Center (DEVCOM-SC), the Army, or the Department of Defense (DoD), nor is it intended to imply that the entities, materials, or equipment are necessarily the best available for the purpose.**

U.S. Army DEVCOM-SC Contract # W911QY-18-2-0006, UMass Lowell. Approved for public release: PAO #PR2023_74172.

**Table of Contents**

- Executive Summary
- sUAS Platforms Evaluated
- Communications
  - Non-Line-of-Sight (NLOS) Communications
  - Non-Line-of-Sight (NLOS) Video Latency
  - Interference Reaction
- Field Readiness
  - Runtime Endurance
  - Takeoff and Land/Perch
  - Room Clearing
  - Indoor Noise Level
  - Logistics Characterization
- Interface
  - Operator Control Unit (OCU) Characterization
- Obstacle Avoidance
  - Obstacle Avoidance and Collision Resilience
- Navigation
  - Position and Traversal Accuracy
  - Navigation Through Apertures
  - Navigation Through Confined Spaces
- Mapping
  - Indoor Mapping Resolution
  - Indoor Mapping Accuracy
- Autonomy
  - Non-Contextual Autonomy Ranking
  - Contextual Autonomy Ranking
- Trust
  - Characterizing Factors of Trust
- Situation Awareness
  - Interface-Afforded Attention Allocation
  - Situation Awareness (SA) Survey Comparison
- References

## Executive Summary

Using the 20 test methods specified in the DECISIVE Test Methods Handbook v1.1 [Norton et al., 2022], over 230 tests were conducted across 8 sUAS platforms: Cleo Robotics Dronut X1P (P = prototype), FLIR Black Hornet PRS, Flyability Elios 2 GOV, Lumenier Nighthawk V3, Parrot ANAFI USA GOV, Skydio X2D, Teal Golden Eagle, and Vantage Robotics Vesper. The table below provides a high-level review of each system designated as best in class per test method, followed by a summarization of best in class per test method category. Some tests are not shown in the below tables if best in class criteria are not appropriate (e.g., Logistics and OCU Characterization).

[Best-in-class summary tables not recoverable from extraction.]
It should be noted that the results contained in this report should be interpreted as benchmarks for each system at this particular moment in time and that their performance may differ in future evaluations due to system updates.

## sUAS Platforms Evaluated

The sUAS platforms evaluated for this project were selected due to matching some of the desired performance capabilities for operating in subterranean and constrained indoor environments. These desired capabilities are derived from various Army reference documents, including the U.S. Army Subterranean and Dense Urban Environment MATDEV CoP Future Materiel Experiment (MATEx) Planning: Dense Urban Materiel Concepts and Capabilities RFI, as well as guidance from DEVCOM-SC. These capabilities include (in order of decreasing importance): GPS-denied operation, collision avoidance, ability to perch and stare, ability to operate in low-light conditions, and being small enough to comfortably fit through a typical door threshold. A set of 8 platforms was evaluated, 4 of which are from the Blue sUAS list and the remaining 4 of which are NDAA-compliant systems. While not all systems match all selection criteria, the 8 that were selected initially claimed to meet the minimum defined criteria for GPS-denied operation and being physically small enough to fit through a typical door threshold. The systems are listed below; full configuration details are provided in the **Logistics Characterization** test method results.

## Communications

### Non-Line-of-Sight (NLOS) Communications

### Summary of Test Method

This test method consists of connecting the sUAS and the OCU at an initial position and then moving the sUAS to other positions that are in NLOS with the initial point due to a wall or floor obstruction. The NLOS communication range for each position is recorded, measured as a straight path through one or more floors and walls between the sUAS and the OCU. The position at which communication fails is indicated by a lack of ability to transmit video, control signal, or command the sUAS to perform tasks. This measure provides an approximate scenario at which the sUAS would be expected to lose communications signal in a real-world deployment. For each OCU position in the test, the sUAS starts on the ground while the operator attempts to make an initial connection to confirm video and control signals (i.e., static connection test). Once confirmed, the operator attempts the following tasks: takeoff, hovering in place, yawing, pitching forward and back, rolling left and right, ascending and descending, camera movement, and landing. This test method can be run concurrently with the NLOS Video Latency test method.

### Benchmarking Results

Tests were conducted at the UMass Lowell NERVE Center and Muscatatuck Urban Training Center (MUTC) in the Fire Trainer and Hotel Trainer test sites, both comprised of Conex container structures. Results from both test locations are shown below.
### UMass Lowell NERVE Center

**Performance data.** Best in class = maximum NLOS performance through 4 walls or 2 floors (i.e., 1 less than the maximum number of walls or floors successfully transmitted through across all systems).

[Per-system results tables not recoverable from extraction. For each sUAS and communications frequency, the metrics (connect, takeoff, hover, yaw, pitch, roll, ascend/descend, camera movement, and land) were marked pass/fail at each horizontal position (through walls) and vertical position (through floors).]

### Muscatatuck Urban Training Center (MUTC)

**Performance data.** Best in class = maximum NLOS performance through 3 walls or 4 floors (i.e., 1 less than the maximum number of walls or floors successfully transmitted through across all systems).

[Per-system results tables not recoverable from extraction; columns spanned horizontal positions (through walls) and vertical positions (through floors) for each sUAS and communications frequency.]

*Note: The Skydio X2D was not able to take off inside of the hallways of the Conex building (horizontal positions 1 and 3) due to the ceiling height (2.4 m [8 ft]) and lateral distance to the walls (1.2 m [4 ft]), causing it to crash when takeoff was attempted. However, the system could still operate in this environment if flown in from the outside, so successful performance can still be claimed. Due to this issue, it was not attempted to be flown in the vertical test, all of which took place inside of Conex containers of the same measurements.

*Note: Due to instability of the Teal Golden Eagle when flying indoors, we did not attempt to confirm its communications performance at each location.
### Non-Line-of-Sight (NLOS) Video Latency

### Summary of Test Method

This test method is an expansion of an existing test method currently under development by NIST for standardization through the ASTM E54.09 Committee on Homeland Security; Subcommittee on Response Robots. In that test method, a flashing light is placed within view of the sUAS and an external camera is used to record, in the same view, both the flashing light and the OCU display of the flashing light as seen by the sUAS camera. The sUAS and light are positioned further and further apart from the OCU, while still maintaining that the light and OCU screen are visible in the external camera view, to evaluate the impact of range on video latency. The external camera records while the light flashes several times. The video is then exported, and the delay between when the light actually flashes and when it is seen flashing on the OCU screen is calculated by counting video frames and converting to milliseconds (based on the frames per second of the recorded video); a short sketch of this conversion appears at the end of the Interference Reaction section below. This test method adapts the existing method for NLOS operations by instead using two synchronized stopwatches (with millisecond displays) rather than flashing lights, moving them between the different rooms and floors that separate the sUAS and the OCU. An external video camera captures one of the stopwatches and the OCU display in a single frame. Once the watches are synchronized, the sUAS and the other stopwatch are moved into position, pointing the sUAS camera at the stopwatch such that both stopwatches can be seen in the external camera view back at the starting point. After all positions are completed, the video from that camera is exported and evaluated the same as previously described (i.e., counting the time difference between the two). This test method can be run concurrently with the NLOS Communications test method.

### Benchmarking Results

**Performance data.** Best in class = maximum NLOS latency through 4 walls or 2 floors (i.e., 1 less than the maximum number of walls or floors successfully transmitted through across all systems) is within 2 standard deviations of the average latency across all of that system's measurements in the same direction (i.e., horizontal or vertical).

[Per-system latency results tables not recoverable from extraction.]

### Interference Reaction

### Summary of Test Method

The test consists of generating an interfering radio-frequency signal whose frequency falls within the sUAS camera or control communication channels (i.e., jamming its communication channel). The possible outcomes of sUAS behavior once jammed include exhibiting lost or degraded communication functionality (e.g., landing, return to home), automatic channel hopping to deconflict with the interfering signal, or inability to reconnect after interference has ceased (i.e., the sUAS needs to be restarted before connection is regained). There are multiple types of interference tests that are performed (note that each test serves as a prerequisite for running the subsequent tests; e.g., run the Hovering test before running the Command Input test):

- Frequency Characterization: Before the sUAS signal can be interfered with, a receiver antenna connected to a spectrum analyzer can be used to determine exactly which frequencies are being used by the sUAS to operate.
- Grounded Interference: While the sUAS is grounded, transmit the interfering signal and attempt to take off.
- Hovering Interference: While the sUAS is hovering, transmit the interfering signal and attempt to continue hovering in place, yaw, pitch forward and back, roll left and right, ascend and descend, move the camera, and then land.
- Command Input Interference: Command the sUAS to continuously yaw in place while hovering, then proceed to transmit the interfering signal. Note whether the sUAS either continues to turn or stops if and when control is lost.

Note: running these tests may severely degrade existing WiFi networks in the area where testing is conducted.

### Benchmarking Results

**Performance data.** [Per-system interference reaction results tables not recoverable from extraction.]

*Note: Due to instability of the Teal Golden Eagle when flying indoors, we did not attempt to confirm its ability to Hover and Yaw in Place during low or high power interference.
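Referring back to the NLOS Video Latency method above, the following is a minimal sketch of its frame-counting conversion and of the best-in-class consistency check (latency at the deepest position within 2 standard deviations of the system's average); the function names and example numbers are illustrative, not taken from the report.

```python
import statistics

def latency_ms(frame_truth, frame_ocu, fps):
    """Convert the frame offset between the reference stopwatch/light and
    its appearance on the OCU display into milliseconds."""
    return (frame_ocu - frame_truth) * 1000.0 / fps

def within_two_sigma(latency_at_max_depth, all_latencies):
    """Best-in-class check: deepest-position latency stays within two
    standard deviations of the system's mean latency in that direction."""
    mu = statistics.mean(all_latencies)
    sigma = statistics.stdev(all_latencies)
    return abs(latency_at_max_depth - mu) <= 2 * sigma

# Example: the OCU shows an event 7 frames late in a 60 fps recording.
print(latency_ms(120, 127, 60))  # ~116.7 ms
```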
*Note: The Lumenier Nighthawk V3 has a tendency to overheat over long periods of time, causing it to disconnect from the controller. For this reason, neither the Indoor Movement nor Perch and Stare tests were successfully evaluated.

*Note: The Skydio X2D will reset itself approximately every hour in order to clear its available RAM space for recording data. This is alerted to the operator on the OCU, and the operator must reconnect at range when this happens.

*Note: Due to instability of the Teal Golden Eagle when flying indoors, we did not attempt to evaluate its Indoor Movement endurance. The controller has a tendency to disconnect from the system around 30% battery. During the Perch and Stare test, the camera unit overheated, causing the camera signal to cut out and triggering the end of the test despite some amount of battery remaining.

\(\Delta\)Note: The Vantage Robotics Vesper was unavailable for testing due to being out with the vendor for repairs.

### Dark Operations

This test only consisted of running a variation of condition 1 (flat), but in darkness. A set of 5 repetitions was attempted. If takeoff was not successful, the lights were turned on to allow the system to take off, then turned off while hovering, and land/perch was attempted.
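To make the scoring concrete, the following is a minimal Python sketch (illustrative only, not the report's tooling) of how one sUAS's dark-operations repetitions can be rolled up into the completion, collision, and rollover metrics reported below; the `Trial` structure, field names, and the 90% threshold applied here are assumptions drawn from the best-in-class criterion stated with the results.

```python
# Minimal sketch (assumed data shapes): scoring one sUAS's dark-operations
# trials against the 90% best-in-class completion threshold.
from dataclasses import dataclass

@dataclass
class Trial:
    takeoff_ok: bool      # did the system take off in darkness?
    land_ok: bool         # did it land/perch successfully?
    collisions: int = 0
    rollovers: int = 0

def summarize(trials):
    """Completion rates and incident counts over a set of repetitions."""
    n = len(trials)
    takeoff_rate = sum(t.takeoff_ok for t in trials) / n
    land_rate = sum(t.land_ok for t in trials) / n
    return {
        "takeoff_completion": takeoff_rate,
        "land_completion": land_rate,
        "collisions": sum(t.collisions for t in trials),
        "rollovers": sum(t.rollovers for t in trials),
        "best_in_class": takeoff_rate >= 0.9 and land_rate >= 0.9,
    }

# Example: 5 repetitions where only 1 takeoff succeeded (20%), all landings did.
trials = [Trial(takeoff_ok=(i == 0), land_ok=True) for i in range(5)]
print(summarize(trials))
```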
Best in class = 90% success or higher

\begin{tabular}{|l|l|c|c|}
\hline
 & & \multicolumn{2}{c|}{\textbf{Flat, darkness}} \\
\cline{3-4}
\textbf{sUAS} & \textbf{Metrics} & \textbf{Takeoff} & \textbf{Land/Perch} \\
\hline
\multirow{3}{*}{Cleo Robotics Dronut X1P} & Completion & 20\% & 100\% \\
 & Collisions & 0 & 0 \\
 & Rollovers & 0 & 0 \\
\hline
\multirow{3}{*}{FLIR Black Hornet PRS*} & Completion & 0\% & 0\% \\
 & Collisions & 0 & 2 \\
 & Rollovers & 0 & 0 \\
\hline
\multirow{3}{*}{Flyability Elios 2 GOV} & Completion & 100\% & 100\% \\
 & Collisions & 0 & 0 \\
 & Rollovers & 0 & 0 \\
\hline
\multirow{3}{*}{Lumenier Nighthawk V3\(^{\dagger}\)} & Completion & 0\% & 0\% \\
 & Collisions & 0 & 0 \\
 & Rollovers & 0 & 0 \\
\hline
\multirow{3}{*}{Parrot ANAFI USA GOV} & Completion & 100\% & 100\% \\
 & Collisions & 0 & 0 \\
 & Rollovers & 0 & 0 \\
\hline
\multirow{3}{*}{Skydio X2D} & Completion & 0\% & 0\% \\
 & Collisions & 0 & 0 \\
 & Rollovers & 0 & 0 \\
\hline
\multirow{3}{*}{Teal Golden Eagle\#} & Completion & -- & 0\% \\
 & Collisions & -- & 0 \\
 & Rollovers & -- & 0 \\
\hline
\multirow{3}{*}{Vantage Robotics Vesper\(\Delta\)} & Completion & -- & -- \\
 & Collisions & -- & -- \\
 & Rollovers & -- & -- \\
\hline
\end{tabular}

*Note: The FLIR Black Hornet PRS must takeoff from and land in the operator's hand.

\(^{\dagger}\)Note: The Nighthawk has illuminators and is typically able to operate in the dark, but was not able to be tested due to availability (out for repairs).

#Note: Due to instability of the Teal Golden Eagle when flying indoors, we did not attempt to fly it to confirm communications performance at each location.

\(\Delta\)Note: Due to a firmware issue, the vendor recommended the Vantage Robotics Vesper to be grounded, making it unavailable to perform this test.

## Room Clearing

### Summary of Test Method

While the environments in a mission context where room clearing can be performed will vary in terms of room dimensions and types of obstructions in the room, a standard room is specified for this test method to be representative of a room clearing task. The nominally-sized room has a series of visual acuity targets on all surfaces and is without obstructions, providing a clear view of all targets for inspection. A future variant of this test method may be developed that includes one or more sets of standard obstruction layouts.

The sUAS can take off inside of the room or enter from outside, whichever is preferred; either way, the actual test does not begin until the sUAS is hovering in the center of the room. From there, the sUAS performs a visual inspection of the room. While not required, it is recommended that the sUAS remain in the center of the room and manipulate its gimbal camera to increase vertical field of view (FOV) for inspecting the floor and ceiling, while yawing in place.
The sUAS gimbal movement range and FOV will impact the number of visual acuity targets that can be inspected; e.g., some sUAS will not be able to see the targets on the floor or ceiling due to lack of gimbal capability, while others may be able to see multiple surfaces at once through the use of 360 degree cameras. Additionally, the control and stabilization of the sUAS is exercised by attempting to yaw in place to scan the room. The sUAS is free to move through the room as needed (e.g., navigate forward, back, ascend, descend, etc.), although the room is intentionally narrow to influence a more expedient scanning technique of yawing in place.

Two variants of room clearing capability are exercised:

* Static camera: Without the use of camera zoom functionality, likely resulting in faster, coarser room clearing at reduced visual acuity.
* Zoom camera: Allowing for the use of camera zoom functionality if available, likely resulting in slower, finer room clearing at increased visual acuity.

During the test, the operator inspects the visual acuity targets, and the test lasts until all visual acuity targets able to be inspected (i.e., those that the sUAS has the capability of inspecting; some targets may not be able to be inspected due to limitations in sUAS gimbal movement) have been successfully inspected. Room clearing can be run either as an elemental or an operational test:

Elemental Room Clearing: The operator may maintain line-of-sight with the sUAS, such as by following the system with the OCU and standing in the doorway to maintain the communications link, allowing for room clearing to be evaluated in as close to an ideal setting as possible and reducing potential collisions with the boundaries.

Operational Room Clearing: The operator is positioned away from the room with their back to the doorway, unable to maintain line of sight throughout the test. This is similar to an actual operational mission, including all related situational awareness issues that may arise (e.g., collisions with the boundaries, misunderstanding which wall is being inspected, etc.).

Figure 1: Rendering of example sUAS field of view when inspecting a wall, floor, and ceiling by manipulating the gimbal camera. Note: ceiling is not shown, but is present during testing.
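The relationship between camera geometry and scan effort described above can be sketched quantitatively. The following Python snippet is purely illustrative (the FOV, overlap, and gimbal values are assumptions, not measured data from any tested platform): it estimates how many yaw-in-place stops a full 360-degree sweep requires and whether a given gimbal pitch range can reach floor and ceiling targets.

```python
# Minimal sketch (illustrative parameters): yaw stops for a 360-deg sweep and
# gimbal reachability of floor/ceiling targets.
import math

def yaw_stops(horizontal_fov_deg: float, overlap: float = 0.1) -> int:
    """Number of yaw increments to cover 360 deg with fractional frame overlap."""
    effective = horizontal_fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / effective)

def can_see(surface_pitch_deg: float, gimbal_min: float, gimbal_max: float) -> bool:
    """True if the gimbal pitch range covers the required look angle."""
    return gimbal_min <= surface_pitch_deg <= gimbal_max

# Hypothetical platform: 69-deg horizontal FOV, gimbal pitch -90 (down) to +30 deg.
print(yaw_stops(69))          # yaw stops needed to scan the walls -> 6
print(can_see(-90, -90, 30))  # floor targets reachable -> True
print(can_see(+90, -90, 30))  # ceiling targets not reachable -> False
```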
### Benchmarking Results

Operational room clearing was performed for all systems.

Best in class = 90% coverage or higher with average acuity of 3 mm or higher (note: lower acuity measurements in millimeters equate to higher acuity)

\begin{tabular}{|l|l|c|c|c|c|c|}
\hline
 & & \textbf{In-situ} & \textbf{In-situ} & \textbf{Post-hoc} & \textbf{Post-hoc} & \textbf{Post-hoc} \\
\textbf{sUAS} & \textbf{Metrics} & \textbf{Static cam} & \textbf{Zoom cam*} & \textbf{Static cam} & \textbf{Zoom cam*} & \textbf{360 Superzoom} \\
\hline
\multirow{3}{*}{Cleo Robotics Dronut X1P} & Duration (min) & 3.4 & -- & 1.4 & -- & -- \\
 & Coverage & 93\% & -- & 93\% & -- & -- \\
 & Average acuity (mm) & 11.2 & -- & 7.7 & -- & -- \\
\hline
\multirow{3}{*}{FLIR Black Hornet PRS} & Duration (min) & 5.2 & 5.2 & 5.2 & 5.2 & -- \\
 & Coverage & 83\% & 83\% & 83\% & 83\% & -- \\
 & Average acuity (mm) & 7.8 & 7.8 & 7.4 & 7.4 & -- \\
\hline
\multirow{3}{*}{Flyability Elios 2 GOV} & Duration (min) & 5.0 & -- & 2.0 & -- & -- \\
 & Coverage & 100\% & -- & 100\% & -- & -- \\
 & Average acuity (mm) & 5.5 & -- & 2.8 & -- & -- \\
\hline
\multirow{3}{*}{Lumenier Nighthawk V3} & Duration (min) & 4.0 & -- & 4.0 & -- & -- \\
 & Coverage & 100\% & -- & 100\% & -- & -- \\
 & Average acuity (mm) & 7.1 & -- & 6.9 & -- & -- \\
\hline
\multirow{3}{*}{Parrot ANAFI USA GOV} & Duration (min) & 4.6 & 12.1 & 2.1 & 7.8 & -- \\
 & Coverage & 100\% & 100\% & 100\% & 100\% & -- \\
 & Average acuity (mm) & 3.0 & 1.3 & 2.9 & 1.3 & -- \\
\hline
\multirow{3}{*}{Skydio X2D} & Duration (min) & 6.6 & 6.6 & 3.6 & 10.7 & 0.4 \\
 & Coverage & 100\% & 100\% & 100\% & 100\% & 100\% \\
 & Average acuity (mm) & 2.7 & 2.7 & 2.2 & 1.5 & 19.1 \\
\hline
\multirow{3}{*}{Teal Golden Eagle} & Duration (min) & -- & -- & -- & -- & -- \\
 & Coverage & -- & -- & -- & -- & -- \\
 & Average acuity (mm) & -- & -- & -- & -- & -- \\
\hline
\multirow{3}{*}{Vantage Robotics Vesper} & Duration (min) & 3.6 & 3.6 & 2.4 & 2.4 & -- \\
 & Coverage & 96\% & 96\% & 96\% & 96\% & -- \\
 & Average acuity (mm) & 3.0 & 3.0 & 2.8 & 2.8 & -- \\
\hline
\end{tabular}

*Note: for systems whose digital zoom did not appear to result in higher acuity, the metrics from the static cam test were copied over rather than run separately. For systems without zoom (optical or digital), the zoom cam condition was not run at all.
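For clarity on how the tabulated metrics relate to the best-in-class rule, the following is a minimal Python sketch (assumed data shapes, not the report's actual scoring code) that computes coverage and average acuity from per-target inspection results and applies the stated criterion.

```python
# Minimal sketch (assumed data shapes): coverage and average acuity for one
# room-clearing run, with the stated best-in-class rule (>= 90% coverage and
# average acuity of 3 mm or better; lower mm = finer acuity).
def room_clearing_score(targets):
    """targets: list of (inspected: bool, acuity_mm: float | None) tuples."""
    inspected = [a for ok, a in targets if ok and a is not None]
    coverage = len(inspected) / len(targets)
    avg_acuity = sum(inspected) / len(inspected) if inspected else float("inf")
    best_in_class = coverage >= 0.90 and avg_acuity <= 3.0
    return coverage, avg_acuity, best_in_class

# Example: 20 targets, 19 inspected at 2.5 mm acuity each, one missed.
targets = [(True, 2.5)] * 19 + [(False, None)]
cov, acu, bic = room_clearing_score(targets)
print(f"coverage={cov:.0%}, avg acuity={acu:.1f} mm, best in class={bic}")
```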
## 5.1 Logistics Characterization

### Summary of Test Method

A series of characteristics concerning the logistics of operating, maintaining, and collecting data are outlined across seven categories that are to be filled with the relevant information for each sUAS platform being evaluated: physical measurements, power, heat dissipation, safety precautions, body/frame and maintenance, data collection and access, and system survivability. In each category, several fields are posed as prompts/questions for the user to respond to based on empirical evidence and experience of operating and maintaining the system. Some data may be able to be initially derived from vendor-provided specification sheets, but should be verified empirically.
All fields are open response, although some may require a specific format of response (e.g., yes/no, dimension units, etc.). The information captured under each field is as follows:

* Physical measurements:
  * Deployed size, Collapsed size, Controller size, Tablet/Screen size, Dimensions of drone carry case, Weight of drone carry case w/o drone, Dimensions of controller carry case, Weight of controller carry case w/o controller, Dimensions of charger carry case, Weight of frame without battery, Battery weight, Weight of controller, Weight total
* Power:
  * Battery type, Battery charge time, Average flight time with full battery, Controller battery type, Controller charge time, Can the power be switched on/off
  * Is battery level displayed to user, Is battery remaining time indicated, Is flight time remaining indicated, Is the user prompted about damaged battery, Are actions prevented at critical battery levels, Are there failsafe lockouts, Can user override failsafe lockouts, What behavior is exhibited at critical power, Is battery connection easily accessible
* Heat dissipation and consideration:
  * Operation temp range, Can the drone idle without overheating, Does it have internal or passive cooling, Is the user prompted on critical heat levels, What happens on overheat, Do batteries need to cool down after use
* Safety precautions:
  * Does operation require hearing protection, Does operation require eye protection, Does operation require head protection, Does operation suggest a respirator
* Frame and maintenance:
  * Are parts serviceable, Are parts reinforced or weatherized, Are parts custom made or off shelf, Is drone 3D printed or mass produced, Are parts interchangeable, Is drone stored fully assembled, Are propellers protected, Can prop guards be attached, Are tools provided with drone
* Data collection and access:
  * Does the drone start recording when armed, Is data stored onboard or in controller, How is data accessed, What format is data stored in, Is software required to process data
* System survivability:
  * Visual detectability, Audible signature, Cybersecurity/encryption
  * Does the drone have obstacle avoidance, Does the drone prevent hitting objects, Can obstacle avoidance be disabled, Does the drone have auto takeoff, Does the drone have an auto land, Does the drone have an emergency stop, Is the drone able to carry a payload

Across a set of sUAS platforms, their logistics characteristics can be compared. Additionally, criteria can be set for each characteristic to determine if a system meets the relevant requirements set forth by another entity. For example, soldier feedback provided to the Soldier-Borne Sensor (SBS) program indicated that a minimum of 2 hours of HD video be able to be recorded. This threshold of acceptable performance can be compared to the information provided for candidate sUAS in the _data collection & access_ category. For a coarse representation of requirement matching, a percentage can be calculated of the number of fields that match a given set of criteria (a sketch of this calculation follows below).

While the test method specification does allow for comparison against a set of requirements, no comprehensive set of criteria was provided to compare the sUAS characteristics against. So, no evaluations of the resulting characterizations were performed, aside from comparison to the other sUAS. No best in class criteria is specified.
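The coarse requirement-match percentage mentioned above can be sketched as follows in Python; this is an illustration under assumed field names and data shapes, not the report's tooling. Each criterion maps a characterization field to a predicate over its recorded value.

```python
# Minimal sketch (assumed data shapes): a coarse requirement-match percentage
# over characterization fields, where each criterion is a predicate.
def match_percentage(characterization: dict, criteria: dict) -> float:
    """Fraction of criteria fields whose recorded value satisfies the predicate."""
    met = sum(
        1 for field, ok in criteria.items()
        if field in characterization and ok(characterization[field])
    )
    return met / len(criteria)

# Hypothetical characterization excerpt for one sUAS (field names illustrative):
chars = {"hd_video_hours": 1.5, "battery_level_displayed": True}

# Example criteria: SBS feedback asks for >= 2 hours of HD video recording.
criteria = {
    "hd_video_hours": lambda v: v >= 2.0,
    "battery_level_displayed": lambda v: v is True,
}

print(f"{match_percentage(chars, criteria):.0%} of criteria met")  # 50%
```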
## 10.1 Interface

### 10.1.1 Operator Control Unit (OCU) Characterization

#### Summary of Test Method

A series of characteristics concerning the OCU, its input functionalities through the controller, and the output provided via display modalities on the interface are outlined across six categories that are to be filled with the relevant information for each sUAS platform being evaluated: controller and UI, power, communications link, navigation, camera, and additional functionality and accessories. In each category, several fields are posed as prompts/questions for the user to respond to based on empirical evidence and experience of operating and maintaining the system. Some data may be able to be initially derived from vendor-provided specification sheets, but should be verified empirically. All fields are open response, although some may require a specific format of response (e.g., yes/no, dimension units, etc.).
The information captured under each field is as follows:

* Controller and UI:
  * Is the controller labeled, How many non-virtual buttons are there, Do flight modes change the configuration of how functionality is mapped to the controller inputs, How is the user alerted to critical states, Is flight information fused with nav page, Do settings reset on power cycle, Does the drone have obstacle avoidance, Does the drone prevent hitting objects, Can obstacle avoidance be disabled, Are obstacle avoidance notifications shown, Does the drone have an auto land, Does the drone have an emergency stop, Are some features disabled in specific modes, Controller display lighting
* Power:
  * Is battery remaining time indicated, Is flight time remaining indicated, Is the user prompted about damaged battery, Are actions prevented at critical battery levels
* Communications link:
  * Does the interface display the current comms link connection level, Is the user prompted about reduced comms link, Is the user prompted about loss of comms link, What happens on comms loss, Does the drone alert the user to magnetic interference
* Navigation:
  * What type of navigation system is used, Is the drone GPS capable, Is mapping data displayed during flight, Is the drone GPS-denied compatible, Does its behavior change without GPS, Is touch screen used during flight, Maximum wind resistance, Do flight modes change wind resistance, Do flight modes change obstacle avoidance, Does the drone switch modes automatically, Are critical environmental conditions alerted, Is the drone able to hover in place w/o input
* Camera:
  * Are cameras usable when not armed, Is there a thermal camera, Is information fused on nav footage, Is nav cam always visible in menus in flight, Is zoom digital or physical
* Additional functionality and accessories:
  * Does the drone have illuminators, Does the drone have IR sensors / emitters, Does the drone have a laser or pointer, Is the drone able to carry a payload

Across a set of sUAS platforms, their OCU characteristics can be compared. Additionally, criteria can be set for each characteristic to determine if a system meets the relevant requirements set forth by another entity. For example, soldier feedback provided to the Soldier-Borne Sensor (SBS) program indicated that, ideally, a system's OCU functionality should not change when operating with GPS or when GPS-denied. This threshold of acceptable performance can be compared to the information provided for candidate sUAS in the _navigation_ category. For a coarse representation of requirement matching, a percentage can be calculated of the number of fields that match a given set of criteria.
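As a worked illustration of the GPS-denied example above, the following self-contained Python snippet applies the same coarse matching idea to two hypothetical OCU fields; the field names and recorded answers are assumptions for illustration, not data from any tested platform.

```python
# Minimal sketch (assumed field names): checking the example OCU criterion that
# functionality should not change between GPS and GPS-denied operation, folded
# into the same coarse match-percentage representation used for logistics.
ocu = {  # hypothetical recorded answers for one sUAS
    "behavior_changes_without_gps": True,   # from the navigation category
    "comms_loss_behavior_displayed": True,  # from the communications link category
}

criteria = {
    "behavior_changes_without_gps": lambda v: v is False,  # SBS: no change desired
    "comms_loss_behavior_displayed": lambda v: v is True,
}

met = [f for f, ok in criteria.items() if ok(ocu.get(f))]
print(f"met {len(met)}/{len(criteria)} criteria ({len(met)/len(criteria):.0%}): {met}")
```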
Summary of Test Method

The test method consists of flying an sUAS directly towards different types of obstacles and recording the response. The test method is categorized into two classes based on the fundamental capabilities of the sUAS being evaluated. Such systems can usually be classified as having (a) an active collision avoidance system based on a full (or shared) autonomy mode, or (b) a passive collision resilience system (e.g., propeller guards or a cage) that limits the impact of collisions with obstacles when they do occur. The different classes of sUAS obstacle capabilities each require their own test methodology. sUAS without any obstacle avoidance or collision resilience capabilities cannot be tested using this method. Metrics such as stopping times and maximum deceleration experienced will be evaluated. The appropriate tests will be performed for three scenarios: (a) head-on collision course with obstacle (i.e., flight direction is perpendicular to the plane of the obstacle), (b) collision path that is angled at 45 degrees from the plane of the obstacle, and (c) sideways collision with impact on the sUAS starboard or port side. Tests will be performed for the following obstacle elements: walls, chain link fences, mesh materials, and doors.

Obstacle Avoidance: The obstacle avoidance test methodologies pertain to any sUAS systems that possess autonomous obstacle avoidance capabilities, i.e., the sUAS should be able to perceive the presence of an obstacle and take corrective actions to avoid collisions. This test methodology does not cover human-piloted sUAS. The test method evaluates various metrics such as minimum time to collision, minimum distance to collision, and number of collisions. The tests show not only whether the system is able to avoid the obstacle, but also assess sUAS performance for different materials. Some of the materials used in these tests (such as chain link fence and meshes) are significantly more difficult for sUAS systems to perceive than others (such as doors and walls). The tests seek to assess the obstacle avoidance performance in these different scenarios.

Collision Resilience: Collision resilience test methodologies apply to all sUAS, including human-piloted and autonomous systems.
These tests evaluate the ability of the sUAS to be resilient to collisions, by analyzing numerical metrics such as maximum deceleration experienced during a collision event and pre-/post-collision change in velocity, as well as newly-devised categorical resilience metrics, as discussed in the Metrics section. These test methods are especially useful for analyzing the efficacy of sUAS platforms with additional protection such as propeller guards or cages. The tests with wall, chain link fence, and mesh have similar methodology, which includes flying towards the obstacle in various configurations (forward flight, sideways flight) and at various angles (trajectory is perpendicular to the obstacle or at 45 degrees). The collision resilience tests for doors have slightly different setups, which include flying towards a door obstacle that is closed, partially open, or open. There are three different types of Collision Resilience (CR) metrics: Modified Acceleration Severity Index (MASI), Maximum Delta-V, and categorical metrics for success/failure outcomes.

**Note:** due to the lack of reliable indoor obstacle avoidance behaviors available on several sUAS platforms and the risks associated with failed obstacle avoidance tests due to the fragility of said platforms, no benchmarking data is available for the Obstacle Avoidance tests. Thus, only data from the Collision Resilience test is shown in this report.

### Collision Resilience

Four types of obstacles were used for evaluating Collision Resilience: wall, mesh, chain link fence, and door. The first three obstacles were evaluated with five collision tests (head-on, port side, starboard, port side at 45° incidence, and starboard at 45° incidence), while the door was evaluated in three conditions (open, closed, and partially open). All means and standard deviations were calculated from 5 runs. Only the sUAS with protective hardware are included in this evaluation (Cleo Robotics Dronut X1P, Flyability Elios 2 GOV, Vantage Robotics Vesper), as all others without propeller guards are not able to withstand collisions.

**Modified Acceleration Severity Index (MASI)**

The dimensionless Modified Acceleration Severity Index (MASI) metric is defined as:

\[MASI=\frac{1}{g}\sqrt{a_{x}^{2}+a_{y}^{2}+a_{z}^{2}}\]

where \(a_{x}\) represents the longitudinal acceleration of the sUAS, \(a_{y}\) represents the lateral acceleration of the sUAS, \(a_{z}\) represents the vertical acceleration of the sUAS, and \(g\) represents the acceleration due to gravity (9.8 m/s\({}^{2}\)). All quantities are in units of m/s\({}^{2}\). For the performed tests, sUAS flight only occurred in the horizontal plane, so we assume that \(a_{z}=0\) m/s\({}^{2}\).

No best in class criteria is specified due to the Vantage Robotics Vesper only performing a little over half of the available tests (10 of 18), leaving only two remaining platforms to evaluate, whose performance, on average, is very similar to each other.
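To make the metric definitions concrete, here is a minimal sketch (ours, not the report's software) computing MASI from per-axis accelerations, plus a Maximum Delta-V over the 0.3 s post-collision window described in the Maximum Delta-V subsection below; the choice of the last pre-collision sample as the baseline speed, and the sample values, are assumptions.

```python
import math

G = 9.8  # acceleration due to gravity, m/s^2

def masi(ax: float, ay: float, az: float = 0.0) -> float:
    """Dimensionless MASI from longitudinal (ax), lateral (ay), and vertical
    (az) accelerations in m/s^2; az = 0 for horizontal-plane flight. The
    report tabulates signed per-axis values; this helper returns the
    combined magnitude per the stated formula."""
    return math.sqrt(ax**2 + ay**2 + az**2) / G

def max_delta_v(times, speeds, t_collision, window=0.3):
    """Maximum change in velocity (m/s) within `window` seconds of the
    collision, relative to the last pre-collision sample. Requires sampling
    at >= 10 Hz, per the caveat in the Maximum Delta-V subsection."""
    pre = [v for t, v in zip(times, speeds) if t <= t_collision]
    post = [v for t, v in zip(times, speeds)
            if t_collision <= t <= t_collision + window]
    return max(abs(v - pre[-1]) for v in post)

# Hypothetical head-on wall impact sampled at 100 Hz.
print(round(masi(-0.98, 0.29), 3))  # ~0.104
times = [i / 100 for i in range(100)]
speeds = [1.0 if t <= 0.5 else 0.2 for t in times]  # abrupt slowdown at impact
print(round(max_delta_v(times, speeds, t_collision=0.5), 2))  # 0.8
```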
\begin{tabular}{|l|l|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Obstacle**} & \multirow{2}{*}{**Test**} & \multirow{2}{*}{**Metrics (m/s)**} & \multicolumn{5}{c|}{**sUNAS**} \\ \cline{3-8} & & & \multicolumn{1}{c|}{Cleo Robotics Dronut X1P} & \multicolumn{1}{c|}{Flyability Elos 2 GOV} & \multicolumn{1}{c|}{Vantage Robotics Vesper*} \\ \cline{3-8} & & & Mean & Stdev & Mean & Stdev & Mean & Stdev \\ \hline \multirow{8}{*}{**Wall**} & \multirow{2}{*}{**Head-on Collision**} & X-axis & -0.03 & 0.01 & -0.12 & 0.15 & -0.01 & 0.00 \\ \cline{3-8} & & Y-axis & -0.01 & 0.04 & -0.03 & 0.03 & -0.03 & 0.05 \\ \cline{3-8} & & X-axis & -0.04 & 0.02 & -0.02 & 0.01 & \\ \cline{3-8} & & Y-axis & -0.05 & 0.01 & -0.02 & 0.01 & \\ \cline{3-8} & & X-axis & -0.05 & 0.01 & -0.02 & 0.01 & \\ \cline{3-8} & & Y-axis & -0.05 & 0.00 & -0.01 & 0.01 & \\ \cline{3-8} & & Y-axis & -0.05 & 0.03 & -0.06 & 0.08 & \\ \cline{3-8} & & Y-axis & -0.08 & 0.05 & -0.03 & 0.03 & \\ \cline{3-8} & & Y-axis & -0.06 & 0.01 & -0.14 & 0.19 & \\ \hline \multirow{8}{*}{**Mesh**} & \multirow{2}{*}{**Head-on Collision**} & X-axis & -0.078 & 0.010 & -0.084 & 0.010 & -0.087 & 0.010 \\ \cline{3-8} & & Y-axis & -0.029 & 0.007 & -0.021 & 0.007 & -0.017 & 0.005 \\ \cline{3-8} & & X-axis & -0.033 & 0.003 & -0.018 & 0.001 & \\ \cline{3-8} & & Y-axis & -0.067 & 0.013 & -0.061 & 0.051 & \\ \cline{3-8} & & X-axis & -0.039 & 0.020 & -0.019 & 0.003 & \\ \cline{3-8} & & Y-axis & -0.016 & 0.022 & -0.033 & 0.023 & \\ \cline{3-8} & & X-axis & -0.084 & 0.008 & -0.076 & 0.015 & -0.112 & 0.048 \\ \cline{3-8} & & Y-axis & -0.051 & 0.010 & -0.043 & 0.015 & -0.088 & 0.092 \\ \cline{3-8} & & Y-axis & -0.068 & 0.020 & -0.079 & 0.020 & -0.054 & 0.044 \\ \cline{3-8} & & Y-axis & -0.017 & 0.002 & -0.025 & 0.010 & -0.041 & 0.037 \\ \hline \multirow{8}{*}{**Chain link**} & \multirow{2}{*}{**Head-on Collision**} & X-axis & -0.100 & 0.016 & -0.099 & 0.035 & -0.106 & 0.026 \\ \cline{3-8} & & Y-axis & -0.027 & 0.005 & -0.043 & 0.014 & -0.025 & 0.006 \\ \cline{3-8} & & Y-axis & -0.057 & 0.028 & -0.017 & 0.012 & \\ \cline{3-8} & & Y-axis & -0.087 & 0.072 & -0.064 & 0.046 & \\ \cline{3-8} & & X-axis & -0.062 & 0.027 & -0.067 & 0.033 & \\ \cline{3-8} & & Y-axis & -0.026 & 0.005 & -0.009 & 0.012 & \\ \cline{3-8} & & Y-axis & -0.092 & 0.029 & -0.105 & 0.024 & -0.046 & 0.027 \\ \cline{3-8} & & Y-axis & -0.037 & 0.009 & -0.055 & 0.016 & -0.020 & 0.014 \\ \cline{3-8} & & Y-axis & -0.051 & 0.011 & -0.010 & 0.012 & 0.359 & 0.069 \\ \cline{3-8} & & Y-axis & -0.037 & 0.039 & -0.026 & 0.008 & -0.043 & 0.013 \\ \hline \multirow{8}{*}{**Doen**} & \multirow{2}{*}{**X-axis**} & \multirow{2}{*}{-0.044} & \multirow{2}{*}{0.023} & \multirow{2}{*}{-0.023} & \multirow{2}{*}{0.012} & \multirow{2}{*}{-0.015} & \multirow{2}{*}{0.007} \\ \cline{3-8} & & Y-axis & -0.044 & 0.048 & -0.021 & 0.012 & -0.015 & 0.009 \\ \cline{3-8} & & X-axis & -0.192 & 0.237 & -0.157 & 0.114 & -0.062 & 0.081 \\ \cline{1-1} \cline{2-8} & & Y-axis & -0.032 & 0.046 & -0.092 & 0.124 & -0.067 & 0.078 \\ \cline{1-1} \cline{2-8} & & X-axis & -0.220 & 0.244 & -0.110 & 0.103 & -0.040 & 0.023 \\ \cline{1-1} \cline{2-8} & & Y-axis & -0.096 & 0.138 & -0.058 & 0.024 & -0.049 & 0.022 \\ \hline \multicolumn{8}{l}{**Average MASI Across All Tests**} & \multicolumn{1}{c|}{-0.067} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-0.064} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-0.028} & \multicolumn{1}{c|}{-} \\ \hline \end{tabular} * Note: The Vantage Robotics Vesper was not able to be evaluated in all conditions due to firmware issues that 
raised safety concerns.

**Maximum Delta-V**

The Maximum Delta-V metric calculates the maximum value of the change in velocity before and after collision over a given time window. For the performed sUAS tests, the Maximum Delta-V is evaluated over a 0.3 s window that begins at the time instant of the sUAS' collision with the obstacle. A caveat for this test is that the sensing apparatus must record position and velocity information at a frequency of at least 10 Hz to obtain a good estimate of the metric. All values are in m/s. No best in class criteria is specified due to the Vantage Robotics Vesper only performing a little over half of the available tests (11 of 18), leaving only two remaining platforms to evaluate, whose performance, on average, is very similar to each other.

**Categorical Metrics**

The Collision Resiliency (CR) Categorical metric is used to determine the various success or failure scenarios of sUAS operation in indoor or subterranean environments. Broadly, the three categories represent a successful test (A), a failed test (B), and test abandonment (C). A lower alphanumeric is better, i.e., CR-A1 indicates better performance than CR-B2. The categories are described in the table below:

\begin{tabular}{p{34.1pt}|p{284.5pt}}
\hline
\multicolumn{2}{p{318.6pt}}{**A: Resilient.** The category level A represents that the sUAS passed the resiliency test with no or little degradation in operation.} \\
\hline
**CR-A1** & Category CR-A1 corresponds to the scenario that the sUAS platform collided with the obstacle, but did not suffer any failure, and was able to continue operations. This represents perfect collision resilience properties. \\
\hline
**CR-A2** & Category CR-A2 is similar to CR-A1 except for the fact that the sUAS temporarily loses operational continuity. For example, after a collision with an obstacle has occurred, the sUAS may retreat to a fail-safe mode, such as executing a safe landing. However, this action does not imply lack of resilience, as the sUAS can return to its operational capacity after the fail-safe mode is disabled (such as return to flight after a safe landing). \\
\hline
**CR-A3** & Category CR-A3 is similar to CR-A2 except for the fact that in this scenario the sUAS fails to enter a fail-safe mode after the collision event, but is still able to return to operation after the event. Thus, the sUAS suffers only a temporary loss in operational continuity. For example, the sUAS may suffer an uncontrolled descent (i.e., crash) in this scenario as opposed to the scenario in CR-A2 where, for example, the descent was a programmed landing activated by a fail-safe mode. \\
\hline
**CR-B1** & **B: Lack of resiliency.** The levels in this category correspond to the scenario where the sUAS failed to resolve a collision gracefully. \\
\hline
**CR-B2** & Category CR-B2 indicates further degradation of behavior as compared to CR-B1. Specifically, in this scenario, the sUAS may execute a successful landing in fail-safe mode, but communication drop-out prevents a return to operation. Without communication, teleoperation of the sUAS cannot be carried out, i.e., sUAS integrity is maintained, but flight capabilities are lost. In this scenario, the sUAS control system may or may not be operational, but control commands cannot be communicated. This failure mode does not apply to the collision resiliency of autonomous sUAS platforms. \\
\hline
**CR-B3** & Category CR-B3 is one level of further degradation as compared to CR-B2. In this scenario, the sUAS' attempt to execute a safe landing after the collision is unsuccessful. Specifically, the sUAS may have landed with a tilt or flipped over, i.e., not in its usual take-off configuration. In this scenario, resuming flight operations after a take-off event cannot be guaranteed. The control system and/or the communication channel may or may not be operational. \\
\hline
**CR-B4** & Category CR-B4 indicates the final level of performance degradation, as it includes loss of structural integrity of the sUAS platform, presenting a potential permanent loss of operational continuity. It is possible to draw a minor distinction between actual structural damage and disintegration of some components (such as protective propeller guards), but both indicate a lack of collision resilience of the sUAS structural frame. \\
\hline
**CR-C1** & **C: Test abandonment.** CR-C1 corresponds to a scenario where the evaluation team had to terminate the test to ensure safety and structural integrity of the sUAS platform. \\
\hline
\end{tabular}

## 6.2. Summary of Test Method

The test method consists of five different tests: (a) wall following, (b) waypoint navigation, (c) straight line path traversals, (d) corner navigation, and (e) aperture navigation. Each test is conducted five times for each sUAS being evaluated and the metrics are aggregated across tests. All tests require the availability of telemetry data, either via on-board vendor-provided data streams or external tracking systems. All tests can be performed with minimal additional apparatus (beyond the tracking system) if appropriate subterranean or indoor environments are available. If existing environments do not meet specifications, they can be constructed using readily available materials. All evaluation flights should be performed using line-of-sight operation, as remote FPV operation may confound navigation capabilities of the sUAS.

Wall Following: The wall following test examines the ability of the sUAS to navigate a specific traversal path while operating in the vicinity of a wall at both 1 m (close) and 2 m (far) from the wall. This is a common use case scenario in specific indoor and subterranean operations.
This test is performed in two common sUAS orientations for such missions: parallel (i.e., the sUAS camera/front is pointed parallel to the wall surface while moving along it, pitching to fly forward) and perpendicular (i.e., the sUAS camera/front is pointed perpendicular to the wall while moving along it, strafing right or left to fly sideways).

Waypoint Navigation: The waypoint navigation evaluation methodology determines the ability of the sUAS to land at the desired waypoint location. The accuracy and precision metric is for reaching the desired waypoint, defined using the difference between the desired waypoint location and the final landing position of the sUAS.

Linear Path Traversal: The straight line traversal will require the sUAS to fly in a rectangular pattern made of four (4) linear path traversals. Deviations from the rectangular path will be used to evaluate the ability of the sUAS to perform straight line traversals. If a limited flight testing area is available, a single linear path traversal may be used for evaluation (instead of the rectangular path).

Hallway Navigation: This test seeks to examine the ability of the sUAS to navigate a confined space with turns (such as a corridor or hallway). To eliminate the confounding factors associated with piloting skills, the test requires a flight pattern such that the corner navigation is performed via an in-place 90-degree turn, rather than the smooth turning curves that expert pilots might execute in confined spaces. This test examines the effects of wind eddy currents in cases where there are walls on both sides of the sUAS (such as hallways). Hallway-induced wind eddy currents are expected to generate higher turbulence than the other navigation tests discussed here.

Corner Navigation: This test is similar to the hallway navigation test, but with corner partitions only on one side of the sUAS flight path.

Aperture Navigation: This test evaluates the ability of the sUAS platform to successfully navigate through an aperture. In subterranean environments, drones sometimes need to fly through such apertures. They must be able to do so without contact with the surrounding material, but if contact does occur, the sUAS must be able to withstand the collision. This is why this test is not numerically evaluated like the other navigation tests, but has the tiered result table listed below (a small outcome-classification sketch follows the table).

\begin{tabular}{c|c|c}
\hline
**Result** & **Condition of test** & **Explanation of result** \\
\hline
A1 & Pass through, no contact & Drone went through aperture, did not touch any sides \\
\hline
A2 & Pass through, contact, no trip & Drone went through aperture, did touch sides, but no tears \\
\hline
A3 & Pass through, contact, tripped & Drone went through aperture, did touch sides, tear occurred \\
\hline
B1 & Failed pass through due to contact & Drone was unable to go through and land properly due to contact with aperture \\
\hline
\end{tabular}
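As a sketch only (the report defines the tiers in prose, not code), the tiered result can be read as a simple decision rule over trial outcomes; the argument names are our own shorthand.

```python
def aperture_tier(passed_through: bool, contact: bool, trip: bool) -> str:
    """Map an aperture navigation trial outcome to the tiered result table
    above (A1/A2/A3/B1). Argument names are hypothetical shorthand, not
    terms defined by the report."""
    if not passed_through:
        return "B1"  # failed pass-through due to contact
    if not contact:
        return "A1"  # clean pass, no contact with any side
    return "A3" if trip else "A2"

print(aperture_tier(passed_through=True, contact=True, trip=False))  # A2
```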
### Path Deviation

Path deviation: Deviation of the sUAS platform's actual trajectory from a defined straight line path.

1. For the Wall Following test, the path is a constant distance away from a wall, parallel to it (i.e., along the edge of the testing zone).
2. For the Linear Path Traversal test, the path is one or more defined straight line(s) within the testing zone.
3. For the Corner Navigation test, the path is two straight lines that constitute a 90-degree turn trajectory.

The evaluation of navigation abilities of the sUAS is defined as the deviation of the sUAS from the desired path. We evaluate both the mean value of the deviation from the desired path (indicating the accuracy of navigation) and the standard deviation of the deviation (indicating the precision of navigation).
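For concreteness, a minimal sketch of these two statistics follows, assuming 2D tracked positions and a desired path given by two endpoints; the trajectory values are hypothetical.

```python
import math

def path_deviation_stats(trajectory, p0, p1):
    """Mean (accuracy) and standard deviation (precision) of the
    perpendicular deviation, in meters, of trajectory points from the
    straight line through p0 and p1.

    trajectory: list of (x, y) positions; p0, p1: endpoints of the path.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    # Perpendicular point-to-line distance via the 2D cross product.
    devs = [abs((x - p0[0]) * dy - (y - p0[1]) * dx) / length
            for x, y in trajectory]
    mean = sum(devs) / len(devs)
    std = math.sqrt(sum((d - mean) ** 2 for d in devs) / len(devs))
    return mean, std

# Hypothetical wall-following run, desired path 1 m from a wall along x.
mean, std = path_deviation_stats([(0.0, 1.1), (1.0, 0.9), (2.0, 1.2)],
                                 (0, 1), (10, 1))
print(f"accuracy {mean:.3f} m, precision {std:.3f} m")
```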
Best in class = average deviations across all tests run for all platforms (i.e., metrics from all tests except Wall Following 1 m, Hallway Corner Navigation, Aperture 1.5x, Aperture 2x, and Through Door) that are less than the aggregate average of those data points (0.122)

C = data recorded by the external tracking system became corrupted, so no performance metrics are available

\begin{tabular}{l|c c|c c|c c|c c|c c|c c|c c}
\hline
 & \multicolumn{2}{c|}{Cleo Robotics Dronut X1P} & \multicolumn{2}{c|}{Flyability Elios 2 GOV} & \multicolumn{2}{c|}{Lumenier Nighthawk V3} & \multicolumn{2}{c|}{Parrot ANAFI USA GOV} & \multicolumn{2}{c|}{Skydio X2D} & \multicolumn{2}{c|}{Teal Golden Eagle} & \multicolumn{2}{c}{Vantage Robotics Vesper} \\
 & Mean & Stdev & Mean & Stdev & Mean & Stdev & Mean & Stdev & Mean & Stdev & Mean & Stdev & Mean & Stdev \\
\hline
Wall Following 1 m & 1.149 & 0.208 & 0.321 & 0.129 & 0.289 & 0.065 & 0.188 & 0.045 & \multicolumn{2}{c|}{C} & 0.132 & 0.051 & 0.530 & 0.233 \\
Wall Following 2 m & 0.586 & 0.166 & 0.196 & 0.044 & 0.096 & 0.060 & 0.126 & 0.091 & 0.126 & 0.091 & 0.197 & 0.102 & 0.161 & 0.075 \\
Corner Navigation & 0.135 & 0.086 & 0.204 & 0.054 & 0.338 & 0.083 & 0.293 & 0.037 & 0.688 & 0.125 & 0.167 & 0.111 & 0.380 & 0.110 \\
Hallway Corner Navigation & 0.256 & 0.197 & 0.814 & 1.316 & & & 0.319 & 0.176 & 0.232 & 0.038 & & & 0.285 & 0.163 \\
Aperture 1.5x & 1.718 & 0.606 & 2.147 & 0.249 & & & 1.998 & 0.102 & 2.119 & 0.267 & & & 1.810 & 0.160 \\
Aperture 2x & 1.497 & 1.312 & 0.868 & 0.850 & & & 0.550 & 0.085 & 0.753 & 0.130 & & & 0.804 & 0.158 \\
Forward Motion & 0.382 & 0.146 & 0.107 & 0.068 & 0.297 & 0.230 & 0.220 & 0.074 & 0.221 & 0.062 & 0.229 & 0.056 & 0.232 & 0.120 \\
Side Motion & 0.232 & 0.040 & 0.092 & 0.072 & 0.263 & 0.165 & 0.182 & 0.082 & 0.382 & 0.143 & 0.273 & 0.147 & 0.145 & 0.055 \\
Diagonal Motion & 0.274 & 0.101 & 0.072 & 0.025 & 0.144 & 0.072 & 0.102 & 0.047 & 0.613 & 0.196 & 0.184 & 0.095 & 0.173 & 0.068 \\
Square & 0.313 & 0.118 & 0.029 & 0.010 & 0.121 & 0.039 & 0.132 & 0.041 & 0.236 & 0.122 & 0.275 & 0.124 & 0.212 & 0.047 \\
Through Door & 0.684 & 0.200 & 0.436 & 0.355 & & & & & & & & & & \\
\hline
Average Deviation Across All Tests Run For All Platforms & 0.320 & & 0.117 & & 0.210 & & 0.176 & & 0.378 & & 0.221 & & 0.217 & \\
\hline
\end{tabular}

## Waypoint Navigation

Metric shown is for waypoint accuracy (m). Best in class = average waypoint accuracy across all tests run for all platforms (i.e., metrics from all tests except Wall Following 1 m, Hallway Corner Navigation, Aperture 1.5x, Aperture 2x, and Through Door) that are less than the aggregate average of those data points (0.234)

Position and Traversal Accuracy: Waypoint Navigation
## Navigation Through Apertures

The operator commands the sUAS to navigate through an aperture that either exists already in a real-world environment (e.g., a doorway or window in a building) or a fabricated apparatus that matches the relevant dimensions and shapes for each type of aperture. Three types of apertures are defined for navigation tests, each of which requires horizontal or vertical traversal through spaces that are horizontally and/or vertically confined: doorway, window, and manhole. Navigation is performed multiple times to establish statistical significance and the associated probability of success and confidence levels based on the number of successes and failures (see the metrics section; a sketch of this calculation follows below). For each trial, the sUAS begins from a starting location that requires it to traverse in a direction not parallel to the navigation route through the aperture, which may also require it to turn. Similarly, the end location for each trial also requires the sUAS to traverse in a direction not parallel to the navigation route. More simply, a single trial constitutes the sUAS traversing from the A side of the apparatus to the B side, navigating through the aperture, then traversing back over to the A side. See Figure 1. Each navigation test can be run either as elemental or operational navigation.

The areas on either side of the aperture should measure 3 m (118 in) square or larger to allow for much less obstructed flight than when navigating through the aperture. These areas may contain walls perpendicular to the wall/floor containing the aperture; for example, it is common for doors to be justified to one side of a room. The presence of these obstructions may be problematic for sUAS navigation due to airflow issues when a system flies too close to a wall and/or due to obstacle avoidance functionality (e.g., the sUAS may attempt to maintain X distance between itself and obstacles for safety, causing it to not be able to navigate through the aperture). If a wall is present on either area outside of the aperture, within 20 cm (8 in) of the edge of the aperture opening, then that area is considered obstructed.

Figure 1: Each type of aperture navigation test, left to right: doorway, window, and manhole.
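The report does not give the confidence formula in this excerpt; as one standard way to attach a confidence interval to a completion rate from a small number of trials, a Wilson score interval sketch is shown below (an assumption, not the report's stated method).

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Wilson score interval for a success probability; z = 1.96 gives
    roughly 95% confidence. Suitable for small trial counts, where the
    normal approximation to the binomial is poor."""
    if trials == 0:
        return (0.0, 0.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return (center - half, center + half)

# Hypothetical: 5 successful doorway traversals in 5 trials.
lo, hi = wilson_interval(5, 5)
print(f"95% CI for success probability: [{lo:.2f}, {hi:.2f}]")  # ~[0.57, 1.00]
```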
**Performance data**

Testing was conducted as operational navigation unless only elemental navigation was possible due to NLOS communications issues or perceived operational risks. A bar chart is not shown given that no average speed metrics were recorded for all evaluated systems.

Best in class = 100% completion for all navigation tests conducted

\begin{tabular}{l|l|c}
\hline
**sUAS** & **Metrics** & **NERVE Doorway** \\
\hline
\multirow{3}{*}{Cleo Robotics Dronut X1P} & Navigation type & Operational \\
 & Completion & 100\% \\
 & Average speed (m/s) & unknown \\
\hline
\multirow{3}{*}{FLIR Black Hornet PRS} & Navigation type & Operational \\
 & Completion & 100\% \\
 & Average speed (m/s) & unknown \\
\hline
\multirow{3}{*}{Flyability Elios 2 GOV} & Navigation type & Operational \\
 & Completion & 100\% \\
 & Average speed (m/s) & unknown \\
\hline
\end{tabular}

### Environment characterization

Three instances of window navigation testing have been conducted: MUTC window, MUTC shaft entrance/exit, and a variable sized window at the NERVE Center.

\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{4}{|c|}{**MUTC Window**} \\
\hline
Environment & **Outside aperture, area 1** & **Aperture** & **Outside aperture, area 2** \\
\hline
Dimensions & - & W x H: 89 x 89 cm (35 x 35 in) & - \\
\hline
Lighting & Shaded Sun & - & Dark, Sun beam through aperture \\
\hline
Walls & Corrugated Steel, Conex Container & - & Corrugated Steel, Conex Container \\
\hline
Floor & Grass, Dirt & - & Wood \\
\hline
Obstruction & Vertical Scaffold Beams & - & Conex Container, Filing Cabinet \\
\hline
Type & Outdoor & - & Indoor \\
\hline
\end{tabular}

\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{4}{|c|}{**MUTC Shaft Entrance/Exit**} \\
\hline
Environment & **Outside aperture, area 1** & **Aperture** & **Outside aperture, area 2** \\
\hline
Dimensions & - & W x H: 132 x 97 cm (52 x 38 in). Note: the ladder edge is considered the outer edge of the aperture & - \\
\hline
Lighting & Dark (<100 LUX) & - & Indirect Sunlight (100-200 LUX) \\
\hline
Walls & Corrugated Steel, Conex Container & - & Corrugated Steel, Conex Container \\
\hline
Floor & Wood & - & Safety Tarp \\
\hline
Obstruction & N/A & - & Ladder \\
\hline
Type & Indoor & - & "Outdoor" (Open roof) \\
\hline
\end{tabular}

**Performance data**

Testing was conducted as operational navigation unless only elemental navigation was possible due to NLOS communications issues or perceived operational risks. Only test results with 100% completion are shown in the chart below.
The data from the MUTC shaft entrance was extracted from the Navigation Through Confined Spaces: Shaft testing, and the data from the NERVE variable window was extracted from the Position and Traversal Accuracy testing, so average speed metrics for those tests are unknown. The variable window size used is 1.5 times the longest diagonal dimension (i.e., prop-tip to prop-tip) for each sUAS, with a lower bound of 30 cm (12 in).

Best in class = 2 or more navigation tests conducted with 100% completion for all navigation tests conducted

\begin{tabular}{l|l|c|c|c|c}
\hline
**sUAS** & **Metrics** & **MUTC Window** & **MUTC Shaft Entrance/Exit** & **NERVE Variable Window** & **Window size** \\
\hline
\multirow{3}{*}{Cleo Robotics Dronut X1P} & Navigation type & Elemental & Operational & Elemental & \multirow{3}{*}{30 x 30 cm (12 x 12 in)} \\
 & Completion & 100\% & 100\% & 80\% & \\
 & Average speed (m/s) & 0.15 & unknown & unknown & \\
\hline
\multirow{3}{*}{FLIR Black Hornet PRS} & Navigation type & Elemental & Operational & Elemental & \multirow{3}{*}{30 x 30 cm (12 x 12 in)} \\
 & Completion & 100\% & 100\% & 100\% & \\
 & Average speed (m/s) & 0.08 & unknown & 0.07 & \\
\hline
\multirow{3}{*}{Flyability Elios 2 GOV} & Navigation type & Operational & Operational & Operational & \multirow{3}{*}{W x H: 59 x 59 cm (23 x 23 in)} \\
 & Completion & 100\% & 100\% & 100\% & \\
 & Average speed (m/s) & unknown & unknown & unknown & \\
\hline
\multirow{3}{*}{Lumenier Nighthawk V3} & Navigation type & Operational & & & \\
 & Completion & 100\% & & & \\
 & Average speed (m/s) & unknown & & & \\
\hline
\multirow{3}{*}{Parrot ANAFI USA GOV} & Navigation type & Elemental & Operational & Elemental & \multirow{3}{*}{63 x 63 cm (25 x 25 in)} \\
 & Completion & 33\% & 100\% & 100\% & \\
 & Average speed (m/s) & 0.10 & unknown & unknown & \\
\hline
\multirow{3}{*}{Skydio X2D, teleoperation} & Navigation type & - & - & Elemental & \multirow{3}{*}{W x H: 112 x 112 cm (44 x 44 in)} \\
 & Completion & - & - & 100\% & \\
 & Average speed (m/s) & - & - & unknown & \\
\hline
\multirow{3}{*}{Skydio X2D, visual waypoints} & Navigation type & - & - & - & \\
 & Completion & - & - & - & \\
 & Average speed (m/s) & - & - & - & \\
\hline
\multirow{3}{*}{Teal Golden Eagle*} & Navigation type & X & X & X & \\
 & Completion & X & X & X & \\
 & Average speed (m/s) & X & X & X & \\
\hline
\multirow{3}{*}{Vantage Robotics Vesper} & Navigation type & Elemental & & Elemental & \multirow{3}{*}{W x H: 61 x 61 cm (24 x 24 in)} \\
 & Completion & 100\% & & 100\% & \\
 & Average speed (m/s) & 0.20 & & unknown & \\
\hline
\end{tabular}

Best in class (Window Navigation): FLIR Black Hornet PRS, Flyability Elios 2 GOV, Vantage Robotics Vesper

*Note: Due to instability of the Teal Golden Eagle when flying indoors, it was not evaluated in this test.

## Navigation Through Confined Spaces

The operator commands the sUAS to navigate through a confined space that either exists already in a real-world environment (e.g., a hallway or stairwell in a building) or a fabricated apparatus that matches the relevant dimensions and shapes for each type of confined space.
Four types of confined spaces are defined for navigation tests, each of which requires horizontal and/or vertical traversal through spaces that are horizontally and/or vertically confined: hallway, tunnel, stairwell/incline, and shaft. Navigation is performed multiple times to establish statistical significance and the associated probability of success and confidence levels based on the number of successes and failures. For each trial, the sUAS begins from a starting location that requires it to traverse in a direction not parallel to the navigation route through the confined space, which may also require it to turn. Similarly, the end location for each trial also requires the sUAS to traverse in a direction not parallel to the navigation route. More simply, a single trial constitutes the sUAS traversing from the A side of the apparatus to the B side, navigating through the confined space, then traversing back over to the A side. See Figure 1 and Figure 2. Each navigation test can be run either as elemental or operational navigation.

### Benchmarking Results

Tests were conducted in each of the four defined confined spaces: Hallway, Tunnel, Stairwell, and Shaft. Notes:

* Average speed of the Flyability Elios 2 GOV during most navigation testing is not known because dedicated Navigation Through Confined Spaces testing was not conducted with the system. Rather, the system demonstrated multiple successful navigation runs while conducting other tests that took place in the same environments (e.g., Indoor Mapping Accuracy).
* Two flight configurations of the Skydio X2D were tested: using standard joystick teleoperation and using its touch screen to place visual waypoints for autonomous flight to these waypoints. The corresponding data points are labeled "teleoperation" and "visual waypoints," respectively.

### Environment characterization

Three instances of hallway navigation testing have been conducted, all of which took place at Muscatatuck Urban Training Center (MUTC): outdoor corridor, Conex hallway, and prison hallway.

**Performance data**

Testing was conducted as operational navigation unless only elemental navigation was possible due to NLOS communications issues or perceived operational risks.
Best in class = 100% completion for all operational navigation tests conducted

| sUAS | Metrics | MUTC Outdoor Corridor | MUTC Conex Hallway | MUTC Prison Hallway |
|---|---|---|---|---|
| Cleo Robotics Dronut X1P | Navigation type | - | - | Operational |
| | Completion | - | - | 100% |
| | Average speed (m/s) | - | - | 0.56 |
| FLIR Black Hornet PRS | Navigation type | - | - | Operational |
| | Completion | - | - | 100% |
| | Average speed (m/s) | - | - | 0.28 |
| Flyability Elios 2 GOV | Navigation type | Operational | Operational | Operational |
| | Completion | 100% | 100% | 100% |
| | Average speed (m/s) | unknown | unknown | unknown |
| Lumenier Nighthawk V3 | Navigation type | Operational | - | Operational |
| | Completion | 100% | - | 100% |
| | Average speed (m/s) | 0.39 | - | 0.42 |
| Parrot ANAFI USA GOV | Navigation type | Elemental | Operational | Operational |
| | Completion | 100% | 100% | 100% |
| | Average speed (m/s) | 0.52 | 0.28 | 0.56 |
| Skydio X2D, teleoperation | Navigation type | Operational | - | Operational |
| | Completion | 100% | - | 100% |
| | Average speed (m/s) | 0.39 | - | 0.83 |
| Skydio X2D, visual waypoints | Navigation type | Operational | - | Operational |
| | Completion | 100% | - | 25% |
| | Average speed (m/s) | 0.26 | - | N/A |
| Teal Golden Eagle* | Navigation type | - | - | - |
| | Completion | - | - | - |
| | Average speed (m/s) | - | - | - |
| Vantage Robotics Vesper | Navigation type | Elemental | Operational | Operational |
| | Completion | 100% | 25% | 25% |
| | Average speed (m/s) | 0.39 | 0.14 | 0.83 |

*Note: Due to instability of the Teal Golden Eagle when flying indoors, it was not evaluated in this test.

## Tunnel Navigation

### Environment characterization

Only two one-off tests of tunnel navigation were conducted: MUTC tunnel and NERVE tunnel.
**MUTC Tunnel**

| Environment | Outside confined space, area 1 | Inside confined space | Outside confined space, area 2 |
|---|---|---|---|
| Dimensions | - | W x H: 1.3 x 1.3 m (4.4 x 4.2 ft), L unknown | - |
| Lighting | Shaded Sunlight | Dark | Sunlight |
| Walls | N/A | Dirt | N/A |
| Floor | Rocks, Dirt, Grass | Dirt, Mud | Grass |
| Obstruction | Metal Door | N/A | N/A |
| Type | Outdoor | Indoor | Outdoor |

### Performance data

Testing was conducted as operational navigation unless only elemental navigation was possible due to NLOS communications issues or perceived operational risks. Due to the lack of tunnel testing conducted, no best in class criteria is specified.

## Stairwell Navigation

### Environment characterization

One instance of stairwell navigation testing was conducted at the MUTC subway platform.

### Performance data

Testing was conducted as operational navigation unless only elemental navigation was possible due to NLOS communications issues or perceived operational risks.

Best in class = 100% completion operational navigation

## Shaft Navigation

### Environment characterization

One instance of shaft navigation testing was conducted at the MUTC hotel trainer.
### Performance data

Testing was only conducted as operational navigation.
Best in class = 100% completion for all operational navigation tests conducted

| sUAS | Metrics | MUTC Shaft, Descent | MUTC Shaft, Ascent |
|---|---|---|---|
| Cleo Robotics Dronut X1P | Navigation type | Operational | Operational |
| | Completion | 100% | 100% |
| | Average speed (m/s) | 0.24 | 0.24 |
| FLIR Black Hornet PRS | Navigation type | Operational | Operational |
| | Completion | 100% | 100% |
| | Average speed (m/s) | 0.08 | 0.08 |
| Flyability Elios 2 GOV | Navigation type | Operational | Operational |
| | Completion | 100% | 100% |
| | Average speed (m/s) | unknown | unknown |
| Lumenier Nighthawk V3 | Navigation type | Operational | Operational |
| | Completion | 50% | 0% |
| | Average speed (m/s) | 0.06 | N/A |
| Parrot ANAFI USA GOV | Navigation type | Operational | Operational |
| | Completion | 50% | 0% |
| | Average speed (m/s) | 0.06 | N/A |
| Skydio X2D, teleoperation | Navigation type | Operational | Operational |
| | Average speed (m/s) | 0.06 | N/A |
| Skydio X2D, visual waypoints | Navigation type | Operational | Operational |
| | Completion | 100% | 100% |
| Teal Golden Eagle* | Navigation type | Operational | Operational |
| | Completion | 100% | 100% |
| | Average speed (m/s) | 0.06 | N/A |
| Vantage Robotics Vesper | Navigation type | Operational | Operational |
| | Average speed (m/s) | 0.06 | N/A |

*Note: Due to instability of the Teal Golden Eagle when flying indoors, it was not evaluated in this test.
## Indoor Mapping

### Performance data

**Interior Boundaries**

Point clouds and photogrammetric map with singulated visual acuity targets shown below.
### Summary of Test Method

The operator flies the sUAS through an environment to collect mapping data. A series of split-cylinder fiducials are positioned throughout the environment. An accurate ground truth of the environment must be available in order to compare the sUAS map. The ground truth for comparison is a map of the environment that is generated using a more precise method with high confidence of accuracy (e.g., industrial ground robot, handheld lidar system). An accurate 3D ground truth may be very expensive and/or difficult to generate, whereas a 2D map can be more easily gathered (e.g., architectural layout, dimensional measurement). Multiple flights may be conducted in order to change batteries. The collected data is downloaded to generate a map; if multiple flights were conducted, multiple maps will be generated for each incremental flight (e.g., map of flight 1, map of flights 1+2, etc.) and evaluated separately. For maps generated from multiple flights, evaluations should differentiate between automatic alignment of the individual maps and manual alignment performed by an operator.

The test can be run in lighted (100 lux or greater) or dark (less than 1 lux) conditions, as either an elemental or operational test:

* Elemental Mapping: The operator may maintain line-of-sight with the sUAS, such as by following the system with the OCU throughout the environment to maintain the communications link. This allows the system's map generation capability to be evaluated in as close to an ideal setting as possible.
* Operational Mapping: The operator remains at the launch point during execution, unable to maintain line of sight throughout the test, without prior knowledge of the layout of the space. This is similar to an actual operational mission, including all related sUAS communications and operator situation awareness issues that may arise (e.g., losing comms link at range and/or through obstructions, monitoring battery life such that the sUAS can be flown back before it dies). If multiple flights are conducted, the sUAS must be flown back to the launch point where the operator is stationed in order to change batteries.

### Benchmarking Results

**Note: performance data is only shown for the Flyability Elios 2 GOV due to it being the only system with indoor mapping capabilities. No best in class criteria is specified.**

### Horizontal Mapping

**Environment characterization**

**Two flights mapped with Flyability Inspector 3.0**

Point cloud map generated from two flights (first flight in black, second flight in blue) merged automatically using CloudCompare shown below (overhead view) with singulated and labeled fiducials. Performance data is provided for just the first flight and for the combined first and second flights in the table below.

**Four flights mapped with Pix4Dmapper**

Photogrammetric map generated from four flights merged manually using Pix4Dmapper shown below (overhead view) with singulated and labeled fiducials.
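The mapping evaluations in this report were produced with tools such as CloudCompare and Pix4Dmapper. Purely to illustrate how a sUAS-generated map can be scored against a ground-truth map as the test method above describes, the sketch below uses the open-source Open3D library to refine the alignment with ICP and report cloud-to-cloud error; the file names are hypothetical placeholders, and this is not the report's actual pipeline:

```python
import numpy as np
import open3d as o3d

# File names are hypothetical placeholders.
gt = o3d.io.read_point_cloud("ground_truth_map.ply")   # trusted reference map
sm = o3d.io.read_point_cloud("suas_map.ply")           # map built from sUAS data

# Assume a coarse (e.g., fiducial-based) alignment has been applied; refine it.
icp = o3d.pipelines.registration.registration_icp(
    sm, gt, max_correspondence_distance=0.10,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
sm.transform(icp.transformation)

# Cloud-to-cloud error: distance from each sUAS-map point to the nearest
# ground-truth point, summarized as mean error and RMSE in meters.
d = np.asarray(sm.compute_point_cloud_distance(gt))
print(f"mean error: {d.mean():.3f} m, RMSE: {np.sqrt((d ** 2).mean()):.3f} m")
```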
**One flight mapped with Flyability Inspector 3.0**

Point cloud map generated from one flight shown below (side view) with singulated and labeled fiducials.

### 3D Mapping

**Environment characterization**

**Performance data**

Note: The results of all photogrammetric mapping were very poor, so only two examples are shown to illustrate this.

**MUTC Fire Trainer: One flight mapped with Flyability Inspector 3.0**

Point cloud map generated from one flight shown below with singulated and labeled fiducials.

**MUTC Fire Trainer: One flight mapped with Pix4Dmapper**

Photogrammetric map generated from one flight using Pix4Dmapper shown below (isometric view) with singulated and labeled fiducials.
## Non-Contextual Autonomy Ranking

| sUAS | $S_{max}$ | $S_{map}$ | $S_{int}$ | $S_{sum}$ | $P$ | $N_{KL}$ |
|---|---|---|---|---|---|---|
| Cleo Robotics Dronut X1P | 0.18 (6) | 0.33 (5) | -0.98 (7) | 0.05 (7) | 2.48 (6) | 3 |
| Flyability Elios 2 GOV | 0.17 (7) | 0.30 (7) | 0.22 (2) | 0.05 (6) | 2.29 (7) | 1 |
| Lumenier Nighthawk V3 | 0.20 (5) | 0.30 (6) | -0.09 (5) | 0.06 (5) | 2.60 (5) | 0 |
| Parrot ANAFI USA GOV | 0.35 (4) | 0.50 (4) | 0.05 (4) | 0.09 (4) | 3.48 (4) | 3 |
| Skydio X2D | 0.53 (1) | 0.72 (1) | 0.93 (1) | 0.14 (1) | 4.63 (1) | 3 |
| Teal Golden Eagle | 0.39 (2) | 0.57 (2) | 0.05 (3) | 0.10 (3) | 3.81 (2) | 0 |
| Vantage Robotics Vesper | 0.38 (3) | 0.52 (3) | -0.27 (6) | 0.10 (2) | 3.56 (3) | 0 |

Best in class (Non-Contextual Autonomy Ranking): Parrot ANAFI USA GOV, Skydio X2D, Teal Golden Eagle, Vantage Robotics Vesper

sUAS platform autonomy measure represented in the NCAP coordinate $\langle N_{AL}, N_{CP} \rangle$ with uniform weight values:

## Contextual Autonomy Ranking

### Affiliated publications: [Donald et al., 2023]

In this test, the user selects a sub-task to be evaluated in a specific environment. Several examples have been described in this document, including Runtime Endurance in Enclosed Spaces, Takeoff and Land/Perch, Navigation Through Apertures, Navigation Through Corridors, and the Room Clearing test. It should be noted that the proposed framework is not limited to these sets of tests and can be used for any sUAS mission. For each experiment, data should be collected according to three axes: Environmental Complexity (EC), Mission Complexity (MC), and Human Independence (HI). This is similar to the Autonomy Levels for Unmanned Systems (ALFUS) framework [Huang et al., 2007; Durst and Gray, 2014]. These three axes allow the user to efficiently categorize the various factors of a mission that can affect a system's autonomy. The Environmental Complexity (EC) axis accounts for the differences in terrains and environments of a mission; larger values correspond to more complex environments, while smaller values correspond to simpler environments. The Mission Complexity (MC) axis accounts for the different levels of difficulty in the movements, actions, or decisions required to complete a mission successfully; larger values represent missions requiring more complex decisions, while smaller values represent simpler missions.
The Human Independence (HI) axis accounts for the level of independence the sUAS offers the operator for the successful completion of a mission. This axis also accounts for the types of actions that can be completed by the sUAS and the complexity of those actions. Higher values on this axis indicate sUAS that can complete more complex movements while requiring less input from the operator: the more difficult the tasks the sUAS can perform, and the larger the portion of the mission it can complete without the operator, the larger the value along the HI axis. The user can add or remove mission-specific features on each axis as they deem relevant. The following features are relevant examples for subterranean and constrained indoor operations: aperture/hallway cross-sectional area, ambient light level, verticality of the hallway, number of crashes, number of rollovers, completion percentage, static roll angle, static pitch angle, static vertical obstruction, static horizontal obstruction, coverage percentage, Cs detected, duration, and obstructions. Note that metrics used in the evaluation of sub-tasks, as described in other sections of this document, can be used as features in this test for comparing different sUAS autonomy levels.

| sUAS | Navigation: Through Corridors | Navigation: Through Apertures | Takeoff | Landing | Runtime Endurance: Indoor Movement | Room Clearing | Predictive Score |
|---|---|---|---|---|---|---|---|
| Cleo Robotics Dronut X1P | | | | | | | |
| Flyability Elios 2 GOV | | | | | | | |
| Lumenier Nighthawk V3 | | | | | | | |
| Parrot ANAFI USA GOV | | | | | | | |
| Skydio X2D | - | | | | | | |
| Teal Golden Eagle | - | | | | | | |
| Vantage Robotics Vesper | | | | | | | |

Best in class (Contextual Autonomy Ranking): Flyability Elios 2 GOV, Lumenier Nighthawk V3, Vantage Robotics Vesper

The following figure illustrates the contextual autonomy score and ranking of the evaluated sUAS according to the above table.
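The report does not prescribe a closed-form way to combine mission features into per-axis values. Purely as an illustrative sketch, the snippet below assumes each feature has been normalized to [0, 1] and aggregates with a uniform-weight average per axis; the feature choices and weights are assumptions, not the benchmark's definition:

```python
# Purely illustrative: aggregate normalized features (each scaled to [0, 1])
# into per-axis values with a (uniform) weighted average.
def axis_score(features, weights=None):
    weights = weights or [1.0] * len(features)
    return sum(w * f for w, f in zip(weights, features)) / sum(weights)

ec = axis_score([0.7, 0.9, 0.4])   # e.g., aperture area, ambient light, verticality
mc = axis_score([0.6, 0.8])       # e.g., maneuver difficulty, decision complexity
hi = axis_score([0.5, 0.3, 0.9])  # e.g., crash rate, rollover rate, completion pct
print(f"EC = {ec:.2f}, MC = {mc:.2f}, HI = {hi:.2f}")
```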
## Trust Evaluation

**Notes:** the Feature Characterization test was performed in order to determine what elements should be evaluated for trust, whereas the System Comparison test was performed as a means to compare platforms to one another. Thus, only data from the System Comparison test is shown in this report.

### System Comparison

Due to the delayed timing of receiving sUAS platforms for the DECISIVE program, trust evaluations were conducted to compare the Flyability Elios 2 GOV (a system received very early on) and a DJI Mavic 2 Pro (a system purchased prior to the program). The evaluation results explicitly reflect trust measures for these two sUAS, but the characteristics by which they are compared are shared by the other sUAS that are benchmarked throughout this report. As such, the results below are generalized to apply to these other systems.

Evaluations were performed to compare the following sUAS characteristics:

* Noise generated by the system: quiet vs. loud
* Protective hardware on the system: with propeller protection vs. without propeller protection
* Low light performance, i.e., when the system is in a dark environment: high visibility video transmission vs. low visibility video transmission

These characteristics are applied to each sUAS as follows, with characteristics matching the Flyability Elios 2 GOV in blue and those matching the DJI Mavic Pro 2 in orange:

| sUAS | Noise | Protective Hardware | Low Light Performance |
|---|---|---|---|
| Cleo Robotics Dronut X1P | Loud | Yes | Low |
| DJI Mavic Pro 2 | Quiet | No | Low |
| FLIR Black Hornet PRS | Quiet | No | Low |
| Flyability Elios 2 GOV | Loud | Yes | High |
| Lumenier Nighthawk V3 | Quiet | No | High |
| Parrot ANAFI USA GOV | Quiet | No | Low |
| Skydio X2D | Quiet | No | Low |
| Teal Golden Eagle | | | |

Significant differences, or a trend toward significant differences, are presented when comparing the results of individual questionnaire items. Only one significant difference was found for the noise characteristic, so no conclusive evaluation data is presented for it.
Multiple significant, or trending toward significant, differences were found for the protective hardware and low light performance characteristics; the results of those evaluations are shown below:

**Protective Hardware**

| Trust Measure | Items with significant or a trend toward significant differences | Flyability Elios 2 GOV (mean score) | DJI Mavic Pro 2 (mean score) | Mann-Whitney Test | More Trustworthy System |
|---|---|---|---|---|---|
| | 1. I believe that there could be negative consequences when using the drone | 3.43 | 2.73 | U=375, p=0.09 | DJI Mavic Pro 2 |
| | 3. It is risky to interact with the drone | 3.53 | 2.93 | U=378, p=0.10 | DJI Mavic Pro 2 |
| | 4. I believe that the drone will act in my best interest | 4.93 | 4.23 | U=373, p=0.09 | Flyability Elios 2 GOV |
| | 6. I believe that the drone is interested in understanding my needs and preferences | 3.93 | 2.93 | U=345, p=0.05 | Flyability Elios 2 GOV |
| | 10. If I use the drone, I think I would be able to depend on it completely | 4.63 | 4.19 | U=292, p=0.08 | Flyability Elios 2 GOV |
| CPPA | 3. I am confident in the system. | 5.16 | 4.31 | U=329.5, p=0.07 | Flyability Elios 2 GOV |

**Low Light Performance**

| Trust Measure | Items with significant or a trend toward significant differences | Flyability Elios 2 GOV (mean score) | DJI Mavic Pro 2 (mean score) | Mann-Whitney Test | More Trustworthy System |
|---|---|---|---|---|---|
| | 2. I feel I must be cautious when using the drone | 4.80 | 5.43 | U=360.5, p=0.08 | Flyability Elios 2 GOV |
| | 3. It is risky to interact with the drone | 3.50 | 4.22 | U=293, p=0.05 | Flyability Elios 2 GOV |
| | 6. I believe that the drone is interested in understanding my needs and preferences | 3.56 | 3.03 | U=371, p=0.11 | Flyability Elios 2 GOV |
| | 11. I can always rely on the drone for performing the mapping task | 5.53 | 4.90 | U=373, p=0.12 | Flyability Elios 2 GOV |
| CPPA | 3. I am confident in the system. | 5.64 | 4.96 | U=329.5, p=0.07 | Flyability Elios 2 GOV |
| CPPA | 4. The system provides security. | 4.92 | 4.20 | U=333.5, p=0.08 | Flyability Elios 2 GOV |

Based on these results, the Flyability Elios 2 GOV is considered to be the more trustworthy system when compared to the DJI Mavic Pro 2. Generalizing these results to the other sUAS in order to determine a best in class evaluation across all platforms (i.e., those with protective hardware and those with higher performance in low light environments) is as follows:

**More Trustworthy System Per Possession of Protective Hardware**
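The U and p values in the tables above come from Mann-Whitney tests on individual questionnaire items. As a quick illustration of how such a comparison is computed, the sketch below uses SciPy; the Likert scores are made-up placeholders, not the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical 7-point Likert responses for one questionnaire item
# (one value per participant, per system); these numbers are placeholders.
elios_scores = [5, 6, 4, 5, 7, 5, 6, 4, 5, 6]
mavic_scores = [4, 3, 5, 4, 4, 3, 5, 4, 3, 4]

u, p = mannwhitneyu(elios_scores, mavic_scores, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
# Convention used in the tables above: p <= 0.05 is significant,
# while slightly larger p values indicate a trend toward significance.
```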
## Situation Awareness Interface-Afforded Attention Allocation

### Affiliated publications: [20]

### Summary of Test Method

This test method is used to evaluate the probability of attending to various situational elements (SEs) provided by the sUAS interface over a series of operationally relevant scenarios (ORSs), using the SEEV model for analysis. In order to calculate SA by applying the SEEV model, each SE can be divided into the four SEEV parameters (Salience, Effort, Expectancy, and Value) to quantify the attention allocation proportion (f). The quantitative value is the weighted sum or multiplication of the four parameters. Salience is weighted according to the color, size, and type of the SE. Effort is weighted according to the movement of the operator's eye to see the SE. Expectancy refers to the weight of the event frequency or changeability of the SE. Value refers to the importance of the SE according to the mission or task. SEs can be quantified by substituting numbers corresponding to the Salience/Effort/Expectancy/Value information of each SE into the SEEV model. Four Landolt Cs (orange/red/green/blue), two images (radioactive/oxygen), and four indicators (altitude, heading, distance to surface, and system battery) were identified as SEs from a series of operationally relevant scenarios (ORSs) designed to mimic specific conditions in the environment, elements of interest, and mission tasks for sUAS operations in subterranean and constrained indoor spaces. Based on the resulting attention allocation proportion (f), we set out to compare f among the different platforms (depending on the platform, some indicator SEs may be absent).

### Benchmarking Results

Graph of the attention allocation proportion (f) for each platform:

## Operator Situation Awareness

### Summary of Test Method

According to the flight missions (Aviate/Navigate/Hazard detection), SEs are divided into required SEs and desired SEs based on the highest importance perceived by pilots. Therefore, the difference in abilities across missions can be emphasized by assigning weights to required SEs and desired SEs. Through an experiment with a participant, each perception level (p(SE)), which measures whether the participant is aware of the SE (i.e., undetected, detected, or comprehended), is obtained. The perception levels of SEs can be expressed as a metric of SA; the metric is the weighted sum of required SEs and desired SEs. Existing models and assessment methods, including the MIDAS (Man-machine Integration Design and Analysis)-based SA model and the Attention Allocation Model [27], are adopted and refined in order to make them more suitable for sUAS operations in subterranean and constrained indoor environments. The metric goes through the theoretical models and the operator SA (OSA) is obtained.

Participants can be recruited to perform this test in person or online (e.g., Amazon Mechanical Turk, Prolific), noting that online questionnaires can yield higher response rates. Participants watch an evaluation video of a sUAS flight with a specific focus on observing and comprehending the SEs of the operational environment. At certain points while watching the video, the video pauses and questions from the SAGAT questionnaire pop up for the participant to answer. The participants' answers are evaluated for accuracy to determine a corresponding score, and SA values are calculated using quantitative assessment methods.

### Benchmarking Results

Boxplots of operator SA values from the MIDAS-based SA model (MDS) and the attention allocation SA model (AAM):
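As described above, the attention allocation proportion f for each SE is a weighted combination of Salience, Effort, Expectancy, and Value. The sketch below is a minimal illustration that follows the common SEEV convention of treating Effort as a cost (the report only states that the value is a weighted sum or product); all weights and scores are made-up placeholders, not the report's values:

```python
# Minimal SEEV sketch: attention allocation for one situational element (SE)
# as a weighted combination of Salience, Effort, Expectancy, and Value.
def seev(salience, effort, expectancy, value, w=(1.0, 1.0, 1.0, 1.0)):
    ws, wef, wex, wv = w
    # Effort enters negatively: more eye movement lowers attention allocation.
    return ws * salience - wef * effort + wex * expectancy + wv * value

raw = {
    "Landolt C (red)": seev(0.8, 0.3, 0.5, 0.9),
    "System battery":  seev(0.4, 0.1, 0.7, 0.8),
    "Altitude":        seev(0.3, 0.2, 0.6, 0.5),
}
total = sum(raw.values())
for se, value in raw.items():
    print(f"{se}: f = {value / total:.2f}")  # proportions sum to 1 across SEs
```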
2310.11970
Quantifying Privacy Risks of Prompts in Visual Prompt Learning
Large-scale pre-trained models are increasingly adapted to downstream tasks through a new paradigm called prompt learning. In contrast to fine-tuning, prompt learning does not update the pre-trained model's parameters. Instead, it only learns an input perturbation, namely prompt, to be added to the downstream task data for predictions. Given the fast development of prompt learning, a well-generalized prompt inevitably becomes a valuable asset as significant effort and proprietary data are used to create it. This naturally raises the question of whether a prompt may leak the proprietary information of its training data. In this paper, we perform the first comprehensive privacy assessment of prompts learned by visual prompt learning through the lens of property inference and membership inference attacks. Our empirical evaluation shows that the prompts are vulnerable to both attacks. We also demonstrate that the adversary can mount a successful property inference attack with limited cost. Moreover, we show that membership inference attacks against prompts can be successful with relaxed adversarial assumptions. We further make some initial investigations on the defenses and observe that our method can mitigate the membership inference attacks with a decent utility-defense trade-off but fails to defend against property inference attacks. We hope our results can shed light on the privacy risks of the popular prompt learning paradigm. To facilitate the research in this direction, we will share our code and models with the community.
Yixin Wu, Rui Wen, Michael Backes, Pascal Berrang, Mathias Humbert, Yun Shen, Yang Zhang
2023-10-18T13:51:27Z
http://arxiv.org/abs/2310.11970v1
# Quantifying Privacy Risks of Prompts in Visual Prompt Learning ###### Abstract Large-scale pre-trained models are increasingly adapted to downstream tasks through a new paradigm called prompt learning. In contrast to fine-tuning, prompt learning does not update the pre-trained model's parameters. Instead, it only learns an input perturbation, namely prompt, to be added to the downstream task data for predictions. Given the fast development of prompt learning, a well-generalized prompt inevitably becomes a valuable asset as significant effort and proprietary data are used to create it. This naturally raises the question of whether a prompt may leak the proprietary information of its training data. In this paper, we perform the first comprehensive privacy assessment of prompts learned by visual prompt learning through the lens of property inference and membership inference attacks. Our empirical evaluation shows that the prompts are vulnerable to both attacks. We also demonstrate that the adversary can mount a successful property inference attack with limited cost. Moreover, we show that membership inference attacks against prompts can be successful with relaxed adversarial assumptions. We further make some initial investigations on the defenses and observe that our method can mitigate the membership inference attacks with a decent utility-defense trade-off but fails to defend against property inference attacks. We hope our results can shed light on the privacy risks of the popular prompt learning paradigm. To facilitate the research in this direction, we will share our code and models with the community.1 Footnote 1: [https://github.com/yxoh/prompt_leak_usenix2024/](https://github.com/yxoh/prompt_leak_usenix2024/). ## 1 Introduction Recent research has provided ample evidence that increasing the size of machine learning (ML) models, i.e., the number of parameters, is a pivotal factor in enhancing their overall performance [6, 40, 11]. One of the commonly employed strategies for adapting such large-scale pre-trained ML models to downstream tasks is fine-tuning [59], which updates model parameters for specific downstream tasks via back-propagation. Fine-tuning, however, suffers from two main drawbacks. First, it leads to high computational costs because all model parameters need to be updated. In addition, it is storage inefficient since a separate copy of the fine-tuned model needs to be stored for each downstream task. In order to address these limitations, researchers have proposed prompt learning as an alternative to fine-tuning [56, 4, 5, 20, 25, 30, 26, 31, 41]. Prompt learning involves learning an input perturbation, referred to as a _prompt_, that enables shifting downstream task data to the original data distribution. The pre-trained model generates a task-specific output based on this prompt. It is important to note that, during prompt learning, the pre-trained model remains frozen, leading to a significant decrease in the number of learned parameters compared to fine-tuning (see Section 2). In recent years, prompt learning has been extensively validated and shown to be effective in the domains of computer vision (CV) [56, 56, 4, 20, 4] and natural language processing (NLP) [25, 26, 30, 41]. It is expected that prompt as a service (PaaS) will gain popularity.2 In this scenario, a user can request a prompt for a downstream task from the PaaS provider without the need for arduous fine-tuning. 
The user then combines their data with the prompt and inputs them into the pre-trained model to obtain the predictions, as depicted in Figure 1. In this way, the user can run the pre-trained model and keep their data on-premise, while the PaaS provider can reuse a single pre-trained model to support multiple downstream tasks. These benefits differentiate PaaS from machine learning as a service (MLaaS) [42]. As a result, a well-generalized prompt becomes a valuable asset for PaaS providers, as they invest significant efforts and use proprietary data to develop it. Footnote 2: [https://twitter.com/AndrewNG/status/1650938079027548160](https://twitter.com/AndrewNG/status/1650938079027548160). Previous research has demonstrated that ML models are vulnerable to various privacy attacks, such as property inference attacks [55, 14] and membership inference attacks [44, 28, 46], which can disclose sensitive information about the training data used to create the models. Such data leakage can severely damage the provider's privacy as well as intellectual property. However, to the best of our knowledge, previous research about such privacy risks has focused on ML models at the model level and has not yet been explored on prompts at the input level. As the number of learned parameters is significantly reduced in prompt learning, it is natural to assume that this paradigm would compress the proprietary information of its training data, leading to less effective privacy attacks (see Section 2.2). This motivates us to investigate whether a prompt also leaks the proprietary information of its training data that the PaaS provider does not intend to disclose, especially when such prompts are generated from images containing sensitive private information. **Contributions.** In this paper, we conduct the first privacy risk assessment of prompts learned by prompt learning. We focus on prompt learning for image classification tasks [4], which represents one of the most promising directions in computer vision research [4, 5, 8, 20, 22, 31, 49, 51, 56]. Our primary objective is to determine _to what extent a visual prompt possesses the potential to disclose confidential information_. Specifically, we perform property inference and membership inference, two dominant privacy attacks against ML models [7, 14, 36, 46], where the former aims to deduce sensitive properties of the dataset used to train the target prompt, and the latter determines if a given data sample is part of the target prompt's training dataset. We adopt the existing attack methodologies for property inference [3, 14] and membership inference (neural network-based attacks [44, 46], metric-based attacks [47], and gradient-based attacks [24, 38]). Note that our goal is not to develop new property inference attacks or membership inference attacks against prompts. Instead, we aim to use existing methods with well-established threat models to systematically assess the privacy risks of prompt learning. The overview of our study is depicted in Figure 1. The empirical evaluation shows prompts are susceptible to property inference attacks across multiple datasets and pre-trained models. For example, we can achieve at least 81% accuracy in inferring different target properties from prompts learned for CelebA [34]. Moreover, when inferring the training dataset size of the prompt, we can achieve 100% test accuracy in all cases. 
We also conduct a cost analysis to show that the adversary can either train the shadow prompts for fewer epochs or use fewer shadow prompts to minimize their cost while maintaining decent attack performance. Our study also provides empirical evidence that membership inference poses a practical threat to prompts. The experimental results demonstrate that existing attack methodologies are effective across a range of datasets and pre-trained models. In particular, the metric-based attack with modified prediction entropy is the most effective one, e.g., achieving 93% membership inference accuracy on the AFAD dataset [39]. The gradient-based attacks follow closely behind and outperform the neural network-based (NN-based) attacks. We further investigate factors that may affect membership inference from both the victim's and the adversary's perspectives. Specifically, from the victim's side, we conduct a detailed analysis of the relationship between the overfitting levels of prompts and attack success [50]. The results indicate that the attack success is positively correlated with the overfitting level. Moreover, excessive training epochs and inadequate training data increase overfitting levels, exacerbating the privacy threat posed by membership inference attacks. From an adversarial perspective, we demonstrate that the adversary can relax the assumption that the shadow dataset has the same distribution as the target prompt's training dataset. This finding further exemplifies the membership privacy risks of prompts learned by prompt learning. We also conduct preliminary investigations into mitigating privacy risks associated with prompt learning. In particular, we explore the effectiveness of adding Gaussian noise to prompts, as proposed in prior research [54, 18, 57]. Our experiments demonstrate that there exists a decent utility-defense trade-off when mitigating both naive and adaptive membership inference attacks. However, when defending against property inference attacks, we need higher Gaussian noise to reduce the attack performance, leading to unacceptable utility deterioration. Our findings indicate that the statistical information of the training dataset in the target prompts is harder to hide than individual information, i.e., membership. Our results highlight the need for further research into more effective defense mechanisms for mitigating property inference attacks in prompt learning. **Impact.** This study presents an exploration of the privacy risks associated with prompt learning, an emerging machine-learning paradigm. Our investigation represents the first of its kind in this area. Our findings indicate that prompts learned through prompt learning are susceptible to privacy breaches. We hope our study will increase the awareness of the stakeholders when deploying prompt learning in real-world applications. Moreover, to facilitate research in the field, we will share our code and models. ## 2 Preliminaries ### Prompt Learning **Overview.** Prompt learning is a new machine-learning paradigm introduced to address the limitations of fine-tuning [4, 56, 20, 25, 26, 30, 31, 41, 20]. It aims at learning a task-specific prompt that can be added to the input data while keeping the pre-trained model's parameters frozen. With this new paradigm, the service provider can share the same pre-trained model across various downstream tasks with different prompts in a space- and computation-efficient manner. 
In this paper, we focus on prompt learning in computer vision, i.e., _visual prompt learning (VPL)_[4]. It is generally composed of two stages: input transformation and output transformation.

Figure 1: Overview of prompt usage and inference attacks. The prompt is a pixel patch. The prompted image is an original image with an added prompt. Property inference infers sensitive properties of the target prompt's training dataset that the PaaS provider does not intend to disclose. Membership inference infers whether a given sample was in the target prompt's training dataset.

**Input Transformation.** As shown in Figure 2, the goal of input transformation is to learn an input prompt \(\delta\) in the pixel space, _i.e., in the form of a single image_, via back-propagation. Given a dataset \(\mathcal{D}=(\mathcal{X},\mathcal{Y})\), a pre-trained model \(\mathcal{M}\) parameterized by \(\omega\), and a prompt \(\delta\) parameterized by \(\theta\), the prompt generation process \(q(\mathcal{D},\mathcal{M})\) uses Equation 1 to maximize the likelihood of \(\mathcal{Y}\): \[\max_{\theta}P_{\theta,\omega}(\mathcal{Y}|\mathcal{X}+\delta_{\theta}), \tag{1}\] where the prompt parameters \(\theta\) are learned via back-propagation and the model parameters \(\omega\) are frozen. Note that the prompt can be any visual template chosen by the users, e.g., padding [4]. At inference time, the learned prompt \(\delta\) is added to each test image \(x\) to specify the task. **Output Transformation.** Usually, the pre-trained model has a different number of classes from the downstream tasks. To accomplish the downstream task, the prompt owner supplies a label mapping scheme \(\tau\) to map the model's outputs into the target labels. As shown in Figure 2, a commonly used scheme is hard-coded mapping [13]. It consists of mapping the first \(n\) pre-trained model class indices to the downstream class indices, where \(n\) is the number of classes in the downstream task. The unassigned pre-trained classes are left out of the loss computation. We rely on hard-coded mapping due to its simplicity and proven effectiveness [4].

### Prompt Learning vs. Fine-Tuning

**Training Time.** The fine-tuning paradigm updates all parameters of the pre-trained model via back-propagation. However, as shown in Figure 2, VPL learns a visual prompt, _i.e., in the form of a single image_, on the training dataset \(\mathcal{D}_{\text{train}}=(\mathcal{X},\mathcal{Y})\). During back-propagation, the pre-trained model is frozen, and only the parameters of the visual prompt are updated. In this way, prompt learning dramatically lowers the bar for users adapting large-scale vision models for real-world applications. Prompt learning saves significant training resources and storage space, especially when a pre-trained model serves multiple downstream tasks. For example, the Vision Transformer (ViT-B) [12] we use in later experiments has 86,567,656 parameters, and the visual prompt, i.e., a padding template with a prompt size of 30, has 69,840 parameters. For each downstream task, the fine-tuning paradigm updates the entire model (86,567,656 parameters), whereas, in prompt learning, a single prompt, i.e., a single image with only 69,840 parameters, is updated. The number of parameters updated by prompt learning is only 0.08% of those of fine-tuning, so it is natural to assume that prompt learning would heavily compress the training dataset information, leading to less effective privacy attacks.
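To make the two stages concrete, the following is a minimal PyTorch sketch of VPL with a padding prompt and hard-coded mapping. It follows Equation 1 and the training settings reported in Section 3.3 (padding template of size 30, SGD with learning rate 40, a cosine scheduler, and cross-entropy), but the class and function names are our own, not the paper's code.

```python
import torch
import torch.nn as nn

class PaddingPrompt(nn.Module):
    """Learnable padding template: a frame of width p around a C x H x W image.
    Free parameters: 2*C*p*(H + W - 2p); for p=30, C=3, H=W=224 that is 69,840."""
    def __init__(self, p=30, C=3, H=224, W=224):
        super().__init__()
        self.p = p
        self.top = nn.Parameter(torch.zeros(1, C, p, W))
        self.bottom = nn.Parameter(torch.zeros(1, C, p, W))
        self.left = nn.Parameter(torch.zeros(1, C, H - 2 * p, p))
        self.right = nn.Parameter(torch.zeros(1, C, H - 2 * p, p))

    def forward(self, x):  # x: (B, C, H, W) -> prompted image x + delta
        p = self.p
        delta = torch.zeros_like(x)
        delta[:, :, :p, :] = self.top
        delta[:, :, -p:, :] = self.bottom
        delta[:, :, p:-p, :p] = self.left
        delta[:, :, p:-p, -p:] = self.right
        return x + delta

def train_prompt(model, loader, n_classes, epochs=50, lr=40.0):
    """Learn a prompt per Equation 1; the pre-trained model stays frozen."""
    for w in model.parameters():
        w.requires_grad_(False)
    model.eval()
    prompt = PaddingPrompt()
    opt = torch.optim.SGD(prompt.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    for _ in range(epochs):
        for x, y in loader:
            logits = model(prompt(x))
            # Hard-coded mapping: first n_classes pre-trained indices map to the
            # downstream labels; unassigned classes are left out of the loss.
            loss = nn.functional.cross_entropy(logits[:, :n_classes], y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()
    return prompt

# Inference: add the learned prompt to each test image and read the mapped logits:
#   preds = model(prompt(x_test))[:, :n_classes].argmax(dim=1)
```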
However, we show that the prompts are still susceptible to two privacy attacks in later experiments. **Inference Time.** As shown in Figure 1, both the trained prompt and the pre-trained model are involved in the inference process. Given a test image \(x\), the user gets the prompted image, i.e., adding the trained prompt \(\delta\) to \(x\), and then feeds the prompted image into the pre-trained model \(\mathcal{M}\) to get the prediction result. In the fine-tuning approach, the user directly feeds the given test image \(x\) into the fine-tuned model to get the prediction result.

### Application Scenario

Taking a medical researcher as an example, they aim to classify CT images for COVID-19 diagnosis. Instead of hiring a computer vision expert to fine-tune a model, the researcher can request a prompt from a PaaS provider. They can either opt for a publicly available pre-trained model or allow the PaaS provider to select a suitable one for them. The provider uses their proprietary data, e.g., CT images with explicit consent, to learn a customized prompt and return it to them. At inference time, the researcher simply combines their testing data with the prompt and feeds them to the pre-trained model to get the predictions. In this way, users minimize their effort in developing a well-generalized prompt and keep their data on-premise, while the PaaS provider can reuse a single pre-trained model to support multiple downstream tasks. Meanwhile, the user can adapt to different tasks, e.g., clinical decision support, by trivially switching to different prompts. These benefits differentiate PaaS from machine learning as a service (MLaaS) [42].

Figure 2: Overview of visual prompt learning (VPL). We learn an input prompt via back-propagation [4] at the input transformation stage. We apply hard-coded mapping [13] to map the pre-trained model's outputs into the target labels at the output transformation stage.

## 3 Property Inference Attacks

We first measure the privacy risks of prompt learning through the lens of property inference attacks. Our objective here is not to devise novel attacks for prompts but rather to leverage well-established threat models and existing techniques to gauge the privacy implications of prompts.

### Threat Model

**Attack Scenario.** The PaaS provider is a resourceful entity that uses a pre-trained model \(\mathcal{M}\) and their private dataset \(\mathcal{D}_{target}\) to create well-generalized prompts \(\Delta\) for downstream tasks. The adversary can be any legitimate user of this PaaS provider and can obtain a prompt \(\delta\) for a target downstream task together with white-box access to \(\mathcal{M}\). Note that a pre-defined label mapping \(\tau\) is also provided by the PaaS provider (see Section 2.1). The adversary runs the target downstream task locally and does not interact with the PaaS provider. **Adversary's Goal.** Given a target prompt \(\delta_{target}\), the goal of the adversary is to infer confidential macro-level properties of the training dataset \(\mathcal{D}_{target}\), which the PaaS provider does not intend to share. Taking a prompt \(\delta_{target}\) for facial recognition as an example, the adversary may intend to infer the confidential properties of the private dataset \(\mathcal{D}_{target}\), such as the proportion of males and the proportion of youth.
The adversary considers such confidential properties as targets, causing real-world harm to the PaaS provider, e.g., reputation damage, if the adversary can infer that certain classes of people, such as minorities, are underrepresented in the training data [14]. For simplicity, we focus on binary properties, such as whether the proportion of males in the training dataset is 30% or 70%, in most of our experiments, following previous work [14]. We later show that our attack can be generalized to properties with multiple choices (see Section 3.4). **Adversary's Knowledge and Capability.** We assume that the adversary has white-box access to the pre-trained model \(\mathcal{M}\) and the label mapping \(\tau\). Note that the white-box access in this paper is more restricted than conventional white-box access, as the latter can retrieve all information about the model, such as model parameters and intermediate outputs. In this paper, the adversary only needs to know the architecture and version of the pre-trained model from the PaaS provider, and such knowledge is often disclosed by the PaaS provider for marketing purposes. We also assume that the adversary has a shadow dataset \(\mathcal{D}_{shadow}\) of similar distribution as \(\mathcal{D}_{target}\). For instance, in our evaluation (see Section 3.3), we select both \(\mathcal{D}_{shadow}\) and \(\mathcal{D}_{target}\) from the same dataset CelebA [34]. These two subsets are disjoint and may have different statistical properties, such as gender/race/age ratios. We emphasize that previous property inference attacks also make the same assumption [14, 55].

### Measurement Methodology

**Shadow Prompt Generation.** Given a shadow dataset \(\mathcal{D}_{shadow}\) and associated data properties \(\mathcal{P}=\{p^{1},...,p^{k}\}\), the adversary uses Equation 2 to generate the shadow prompts: \[\Delta_{shadow}=\{q(s_{\mathcal{P}}(\mathcal{D}_{shadow},\Phi_{i},N_{i}),\mathcal{M})\}_{i=1}^{m}, \tag{2}\] where \(s_{\mathcal{P}}\) denotes a sampling function that samples \(N_{i}\) data points from \(\mathcal{D}_{shadow}\) without replacement such that the distribution of the sampled data with respect to the properties \(\mathcal{P}\) satisfies the conditions \(\Phi_{i}\). Note that \(\Phi_{i}=\{\phi_{i}^{1},\phi_{i}^{2},...,\phi_{i}^{k}\}\), where \(\phi_{i}^{k}\) is the actual value of \(p^{k}\) in round \(i\), \(m\) denotes the number of shadow prompts, and \(N_{i}\) denotes the size of the dataset sampled from \(\mathcal{D}_{shadow}\) in round \(i\). In previous work [14, 36, 55], apart from the targeted property, say \(p^{1}\) (and associated \(\phi^{1}\)), a fixed set of other conditions, i.e., \(\{\phi^{2},...,\phi^{k}\}\) and \(N\), tends to be used. For example, if the target property is the proportion of males, they tend to keep the training data size the same when training all target prompts and shadow prompts in their evaluation. However, the training data sizes of target prompts and shadow prompts are likely to differ in a realistic scenario: the target prompt may be trained on 500 samples with 70% males, while the shadow prompts are trained on 2000 samples with 70% males. Such discrepancies in training dataset sizes, e.g., 500 and 2000, may influence the attack performance. In contrast to those approaches, we consider a _mixed setting_ by design. As we can see in Equation 2, in every round \(i\), we generate a prompt \(\delta_{i}\) from a subset sampled by \(s_{\mathcal{P}}(\mathcal{D}_{shadow},\Phi_{i},N_{i})\) with properties \(\mathcal{P}\) satisfying different \(\Phi_{i}\). For instance, given \(\mathcal{P}=\{youth,male\}\), \(\Phi=\{70\%,70\%\}\), and \(N=2000\), \(s_{\mathcal{P}}(\mathcal{D},\Phi,N)\) samples 2000 data points from \(\mathcal{D}\) to train the prompt. Among them, 980 data points are _young males_, 420 data points are _young females_, 420 data points are _old males_, and 180 data points are _old females_. As such, our approach can guarantee a more realistic shadow prompt generation and a fairer evaluation.
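To make the mixed-setting sampling concrete, the following is a minimal Python sketch of one way to implement \(s_{\mathcal{P}}\) for binary properties. It assumes the properties are independent (consistent with the CelebA example above); the helper name and data layout are ours, not the paper's code.

```python
import random

def sample_with_properties(examples, props, phi, n):
    """Draw n examples whose joint proportions over the binary properties
    `props` match the target proportions `phi` (independence assumed)."""
    # Partition the example indices by their combination of property values.
    buckets = {}
    for idx, attrs in enumerate(examples):  # attrs: dict of binary attributes
        key = tuple(attrs[p] for p in props)
        buckets.setdefault(key, []).append(idx)
    picked = []
    for key, members in buckets.items():
        frac = 1.0
        for bit, target in zip(key, phi):
            frac *= target if bit else 1.0 - target  # e.g. young & male: 0.7 * 0.7
        picked += random.sample(members, round(frac * n))
    return picked

# With props=("youth", "male"), phi=(0.7, 0.7), n=2000 this reproduces the
# counts above: 980 / 420 / 420 / 180 for the four attribute combinations.
```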
**Attack Model Training.** After obtaining the shadow prompts \(\Delta_{shadow}\), we can build the attack model for each property \(p^{k}\): \[\mathcal{A}:\Delta_{shadow}\to y^{k}. \tag{3}\] We train the attack model \(\mathcal{A}\) by optimizing the following loss function: \[\mathcal{L}[\mathcal{A}(\Delta_{shadow}),y^{k}], \tag{4}\] where \(\mathcal{L}\) is a cross-entropy loss function in this paper. Concretely, the attack model takes \(\delta_{i}\in\Delta_{shadow}\) as input. To match the input size of the attack model, we zero-pad it to a \(224\times 224\) image with RGB channels; the attack model is therefore an image classifier. The adversary then treats the corresponding condition value \(\phi_{i}^{k}\) of the target property \(p^{k}\) as the class label \(y_{i}^{k}\). To infer the target property \(p^{k}\) of \(\mathcal{D}_{target}\), the adversary queries the attack model \(\mathcal{A}\) with \(\delta_{target}\) and obtains the corresponding prediction result, i.e., the exact condition value of \(p^{k}\).

### Measurement Settings

**Datasets and Downstream Tasks.** We use four datasets in our study, including CIFAR10 [1], CelebA [34], UTKFace [52], and AFAD [39]. These datasets contain sensitive properties (the proportion of males, the proportion of youth, etc.) and are widely used to evaluate the performance of property inference attacks [14, 55]. We introduce these datasets and their corresponding downstream tasks below.

* **CIFAR10** is a benchmark dataset for image classification that contains 60K images in 10 classes. In this paper, the downstream task is a 10-class image classification.
* **CelebA** is a large-scale facial attribute dataset containing more than 200K facial images with 40 binary attributes. We pick three attributes, including _MouthSlightlyOpen_, _Attractive_, and _WearingLipstick_, and use their combinations to create an 8-class attribute classification as the downstream task.
* **UTKFace** has about 23K facial images. Each image has three attributes: _gender_, _race_, and _age_. We consider race classification, i.e., White, Black, Asian, Indian, and Others, as the downstream task.
* **AFAD** is short for Asian Face Age Dataset. It contains more than 160K facial images, each with _age_ and _gender_ attributes. In this paper, we consider age classification as the downstream task. Specifically, we divide the values of the _age_ attribute into five bins: \(15\leq age<20\), \(20\leq age<25\), \(25\leq age<30\), \(30\leq age<35\), and \(35\leq age<40\), leading to a 5-class image classification.

**Property Inference Task Configurations.** For each task, we split the dataset into three disjoint subsets \(\mathcal{D}_{target}\), \(\mathcal{D}_{shadow}\), and \(\mathcal{D}_{validation}\) in the ratio of \(0.475:0.475:0.05\). \(\mathcal{D}_{target}\) and \(\mathcal{D}_{shadow}\) are used to develop the target prompt set \(\Delta_{target}\) and shadow prompt set \(\Delta_{shadow}\), respectively. We evaluate the utility of all prompts on \(\mathcal{D}_{validation}\).
We train 2000 shadow prompts to construct the attack training dataset and 400 target prompts to build the attack testing dataset in our experiments. Our property inference targets include the training dataset size, the proportion of males, and the proportion of youth. Note that recent research demonstrates that the size of the training dataset significantly affects the performance of the model, necessitating substantial efforts to identify the optimal values [35, 58]. Consequently, we also view the training dataset size as confidential information and as one of our inference objectives. We outline the details of all inference tasks below and summarize them in Table 1; a sketch of the attack model itself follows the list.

* **Inference Task on CIFAR10 (\(T_{1}\)).** For CIFAR10, we only consider the size of the prompt training dataset \(N\) as the property inference target (\(T_{1}^{size}\)). We focus on two training data sizes, i.e., \(y^{1}\in\{500,2000\}\), and run the sampling function (see Equation 2) 1000 times on \(\mathcal{D}_{shadow}\) to generate 1000 shadow prompts for each training data size. Meanwhile, we generate 200 target prompts in the same manner for each training data size.
* **Inference Task on CelebA (\(T_{2}\)).** For CelebA, we consider the size of the prompt training dataset (\(T_{2}^{size}\)), the proportion of males (\(T_{2}^{male}\)), and the proportion of youth (\(T_{2}^{youth}\)) of the data samples used to train the target prompts as the property inference targets. \(T_{2}^{male}\) is based on the _male_ attribute, and \(T_{2}^{youth}\) is based on the _young_ attribute. Both attributes are binary. The inference labels of each property are: \(y^{1}\in\{500,2000\}\), \(y^{2}\in\{30\%,70\%\}\), and \(y^{3}\in\{30\%,70\%\}\). Recall that we consider a mixed data sampling strategy. Given these three properties, we end up with eight sampling functions in total. We run each sampling function 250 times on \(\mathcal{D}_{shadow}\) and 50 times on \(\mathcal{D}_{target}\) to generate the shadow prompt set \(\Delta_{shadow}\) and target prompt set \(\Delta_{target}\), respectively.
* **Inference Task on UTKFace (\(T_{3}\)).** For UTKFace, we also consider the size of the prompt training dataset (\(T_{3}^{size}\)), the proportion of males (\(T_{3}^{male}\)), and the proportion of youth (\(T_{3}^{youth}\)) as the property inference targets. Note that \(T_{3}^{male}\) is based on the _gender_ attribute, and \(T_{3}^{youth}\) is based on the _age_ attribute. Specifically, we use the median of the _age_ values from all images, i.e., 30, as the threshold. We then label samples with \(0\leq\textit{age}\leq 30\) as Young and \(30<\textit{age}\leq 116\) as Old. The inference labels of each property are the same as those of CelebA. Thus, we follow the same sampling settings as those of \(T_{2}\) to generate the shadow and target prompts.
* **Inference Task on AFAD (\(T_{4}\)).** For AFAD, we consider the size of the prompt training dataset (\(T_{4}^{size}\)) and the proportion of males (\(T_{4}^{male}\)) as the property inference targets. \(T_{4}^{male}\) is based on the _gender_ attribute. The inference labels of each property are: \(y^{1}\in\{500,2000\}\) and \(y^{2}\in\{30\%,70\%\}\). We use four sampling functions to generate the shadow and target prompts. We run each sampling function 500 times on \(\mathcal{D}_{shadow}\) and 100 times on \(\mathcal{D}_{target}\) to generate the shadow prompt set \(\Delta_{shadow}\) and target prompt set \(\Delta_{target}\), respectively.
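As referenced above, here is a minimal sketch of the property-inference attack model, combining Equations 3-4 with the settings reported below (pre-trained RN18 backbone with a linear head, cross-entropy, Adam at a learning rate of 1e-5, 100 epochs). The rendering helper, the full-batch loop, and the assumption that prompts are modules like the PaddingPrompt sketched earlier are our simplifications.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def prompt_as_image(prompt, C=3, H=224, W=224):
    """Render a padding prompt as a zero-padded 224x224 RGB image (the attack input).
    Adding the prompt to an all-zero image leaves exactly its pixels."""
    with torch.no_grad():
        return prompt(torch.zeros(1, C, H, W)).squeeze(0)

def train_attack_model(shadow_prompts, labels, n_labels, epochs=100):
    """Fit A: Delta_shadow -> y^k (Equations 3-4) with cross-entropy."""
    net = resnet18(weights="IMAGENET1K_V1")
    net.fc = nn.Linear(net.fc.in_features, n_labels)  # linear classifier on top
    opt = torch.optim.Adam(net.parameters(), lr=1e-5)
    xs = torch.stack([prompt_as_image(p) for p in shadow_prompts])
    ys = torch.tensor(labels)
    for _ in range(epochs):  # mini-batching omitted for brevity
        loss = nn.functional.cross_entropy(net(xs), ys)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net
```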
**Metric.** As the attack training/testing dataset is balanced in terms of class distribution, we use test accuracy as the main metric to evaluate the prompt utility and the property inference attacks. **Pre-trained Models.** We select three representative vision models in our experiments, including ResNet-18 (RN18) [15], Big Transfer (BiT-M) [23], and Vision Transformer (ViT-B) [12]. More details can be found in Appendix A. **Prompts.** We follow the default training settings [4] to train prompts on the above vision models. Specifically, we choose the padding template with a prompt size of 30. The number of parameters for each prompt is calculated as \(2\times C\times p\times(H+W-2p)\), where \(p\), \(C\), \(H\), and \(W\) are the prompt size, image channels, height, and width, respectively. All images are resized to \(224\times 224\) to match the input of the pre-trained models; with \(p=30\), \(C=3\), and \(H=W=224\), the number of parameters for each prompt is \(2\times 3\times 30\times(224+224-60)=69{,}840\). We leverage the same hard-coded mapping method [4] to map the first \(n\) indices of the pre-trained model's outputs to the target labels, where \(n\) is the number of target classes. We adopt cross-entropy as the loss function and SGD as the optimizer with a learning rate of 40 and a cosine scheduler. In our property inference attacks, we train all prompts for 50 epochs for efficiency. **Attack Models.** We leverage the pre-trained RN18 [15] as the backbone of the attack model \(\mathcal{A}\). We fit a linear classifier on top of the pre-trained RN18 to infer the property labels. We employ cross-entropy as the loss function and Adam as the optimizer with a learning rate of 1e-5. The attack model is trained on the shadow prompt set \(\Delta_{shadow}\) for 100 epochs.

Table 1: Experimental settings of the property inference attacks with the corresponding attack performance (columns: inference task, dataset, downstream task, target property, inference labels, and test accuracy on RN18/BiT-M/ViT-B).

### Measurement Results

**Property Inference Privacy Risks.** We report the main results on four datasets and three pre-trained models in Table 1. In general, we observe that the proposed attacks achieve good performance across different pre-trained models and datasets. For example, on CelebA and RN18, we achieve at least 93.00% accuracy in inferring target properties. Furthermore, we achieve maximum performance (100.00%) on all datasets when considering the size of the prompt training dataset as the target property. Additionally, we observe that the pre-trained model has a moderate influence on the attack performance. Specifically, when inferring the proportion of youth on UTKFace (\(T_{3}^{youth}\) in Table 1), the test accuracy is 81.75% on RN18, 87.50% on BiT-M, and 84.00% on ViT-B. Although the test accuracy varies across pre-trained models and datasets, the proposed attacks are generally effective, indicating prompts are indeed vulnerable to property inference attacks. **Extension to Multi-Class Property Inference.** In the above experiments, we treat property inference as a binary classification task. Here, we extend it to multi-class classification and explore if the adversary can infer finer-grained information from the prompts. To this end, we adjust the condition values of the proportion of males in CelebA to {10%, 30%, 50%, 70%, 90%} and use RN18 as the pre-trained model. Note that the condition values of the training dataset size and the proportion of youth remain the same. In turn, we have 20 sampling functions in total. We keep the sizes of \(\Delta_{shadow}\) and \(\Delta_{target}\) unchanged and run each sampling function 100 times on \(\mathcal{D}_{shadow}\) and 20 times on \(\mathcal{D}_{target}\) to generate the shadow prompt set \(\Delta_{shadow}\) and target prompt set \(\Delta_{target}\), respectively. We further adjust the condition values of the training dataset size in CIFAR10 to {500, 1000, 1500, 1750, 1800, 2000} and use RN18 as the pre-trained model to explore the performance of the property inference attack when the options for the dataset size are closer together. To this end, we have 6 sampling functions in total and run each sampling function 400 times on \(\mathcal{D}_{shadow}\) and 80 times on \(\mathcal{D}_{target}\) to generate the shadow prompt set \(\Delta_{shadow}\) and target prompt set \(\Delta_{target}\), respectively. The test accuracy for the proportion of males is 90.25%, while that for the training dataset size is 95.40%, demonstrating that property inference attacks can successfully infer finer-grained information from prompts. **Takeaways.** We show that the property inference attacks achieve remarkable performance on diverse datasets and pre-trained models. Moreover, the proposed attacks can be extended to multi-class classification, providing evidence that the adversary can infer fine-grained property information from prompts.

### Factors Affecting Property Inference

We conduct an empirical analysis to investigate the factors that may influence the performance and cost of property inference attacks on prompts. **Number of Epochs.** Previously, we train both shadow prompts and target prompts for 50 epochs.
However, it is likely that the adversary has no knowledge about the number of epochs used for the target prompts. Next, we investigate whether the number of epochs in the training process of the shadow prompts must match that of the target prompts in order to maintain a strong attack performance. Concretely, we vary the number of epochs for shadow prompts from 30 to 70 while fixing the number of epochs for target prompts to 50. The minimum number of epochs is 30 because the prompt starts to outperform the pre-trained model alone on the downstream task at this point. We show the attack performance on CelebA in Figure 3(a). In general, the proposed attacks work similarly well when the numbers of epochs for target and shadow prompts do not match. The results also show that the proposed attacks can achieve comparable performance even with fewer epochs, e.g., 30 epochs, to train shadow prompts. In addition, increasing the number of epochs for shadow prompts does not improve the attack performance. For example, the test accuracy for inferring the proportion of youth is between 92.50% and 94.00% depending on the number of epochs. This implies that the proposed attacks are robust to variations in the number of epochs for training shadow prompts, making them more practical and efficient. **Attack Training Dataset Size.** So far, we have assumed the adversary can rely on an attack training dataset containing 2000 shadow prompts. However, creating such a dataset costs considerable computational resources. Hence, we investigate the influence of the attack training dataset size on the attack performance. Specifically, we randomly sample balanced subsets from the original attack training dataset on CelebA with different sizes {400, 800, 1200, 1600, 2000}. The size of the attack testing dataset remains the same as for the previous experiments, i.e., 400 target prompts. As shown in Figure 3(b), the size of the training dataset only has a negligible influence on the attack performance, indicating that a relatively small number of training samples, e.g., 400 shadow prompts, is sufficient to launch the property inference attacks against prompts. This finding implies that the cost of the proposed attack can be further reduced. **Takeaways.** We demonstrate that the proposed attacks can be performed cost-efficiently by training the shadow prompts with fewer epochs or with a smaller number of shadow prompts. We further show that to achieve a good attack performance, the adversary must have a shadow dataset of similar distribution as the target dataset and must have access to the same pre-trained model. The results are displayed in Appendix B.

Figure 3: Attack performance of the proposed property inference attacks on CelebA with (a) different numbers of epochs for training shadow prompts and (b) different sizes of the attack training dataset, using RN18 as the pre-trained model.

### Defense

**Gaussian Noise as Defense.** We adopt a defense mechanism from prior research [54, 18, 57]. Specifically, the PaaS provider adds Gaussian noise \(\mathcal{N}(0,\sigma^{2}I)\) to the released prompts, resulting in noised target prompts \(\Delta^{\prime}_{target}=\{\delta_{i}+\epsilon_{i}\mid\forall\delta_{i}\in \Delta_{target},\epsilon_{i}\sim\mathcal{N}(0,\sigma^{2}I)\}\). The magnitude of the noise is controlled by the value of \(\sigma\), with larger values corresponding to stronger noise. We examine the effectiveness of the proposed defense with \(\sigma\in\{1,2,3,4,5\}\).
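In code, this defense is a one-liner; the sketch below is ours and simply implements \(\delta^{\prime}=\delta+\epsilon\) with \(\epsilon\sim\mathcal{N}(0,\sigma^{2}I)\).

```python
import torch

def release_noised_prompt(delta, sigma):
    """Defense sketch: release delta' = delta + eps with eps ~ N(0, sigma^2 I).
    Larger sigma gives stronger protection but, as shown next, lower prompt utility."""
    return delta + sigma * torch.randn_like(delta)
```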
We first report the target performance, i.e., prompt utility, on all datasets in Figure 4. The evaluation metric is the average test accuracy of all target prompts in the attack testing dataset on the specific downstream tasks. In general, we observe that the target performance decreases on all datasets by a large margin with the increase of \(\sigma\). For example, on CelebA the prompt utility decreases from 33.95% to 10.55%, which is even lower than random guessing (12.50% for the 8-class task), meaning the prompt is no longer usable. We present the attack performance in Figure 5(a). We can observe that the effectiveness of the proposed attack significantly declines with the increase of \(\sigma\). The test accuracy on CelebA and UTKFace drops to almost random guessing when \(\sigma\geq 2\). The attacks on CIFAR10 are more robust to the defense, but the performance still starts decreasing when \(\sigma=3\). **Adaptive Attacks.** We further consider an adaptive adversary [21] who is aware of the defense mechanism, i.e., that Gaussian noise has been added to the target prompts. They can construct their attack training dataset with noised shadow prompts. Similarly, we set \(\sigma\in\{1,2,3,4,5\}\) for both shadow and target prompts. We report the performance of adaptive attacks on all datasets in Figure 5(b). The results show that the attack performance declines less and more slowly. For example, the attack performance barely decreases when \(\sigma=1\) on all datasets. In addition, when considering the size of the prompt training dataset as the target property, the attack performance only has negligible degradation with the growth of the Gaussian noise. For example, the attack performance shows almost no drop even with \(\sigma=5\) on all datasets. **Takeaways.** These findings indicate that adding Gaussian noise as a defense mechanism can ostensibly decrease the attack performance, but the defender suffers from unacceptable prompt utility degradation. Moreover, this defense can be bypassed by the adaptive attack. We leave it as future work to investigate more effective defenses.

Figure 4: Target performance on four datasets. The x-axis denotes the magnitude of the Gaussian noise, from 0 to 5, where 0 means the proposed defense mechanism is not implemented. The y-axis represents the target performance on the downstream tasks with respect to the average test accuracy of all target prompts in the attack testing dataset.

Figure 5: Attack performance of (a) naïve attacks where the adversary is not aware of the proposed defense and (b) adaptive attacks on four datasets. The x-axis denotes the magnitude of Gaussian noise, from 0 to 5, where 0 means the proposed defense mechanism is not implemented. The y-axis represents the attack performance with respect to the test accuracy.

We later show that the proposed defense can achieve a decent utility-defense trade-off by using a smaller \(\sigma\), e.g., \(\sigma=0.6\), indicating that the statistical information of the training dataset in the target prompts is harder to hide than individual information, i.e., membership (see Section 4.7).

## 4 Membership Inference Attacks

In this section, we leverage membership inference attacks to quantify the privacy risks of prompts.

### Threat Model

**Adversary's Goal.** In membership inference, the goal of the adversary is to infer whether a given data sample \(x\) is in the training dataset of the target prompt \(\delta_{target}\).
**Adversary's Knowledge and Capability.** Similar to the property inference attack, the adversary can query the PaaS service to get \(\delta_{target}\) and has white-box access to the pre-trained model \(\mathcal{M}\). The adversary has a shadow dataset \(\mathcal{D}_{shadow}\) that is from the same distribution as \(\mathcal{D}_{target}\) to train the shadow prompt \(\delta_{shadow}\). We later demonstrate that the adversary can operate in a data-free manner, i.e., leveraging a \(\mathcal{D}_{shadow}\) that comes from a different distribution than \(\mathcal{D}_{target}\).

### Measurement Methodology

**Attack Setup.** The adversary first divides the shadow dataset into two disjoint subsets: \(\mathcal{D}_{shadow}^{train}\), referred to as the member split, and \(\mathcal{D}_{shadow}^{test}\), referred to as the non-member split. The member split is then utilized for training the shadow prompt \(\delta_{shadow}\), which mimics the behavior of \(\delta_{target}\). **Attack Descriptions.** We adopt three types of membership inference attacks, i.e., neural network-based (NN-based) attacks [44], metric-based attacks [47], and gradient-based attacks [38, 24]. We outline their technical details below. _NN-based Attacks [44]._ The adversary constructs the attack training dataset on \(\mathcal{D}_{shadow}\). Specifically, they combine each sample in \(\mathcal{D}_{shadow}\) with the shadow prompt trained on \(\mathcal{D}_{shadow}^{train}\) and query the corresponding pre-trained model to get the top-5 posteriors as attack input features. Then, for each sample in the member split, the adversary labels the corresponding top-5 posteriors as "member." For samples that belong to the non-member split, their top-5 posteriors are labeled as "non-member." At inference time, the adversary queries the pre-trained model with the given data sample \(x\) and \(\delta_{target}\) to obtain the top-5 posteriors and feeds them to the attack model to obtain its membership prediction. _Metric-based Attacks [47]._ Song and Mittal propose metric-based attacks using four metrics, i.e., prediction correctness (metric-corr), prediction confidence (metric-conf), prediction entropy (metric-ent), and modified prediction entropy (metric-ment). Unlike NN-based attacks, where a neural network is trained to make membership predictions, metric-based attacks first calculate class-wise thresholds over \(\delta_{shadow}\). Then, at inference time, the adversary calculates the metric values and compares them with the pre-calculated thresholds to determine the membership status for the given data samples. It is worth noting that in scenarios where the adversary possesses data from a different distribution than the target dataset, we calculate an overall threshold for all classes. This is because certain classes present in the target dataset may not be represented in the shadow dataset, so class-specific thresholds would not be applicable. _Gradient-based Attacks [38]._ Nasr et al. propose gradient-based attacks on the basis of the NN-based attacks by incorporating augmented input information. Specifically, the adversary has white-box access to the pre-trained model and the target prompt with its intermediate computations, e.g., gradients. They combine each sample \(x\) with the prompt and input the resulting data into the pre-trained model to obtain the top-5 posteriors, the loss incurred during the forward pass, the gradient of the prompt during the backward pass, and an indicator that denotes the correctness of the prediction. These data are treated as the attack input for the attack model.
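As a concrete instance of the metric-based family, here is a short sketch of the modified prediction entropy (metric-ment); the formula follows Song and Mittal [47], while the variable names and thresholding comment are ours.

```python
import numpy as np

def modified_entropy(probs, y, eps=1e-12):
    """Mentr from Song and Mittal [47]: lower values indicate likely members."""
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0 - eps)
    score = -(1.0 - p[y]) * np.log(p[y])
    score -= sum(p[i] * np.log(1.0 - p[i]) for i in range(len(p)) if i != y)
    return score

# Membership rule: predict "member" if the score falls below a (class-wise)
# threshold calibrated on the shadow prompt's member / non-member splits.
```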
### Measurement Settings

**Datasets and Downstream Tasks.** We reuse CIFAR10, CelebA, UTKFace, and AFAD to evaluate membership inference attacks. The downstream tasks for all datasets are the same as those for the property inference attacks (see Section 3.3). We randomly sample 8000 data samples for each dataset in the main experiments and then evenly split each dataset into four disjoint sets, i.e., \(\mathcal{D}_{target}^{train}\), \(\mathcal{D}_{target}^{test}\), \(\mathcal{D}_{shadow}^{train}\), and \(\mathcal{D}_{shadow}^{test}\). \(\mathcal{D}_{target}^{train}\) is used to develop the target prompt \(\delta_{target}\), and \(\mathcal{D}_{target}^{test}\) is the evaluation set. \(\mathcal{D}_{shadow}^{train}\) and \(\mathcal{D}_{shadow}^{test}\) are used to build the attack model as discussed in Section 4.2. **Attack Configurations.** All experimental settings of the pre-trained models and target prompts are the same as those for the property inference attacks except for the number of epochs. We follow the default setting to train all prompts for 1000 epochs. For attacks that leverage neural networks as the attack model, we employ the cross-entropy loss function and optimize it using the Adam optimizer. We conduct a grid search on {1e-2, 1e-3, 1e-4, 1e-5} to determine the optimal learning rate for each attack, and all attack models are trained for 100 epochs. For the NN-based attacks, we use a 2-layer MLP as the attack model and set the size of the hidden layer to 32. For the gradient-based attacks, we utilize an attack model composed of four sub-networks, each corresponding to one type of attack information (gradient, top-5 posteriors, loss, and indicator), and the outputs of these sub-networks are concatenated to form the final input of a 2-layer MLP. **Metric.** Following the convention [17, 46], we use test accuracy as the main metric to evaluate the attack performance.

### Measurement Results

**Membership Inference Privacy Risks.** We report the performance of the three membership inference attacks in Figure 6. We conduct three separate runs of each attack experiment and report the average values as the final results. We observe that metric-based attacks achieve the best performance in most cases, e.g., 93.20% membership inference attack accuracy on AFAD. Unless otherwise specified, we use metric-ment attacks as the representative of the metric-based attacks, as they consistently achieve the best performance across all datasets (see Appendix C). The gradient-based attacks also exhibit strong performance, with results that are closely comparable to those of the metric-based attacks. Song and Mittal [47] also report that the metric-based attacks can outperform NN-based attacks and have similar performance as gradient-based attacks. NN-based attacks perform worse than gradient-based attacks. This is expected since the adversary leverages less information from the target prompt. **Analysis.** Figure 6 shows that the performance of the three attacks varies across different pre-trained models and different datasets. We hypothesize that the different overfitting levels may affect the attack performance. Following previous work [17, 43], we calculate the difference between train accuracy and test accuracy to measure the overfitting level of a given target prompt. We train five target prompts with different random seeds for each experimental setting. The relationship between overfitting levels and attack performance is illustrated in Figure 7.
We observe that different pre-trained models and different datasets have different overfitting levels. Meanwhile, our results demonstrate that overfitting does have a significant effect on membership inference attacks. The overall trend is that the higher the overfitting level, the better the attack performance. To quantify this correlation, we calculate the Pearson correlation score between the overfitting level and the attack performance. The result is 0.89. Our finding is in line with previous analyses [33, 45]. **Takeaways.** We show that prompts can leak sensitive membership information about their training dataset. Similar to previous analyses, overfitting is strongly correlated with the membership inference performance.

### Factors Affecting Membership Inference From the Victim's Side

In this section, we measure the factors that may affect the membership inference privacy risks from the perspective of the victim. As shown in Figure 8, the number of epochs used to train the target prompt and its training dataset size (\(\mathcal{D}_{target}^{train}\)) are closely related to the overfitting level of the target prompt. A larger number of epochs results in larger overfitting levels. On the contrary, a larger training dataset size results in reduced overfitting levels. Therefore, we explore how these two factors affect the attack performance. **Number of Epochs.** We set the number of epochs for the target prompt to {200, 400, 600, 800, 1000} and use the same number of epochs for the shadow prompt in each experiment. The results are shown in Figure 9. We observe that, in general, more epochs lead to better attack performance, hence greater membership inference privacy risks. The attack performance becomes steady after 500 epochs, while the overfitting level also becomes stable simultaneously, as shown in Figure 8(a). **Prompt Training Dataset Size.** We investigate the effect of the training dataset size on the attack performance by varying the size from 500 to 5000. To control the variables, we always fix the other three sets (\(\mathcal{D}_{target}^{test}\), \(\mathcal{D}_{shadow}^{train}\), and \(\mathcal{D}_{shadow}^{test}\)) to the same size as \(\mathcal{D}_{target}^{train}\). As illustrated in Figure 10, the attack performance decreases as the dataset size grows. The general trend of the attack performance is also consistent with the findings in Figure 8(b). That is, more training data reduces the overfitting level in most cases, leading to a decrease in the attack performance. There is a significant drop in the overfitting level on CIFAR10 when increasing the size of \(\mathcal{D}_{target}^{train}\) from 2000 to 5000. Therefore, the test accuracy of metric-based attacks drops from 86.60% to 64.00%. **Takeaways.** We perform an analysis of the relation between overfitting levels and attack performance. Our results show that more epochs and fewer training data can aggravate overfitting and pose a more severe threat to membership privacy.

Figure 6: Attack performance of three membership inference attacks on four datasets.

Figure 7: Overfitting levels of target prompts across (a) different pre-trained models on AFAD and (b) different datasets using BiT-M as the pre-trained model. Different points with the same marker denote different runs of the same pre-trained model/dataset using different random seeds.

Figure 8: Overfitting levels of target prompts with (a) different numbers of epochs and (b) different sizes of the training dataset, using ViT-B as the pre-trained model.
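To make the correlation analysis above concrete, here is a small sketch of the overfitting measure and the Pearson computation; the numeric arrays are hypothetical placeholders for illustration, not the paper's measurements.

```python
import numpy as np

def overfitting_level(train_acc, test_acc):
    """The train/test accuracy gap used in this section to measure overfitting."""
    return train_acc - test_acc

# Hypothetical per-setting measurements; the paper reports r = 0.89 over its runs.
gaps = np.array([overfitting_level(0.99, 0.94), overfitting_level(0.98, 0.86),
                 overfitting_level(0.99, 0.79), overfitting_level(0.97, 0.66)])
attack_acc = np.array([0.58, 0.66, 0.75, 0.86])
r = np.corrcoef(gaps, attack_acc)[0, 1]  # Pearson correlation coefficient
```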
### Factors Affecting Membership Inference From the Adversary's Side

We evaluate the factors that may affect the membership inference privacy risks from the perspective of the adversary. In previous experiments, we made two assumptions: 1) the adversary has a dataset \(\mathcal{D}_{shadow}\) that comes from the same distribution as \(\mathcal{D}_{target}\), and 2) the PaaS provider offers users the target prompt with white-box access to the pre-trained model. Here, we evaluate if these two assumptions are needed to mount a successful membership inference attack. **Dataset Assumption.** We relax the same-distribution assumption by leveraging a shadow dataset that comes from a different distribution than \(\mathcal{D}_{target}\) to train the shadow prompt; the results with the three attack methodologies are shown in Figure 11. In the diagonal of the heatmaps, we show the results of the adversary having access to a \(\mathcal{D}_{shadow}\) that comes from the same distribution as \(\mathcal{D}_{target}\). We observe that the performance of NN-based, metric-based, and gradient-based attacks is slightly reduced but remains effective. For instance, as shown in Figure 11(b), using any one of the four datasets as the shadow dataset to launch the metric-based attack can achieve a test accuracy of around 86.00% when the target dataset is CIFAR10. Interestingly, CIFAR10 contains images of 10 classes such as cars and trucks, but the other three datasets only include facial images. This supports the findings of Salem et al. [44] and Li et al. [27], which also report the effectiveness of membership inference using shadow datasets from different domains. Moreover, we present the average test accuracy and the average drop in accuracy of the three attacks on different pre-trained models in Appendix D. The results show that the metric-based and gradient-based attacks achieve the best attack performance on average, while the NN-based and gradient-based attacks, in general, are more robust than the metric-based attacks. Hence, we conclude that the gradient-based attacks exhibit superior performance in terms of both utility and robustness after relaxing the dataset assumption. However, it should be noted that the gradient-based attacks come at the cost of high computational resources and a significant amount of information needed. Overall, our findings suggest that we can relax the assumption of the same-distribution shadow dataset, implying greater membership inference privacy risks of prompts. **Pre-trained Model Assumption.** In the previous evaluation, we assume the adversary has white-box access to the pre-trained model. However, the PaaS provider may only allow users to submit prompted images and receive the corresponding results, thus limiting access to the pre-trained model. The adversary has to develop their own pre-trained models, which may be different from the pre-trained models used to train the target prompts (abbreviated as the target model). We, therefore, measure the impact of the discrepancy in architecture between the pre-trained model used to train the shadow prompts (abbreviated as the shadow model) and the target model on the attack performance. The results of the three attacks are shown in Figure 12. In the diagonal of the heatmaps, we show the results of the adversary having white-box access to the same pre-trained model used to train the target prompt. We observe that, in some cases, the attack performance decreases noticeably but remains effective.
For example, when the pre-trained model of the target prompt is ViT-B on CelebA, the performance of metric-based attacks drops from 86.00% to 78.40% (77.10%) when using RN18 (BiT-M) as the pre-trained model for the shadow prompt. However, in certain cases, all three attacks fail completely, i.e., they become random guesses. For instance, when the adversary uses ViT-B to attack the target prompt trained on RN18, these three methodologies become random guesses. We also present the average test accuracy and the average drop in accuracy of the three attacks on different datasets in Appendix D. The gradient-based and metric-based attacks achieve the best attack performance, and the gradient-based attacks are more robust than the metric-based attacks.

Figure 9: Attack performance of three membership attacks with varying numbers of epochs, using ViT-B as the pre-trained model.

Figure 10: Attack performance of three membership attacks with different sizes of \(\mathcal{D}_{target}^{train}\), using ViT-B as the pre-trained model.

However, the average drop in accuracy of all attacks after relaxing the pre-trained model assumption, in general, is larger than that of relaxing the dataset assumption. **Discussion.** We have shown that all methodologies only have slight performance degradation after relaxing the dataset assumption. Meanwhile, after relaxing the pre-trained model assumption, these attack methodologies are effective in some cases but fail to maintain high robustness, i.e., they fail in other cases. Previous work [27, 44] on membership inference against traditional ML classifiers has shown that having shadow models with different architectures than the target models does not have a strong impact on the attack performance. However, we do not observe the same in VPL. One possible explanation is that a prompt is specific to the machine learning model it is trained on. In other words, prompts from different models share less similarity, which makes the membership inference knowledge hard to transfer among them. As illustrated in Figure 13, we find that metric-corr attacks have no performance deterioration after relaxing these assumptions, as they do not rely on the shadow technique. Thus, the adversary can leverage the metric-corr attacks when relaxing the data assumption and the pre-trained model assumption. **Takeaways.** Our results show that the adversary can be data-free, as the attack performance only has a slight deterioration and remains effective. The results also indicate that the adversary has some dependency on the knowledge of pre-trained models to steal private information, as not all attack methodologies can be successfully launched after relaxing the pre-trained model assumption. However, we show that the adversary can still leverage the metric-corr attacks to obtain decent attack performance with high robustness, as they do not rely on the shadow technique.

### Defense

**Gaussian Noise as Defense.** We have demonstrated that the prompts are also vulnerable to membership inference attacks. Meanwhile, in the above experiments, we observe that the performance of membership inference is heavily related to the overfitting level of the target prompt. Potentially, a defender can decrease the threat to membership privacy by reducing the overfitting level. As shown in Figure 9 and Figure 10, leveraging fewer epochs and more data to train the target prompt can decrease the attack performance to some extent.
However, using these methods comes at the cost of either the utility of the target prompt or the resources needed to collect and process data. We also apply the widely adopted Differentially Private Stochastic Gradient Descent (DP-SGD) [2], which involves adding noise to clipped gradients, as a defense mechanism. However, the experimental results show that it is hard to maintain the prompt utility even with a larger privacy budget, e.g., \(\epsilon=20\). We hypothesize that DP-SGD may work on large datasets, but not on the data for prompt learning, since it is relatively small. Hence, we investigate whether the defense mechanism used in Section 3.6, i.e., adding Gaussian noise to the prompts, can reduce the risks of membership leakage. We set \(\sigma\in\{0.2,0.4,0.6,0.8,1.0\}\). We first report the target performance on CIFAR10 in Figure 14(a). The evaluation metric is the test accuracy of the target prompt. We observe that the target performance, i.e., the prompt utility, only decreases heavily when \(\sigma\) exceeds 0.6. For example, the prompt utility remains above 41.40% when \(\sigma\leq 0.6\) and then decreases heavily from 41.40% to 29.70% on CIFAR10 when increasing \(\sigma\) from 0.6 to 0.8. We then present the attack performance where the adversary is unaware of the defense mechanism in Figure 14(b). We can observe that all attacks are close to random guesses when \(\sigma\geq 0.6\), showing that there is a practical utility-defense trade-off at \(\sigma=0.6\).

Figure 11: Attack performance of three attacks after relaxing the dataset assumption, using ViT-B as the pre-trained model.

Figure 12: Attack performance of three attacks after relaxing the pre-trained model assumption on CelebA.

Figure 13: Attack performance of metric-corr attacks after relaxing (a) the dataset assumption using ViT-B as the pre-trained model and (b) the pre-trained model assumption on AFAD.

**Adaptive Attacks.** We further consider an adaptive adversary [21] who is aware of the defense mechanism. Hence, the adversary can craft their attack training datasets using the shadow prompt with Gaussian noise. We set \(\sigma\in\{0.2,0.4,0.6,0.8,1.0\}\) for both shadow and target prompts. We report the performance of adaptive attacks on CIFAR10 in Figure 14(c). The results show that the attack performance is still close to random guessing when \(\sigma\geq 0.6\).

**Takeaways.** When defending against membership inference attacks, the proposed defense mechanism can achieve a decent utility-defense trade-off when setting \(\sigma=0.6\). A similar conclusion can be drawn from the other three datasets.
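To make the defense concrete, the sketch below shows one way the Gaussian-noise mechanism could be applied to a learned pixel-space prompt before release; the tensor shape, function name, and release-time usage are our assumptions for illustration, not the paper's implementation.

```python
import torch

def defend_prompt(prompt: torch.Tensor, sigma: float = 0.6) -> torch.Tensor:
    """Perturb a learned visual prompt with zero-mean Gaussian noise.

    `prompt` is assumed to be the trainable pixel-space padding/patch
    added to every input image; sigma follows the grid evaluated above.
    """
    return prompt + sigma * torch.randn_like(prompt)

# Hypothetical usage: the noise is drawn once at release time, so the
# published prompt no longer matches the one the attacks were tuned on.
prompt = torch.randn(3, 224, 224) * 0.1   # stand-in for a trained prompt
released = defend_prompt(prompt, sigma=0.6)
```

The sigma=0.6 default mirrors the utility-defense sweet spot reported above; larger values would push the attacks closer to random guessing at the price of prompt utility.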
## 5 Related Work

**Property Inference Attacks.** Property inference [3, 7, 36, 53, 55] aims to extract sensitive global properties of the training data distribution from an ML model that the model owner does not want to share. It is an important privacy attack against ML models, as it can violate the prompt owner's privacy, i.e., proprietary information about the dataset, and enable attackers to perform tailored attacks, e.g., enhancing membership inference attacks [55]. The main approach for launching these attacks is building a meta-classifier on a large number of shadow models [3]. Existing work focuses on deep neural networks, including fully connected neural networks [14], generative adversarial networks (GANs) [55], and graph neural networks (GNNs) [53].

**Membership Inference Attacks.** Membership inference [24, 28, 29, 44, 46, 47] is another important type of privacy attack against ML models, where the adversary aims to infer whether a given data sample was involved in a target model's training dataset. Shokri et al. [46] propose the first membership inference attack, which depends on training multiple shadow models for developing their attack models. Salem et al. [44] then relax the assumptions proposed by Shokri et al. [46]. Yeom et al. [50] attribute the vulnerability of membership inference to the overfitting of ML models. Song and Mittal [47] propose metric-based attacks that rely on pre-calculated thresholds over shadow models to determine the membership status. Nasr et al. [38] perform a thorough investigation of membership privacy in both black-box and white-box settings for both centralized and federated learning scenarios. More recently, Liu et al. [32] leverage the loss trajectory to further enhance the attack performance. Most recent work focuses on deep neural networks, including GNNs [48, 16], multi-modal models [19], and multi-exit networks [27]. Previous work has demonstrated that the fine-tuning paradigm is vulnerable to these privacy attacks [37, 10]. The privacy risk in the fine-tuning paradigm resides at the model level, as the private information is leaked through fine-tuned models. This differs from the privacy risk associated with the prompt learning paradigm, where the risk lies at the input level, as the prompt exists in the pixel space.
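For readers unfamiliar with the metric-based family used throughout this paper, the sketch below illustrates the threshold idea in the spirit of Song and Mittal's attacks: a loss threshold is calibrated on a shadow prompt and then applied to the target. The array names and the accuracy-maximizing calibration are our illustrative assumptions.

```python
import numpy as np

def metric_based_mia(shadow_losses_in, shadow_losses_out, target_losses):
    """Threshold-style membership inference.

    The threshold is calibrated on a shadow prompt whose training
    ("in") and held-out ("out") per-sample losses are known; target
    samples with loss below the threshold are predicted to be members.
    """
    candidates = np.concatenate([shadow_losses_in, shadow_losses_out])
    labels = np.concatenate([np.ones_like(shadow_losses_in),
                             np.zeros_like(shadow_losses_out)])
    best_t, best_acc = None, 0.0
    # pick the loss threshold that best separates members on the shadow side
    for t in np.unique(candidates):
        acc = np.mean((candidates <= t) == (labels == 1))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return target_losses <= best_t   # boolean membership predictions
```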
## 6 Limitation and Future Work

**Efficacy of VPL.** VPL is an emerging ML paradigm. Although its current performance cannot rival that of a fine-tuned model, an increasing number of studies are attempting to enhance its performance through various approaches, e.g., label mapping [9] and better data homogeneity [20]. Since we are the first to explore the vulnerabilities of the visual prompt, we have focused on the widely recognized VPL paradigm and followed its default training settings [4]. We anticipate that as VPL methods with enhanced performance are introduced, it will be straightforward to extend our measurement, and we thus recognize this as a promising avenue for future research.

**Defense.** In the evaluation, we show that adding Gaussian noise to the prompt can mitigate the membership inference attacks with a decent utility-defense trade-off but fails to defend against property inference attacks. DP-SGD fails to preserve the original prompt utility. Since the privacy risk in the prompt learning paradigm is at the input level, devising diverse defense mechanisms for it is more challenging than addressing privacy risks at the model level. We leave exploring effective defenses against property inference attacks as future work.

**NLP Prompt Learning.** Another interesting direction for future work is to apply the two proposed privacy attacks, along with their motivation, to prompt learning in the NLP domain [25, 26], as the NLP prompt is essentially a (soft) token that can be added to the text input, operating at the input level.

Figure 14: Prompt utility and attack performance using the proposed defense on CIFAR10.

## 7 Conclusion

In this paper, we conduct the first privacy assessment of prompts learned by VPL through the lens of property inference attacks and membership inference attacks. Our empirical evaluation shows that prompts are vulnerable to both of these attacks. Moreover, we have discovered that an adversary can successfully mount the property inference attacks by training only a few shadow prompts. They can also relax the dataset assumption and still achieve effective membership inference attacks. We further make an initial investigation of possible defenses. Experiments show that our method, i.e., adding Gaussian noise to prompts, can mitigate the membership inference attacks with a decent utility-defense trade-off but fails to defend against property inference attacks. We hope our results can raise awareness among stakeholders when deploying prompt learning in real-world applications. Moreover, we will share our code and models to facilitate research in this field.

**Acknowledgements.** We thank all anonymous reviewers for their constructive comments. This work is partially funded by the European Health and Digital Executive Agency (HADEA) within the project "Understanding the individual host response against Hepatitis D Virus to develop a personalized approach for the management of hepatitis D" (D-Solve) (grant agreement number 101057917).
2303.11848
Dens-PU: PU Learning with Density-Based Positive Labeled Augmentation
This study proposes a novel approach for solving the PU learning problem based on an anomaly-detection strategy. Latent encodings extracted from positive-labeled data are linearly combined to acquire new samples. These new samples are used as embeddings to increase the density of positive-labeled data and, thus, define a boundary that approximates the positive class. The further a sample is from the boundary the more it is considered as a negative sample. Once a set of negative samples is obtained, the PU learning problem reduces to binary classification. The approach, named Dens-PU due to its reliance on the density of positive-labeled data, was evaluated using benchmark image datasets, and state-of-the-art results were attained.
Vasileios Sevetlidis, George Pavlidis, Spyridon Mouroutsos, Antonios Gasteratos
2023-03-21T13:48:53Z
http://arxiv.org/abs/2303.11848v1
# Dens-PU: PU Learning with Density-Based Positive Labeled Augmentation ###### Abstract This study proposes a novel approach for solving the PU learning problem based on an anomaly-detection strategy. Latent encodings extracted from positive-labeled data are linearly combined to acquire new samples. These new samples are used as embeddings to increase the density of positive-labeled data and, thus, define a boundary that approximates the positive class. The further a sample is from the boundary the more it is considered as a negative sample. Once a set of negative samples is obtained, the PU learning problem reduces to binary classification. The approach, named Dens-PU due to its reliance on the density of positive-labeled data, was evaluated using benchmark image datasets, and state-of-the-art results were attained. ## 1 Introduction Labeled data are often scarce and expensive to obtain in many real-world applications, making training machine-learning models a challenging task [1]. In traditional supervised learning, the goal is to train a model to predict the correct class label for every sample in a training dataset [2]. The training data consist of labeled examples associated with a known class label. Typically, the class distribution of labeled data is assumed to be representative of the class distribution of unlabeled ones. Prior knowledge of the labels makes it easy to train a model to accurately predict the class labels for unseen samples. As Figure 1 shows, in the case of PU learning, the class label is known only for data belonging to a single class; thus, for negative samples, the label is unknown [3]. The lack of this knowledge makes it impossible to effectively train a typical binary classification model to distinguish between positive and negative classes. The unknown distribution of the negative samples renders the PU learning problem challenging [4]. This has led to the development of many different approaches, as described in the following section. The proposed novel methodology is boundary-aware and utilizes Gaussian sampling and anomaly detection. It creates a mass of embeddings from pairs of encoded positive-labeled data, which is essential for defining a rule-based boundary around the positive class. Negative samples are obtained from unlabeled data using anomaly detection. With the available positive-labeled data and the newly acquired negative set, the PU learning problem is simplified into a binary classification problem. A deep-learning binary classifier is employed to address this problem. Two datasets, CIFAR-10 [5] and Fashion-MNIST [6], were used as the evaluation benchmarks. The proposed method achieved state-of-the-art results by following the same protocols as in [7]. In the following sections, related work in the field of PU learning is reviewed, and details of the steps involved in Dens-PU and experiments are described. The findings and implications of the results are discussed, followed by the potential impact of this approach. The paper concludes by demonstrating the effectiveness and usefulness of a boundary-aware PU learning methodology for image classification. ## 2 Related work PU learning aims to learn a classifier \(f\) that can accurately distinguish between the positive and negative classes despite the lack of labeled negative examples [3]. 
A generic approach to PU learning is to estimate the probability of each example in the set of unlabeled data \(U\) belonging to the positive class and then use this probability estimate to train a binary classifier \(f\) [8]. Traditional binary classification algorithms can be adapted to handle the absence of negative-labeled data in PU learning, as discussed in [9]. An unbiased risk estimator for PU learning, called uPU, was introduced in [10]; this risk estimator may yield negative values, which can be problematic owing to strong model overfitting. To address this, a non-negative risk estimator known as nnPU was proposed in [11]. Recently, a self-supervised extension of nnPU was introduced in [12], which utilizes auxiliary tasks, such as model calibration and a self-paced curriculum. Another variant of nnPU, presented in [13], modifies the weights to compensate for imbalanced data in the minority class. PUSB [14] relaxes the assumption of the order-preserving property, whereas aPU [15] addresses the PU learning problem by fixing the negative distribution while the positive distribution can shift arbitrarily. PU learning approaches often rely on heuristic or statistical methods to estimate the negative class from unlabeled data, because identification of the negative class is challenging [16, 17]. Two-step methods, such as graph-based methods [18, 19], differ in their approach to assigning labels to unlabeled data. PUbN assumes that the unlabeled data include a small number of negative examples that are highly representative of the negative class and combines them with positive-labeled data to train the model [4]. GenPU [20] leverages the GAN framework, whereas KLDCE [21] translates PU learning into a label-noise problem and weakens its side effects via centroid estimation of the corrupted negative set. PULNS [22] incorporates reinforcement learning to select effective negative samples. However, these approaches are prone to errors and may have limited effectiveness in specific scenarios [23].

Figure 1: Toy dataset illustrating complete label knowledge (left) in typical supervised learning, and (right) PU learning, where only some samples of a single label are given.

The identification of the negative class from the unlabeled data can be formulated as a problem of identifying samples that are unlikely to have been generated by the normal process of the underlying data distribution. Several such methods exist in the literature, including the Local Outlier Factor (LOF) [24], which measures the degree of the local density of each point relative to its neighbors to detect outliers; One-Class Support Vector Machines [25], which learn a hypersphere in the feature space that encompasses the majority of the data points and identify the outliers as the points outside this hypersphere; and Isolation Forest [26], which discovers anomalies by calculating the degree of separation based on the steps needed to isolate a sample from its group [27]. However, all these methods have limitations in the context of PU learning. Anomaly detection methods rely on the assumption that the negative class is less frequent and distinctively different from the positive class. In practical scenarios, the negative class may be very similar to the positive one, making it difficult to detect outliers. Additionally, anomaly detection methods are known to be sensitive to the choice of hyperparameters and may require careful tuning to achieve optimal performance.
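As a concrete illustration of the three detectors just mentioned, the following minimal sketch fits each on "normal" data only and scores unseen points; the toy data, hyperparameter values, and variable names are ours, chosen only to show the shared fit/predict pattern.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in for the known class
queries = rng.normal(0.0, 3.0, size=(100, 8))   # points to score

# Each detector is fit on the "normal" data only and then asked whether
# new points look like they came from the same generating process.
for detector in (IsolationForest(n_estimators=100, random_state=0),
                 OneClassSVM(nu=0.05),
                 LocalOutlierFactor(novelty=True)):
    detector.fit(inliers)
    flags = detector.predict(queries)           # +1 inlier, -1 outlier
    print(type(detector).__name__, np.mean(flags == -1))
```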
This study shares a similar intuition with [28], which suggests that density-based augmentation of the positive-labeled class can help distinguish negative samples. However, that approach differs from the one proposed in this paper in that it utilizes a probabilistic generative model to characterize the density distribution of the positive class. Another study, Dist-PU [7], shares similarities with ours, as it also aims to enhance the separability of the positive-negative distributions through entropy minimization and proposes using interpolation-based methods such as mixup to mitigate confirmation bias. The novelty of the proposed approach lies in the fact that, unlike methods that rely on biased negative data or variance-penalizing techniques, it generates biased positive samples to learn a boundary that encloses them and then estimates the degree to which unlabeled samples belong to the negative class, based on their distance from the learnt boundary.

## 3 Methodology

In a typical PU learning scenario, there is a set \(X\) of samples \(x_{i}\in X\), \(i=\{1,...,n\}\), a corresponding set \(Y=\{0,1\}\) of labels \(y_{i}\in Y\), and only one class is known for any given training subset of \(X\). As a convention, \(y=1\) is chosen as the known label, such that \(p(y=1|x)=1\) for some \(x\), those considered the training set. Thus, there is no available information regarding the samples belonging to class \(y=0\) during training. In set notation, there is a positive-labeled set \(P_{L}\) and an unlabeled set \(U\), such that \(X=P_{L}\cup U\). \(U\) is composed of negative samples belonging to \(N\) and the remaining unlabeled positive samples in \(P_{UL}\), such that \(U=P_{UL}\cup N\). PU learning aims to learn a classifier that correctly assigns both labels to unseen data using the available information.

Figure 2: The proposed methodology broken down into six steps.

The proposed methodology is motivated by the intuition that, although we know nothing about the nature of \(N\), knowing something about \(P\) is sufficient for learning how to separate it from any other distribution. Although this is a perilous task, in this work it is proposed that, under the Selected Completely At Random (SCAR) labeling assumption, it is possible to apply a rule-based boundary around the known distribution to separate it from dissimilar ones. Admittedly, this yields a highly biased model of \(P\), but our aim is not to make \(P_{L}\) larger with more positive-labeled samples; it is to draw a sample of \(N\) from \(U\) with confidence, such that the difficulty of the PU learning problem is reduced to a typical binary classification.

### High-level overview

A high-level overview of the proposed approach is shown in Figure 2. Specifically, Dens-PU exploits a Convolutional Autoencoder, which is trained on the positive-labeled data (step 1), to extract encodings using the latent space (step 2). The encodings are combined linearly in pairs to acquire new samples that lie between them (step 3). A dense mass is defined using the new samples as embeddings and the original encodings (step 4). This mass serves the purpose of delineating a boundary around the positive class, assuming that many data points outside the boundary are negative samples. This is where anomaly detection becomes relevant. Anomaly detection algorithms can identify data points that are significantly different from the majority of the data, allowing the separation of negative samples from the remaining unlabeled data (step 5).
Thus, from the moment a negative-class sample is obtained, the problem can be treated as a typical binary classification in a supervised setting (step 6). Dens-PU involves several techniques, explained in the following sections.

### Convolutional Autoencoder

A Convolutional Autoencoder (CAE) is a neural network architecture that is used for unsupervised learning. The network attempts to learn a transformed representation of the input data, in this case images, while reconstructing the original data as accurately as possible [29]. A CAE consists of two parts: an encoder \(z=f_{enc}(x)=\sigma(Wx+b)\) and a decoder \(\hat{x}=f_{dec}(z)=\sigma(W^{\prime}z+b^{\prime})\), where \(x\) is the input image, \(W\), \(W^{\prime}\) are the weight matrices and \(b\), \(b^{\prime}\) are the bias vectors of the encoder and decoder respectively, \(\sigma(\cdot)\) is an activation function (such as ReLU), \(z\) is the encoded representation, and \(\hat{x}\) is the reconstructed image. The loss function used to train the CAE is typically a measure of the difference between the input image and its reconstruction. The mean squared error (MSE) is a common choice for the reconstruction loss, \(L(x,\hat{x})=\frac{1}{n}\sum_{i=1}^{n}(x_{i}-\hat{x}_{i})^{2}\), where \(n\) is the number of pixels in the image, \(x_{i}\) is the \(i\)th pixel of the input image, and \(\hat{x}_{i}\) is the corresponding pixel of the reconstructed image. A regularization term is frequently added to the loss function to prevent overfitting. One common choice is L\({}_{2}\) regularization, and the total loss function becomes \(L_{total}=L(x,\hat{x})+L_{reg}=L(x,\hat{x})+\frac{\lambda}{2}(\|W\|_{2}^{2}+\|W^{\prime}\|_{2}^{2})\), where \(L(x,\hat{x})\) is the reconstruction loss and \(L_{reg}\) is the regularization term. The CAE is the first block in the proposed methodology. It is trained on \(P_{L}\) and is used to extract the encodings \(Z_{L}\) and \(Z_{U}\) from \(P_{L}\) and \(U\), respectively. Although one would expect a CAE to learn only class-specific representations, this is not true, since autoencoders are label agnostic and reconstruct any input globally. Nevertheless, any class-specific information is just an artifact retained within the weights as a result of thematically restricted input. In Dens-PU, the CAE learns how to reconstruct any natural image (regardless of label), given a sufficient number of input images. This is shown with a toy experiment in Figure 3 (left), where the distributions of PSNR values for positive (blue) and negative (red) samples from CIFAR-10 are compared. Although the two distributions are statistically different (a Mann-Whitney statistical test with p-value \(<0.01\) rejects the hypothesis of the two distributions being the same), they cannot be used for classification. Therefore, applying a simple threshold to the PSNR of two images -- one with a known label and the other with an unknown one -- to determine whether they belong to the same class is not a viable approach.
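A minimal PyTorch sketch of such a CAE is given below. The filter sizes (64, 32, 8) and the pooling scheme follow the configuration reported later in the experiments; the exact layer arrangement yielding the 512-dimensional code, and all other details, are our assumptions.

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Minimal convolutional autoencoder in the spirit of Section 3.2."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 8, 3, padding=1), nn.ReLU(),  # 8 x 8 x 8 = 512
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 64, 2, stride=2), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)                     # latent feature map
        return self.decoder(z), z.flatten(1)    # reconstruction, 512-d code

model = CAE()
x = torch.rand(4, 3, 32, 32)                    # CIFAR-10-sized batch
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)         # reconstruction objective
```

In practice the L\({}_{2}\) term of \(L_{total}\) would be realized through the optimizer's weight-decay parameter rather than an explicit loss term.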
### Augmentation via Encoding Interpolation

Data augmentation is an essential aspect of successful deep learning pipelines. It helps to mitigate sample-size issues and often prevents overfitting. _Mixup_ is a recently introduced data augmentation technique that generates samples as random convex combinations of data points from a training set [30]. The label assignment of the newly created samples follows the distribution defined by the interpolation proportion. Studies have found that it significantly improves generalization in various tasks such as computer vision, natural language processing, and semi-supervised learning [31]. Mixup modifies Vicinal Risk Minimization [32], in which the joint distribution \(P(x,y)\) of a dataset \(D=\{(x_{i},y_{i})\}_{i=1}^{n}\) is approximated by \(P_{\nu}(\tilde{x},\tilde{y})=\frac{1}{n}\sum_{i=1}^{n}\mu(\tilde{x},\tilde{y}|x_{i},y_{i})\), where \(\mu\) is a _vicinity distribution_. The vicinity distribution measures the probability of a _virtual_ feature pair \((\tilde{x},\tilde{y})\) being close to a _training_ pair. The _mixup distribution_ is introduced as follows:

\[\mu(\tilde{x},\tilde{y}|x_{i},y_{i})=\frac{1}{n}\sum_{j=1}^{n}\mathrm{E}_{\lambda}[\delta(\tilde{x},\tilde{y})] \tag{1}\]

where \(\tilde{x}=\lambda\cdot x_{i}+(1-\lambda)\cdot x_{j}\), \(\tilde{y}=\lambda\cdot y_{i}+(1-\lambda)\cdot y_{j}\), \(\delta(x=x_{i},y=y_{i})\) is a Dirac mass centered at \((x_{i},y_{i})\), and \(\lambda\sim Beta(\alpha,\alpha)\) for \(\alpha\in(0,\infty)\). In the PU learning problem setting, the training data belong to the same class, which implies that the interpolated labels are also in the same class: \(\tilde{y}=\lambda\cdot y_{i}+(1-\lambda)\cdot y_{j}=y_{i}\). Thus, (1) becomes:

\[\mu(\tilde{x}|x_{i},y_{i})=\frac{1}{n}\sum_{j=1}^{n}\mathrm{E}_{\lambda}[\delta(\tilde{x}=\lambda\cdot x_{i}+(1-\lambda)\cdot x_{j},y_{i})] \tag{2}\]

Therefore, the use of the distribution \(P_{\nu}(\tilde{x},\tilde{y})\) around \(\mu(\tilde{x}|x_{i},y_{i})\), where \((x_{i},1)\in P_{L}\), generates samples that augment the positive class. Note that \(P_{\nu}\) contains the distribution \(P_{L}\) because \(\lambda\in[0,1]\), in particular for the cases where \(\lambda\) takes the value 0 or 1. In this sense, the number of examples generated by sampling \(P_{\nu}\) for any given pair \(x_{i}\) and \(x_{j}\), where \(i\neq j\) and \(x_{i}\neq x_{j}\), only makes the initial distribution denser.

Figure 3: The evaluation of the trained CAE using positive and negative data.

In the context of the proposed Dens-PU, the set of encodings \(Z_{L}\) received from the encoder of the CAE seeds the creation of the set of embeddings \(Z_{\nu}\), which follows a vicinity distribution. The proposed augmentation modifies the original mixup by (a) fixing \(y_{i}\) in (2), owing to the fact that only one label is known in PU learning, (b) changing the sampling distribution for \(\lambda\) from a Beta into a Gaussian, and (c) ensuring that the initial pair of samples is not reproduced among the generated ones (\(\lambda\in(0,1)\)). Specifically, \(\lambda\) takes values from

\[\mathcal{N}\Big(\frac{z_{j}-z_{i}}{2},\;e^{k}\cdot\Big\|\frac{z_{j}-z_{i}}{2}-z_{i}\Big\|_{2}^{2}\Big) \tag{3}\]

where \(z_{i},z_{j}\in Z_{L}\) and the parameter \(k\in(0,1)\) controls the spread of the variance, i.e., the similarity between the interpolated and the original samples. Figure 4 shows the different values \(\lambda\) can take using the distribution from mixup (Eq. 1) and the proposed one (Eq. 3). Mixup focuses on staying close to the original samples, whereas the proposed distribution stays close to the center, avoiding the extreme values zero and one.

Figure 4: The values \(\lambda\) acquires for different distributions.

Figure 5 shows a toy dataset consisting of positive (green) and unlabeled (black) samples (left), the result of mixup (middle), and the proposed density augmentation (right). The results of the augmentation are the magenta and cyan samples.

Figure 5: A toy dataset (left) with positive (green) and unlabeled (black) samples, their mixup (middle) and the proposed augmentation (right).
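The sketch below illustrates this densification step on encoding pairs. For simplicity we draw \(\lambda\) from a clipped Gaussian concentrated around the midpoint (\(\lambda=0.5\)), which captures the qualitative behavior in Figure 4; the exact variance parameterization of Eq. (3) is simplified here, and all names and the pair count are illustrative.

```python
import numpy as np

def densify(Z_L, n_pairs=16000, s=11, k=0.2, rng=np.random.default_rng(0)):
    """Illustrative density augmentation over positive encodings.

    For each random pair of positive encodings, s interpolated
    embeddings are generated with lambda ~ Gaussian around 0.5,
    clipped to (0, 1) so the originals are never reproduced.
    """
    n = len(Z_L)
    Z_nu = []
    for _ in range(n_pairs):
        i, j = rng.choice(n, size=2, replace=False)
        lam = np.clip(rng.normal(0.5, np.exp(k) * 0.1, size=s),
                      1e-3, 1 - 1e-3)
        Z_nu.append(lam[:, None] * Z_L[i] + (1 - lam)[:, None] * Z_L[j])
    return np.concatenate(Z_nu)        # |Z_nu| = n_pairs * s embeddings

Z_L = np.random.randn(1000, 512)       # stand-in for CAE encodings
Z_nu = densify(Z_L, n_pairs=200)       # small run for illustration
```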
### Anomaly detection

The problem in anomaly detection lies in identifying a subset of anomalous or outlier data points. Most methods rely on the assumption that the negative class is less frequent and distinctively different from the positive class. In other words, this can be formulated as a problem of identifying samples that are unlikely to have been generated by the normal process of the underlying data distribution. Let \(p\) be the probability density function of the embeddings, i.e., the inliers, and \(q\) be the probability density function of the encodings, i.e., the outliers. In addition, let \(C\) be the contamination fraction, which is the proportion of the dataset expected to be outliers. The expected loss for anomaly detection is

\[L_{E}(\psi_{pred},\psi_{true})=C\,\mathrm{E}_{\chi\sim p(\chi)}[\psi_{pred}=1\mid\psi_{true}=0]+(1-C)\,\mathrm{E}_{\chi\sim q(\chi)}[\psi_{pred}=0\mid\psi_{true}=1] \tag{4}\]

where the variable \(\psi_{pred}\) represents the predicted label and \(\psi_{true}\) represents the true label of sample \(\chi\). The first term in the expected loss is the probability of a sample being misclassified as an inlier, which is equivalent to the probability of a false negative. The second term represents the probability that a point sampled from the inliers is misclassified as an outlier, which is equivalent to the probability of a false positive in the anomaly detection task. At this stage, the encodings \(Z_{L}\) are labeled as outliers and the embeddings \(Z_{\nu}\) as inliers, owing to the significantly larger population of the embeddings. Hence, the contamination fraction \(C=\frac{q}{p}=\frac{|Z_{L}|}{|Z_{\nu}|}\) can be estimated. This knowledge does not directly affect the form of the loss function, but helps to tune the constant \(C\). For example, if the fraction is small (i.e., the contamination rate is low), then the loss function may place more emphasis on minimizing false negatives (i.e., maximizing the true positive rate), because false positives are relatively rare. Overall, an anomaly detection method is used to learn the separation of \(Z_{\nu}\) from \(Z_{L}\) optimally. In practice, this is not a trivial task because \(Z_{\nu}\) is an approximation of \(Z_{L}\). Thus, the parameter \(C\) is expected to be near 0 and the boundary between the two distributions to be very thin.

### From PU learning to Binary Classification

Thus far, this methodology has transformed an input \(x\) from \(P_{L}\) into an encoding \(z\in Z_{L}\) based on the latent representation of a CAE trained solely on data that lie in the same class. A simple yet novel augmentation method, which works similarly to mixup, increased the density of the encodings, producing the embeddings. Labeling the embeddings as the inliers and the encodings as the outliers made it possible to define a boundary of the inlier class in the latent space using an anomaly detection method. Note that \(U\) contains unlabeled samples belonging to either \(P_{UL}\) or \(N\); therefore, the corresponding encodings \(Z_{U}=Z_{P_{UL}}\cup Z_{N}\) are obtained by the CAE. Unfortunately, the encodings do not provide sufficient information to separate \(P_{UL}\) from \(N\), as shown in Figure 3. According to the framework of PU learning, only samples belonging to the positive class are labeled.
Thus, the samples predicted as inliers were labeled \(\tilde{Z}_{P_{UL}}\) and were subsequently removed from the unlabeled population, resulting in a data-driven boundary learnt using \(Z_{U}-\tilde{Z}_{P_{UL}}\). Although removing the predicted positive samples from the unlabeled ones might seem to imply that \(\tilde{Z}_{N}=Z_{U}-\tilde{Z}_{P_{UL}}\), this is not strictly true; technically, the leftovers in \(Z_{U}-\tilde{Z}_{P_{UL}}\) are still unlabeled samples. However, some information remains to be exploited, namely the degree to which every leftover sample is an anomaly. Leftover samples reside outside of the learnt boundary, yet some samples are further away than others. In this study, \(L_{A}\) denotes a sample's degree of being an anomaly, expressed as a rank defined by the sample's position within a sorted list of distances from the samples in \(Z_{P}\cup\tilde{Z}_{P_{UL}}\), such that:

\[L_{A}(z_{i})=\min_{z_{j}\in Z_{P}\cup\tilde{Z}_{P_{UL}}}d(z_{i},z_{j}) \tag{5}\]

where \(d(\cdot,\cdot)\) is a metric function, \(z_{i}\in Z_{U}-\tilde{Z}_{P_{UL}}\), \(z_{j}\in Z_{P}\cup\tilde{Z}_{P_{UL}}\), and the list is sorted so that \(l_{n}<l_{n+1}\) for all \(l_{n}\in L_{A}\), \(n=\{1,2,...,|Z_{U}-\tilde{Z}_{P_{UL}}|\}\). Any subset \(\tilde{N}\) of the ranked list can be used with confidence as a set of counter-examples to the positive-labeled data (practically, \(\tilde{N}\) can contain as many samples as the initial \(P_{L}\)). At the end of this process, \(P_{L}\) and \(\tilde{N}\) are sets that can confidently be used as opposite classes in a binary classification setting.
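A sketch of this negative-selection step under Eq. (5) is given below. Euclidean distance stands in for \(d(\cdot,\cdot)\); the paper itself replaces it with the Isolation Forest anomaly score in the experiments, and all names and sizes here are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def select_negatives(Z_left, Z_pos, n_neg):
    """Rank leftover encodings by distance to the positive mass and
    keep the n_neg most anomalous ones as counter-examples (Eq. 5)."""
    dists = cdist(Z_left, Z_pos)         # (n_left, n_pos) pairwise distances
    L_A = dists.min(axis=1)              # distance to the nearest positive
    order = np.argsort(L_A)[::-1]        # most anomalous first
    return Z_left[order[:n_neg]]

Z_pos = np.random.randn(1000, 512)       # Z_P plus predicted positives
Z_left = np.random.randn(3000, 512) * 2  # leftovers from the unlabeled set
N_tilde = select_negatives(Z_left, Z_pos, n_neg=1000)   # |N~| = |P_L|
```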
## 4 Experiments

### Configuration

In the experiments conducted for the validation of the proposed methodology, two well-known datasets were used, namely CIFAR-10 [5] and Fashion-MNIST [6]. For the purposes of the PU learning setting, transportation-related images were merged into one class and living-things-related images into the second for CIFAR-10, whereas top-clothing images were merged into one class and the remaining images into the other for Fashion-MNIST (see Figure 1). Table 1 shows how the datasets were handled.

\begin{table} \begin{tabular}{l l|c c c c c} \hline Dataset & Image Size & \(|P_{L}|\) & \(|U|\) & Testing & Positive Class & Negative Class \\ \hline \hline F-MNIST & 28 \(\times\) 28 & 1,000 & 59,000 & 10,000 & \{0, 2, 4, 6\} & \{1, 3, 5, 7, 8, 9\} \\ CIFAR-10 & 32 \(\times\) 32 \(\times\) 3 & 1,000 & 49,000 & 10,000 & \{0, 1, 8, 9\} & \{2, 3, 4, 5, 6, 7\} \\ \hline \end{tabular} \end{table} Table 1: Positive and Unlabeled dataset splits.

\begin{table} \begin{tabular}{l l|c c c c c} \hline Dataset & Method & Acc (std) & Prec (std) & Rec (std) & F1 (std) & AUC (std) \\ \hline \hline \multirow{11}{*}{F-MNIST} & uPU [10] & 94.20 (0.3) & 92.50 (1.26) & 92.59 (0.8) & 92.53 (0.31) & 97.34 (0.54) \\ & nnPU [11] & 94.44 (0.49) & 91.69 (1.13) & 94.69 (0.84) & 93.16 (0.57) & 97.53 (0.48) \\ & RP [33] & 92.37 (1.08) & 88.58 (1.56) & 92.94 (2.38) & 90.60 (1.39) & 97.14 (0.58) \\ & PUSB [14] & 94.5 (0.36) & 93.12 (0.44) & 93.12 (0.44) & 93.12 (0.44) & 97.31 (0.5) \\ & PUbN [4] & 94.82 (0.16) & 92.92 (0.5) & 94.24 (0.93) & 93.57 (0.24) & 94.72 (0.29) \\ & self-PU [12] & 94.75 (0.25) & 91.73 (0.8) & 95.50 (0.61) & 93.57 (0.28) & 97.62 (0.31) \\ & aPU [15] & 94.71 (0.34) & 92.71 (0.5) & 94.20 (1.06) & 93.44 (0.45) & 97.67 (0.4) \\ & VPU [34] & 92.26 (1.11) & 89.04 (2) & 92.01 (2) & 90.48 (1.35) & 97.38 (0.44) \\ & ImbPU [13] & 94.54 (0.42) & 92.81 (1.53) & 93.66 (1.67) & 93.21 (0.52) & 97.67 (0.81) \\ & Dist-PU [7] & 95.4 (0.34) & 94.18 (0.9) & 94.34 (1) & 94.25 (0.43) & 98.57 (0.24) \\ \cline{2-7} & **Dens-PU** & **95.73** (0.53) & 92.36 (1.1) & **94.8** (0.6) & **94.80** (0.62) & 96.00 (0.73) \\ \hline \hline \multirow{11}{*}{CIFAR-10} & uPU [10] & 88.35 (0.45) & 87.18 (2.39) & 83.23 (2.68) & 85.10 (0.31) & 94.91 (0.62) \\ & nnPU [11] & 88.89 (0.45) & 86.18 (1.15) & 86.05 (1.42) & 86.10 (0.57) & 95.12 (0.52) \\ & RP [33] & 88.73 (0.15) & 86.01 (1.01) & 85.82 (1.51) & 85.9 (1.39) & 95.17 (0.23) \\ & PUSB [14] & 88.95 (0.41) & 86.19 (0.51) & 86.19 (0.5) & 86.19 (0.44) & 95.13 (0.52) \\ & PUbN [4] & 89.83 (0.3) & 87.85 (0.98) & 86.56 (1.87) & 87.18 (0.24) & 89.28 (0.54) \\ & self-PU [12] & 89.28 (0.72) & 86.16 (0.78) & 87.21 (2.35) & 86.67 (0.28) & 95.47 (0.58) \\ & aPU [15] & 89.05 (0.52) & 86.29 (1.3) & 86.37 (0.79) & 86.32 (0.45) & 95.09 (0.42) \\ & VPU [34] & 87.99 (0.58) & 86.72 (1.41) & 82.71 (2.84) & 84.63 (1.35) & 94.51 (0.41) \\ & ImbPU [13] & 89.41 (0.46) & 86.69 (0.9) & 86.87 (0.82) & 86.77 (0.52) & 95.52 (0.27) \\ & Dist-PU [7] & 91.88 (0.52) & 89.87 (1.09) & 89.84 (0.8) & 89.85 (0.43) & 96.92 (0.45) \\ \cline{2-7} & **Dens-PU** & **93.63** (0.44) & **92.68** (1.31) & **91.25** (1.12) & **91.96** (0.8) & 93.22 (0.99) \\ \hline \end{tabular} \end{table} Table 2: Comparison of performances between the proposed method "Dens-PU" and the state of the art.

To keep the hyperparameters of the algorithms and deep learning architectures the same, the images from Fashion-MNIST were upscaled to match the 32 \(\times\) 32 size of CIFAR-10. Moreover, two additional channels were added, so that a grayscale image from Fashion-MNIST could be treated as an RGB image, by replicating the single channel. The CAE was a symmetrical autoencoder with three convolutional and three deconvolutional layers. The architecture was kept as simple as possible, because obtaining only latent encodings was desirable. The filters for each encoding layer were 64, 32, and 8, respectively, and the reverse order was applied to the decoder.
MaxPooling halved the spatial dimensions of the feature maps between convolutions. The encoding extracted from the latent space has dimensions \(Z\in\mathbb{R}^{1\times 512}\). The training batch size was set to 64 for all datasets. Adam [35] was chosen as the optimizer. MSE acted as the loss function. The initial learning rate and weight decay were set to \(10^{-4}\) and \(10^{-3}\), respectively. The architecture was trained for 50 epochs. The proposed interpolation (3) had three parameters to be configured: (a) the proximity towards the midpoint, controlled by \(k=0.2\); (b) the sample size of positive pairs, \(|(z_{i},z_{j})|=16,000\), sampled without replacement (out of the \(\binom{|Z_{L}|}{2}\) possible pairs, this corresponds to 99% confidence with a 1% margin of error) and used as the extreme points of the line segment made by said interpolation; and (c) the number of interpolated samples that belong to the line segment of a given pair, \(s=11\). As the anomaly detection method, Isolation Forest was chosen because it is fast to train owing to its random splits, adapts well to highly non-linear spaces, and its hyperparameters are easy to optimize [26, 27, 36]. Hence, the number of estimators was set to 1000, the depth was left at its default, the number of samples per estimator was 256, and the contamination fraction was set to \(C=\frac{|Z_{L}|}{s\cdot|Z_{u}|}\approx 0.005\). Conveniently, Isolation Forest uses the number of steps needed to separate a sample as an anomaly score. Dens-PU uses this score to replace \(d\) in (5) and to calculate the ordered list \(L_{A}\). Finally, the VGG-16 architecture was chosen as the binary classifier for all the experiments. This decision was made because VGG-16 is not a complex deep classifier, yet it is capable of performing well in typical classification tasks. The weights were randomly initialized. Average pooling and two dense layers were added after the last VGG block. The dense layers consisted of 128 neurons with ReLU activation, connected to a single neuron with sigmoid activation. The optimizer was SGD with a learning rate and weight decay of \(10^{-4}\) and \(10^{-3}\), respectively. The model was trained for 200 epochs or until it reached a plateau, with a batch size of 32 samples.

### Results

Table 2 presents the evaluation results for ten PU-learning methods on two datasets, Fashion-MNIST and CIFAR-10. The \(F_{1}\)-score, Area Under the Curve (AUC), Accuracy, Precision, and Recall metrics [3] were used to evaluate the performance of each algorithm on both datasets. For Fashion-MNIST, Dens-PU outperformed all traditional methods in terms of the F1-score, Precision, Recall, and Accuracy metrics. Among the state-of-the-art methods, Dist-PU and ImbPU exhibited the best performance. For CIFAR-10, Dens-PU also outperformed all the traditional methods in the same metrics. The performance of the state-of-the-art methods was comparable, with Dist-PU, nnPU, and ImbPU attaining the highest scores.
Self-PU and PUbN were among the most competitive baselines because of their extra designs, such as mentor nets and pre-trained models. The VPU method, which does not use class prior information, shows relatively less promising results than the other baselines.

\begin{table} \begin{tabular}{l|c c c c c c} \hline & 1\% & 5\% & 10\% & 25\% & 30\% & 50\% \\ \hline \hline F-MNIST & 88.71 (\(\pm 0.67\)) & 93.83 (\(\pm 0.74\)) & 94.35 (\(\pm 0.93\)) & 94.71 (\(\pm 0.80\)) & 94.68 (\(\pm 0.68\)) & 95.1 (\(\pm 0.87\)) \\ CIFAR-10 & 82.85 (\(\pm 1.8\)) & 89.65 (\(\pm 0.90\)) & 92.23 (\(\pm 0.56\)) & 92.02 (\(\pm 0.72\)) & 92.55 (\(\pm 1.03\)) & 93.79 (\(\pm 0.96\)) \\ \hline \end{tabular} \end{table} Table 3: Assessing \(F_{1}\)-score performance for varying initial populations of known samples in \(P_{L}\).

### Ablation studies

Before moving on to the presentation of the ablation studies, it is important to note that minibatch training was used in all cases of the unbalanced-class ablation study: a subset of samples from the larger class is selected to match the size of the smaller class. Failure to do so would result in the classifier heavily overfitting the larger class, leading to an extremely low \(F_{1}\)-score. However, the relatively high performance observed may be attributed to learning with noisy labels [37], which is beyond the scope of this work. Instead of using minibatch training when the classes are uneven, other approaches involve learning a weighted classifier [38] or applying a weighted loss [39], but these are not explored here either. The ablation begins with an investigation of the impact of different sizes of the positive-labeled sample population available at the beginning of the method, denoted as \(|P_{L}|\). The \(F_{1}\)-score is used to measure the performance, and Table 3 shows the results for both the CIFAR-10 and Fashion-MNIST datasets. The findings reveal that the performance of the proposed methodology remains unaffected for an initial sample size of \(5-30\%\) of the training dataset. When limiting the initial information to only \(1\%\), the performance drops significantly below the state of the art, although classification remains feasible within acceptable accuracy limits (the \(F_{1}\)-score remains high). Conversely, when half of the training dataset is available, the performance improves; however, such large amounts are not usually available in typical PU learning scenarios. A second ablation study examined the effect of the two parameters controlling (a) the data used for learning the data-driven boundary and (b) the criterion for counter-example selection. Regarding (a), three modes for the selection of \(Z_{\nu}\) were considered: (i) using only the available encodings of positive-labeled samples with no other information, (ii) using the encodings of positive-labeled samples and embeddings from mixup, and (iii) using the encodings of positive-labeled samples and embeddings generated by the proposed interpolation. Regarding (b), random sampling, denoted by \(\in_{R}\), was used as the alternative. The reason for evaluating Dens-PU on these particular parameters is that they are the components this work proposes to perform differently from other methods, and they are considered the main contributions. Table 4 shows the results of this test, where the \(F_{1}\)-score was used as the evaluation metric. Variant 1 demonstrates a naive binary classification approach in which the unlabeled set is used directly as counter-examples to the positive-labeled set.
Variants 1 to 3 exhibit the worst performance compared with the other variants, which is expected because the data-driven boundary no longer reflects its intended design purpose. A density augmentation technique is required to construct a boundary around the approximated positive-labeled class. Variants 5 and 7 demonstrate that using the anomaly score as proposed is a logical criterion for selecting samples to be assigned to the counter-example class \(\tilde{N}\), compared with random sampling from the unlabeled set (i.e., variant 1) and random sampling from the leftovers (i.e., variants 4 and 6). As (1) (mixup) and (2) (proposed) are similar, variants 4, 5 and 6, 7 show comparable performance, with the second group exhibiting more stable accuracy. Overall, variant 7 outperformed all others, indicating that the proposed modules provide an additional advantage over the existing ones. A final ablation study considered different population sizes of the counter-example set \(\tilde{N}\). Three variants were examined, with the counter-example set size being (1) equal to the leftovers (i.e., \(|\tilde{N}|=|U-P_{PUL}|\)), effectively choosing all remaining samples, (2) a random number within the leftover size limits, and (3) equal to the size of the positive-labeled set (i.e., \(|\tilde{N}|=|P_{L}|\)). The performance was evaluated using the \(F_{1}\)-score in Table 5. As expected, the first variant performs the worst compared to the other two, because selecting all leftovers means mistakenly acquiring positive samples predicted as anomalies. Moreover, the leftover size can be larger than the positive-labeled set (i.e., \(|U-P_{PUL}|\gg|P_{L}|\)), causing imbalanced classes and resulting in poor binary classifier performance. Selecting a random number exhibits similar risks; however, this time the counter-example set can also be smaller than the positive-labeled set. The third variant confirms that balancing the classes is the best option.

## 5 Discussion

The experiments conducted in this study demonstrate that the proposed approach effectively addresses the problem of PU learning. The results confirmed that density augmentation plays a critical role in PU learning. Overall, the proposed approach outperforms ten reference methods in terms of accuracy, precision, recall, and \(F_{1}\)-score. Although several components were combined, Dens-PU was easy to set up, and no special computational resources were required for the experiments. One drawback of the proposed methodology is its dependence on the quality of the encodings extracted from the positive-labeled data. The performance may be affected if the encodings contain noisy or irrelevant features. In addition, the choice of the Gaussian sampling parameter \(k\) can affect the quality of the created samples. For example, if \(k\) is set close to 0, all samples may look like the midpoint, reducing them to a single sample. If \(k\) is set too high, the boundary between the positive-labeled distribution and its approximation may not be well defined, leading to lower performance in the PU learning task.

## 6 Conclusion

This study addressed the problem of PU learning by introducing a novel method, Dens-PU, that learns a data-driven boundary around the approximated distribution of positive-labeled data. It uses an anomaly detection algorithm to discover negative samples with confidence, and once these are obtained, typical supervised binary classification can be performed.
Dens-PU was extensively tested against reference methods from the relevant literature, using benchmark image datasets, and the results showed a significant performance improvement, achieving state-of-the-art accuracy with respect to the \(F_{1}\)-score. Potentially, it can be applied in classification scenarios where labeled data are expensive or difficult to obtain, such as medical imaging, fraud detection, and dataset creation pipelines.

\begin{table} \begin{tabular}{c c c c} \hline Variant & \(Z_{\nu}\) & Selection of \(\tilde{N}\) & \(F_{1}\) \\ \hline \hline 1 & - & \(x\in_{R}U\) & 80.46 (\(\pm\)0.18) \\ 2 & \(Z_{P}\) & \(x\in_{R}U-\tilde{P}_{PUL}\) & 86.74 (\(\pm\)0.41) \\ 3 & \(Z_{P}\) & (5) & 83.02 (\(\pm\)1.89) \\ 4 & MixUp [30] & \(x\in_{R}U-\tilde{P}_{PUL}\) & 90.38 (\(\pm\)0.71) \\ 5 & MixUp [30] & (5) & 90.51 (\(\pm\)0.63) \\ 6 & (2) & \(x\in_{R}U-\tilde{P}_{PUL}\) & 89.88 (\(\pm\)1.10) \\ \hline 7 & (2) & (5) & **91.20 (\(\pm\)0.97)** \\ \hline \end{tabular} \end{table} Table 4: Evaluating the \(F_{1}\)-score performance of the proposed Dens-PU and its variants on CIFAR-10.

\begin{table} \begin{tabular}{c c c c} \hline Variant & \(|\tilde{N}|\) & \(F_{1}^{(F)}\) & \(F_{1}^{(C)}\) \\ \hline \hline 1 & \(|U-P_{PUL}|\) & 91.73 (\(\pm\)0.31) & 89.80 (\(\pm\)0.84) \\ 2 & \(\in_{R}|U-P_{PUL}|\) & 88.91 (\(\pm\)3.01) & 84.12 (\(\pm\)1.13) \\ 3 & \(|P_{L}|\) & **93.41 (\(\pm\)0.62)** & **91.33 (\(\pm\)0.75)** \\ \hline \end{tabular} \end{table} Table 5: Evaluating the \(F_{1}\)-score performance for different counter-example populations on F-MNIST (F) and CIFAR-10 (C).

Future studies can investigate other density augmentation methods, or improve the classification results by exploring training strategies for unbalanced classes, for example by applying a weighted loss or learning a weighted classifier, such that the population of the counter-example set can be increased. Dens-PU uses encodings to draw a negative sample from the unlabeled data; therefore, applying it to other modalities, such as text and audio, might be feasible.
2306.13659
Toward A Logical Theory Of Fairness and Bias
Fairness in machine learning is of considerable interest in recent years owing to the propensity of algorithms trained on historical data to amplify and perpetuate historical biases. In this paper, we argue for a formal reconstruction of fairness definitions, not so much to replace existing definitions but to ground their application in an epistemic setting and allow for rich environmental modelling. Consequently we look into three notions: fairness through unawareness, demographic parity and counterfactual fairness, and formalise these in the epistemic situation calculus.
Vaishak Belle
2023-06-08T09:18:28Z
http://arxiv.org/abs/2306.13659v1
# Toward A Logical Theory Of Fairness and Bias

###### Abstract

Fairness in machine learning is of considerable interest in recent years owing to the propensity of algorithms trained on historical data to amplify and perpetuate historical biases. In this paper, we argue for a formal reconstruction of fairness definitions, not so much to replace existing definitions but to ground their application in an epistemic setting and allow for rich environmental modelling. Consequently we look into three notions: fairness through unawareness, demographic parity and counterfactual fairness, and formalise these in the epistemic situation calculus.

Logic, Fairness, Bias, Situation Calculus, Knowledge, Action

## 1 Introduction

Machine Learning techniques have become pervasive across a range of different applications, and are the source of considerable excitement but also debate. For example, they are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis and insurance pricing (Chouldechova, 2017; Khandani et al., 2010). In some of these applications, the prevalence of machine learning techniques has raised concerns about the potential for learned algorithms to become biased against certain groups. This issue is of particular concern in cases when algorithms are used to make decisions that could have far-reaching consequences for individuals (for example in recidivism prediction) (Chouldechova, 2017; Angwin et al., 2016). Attributes with respect to which the algorithm should be "fair" are typically referred to as _protected_ attributes. The values of these are often hidden from the view of the decision maker (whether automated or human). There are multiple potential fields that might qualify as protected attributes in a given situation, including ethnicity, sex, age, nationality and marital status (Zemel et al., 2013). Ideally, such attributes should not affect any prediction made by "fair" algorithms. However, even in cases where it is clear which attributes should be protected, there are multiple (and often mutually exclusive) definitions of what it means for an algorithm to be unbiased with respect to these attributes, and there is disagreement within the academic community on what is most appropriate (Dwork et al., 2011; Kusner et al., 2017; Zafar et al., 2017). Moreover, even amid pressing concerns that algorithms currently in use may exhibit racial biases, there remains a lack of agreement about how to effectively implement fairness, given the complex socio-technical situations that such applications are deployed in and the background knowledge and context needed to assess the impact of outcomes (e.g., denying a loan to someone in need). To address such issues broadly, an interesting argument has been championed by the symbolic community: by assuming a rich enough understanding of the application domain, we can encode machine ethics in a formal language. Of course, with recent advances in statistical relational learning, neuro-symbolic AI and inductive logic programming (Raedt et al., 2016; Muggleton et al., 2012), it is possible to integrate low-level pattern recognition based on sensory data with high-level formal specifications. For example, the _Hera_ project (Lindner et al., 2017) allows several kinds of (rule-based) moral theory to be captured and implemented.
_Geneth_ (Anderson and Anderson, 2014) uses inductive logic programming to create generalised moral principles from the judgements of ethicists about particular ethical dilemmas, with the system's performance being evaluated using an _ethical Turing test_. On the formalisation side, the study of moral concepts has long been a favored topic in the knowledge representation community (Conway and Gawronski, 2013; Alexander and Moore, 2016; Czelakowski, 1997; Hooker and Kim, 2018), and can be further coupled with notions of beliefs, desires and intentions (Broersen et al., 2001; Georgeff et al., 1998). Finally, closer to the thrust of this paper, (Pagnucco et al., 2021) formalize consequentialist and deontological ethical principles in terms of "desirable" states in the epistemic situation calculus, and (Classen and Delgrande, 2020) formalize obligations using situation calculus programs.

## 2 Contributions

Our thesis, in essence, is this: complementing the vibrant work in the ML community, it is worthwhile to study ethical notions in formal languages. This serves three broad objectives:

A. We can identify what the system needs to know versus what is simply true (Reiter, 2001; Halpern and Moses, 2014) and better articulate how this knowledge should impact the agent's choices. It is worth remarking that epistemic logic has served as the foundation for investigating the impact of knowledge on plans and protocols (Levesque, 1996; Lesperance et al., 2000; Halpern et al., 2009).

B. We implicitly understand that we can further condition actions against background knowledge (such as ontologies and databases), as well as notions such as intentions and obligations (Sardina and Lesperance, 2010).

C. We can position the system's actions not simply as a single-shot decision or prediction, as is usual in the ML literature, but as a sequence of complex events that depend on observations and can involve loops and recursion: that is, in the form of programs (Levesque et al., 1997).

It would be beyond the scope of a single paper to illustrate the interplay between the three objectives except in some particular application scenario. Thus, we focus on the interplay between A and C in the sense of advocating a "research agenda," rather than a single technical result or a demonstration of a single application. In particular, what we seek to do is a formal reconstruction of some fairness definitions, not so much to replace existing definitions but to ground their application in an epistemic, dynamic setting. Consequently we look into three notions: fairness through unawareness, demographic parity and counterfactual fairness, and formalise these in the epistemic situation calculus (Scherl and Levesque, 2003; Lakemeyer and Levesque, 2011). In particular, our contributions are as follows:

* Consider the notion of fairness through unawareness (FTU) in machine learning. Here, a "fair" classifier is one that predicts outputs without using any information about protected attributes. In a dynamic setting, imagine a (virtual or physical) robot that is acting in service of some objective \(\phi\).
For example, in a loan setting, which is classically treated as a static model in machine learning, we can expect intelligent automated agents to carry out many operations: check the yearly budget of the bank to determine the total amount to be loaned, rank applicants based on risk, determine the impact of recession, and ultimately synthesize a plan to achieve \(\phi\) (loan approval); but by virtue of FTU, it should never be the case that the agent has had access to protected information. In this paper, we provide a simple but general definition to capture that idea, in a manner that distinguishes what is true from what is known by the agent.

* Analogously, consider the notion of demographic parity (DP). It is understood as requiring a classifier that is equally likely to make a positive prediction regardless of the value of the protected attribute. For example, the proportion of men who are granted loans equals the proportion of women granted loans. So, if \(\phi(x)\) is the granting of a loan to individual \(x\), how do we capture the notion that the agent has synthesized a plan that achieves \(\phi(x)\) for both males and females? What would it look like for planning agents that want to conform to both FTU and DP? What if, instead of DP, we wished to only look at those granted loans, and among this group, we did not want the classifier to discriminate based on the individual's gender? For all these cases, we provide definitions in terms of the agent's mental state and action sequences that the agent knows will achieve \(\phi(x)\) (Levesque, 1996).

* Finally, counterfactual fairness insists that the prediction should not differ if the individual's protected attributes take on a different value. For a planning agent to ensure this, we would need to make sure that _deleting_ facts about the current value of an individual \(x\)'s protected attribute and _adding_ a different value still achieves \(\phi(x)\) after the sequence. We characterize this using the notion of _forgetting_ because we permit, in general, any arbitrary first-order theory for the initial knowledge base, and not just a database interpreted under the closed-world assumption.

These definitions can be seen to realize a specification for "fair" cognitive robots: that is, reasoning and planning agents (Lakemeyer and Levesque, 2007) that ensure through the course of their acting that, say, they never gain knowledge about the protected attributes of individuals, and guarantee that individuals are not discriminated against based on the values of these attributes. It should be clear that our definitions are loosely inspired by the ML notions, and so our formalisations do not argue for one definition over another, nor challenge any existing definition. We do, however, believe that studying the effects of these definitions in a dynamic setting provides a richer context to evaluate their appropriateness. Moreover, a formalisation such as ours lends itself to various types of implementations. For example, the synthesis of (epistemic) programs and plans (Wang and Zhang, 2005; Baral et al., 2017; Muise et al., 2015; Classen et al., 2008; McIlraith and Son, 2002) that achieve goals in socio-technical applications in a fair manner is a worthwhile research agenda. Likewise, enforcing fairness constraints while factoring for the relationships between individuals in social networks (Farnadi et al., 2018), or otherwise contextualising attributes against other concepts in a relational knowledge base (Aziz et al., 2018; Fu et al., 2020), are also worthwhile.
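To fix intuitions before the logical account, the sketch below restates the three ML notions operationally; the column names, the toy predictor, and the attribute-flipping rendering of counterfactual fairness are our illustrative assumptions, not part of the formalisation developed later.

```python
import numpy as np

def demographic_parity_gap(pred, protected):
    """DP: positive-prediction rates should match across groups."""
    rates = [pred[protected == g].mean() for g in np.unique(protected)]
    return max(rates) - min(rates)

def violates_ftu(feature_names, protected_names):
    """FTU: the predictor must not take protected attributes as input."""
    return any(name in protected_names for name in feature_names)

def counterfactually_unfair(predict, x, protected_idx, values):
    """CF: changing the protected attribute must not change the output."""
    outs = set()
    for v in values:
        x_cf = x.copy()
        x_cf[protected_idx] = v
        outs.add(predict(x_cf))
    return len(outs) > 1

# toy check with a hypothetical thresholded predictor
predict = lambda x: int(x[0] + 0.1 * x[1] > 1.0)
x = np.array([0.95, 1.0])              # credit feature, protected attribute
print(counterfactually_unfair(predict, x, 1, [0.0, 1.0]))  # True here
```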
By stipulating an account in quantified logic, it becomes possible to further unify such proposals in a dynamic setting.

**Logic and fairness.** Let us briefly remark on closely related efforts. At the outset, note that although there has been considerable work on formalizing moral rules, there is no work (as far as we are aware) on the formalization of fairness and bias in a _dynamic epistemic_ setting, where we need to explicate the interaction between actions, plans and meta-beliefs. However, there is some work that tackles epistemic and logical aspects. For example, the work of [14] considers a statistical epistemic logic and its use for the formalisation of statistical accuracy as well as fairness, including the criterion of equality of opportunity. There are a few key differences to our work: that work is motivated by a probabilistic reconstruction of prediction systems by appealing to distance measures, and so knowledge is defined in terms of accessibility between worlds that are close enough. The language, moreover, allows for "measurement" variables that are interpreted statistically. In contrast, our account is not (yet) probabilistic, and if our account were to be extended in that fashion, the most obvious version would reason about degrees of belief [1, 2]; see [1] for discussions on the differences between statistical belief and degrees of belief. Moreover, our account is dynamic, allowing for explicit modal operators for actions and programs. Consequently, our definitions are about studying how, say, the agent remains ignorant about protected attributes when executing a plan. Be that as it may, the work of [14] leads to an account where fairness can be expressed as a logical property using predicates for protected attributes, remarkably similar in spirit to our approach if one were to ignore actions. This should, at the very least, suggest that such attempts are very promising, and for the future, it would be worthwhile to conduct a deeper investigation of how these formalisation attempts can be synthesized to obtain a general probabilistic logical account that combines the strengths of dynamic epistemic languages and statistical measures. (In a related vein to [14], [15] seek to axiomatize ML systems for the purpose of explanations in a modal logic.) An entirely complementary effort is the use of logic for verifying fair models [13], where existing definitions and classifiers are encoded using logical functions and satisfiability modulo theories. To summarize, all these differ from our work in that we are attempting to understand the interplay between bias, action and knowledge, and are not really interested in capturing classifiers as objects in our language. Thus, our work, as discussed above, can be seen as setting the stage for _"fair" cognitive robots_. There is benefit to unifying these streams, which we leave to the future.

## 3 A logic for knowledge and action

We now introduce the logic \(\mathcal{ES}\) [16]. The non-modal fragment of \(\mathcal{ES}\) consists of standard first-order logic with equality: that is, connectives \(\{\wedge,\forall,\neg\}\), syntactic abbreviations \(\{\exists,\equiv,\supset\}\) defined from those connectives, and a supply of variables \(\{x,y,\ldots,u,v,\ldots\}\). Different to the standard syntax, however, is the inclusion of (countably many) _standard names_ (or simply, names) for both objects and actions \(\mathcal{R}\), which allow a simple, substitutional interpretation for \(\forall\) and \(\exists\).
These can be thought of as special extra constants that satisfy the unique name assumption and an infinitary version of domain closure. Like in the situation calculus, to model immutable properties, we assume rigid predicates and functions, such as _IsPlant(\(x\))_ and _father(\(x\))_ respectively. To model changing properties, \(\mathcal{ES}\) includes fluent predicates and functions of every arity, such as _Broken(\(x\))_ and _height(\(x\))_. Note that there is no longer a situation term as an argument in these symbols to distinguish the fluents from the rigids. For example, \(\mathcal{ES}\) includes distinguished fluent predicates _Poss_ and \(SF\) to model the executability of actions and to capture sensing outcomes respectively, but they are unary predicates: in contrast to the classical situation calculus [11], they no longer include situation terms. Terms and formulas are constructed as usual. The set of ground atoms \(\mathcal{P}\) is obtained, as usual, from names and predicates. There are four modal operators in \(\mathcal{ES}\): \([a]\), \(\Box\), \(\mathbf{K}\) and \(\mathbf{O}\). For any formula \(\alpha\), we read \([a]\alpha\), \(\Box\alpha\) and \(\mathbf{K}\alpha\) as "\(\alpha\) holds after \(a\)", "\(\alpha\) holds after any sequence of actions" and "\(\alpha\) is known," respectively. Moreover, \(\mathbf{O}\alpha\) is to be read as "\(\alpha\) is only-known." Given a sequence \(\delta=a_{1}\cdots a_{k}\), we write \([\delta]\alpha\) to mean \([a_{1}]\cdots[a_{k}]\alpha\). In classical situation calculus parlance, we would use \([a]\alpha\) to capture successor situations, that is, properties that are true after an action in terms of the current state of affairs. Together with the \(\Box\) modality, which allows us to capture quantification over situations and histories, basic action theories can be defined. Like in the classical approach, one is interested in the entailments of the basic action theory.

**Semantics.** Recall that in the simplest setup of the possible-worlds semantics, worlds map propositions to \(\{0,1\}\), capturing the (current) state of affairs. \(\mathcal{ES}\) is based on the very same idea, but extended to dynamical systems. So, suppose a world maps \(\mathcal{P}\times\mathcal{Z}\) to \(\{0,1\}\).2 Here, \(\mathcal{Z}\) is the set of all finite sequences of action names, including the empty sequence \(\langle\rangle\). Let \(\mathcal{W}\) be the set of all worlds, and \(e\subseteq\mathcal{W}\) be the _epistemic state_. By a _model_, we mean a triple \((e,w,z)\) where \(z\in\mathcal{Z}\). Intuitively, each world can be thought of as a situation calculus tree, denoting the properties true initially but also after every sequence of actions. \(\mathcal{W}\) is then the set of all such trees. Given a triple \((e,w,z)\), \(w\) denotes the real world, and \(z\) the actions executed so far.

Footnote 2: We need to extend the mapping to additionally interpret fluent functions and rigid symbols, omitted here for simplicity.

To account for how knowledge changes after (noise-free) sensing, one defines \(w^{\prime}\sim_{z}w\), which is to be read as saying "\(w^{\prime}\) and \(w\) agree on the sensing for \(z\)", as follows:
* if \(z=\langle\rangle\), then \(w^{\prime}\sim_{z}w\) for every \(w^{\prime}\); and
* \(w^{\prime}\sim_{z\cdot a}w\) iff \(w^{\prime}\sim_{z}w\), \(w^{\prime}[\textit{Poss}(a),z]=1\) and \(w^{\prime}[\textit{SF}(a),z]=w[\textit{SF}(a),z]\).
This is saying that initially, we consider all worlds compatible, but after actions, we need the world \(w^{\prime}\) to agree on the executability of the actions performed so far as well as on the sensing outcomes. The reader might notice that this is clearly a reworking of the successor state axiom for the knowledge fluent in [11]. With this, we get a simple account of truth. We define the satisfaction of formulas wrt (with respect to) the triple \((e,w,z)\) inductively:
* \(e,w,z\models p\) iff \(p\) is an atom and \(w[p,z]=1\);
* \(e,w,z\models\alpha\wedge\beta\) iff \(e,w,z\models\alpha\) and \(e,w,z\models\beta\);
* \(e,w,z\models\neg\alpha\) iff \(e,w,z\not\models\alpha\);
* \(e,w,z\models\forall x\,\alpha\) iff \(e,w,z\models\alpha_{n}^{x}\) for all \(n\in\mathcal{R}\);
* \(e,w,z\models[a]\alpha\) iff \(e,w,z\cdot a\models\alpha\);
* \(e,w,z\models\Box\alpha\) iff \(e,w,z\cdot z^{\prime}\models\alpha\) for all \(z^{\prime}\in\mathcal{Z}\);
* \(e,w,z\models\mathbf{K}\alpha\) iff for all \(w^{\prime}\sim_{z}w\), if \(w^{\prime}\in e\), then \(e,w^{\prime},z\models\alpha\); and
* \(e,w,z\models\mathbf{O}\alpha\) iff for all \(w^{\prime}\sim_{z}w\): \(w^{\prime}\in e\) iff \(e,w^{\prime},z\models\alpha\).

We write \(\Sigma\models\alpha\) (read as "\(\Sigma\) entails \(\alpha\)") to mean: for every \(M=(e,w,\langle\rangle)\), if \(M\models\alpha^{\prime}\) for all \(\alpha^{\prime}\in\Sigma\), then \(M\models\alpha\). We write \(\models\alpha\) (read as "\(\alpha\) is valid") to mean \(\{\}\models\alpha\).

**Properties.** Let us first begin by observing that given a model \((e,w,z)\), we do not require \(w\in e\). It is easy to show that if we stipulated the inclusion of the real world in the epistemic state, \(\mathbf{K}\alpha\supset\alpha\) would be true: suppose \(\mathbf{K}\alpha\); by the definition above, \(w\) is surely compatible with itself after any \(z\), and so \(\alpha\) must hold at \(w\). Analogously, properties regarding knowledge can be proven with comparatively simpler arguments in a modal framework, in relation to the classical epistemic situation calculus. Valid properties include:
1. \(\Box(\mathbf{K}(\alpha)\wedge\mathbf{K}(\alpha\supset\beta)\supset\mathbf{K}(\beta))\);
2. \(\Box(\mathbf{K}(\alpha)\supset\mathbf{K}(\mathbf{K}(\alpha)))\);
3. \(\Box(\neg\mathbf{K}(\alpha)\supset\mathbf{K}(\neg\mathbf{K}(\alpha)))\);
4. \(\Box(\forall x.\ \mathbf{K}(\alpha)\supset\mathbf{K}(\forall x.\ \alpha))\); and
5. \(\Box(\exists x.\ \mathbf{K}(\alpha)\supset\mathbf{K}(\exists x.\ \alpha))\).

Note that such properties hold over all possible action sequences, which explains the presence of the \(\Box\) operator on the outside. The first is about the closure of modus ponens within the epistemic modality. The second and third are on positive and negative introspection. The last two reason about quantification outside the epistemic modality, and what that means in terms of the agent's knowledge. For example, item 5 says that if there is some individual \(n\) such that the agent knows \(Teacher(n)\), it follows that the agent believes \(\exists x\,Teacher(x)\) to be true. This may seem obvious, but note that the property is really saying that the existence of an individual in some possible world implies that such an individual exists in all accessible worlds. It is because there is a fixed domain of discourse that these properties come out true; they are referred to as Barcan formulas.
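To make the satisfaction relation concrete, here is a minimal, hypothetical model checker for the ground fragment of the semantics above (no quantifiers, no \(\Box\)), with worlds encoded as finite dictionaries from (atom, action-history) pairs to truth values. This is an illustrative sketch of the definitions, not an implementation from the literature.

```python
# Worlds are dicts from (atom, history) pairs to truth values; histories
# are tuples of action names; the epistemic state e is a list of worlds.
# Formulas are nested tuples, e.g. ('K', ('atom', 'Male(n)')).

def compatible(w1, w2, z):
    """w1 ~_z w2: w1 agrees with w2 on executability and sensing along z."""
    for i, a in enumerate(z):
        prefix = z[:i]
        if not w1.get(('Poss(%s)' % a, prefix), True):
            return False
        if w1.get(('SF(%s)' % a, prefix), True) != w2.get(('SF(%s)' % a, prefix), True):
            return False
    return True

def holds(e, w, z, f):
    op = f[0]
    if op == 'atom':
        return w.get((f[1], z), False)
    if op == 'not':
        return not holds(e, w, z, f[1])
    if op == 'and':
        return holds(e, w, z, f[1]) and holds(e, w, z, f[2])
    if op == 'do':   # [a]alpha: evaluate alpha after extending the history
        return holds(e, w, z + (f[1],), f[2])
    if op == 'K':    # alpha must hold in every compatible epistemic world
        return all(holds(e, v, z, f[1]) for v in e if compatible(v, w, z))
    raise ValueError('unknown operator: %r' % op)

# Two worlds disagreeing on Male(n); sensing isMale(n) reports the truth.
w1 = {('Male(n)', ()): True, ('Male(n)', ('isMale(n)',)): True,
      ('SF(isMale(n))', ()): True}
w2 = {('Male(n)', ()): False, ('SF(isMale(n))', ()): False}
e = [w1, w2]
print(holds(e, w1, (), ('K', ('atom', 'Male(n)'))))          # False
print(holds(e, w1, (), ('do', 'isMale(n)',
                        ('K', ('atom', 'Male(n)')))))        # True: w2 ruled out
```

The second query illustrates knowledge expansion: after the sensing action, the world disagreeing on the sensed outcome drops out of the compatible set, so \(\mathbf{K}\,Male(n)\) becomes true.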
As seen above, the logic \(\mathcal{ES}\) allows for a simple definition of the notion of only-knowing in the presence of actions (Levesque, 1990), capturing both the beliefs and the non-beliefs of the agent. Using the modal operator \(\mathbf{O}\) for only-knowing, it can be shown that \(\mathbf{O}\alpha\models\mathbf{K}\beta\) if \(\alpha\models\beta\), but \(\mathbf{O}\alpha\models\neg\mathbf{K}\beta\) if \(\alpha\not\models\beta\), for any non-modal \(\{\alpha,\beta\}\). That is, only-knowing a knowledge base also means knowing everything entailed by that knowledge base; conversely, it also means not believing anything that is not entailed by the knowledge base. In that sense, \(\mathbf{K}\) can be seen as an "at least" epistemic operator, and \(\mathbf{O}\) captures both "at least" and "at most" knowing. This can be powerful to ensure, for example, that the agent provably does not know protected attributes.

We will now consider the axiomatization of a basic action theory in \(\mathcal{ES}\). But before explaining how successor state axioms are written, one might wonder whether a successor state axiom for \(\mathbf{K}\) is needed, as one would need for \(Knows\) in the epistemic situation calculus. It turns out that, because the compatibility of worlds already accounts for the executability of actions and sensing outcomes in accessible worlds, such an axiom is actually a property of the logic:

\[\models\Box[a]\mathbf{K}(\alpha)\equiv(SF(a)\wedge\mathbf{K}(SF(a)\supset[a]\alpha))\ \vee(\neg SF(a)\wedge\mathbf{K}(\neg SF(a)\supset[a]\alpha)).\]

(As is usual, free variables are implicitly quantified from the outside.) Thus, what will be known after an action is understood in terms of what was known previously together with the sensing outcome. The example below will further clarify how \(SF\) works.

**Basic Action Theories.** To axiomatize the domain, we consider the analogue of the basic action theory in the situation calculus (Reiter, 2001b). It consists of:
* axioms that describe what is true in the initial states, as well as what is known initially;
* precondition axioms that describe the conditions under which actions are executable, using a distinguished predicate \(Poss\);
* successor state axioms that describe the conditions under which changes happen to fluents after actions (incorporating Reiter's monotonic solution to the frame problem); and
* sensing axioms that inform the agent about the world, using a distinguished predicate \(SF\).

Note that foundational axioms, as usually considered in Reiter's variant of the situation calculus (Reiter, 2001b), are not needed, as the tree-like nature of the situations is baked into the semantics.

Let us consider a simple example of a loan agency set up for the employees of a company. For simplicity, assume actions are always executable: \(\Box Poss(a)\equiv true\). Let us also permit a sensing axiom that allows one to look up whether an individual is male: \(\Box SF(a)\equiv(a=isMale(x)\wedge Male(x))\vee a\neq isMale(x)\). For simplicity, we assume binary genders, but it is a simple matter to instead use a predicate such as \(Gender(x,y)\) to allow individuals \(x\) to take on gender \(y\) from an arbitrary set. Turning to successor state axioms, let us suppose having a loan is simply a matter of the manager approving it, and unless the manager denies it at some point, the individual continues to hold the loan. For illustration purposes, we will consider a company policy that approves loans for those with high salaries.
High salaries are enabled for an "eligible" individual if they are promoted by the manager, and salaries remain high unless they get demoted. Finally, we model eligibility and maleness as rigid, but this is not necessary, and we could permit actions that update the gender of individuals in the database. These are formalized as the axioms below, where for every sequence of actions, the effect of doing \(a\) on a predicate (the left hand side of the equivalence) is given by the right hand side of the equivalence.

\[\Box[a]hasLoan(x)\equiv a=approve(x)\vee(hasLoan(x)\wedge a\neq deny(x)).\]
\[\Box[a]highSalary(x)\equiv(a=promote(x)\wedge Eligible(x))\vee(highSalary(x)\wedge a\neq demote(x)).\]
\[\Box[a]Eligible(x)\equiv Eligible(x).\]
\[\Box[a]Male(x)\equiv Male(x).\]

We lump the successor state, precondition and sensing axioms together as \(\Sigma_{dyn}\). The sentences that are true initially are referred to by \(\Sigma_{0}\); however, the agent cannot be expected to know everything that is true, and so let \(\Sigma^{\prime}_{0}\) be what is believed initially. It may seem natural to let \(\Sigma^{\prime}_{0}\subseteq\Sigma_{0}\), but that is not necessary. The agent might be uncertain about what is true (e.g., \(\Sigma_{0}\) might have \(p\) but \(\Sigma^{\prime}_{0}\) has \(p\lor q\) instead).3 However, for simplicity, we will require that agents at least believe that the dynamics work as in the real world. Therefore, we consider entailments wrt the following _background theory_:

\[\Sigma=\Sigma_{0}\wedge\Sigma_{dyn}\wedge\mathbf{O}(\Sigma^{\prime}_{0}\wedge\Sigma_{dyn}).\]

Footnote 3: If the agent believes facts that are contradicted by observations about the real world, beliefs may need to be revised (Delgrande and Levesque, 2012), a matter we ignore for now. Our theory of knowledge is based on _knowledge expansion_, where sensing ensures that the agent becomes more certain about the world (Scherl and Levesque, 2003; Reiter, 2001b).

In our example, let us suppose \(\Sigma_{0}=\{Male(n_{i}),\neg Male(n^{\prime}_{i}),Eligible(n_{i}),\neg Eligible(n^{\prime}_{i})\mid i\in N\}\), whereas what is believed by the agent initially is \(\Sigma^{\prime}_{0}=\{Eligible(n_{i}),\neg Eligible(n^{\prime}_{i})\mid i\in N\}\). So there are two groups of individuals, \(n_{i}\) and \(n^{\prime}_{i}\), the first male and the second female, the first considered eligible and the second not. All that the agent knows is the eligibility of the individuals. Note that \(N\) here is any set, possibly an infinite one; that is, the language allows \(N=\mathbb{N}\). For ease of readability, however, we let \(N=\{1\}\) in our examples below, and we write \(n_{1}\) as \(n\) and \(n^{\prime}_{1}\) as \(n^{\prime}\).4

Footnote 4: Note that although the language has infinitely many constants, a finite domain can be enforced using domain relativization. For example, let \(\forall x(Individual(x)\equiv x=john\vee\ldots\vee x=jane)\). This declares finitely many individuals. Then instead of saying \(\exists x\,\textit{Eligible}(x)\), which in general means that any one of the infinitely many constants is eligible, we would write \(\exists x(Individual(x)\wedge\textit{Eligible}(x))\), which declares that one from \(\{john,\ldots,jane\}\) is eligible.
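Since the successor state axioms are effectively a functional specification of state change, they can be read off directly as a progression procedure. The following is a minimal sketch, assuming a single world represented as a closed-world database of ground fluents; it illustrates the axioms above and is not part of the formal semantics.

```python
def step(state, a):
    """One-step progression following the successor state axioms above:
    approve/deny toggle hasLoan; promote sets highSalary only for
    Eligible individuals; demote clears it; Eligible and Male are rigid."""
    act, x = a
    s = dict(state)
    if act == 'approve':
        s[('hasLoan', x)] = True
    elif act == 'deny':
        s[('hasLoan', x)] = False
    elif act == 'promote' and state.get(('Eligible', x), False):
        s[('highSalary', x)] = True
    elif act == 'demote':
        s[('highSalary', x)] = False
    return s

# Sigma_0: n is an eligible male, n' an ineligible female.
w = {('Male', 'n'): True, ('Eligible', 'n'): True,
     ('Male', "n'"): False, ('Eligible', "n'"): False}
for a in [('promote', 'n'), ('promote', "n'"), ('approve', "n'")]:
    w = step(w, a)
assert w[('highSalary', 'n')]            # eligible, so the promotion succeeds
assert ('highSalary', "n'") not in w     # ineligible: promotion has no effect
assert w[('hasLoan', "n'")]              # approval works regardless of gender
```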
It is worth quickly remarking that many features of the language are omitted here for simplicity. For example, \(\mathcal{ES}\) can be extended with second-order variables (Classen and Lakemeyer, 2008), which allows one to consider the equivalent of GOLOG programs (Levesque et al., 1997). Likewise, notions of probabilistic actions (Bacchus et al., 1999), epistemic achievability (Lesperance et al., 2000), and causality (Batusov and Soutchanski, 2018), in addition to studying program properties (Classen, 2018), are interesting dimensions to explore in the fairness context.

**Forgetting.** In some of the definitions of fairness, we will need to force the setting where information about protected attributes is forgotten. While standard ML approaches propose to do this via column deletion (e.g., remove all entries for the gender attribute), a richer notion is arguably needed for a first-order knowledge base. We appeal to the notion of forgetting (Lin and Reiter, 1994), adapted to \(\mathcal{ES}\) below. Lin and Reiter show that while forgetting ground atoms is first-order definable, forgetting relations needs second-order logic. We only focus on the case of atoms, but it would be interesting to study how fairness notions are affected when protected attributes are completely absent from a theory.

Suppose \(S\) denotes a finite set of ground atoms. We write \(\mathcal{M}(S)\) to mean the set of all truth assignments to \(S\). Slightly abusing notation, given a ground atom \(p\), we write \(w^{\prime}\sim_{p}w\) to mean that \(w^{\prime}\) and \(w\) agree on everything initially, except maybe \(p\). That is, for every atom \(q\neq p\), \(w[q,\langle\rangle]=w^{\prime}[q,\langle\rangle]\), and for every action sequence \(z\neq\langle\rangle\) and every atom \(q^{\prime}\), \(w[q^{\prime},z]=w^{\prime}[q^{\prime},z]\).

_Definition._ Given a formula \(\phi\) not mentioning modalities, we say \(\phi^{\prime}\) is the result of forgetting atom \(p\), denoted \(\mathit{Forget}(\phi,p)\), if for any world \(w\): \(w\models\phi^{\prime}\) iff there is a \(w^{\prime}\) such that \(w^{\prime}\models\phi\) and \(w\sim_{p}w^{\prime}\). Inductively, given a set of atoms \(\{p_{1},\ldots,p_{k}\}\), define \(\mathit{Forget}(\phi,\{p_{1},\ldots,p_{k}\})\) as \(\mathit{Forget}(\mathit{Forget}(\phi,p_{1}),\{p_{2},\ldots,p_{k}\})\).

It is not hard to show that forgetting amounts to setting an atom to true everywhere or setting it to false everywhere. In other words:

_Proposition._ \(\mathit{Forget}(\phi,S)\equiv\bigvee_{M\in\mathcal{M}(S)}\phi[M]\), where \(\phi[M]\) is equivalent to \(\phi\wedge\bigwedge_{i}(p_{i}=b_{i})\), understood to mean that the proposition \(p_{i}\) is accorded the truth value \(b_{i}\in\{0,1\}\) by \(M\).

Abusing notation, we extend the notion of forgetting an atom \(p\) to basic action theories and the background theory as follows, applying it solely to what is true/known initially:
* \(\mathit{Forget}(\Sigma_{0}\wedge\Sigma_{dyn},p)=\mathit{Forget}(\Sigma_{0},p)\wedge\Sigma_{dyn}\); and
* \(\mathit{Forget}(\Sigma,p)=\mathit{Forget}(\Sigma_{0},p)\wedge\Sigma_{dyn}\wedge\mathbf{O}(\mathit{Forget}(\Sigma^{\prime}_{0},p)\wedge\Sigma_{dyn})\).

One of the benefits of lumping the knowledge of the agent as an objective formula in the context of the only-knowing operator is the relatively simple definition of forgetting.
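The proposition has a direct computational reading in the propositional case: forgetting an atom amounts to disjoining the formula's two instantiations of that atom. A minimal sketch, assuming formulas are represented as Python predicates over truth assignments:

```python
from itertools import product

def forget(phi, p):
    """Forget(phi, p) holds at v iff phi holds at some v' differing from v
    at most on p -- equivalently, phi[p -> true] or phi[p -> false]."""
    return lambda v: phi({**v, p: True}) or phi({**v, p: False})

# phi = Male(n) & Eligible(n); after forgetting Male(n), the models of
# the result are exactly those where Eligible(n) holds.
phi = lambda v: v['Male(n)'] and v['Eligible(n)']
psi = forget(phi, 'Male(n)')
for bits in product([True, False], repeat=2):
    v = dict(zip(['Male(n)', 'Eligible(n)'], bits))
    assert psi(v) == v['Eligible(n)']
```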
_Proposition._ Suppose \(\phi\) is non-modal and \(p\) is an atom. For every objective \(\psi\) such that \(\mathit{Forget}(\phi,p)\models\psi\), it is also the case that \(\mathbf{O}(\mathit{Forget}(\phi,p))\models\mathbf{K}\psi\).

Because \(\mathbf{O}\phi\models\mathbf{K}\psi\) for every \(\{\phi,\psi\}\) provided \(\phi\models\psi\), the above statement holds immediately. Insofar as we are concerned with a non-modal initial theory and the effects of forgetting, our definition of \(\textit{Forget}(\Sigma,p)\) above (notational abuse notwithstanding) suffices. In contrast, forgetting with arbitrary epistemic logical formulas is far more involved (Zhang and Zhou, 2009).

## 4 Existing notions

As discussed, we will not seek to simply retrofit existing ML notions in a logical language; rather, we aim to identify the principles and emphasize the provenance of unfair actions in complex events. Nonetheless, it is useful to revisit a few popular definitions to guide our intuition.

**Fairness through unawareness.** Fairness through unawareness (FTU) is the simplest definition of fairness; as its name suggests, an algorithm is "fair" if it is unaware of the protected attribute \(a_{p}\) of a particular individual when making a prediction (Kusner et al., 2017).

_Definition._ For some set of attributes \(X\), any mapping \(f:X\to\hat{y}\) where \(a_{p}\not\in X\) satisfies fairness through unawareness (Kusner et al., 2017). (Assume \(y\) denotes the true label.)

This prevents the algorithm from learning direct bias on the basis of the protected attribute, but does not prevent indirect bias, which the algorithm can learn by exploiting the relationship between other training variables and the protected attribute (Pedreschi et al., 2008; Hardt et al., 2016). Moreover, if any of the training attributes are allocated by humans, there is the potential for bias to be introduced.

**Statistical measures of fairness.** Rather than defining fairness in terms of the scope of the training data, much of the existing literature instead assesses whether an algorithm is fair on the basis of a number of statistical criteria that depend on the predictions made by the algorithm (Hardt et al., 2016; Kusner et al., 2017; Zemel et al., 2013). One widely used and simple criterion is demographic parity (DP). In the case that the predicted outcome and the protected attribute \(a_{p}\) are both binary variables, a classifier is said to satisfy demographic parity (Hardt et al., 2016) if: \(P(\hat{y}=1|a_{p}=1)=P(\hat{y}=1|a_{p}=0)\). By this definition, a classifier is considered fair if it is equally likely to make a positive prediction regardless of the value of the protected attribute \(a_{p}\).
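For comparison with the logical treatment that follows, the statistical criterion can be checked directly on data. A minimal sketch, assuming binary predictions and a binary protected attribute:

```python
import numpy as np

def demographic_parity_gap(y_hat, a_p):
    """|P(yhat=1 | a_p=1) - P(yhat=1 | a_p=0)| over binary arrays;
    a gap of 0 means the classifier satisfies demographic parity."""
    y_hat, a_p = np.asarray(y_hat), np.asarray(a_p)
    return abs(y_hat[a_p == 1].mean() - y_hat[a_p == 0].mean())

# Two of three loans granted in each group: the gap is 0.
print(demographic_parity_gap([1, 0, 1, 1, 0, 1], [1, 1, 1, 0, 0, 0]))
```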
**Fairness and the individual.** Another problem with statistical measures is that, provided that the criterion is satisfied, an algorithm will be judged to be fair regardless of the impact on individuals. In view of that, various works have introduced fairness metrics which aim to ensure that individuals are treated fairly, rather than simply considering the statistical impact on the population as a whole (Dwork et al., 2011; Kusner et al., 2017). Counterfactual fairness (CF), for example, was proposed as a fairness criterion in (Kusner et al., 2017). The fundamental principle behind this definition of fairness is that the outcome of the algorithm's prediction should not be altered if different individuals within the sample training set were allocated different values for their protected attributes (Kusner et al., 2017). This criterion is written in the following form: \(P(\hat{y}_{A_{p}\gets a_{p}}=y\mid A=a,X=x)=P(\hat{y}_{A_{p}\gets a^{\prime}_{p}}=y\mid A=a,X=x)\ \ \forall\,y,a^{\prime}_{p}\). The notation \(\hat{y}_{A_{p}\gets a_{p}}\) is understood as "the value of \(\hat{y}\) if \(A_{p}\) had taken the value \(a_{p}\)" (Kusner et al., 2017).

## 5 Formalizing Fairness

At the outset, let us note a few salient points about our formalizations of FTU, DP and CF:
1. Because we are not modeling a prediction problem, our definitions below should be seen as being loosely inspired by existing notions rather than faithful reconstructions. In particular, we will look at "fair outcomes" after a sequence of actions. Indeed, debates about problems with the mathematical notions of fairness in single-shot prediction problems are widespread (Dwork et al., 2011; Kusner et al., 2017; Zafar et al., 2017a), leading to recent work on looking at the long-term effects of fairness (Creager et al., 2020). However, we are ignoring probabilities in the formalization in current work only to better study the principles behind the above notions; we suspect that with a probabilistic epistemic dynamic language (Bacchus et al., 1999), the definitions might resemble mainstream notions almost exactly and yet organically use them over actions and programs, which is attractive.
2. The first-order nature of the language, such as quantification, will allow us to easily differentiate fairness for an individual versus groups. In the mainstream literature, this has to be argued informally, and the intuition grasped meta-linguistically.
3. Because we model the real world in addition to the agent's knowledge, we will be able to articulate what needs to be true versus just believed by the agent. In particular, our notion of equity will refer to the real world.
4. De-re vs de-dicto knowledge will mean having versus not having information about protected attributes respectively. Sensing actions can be set up to enable de-re knowledge if need be, but it is easy to see in what follows that de-dicto is preferable.
5. Action sequences can make predicates true, and this will help us think about equity in terms of balancing opportunities across instances of protected attributes (e.g., making some property true so that we achieve gender balance).

**Fairness through unawareness.** Let us begin with FTU: recall that it requires that the agent does not know the protected attributes of the individuals. To simplify the discussion, let us assume we are concerned with one such attribute \(\theta(x)\); say, \(Male(x)\) in our examples, for concreteness. We might be interested in achieving \(hasLoan(x)\) or \(highSalary(x)\), for example, either for all \(x\) or for some individual.

_Definition._ A sequence \(\delta=a_{1}\cdots a_{k}\) implements FTU for \(\phi\) wrt protected attribute \(\theta(x)\) iff (a) \(\Sigma\models[\delta]\mathbf{K}\phi\); and (b) for every \(\delta^{\prime}\leq\delta\): \(\Sigma\models[\delta^{\prime}]\neg\exists x(\mathbf{K}\theta(x))\).

The attractiveness of a first-order formalism is that in these and other definitions below where we quantify over all individuals, it is immediate to limit the applicability of the conditions wrt specific individuals. Suppose \(n\) is such an individual. Then:

_Definition._ A sequence \(\delta=a_{1}\cdots a_{k}\) implements FTU for \(\phi\) wrt attribute \(\theta(x)\) for individual \(n\) iff (a) \(\Sigma\models[\delta]\mathbf{K}\phi\); and (b) for every \(\delta^{\prime}\leq\delta\): \(\Sigma\models[\delta^{\prime}]\neg\mathbf{K}\theta(n)\).
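The prefix-wise clause in these definitions suggests a simple procedural check. Below is a hedged possible-worlds sketch of clause (b) for a finite epistemic state: `sense` is an assumed, illustrative sensing function, and clause (a), achieving the goal, would be checked separately.

```python
def knows(e, atom):
    """K theta(x): the atom holds in every remaining epistemic world."""
    return bool(e) and all(w.get(atom, False) for w in e)

def implements_ftu_clause_b(e, real, delta, individuals, sense, theta='Male'):
    """Clause (b): along every prefix of delta, no individual's protected
    attribute is known.  Worlds disagreeing with the real sensing outcome
    are discarded after each action (knowledge expansion)."""
    for a in (None,) + tuple(delta):    # None stands for the empty prefix
        if a is not None:
            e = [w for w in e if sense(a, w) == sense(a, real)]
        if any(knows(e, (theta, x)) for x in individuals):
            return False
    return True

# approve senses nothing; isMale reveals the gender.
sense = lambda a, w: w.get(('Male', a[1]), False) if a[0] == 'isMale' else True
w1, w2 = {('Male', 'n'): True}, {('Male', 'n'): False}
print(implements_ftu_clause_b([w1, w2], w1, [('approve', 'n')], ['n'], sense))  # True
print(implements_ftu_clause_b([w1, w2], w1, [('isMale', 'n')], ['n'], sense))   # False
```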
_Example._ Consider \(\Sigma\) from Section 3, \(Male(x)\) as the protected attribute, and suppose \(\delta=approve(n)\cdot approve(n^{\prime})\). It is clear that \(\delta\) implements FTU both for the universal \(\phi=\forall x\,hasLoan(x)\) and for an individual \(\phi=hasLoan(n)\). Throughout the history, the agent does not know the gender of the individuals.

Before turning to other notions, let us quickly reflect on proxy variables. Recall that in the ML literature, these are variables that indirectly provide information about protected attributes. We might formalize this using entailment:

_Definition._ Given a protected attribute \(\theta(x)\) and theory \(\Sigma\), let the proxy set \(Proxy(\theta(x))\) be the set of predicates \(\{\eta_{1}(x),\ldots,\eta_{k}(x)\}\) such that \(\Sigma\models\forall x(\eta_{i}(x)\supset\theta(x))\) for \(i\in\{1,\ldots,k\}\).

That is, given the axioms in the background theory, \(\eta_{i}(x)\) tells us about \(\theta(x)\).

_Example._ Suppose the agent knows the following sentence: \(\forall x(EtonForBoys(x)\supset Male(x))\). Let us assume \(EtonForBoys(x)\) is rigid, like \(Male(x)\). Let us also assume \(\mathbf{K}(EtonForBoys(n))\). It is clear that having information about this predicate for \(n\) means the agent can infer that \(n\) is male. The advantage of looking at entailment in our definitions is that we do not need to isolate the proxy set at all: whatever information the agent might have about the proxy set and its instances, all we really need to check is that \(\Sigma\not\models\exists x\,\mathbf{K}\theta(x)\).5

Footnote 5: With this discussion, we do not mean to insist that analyzing "relevant" predicates for \(\theta(x)\) is a pointless endeavor. Rather, we only want to point out that regardless of the information available to the agent, as long as we check that it is actually ignorant about the gender, other relevant predicates may not matter. Of course, a biased agent can enable actions that favor individuals based on such proxy predicates instead, but in that case, such proxy predicates would also need to be included in the protected attribute list.

**Demographic parity.** Let us now turn to DP. In the probabilistic context, DP is a reference to the proportion of individuals in the domain: say, the proportion of males promoted is the same as the proportion of females promoted. In logical terms, although FTU permitted its definition to apply to both groups and individuals, DP, by definition, is necessarily a quantified constraint. In contrast, CF will stipulate conditions solely on individuals.

_Definition._ A sequence \(\delta=a_{1}\cdots a_{k}\) implements DP for \(\phi(x)\) wrt attribute \(\theta(x)\) iff: \(\Sigma\models[\delta]\mathbf{K}(\forall x(\theta(x)\supset\phi(x))\wedge\forall x(\neg\theta(x)\supset\phi(x)))\).

To reiterate: in probabilistic terms, the proportion of men who are promoted equals the proportion of women who are promoted; in the categorical setting, the agent knows that all men are promoted as well as that all women are promoted.

_Example._ Consider \(\delta=approve(n)\cdot approve(n^{\prime})\). It implements DP for \(hasLoan(x)\) wrt attribute \(Male(x)\). Note that even though the agent does not know the gender of the individuals, in every possible world, regardless of the gender assigned to an individual \(n\) in that world, \(n\) has the loan. In other words, all men and all women hold the loan. This is de-dicto knowledge of the genders, and it is sufficient to capture the thrust of DP.
We might be tempted to propose a stronger requirement, stipulating de-re knowledge:

_Definition._ A sequence \(\delta=a_{1}\cdots a_{k}\) implements strong DP for \(\phi(x)\) wrt attribute \(\theta(x)\) iff: (a) \(\Sigma\models[\delta]\mathbf{K}(\forall x(\theta(x)\supset\phi(x))\wedge\forall x(\neg\theta(x)\supset\phi(x)))\); and (b) \(\Sigma\models[\delta]\forall x(\mathbf{K}\theta(x)\lor\mathbf{K}\neg\theta(x))\).

That is, the agent knows whether \(x\) is a male or not, for every \(x\).

_Example._ Consider \(\delta=isMale(n)\cdot isMale(n^{\prime})\cdot approve(n)\cdot approve(n^{\prime})\). It implements strong DP for \(hasLoan(x)\) wrt attribute \(Male(x)\). Of course, by definition, \(\delta\) also implements DP for \(hasLoan(x)\).

**FTU-DP.** In general, since we do not wish the agent to know the values of protected attributes, vanilla DP is more attractive. Formally, we may impose an FTU-style constraint of not knowing on any fairness definition. For example:

_Definition._ A sequence \(\delta=a_{1}\cdots a_{k}\) implements FTU-DP for \(\phi(x)\) wrt attribute \(\theta(x)\) iff: (a) \(\Sigma\models[\delta]\mathbf{K}(\forall x(\theta(x)\supset\phi(x))\wedge\forall x(\neg\theta(x)\supset\phi(x)))\); and (b) for every \(\delta^{\prime}\leq\delta\): \(\Sigma\models[\delta^{\prime}]\neg\exists x\,\mathbf{K}\theta(x)\).

Again, it is worth remarking that mixing and matching constraints is straightforward in a logic, and the semantical apparatus provides us with the tools to study the resulting properties.

_Example._ The example for de-dicto DP is applicable here too. Consider \(\delta=approve(n)\cdot approve(n^{\prime})\). It implements FTU-DP for \(hasLoan(x)\) wrt attribute \(Male(x)\). That is, (a) \(\Sigma\not\models\exists x\,\mathbf{K}\theta(x)\); (b) \(\Sigma\not\models[approve(n)]\exists x\,\mathbf{K}\theta(x)\); and (c) \(\Sigma\not\models[approve(n)\cdot approve(n^{\prime})]\exists x\,\mathbf{K}\theta(x)\). Reversing the actions, not surprisingly, does not affect the matter: \(\delta^{\prime}=approve(n^{\prime})\cdot approve(n)\) also implements FTU-DP. Had the sequence included sensing, a reversal could matter.

One can also consider situations where some knowledge of protected attributes is useful to ensure there is parity while also accounting for special circumstances. Here, the protected attribute itself could be "hidden" in a more general class, which is easy enough to do in a relational language.

_Example._ Suppose we introduce a new predicate for underrepresented groups. We might have, for example: \(\forall x((\neg Male(x)\vee\ldots\vee RaceMinority(x))\supset Underrepresented(x))\). This could be coupled with a sensing axiom of the sort: \(\Box SF(checkU(x))\equiv Underrepresented(x)\). Add the predicate definition and the sensing axiom to the initial theories and dynamic axioms in \(\Sigma\) respectively. Consider \(\delta=checkU(n)\cdot checkU(n^{\prime})\cdot approve(n)\cdot approve(n^{\prime})\). Then \(\delta\) implements strong DP for \(hasLoan(x)\) wrt attribute \(Underrepresented(x)\). That is, both represented and underrepresented groups have loans.

**Equality of opportunity.** One problem with DP is that (unless the instance rate of \(y=1\) happens to be the same in both the \(a_{p}=0\) group and the \(a_{p}=1\) group) the classifier cannot achieve 100% classification accuracy and satisfy the fairness criterion simultaneously (Hardt et al., 2016).
Also, there are scenarios where this definition is completely inappropriate because the instance rate of \(y=1\) differs so starkly between different demographic groups. Finally, there are also concerns that statistical parity measures fail to account for fair treatment of individuals (Dwork et al., 2011). Nonetheless, it is often regarded as the most appropriate statistical definition when an algorithm is trained on historical data (Zafar et al., 2017; Zemel et al., 2013). A modification of demographic parity is "equality of opportunity" (EO). By this definition, a classifier is considered fair if, among those individuals who meet the positive criterion, the instance rate of correct prediction is identical, regardless of the value of the protected attribute (Hardt et al., 2016). This condition can be expressed as (Hardt et al., 2016): \(P(\hat{y}=1|a_{p}=a,y=1)=P(\hat{y}=1|a_{p}=a^{\prime},y=1)\ \ \forall\,a,a^{\prime}\). In (Hardt et al., 2016), it is pointed out that a classifier can simultaneously satisfy equality of opportunity and achieve perfect prediction, whereby \(\hat{y}=y\) (prediction equals true label) in all cases. In the logical setting, this can be seen as a matter of only looking at individuals that satisfy a criterion, such as being eligible for promotion or not being too old to run for office.

_Definition._ A sequence \(\delta\) implements EO for \(\phi(x)\) wrt attribute \(\theta(x)\) and criterion \(\eta(x)\) iff:

\[\Sigma\models[\delta]\mathbf{K}(\forall x((\eta(x)\wedge\theta(x))\supset\phi(x))\wedge\forall x((\eta(x)\wedge\neg\theta(x))\supset\phi(x))).\]

_Example._ Consider \(\delta=promote(n)\cdot promote(n^{\prime})\), let \(\phi(x)=highSalary(x)\) and the criterion \(\eta(x)=Eligible(x)\). Although the promote action for \(n^{\prime}\) does not lead her to obtain a high salary, because we condition the definition only on eligible individuals, \(\delta\) does indeed implement EO. Note again that the agent does not know the gender of \(n^{\prime}\), but in every possible world, regardless of the gender \(n^{\prime}\) is assigned, \(n^{\prime}\) is known to be ineligible. In contrast, \(n\) is eligible and \(\delta\) leads to \(n\) having a high salary. That is, every eligible male now has a high salary, and every eligible female also has a high salary. (It just so happens there are no eligible females, but we will come to that.) In general, the equality of opportunity criterion might well be better applied in instances where there is a known underlying discrepancy in positive outcomes between two different groups, and this discrepancy is regarded as permissible. However, as we might observe in our background theory, there is systematic bias in that no woman is considered eligible.
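As with demographic parity, the statistical criterion admits a direct empirical check; a minimal sketch, again assuming binary labels, predictions, and protected attribute:

```python
import numpy as np

def equal_opportunity_gap(y, y_hat, a_p):
    """|P(yhat=1 | y=1, a_p=1) - P(yhat=1 | y=1, a_p=0)|: among truly
    positive individuals, true positive rates must match across groups."""
    y, y_hat, a_p = map(np.asarray, (y, y_hat, a_p))
    tpr = lambda g: y_hat[(y == 1) & (a_p == g)].mean()
    return abs(tpr(1) - tpr(0))

# Perfect prediction (y_hat == y) gives a gap of 0, as Hardt et al. note.
y = [1, 0, 1, 1, 0, 1]
print(equal_opportunity_gap(y, y, [1, 1, 1, 0, 0, 0]))  # 0.0
```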
**Counterfactual fairness.** Let us now turn to CF. The existing definition forces us to consider a "counterfactual world" where the protected attribute values are reversed, and to ensure that the action sequence still achieves the goal.

_Definition._ A sequence \(\delta=a_{1}\cdots a_{k}\) implements CF for \(\phi\) wrt attribute \(\theta(x)\) for individual \(n\) iff:
* \(\Sigma\models(\theta(n)=b)\) for \(b\in\{0,1\}\) and \(\Sigma\models[\delta]\mathbf{K}\phi\); and
* \(\textit{Forget}(\Sigma,\theta(n))\wedge(\theta(n)\neq b)\models[\delta]\mathbf{K}\phi\).

_Example._ Let us consider the case of loan approvals. Consider the individual \(n\) and the action \(\delta=approve(n)\). Let \(\phi=hasLoan(n)\), and let the protected attribute be \(Male(x)\). Clearly \(\Sigma\models Male(n)\), and indeed \(\Sigma\models[\delta]hasLoan(n)\). If we consider \(\Sigma^{\prime}\) where the gender of \(n\) is swapped, it is still the case that \(\Sigma^{\prime}\models[\delta]hasLoan(n)\). Thus \(\delta\) implements CF for \(hasLoan(n)\) wrt \(Male(n)\).

The definition of CF is well-intentioned, but does not quite capture properties that might enable equity. Indeed, there is a gender imbalance in the theory, in the sense that only the male employee is eligible for promotions and the female employee can never become eligible. Yet CF does not quite capture this. Let us revisit the example with getting high salaries:

_Example._ Consider \(\delta=promote(n)\) for property \(highSalary(n)\) wrt attribute \(Male(n)\). It is clear that \(\delta\) implements CF because the gender is irrelevant given that \(n\) is eligible. However, given \(\delta^{\prime}=promote(n^{\prime})\), we see that \(\delta^{\prime}\) does not implement CF for \(highSalary(n^{\prime})\) wrt \(Male(n^{\prime})\). Because \(n^{\prime}\) is not eligible, \(highSalary(n^{\prime})\) does not become true after the promotion.

**Equity.** Among the many growing criticisms of formal definitions of fairness is that notions such as CF fail to capture systemic injustices and imbalances. We do not suggest that formal languages would address such criticisms, but they provide an opportunity to study desirable augmentations to the initial knowledge or action theory. Rather than propose a new definition, let us take inspiration from DP, which seems fairly reasonable except that it is in the context of what the agent knows. Keeping in mind a desirable "positive" property such as \(Eligible(x)\), let us consider DP but at the world level:

_Definition._ Given a theory \(\Sigma\), protected attribute \(\theta(x)\) and positive property \(\eta(x)\), where \(x\) ranges over individuals, _strong equity_ holds iff: \(\Sigma\models\forall x(\theta(x)\supset\eta(x))\wedge\forall x(\neg\theta(x)\supset\eta(x))\).

In general, it may not be feasible to ensure that properties hold for all instances of both genders. For example, there may be only a handful of C-level executives, and we may wish that there are executives of both genders.

_Definition._ Given a theory \(\Sigma\), protected attribute \(\theta(x)\) and positive property \(\eta(x)\), where \(x\) ranges over individuals, _weak equity_ holds iff: \(\Sigma\models\exists x(\theta(x)\wedge\eta(x))\wedge\exists x(\neg\theta(x)\wedge\eta(x))\).

It is implicitly assumed that the sets of positive and negative instances for \(\theta(x)\) are non-empty: that is, assume the integrity constraint \(\Sigma\models\exists x,y\,(\theta(x)\wedge\neg\theta(y))\). We assume weak equity and focus on FTU below; the definitions could be extended to strong equity or other fairness notions depending on the modelling requirements.

_Definition._ A sequence \(\delta=a_{1}\cdots a_{k}\) implements equitable FTU for \(\phi\) wrt protected attribute \(\theta(x)\) and property \(\eta(x)\) iff (a) weak equity holds in \(\Sigma\) and \(\delta\) implements FTU; or (b) \(\delta\) implements equitable FTU for \(\phi\) wrt \(\theta(x)\) and \(\eta(x)\) for the updated theory \(\textit{Forget}(\Sigma,S)\), where \(S=\{\eta(n_{i})\mid i\in N\}\).

Note that we are assuming that \(N\) is finite here, because we have only defined forgetting wrt finitely many atoms. Otherwise, we would need a second-order definition.
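Weak equity is an existential check against the world; here is a small illustrative sketch over a finite database, reusing the predicate names of the running example:

```python
def weak_equity(world, individuals, theta='Male', eta='Eligible'):
    """exists x (theta(x) & eta(x))  and  exists x (~theta(x) & eta(x))."""
    pos = any(world.get((theta, x), False) and world.get((eta, x), False)
              for x in individuals)
    neg = any(not world.get((theta, x), False) and world.get((eta, x), False)
              for x in individuals)
    return pos and neg

sigma0 = {('Male', 'n'): True, ('Eligible', 'n'): True,
          ('Male', "n'"): False, ('Eligible', "n'"): False}
print(weak_equity(sigma0, ['n', "n'"]))   # False: no eligible female
```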
_Example._ Consider \(\delta=promote(n)\cdot promote(n^{\prime})\) for the goal \(\phi=\forall x\,(highSalary(x))\) wrt protected attribute \(Male(x)\) and property \(Eligible(x)\). It is clear that weak equity does not hold for \(\Sigma\), because there is a female who is not eligible. In this case, consider \(\Sigma^{\prime}=\textit{Forget}(\Sigma,S)\) where \(S=\{\textit{Eligible}(n),\textit{Eligible}(n^{\prime})\}\). With that, \(\Sigma^{\prime}\) no longer mentions that \(n\) is eligible, so the promotion actions do not lead to anyone having a high salary. So \(\delta\) does not enable knowledge of \(\phi\).

_Example._ Let us consider \(\Sigma^{\prime}\) that is like \(\Sigma\) except that \(Eligible(x)\) is not rigid, and can be affected using the action \(make(x)\): \(\Box[a]Eligible(x)\equiv Eligible(x)\vee(a=make(x))\). That is, either an individual is eligible already or the manager makes them eligible. Of course, \(\delta=promote(n)\cdot promote(n^{\prime})\) from above still does not implement equitable FTU, because we have not yet considered any actions to make individuals eligible. However, consider \(\delta^{\prime}=make(n)\cdot make(n^{\prime})\cdot promote(n)\cdot promote(n^{\prime})\). Because \(\Sigma\) does not satisfy weak equity, we turn to the second condition of the definition. On forgetting, no one is eligible in the updated theory, but the first two actions in \(\delta^{\prime}\) make both \(n\) and \(n^{\prime}\) eligible, after which they are both promoted. So \(\delta^{\prime}\) enables knowledge of \(\forall x\,(highSalary(x))\). Thus, the actions have made clear that eligibility is the first step in achieving gender balance, after which promotions guarantee that there are individuals of both genders with high salaries.

## 6 Conclusions

In this paper, we looked into notions of fairness from the machine learning literature and, inspired by these, attempted a formalization in an epistemic logic. Although we limited ourselves to categorical knowledge and noise-free observations, we enrich the literature by considering actions. Consequently, we looked into three notions, fairness through unawareness, demographic parity and counterfactual fairness, and then expanded these notions to also tackle equality of opportunity as well as equity. We were also able to mix and match constraints, showing the advantage of a logical approach, where one can formally study the properties of (combinations of) definitions. Using a simple basic action theory, we were nonetheless able to explore these notions over action sequences. As mentioned earlier, this is only a first step, and as argued in works such as [4, 10, 11], there is much promise in looking at ethical AI using rich logics. In fact, we did not aim to faithfully reconstruct existing ML notions in this paper but rather to study the underlying principles, primarily because we are not focusing on single-shot prediction problems but on how actions, plans and programs might implement fairness and de-biasing. The fact that fairness was defined in terms of actions making knowledge of the goal true, exactly as one would in planning [12], is no accident. State-of-the-art analysis in fairness is now primarily based on false positives and false negatives [13]. So we think, as a next step, a probabilistic language such as [1] could bring our notions closer to mainstream definitions, but now in the presence of actions.
In the long term, the goal is to logically capture bias in the presence of actions, as well as repeated harms caused by systemic biases [14]. Moreover, the use of logic not only serves notions such as verification and correctness but, as we argue, could also provide a richer landscape for exploring ethical systems in the presence of background knowledge and context. This would enable the use of formal tools (model theory, proof strategies and reasoning algorithms) to study the long-term impact of bias while ensuring fair outcomes throughout the operational life of autonomous agents embedded in complex socio-technical applications. Of course, a logical study such as ours perhaps has the downside that the language of the paper is best appreciated by researchers in knowledge representation, and is not immediately accessible to a mainstream machine learning audience. On the other hand, there is considerable criticism geared at single-shot prediction models for not building in sufficient context and commonsense. In that regard, operationalising a system that permits a declaration of the assumptions and knowledge of the agents and their actions might be exactly "what the doctor ordered." See also efforts in causal modelling [10] that are close in spirit.
2304.14997
Towards Automated Circuit Discovery for Mechanistic Interpretability
Through considerable effort and intuition, several recent works have reverse-engineered nontrivial behaviors of transformer models. This paper systematizes the mechanistic interpretability process they followed. First, researchers choose a metric and dataset that elicit the desired model behavior. Then, they apply activation patching to find which abstract neural network units are involved in the behavior. By varying the dataset, metric, and units under investigation, researchers can understand the functionality of each component. We automate one of the process' steps: to identify the circuit that implements the specified behavior in the model's computational graph. We propose several algorithms and reproduce previous interpretability results to validate them. For example, the ACDC algorithm rediscovered 5/5 of the component types in a circuit in GPT-2 Small that computes the Greater-Than operation. ACDC selected 68 of the 32,000 edges in GPT-2 Small, all of which were manually found by previous work. Our code is available at https://github.com/ArthurConmy/Automatic-Circuit-Discovery.
Arthur Conmy, Augustine N. Mavor-Parker, Aengus Lynch, Stefan Heimersheim, Adrià Garriga-Alonso
2023-04-28T17:36:53Z
http://arxiv.org/abs/2304.14997v4
# Towards Automated Circuit Discovery for Mechanistic Interpretability

###### Abstract

Through considerable effort and intuition, several recent works have reverse-engineered nontrivial behaviors of transformer models. This paper systematizes the mechanistic interpretability process they followed. First, researchers choose a metric and dataset that elicit the desired model behavior. Then, they apply activation patching to find which abstract neural network units are involved in the behavior. By varying the dataset, metric, and units under investigation, researchers can understand the functionality of each component. We automate one of the process' steps: to identify the circuit that implements the specified behavior in the model's computational graph. We propose several algorithms and reproduce previous interpretability results to validate them. For example, the ACDC algorithm rediscovered 5/5 of the component types in a circuit in GPT-2 Small that computes the Greater-Than operation. ACDC selected 68 of the 32,000 edges in GPT-2 Small, all of which were manually found by previous work. Our code is available at [https://github.com/ArthurConmy/Automatic-Circuit-Discovery](https://github.com/ArthurConmy/Automatic-Circuit-Discovery).

## 1 Introduction

Rapid progress in transformer language modelling (Vaswani et al., 2017; Devlin et al., 2019; OpenAI, 2023, _inter alia_) has directed attention towards understanding the causes of new capabilities (Wei et al., 2022) in these models. Researchers have identified precise high-level predictors of model performance (Kaplan et al., 2020), but transformers are still widely considered 'black-boxes' (Alishahi et al., 2019; Buhrmester et al., 2019), like almost all other neural network models (Fong and Vedaldi, 2017; Buhrmester et al., 2021).2 Interpretability research aims to demystify machine learning models, for example by explaining model outputs in terms of domain-relevant concepts (Zhang et al., 2021).

Footnote 2: Though this perspective is not universal (Lipton, 2016).

Mechanistic interpretability focuses on reverse-engineering model components into human-understandable algorithms (Olah, 2022). Much research in mechanistic interpretability views models as a computational graph (Geiger et al., 2021), and circuits are subgraphs with distinct functionality (Wang et al., 2023). The current approach to extracting circuits from neural networks relies on a lot of manual inspection by humans (Räuker et al., 2022). This is a major obstacle to scaling up mechanistic interpretability to larger models, more behaviors, and complicated behaviors composed of many sub-circuits. This work identifies a workflow for circuit research, and automates part of it by presenting several methods to extract computational graphs from neural networks.

Our main contributions are as follows. First, we systematize the common workflow prevalent in many existing mechanistic interpretability works, outlining the essential components of this process (Section 2). One of its steps is to find a subgraph of the model which implements the behavior of interest, which is a step possible to automate. We introduce Automatic Circuit DisCovery (ACDC), a novel algorithm that follows the way in which researchers identify circuits (Section 3), and adapt Subnetwork Probing (SP; Cao, Sanh, and Rush, 2021) and Head Importance Score for Pruning (HISP; Michel, Levy, and Neubig, 2019) for the same task. Finally, we introduce quantitative metrics to evaluate the success of circuit extraction algorithms (Sections 4 and 4.2).
We present a detailed ablation study of design choices in Appendix D and qualitative studies in Appendices E, F, G, H and I.

## 2 The Mechanistic Interpretability Workflow

Mechanistic interpretability attempts to explain and predict neural network behaviors by understanding the underlying algorithms implemented by models. In the related work section we discuss the mechanistic interpretability field and its relationship to 'circuits' research (Section 5). Neural network behaviors are implemented by algorithms within the model's computational graph, and prior work has identified subgraphs (_circuits_, following Wang et al. (2023)'s definition) that capture the majority of particular behaviors. In this section, we describe a workflow that several prior works have followed and that has been fruitful for finding circuits in models.

As a concrete example of an approach taken to finding a circuit, Hanna, Liu, and Variengien (2023) prompt GPT-2 Small with a dataset of sentences like "The war lasted from 1517 to 15". GPT-2 Small completes this sentence with "18" or "19" or any larger two-digit number, but not with any two-digit number that is at most "17" (from here, we refer to prompt completions like this as the "Greater-Than" task). This behavior can be measured by the difference between the probability the model places on a completion of "18" or larger and the probability it places on a completion of "17" or smaller. The researchers then create a _corrupted dataset_ of sentences that do not have any bias against particular two-digit completions (the '01-dataset' (Hanna, Liu, and Variengien, 2023)). The researchers attribute the greater-than operation to late-layer MLPs and then find earlier components that identify the numerical values of years, including attention heads in the model. Finally, Hanna, Liu, and Variengien (2023) interpret the role of each set of components. For example, they identify early model components that respond to the "17" token, and later model components that boost the logits for years greater than 17.

There are equivalent steps taken in a growing number of additional works (Heimersheim and Janiak, 2023, the "Docstring" task; Goldowsky-Dill et al., 2023, the "Induction" task; Wang et al., 2023, the "IOI" task), described in brief in Table 1 and in detail in Appendices E, G and I.

Figure 1: **Automatically discovering circuits with ACDC.** _Left_: a computational graph for GPT-2 Small, with a recovered circuit for the IOI task highlighted in red. Only edges between adjacent layers are shown. _Right_: the recovered circuit with labelled nodes. All heads recovered were identified as part of the IOI circuit by Wang et al. (2023). Edge thickness is proportional to importance.

We identify the workflow that eventually finds a circuit as following three steps. Researchers:
1. Observe a behavior (or task3) that a neural network displays, create a dataset that reproduces the behavior in question, and choose a metric to measure the extent to which the model performs the task.
2. Define the scope of the interpretation, i.e. decide the level of granularity (e.g. attention heads and MLP layers, individual neurons, whether these are split by token position) at which to analyze the network. This results in a computational graph of interconnected model units.
3. Perform an extensive and iterative series of patching experiments with the goal of removing as many unnecessary components and connections from the model as possible.

Footnote 3: Section 3 formally defines "task". We use "behavior" and "task" interchangeably.
Researchers repeat the previous three steps with a slightly different dataset or granularity, until they are satisfied with the explanation of the circuit components. This work (ACDC) presents a tool to fully automate Step 3. Before we dive into the details of ACDC, we expand on what Steps 1–3 involve, and review examples from previous work that we use to evaluate ACDC.

### Step 1: Select a behavior, dataset, and metric

The first step of the general mechanistic interpretability workflow is to choose a neural network behavior to analyze. Most commonly, researchers choose a clearly defined behavior so as to isolate only the algorithm for one particular task, and curate a dataset which elicits the behavior from the model. Choosing a clearly defined behavior means that the circuit will be easier to interpret than a mix of circuits corresponding to a vague behavior. Some prior work has reverse-engineered the algorithm behind a small model's behavior on all inputs in its training distribution (Nanda et al., 2023; Chughtai et al., 2023), though for language models this is currently intractable, hence the focus on individual tasks. We identified a list of interesting behaviors that we used to test our method, summarized in Table 1. These include previously analyzed transformer models (Tasks 1 and 3 on GPT-2 Small, Tasks 2 and 6 on smaller language transformers) where researchers followed a workflow similar to the one we described above. Tasks 4 and 5 involve the full behavior of tiny transformers that implement a known algorithm, compiled with tracr (Lindner et al., 2023). For each task, we mention the metric used in previous work to measure the extent to which the model performs the task on the corresponding dataset.
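To make such a metric concrete, here is a hedged sketch of the Greater-Than probability difference described in Section 2: it assumes a PyTorch model whose final-position logits are available, and a GPT-2-style tokenizer in which each two-digit year string is a single token (tokenization details vary between models, so treat this as illustrative rather than the authors' implementation).

```python
import torch

def greater_than_metric(logits, tokenizer, start_year=17):
    """Probability mass on two-digit completions strictly greater than
    start_year, minus the mass on completions at most start_year, read
    off the final-position logits.  Assumes '00'..'99' tokenize to
    single tokens (an assumption; check for your tokenizer)."""
    probs = torch.softmax(logits[..., -1, :], dim=-1)
    ids = [tokenizer.encode(f"{i:02d}")[0] for i in range(100)]
    hi = probs[..., torch.tensor(ids[start_year + 1:])].sum(-1)
    lo = probs[..., torch.tensor(ids[:start_year + 1])].sum(-1)
    return (hi - lo).mean()
```

A model performing the task well should score close to 1, since almost all probability mass should land on years after the prompt's start year.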
We test edges for their importance by using recursive _activation patching_: i) overwrite the activation value of a node or edge with a corrupted activation, ii) run a forward pass through the model, and iii) compare the output values of the new model with those of the original model, using the chosen metric (Section 2.1). One typically starts at the output node, determines the important incoming edges, and then investigates all the parent nodes through these edges in the same way. It is this procedure that ACDC follows and automates in Algorithm 1.

**Patching with zeros and patching with different activations.** Activation patching methodology varies between mechanistic interpretability projects. Some projects overwrite activation values with zeros (Olsson et al., 2022; Cammarata et al., 2021), while others erase activations' informational content using the mean activation on the dataset (Wang et al., 2023). Geiger et al. (2021) prescribe _interchange interventions_ instead: to overwrite a node's activation value on one data point with its value on another data point. Chan et al. (2022) justify this by arguing that both zero and mean activations take the model too far away from actually possible activation distributions. Interchange interventions have been used in more interpretability projects (Hanna et al., 2023; Heimersheim and Janiak, 2023; Wang et al., 2023), so we prefer this approach. However, we also compare all our experiments to replacing activations with zeros (Appendix D.2).

### Explaining the circuit components

After successfully isolating a subgraph, one has found a circuit (Section 1). The researcher can then formulate and test hypotheses about the functions implemented by each node in the subgraph. There is early evidence that ACDC is helpful for making novel observations about how language models complete tasks, such as the importance of surprising token positions that help GPT-2 Small predict correctly gendered pronouns (Appendix J). In our work we focus on automating all steps up to this final step, though we think that automating the functional interpretation of model components is an exciting further research direction.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
**Task** & **Example Prompt** & **Output** & **Metric** \\
\hline
1: IOI (Appendix E.2) & “When John and Mary went to the store, Mary gave a bottle of milk to” & “\_John” & Logit difference \\
\hline
2: Docstring (Appendix G.1) & def(self, files, obj, state, size, shape, option): … & … & … \\
\hline
3: Greater-Than (Appendix F) & “The war lasted from 1517 to 15” & “18”–“99” & Probability difference \\
\hline
4, 5: tracr (Appendix H) & … & … & L2 distance \\
\hline
6: Induction (Appendix I) & … & … & Negative log probability \\
\hline \hline
\end{tabular}
\end{table}
Table 1: The behaviors (tasks) studied in this work, with example prompts and the metrics used in prior work to measure the extent to which the model performs each task.

## 3 Automating circuit discovery (Step 3)

This section describes algorithms to automate Step 3 of the mechanistic interpretability workflow (Section 2.3). In all three cases, we assume that the 'task' being studied is defined by a set of prompts \((x_{i})_{i=1}^{n}\) on which the model's predictions have a noticeable pattern (see Table 1 for examples) and a set of prompts \((x_{i}^{\prime})_{i=1}^{n}\) where this task is not present. We then use the activations of the model on a forward pass on the points \(x_{i}^{\prime}\) as corrupted activations (Section 2.3).
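As a concrete instance of this setup, the clean/corrupted prompt pairs and metric for a task like IOI might be set up as follows (a minimal sketch; the corruption scheme shown is an illustrative assumption rather than the exact datasets of Wang et al. (2023)):

```python
# Clean prompts (x_i) elicit the behavior; corrupted prompts (x_i') do not.
clean_prompts = [
    "When John and Mary went to the store, Mary gave a bottle of milk to",
]
# Assumption: corruption swaps in an unrelated name, removing the cue
# that identifies the indirect object.
corrupted_prompts = [
    "When John and Mary went to the store, Sarah gave a bottle of milk to",
]

def logit_diff(logits, correct_id, incorrect_id):
    """Task metric: final-position logit of the correct completion
    minus that of the incorrect completion."""
    return logits[..., -1, correct_id] - logits[..., -1, incorrect_id]
```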
**Automatic Circuit DisCovery (ACDC).** Informally, a run of ACDC iterates from outputs to inputs through the computational graph, starting at the output node, to build a subgraph. At every node it attempts to remove as many edges that enter this node as possible, without reducing the model's performance on a selected metric. Finally, once all nodes have been iterated over, the algorithm (when successful) finds a graph that i) is far sparser than the original graph and ii) recovers good performance on the task.

To formalize the ACDC process, we let \(G\) be a computational graph of the model of interest, at a desired level of granularity (Section 2.2), with nodes topologically sorted and then reversed (so the nodes are sorted from output to input). Let \(H\subseteq G\) be the computational subgraph that is iteratively pruned, and \(\tau>0\) a threshold that determines the sparsity of the final state of \(H\).

We now define how we evaluate a subgraph \(H\). We let \(H(x_{i},x_{i}^{\prime})\) be the output probability distribution of the model when \(x_{i}\) is the input to the network, but we overwrite all edges in \(G\) that are not present in \(H\) with their activations on \(x_{i}^{\prime}\) (the corrupted input).4 Finally, we evaluate \(H\) by computing the KL divergence \(D_{\text{KL}}(G(x_{i})||H(x_{i},x_{i}^{\prime}))\) between the model's and the subgraph's predictions. We let \(D_{\text{KL}}(G||H)\) denote the average KL divergence over a set of datapoints. Appendix B discusses alternatives to the KL divergence, and Appendix D.1 explores the consequences of optimizing the task-specific metrics from Table 1 instead.

Footnote 4: To implement this experiment, we first run a forward pass with the unmodified model on the input \(x_{i}^{\prime}\) and cache all activations for use in this later editing step.

Algorithm 1 describes ACDC. The order in which we iterate over the parents \(w\) of \(v\) is a hyperparameter. In our experiments the order is lexicographic: from later-layer MLPs and heads to earlier-layer MLPs and heads, and from higher- to lower-indexed heads. We note that in one case in our work, the order of the parents affected experimental results (Appendix I).

Figure 2: **How ACDC works (Steps 2a-2c).** Step 2a: a practitioner specifies a computational graph of the model, the task they want to investigate, and a threshold under which to remove connections. Step 2b: ACDC iterates over nodes in the computational graph, replacing activations of connections between a node and its children, and measuring the effect on the output metric. Connections are removed if their measured effect on the metric under corruption is below the threshold \(\tau\). Step 2c: recursively apply Step 2b to the remaining nodes. The ACDC procedure returns a subgraph of the original computational graph.

**Subnetwork Probing (SP; Cao, Sanh, and Rush, 2021).** SP learns a mask over the internal model components (such as attention heads and MLPs), using an objective that combines accuracy and sparsity (Louizos, Welling, and Kingma, 2018), with a regularization parameter \(\lambda\). At the end of training, we round the mask to 0 or 1 for each entry, so the masked computation corresponds exactly to a subnetwork of a transformer. SP aims to retain enough information that a linear probe can still extract linguistic information from the model's hidden states. In order to use it to automate circuit discovery, we make three modifications.
We i) remove the linear probe, ii) change the training metric to the KL divergence as in Section 3, and iii) use the mask to interpolate between corrupted activations and clean activations (Section 3) rather than between zero activations and clean activations. Appendix C.1 explains the details of these changes.

**Head Importance Score for Pruning (HISP; Michel, Levy, and Neubig, 2019).** HISP ranks the heads by importance scores (Appendix C.2) and prunes all the heads except those with the top \(k\) scores. Keeping only the top \(k\) heads corresponds to a subnetwork that we can compare to ACDC. We plot the ROC obtained from the full possible range of \(k\). Like SP, this method only considers replacing head activations with zero activations, and therefore we once more generalize it to replace heads with corrupted activations (for details, see Appendix C.2).

## 4 Evaluating Subgraph Recovery Algorithms

To compare methods for identifying circuits, we seek empirical answers to the following questions.

* **Q1:** Does the method identify the subgraph corresponding to the underlying algorithm implemented by the neural network?
* **Q2:** Does the method avoid including components which do not participate in the elicited behavior?

We attempt to measure **Q1** and **Q2** using two kinds of imperfect metrics: some grounded in previous work (Section 4.1), and some that correspond to stand-alone properties of the model and discovered subgraph (Section 4.2).

### Grounded in previous work: area under ROC curves

The receiver operating characteristic (ROC) curve is useful because a high true-positive rate (TPR) and a low false-positive rate (FPR) conceptually correspond to affirming **Q1** and **Q2**, respectively. We consider _canonical_ circuits taken from previous works which found an end-to-end circuit explaining behavior for tasks in Table 1. We formulate circuit discovery as a binary classification problem, where edges are classified as positive (in the circuit) or negative (not in the circuit). Appendices E, F, G, H and I describe and depict the canonical circuits for each task. Appendix D.3 considers the node classification problem instead, which is less appropriate for ACDC but more appropriate for other methods.

We sweep over a range of ACDC thresholds \(\tau\), SP regularization parameters \(\lambda\), or numbers of HISP elements pruned \(k\). We plot pessimistic segments between points on the Pareto frontier of TPR and FPR over this range of thresholds (Fawcett, 2006). ACDC and SP optimize the KL divergence for tasks where this makes sense (all but the tracr tasks, which use the L2 distance). All methods patch with activations computed from corrupted data. Appendix B describes, and Appendix D experiments with, different design choices for the metric and activation patching methodology.

Figure 3 shows the results of studying how well existing methods recover circuits in transformers. We find that i) methods are very sensitive to the corrupted distribution, ii) ACDC has competitive performance (as measured by AUC) with gradient-descent based methods, and iii) ACDC is not robust, and it fails in some settings. Several of the tasks appeared to require specific distributions and metrics for the areas under the curves to be large. For example, ACDC achieved poor performance on both tracr tasks in Fig. 3, but the circuit was perfectly recovered by ACDC at any threshold \(\tau>0\) when patching activations with zeros (Appendix H).
Furthermore, ACDC achieves an average AUC of 0.596 across the 5 tasks in this setting, more than HISP (0.407) though less than SP (0.692). As an example of the lack of robustness of ACDC, on the Docstring task we achieve high performance when using the ACDC algorithm with the docstring metric (Appendix G), whereas in other tasks such as IOI, ACDC performance was worse when optimizing for logit difference. Further research in automated interpretability will likely yield further improvements to the FPR and TPR of circuit discovery.

We outline limitations of all current methods, but also gesture at likely fundamental limitations of the false positive and true positive measures. A limitation of all existing methods is that they optimize a single metric. This means they systematically miss internal model components, such as the "negative" components found in previous work (IOI, Docstring), that are actively harmful for performance. The IOI recovery runs were not able to recover negative heads when optimizing for logit difference. Even when optimizing for low KL divergence, the negative components were only recovered when very small thresholds were used (Figure 18).

Additionally, a more fundamental limitation to measuring the false and true positive rates of circuit recovery methods is that the ground-truth circuits are reported by practitioners and are likely to include extraneous edges and miss more important edges. The language model circuits studied in our work (Appendices E-G) involve a large number of edges (1041 in the case of IOI), and the full models contain more than an order of magnitude more edges. Since these interpretability works are carried out by humans who often report limitations of their understanding, our 'ground-truth' is not 100% reliable, limiting the strength of the conclusions that can be drawn from the experiments in this section.

Figure 3: ROC curves of ACDC, SP and HISP identifying model components from previous work, across 5 circuits in transformers. The points on the plot are cases where SP and ACDC return subgraphs that are not on the Pareto frontier. The corresponding AUCs are in Table 2.

### Stand-alone circuit properties: task-specific test metrics, reset networks

This section evaluates the algorithms by measuring the induction task-specific metric (negative log probability loss, Table 1) of recovered circuits on a test set of prompts. This is an indirect measure of **Q1**, with the advantage of not relying on the completeness or correctness of previous works. As an indicator of **Q2**, we also measure the number of edges that a hypothesized circuit contains. A circuit with fewer edges which still obtains a low task-specific metric is less likely to contain components that do not participate in the behavior. In this section we also introduce and explain experiments on **reset networks** that provide more evidence for **Q2**.

Our mainline experimental setup is to run the circuit recovery algorithms as described in Algorithm 1 and Section 3, and then measure the negative log probability loss of these circuits on the induction task (Appendix I). In brief, ACDC performs better than the other methods under these experimental conditions, with both corrupted and zero activations.
For example, the left-hand side of Figure 4 shows that, above 20 edges, ACDC starts having a slight advantage over the other methods in terms of behavior recovered per number of edges, as all points on the Pareto frontier with at least this many edges are generated from ACDC runs. Appendix D describes many further experiments with variations on this setup to provide a more complete picture of the performance of the circuit recovery algorithms.

Our **reset network** experimental setup is motivated by the concern that interpretability explanations may not accurately represent the reasoning process behind models' predictions (Jacovi and Goldberg, 2020). This is particularly relevant to work on subnetworks, as empirically some subnetworks in models with randomized weights do not accurately represent such reasoning (Ramanujan et al., 2020). To this end, we study the task-specific metrics on models with permuted weights (which we call **reset networks** (Zhang and Bowman, 2018; Cao, Sanh, and Rush, 2021)) and verify that the circuit recovery algorithms perform worse on these models, which do not implement the underlying algorithms. Specifically, we create the reset network by permuting the head dimension of each layer's Q, K, V matrices, and each MLP's bias term. This disrupts the functionality of the subject model without changing many facts about the distribution of the activations (e.g. the average magnitude at every layer).

In our experiments in Figure 4, the metric used by each algorithm is the KL divergence between the original trained network (with no edges patched) and the activation-patched reset network. The reset network does not exhibit the original network's behavior, and thus it should not be possible to explain the presence of the behavior. This is a strong measure of the negation of **Q2**: if an algorithm is able to find a circuit that performs the behavior in a network that does _not_ exhibit the behavior, then it will likely hallucinate circuit components in normal circumstances.

We can see that the test loss achieved by all methods is significantly lower for the trained networks, indicating that all the methods get signal from the neural network's ability to perform induction (Figure 4). HISP and SP with zero activations, and to some extent SP with corrupted activations, are also able to optimize the reset network. This suggests that these methods are somewhat more prone to finding circuits that don't exist (i.e. evidence against **Q2**).

Figure 4: Comparison of ACDC and SP with both corrupted-input activations (left) and zero activations (right). We plot the task-specific loss metric (Table 1) against the number of edges of each hypothesized circuit. Hollow data points use reset networks as the subject. Darker points include more edges in the hypothesis: they use smaller ACDC \(\tau\), smaller SP regularization \(\lambda\), or a higher percentage of nodes in HISP.

## 5 Related work

**Mechanistic interpretability** encompasses understanding features learnt by machine learning models (Olah, Mordvintsev, and Schubert, 2017; Elhage et al., 2022), mathematical frameworks for understanding machine learning architectures (Elhage et al., 2021), and efforts to find _circuits_ in models (Nanda et al., 2023; Cammarata et al., 2021; Chughtai, Chan, and Nanda, 2023; Wang et al., 2023).
The higher standard of a mechanistic understanding of a model has already had applications to designing better architectures (Fu et al., 2023), though the speculative goal of mechanistic interpretability is to understand the behavior of whole models, perhaps through describing all their circuits and how they compose. Little work has been done to automate interpretability, besides Bills et al. (2023), who use language models to label neurons in language models.

**Neural network pruning** masks the weights of neural networks to make their connectivity more sparse (LeCun, Denker, and Solla, 1989). In contrast to our aims, the pruning literature is typically concerned with compressing neural networks for faster inference or to reduce storage requirements (Wang, Wohlwend, and Lei, 2020; Kurtic et al., 2022). Early work (Hassibi and Stork, 1992) hoped pruning would lead to more interpretable networks, but progress towards interpretability via pruning has been limited (Grover, Gawri, and Manku, 2022). Pruning techniques may learn masks from data, which is a special case of using gradient information more generally. Such masks can be learned with an objective function that balances model performance and network sparsity (Louizos, Welling, and Kingma, 2018; Wang, Wohlwend, and Lei, 2020; Cao, Sanh, and Rush, 2021); this is a useful comparison to ACDC, as learnable masks do not change the weights of the model after pruning (Frantar and Alistarh, 2023). Examples of gradient information being used more generally include Michel, Levy, and Neubig (2019), who decide which heads should be pruned using the absolute value of their gradients, and "movement pruning" (Sanh, Wolf, and Rush, 2020), which removes parameters that are moving towards low magnitudes during training.

**Causal interpretation.** Much prior research on understanding language models has drawn inspiration from causal inference (Pearl, 2009), leading to the development of frameworks that provide causal explanations for model outputs (Pearl, 2009; Feder et al., 2021; Geiger et al., 2021; Wu et al., 2022; Kaddour et al., 2022). Other work (Vig et al., 2020) discusses the difference between indirect effects and direct effects inside language models, and experiments on removing subsets of heads, using the heads' direct effects as proxies for their overall contribution. Goldowsky-Dill et al. (2023) introduce 'path patching' to analyze the effects of different subsets of edges in computational graphs of models. In parallel to our work, Wu et al. (2023) develop a method to automatically test, with causal interventions, whether neural networks implement certain algorithms. Our work is focused on finding, rather than verifying, an outline of an algorithm implemented by a model.

**Computational subgraphs for interpretability.** Training dynamics in residual models can be explained by shallow paths through the computational graph (Veit, Wilber, and Belongie, 2016). MLP layers can be modelled as memory that is able to represent certain properties of the network inputs (Geva et al., 2021). Residual transformer models have been modelled as the sum of all different paths through the network (Elhage et al., 2021). Later work has used insights from looking at subgraphs of models in order to edit models' behaviors (Bau et al., 2020; Meng et al., 2022) and test interpretability hypotheses (Chan et al., 2022).

## 6 Conclusion

We have identified a common workflow for mechanistic interpretability. First, pin down a behavior using a metric and a dataset.
Second, conduct activation patching experiments to understand which abstract units (e.g. transformer heads) are involved in the behavior. Third, iterate the previous steps with variations of the behavior under study, until the model's algorithm is understood.

The main proposed algorithm, ACDC, systematically conducts all the activation patching experiments necessary to find which circuit composed of abstract units is responsible for the behavior. We have shown that ACDC and SP recover most of the compositional circuit that implements a language model behavior, as judged by comparison to previous mechanistic interpretability work (Section 4). ACDC with zero activations fully recovers the circuit of toy models (Fig. 10). Further, there is early evidence of the use of ACDC to help with novel interpretability work, discovering a surprising outline of a subgraph of GPT-2 Small that predicts gendered pronoun completion (Appendix J).

However, both ACDC and SP have limitations which prevent them from fully automating Step 3 of the identified workflow (activation patching). First, they systematically miss some classes of abstract units that are part of the circuit, for example the negative name mover heads from IOI (Wang et al., 2023). Second, the behavior of the algorithms is very sensitive to hyperparameter and metric choice, leading to varied and non-robust performance in some settings (Figure 3). On balance, the evidence supports the claim that ACDC can automate part of interpretability work, a novel contribution.

Automating interpretability research may be necessary to scale methods to the behaviors of the large models which are in use today. We hope that our open-source implementation of ACDC ([https://github.com/ArthurConmy/Automatic-Circuit-Discovery](https://github.com/ArthurConmy/Automatic-Circuit-Discovery)) accelerates interpretability research from the community. For example, future work could systematize and automate the problem of varying the corrupting dataset to understand the functionality of different parts of the circuit.

## Acknowledgments and Disclosure of Funding

This work would not have been possible without the generous support of Redwood Research through their REMIX program. We would like to thank Chris Mathwin, Jett Janiak, Chris MacLeod, Neel Nanda, Alexandre Variengien, Joseph Miller, Thomas Kwa, Sydney von Arx and Adam Gleave for feedback on a draft of this paper. Arthur Conmy would like to thank Jacob Steinhardt, Alexandre Variengien and Buck Shlegeris for extremely helpful conversations that shaped ACDC. We would also like to thank Haoxing Du for working on an early tool, Nate Thomas for coming up with the catchy name, Daniel Ziegler who discussed experiments that inspired our Subnetwork Probing analysis, and Lawrence Chan who helped us frame our contributions and suggested several experiments. Finally, we thank Hofvarpnir Studios, FAR AI and Conjecture for providing compute for this project.
2303.01483
Auxiliary Functions as Koopman Observables: Data-Driven Analysis of Dynamical Systems via Polynomial Optimization
We present a flexible data-driven method for dynamical system analysis that does not require explicit model discovery. The method is rooted in well-established techniques for approximating the Koopman operator from data and is implemented as a semidefinite program that can be solved numerically. Furthermore, the method is agnostic of whether data is generated through a deterministic or stochastic process, so its implementation requires no prior adjustments by the user to accommodate these different scenarios. Rigorous convergence results justify the applicability of the method, while also extending and uniting similar results from across the literature. Examples on discovering Lyapunov functions, performing ergodic optimization, and bounding extrema over attractors for both deterministic and stochastic dynamics exemplify these convergence results and demonstrate the performance of the method.
Jason J. Bramburger, Giovanni Fantuzzi
2023-03-02T18:44:18Z
http://arxiv.org/abs/2303.01483v4
Auxiliary Functions as Koopman Observables: Data-Driven Polynomial Optimization for Dynamical Systems

###### Abstract

We present a flexible data-driven method for dynamical system analysis that does not require explicit model discovery. The method is rooted in well-established techniques for approximating the Koopman operator from data and is implemented as a semidefinite program that can be solved numerically. The method is agnostic of whether data is generated through a deterministic or stochastic process, so its implementation requires no prior adjustments by the user to accommodate these different scenarios. Rigorous convergence results justify the applicability of the method, while also extending and uniting similar results from across the literature. Examples on discovering Lyapunov functions and on performing ergodic optimization for both deterministic and stochastic dynamics exemplify these convergence results and demonstrate the performance of the method.

## 1 Introduction

In his now famous work [27], Koopman presented an equivalent linear formulation of nonlinear systems through what is now called the Koopman operator. This linear description of genuinely nonlinear systems comes at the expense of lifting the dynamics to an infinite-dimensional Banach space of functions called _observables_. Nevertheless, the Koopman operator has become increasingly popular in recent years because its action on the span of finitely many observables (a _dictionary_) can be approximated through a data-driven approach called _extended dynamic mode decomposition_ (EDMD) [62]. Importantly, EDMD has convergence guarantees as the amount of data increases [53, 62] and as the dictionary grows [31], which justify its good practical performance. Here, we demonstrate that EDMD can be used not just to approximate the Koopman operator, but also to provide system-level information from dynamic data in a model-agnostic way.

Our starting point is that statements about dynamical systems can be proved by finding _auxiliary functions_ whose derivatives along trajectories, called _Lie derivatives_, satisfy pointwise inequalities implying the desired result. A familiar example are the Lyapunov functions used in stability analysis [38], which attain a global minimum at an equilibrium and decay monotonically along all other trajectories. Other types of auxiliary functions can be used to bound time averages [16, 58], stochastic expectations [14, 33], and extreme values along trajectories [13] or over attractors [17]; approximate reachable sets [29, 40], basins of attraction [19, 28, 57, 59], attractors [22, 51, 52], and invariant sets [3, 47, 48]; estimate system parameters and propagate uncertainty [8, 43, 55, 56]; and solve optimal control and optimal stopping problems [7, 20, 21, 35]. Moreover, and crucially for practical applications, auxiliary functions can be optimized using algorithms for semidefinite programming if the dynamics are governed by known polynomial equations.

In this work, we leverage the ability of EDMD to approximate the action of the Koopman operator on observable functions to approximately identify auxiliary functions directly from data. This is possible because the Lie derivative operator entering the constraint on auxiliary functions is the generator of the Koopman operator. Thus, one may view auxiliary functions as special Koopman observables and approximate them by replacing exact Lie derivatives with data-driven approximations built using EDMD.
This idea has in fact already been used to construct Lyapunov functions and approximate controlled invariant sets from data [30, 45]. Here we go beyond these two particular applications, providing a complete picture of how EDMD can be used to construct approximate Lie derivatives from data. We also show how approximate auxiliary functions can be discovered from data through semidefinite programming to make statements about the underlying dynamical system. We stress that our approach does not require the identification of a model from data, and that it can be applied to a very broad class of deterministic or stochastic dynamical processes in either continuous or discrete time. This makes it possible to approximate auxiliary functions from data even when it would be difficult to first discover a model for the dynamics using techniques like SINDy [4, 5, 23, 24, 41, 42, 49, 50].

In summary, our contributions in this work are:

1. For a broad class of stochastic processes, as described in section 2, we present a method for discovering auxiliary functions from data using semidefinite programming (sections 3 and 4).
2. We illustrate this method on a diversity of systems (section 5).
3. We provide rigorous convergence results in the infinite data, dictionary, and sampling rate limits (section 6).
4. We demonstrate subtleties of our method that illustrate the role of certain technical assumptions in our theoretical results (section 7).

Section 8 offers concluding remarks and outlines potential avenues for future work.

## 2 A class of dynamical systems

We begin by introducing a general class of dynamical systems that, as shown in section 2.2 below, includes deterministic and stochastic differential equations or maps. We consider stochastic processes from the outset since deterministic systems may be viewed as stochastic ones whose state at time \(t\) is determined almost surely given the state at any previous time \(s<t\).

### The general case

Let \(X_{t}\) denote the state at time \(t\) of a stochastic process evolving in a subset \(\mathbb{X}\) of a Banach space over either the continuous time set \(\mathbb{T}=\mathbb{R}_{+}\) or the discrete time set \(\mathbb{T}=\mathbb{N}\). We write \(\mathbb{E}[\varphi(s,X_{s})\,|\,X_{t}=x]\) for the expected value of \(\varphi(s,X_{s})\) at time \(s\geq t\) given that \(X_{t}=x\), with the understanding that \(\mathbb{E}[\varphi(s,X_{s})\,|\,X_{t}=x]=\varphi(s,X_{s})\) for deterministic dynamics. The _generator_ of the process is the linear operator \(\mathcal{L}\) defined on the space \(C_{b}(\mathbb{T}\times\mathbb{X})\) of bounded continuous functions on \(\mathbb{T}\times\mathbb{X}\) via

\[\mathcal{L}\varphi(t,x):=\mathbb{E}[\varphi(t+1,X_{t+1})\,|\,X_{t}=x]-\varphi(t,x)\]

in discrete time and by

\[\mathcal{L}\varphi(t,x)=\lim_{\tau\to 0^{+}}\frac{\mathbb{E}[\varphi(t+\tau,X_{t+\tau})\,|\,X_{t}=x]-\varphi(t,x)}{\tau}\]

in continuous time, provided the limit exists uniformly on \(\mathbb{T}\times\mathbb{X}\). We write \(\mathcal{D}(\mathcal{L})\) for the domain of \(\mathcal{L}\) and we call \(\mathcal{L}\varphi\) the _Lie derivative_ of \(\varphi\) since, for deterministic processes, \(\mathcal{L}\varphi\) gives simply the difference (in discrete time) or derivative (in continuous time) along trajectories of the process. Note that \(\varphi\) is assumed bounded to ensure that expectations are finite. For deterministic processes, however, Lie derivatives are well defined for all sufficiently smooth functions, even if they are unbounded.
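To make the discrete-time definition concrete, the conditional expectation in \(\mathcal{L}\varphi\) can be estimated by straightforward Monte-Carlo sampling. A minimal numpy sketch of ours, borrowing the stochastic logistic map that appears later in section 5.3 (the observable, sample size, and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def lie_derivative(phi, step, x, n_samples=100_000):
    """Monte-Carlo estimate of L phi(x) = E[phi(X_{t+1}) | X_t = x] - phi(x)
    for a time-homogeneous stochastic map X_{t+1} = step(omega, X_t)."""
    omega = rng.uniform(0.0, 4.0, n_samples)    # lambda_t ~ Uniform[0, 4]
    return np.mean(phi(step(omega, x))) - phi(x)

# Stochastic logistic map X_{t+1} = lambda_t X_t (1 - X_t), cf. (5.6)
step = lambda lam, x: lam * x * (1.0 - x)
phi = lambda x: x ** 2                           # an illustrative observable

est = lie_derivative(phi, step, x=0.3)
# Exact value: E[lambda^2] x^2 (1-x)^2 - x^2 = (16/3) * 0.09 * 0.49 - 0.09
```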
We will consider the general class of stochastic processes that are Markov and solve the so-called _martingale problem_ for their generators \(\mathcal{L}\). This means that, for all times \(s\geq t\) and all \(\varphi\) in the domain of \(\mathcal{L}\), we have

\[\mathbb{E}\left[\varphi(s,X_{s})\mid X_{t}\right]=\varphi(t,X_{t})+\sum_{\tau=t}^{s-1}\mathbb{E}\left[\mathcal{L}\varphi(\tau,X_{\tau})\mid X_{t}\right]\tag{2.1a}\]

in discrete time and

\[\mathbb{E}\left[\varphi(s,X_{s})\mid X_{t}\right]=\varphi(t,X_{t})+\mathbb{E}\left[\int_{t}^{s}\mathcal{L}\varphi(\tau,X_{\tau})\,d\tau\mid X_{t}\right]\tag{2.1b}\]

in continuous time. A detailed treatment of martingale problems and their use in characterizing stochastic processes can be found in [11].

Given a Markov process in the class just described and a timestep \(\tau\in\mathbb{T}\), one can define a linear operator \(\mathcal{K}^{\tau}\) on \(C_{b}(\mathbb{T}\times\mathbb{X})\) that maps a function \(\varphi\) to

\[\mathcal{K}^{\tau}\varphi(t,x):=\mathbb{E}[\varphi(t+\tau,X_{t+\tau})\,|\,X_{t}=x].\]

This is sometimes called the _stochastic Koopman operator_ in the literature [9, 26, 61]. One can use the relevant condition in (2.1) and the Markov property to check that the family \(\{\mathcal{K}^{\tau}:\tau\in\mathbb{T}\}\) of Koopman operators is a contraction semigroup on \(C_{b}(\mathbb{T}\times\mathbb{X})\) for the \(L^{\infty}\) norm. Its generator, of course, is \(\mathcal{L}\).

### Classical examples

The general framework introduced above includes processes \(X_{t}\) that are governed by deterministic maps, stochastic maps, ODEs, and stochastic differential equations.

**Example 2.1** (Deterministic maps). Suppose \(\{X_{t}\}_{t\in\mathbb{N}}\) is a discrete-time process governed by the deterministic map \(X_{t+1}=f(t,X_{t})\). Then, condition (2.1a) holds with \(\mathcal{L}\varphi(t,x):=\varphi(t+1,f(t,x))-\varphi(t,x)\).

**Example 2.2** (Ordinary differential equations). Set \(\mathbb{X}=\mathbb{R}^{d}\), \(\mathbb{T}=\mathbb{R}_{+}\), and let \(X_{t}\) solve the ODE \(\dot{X}_{t}=f(t,X_{t})\) for some locally Lipschitz continuous function \(f:\mathbb{R}_{+}\times\mathbb{R}^{d}\to\mathbb{R}^{d}\). Then, condition (2.1b) holds with \(\mathcal{L}\varphi:=\partial_{t}\varphi+f\cdot\nabla_{x}\varphi\).

**Example 2.3** (Stochastic maps). Let \(\{X_{t}\}_{t\in\mathbb{N}}\) be a discrete-time stochastic process governed by the random map \(X_{t+1}=f(\omega(t),t,X_{t})\), where the function \(\omega\mapsto f(\omega,\cdot,\cdot)\) is a random variable from some probability space \((\Omega,\mathcal{F},\pi)\) into the space of maps from \(\mathbb{T}\times\mathbb{X}\) to \(\mathbb{X}\). Equivalently, the value of \(X_{t+1}\) is sampled randomly from some probability measure \(\nu_{t,x}\) on \(\mathbb{X}\) that depends on the time \(t\) and on the value \(x\) taken by \(X_{t}\). Condition (2.1a) holds with

\[\mathcal{L}\varphi(t,x):=\int_{\Omega}\varphi(t+1,f(\omega,t,x))\,d\pi(\omega)-\varphi(t,x)=\int_{\mathbb{X}}\varphi(t+1,y)\,d\nu_{t,x}(y)-\varphi(t,x).\]

**Example 2.4** (Stochastic differential equations). Set \(\mathbb{X}=\mathbb{R}^{d}\), \(\mathbb{T}=\mathbb{R}_{+}\), and let \(X_{t}\) solve the stochastic differential equation \(dX_{t}=f(t,X_{t})\,dt+g(t,X_{t})\,dW(t)\) for some locally Lipschitz functions \(f:\mathbb{R}_{+}\times\mathbb{R}^{d}\to\mathbb{R}^{d}\) and \(g:\mathbb{R}_{+}\times\mathbb{R}^{d}\to\mathbb{R}^{d\times k}\), where \(dW(t)\) is a \(k\)-dimensional Brownian process.
Dynkin's formula shows that (2.1b) holds with

\[\mathcal{L}\varphi:=\partial_{t}\varphi+f\cdot\nabla_{x}\varphi+\frac{1}{2}\left\langle gg^{\top},\nabla_{x}^{2}\varphi\right\rangle,\]

where \(\nabla_{x}^{2}\varphi\) is the Hessian of \(\varphi\) with respect to the \(x\) variable and \(\left\langle A,B\right\rangle=\sum_{i,j}A_{ij}B_{ij}\).

## 3 System analysis via auxiliary functions and Lie derivatives

As mentioned in the introduction, many properties of the stochastic processes described in section 2 can be studied by constructing auxiliary functions \(\varphi\) whose evolution along trajectories obeys suitable pointwise inequalities. This section reviews how these constraints encode the system's dynamics through the Lie derivative \(\mathcal{L}\varphi\) in two particular examples: stability analysis and ergodic optimization (i.e., the bounding of infinite-time averages). We focus on deterministic processes for simplicity, but the framework of section 3.2 can be used to bound stationary stochastic expectations simply by using the stochastic definition of \(\mathcal{L}\varphi\) (see, e.g., [14, 30, 33]).

### Global and local stability

Let \(\mathbb{X}=\mathbb{R}^{d}\) and let the process \(X_{t}\) be governed either by the deterministic map \(X_{t+1}=f(X_{t})\) or by the ODE \(\dot{X}_{t}=f(X_{t})\), with \(f\) satisfying \(f(0)=0\). From section 2 we have that the Lie derivative of a time-independent \(\varphi:\mathbb{R}^{d}\to\mathbb{R}\) takes the form

\[\mathcal{L}\varphi(x):=\varphi(f(x))-\varphi(x)\]

in the discrete-time setting and, provided \(\varphi\) is differentiable,

\[\mathcal{L}\varphi(x):=f(x)\cdot\nabla\varphi(x)\]

in the continuous-time setting. Here we need not consider time-dependent \(\varphi\) because the systems under consideration are autonomous.

Lyapunov [38] showed that the equilibrium point \(X=0\) is globally stable if there exists a continuous function \(V:\mathbb{R}^{d}\to\mathbb{R}\) satisfying

\[V(x)\geq 0\qquad\forall x\in\mathbb{R}^{d}, \tag{3.1a}\]
\[\mathcal{L}V(x)\leq 0\qquad\forall x\in\mathbb{R}^{d}, \tag{3.1b}\]
\[V(x)\to+\infty\qquad\text{as }\|x\|\to+\infty. \tag{3.1c}\]

Indeed, the first two of these conditions imply that \(0\leq V(X_{t})\leq V(X_{0})\) for any initial state \(X_{0}\). Thus, the trajectory \(X_{t}\) cannot leave the set \(\{x\in\mathbb{R}^{d}:\;V(x)\leq V(X_{0})\}\), which is compact by virtue of (3.1c). One obtains global asymptotic stability if \(V(0)=0\) and the inequalities in (3.1a) and (3.1b) are strict whenever \(x\neq 0\). One can prove local (asymptotic) stability by imposing (3.1a) and (3.1b) only in a neighbourhood \(S\) of the equilibrium point. In this case, the largest sublevel set of \(V\) included in \(S\) is positively invariant (see, e.g., [25, §4.8]).

### Ergodic optimization

Auxiliary functions can be used to estimate long-time averages, a problem at the heart of ergodic theory. Given a trajectory \(X_{t}\) of an ODE \(\dot{X}_{t}=f(X_{t})\) with initial condition \(X_{0}\), the long-time average of a continuous function \(g:\mathbb{R}^{d}\to\mathbb{R}\) is defined as

\[\overline{g}(X_{0})=\limsup_{T\to\infty}\frac{1}{T}\int_{0}^{T}g(X_{t})\,\mathrm{d}t. \tag{3.2}\]

The largest possible long-time average among trajectories starting from a given initial set \(S\subset\mathbb{R}^{d}\),

\[\overline{g}^{*}:=\sup_{X_{0}\in S}\overline{g}(X_{0}), \tag{3.3}\]

can be determined using auxiliary functions with no need to compute explicit optimal trajectories.
Indeed, suppose there exists a continuous function \(D:\mathbb{R}^{d}\to\mathbb{R}\) and a real number \(U\) such that

\[U-g(x)-D(x)\geq 0\quad\forall x\in S. \tag{3.4}\]

Suppose also there exists a trajectory \(X_{t}\) in \(S\) such that \(\overline{D(X_{t})}=0\). Then, averaging the inequality (3.4) in time shows that \(\overline{g(X_{t})}\leq U\) for that trajectory. A lower bound on \(\overline{g(X_{t})}\) can be derived similarly by reversing the inequality sign in (3.4).

A function \(D\) whose infinite-time average vanishes for any trajectory remaining in the set \(S\) can be constructed by setting \(D=\mathcal{L}V\ (=f\cdot\nabla V)\) for any bounded and continuously differentiable auxiliary function \(V:S\to\mathbb{R}\). Indeed,

\[\overline{D(X_{t})}=\overline{f(X_{t})\cdot\nabla V(X_{t})}=\overline{\frac{\mathrm{d}}{\mathrm{d}t}V(X_{t})}=\limsup_{T\to\infty}\frac{V(X_{T})-V(X_{0})}{T}=0\]

since \(V(X_{T})\) is bounded uniformly in \(T\). The choice \(D=\mathcal{L}V\) is in fact optimal if \(S\) is compact, meaning that minimizing the bound \(U\) over \(V\) evaluates \(\overline{g}^{*}\) exactly [58]:

\[\overline{g}^{*}=\inf_{V\in C^{1}(S,\mathbb{R})}\left\{U:\;U-g(x)+\mathcal{L}V(x)\geq 0\quad\forall x\in S\right\}. \tag{3.5}\]

Near-optimal auxiliary functions can be constructed computationally by strengthening the inequality constraint in (3.5) into a sum-of-squares (SOS) constraint, with excellent results for low-dimensional ODEs [6, 14, 16, 18, 30, 39, 58]. The method can be extended to discrete-time dynamics [2, 30, 39] and to stochastic systems, where infinite-time averages are replaced by stationary expectations [6, 14, 15, 33, 34]. In both cases, one need only solve (3.5) using the suitable expression for \(\mathcal{L}V\).

## 4 Data-driven approximation of auxiliary functions

We now turn to the data-driven approximation of Lie derivatives and auxiliary functions. We focus on continuous-time dynamics; results for discrete-time dynamics are readily obtained by replacing quantities depending on a time increment \(\tau\) with their values for \(\tau=1\).

### Approximation of the Lie derivative

Let \(\phi_{1},\ldots,\phi_{\ell}\) and \(\psi_{1},\ldots,\psi_{m}\) be two finite dictionaries of observables in \(C_{b}(\mathbb{T}\times\mathbb{X})\). Let us set

\[\boldsymbol{\phi}:=(\phi_{1},\ldots,\phi_{\ell})^{\top}\qquad\text{and}\qquad\boldsymbol{\psi}:=(\psi_{1},\ldots,\psi_{m})^{\top}\]

and write \(\operatorname{span}\boldsymbol{\phi}\) (resp. \(\operatorname{span}\boldsymbol{\psi}\)) for the linear span of \(\phi_{1},\ldots,\phi_{\ell}\) (resp. \(\psi_{1},\ldots,\psi_{m}\)). We shall assume throughout that \(\operatorname{span}\boldsymbol{\phi}\subseteq\mathcal{D}(\mathcal{L})\), so the Lie derivative of \(\varphi\in\operatorname{span}\boldsymbol{\phi}\) is well defined, and that \(\operatorname{span}\boldsymbol{\phi}\subseteq\operatorname{span}\boldsymbol{\psi}\), so

\[\boldsymbol{\phi}=\Theta_{m}\boldsymbol{\psi} \tag{4.1}\]

for some \(\ell\times m\) matrix \(\Theta_{m}\). We also apply operators such as \(\mathcal{K}^{\tau}\) and \(\mathcal{L}\) to vector-valued functions element-wise, so for instance \(\mathcal{L}\boldsymbol{\phi}:=(\mathcal{L}\phi_{1},\ldots,\mathcal{L}\phi_{\ell})^{\top}\).

Our goal is to construct data-driven approximations for the Lie derivative of functions in \(\operatorname{span}\boldsymbol{\phi}\) using functions in \(\operatorname{span}\boldsymbol{\psi}\). As we explain below, this can be done using EDMD or the so-called _generator EDMD_ (gEDMD). We will thus call \(\boldsymbol{\psi}\) the _EDMD dictionary_.
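As a minimal illustration of (4.1) (our example; any consistent ordering of the dictionaries works), take \(\mathbb{X}=\mathbb{R}\) with the monomial dictionaries \(\boldsymbol{\phi}=(1,x,x^{2})^{\top}\) and \(\boldsymbol{\psi}=(1,x,x^{2},x^{3},x^{4})^{\top}\). Then \(\boldsymbol{\phi}=\Theta_{m}\boldsymbol{\psi}\) holds with the selection matrix

\[\Theta_{m}=\begin{bmatrix}1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\end{bmatrix},\]

whose rows simply pick out the elements of \(\boldsymbol{\psi}\) that appear in \(\boldsymbol{\phi}\).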
#### 4.1.1 Approximation via EDMD

Let there be given \(n\) 'data snapshots' \((t_{i},x_{i},y_{i})_{i=1}^{n}\), where \(x_{i}=X_{t_{i}}\) and \(y_{i}=X_{t_{i}+\tau}\) for a fixed time increment \(\tau>0\). Consider the matrices

\[\Psi_{n}:=\begin{bmatrix}|&&|\\ \boldsymbol{\psi}(t_{1},x_{1})&\cdots&\boldsymbol{\psi}(t_{n},x_{n})\\ |&&|\end{bmatrix}\quad\text{and}\quad\Phi_{n}^{\tau}:=\begin{bmatrix}|&&|\\ \boldsymbol{\phi}(t_{1}+\tau,y_{1})&\cdots&\boldsymbol{\phi}(t_{n}+\tau,y_{n})\\ |&&|\end{bmatrix}.\]

The EDMD framework approximates the action of the Koopman operator on \(\operatorname{span}\boldsymbol{\phi}\) using an _approximate Koopman operator_ \(\mathcal{K}^{\tau}_{mn}:\operatorname{span}\boldsymbol{\phi}\to\operatorname{span}\boldsymbol{\psi}\). To construct this operator, one first minimizes the Frobenius norm \(\|\Phi_{n}^{\tau}-K\Psi_{n}\|_{F}\) over all \(\ell\times m\) matrices \(K\). Since the minimizer is not unique unless \(\Psi_{n}\Psi_{n}^{\top}\) is invertible, one chooses the minimizer with smallest Frobenius norm,

\[K^{\tau}_{mn}:=\Phi_{n}^{\tau}\Psi_{n}^{\dagger}=(\Phi_{n}^{\tau}\Psi_{n}^{\top})\left(\Psi_{n}\Psi_{n}^{\top}\right)^{\dagger}, \tag{4.2}\]

where \(\Psi_{n}^{\dagger}\) is the pseudoinverse of \(\Psi_{n}\). Then, for every \(\varphi=\boldsymbol{c}\cdot\boldsymbol{\phi}\) with \(\boldsymbol{c}\in\mathbb{R}^{\ell}\) one sets

\[\mathcal{K}^{\tau}_{mn}\varphi:=\boldsymbol{c}\cdot K^{\tau}_{mn}\boldsymbol{\psi}.\]

Given an approximate Koopman operator, one can construct an approximate Lie derivative operator \(\mathcal{L}^{\tau}_{mn}:\operatorname{span}\boldsymbol{\phi}\to\operatorname{span}\boldsymbol{\psi}\) simply by defining, for every \(\varphi=\boldsymbol{c}\cdot\boldsymbol{\phi}\),

\[\mathcal{L}^{\tau}_{mn}\varphi=\boldsymbol{c}\cdot L^{\tau}_{mn}\boldsymbol{\psi}\qquad\text{where}\qquad L^{\tau}_{mn}:=\frac{K^{\tau}_{mn}-\Theta_{m}}{\tau}. \tag{4.3}\]

This is just a finite-difference approximation of \(\mathcal{L}\varphi\), since using (4.1) one finds

\[\mathcal{L}^{\tau}_{mn}\varphi=\frac{\boldsymbol{c}\cdot K^{\tau}_{mn}\boldsymbol{\psi}-\boldsymbol{c}\cdot\boldsymbol{\phi}}{\tau}=\frac{\mathcal{K}^{\tau}_{mn}\varphi-\varphi}{\tau}\approx\frac{\mathcal{K}^{\tau}\varphi-\varphi}{\tau}\approx\mathcal{L}\varphi.\]

Analysis in section 6 rigorously justifies these so-far heuristic approximations. Precisely, under suitable but mild conditions on the data snapshots we prove that \(\mathcal{L}^{\tau}_{mn}\varphi\) converges to \(\mathcal{L}\varphi\) in a suitable norm as \(n\to\infty\), \(\tau\to 0\), and \(m\to\infty\).

#### 4.1.2 Approximation via gEDMD

In principle, a better approximation to Lie derivatives can be constructed if one can directly sample the Lie derivatives \(\mathcal{L}\boldsymbol{\phi}\). This is the premise of the gEDMD framework [26], which assumes the data snapshots \((t_{i},x_{i},y_{i})_{i=1}^{n}\) are such that \(y_{i}=\mathcal{L}\boldsymbol{\phi}(t_{i},x_{i})\). In this case, one can build the data matrix

\[\Lambda_{n}:=\begin{bmatrix}|&&|\\ \mathcal{L}\boldsymbol{\phi}(t_{1},x_{1})&\cdots&\mathcal{L}\boldsymbol{\phi}(t_{n},x_{n})\\ |&&|\end{bmatrix}\]

and define an approximate Lie derivative operator \(\mathcal{G}_{mn}:\operatorname{span}\boldsymbol{\phi}\to\operatorname{span}\boldsymbol{\psi}\) by setting, for every \(\varphi=\boldsymbol{c}\cdot\boldsymbol{\phi}\),

\[\mathcal{G}_{mn}\varphi:=\boldsymbol{c}\cdot G_{mn}\boldsymbol{\psi}\qquad\text{where}\qquad G_{mn}:=\Lambda_{n}\Psi_{n}^{\dagger}. \tag{4.4}\]
One expects \(\mathcal{G}_{mn}\varphi\) to approximate \(\mathcal{L}\varphi\) because the \(\ell\times m\) matrix \(G_{mn}\) minimizes the least-squares error \(\|\Lambda_{n}-G\Psi_{n}\|_{F}\). This expectation can be justified rigorously; see [26] for the case of stochastic processes governed by stochastic differential equations, and theorem 6.1 below for the general class of Markov processes described in section 2.

One may also expect the EDMD-based approximation \(\mathcal{L}^{\tau}_{mn}\varphi\) constructed in section 4.1.1 to recover \(\mathcal{G}_{mn}\varphi\) as \(\tau\to 0\) [26]. This is a more subtle question, which we address in section 6 as part of a rigorous convergence analysis of our approximate Lie derivatives. For now, we leave technicalities and subtleties aside, and instead proceed to show how approximate Lie derivatives can be combined with semidefinite programming in order to construct approximate auxiliary functions in practice.

### Integration with semidefinite programming

The auxiliary function frameworks in section 3, as well as many other ones from the dynamical systems literature, ask one to find functions \(\varphi:\mathbb{T}\times\mathbb{X}\to\mathbb{R}\) satisfying inequalities of the form

\[a(t,x)\varphi(t,x)+b(t,x)\mathcal{L}\varphi(t,x)+c(t,x)\geq 0\qquad\forall(t,x)\in S\subset\mathbb{T}\times\mathbb{X}. \tag{4.5}\]

Polynomial \(\varphi\) can be constructed with semidefinite programming when \(\mathbb{X}\) is finite dimensional, provided \(\mathcal{L}\varphi\) is a polynomial that can be explicitly calculated, the functions \(a,b,c\) are polynomials, and \(S\) is a set defined by polynomial inequalities. The same semidefinite programming tools can be used in a data-driven setting to construct 'approximate' auxiliary functions \(\varphi\in\operatorname{span}\boldsymbol{\phi}\) satisfying either of the two inequalities

\[a(t,x)\varphi(t,x)+b(t,x)\mathcal{L}_{mn}^{\tau}\varphi(t,x)+c(t,x)\geq 0\qquad\forall(t,x)\in S\subset\mathbb{T}\times\mathbb{X}, \tag{4.6a}\]
\[a(t,x)\varphi(t,x)+b(t,x)\mathcal{G}_{mn}\varphi(t,x)+c(t,x)\geq 0\qquad\forall(t,x)\in S\subset\mathbb{T}\times\mathbb{X}, \tag{4.6b}\]

obtained by replacing \(\mathcal{L}\varphi\) with its approximations from sections 4.1.1 and 4.1.2. Here we explain this in a setting that generalizes the standard case of polynomial inequalities, focussing on (4.6a) for definiteness. Precisely, we assume that:

1. There exist functions \(\boldsymbol{u}=(u_{1},\ldots,u_{k})\) such that \(a\boldsymbol{\phi},b\boldsymbol{\psi},c\in\operatorname{span}\boldsymbol{u}\).
2. \(S=\{(t,x)\in\mathbb{T}\times\mathbb{X}:s(t,x)\geq 0\}\) for some function \(s:\mathbb{T}\times\mathbb{X}\to\mathbb{R}\).
3. There exist functions \(\boldsymbol{v}=(v_{1},\ldots,v_{p})\) and \(\boldsymbol{w}=(w_{1},\ldots,w_{q})\) such that
\[\operatorname{span}\boldsymbol{u}\subset\operatorname{span}(\boldsymbol{v}\otimes\boldsymbol{v})\qquad\text{and}\qquad\operatorname{span}(s\boldsymbol{w}\otimes\boldsymbol{w})\subset\operatorname{span}(\boldsymbol{v}\otimes\boldsymbol{v}),\]
where \(\boldsymbol{v}\otimes\boldsymbol{v}=\{v_{i}v_{j}\}_{i,j=1,\ldots,p}\) and \(s\boldsymbol{w}\otimes\boldsymbol{w}=\{sw_{i}w_{j}\}_{i,j=1,\ldots,q}\).

These assumptions are satisfied, for instance, when \(a\), \(b\), \(c\), \(\boldsymbol{\phi}\), \(\boldsymbol{\psi}\) and \(s\) are polynomials.
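A minimal concrete instance (ours, echoing the setup used for the stochastic logistic map in section 5.3, there in the Chebyshev basis): take \(\mathbb{X}=\mathbb{R}\), \(S=[0,1]=\{x:s(x)\geq 0\}\) with \(s(x)=x-x^{2}\), and, for some degree \(\alpha\geq 1\),

\[\boldsymbol{u}=(1,x,\ldots,x^{2\alpha})^{\top},\qquad\boldsymbol{v}=(1,x,\ldots,x^{\alpha})^{\top},\qquad\boldsymbol{w}=(1,x,\ldots,x^{\alpha-1})^{\top}.\]

Then \(\operatorname{span}(\boldsymbol{v}\otimes\boldsymbol{v})\) contains every polynomial of degree at most \(2\alpha\), and each entry \(s\,w_{i}w_{j}\) has degree at most \(2\alpha\), so both inclusions in (A3) hold.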
It is straightforward to extend our discussion to sets \(S\) defined by multiple inequalities, inequalities involving more than one auxiliary function \(\varphi\), and terms \(c\) that depend affinely on tunable variables, such as the constant \(U\) in section 3.2.

A tunable auxiliary function \(\varphi\in\operatorname{span}\boldsymbol{\phi}\) can be represented as \(\varphi=\boldsymbol{c}\cdot\boldsymbol{\phi}\) for some tunable vector \(\boldsymbol{c}\in\mathbb{R}^{\ell}\). Then, given assumption (A1) and the definition of \(\mathcal{L}_{mn}^{\tau}\varphi\), one can find a matrix \(A\in\mathbb{R}^{k\times\ell}\) and a vector \(\boldsymbol{b}\in\mathbb{R}^{k}\) such that

\[a(t,x)\varphi(t,x)+b(t,x)\mathcal{L}_{mn}^{\tau}\varphi(t,x)+c(t,x)=(A\boldsymbol{c}+\boldsymbol{b})\cdot\boldsymbol{u}(t,x).\]

Furthermore, assumptions (A2) and (A3) guarantee that one can always find symmetric matrices \(P\in\mathbb{R}^{p\times p}\) and \(Q\in\mathbb{R}^{q\times q}\) such that

\[(A\boldsymbol{c}+\boldsymbol{b})\cdot\boldsymbol{u}(t,x)=\boldsymbol{v}(t,x)^{\top}P\boldsymbol{v}(t,x)+s(t,x)\boldsymbol{w}(t,x)^{\top}Q\boldsymbol{w}(t,x). \tag{4.7}\]

We easily obtain the following sufficient condition for (4.6a), which is a restatement of well-known weighted-sum-of-squares decompositions from polynomial optimization.

**Proposition 4.1**. _Let assumptions (A1)-(A3) hold. Inequality (4.6a) is satisfied if there exist positive semidefinite matrices \(P\) and \(Q\) satisfying (4.7)._

Proof. Since \(P\) and \(Q\) are positive semidefinite they admit square roots, thus \((A\boldsymbol{c}+\boldsymbol{b})\cdot\boldsymbol{u}=\|P^{1/2}\boldsymbol{v}\|^{2}+s\|Q^{1/2}\boldsymbol{w}\|^{2}\). The result follows because \(s\geq 0\) on \(S\).

Condition (4.7) provides a set of linear equality constraints on the entries of \(\boldsymbol{c}\), \(P\) and \(Q\) after both sides are expressed using a basis for \(\operatorname{span}(\boldsymbol{v}\otimes\boldsymbol{v})\). Thus, the construction of approximate auxiliary functions reduces to a _semidefinite program_ (SDP), which can be solved using software packages such as mosek [44]. Moreover, when \(a\), \(b\), \(c\), \(\varphi\) and \(s\) are polynomials the relevant SDPs can be formulated automatically using open-source polynomial optimization toolboxes such as yalmip [36, 37].

## 5 Examples

This section illustrates the construction of approximate auxiliary functions through semidefinite programming in three examples. The first one discovers a Lyapunov function from data (see [45] for similar examples). The other two examples solve ergodic optimization problems for deterministic and stochastic dynamics. In all cases, we construct polynomial auxiliary functions using yalmip [36, 37] and mosek [44]. We use ChebFun [10] to implement Chebyshev polynomials. Code to reproduce these results is available at [https://github.com/DCN-FAU-AvH/eDMD-sos](https://github.com/DCN-FAU-AvH/eDMD-sos).

### Lyapunov functions

The two-dimensional map

\[\begin{split} X_{t+1}&=\tfrac{3}{10}X_{t},\\ Y_{t+1}&=-X_{t}+\tfrac{1}{2}Y_{t}+\tfrac{7}{18}X_{t}^{2}\end{split} \tag{5.1}\]

has a globally asymptotically stable equilibrium at the origin. We seek to prove this by finding a Lyapunov function \(V(x,y)\) satisfying

\[V(x,y)-\varepsilon(x^{2}+y^{2})\geq 0, \tag{5.2a}\]
\[-\mathcal{L}V(x,y)-\varepsilon(x^{2}+y^{2})\geq 0, \tag{5.2b}\]

for some hyperparameter \(\varepsilon>0\).
These conditions imply that (3.1a)-(3.1c) hold with strict inequality away from the origin, as required to establish asymptotic stability. Note that one can always fix \(\varepsilon=1\), because \(V\) can always be rescaled by \(\varepsilon\).

To look for \(V\) using our data-driven approach, we sampled the map (5.1) at \(n=10^{4}\) uniformly distributed random points in the square \([-2,2]\times[-2,2]\). We then implemented the two inequalities in (5.2) with \(\varepsilon=1\) and with \(\mathcal{L}V\) replaced by its data-driven approximation \(\mathcal{L}_{mn}^{\tau}V\) from section 4.1.1. We searched for a polynomial \(V\) of degree \(4\), so \(\boldsymbol{\phi}\) lists the \(\ell=15\) monomials in \((x,y)\) of degree up to \(4\), and we took all \(m=45\) monomials of degree up to \(8\) as the EDMD dictionary \(\boldsymbol{\psi}\). This choice ensures that \(\mathcal{L}V\in\operatorname{span}\boldsymbol{\psi}\), but similar results are obtained when \(\boldsymbol{\psi}\) includes also monomials of higher degree. Assumptions (A1)-(A3) are satisfied with \(S=\mathbb{R}^{2}\), \(s=0\), and \(\boldsymbol{u}\) and \(\boldsymbol{v}\) listing all monomials in \((x,y)\) of degree up to \(8\) and \(4\), respectively. (There is no need to specify \(\boldsymbol{w}\) when \(s=0\).) Thus, we can construct \(V\) through semidefinite programming. Minimizing the \(\ell^{1}\) norm of the coefficients of \(V\) returns

\[V(x,y)=3.0815x^{2}-1.5686xy+1.3333y^{2}-1.3038x^{3}+0.5428x^{2}y+0.2226x^{4},\]

where numerical coefficients have been rounded to four decimal places. Of course, this is only an approximate Lyapunov function: its positivity is guaranteed, as we have imposed (5.2a) exactly, but we do not know if its exact Lie derivative,

\[\mathcal{L}V(x,y)=V\big{(}\tfrac{3}{10}x,\,-x+\tfrac{1}{2}y+\tfrac{7}{18}x^{2}\big{)}-V(x,y),\]

really satisfies (5.2b) for some \(\varepsilon>0\). This can be verified by maximizing \(\varepsilon\) subject to (5.2b) for the given \(V\). Doing so returns \(\varepsilon\approx 0.9999\), so we have indeed constructed a Lyapunov function for the system.

**Remark 5.1**. The particular quartic \(V\) constructed in this example has the special property that \(\mathcal{L}V\) is also quartic. This means our data-driven approach gives the same answer when \(\boldsymbol{\psi}\) lists only the monomials of degree up to \(4\), i.e., in the special case where \(\operatorname{span}\boldsymbol{\phi}=\operatorname{span}\boldsymbol{\psi}\). This is not true in general: in the following examples, a strict inclusion \(\operatorname{span}\boldsymbol{\phi}\subset\operatorname{span}\boldsymbol{\psi}\) is necessary to obtain accurate auxiliary functions.

### Ergodic optimization for the van der Pol oscillator

Let us consider the van der Pol oscillator, given by the second-order ODE

\[\ddot{X}_{t}-0.1(1-X_{t}^{2})\dot{X}_{t}+X_{t}=0. \tag{5.3}\]

The state-space is \(\mathbb{X}=\mathbb{R}^{2}\), which corresponds to all possible values for \(X_{t}\) and \(\dot{X}_{t}\). We seek upper bounds on the long-time average of the 'energy' of the system, here given by the observable

\[g(X_{t},\dot{X}_{t})=X_{t}^{2}+\dot{X}_{t}^{2}. \tag{5.4}\]

Equation (5.3) has a stable limit cycle that attracts every initial condition except the one at the unstable fixed point \((X_{t},\dot{X}_{t})=(0,0)\). This point saturates the trivial lower bound \(g(X_{t},\dot{X}_{t})\geq 0\), while the long-time average of \(g\) is maximized by the limit cycle.
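Before presenting results, it may help to see how little code the data-driven construction requires. The following numpy sketch of ours assembles the EDMD matrices of section 4.1.1 from van der Pol data generated as described below; the integration horizon, solver tolerances, and monomial ordering are illustrative choices, and the semidefinite-programming step that consumes \(L^{\tau}_{mn}\) is omitted:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Snapshot pairs (x_i, y_i) with y_i = X_{t_i + tau}, mirroring section 5.2.
tau = 0.001
f = lambda t, z: [z[1], 0.1 * (1 - z[0] ** 2) * z[1] - z[0]]
t = np.arange(0, 30 + tau, tau)
sol = solve_ivp(f, [t[0], t[-1]], [0.1, 0.2], t_eval=t, rtol=1e-9, atol=1e-9)
X, Y = sol.y[:, :-1], sol.y[:, 1:]

def exponents(deg):
    return [(a, b) for a in range(deg + 1) for b in range(deg + 1 - a)]

def monomials(Z, deg):
    """Evaluate all monomials x^a y^b with a + b <= deg at columns of Z."""
    return np.array([Z[0] ** a * Z[1] ** b for a, b in exponents(deg)])

alpha, beta = 4, 6                    # deg(phi) = alpha, deg(psi) = alpha + 2
Psi, Phi = monomials(X, beta), monomials(Y, alpha)

# Selection matrix Theta_m realizing phi = Theta_m psi, as in (4.1)
idx = {e: j for j, e in enumerate(exponents(beta))}
Theta = np.zeros((len(exponents(alpha)), len(idx)))
for i, e in enumerate(exponents(alpha)):
    Theta[i, idx[e]] = 1.0

K = Phi @ np.linalg.pinv(Psi)         # EDMD matrix, cf. (4.2)
L = (K - Theta) / tau                 # approximate Lie derivative, cf. (4.3)
# Row i of L holds the psi-coefficients of the approximate Lie derivative
# of the i-th element of the phi dictionary.
```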
The goal of this example is to demonstrate that nearly sharp upper bounds can be established with less data than is required to observe convergence of a simple empirical average of the same data. For illustration, we generate synthetic data through numerical integration of the system (5.3) with a timestep \(\tau=0.001\), starting from the initial condition \((X_{0},\dot{X}_{0})=(0.1,0.2)\). Notice that the initial condition is chosen close to the unstable fixed point, so there is an initial transient before the trajectory falls onto the stable limit cycle. This initial transient means that the long-time average of \(g\) takes time to converge to its value along the limit cycle.

\begin{table}
\begin{tabular}{l l l l l l}
\hline \hline
\(T\) & \(\alpha=4\) & \(\alpha=6\) & \(\alpha=8\) & \(\alpha=10\) & Empirical Average \\
\hline
\(10^{2}\) & 6.1716 & 4.0100 & 4.0013 & 4.0011 & 2.2322 \\
\(10^{5/2}\) & 5.6799 & 4.0100 & 4.0013 & 4.0013 & 3.4418 \\
\(10^{3}\) & 5.3644 & 4.0100 & 4.0013 & 4.0010 & 3.8244 \\
\hline
Exact & 6.6751 & 4.0100 & 4.0013 & 4.0012 & — \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Data-driven upper bounds for the energy of the van der Pol oscillator (5.3), obtained with polynomial auxiliary functions of degree \(\alpha\) and different integration times \(T\) for the data collection. The final row gives bounds computed using the exact Lie derivative (5.5), while the final column reports the average of the energy over the dataset collected for each integration time \(T\).

Table 1 presents our results. Each row corresponds to data integrated up to the time \(T\) given in the first column. The empirical average, obtained by simply computing (3.2) up to the given value of \(T\) with the energy observable (5.4), is presented in the final column. In the final row we provide the upper bound computed using the exact Lie derivative, which here acts on differentiable functions \(\varphi:\mathbb{R}^{2}\to\mathbb{R}\) by

\[\mathcal{L}\varphi(x,y)=\partial_{x}\varphi(x,y)y+\partial_{y}\varphi(x,y)[0.1(1-x^{2})y-x]. \tag{5.5}\]

Results are presented by taking \(V\in\operatorname{span}\boldsymbol{\phi}\) with \(\boldsymbol{\phi}\) listing all monomials in \((x,\dot{x})\) up to degree \(\alpha\geq 1\), and an EDMD dictionary \(\boldsymbol{\psi}\) that lists all monomials up to degree \(\beta=\alpha+2\).

From both integrating (5.3) far into the future on the limit cycle and the final row of table 1, we find that the long-time average of the energy over the limit cycle is approximately \(4.001\) to four significant digits. Notice that for all values of \(T\) presented in the table the empirical average has not converged to this value, meaning that the initial transients are still influencing it. By contrast, computing the Lie derivative from the same data and applying our data-driven bounding procedure extracts sharp bounds even with the limited dataset up to \(T=10^{2}\). Thus, the data-driven method presented herein provides the opportunity to extract important system statistics long before they can be observed in the data itself.

The accuracy of the bounds in table 1 is due to an accurate approximation of the Lie derivative from data. Interestingly, we do not even require the transients to obtain such an accurate approximation. Figure 1 demonstrates that using only data sampled on the limit cycle we observe the _global_ pointwise convergence of \(\mathcal{L}_{mn}^{\tau}\varphi\) to \(\mathcal{L}\varphi\) on \(\mathbb{R}^{2}\).
Such convergence is a consequence of our analysis in section 6, but roughly follows from the fact that the limit cycle of the van der Pol oscillator is not an algebraic curve [46] and so cannot be contained in the zero level set of any element of an exclusively polynomial dictionary. The result is the ability to approximate the full Lie derivative, not just its restriction to the region where the data were sampled.

### Ergodic optimization for a stochastic logistic map

The stochastic logistic map is given by \[X_{t+1}=\lambda_{t}X_{t}(1-X_{t}),\qquad t\in\mathbb{N}, \tag{5.6}\] where \(\lambda_{t}\) is drawn independently from the uniform distribution on \([0,4]\) for each \(t\in\mathbb{N}\). The state-space is \(\mathbb{X}=\mathbb{R}\) and the unit interval \(S=[0,1]\) is positively invariant. We seek to place upper and lower bounds on the long-time expected value of the observable \(g(x)=x\). The auxiliary function framework for ergodic optimization in section 3.2 applies to stochastic dynamics if one uses the stochastic definition of the Lie derivative. In our example, any auxiliary function \(\varphi:\mathbb{R}\to\mathbb{R}\) has the stochastic Lie derivative \[\mathcal{L}\varphi=\frac{1}{4}\int_{0}^{4}\varphi(\lambda x(1-x))\mathrm{d}\lambda-\varphi(x). \tag{5.7}\] We use our data-driven approach to construct approximate polynomial auxiliary functions of increasing degree \(\alpha\). For numerical stability we represent polynomials using the Chebyshev basis \(\boldsymbol{\phi}=(T_{0}(x),\ldots,T_{\alpha}(x))\) and we take \(\boldsymbol{\psi}=(T_{0}(x),\ldots,T_{2\alpha}(x))\) as our EDMD dictionary. This choice ensures \(\mathcal{L}\varphi\in\operatorname{span}\boldsymbol{\psi}\) for every \(\varphi\in\operatorname{span}\boldsymbol{\phi}\) but, as demonstrated by the left panel of figure 2, the results do not change if one uses \(\boldsymbol{\psi}=(T_{0}(x),\ldots,T_{\beta}(x))\) with \(\beta\geq 2\alpha\). We finally write \(S=\{x\in\mathbb{R}:\,x-x^{2}\geq 0\}\), so assumptions (A1)-(A3) are met with \(\boldsymbol{u}=(T_{0}(x),\ldots,T_{2\alpha}(x))\), \(\boldsymbol{v}=(T_{0}(x),\ldots,T_{\alpha}(x))\) and \(\boldsymbol{w}=(T_{0}(x),\ldots,T_{\alpha-1}(x))\). Our dataset consists of one trajectory of the map with initial condition \(x_{0}\in(0,1)\) and \(n=10^{7}\) iterates, but we also implemented our approach using only the first \(10^{4}\), \(10^{5}\), and \(10^{6}\) datapoints to investigate how results vary with \(n\). Such a large amount of data is required to obtain accurate approximations of the Lie derivative for our stochastic map. Indeed, as shown in the right panel of figure 2, the EDMD matrix \(K^{\tau}_{mn}\) converges at an \(O(1/\sqrt{n})\) rate to its infinite-data limit \(K^{\tau}_{m\infty}\), which can be calculated explicitly for this example using (5.7). The approximate upper and lower bounds on \(\overline{X_{t}}\) we obtained are listed in table 2 alongside exact bounds obtained with the exact Lie derivative (5.7). This can be computed explicitly for polynomial \(\varphi\) since the integral over \(\lambda\) in (5.7) can easily be evaluated analytically. The data-driven 'bounds' appear to converge to the exact ones in a non-monotonic fashion as \(n\) increases, and the two agree to at least two decimal places for \(n=10^{7}\). This confirms that our approach works well given sufficient data. The zero lower bound is sharp for (5.6), as it is saturated by the equilibrium trajectory \(X_{t}=0\).
Crucially, this trajectory is not part of our dataset, meaning that we learnt information about _all_ possible stationary distributions of the system even though we sampled data only from the _single_ stationary distribution approximated by the empirical distribution of the iterates \(X_{t}\) in our simulated trajectory. The upper bound, instead, decreases as \(\alpha\) is raised and we conjecture it approaches the value \(1/4\) of the stationary expectation of \(X_{t}\), which we estimated by taking the average of our simulated trajectory. We also conjecture that the convergence with increasing \(\alpha\) is slow because, for a degree-\(\alpha\) polynomial \(\varphi\), the expression for \(\mathcal{L}\varphi\) depends only on the first \(\ell=\alpha+1\) moments of \(\lambda\), which do not uniquely characterize its distribution. Thus, bounds obtained with polynomial \(\varphi\) of degree \(\alpha\) apply to the _maximum_ stationary average of \(X_{t}\), where the maximum is taken over all possible distributions of \(\lambda\) whose moments of degree up to \(\alpha\) coincide with those of the uniform distribution on \([0,4]\).

Figure 2: Left: Upper bounds on \(\overline{X_{t}}\) for the stochastic logistic map (5.6), obtained with increasing \(m\) and polynomial auxiliary functions of degree \(\alpha=2\) (squares), \(6\) (triangles) and \(8\) (circles). Symbols are full if \(\beta\geq 2\alpha\); bounds are constant for \(\beta\geq 2\alpha\). Right: Decay of the Frobenius norm \(\|K_{mn}^{\tau}-K_{m\infty}^{\tau}\|_{F}\) with the number of data snapshots. Results are for \((\alpha,\beta)=(4,8)\), so \(m=9\), but are representative of other \((\alpha,\beta)\) combinations. The dotted line decays as \(O(1/\sqrt{n})\).

## 6 Theoretical analysis

The examples in section 5 show that replacing exact Lie derivatives with data-driven approximations works well in practice. We now rigorously justify this observation, proving that approximate Lie derivatives converge to exact ones in a suitable sense in the limits of infinite data (\(n\to\infty\)), infinite data sampling rate (\(\tau\to 0\)), and infinite EDMD dictionary \(\boldsymbol{\psi}\) (\(m\to\infty\)). Some of our results are similar to those already available in the literature [9, 26, 31, 53, 61], but we prove them for a broader class of stochastic processes and under weaker assumptions. Some subtleties highlighted by our analysis are illustrated by example in section 7.

### Preliminaries

We again focus on continuous-time processes (\(\mathbb{T}=\mathbb{R}_{+}\)); the discrete-time case can be obtained by setting \(\tau=1\) and ignoring results about the \(\tau\to 0\) limit. Recall from section 2 that, in the continuous-time setting, the Lie derivative operator \(\mathcal{L}\) is the generator of the Koopman semigroup on \(C_{b}(\mathbb{T}\times\mathbb{X})\). Its domain, \(\mathcal{D}(\mathcal{L})\), is the set of functions \(\varphi\in C_{b}(\mathbb{T}\times\mathbb{X})\) such that the difference quotient \[\mathcal{L}^{\tau}\varphi:=\frac{\mathcal{K}^{\tau}\varphi-\varphi}{\tau}\] converges uniformly on \(\mathbb{T}\times\mathbb{X}\) as \(\tau\to 0\), and this limit is \(\mathcal{L}\varphi\) by definition.
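As a quick numerical illustration of this definition, one can check that the difference quotient approaches the exact Lie derivative for a smooth observable. The sketch below, which reuses the `vdp` helper from the van der Pol sketch in section 5.2 (an assumption), does this for the energy observable at a single, arbitrarily chosen test point.

```python
import numpy as np
from scipy.integrate import solve_ivp

phi = lambda z: z[0]**2 + z[1]**2                   # energy observable (5.4)
Lphi = lambda z: 0.2 * (1.0 - z[0]**2) * z[1]**2    # its exact Lie derivative (5.5)

z0 = np.array([1.5, -0.5])                          # arbitrary test point
for tau in (1e-1, 1e-2, 1e-3, 1e-4):
    # K^tau phi(z0) = phi evaluated at the time-tau flow of z0
    z1 = solve_ivp(vdp, (0.0, tau), z0, rtol=1e-12, atol=1e-12).y[:, -1]
    print(tau, (phi(z1) - phi(z0)) / tau - Lphi(z0))  # error shrinks like O(tau)
```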
For our analysis, it will be convenient to view the spaces \(C_{b}(\mathbb{T}\times\mathbb{X})\), \(\operatorname{span}\boldsymbol{\phi}\) and \(\operatorname{span}\boldsymbol{\psi}\) as subspaces of \(L^{2}_{\mu}(\mathbb{T}\times\mathbb{X})\), the Lebesgue space of functions \(\varphi:\mathbb{T}\times\mathbb{X}\to\mathbb{R}\) that are square-integrable with respect to some probability measure \(\mu\) on \(\mathbb{T}\times\mathbb{X}\). (This measure will be chosen below.) The norm on this space is \[\|\varphi\|_{L^{2}_{\mu}}:=\left(\int_{\mathbb{T}\times\mathbb{X}}|\varphi(t,x)|^{2}\,d\mu(t,x)\right)^{1/2}.\] The projection of a function \(f\in L^{2}_{\mu}(\mathbb{T}\times\mathbb{X})\) onto \(\operatorname{span}\boldsymbol{\psi}\) is \[\mathcal{P}^{\mu}_{m}f:=\operatorname*{argmin}_{u\in\operatorname{span}\boldsymbol{\psi}}\ \|u-f\|_{L^{2}_{\mu}}\,. \tag{6.1}\] Note that the minimizer in this problem is unique only as an element of \(L^{2}_{\mu}\), but could be attained by more than one function in \(\operatorname{span}\boldsymbol{\psi}\). It is well known that \(\mathcal{P}^{\mu}_{m}\) is a linear operator and satisfies the contraction property \(\|\mathcal{P}^{\mu}_{m}f\|_{L^{2}_{\mu}}\leq\|f\|_{L^{2}_{\mu}}\).

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & \(\alpha=2\) & \(\alpha=4\) & \(\alpha=6\) & \(\alpha=8\) & \(\alpha=10\) & \(\alpha=12\) & \(\alpha=14\) \\ \hline \multirow{4}{*}{Upper} & \(n=10^{4}\) & 0.3765 & 0.3162 & 0.3186 & 0.2844 & 0.2851 & 0.2858 & 0.2856 \\ & \(n=10^{5}\) & 0.3751 & 0.3126 & 0.3086 & 0.2835 & 0.2814 & 0.2775 & 0.2757 \\ & \(n=10^{6}\) & 0.3749 & 0.3124 & 0.3072 & 0.2832 & 0.2821 & 0.2758 & 0.2730 \\ & \(n=10^{7}\) & 0.3751 & 0.3126 & 0.3070 & 0.2830 & 0.2817 & 0.2766 & 0.2737 \\ \cline{2-9} & Exact & 0.3750 & 0.3125 & 0.3069 & 0.2829 & 0.2816 & 0.2765 & 0.2736 \\ \hline \multirow{4}{*}{Lower} & \(n=10^{4}\) & 0.0032 & 0.0055 & 0.0142 & 0.0107 & 0.0098 & 0.0090 & 0.0088 \\ & \(n=10^{5}\) & 0.0004 & 0.0016 & 0.0070 & 0.0057 & 0.0030 & 0.0248 & 0.0023 \\ & \(n=10^{6}\) & 0.0004 & 0.0010 & 0.0027 & 0.0059 & 0.0017 & 0.0032 & 0.0024 \\ & \(n=10^{7}\) & 0.0001 & 0.0001 & 0.0003 & 0.0011 & 0.0010 & 0.0016 & 0.0019 \\ \cline{2-9} & Exact & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ \hline \hline \end{tabular} \end{table} Table 2: Data-driven ‘bounds’ on \(\overline{X_{t}}\) for the stochastic logistic map (5.6). Computations used degree-\(\alpha\) polynomial auxiliary functions, an EDMD dictionary \(\boldsymbol{\psi}\) listing polynomials of degree up to \(2\alpha\), and \(n=10^{4}\)–\(10^{7}\) data snapshots. Exact bound values were computed using the exact Lie derivative (5.7).

### Exact and empirical data sampling measures

As in section 4.1.1, we assume that the data snapshots \((t_{i},x_{i},y_{i})_{i=1}^{n}\) satisfy \(x_{i}=X_{t_{i}}\) and \(y_{i}=X_{t_{i}+\tau}\). If the distribution of the random variable \(X_{t+\tau}\) given \(X_{t}=x\) is described by a probability measure \(\nu_{t,x}\) on \(\mathbb{X}\), the data points \(y_{i}\) are random variables with distribution \(\nu_{t_{i},x_{i}}\). We will also assume that each pair \((t_{i},x_{i})\) is sampled from a probability measure \(\mu\) on \(\mathbb{T}\times\mathbb{X}\).
Thus, the joint distribution \(\rho\) of the data snapshots \((t_{i},x_{i},y_{i})\) is a probability measure on \(\mathbb{T}\times\mathbb{X}\times\mathbb{X}\) satisfying \[\rho(E_{t},E_{x},E_{y}):=\int_{E_{t}\times E_{x}}\nu_{t,x}(E_{y})\;d\mu(t,x)\] for all Borel subsets \(E_{t}\subset\mathbb{T}\) and \(E_{x},E_{y}\subset\mathbb{X}\). Empirical approximations of \(\mu\), \(\nu_{t,x}\) and \(\rho\) can be built using the data snapshots. For each pair \((t,x)\), consider the index set \(S_{t,x}=\{i:(t_{i},x_{i})=(t,x)\}\) and write \(|S_{t,x}|\) for its cardinality. We can then define \[\mu^{n}(E_{t},E_{x}) :=\frac{1}{n}\sum_{i=1}^{n}\delta_{t_{i}}(E_{t})\delta_{x_{i}}(E_{x}),\] \[\nu_{t,x}^{n}(E_{y}) :=\frac{1}{|S_{t,x}|}\sum_{i\in S_{t,x}}\delta_{y_{i}}(E_{y}),\] and \[\rho^{n}(E_{t},E_{x},E_{y}):=\frac{1}{n}\sum_{i=1}^{n}\nu_{t_{i},x_{i}}^{n}(E_{y})\delta_{t_{i}}(E_{t})\delta_{x_{i}}(E_{x})=\frac{1}{n}\sum_{i=1}^{n}\delta_{t_{i}}(E_{t})\delta_{x_{i}}(E_{x})\delta_{y_{i}}(E_{y}).\] Note that the empirical average of any bounded function \(g:\mathbb{T}\times\mathbb{X}\times\mathbb{X}\to\mathbb{R}\) can be expressed as an integral against \(\rho^{n}\): \[\frac{1}{n}\sum_{i=1}^{n}g(t_{i},x_{i},y_{i})=\int_{\mathbb{T}\times\mathbb{X}\times\mathbb{X}}g(t,x,y)\,d\rho^{n}(t,x,y).\] Our analysis will assume that the data snapshots are sampled such that \[\int g(t,x,y)\,d\rho^{n}(t,x,y)\stackrel{{ n\to\infty}}{{\to}}\int g(t,x,y)\,d\rho(t,x,y)\quad\forall g\in C_{b}(\mathbb{T}\times\mathbb{X}\times\mathbb{X}) \tag{6.2}\] almost surely (integration sets will be omitted when clear from the context). This assumption is standard in the analysis of EDMD and its variations, and can be ensured in two ways. If \(\rho\) is an ergodic measure, one can collect the snapshots from a trajectory of the dynamical system. Alternatively, if \(\mathbb{X}\) is a separable Banach space, one can sample the snapshots independently from \(\rho\) [60]. In the latter case, the theory of Monte Carlo integration (see, e.g., [12]) ensures convergence at a rate of \(1/\sqrt{n}\). In the former case, instead, no general convergence rate can be stated because ergodic averages can converge arbitrarily slowly [32].

### A convenient problem reformulation

To analyze the convergence of the approximate Lie derivatives constructed in section 4, we need to study the \(n,m\to\infty\) and \(\tau\to 0\) limits of the matrices \(K^{\tau}_{mn}\), \(L^{\tau}_{mn}\) and \(G_{mn}\) from (4.2) to (4.4). This task can be conveniently carried out by considering the matrices \[\begin{split} A^{\tau}_{n}&:=\int\boldsymbol{\phi}(t+\tau,y)\boldsymbol{\psi}(t,x)^{\top}\,d\rho^{n}(t,x,y),\\ B_{n}&:=\int\boldsymbol{\psi}(t,x)\boldsymbol{\psi}(t,x)^{\top}\,d\mu^{n}(t,x),\\ C_{n}&:=\int\mathcal{L}\boldsymbol{\phi}(t,x)\boldsymbol{\psi}(t,x)^{\top}\,d\mu^{n}(t,x),\\ D^{\tau}_{n}&:=\int\tau^{-1}\left[\boldsymbol{\phi}(t+\tau,y)-\boldsymbol{\phi}(t,x)\right]\boldsymbol{\psi}(t,x)^{\top}\,d\rho^{n}(t,x,y),\end{split} \tag{6.3}\] and their 'infinite-data' equivalents, \[\begin{split} A^{\tau}&:=\int\boldsymbol{\phi}(t+\tau,y)\boldsymbol{\psi}(t,x)^{\top}\,d\rho(t,x,y),\\ B&:=\int\boldsymbol{\psi}(t,x)\boldsymbol{\psi}(t,x)^{\top}\,d\mu(t,x),\\ C&:=\int\mathcal{L}\boldsymbol{\phi}(t,x)\boldsymbol{\psi}(t,x)^{\top}\,d\mu(t,x),\\ D^{\tau}&:=\int\tau^{-1}\left[\boldsymbol{\phi}(t+\tau,y)-\boldsymbol{\phi}(t,x)\right]\boldsymbol{\psi}(t,x)^{\top}\,d\rho(t,x,y).\end{split} \tag{6.4}\] The link comes from the following identities, the first two of which are well known.
**Lemma 6.1**.: _The following identities hold:_ \[\begin{split} K^{\tau}_{mn}&=A^{\tau}_{n}B^{\dagger}_{n},\\ G_{mn}&=C_{n}B^{\dagger}_{n},\\ L^{\tau}_{mn}&=D^{\tau}_{n}B^{\dagger}_{n}+\tau^{-1}\Theta_{m}(B_{n}B^{\dagger}_{n}-I).\end{split}\] Proof.: Observe that \[\Phi^{\tau}_{n}\Psi^{\top}_{n}=\sum_{i=1}^{n}\boldsymbol{\phi}(t_{i}+\tau,y_{i})\boldsymbol{\psi}(t_{i},x_{i})^{\top}=\int\boldsymbol{\phi}(t+\tau,y)\boldsymbol{\psi}(t,x)^{\top}\,d\rho^{n}(t,x,y)=A^{\tau}_{n}.\] Similar calculations show that \(\Psi_{n}\Psi^{\top}_{n}=B_{n}\) and \(\Lambda_{n}\Psi^{\top}_{n}=C_{n}\), so \[\begin{split} K^{\tau}_{mn}&=\Phi^{\tau}_{n}\Psi^{\dagger}_{n}=\left(\Phi^{\tau}_{n}\Psi^{\top}_{n}\right)\left(\Psi_{n}\Psi^{\top}_{n}\right)^{\dagger}=A^{\tau}_{n}B^{\dagger}_{n},\\ G_{mn}&=\Lambda_{n}\Psi^{\dagger}_{n}=\left(\Lambda_{n}\Psi^{\top}_{n}\right)\left(\Psi_{n}\Psi^{\top}_{n}\right)^{\dagger}=C_{n}B^{\dagger}_{n}.\end{split}\] For the third identity in the statement of the lemma, note that \[L_{mn}^{\tau}=\tau^{-1}\left(A_{n}^{\tau}-\Theta_{m}B_{n}\right)B_{n}^{\dagger}+\tau^{-1}\Theta_{m}(B_{n}B_{n}^{\dagger}-I)\] and, since \(\boldsymbol{\phi}=\Theta_{m}\boldsymbol{\psi}\), \[\tau^{-1}\left(A_{n}^{\tau}-\Theta_{m}B_{n}\right)=\int\tau^{-1}\left[\boldsymbol{\phi}(t+\tau,y)\boldsymbol{\psi}(t,x)^{\top}-\Theta_{m}\boldsymbol{\psi}(t,x)\boldsymbol{\psi}(t,x)^{\top}\right]d\rho^{n}=\int\tau^{-1}\left[\boldsymbol{\phi}(t+\tau,y)-\boldsymbol{\phi}(t,x)\right]\boldsymbol{\psi}(t,x)^{\top}d\rho^{n}\ =\ D_{n}^{\tau}.\]

### The infinite-data limit

We now study the limit of infinite data (\(n\to\infty\)). By lemma 6.1, it is enough to identify the limits of \(A_{n}^{\tau}\), \(B_{n}^{\dagger}\), \(C_{n}\) and \(D_{n}^{\tau}\). Our assumption that condition (6.2) holds almost surely ensures that \[A_{n}^{\tau}\stackrel{{ n\to\infty}}{{\to}}A^{\tau},\qquad\qquad B_{n}\stackrel{{ n\to\infty}}{{\to}}B,\qquad\qquad C_{n}\stackrel{{ n\to\infty}}{{\to}}C,\qquad\qquad D_{n}^{\tau}\stackrel{{ n\to\infty}}{{\to}}D^{\tau}.\] However, pseudo-inversion is not a continuous operation in general. The next lemma resolves this complication (see also [61] for the case in which \(B\) has full rank). **Lemma 6.2**.: _Suppose the snapshots \((t_{i},x_{i},y_{i})_{i=1}^{n}\) are sampled such that (6.2) holds almost surely. Then, \(B_{n}^{\dagger}\to B^{\dagger}\) almost surely as \(n\to\infty\)._ Proof.: Since pseudo-inversion is continuous along constant-rank sequences [54], it suffices to prove that \(\operatorname{rank}(B_{n})=\operatorname{rank}(B)\) almost surely for sufficiently large \(n\). On the one hand, we almost surely have \(\operatorname{rank}(B_{n})\leq\operatorname{rank}(B)\) when \(n\) is large enough because \(\ker(B)\subset\ker(B_{n})\) almost surely. Indeed, if \(\boldsymbol{v}\in\ker(B)\) then \[0=\boldsymbol{v}^{\top}B\boldsymbol{v}=\int\left|\boldsymbol{v}\cdot\boldsymbol{\psi}(t,x)\right|^{2}\,d\mu(t,x),\] so \(\boldsymbol{v}\cdot\boldsymbol{\psi}(t,x)=0\) almost everywhere on the support of \(\mu\).
This means \(\boldsymbol{v}\cdot\boldsymbol{\psi}(t_{i},x_{i})=0\) almost surely for each data snapshot, so almost surely \[B_{n}\boldsymbol{v}=\Psi_{n}\Psi_{n}^{\top}\boldsymbol{v}=\sum_{i=1}^{n}\boldsymbol{\psi}(t_{i},x_{i})\boldsymbol{\psi}(t_{i},x_{i})^{\top}\boldsymbol{v}=0.\] On the other hand, \(\operatorname{rank}(B_{n})\geq\operatorname{rank}(B)\) for all large enough \(n\) because \(\ker(B_{n})\) does not contain the \(r=\operatorname{rank}(B)\) orthonormal eigenvectors \(\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{r}\) of \(B\) whose eigenvalues \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{r}\) are positive. Indeed, by (6.2) there almost surely exists \(n_{0}\in\mathbb{N}\) such that \(\|B_{n}-B\|_{F}\leq\frac{1}{2}\lambda_{r}\) when \(n\geq n_{0}\), hence for every \(j\in\{1,\ldots,r\}\) \[\boldsymbol{v}_{j}^{\top}B_{n}\boldsymbol{v}_{j}\geq\boldsymbol{v}_{j}^{\top}B\boldsymbol{v}_{j}-\left\|B_{n}-B\right\|_{F}\left\|\boldsymbol{v}_{j}\right\|^{2}\geq\lambda_{j}-\tfrac{1}{2}\lambda_{r}>0.\] (The same estimate applies to any unit vector in \(\operatorname{span}\{\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{r}\}\), so \(B_{n}\) is positive definite on an \(r\)-dimensional subspace and \(\operatorname{rank}(B_{n})\geq r\).) Combining Lemmas 6.1 and 6.2 shows that the EDMD approximations \(\mathcal{K}^{\tau}_{mn}\varphi\), \(\mathcal{L}^{\tau}_{mn}\varphi\) and \(\mathcal{G}_{mn}\varphi\) of \(\mathcal{K}^{\tau}\varphi\), \(\mathcal{L}^{\tau}\varphi\) and \(\mathcal{L}\varphi\) converge to \(L^{2}_{\mu}\)-orthogonal projections of the latter onto \(\operatorname{span}\boldsymbol{\psi}\) as \(n\to\infty\). This result, stated precisely in theorem 6.1 below, generalizes analogous statements for discrete-time processes in [9, 31, 61] in two ways. First, it applies to a broad class of Markov stochastic processes. Second, we do not assume the matrix \(B\) to be invertible, so \(L^{2}_{\mu}\)-orthogonal projections onto \(\operatorname{span}\boldsymbol{\psi}\) are not uniquely defined as elements of \(\operatorname{span}\boldsymbol{\psi}\) in general. This assumption was already dropped in [26] when applying gEDMD to stochastic differential equations, even though that work does not explicitly justify why \(B^{\dagger}_{n}\) converges to \(B^{\dagger}\).
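Although the proof is elementary, the conclusion of lemma 6.2 is easy to probe numerically. A small sanity check along the van der Pol data of section 5.2 might look as follows, reusing the matrix `Psi` from that sketch (an assumption); the reference matrix built from all available snapshots stands in for the unavailable infinite-data limit.

```python
import numpy as np

N = Psi.shape[1]
B_ref = (Psi @ Psi.T) / N                    # proxy for the infinite-data limit B
for n in (10**3, 10**4, 10**5):
    B_n = (Psi[:, :n] @ Psi[:, :n].T) / n    # empirical Gram matrix B_n
    print(n, np.linalg.matrix_rank(B_n),
          np.linalg.norm(np.linalg.pinv(B_n) - np.linalg.pinv(B_ref)))
```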
**Theorem 6.1**.: _Suppose the snapshots \((t_{i},x_{i},y_{i})_{i=1}^{n}\) are sampled such that (6.2) holds almost surely, and let \(\varphi=\boldsymbol{c}\cdot\boldsymbol{\phi}\) for some \(\boldsymbol{c}\in\mathbb{R}^{\ell}\). Define_ \[\mathcal{K}^{\tau}_{m}\varphi:=\boldsymbol{c}\cdot A^{\tau}B^{\dagger}\boldsymbol{\psi},\qquad\mathcal{G}_{m}\varphi:=\boldsymbol{c}\cdot CB^{\dagger}\boldsymbol{\psi},\qquad\mathcal{L}^{\tau}_{m}\varphi:=\boldsymbol{c}\cdot\left[D^{\tau}B^{\dagger}+\tau^{-1}\Theta_{m}(BB^{\dagger}-I)\right]\boldsymbol{\psi}.\] _Then, almost surely as \(n\to\infty\), the functions \(\mathcal{K}^{\tau}_{mn}\varphi\), \(\mathcal{L}^{\tau}_{mn}\varphi\) and \(\mathcal{G}_{mn}\varphi\) converge pointwise on \(\mathbb{T}\times\mathbb{X}\) to \(\mathcal{K}^{\tau}_{m}\varphi\), \(\mathcal{L}^{\tau}_{m}\varphi\) and \(\mathcal{G}_{m}\varphi\), respectively. Moreover, \(\mu\)-almost-everywhere,_ \[\mathcal{K}^{\tau}_{m}\varphi=\mathcal{P}^{\mu}_{m}\mathcal{K}^{\tau}\varphi,\qquad\mathcal{L}^{\tau}_{m}\varphi=\mathcal{P}^{\mu}_{m}\mathcal{L}^{\tau}\varphi,\qquad\mathcal{G}_{m}\varphi=\mathcal{P}^{\mu}_{m}\mathcal{L}\varphi.\] Proof.: The pointwise convergence follows by combining lemma 6.1 with lemma 6.2 and the almost sure convergence of \(A^{\tau}_{n}\), \(C_{n}\) and \(D^{\tau}_{n}\) to \(A^{\tau}\), \(C\) and \(D^{\tau}\). To identify the limits as projections, observe that \(\boldsymbol{b}\cdot\boldsymbol{\psi}\) minimizes \(\|\boldsymbol{b}\cdot\boldsymbol{\psi}-f\|_{L^{2}_{\mu}}\) over \(\boldsymbol{b}\in\mathbb{R}^{m}\) if \(\boldsymbol{b}^{\top}=\left(\int f\boldsymbol{\psi}^{\top}\,d\mu\right)B^{\dagger}\), and that \(\int(\mathcal{K}^{\tau}\varphi)\boldsymbol{\psi}^{\top}\,d\mu=\boldsymbol{c}^{\top}A^{\tau}\), \(\int(\mathcal{L}\varphi)\boldsymbol{\psi}^{\top}\,d\mu=\boldsymbol{c}^{\top}C\) and \(\int(\mathcal{L}^{\tau}\varphi)\boldsymbol{\psi}^{\top}\,d\mu=\boldsymbol{c}^{\top}D^{\tau}\). This proves the first and third identities and shows that \(\boldsymbol{c}\cdot D^{\tau}B^{\dagger}\boldsymbol{\psi}=\mathcal{P}^{\mu}_{m}\mathcal{L}^{\tau}\varphi\) \(\mu\)-almost-everywhere, so for the second identity it remains to prove that the function \(\eta:=\tau^{-1}\boldsymbol{c}\cdot\Theta_{m}(BB^{\dagger}-I)\boldsymbol{\psi}\) vanishes almost everywhere on \(\operatorname{supp}\mu\). To this end, write the spectral decomposition \(B=V_{+}\Lambda_{+}V_{+}^{\top}\), where \(\Lambda_{+}\) is the diagonal matrix
of positive eigenvalues and \(V_{+}\) is the corresponding matrix of orthonormal eigenvectors. Writing \(V_{-}\) for the matrix of eigenvectors of \(B\) with zero eigenvalue, we find that \(I-BB^{\dagger}=V_{-}V_{-}^{\top}\) is the orthogonal projection onto the kernel of \(B\). This space is orthogonal to \(\boldsymbol{\psi}(t,x)\) for almost every \((t,x)\in\operatorname{supp}\mu\) since \[\int\|V_{-}^{\top}\boldsymbol{\psi}\|^{2}\,d\mu=V_{-}^{\top}\left(\int\boldsymbol{\psi}\boldsymbol{\psi}^{\top}\,d\mu\right)V_{-}=V_{-}^{\top}BV_{-}=0.\] We conclude that \((BB^{\dagger}-I)\boldsymbol{\psi}\), hence \(\eta\), vanishes almost everywhere on \(\operatorname{supp}\mu\).

### The infinite-sampling-rate limit

Having studied the infinite-data limit, we now turn to the limit of infinite sampling rate (\(\tau\to 0\)). Recall from section 6.1 that \(\mathcal{P}_{m}^{\mu}\), the \(L_{\mu}^{2}\)-orthogonal projection operator onto \(\operatorname{span}\boldsymbol{\psi}\), is linear and satisfies \(\|\mathcal{P}_{m}^{\mu}f\|_{L_{\mu}^{2}}\leq\|f\|_{L_{\mu}^{2}}\). Then, we deduce from theorem 6.1 that \[\lim_{\tau\to 0}\|\mathcal{L}_{m}^{\tau}\varphi-\mathcal{G}_{m}\varphi\|_{L_{\mu}^{2}} =\lim_{\tau\to 0}\|\mathcal{P}_{m}^{\mu}\mathcal{L}^{\tau}\varphi-\mathcal{P}_{m}^{\mu}\mathcal{L}\varphi\|_{L_{\mu}^{2}} \leq\lim_{\tau\to 0}\|\mathcal{L}^{\tau}\varphi-\mathcal{L}\varphi\|_{L_{\mu}^{2}} =0.\] The last equality here follows because, by definition, \(\mathcal{L}^{\tau}\varphi\) converges uniformly to \(\mathcal{L}\varphi\) on the whole space \(\mathbb{T}\times\mathbb{X}\), hence in particular on the support of the measure \(\mu\). Convergence in \(L_{\mu}^{2}\), together with the continuity of \(\mathcal{L}_{m}^{\tau}\varphi\) and \(\mathcal{G}_{m}\varphi\), implies that \(\mathcal{L}_{m}^{\tau}\varphi\to\mathcal{G}_{m}\varphi\) on \(\operatorname{supp}\mu\).
Thus, in the infinite-sampling-rate limit the EDMD approximation to \(\mathcal{L}\varphi\) from section 4.1.1 recovers the one built with gEDMD in section 4.1.2 on the support of the data sampling measure. However, theorem 6.1 immediately implies the following more precise result. **Theorem 6.2**.: _Let \(\varphi=\boldsymbol{c}\cdot\boldsymbol{\phi}\) for \(\boldsymbol{c}\in\mathbb{R}^{\ell}\). If \(\varphi\in\mathcal{D}(\mathcal{L})\), then_ \[\lim_{\tau\to 0}\mathcal{L}_{m}^{\tau}\varphi(t,x)=\begin{cases}\mathcal{G}_{m}\varphi(t,x)&\text{if }\boldsymbol{c}\cdot\Theta_{m}(BB^{\dagger}-I)\boldsymbol{\psi}(t,x)=0,\\ \pm\infty&\text{otherwise.}\end{cases}\] Proof.: Since \(\mathcal{L}^{\tau}\varphi\to\mathcal{L}\varphi\) uniformly as \(\tau\to 0\), we have \(D^{\tau}\to C\) and \[\lim_{\tau\to 0}\mathcal{L}_{m}^{\tau}\varphi(t,x) =\lim_{\tau\to 0}\Big{[}\boldsymbol{c}\cdot D^{\tau}B^{\dagger}\boldsymbol{\psi}(t,x)+\boldsymbol{c}\cdot\tau^{-1}\Theta_{m}(BB^{\dagger}-I)\boldsymbol{\psi}(t,x)\Big{]} =\boldsymbol{c}\cdot CB^{\dagger}\boldsymbol{\psi}(t,x)+\lim_{\tau\to 0}\boldsymbol{c}\cdot\tau^{-1}\Theta_{m}(BB^{\dagger}-I)\boldsymbol{\psi}(t,x) =\mathcal{G}_{m}\varphi(t,x)+\lim_{\tau\to 0}\boldsymbol{c}\cdot\tau^{-1}\Theta_{m}(BB^{\dagger}-I)\boldsymbol{\psi}(t,x).\] The last limit is finite if and only if \(\boldsymbol{c}\cdot\Theta_{m}(BB^{\dagger}-I)\boldsymbol{\psi}(t,x)=0\). Theorem 6.2 implies that \(\mathcal{L}_{m}^{\tau}\varphi\to\mathcal{G}_{m}\varphi\) pointwise if \(B=\int\boldsymbol{\psi}\boldsymbol{\psi}^{\top}\,d\mu\) is invertible. This is true if and only if the following condition is met (cf. [31, Assumption 1]). **Assumption 6.1**.: If \(u\in\operatorname{span}\boldsymbol{\psi}\) vanishes \(\mu\)-almost-everywhere, then \(u\equiv 0\). **Corollary 6.1**.: _Under assumption 6.1, \(\mathcal{L}_{m}^{\tau}\varphi\to\mathcal{G}_{m}\varphi\) pointwise on \(\mathbb{T}\times\mathbb{X}\)._ This corollary applies, for instance, when \(\mathbb{X}=\mathbb{R}^{d}\), \(\boldsymbol{\psi}\) is a polynomial dictionary, and \(\operatorname{supp}\mu\) is not contained in the zero set of any nonzero polynomial. This is true for the van der Pol oscillator example in section 5.2. The lack of pointwise convergence, instead, is illustrated in section 7. Finally, the results in this section carry over to the finite-data case in the special case of _deterministic_ dynamics. In this case, the matrix \(D_{n}^{\tau}\) in (6.3) reduces to \[D_{n}^{\tau}:=\int\mathcal{L}^{\tau}\boldsymbol{\phi}(t,x)\boldsymbol{\psi}(t,x)^{\top}\,d\mu^{n}(t,x).\] Since this integral is just a finite sum, there is no issue in letting \(\tau\to 0\) to find that \(D_{n}^{\tau}\to C_{n}\). Thus, for every \(\varphi=\boldsymbol{c}\cdot\boldsymbol{\phi}\), \[\lim_{\tau\to 0}\mathcal{L}_{mn}^{\tau}\varphi =\lim_{\tau\to 0}\left(\boldsymbol{c}\cdot D_{n}^{\tau}B_{n}^{\dagger}\boldsymbol{\psi}\right)+\lim_{\tau\to 0}\left(\boldsymbol{c}\cdot\tau^{-1}\Theta_{m}(B_{n}B_{n}^{\dagger}-I)\boldsymbol{\psi}\right) =\left(\boldsymbol{c}\cdot C_{n}B_{n}^{\dagger}\boldsymbol{\psi}\right)+\lim_{\tau\to 0}\left(\boldsymbol{c}\cdot\tau^{-1}\Theta_{m}(B_{n}B_{n}^{\dagger}-I)\boldsymbol{\psi}\right) =\mathcal{G}_{mn}\varphi+\lim_{\tau\to 0}\left(\boldsymbol{c}\cdot\tau^{-1}\Theta_{m}(B_{n}B_{n}^{\dagger}-I)\boldsymbol{\psi}\right).\] This limit is finite and equal to \(\mathcal{G}_{mn}\varphi\) only at points \((t,x)\) satisfying \(\boldsymbol{c}\cdot\Theta_{m}(B_{n}B_{n}^{\dagger}-I)\boldsymbol{\psi}(t,x)=0\), which is true in particular on the support of the empirical measure \(\mu^{n}\).
Thus, as \(\tau\to 0\) the function \(\mathcal{L}_{mn}^{\tau}\varphi\) recovers \(\mathcal{G}_{mn}\varphi\) at least on the data points \((t_{i},x_{i})\).

### The infinite EDMD dictionary limit

We finally turn to studying how the EDMD-based approximate Lie derivative \(\mathcal{L}_{mn}^{\tau}\varphi\) from section 4.1.1 behaves as the approximation space \(\operatorname{span}\boldsymbol{\psi}\) is enlarged. Precisely, we replace a fixed dictionary \(\boldsymbol{\psi}\) with a sequence \(\{\boldsymbol{\psi}^{m}\}_{m\geq\ell}\) of dictionaries of increasing size \(m\). All results extend _mutatis mutandis_ to the gEDMD-based approximate Lie derivatives in section 4.1.2. Our first (standard) result is that approximate Lie derivatives become increasingly accurate if the sequence \(\{\boldsymbol{\psi}^{m}\}_{m\geq\ell}\) has the following approximation property. **Assumption 6.2**.: For every \(u\in L_{\mu}^{2}(\mathbb{T}\times\mathbb{X})\), there exists \(u_{m}\in\operatorname{span}\boldsymbol{\psi}^{m}\) such that the sequence \(\{u_{m}\}_{m\geq\ell}\) converges to \(u\) in \(L_{\mu}^{2}\). Observe that this assumption does not require the inclusion \(\operatorname{span}\boldsymbol{\psi}^{m}\subset\operatorname{span}\boldsymbol{\psi}^{m+1}\), even though this is often true in practice. This inclusion fails, for example, if \(\{\boldsymbol{\psi}^{m}\}_{m\geq\ell}\) is a sequence of finite-element bases on increasingly fine but not nested meshes. **Theorem 6.3**.: _If the dictionaries \(\{\boldsymbol{\psi}^{m}\}_{m\geq\ell}\) satisfy assumption 6.2, then_ \[\lim_{m\to\infty}\lim_{\tau\to 0}\lim_{n\to\infty}\left\|\mathcal{L}_{mn}^{\tau}\varphi-\mathcal{L}\varphi\right\|_{L_{\mu}^{2}}=\lim_{\tau\to 0}\lim_{m\to\infty}\lim_{n\to\infty}\left\|\mathcal{L}_{mn}^{\tau}\varphi-\mathcal{L}\varphi\right\|_{L_{\mu}^{2}}=0.\] _In particular, \(\mathcal{L}_{mn}^{\tau}\varphi(t,x)\to\mathcal{L}\varphi(t,x)\) for \(\mu\)-almost-every \((t,x)\in\mathbb{T}\times\mathbb{X}\)._ Proof.: Recall from theorem 6.1 that \(\mathcal{L}_{m}^{\tau}\varphi=\mathcal{P}_{m}^{\mu}\mathcal{L}^{\tau}\varphi\). Recall also that \(\mathcal{P}_{m}^{\mu}\) is a linear operator such that \(\|\mathcal{P}_{m}^{\mu}f\|_{L_{\mu}^{2}}\leq\|f\|_{L_{\mu}^{2}}\) for every \(f\in L_{\mu}^{2}\) and \(\|\mathcal{P}_{m}^{\mu}f-f\|_{L_{\mu}^{2}}\leq\|u-f\|_{L_{\mu}^{2}}\) for every \(u\in\operatorname{span}\boldsymbol{\psi}^{m}\) (cf. section 6.1). Given functions \(u_{m}\in\operatorname{span}\boldsymbol{\psi}^{m}\) converging to \(\mathcal{L}\varphi\) in \(L_{\mu}^{2}\), which exist by assumption 6.2, we can therefore use the triangle inequality to estimate \[\|\mathcal{L}_{mn}^{\tau}\varphi-\mathcal{L}\varphi\|_{L_{\mu}^{2}} \leq\|\mathcal{L}_{mn}^{\tau}\varphi-\mathcal{L}_{m}^{\tau}\varphi\|_{L_{\mu}^{2}}+\|\mathcal{L}_{m}^{\tau}\varphi-\mathcal{P}_{m}^{\mu}\mathcal{L}\varphi\|_{L_{\mu}^{2}}+\|\mathcal{P}_{m}^{\mu}\mathcal{L}\varphi-\mathcal{L}\varphi\|_{L_{\mu}^{2}} \leq\|\mathcal{L}_{mn}^{\tau}\varphi-\mathcal{L}_{m}^{\tau}\varphi\|_{L_{\mu}^{2}}+\|\mathcal{L}^{\tau}\varphi-\mathcal{L}\varphi\|_{L_{\mu}^{2}}+\|u_{m}-\mathcal{L}\varphi\|_{L_{\mu}^{2}}\,.\] The first term on the right-hand side vanishes as \(n\to\infty\) by theorem 6.1. The other two terms vanish as \(\tau\to 0\) and \(m\to\infty\) because, by definition, \(\mathcal{L}^{\tau}\varphi\to\mathcal{L}\varphi\) uniformly and \(u_{m}\to\mathcal{L}\varphi\) in \(L_{\mu}^{2}\). These two limits can clearly be taken in any order.
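The projection \(\mathcal{P}^{\mu}_{m}\) at the heart of these results is easy to form empirically, since its coefficients solve a least-squares problem against the data. The sketch below checks, up to the \(O(\tau)\) error of the difference quotient, that the data-driven Lie derivative of the van der Pol energy agrees on the data with the empirical projection of the exact Lie derivative; it reuses `X`, `Psi`, `L` and `c` from the earlier sketches (an assumption).

```python
import numpy as np

# Exact Lie derivative of the energy observable at the data points
Lphi_data = 0.2 * (1.0 - X[:, 0]**2) * X[:, 1]**2

# Empirical projection onto span(psi): least squares against the snapshots
b = np.linalg.lstsq(Psi.T, Lphi_data, rcond=None)[0]

# Agreement on the data, up to the O(tau) error of the difference quotient
print(np.max(np.abs(b @ Psi - (c @ L) @ Psi)))
```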
It is of course desirable to complement theorem 6.3 with explicit convergence rates, but we do not pursue this here because the answer depends on the particular choices for the dictionaries \(\boldsymbol{\phi}\), \(\boldsymbol{\psi}^{m}\) and for the data sampling strategy. Interested readers can find an example of what can be achieved in [64], which estimates convergence rates for the EDMD-based identification of deterministic continuous-time systems. Instead, to fully justify the good performance of approximate Lie derivatives in the examples of section 5, we study in more detail the special case in which every \(\varphi\in\operatorname{span}\boldsymbol{\phi}\cap\mathcal{D}(\mathcal{L})\) satisfies \(\mathcal{L}\varphi\in\operatorname{span}\boldsymbol{\psi}^{m}\) for all large enough \(m\). This assumption is usually hard to verify in practice. When it holds, however, one recovers \(\mathcal{L}\varphi\) pointwise on the full space if the dictionaries \(\boldsymbol{\psi}^{m}\) also satisfy assumption 6.1. **Theorem 6.4**.: _Suppose there exists \(m_{0}\geq\ell\) such that, for every \(m\geq m_{0}\):_ 1. _\(\mathcal{L}\varphi\in\operatorname{span}\boldsymbol{\psi}^{m}\) for every \(\varphi\in\operatorname{span}\boldsymbol{\phi}\cap\mathcal{D}(\mathcal{L})\)._ 2. _If \(u\in\operatorname{span}\boldsymbol{\psi}^{m}\) vanishes \(\mu\)-almost-everywhere, then \(u\equiv 0\)._ _Then, for every \(\varphi\in\operatorname{span}\boldsymbol{\phi}\cap\mathcal{D}(\mathcal{L})\) and every \(m\geq m_{0}\),_ \[\lim_{\tau\to 0}\lim_{n\to\infty}\mathcal{L}_{mn}^{\tau}\varphi=\mathcal{L}\varphi\qquad\text{pointwise on }\mathbb{T}\times\mathbb{X}.\] Proof.: Theorem 6.1 and corollary 6.1 guarantee that \(\mathcal{L}_{mn}^{\tau}\varphi\) converges to the function \(\mathcal{G}_{m}\varphi\) pointwise on \(\mathbb{T}\times\mathbb{X}\) as \(n\to\infty\) and \(\tau\to 0\). Since \(\mathcal{G}_{m}\varphi\) minimizes \(\|u-\mathcal{L}\varphi\|_{L_{\mu}^{2}}\) over all \(u\in\operatorname{span}\boldsymbol{\psi}^{m}\) and \(\mathcal{L}\varphi\in\operatorname{span}\boldsymbol{\psi}^{m}\) by assumption 1, we must have \(\mathcal{G}_{m}\varphi=\mathcal{L}\varphi\) on \(\operatorname{supp}\mu\). This implies \(\mathcal{G}_{m}\varphi=\mathcal{L}\varphi\) on \(\mathbb{T}\times\mathbb{X}\) by assumption 2.

## 7 A further example illustrating the theory

We conclude with an example illustrating that, if assumption 6.1 does not hold, then the Lie derivative approximations from sections 4.1.1 and 4.1.2 can behave very differently from what one might expect. Nevertheless, this behaviour is perfectly consistent with the results proved in section 6. In practice, therefore, one must be careful not to misinterpret results obtained with approximate auxiliary functions.

### The problem

Consider the two-dimensional ODE \[\begin{split}\dot{X}_{1}&=-X_{2}+X_{1}(1-X_{1}^{2}-X_{2}^{2}),\\ \dot{X}_{2}&=\quad X_{1}+X_{2}(1-X_{1}^{2}-X_{2}^{2}),\end{split} \tag{7.1}\] which has an unstable equilibrium point at \((x_{1},x_{2})=(0,0)\) and an attracting circular limit cycle \(X_{t}=(\cos t,\sin t)\). We will use the auxiliary function framework of section 3.2 to find a lower bound \(L\) on the long-time average of the observable \(g(x_{1},x_{2})=x_{1}^{2}+x_{2}^{2}\).
To make things concrete, we will look for a quadratic auxiliary function of the form \[V(x_{1},x_{2})=\gamma\left(1+x_{1}^{2}+x_{2}^{2}\right), \tag{7.2}\] where \(\gamma\in\mathbb{R}\) should be chosen such that the inequality \[x_{1}^{2}+x_{2}^{2}+\mathcal{L}V(x_{1},x_{2})-L\geq 0 \tag{7.3}\] holds for all \(x_{1},x_{2}\) and the largest possible \(L\). The exact Lie derivative is \[\mathcal{L}V(x_{1},x_{2})=2\gamma\left(x_{1}^{2}+x_{2}^{2}\right)\left(1-x_{1}^{2}-x_{2}^{2}\right), \tag{7.4}\] so with \(\gamma=0\) we obtain the lower bound \(L=0\). This lower bound is sharp, as it is saturated by the unstable equilibrium at the origin.

### Data-driven lower bound via EDMD

We now seek data-driven lower bounds when \(\mathcal{L}V\) in (7.3) is replaced by its EDMD-based approximation \(\mathcal{L}_{mn}^{\tau}V\). We use \(n\) data snapshots \((t_{i},x_{i},y_{i})\) sampled at a rate \(\tau\) from the limit cycle, so \(t_{i}=i\tau\), \(x_{i}=(\cos t_{i},\sin t_{i})\) and \(y_{i}=(\cos(t_{i}+\tau),\sin(t_{i}+\tau))\). The function \(V\) in (7.2) belongs to the span of \(\boldsymbol{\phi}=(1,x_{1}^{2},x_{2}^{2})\) and we use the particular EDMD dictionary \(\boldsymbol{\psi}=(1,x_{1}^{2},x_{1}x_{2},x_{2}^{2})\). Similar results are obtained with any dictionary \(\boldsymbol{\psi}=(1,x_{1}^{2},x_{1}x_{2},x_{2}^{2},\psi_{5},\ldots,\psi_{m})\) where \(\psi_{5},\ldots,\psi_{m}\) are monomials. With these choices, the approximate Lie derivative \(\mathcal{L}_{mn}^{\tau}V\) can be calculated analytically using trigonometric identities for every \(n\) and \(\tau\) to find \[\mathcal{L}_{mn}^{\tau}V(x_{1},x_{2})=\frac{\gamma}{3\tau}\left(1-x_{1}^{2}-x_{2}^{2}\right).\] Thus, the approximate version of (7.3) with \(\mathcal{L}V\) replaced by \(\mathcal{L}_{mn}^{\tau}V\) requires \[\frac{\gamma}{3\tau}-L+\left(1-\frac{\gamma}{3\tau}\right)\left(x_{1}^{2}+x_{2}^{2}\right)\geq 0\qquad\forall x_{1},x_{2}.\] Setting \(\gamma=3\tau\) we find the lower bound \(L=1\), which is evidently incorrect as it is violated by the equilibrium point at the origin. This apparent contradiction can be explained by recalling from section 3.2 that a lower bound proved using the inequality \(x_{1}^{2}+x_{2}^{2}+\mathcal{L}_{mn}^{\tau}V(x_{1},x_{2})\geq L\) applies only to trajectories for which \(\mathcal{L}_{mn}^{\tau}V=\mathcal{L}V\), which is true only on the circles with radii \(1\) and \(1/\sqrt{6\tau}\). The system's limit cycle is the only trajectory remaining inside this set at all times, so the lower bound \(L=1\) applies only to it (and is in fact sharp).

### Data-driven lower bound via gEDMD

Next, we repeat the exercise using the gEDMD-based approximate Lie derivative \(\mathcal{G}_{mn}V\) from section 4.1.2 instead of \(\mathcal{L}_{mn}^{\tau}V\). For this, we use data snapshots \(\{(t_{i},x_{i},y_{i})\}_{i=1}^{n}\) where \(t_{i}=i\tau\) and \(x_{i}=(\cos t_{i},\sin t_{i})\) as before, but \[y_{i}=\mathcal{L}\boldsymbol{\phi}(x_{i})=\begin{pmatrix}0\\ -2\cos t_{i}\sin t_{i}\\ 2\cos t_{i}\sin t_{i}\end{pmatrix}.\] For our auxiliary function \(V=\gamma(1+x_{1}^{2}+x_{2}^{2})\) and dictionary \(\boldsymbol{\psi}=(1,x_{1}^{2},x_{1}x_{2},x_{2}^{2})\), one has \(\mathcal{G}_{mn}V\equiv 0\) independently of \(n\). The best lower bound provable with the inequality \(x_{1}^{2}+x_{2}^{2}+\mathcal{G}_{mn}V(x_{1},x_{2})\geq L\) is therefore \(L=0\), which is correct and sharp for all trajectories of (7.1).
Strictly speaking, however, this bound applies only to trajectories for which \(\mathcal{G}_{mn}V=\mathcal{L}V\); it just so happens that these are exactly the unstable equilibrium and the limit cycle, which are the only trajectories that remain in this set at all times.

### Discussion

In the examples above, the EDMD- and gEDMD-based Lie derivatives behave very differently when used to construct auxiliary functions. In particular, it is evident that \(\mathcal{L}_{mn}^{\tau}V\neq\mathcal{G}_{mn}V\), and neither of these two functions recovers the exact Lie derivative (7.4) on the full space. The same is true in the infinite-data limit (\(n\to\infty\)) because \(\mathcal{L}_{mn}^{\tau}V=\mathcal{L}_{m}^{\tau}V\) and \(\mathcal{G}_{mn}V=\mathcal{G}_{m}V\), as the left-hand sides are independent of \(n\). Moreover, the function \(\mathcal{L}_{m}^{\tau}V\) converges to \(\mathcal{G}_{m}V\) as \(\tau\to 0\) only at points \((x_{1},x_{2})\) satisfying \(x_{1}^{2}+x_{2}^{2}=1\). This is exactly what theorem 6.2 predicts, since in our example we have \(\boldsymbol{c}=(\gamma,\gamma,\gamma)\), \(\boldsymbol{\psi}=(1,x_{1}^{2},x_{1}x_{2},x_{2}^{2})\), \[\Theta_{m}=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\end{bmatrix}\qquad\text{ and }\qquad B=\frac{\pi}{4}\begin{bmatrix}8&4&0&4\\ 4&3&0&1\\ 0&0&1&0\\ 4&1&0&3\end{bmatrix},\] giving \(\boldsymbol{c}\cdot\Theta_{m}(BB^{\dagger}-I)\boldsymbol{\psi}=\tfrac{\gamma}{3}(1-x_{1}^{2}-x_{2}^{2})\), consistent with the \(\gamma/(3\tau)\) expression found above. Here, the matrix \(B=\int\boldsymbol{\psi}\boldsymbol{\psi}^{\top}d\mu\) was computed by taking \(\mu\) to be the uniform measure on the unit circle, which is the right choice for our data sampling strategy. Finally, we stress that the results in this example are very different from those obtained for the van der Pol oscillator in section 5.2, where \(\mathcal{L}_{mn}^{\tau}V\) converged to \(\mathcal{L}V\) pointwise on \(\mathbb{R}^{2}\) (cf. figure 1). This could be anticipated because the limit cycle of (7.1) is an algebraic curve, meaning that it is the zero level set of a polynomial. Polynomial dictionaries \(\boldsymbol{\psi}\) whose span includes polynomials of the form \(p(x_{1},x_{2})(1-x_{1}^{2}-x_{2}^{2})\) therefore cannot satisfy assumption 6.1. In contrast, the limit cycle of the van der Pol oscillator is not an algebraic curve [46], so _any_ polynomial dictionary \(\boldsymbol{\psi}\) satisfies assumption 6.1. Therefore, rather remarkably, one is able to recover information about the global system dynamics even when sampling only on the limit cycle.

## 8 Conclusion

In this work we have provided a data-driven method for deducing information about dynamical systems without first discovering an explicit model. Our method combines two areas that are by now well-developed, namely, system analysis via auxiliary functions (sometimes also called Lyapunov or Lyapunov-like functions) and the data-driven approximation of the Koopman operator via EDMD. We also extended some known convergence results for EDMD to a broad class of stochastic systems, often under weaker assumptions than usual (cf. section 6). The result is a flexible and powerful method that can be applied equally easily to data generated by deterministic and stochastic dynamics, without any special preprocessing or other modifications to handle the stochasticity. Our examples have shown that we can accurately obtain Lyapunov functions from data, provide sharp upper bounds on long-time averages using less data than is required for an empirical average to converge, and bound expectations of stochastic processes.
We expect similar success when using auxiliary functions to study other properties of nonlinear systems. One potentially promising application of our method is as a pre-conditioner for discovering accurate and parsimonious dynamical models. For example, knowledge of Lyapunov functions, basins of attraction, or absorbing sets can improve data-driven model discovery from noisy or incomplete datasets [1]. In particular, one can easily extend a variation of the SINDy method for constructing fluid flow models with an absorbing ball [24] to general systems with an absorbing set that need not be a ball: it suffices to first use our data-driven methods to identify a candidate absorbing set, and then construct a model for which this set is indeed absorbing. Crucially, both steps can be implemented with convex optimization. Although our theory does not put any limitations on the dimension of the data, both EDMD and the construction of auxiliary functions using semidefinite programming exhibit computational bottlenecks when the state-space dimension is not small. This can be seen clearly when the EDMD dictionaries are polynomial, since \(\ell\) and \(m\) grow rapidly with the state-space dimension. Therefore, for even moderately sized input data the resulting semidefinite programs could be prohibitively large. To overcome this issue in the setting of EDMD, [63] proposes a kernel-based EDMD formulation that replaces the estimation of the Koopman operator with a matrix whose size is set by the large dictionary by the learning of one whose size is set by the number of snapshots \(n\). This kernel formulation offers a significant computational speed-up in understanding the Koopman operator for systems such as discretized PDEs, where the state-space dimension is high and temporal data is difficult to produce. It remains to be seen whether similar techniques can help within our framework. There are many more potential avenues for future work. One is to establish convergence rates in the spirit of [64]. Although it is impossible to prove universal results in this direction [32], one could hope to identify classes of systems and dictionaries for which convergence rates can be proved. Another interesting problem is to quantify the gaps between predictions made using data-driven auxiliary functions (e.g., bounds on time averages) and their rigorous model-based counterparts. In summary, we believe that this work only scratches the surface of what is possible at the intersection of the Koopman operator, EDMD, auxiliary functions, and semidefinite programming.

## Acknowledgments

We are grateful for the hospitality of the University of Surrey during the 2022 'Data and Dynamics' workshop, where this work was started. We also thank Stefan Klus and Enrique Zuazua for their insight into EDMD. JB was partially supported by an Institute of Advanced Studies Fellowship at Surrey.